
Refine Search Results

Document Type

  • 7,589 conference papers
  • 71 books
  • 49 journal articles
  • 1 dissertation

Collection Scope

  • 7,709 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 6,491 Engineering
    • 6,263 Computer Science and Technology...
    • 3,536 Software Engineering
    • 732 Information and Communication Engineering
    • 493 Control Science and Engineering
    • 272 Electrical Engineering
    • 213 Bioengineering
    • 119 Chemical Engineering and Technology
    • 99 Mechanical Engineering
    • 84 Electronic Science and Technology (...
    • 76 Biomedical Engineering (...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,538 Literature
    • 1,531 Foreign Languages and Literature
    • 146 Chinese Language and Literature
  • 1,486 Management
    • 1,142 Library, Information and Archives Manage...
    • 453 Management Science and Engineering (...
    • 130 Business Administration
  • 1,423 Science
    • 758 Mathematics
    • 350 Physics
    • 247 Biology
    • 239 Statistics (...
    • 119 Chemistry
    • 100 Systems Science
  • 161 Law
    • 149 Sociology
  • 128 Medicine
    • 93 Clinical Medicine
    • 75 Basic Medicine (...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop Science
  • 39 Economics
  • 6 Philosophy
  • 3 Art
  • 1 Military Science

Topics

  • 1,184 natural language...
  • 870 computational li...
  • 623 natural language...
  • 283 semantics
  • 165 natural language...
  • 128 machine learning
  • 127 graphic methods
  • 123 iterative method...
  • 111 sentiment analys...
  • 110 speech recogniti...
  • 107 deep learning
  • 94 syntactics
  • 90 text processing
  • 86 speech processin...
  • 81 embeddings
  • 73 information retr...
  • 69 modeling languag...
  • 69 artificial intel...
  • 66 contrastive lear...
  • 63 zero-shot learni...

Institutions

  • 74 carnegie mellon ...
  • 35 institute for na...
  • 34 national univers...
  • 34 language technol...
  • 33 carnegie mellon ...
  • 33 school of comput...
  • 31 tsinghua univers...
  • 31 university of wa...
  • 31 university of ch...
  • 29 stanford univers...
  • 28 zhejiang univers...
  • 28 alibaba grp peop...
  • 27 nanyang technolo...
  • 27 carnegie mellon ...
  • 27 natl univ singap...
  • 25 gaoling school o...
  • 25 peking universit...
  • 24 allen inst artif...
  • 24 harbin institute...
  • 23 stanford univ st...

Authors

  • 42 neubig graham
  • 39 zhou guodong
  • 39 smith noah a.
  • 36 liu yang
  • 36 lapata mirella
  • 34 sun maosong
  • 32 zhang min
  • 30 liu qun
  • 30 hovy eduard
  • 30 huang xuanjing
  • 29 zhao jun
  • 29 gurevych iryna
  • 27 schütze hinrich
  • 27 liu zhiyuan
  • 24 vulic ivan
  • 21 chang kai-wei
  • 20 wen ji-rong
  • 20 zhang yue
  • 20 korhonen anna
  • 20 zhang qi

Language

  • 6,739 English
  • 941 other
  • 26 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search condition: "Any field = Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,710 records; showing results 431-440
SparkRA: A Retrieval-Augmented Knowledge Service System Based on Spark Large Language Model
2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2024
Authors: Wu, Dayong Li, Jiaqi Wang, Baoxin Zhao, Honghong Xue, Siyuan Yang, Yanjie Chang, Zhijun Zhang, Rui Qian, Li Wang, Bo Wang, Shijin Zhang, Zhixiong Hu, Guoping State Key Laboratory of Cognitive Intelligence iFLYTEK Research China University of Science and Technology of China Hefei China Research Center for Social Computing and Information Retrieval Harbin Institute of Technology Harbin China National Science Library Chinese Academy of Sciences China Langfang China
Large language models (LLMs) have shown remarkable achievements across various language tasks. To enhance the performance of LLMs in scientific literature services, we developed the scientific literature LLM (SciLit-L...

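The record above describes a retrieval-augmented knowledge service built on an LLM. As rough orientation only, the sketch below shows the generic retrieve-then-prompt pattern with a hypothetical toy corpus and a bag-of-words scorer; it is not the SparkRA or SciLit-LLM implementation, and every name in it is an illustrative assumption.

```python
# Minimal retrieve-then-prompt sketch (hypothetical corpus, scorer, and prompt
# template; not the SparkRA/SciLit-LLM system described in the record above).
import math
from collections import Counter


def bow(text: str) -> Counter:
    """Bag-of-words term counts used as a toy document representation."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = bow(query)
    return sorted(corpus, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]


def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a prompt that grounds the answer in the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"


if __name__ == "__main__":
    corpus = [
        "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
        "Stacking combines several base classifiers with a meta-learner.",
        "State space models handle long-range dependencies efficiently.",
    ]
    print(build_prompt("How does retrieval augmentation work?", retrieve("retrieval augmented generation", corpus)))
```

In a production service the bag-of-words scorer would typically be replaced by a dense retriever over a literature index, and the assembled prompt passed to the serving LLM.
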
Novel Ensemble Sentiment Classification through Speech Processing and Stacking Generalization
1st International Conference on Electronics, Communication and Signal Processing, ICECSP 2024
Authors: Chowdhury, Shriya Roy, Soumo John, Mathew Chathurvedi, V Rama Chandra Das, Deepanjali Mohanty, Aparna School of Electronics Engineering Vellore Institute of Technology Vellore India
This paper introduces an innovative method for speech sentiment analysis by employing a stacking classifier. The proposed system directly transcribes audio and extracts essential features for sentiment analysis. The ...

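The record above names stacking generalization as its ensemble strategy. Below is a hedged, self-contained sketch of that general technique using scikit-learn on synthetic stand-in features; the paper's audio transcription and feature-extraction pipeline is not reproduced, and the data and model choices are illustrative assumptions.

```python
# Sketch of stacking generalization: base learners' predictions feed a meta-learner.
# Synthetic features stand in for whatever is extracted from transcribed speech.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical feature matrix and sentiment labels.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners are combined by a logistic-regression meta-learner trained on
# their cross-validated predictions.
stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_train, y_train)
print(f"held-out accuracy: {stack.score(X_test, y_test):.3f}")
```
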
Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Liu, Tengxiao Guo, Qipeng Yang, Yuqing Hu, Xiangkun Zhang, Yue Qiu, Xipeng Zhang, Zheng Fudan Univ Sch Comp Sci Shanghai Peoples R China Amazon AWS AI Seattle WA USA Westlake Univ Sch Engn Hangzhou Peoples R China AWS Shanghai AI Lab Shanghai Peoples R China
As large language models (LLMs) have shown effectiveness with different prompting methods, such as Chain of Thought and Program of Thought, we find that these methods have formed a great complementarity to each other on ...

On General Language Understanding
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Schlangen, David Univ Potsdam Computat Linguist Dept Linguist Potsdam Germany
Natural language processing prides itself to be an empirically-minded, if not outright empiricist field, and yet lately it seems to get itself into essentialist debates on issues of meaning and measurement ("Do L...

Perceptions of Linguistic Uncertainty by Language Models and Humans
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Belem, Catarina Kelly, Markelle Steyvers, Mark Singh, Sameer Smyth, Padhraic Department of Computer Science University of California Irvine United States Department of Cognitive Sciences University of California Irvine United States
Uncertainty expressions such as "probably" or "highly unlikely" are pervasive in human language. While prior work has established that there is population-level agreement in terms of how humans qua...

TroL: Traversal of Layers for Large Language and Vision Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lee, Byung-Kwan Chung, Sangyun Kim, Chae Won Park, Beomchan Ro, Yong Man KAIST Korea Republic of
Large language and vision models (LLVMs) have been driven by the generalization power of large language models (LLMs) and the advent of visual instruction tuning. Along with scaling them up directly, these models enab...

Pretraining Language Models Using Translationese
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Doshi, Meet Dabre, Raj Bhattacharyya, Pushpak CFILT Indian Institute of Technology Bombay Mumbai India National Institute of Information and Communications Technology Kyoto Japan IIT Madras Chennai India
In this paper, we explore the utility of Translationese as synthetic data created using machine translation for pre-training language models (LMs) for low-resource languages (LRLs). Our simple methodology consists of ...

Rethinking Token Reduction for State Space Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhan, Zheng Wu, Yushu Kong, Zhenglun Yang, Changdi Gong, Yifan Shen, Xuan Lin, Xue Zhao, Pu Wang, Yanzhi Northeastern University United States Harvard University United States
Recent advancements in State Space Models (SSMs) have attracted significant interest, particularly in models optimized for parallel training and handling long-range dependencies. Architectures like Mamba have scaled t...

Editing Large Language Models: Problems, Methods, and Opportunities
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Yao, Yunzhi Wang, Peng Tian, Bozhong Chen, Siyuan Li, Zhoubo Deng, Shumin Chen, Huajun Zhang, Ningyu Zhejiang Univ Hangzhou Peoples R China Zhejiang Univ Ant Grp Joint Lab Knowledge Graph Hangzhou Peoples R China Donghai Lab Zhoushan Peoples R China Natl Univ Singapore NUS NCS Joint Lab Singapore Singapore
Despite the ability to train capable LLMs, the methodology for maintaining their relevancy and rectifying errors remains elusive. To this end, the past few years have witnessed a surge in techniques for editing LLMs, ...

Can Large Language Models Capture Dissenting Human Voices?
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Lee, Noah An, Na Min Thorne, James KAIST AI Seoul South Korea
Large language models (LLMs) have shown impressive achievements in solving a broad range of tasks. Augmented by instruction fine-tuning, LLMs have also been shown to generalize in zero-shot settings as well. However, ...