
Refine Search Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (may confer...
    • 144 Electronic Science and Technology (may confer...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Natural Sciences
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (may confer Science, ...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Man...
    • 760 Management Science and Engineering (may confer...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (may confer Medicine...
    • 113 Public Health and Preventive Med...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,557 篇 natural language...
  • 1,786 篇 natural language...
  • 953 篇 computational li...
  • 740 篇 semantics
  • 682 篇 machine learning
  • 613 篇 deep learning
  • 520 篇 natural language...
  • 352 篇 computational mo...
  • 343 篇 accuracy
  • 339 篇 training
  • 335 篇 large language m...
  • 335 篇 sentiment analys...
  • 325 篇 feature extracti...
  • 312 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 256 篇 transformers
  • 236 篇 neural networks
  • 218 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 52 篇 university of ch...
  • 46 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of sc...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 carnegie mellon ...
  • 33 篇 gaoling school o...
  • 33 篇 stanford univers...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 26 篇 microsoft resear...
  • 26 篇 language technol...
  • 26 篇 peking universit...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 wen ji-rong
  • 26 篇 lapata mirella
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,363 records; showing 1081-1090
ALCUNA: Large Language Models Meet New Knowledge
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Yin, Xunjian; Huang, Baizhou; Wan, Xiaojun
Affiliations: Peking Univ, Wangxuan Inst Comp Technol, Beijing, Peoples R China; Peking Univ, Ctr Data Sci, Beijing, Peoples R China; Peking Univ, MOE Key Lab Computat Linguist, Beijing, Peoples R China
With the rapid development of NLP, large-scale language models (LLMs) now excel at a variety of tasks across multiple domains. However, existing benchmarks may not adequately measure these models' capabilities, especia...
Head-wise Shareable Attention for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cao, Zouying; Yang, Yifei; Zhao, Hai
Affiliations: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, China; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China
Large Language Models (LLMs) suffer from a huge number of parameters, which restricts their deployment on edge devices. Weight sharing is one promising solution that encourages weight reuse, effectively reducing memory ...
BAPO: Base-Anchored Preference Optimization for Overcoming Forgetting in Large Language Models Personalization
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lee, Gihun; Jeong, Minchan; Kim, Yujin; Jung, Hojung; Oh, Jaehoon; Kim, Sangmook; Yun, Se-Young
Affiliations: Graduate School of AI, KAIST, Republic of Korea; Samsung Advanced Institute of Technology, Republic of Korea; Department of Electrical and Computer Engineering, UBC, Canada
While learning to align Large Language Models (LLMs) with human preferences has shown remarkable success, aligning these models to meet diverse user preferences presents further challenges in preserving previous k...
Lion: Adversarial Distillation of Proprietary Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Jiang, Yuxin; Chan, Chunkit; Chen, Mingyang; Wang, Wei
Affiliations: Hong Kong Univ Sci & Technol (Guangzhou), Guangzhou, Peoples R China; Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
The practice of transferring knowledge from a sophisticated, proprietary large language model (LLM) to a compact, open-source LLM has garnered considerable attention. Previous works have focused on a unidirectional kn...
Beyond Accuracy Optimization: Computer Vision Losses for Large Language Model Fine-Tuning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cambrin, Daniele Rege; Gallipoli, Giuseppe; Benedetto, Irene; Cagliero, Luca; Garza, Paolo
Affiliations: Politecnico di Torino, Italy; MAIZE SRL
Large Language Models (LLMs) have demonstrated impressive performance across various tasks. However, current training approaches combine standard cross-entropy loss with extensive data, human feedback, or ad hoc metho...
TCFLE-8: a Corpus of Learner Written Productions for French as a Foreign Language and its Application to Automated Essay Scoring
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wilkens, Rodrigo; Pintard, Alice; Alfter, David; Folny, Vincent; Francois, Thomas
Affiliations: UCLouvain, IL&C, Cental, Louvain, Belgium; Univ Gothenburg, Gothenburg, Sweden; France Educ Int, Sevres, France
Automated Essay Scoring (AES) aims to automatically assess the quality of essays. Automation enables large-scale assessment and improvements in consistency, reliability, and standardization. Those characteristics are of ...
Belief Revision: The Adaptability of Large Language Models Reasoning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wilie, Bryan; Cahyawijaya, Samuel; Ishii, Etsuko; He, Junxian; Fung, Pascale
Affiliations: Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
The capability to reason from text is crucial for real-world NLP applications. Real-world scenarios often involve incomplete or evolving data. In response, individuals update their beliefs and understandings according...
UrbanLLM: Autonomous Urban Activity Planning and Management with Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Jiang, Yue; Chao, Qin; Chen, Yile; Li, Xiucheng; Liu, Shuai; Cong, Gao
Affiliations: Nanyang Technological University, Singapore; Harbin Institute of Technology (Shenzhen), China; DAMO Academy, Alibaba Group, Singapore
Location-based services play a critical role in improving the quality of our daily lives. Despite the proliferation of numerous specialized AI models within the spatio-temporal context of location-based services, these mo...
FACTKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Feng, Shangbin; Balachandran, Vidhisha; Bai, Yuyang; Tsvetkov, Yulia
Affiliations: Univ Washington, Seattle, WA 98195, USA; Carnegie Mellon Univ, Pittsburgh, PA 15213, USA; Xi An Jiao Tong Univ, Xian, Peoples R China
Evaluating the factual consistency of automatically generated summaries is essential for the progress and adoption of reliable summarization systems. Despite recent advances, existing factuality evaluation models are ...
NLMs: Augmenting Negation in Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Singh, Rituraj; Kumar, Rahul; Sridhar, Vivek
Affiliations: Samsung R&D Inst India, Bangalore, Karnataka, India
Negation is a fundamental component of natural language that reverses the semantic meaning of a sentence. It plays an extremely important role across a wide range of applications, yet it is under-represented in...