
Refine Search Results

Document Type

  • 14,549 conference papers
  • 662 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Holdings

  • 15,352 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,015 Engineering
    • 10,349 Computer Science and Technology...
    • 5,460 Software Engineering
    • 1,467 Information and Communication Engineering
    • 956 Electrical Engineering
    • 892 Control Science and Engineering
    • 447 Biological Engineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 177 Biomedical Engineering (may confer...
    • 141 Electronic Science and Technology (may confer...
    • 101 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,486 Science
    • 1,156 Mathematics
    • 654 Physics
    • 520 Biology
    • 394 Statistics (may confer Science, ...
    • 241 Systems Science
    • 232 Chemistry
  • 2,427 Management
    • 1,756 Library, Information and Archives Manage...
    • 759 Management Science and Engineering (may confer...
    • 241 Business Administration
    • 106 Public Administration
  • 1,762 Literature
    • 1,710 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 515 Medicine
    • 303 Clinical Medicine
    • 286 Basic Medicine (may confer Medicine...
    • 113 Public Health and Preventive Medi...
  • 279 Law
    • 249 Sociology
  • 239 Education
    • 226 Education
  • 100 Agriculture
  • 96 Economics
  • 10 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,552 natural language...
  • 1,789 natural language...
  • 953 computational li...
  • 741 semantics
  • 683 machine learning
  • 612 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 334 large language m...
  • 334 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 255 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 51 university of ch...
  • 46 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of sc...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 carnegie mellon ...
  • 33 stanford univers...
  • 32 gaoling school o...
  • 32 alibaba grp peop...
  • 31 school of artifi...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 27 peking universit...
  • 26 microsoft resear...
  • 26 language technol...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 26 wen ji-rong
  • 26 liu zhiyuan
  • 26 lapata mirella
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Languages

  • 14,307 English
  • 930 Other
  • 114 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,353 records; showing 1011-1020
Learning to Write Rationally: How Information Is Distributed in Non-Native Speakers' Essays
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tang, Zixin; van Hell, Janet G.
Affiliations: College of Information Sciences and Technology, The Pennsylvania State University, United States; Department of Psychology, United States; Center for Language Science, The Pennsylvania State University, United States
People tend to distribute information evenly during language production, such as when writing an essay, to improve clarity and communication. However, this may pose challenges to non-native speakers. In this study, we...
QUIK: Towards End-to-end 4-Bit Inference on Generative Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ashkboos, Saleh; Markov, Ilia; Frantar, Elias; Zhong, Tingxuan; Wang, Xingchen; Ren, Jie; Hoefler, Torsten; Alistarh, Dan
Affiliations: ETH Zurich, Switzerland; Institute of Science and Technology Austria; Xidian University, China; KAUST, Saudi Arabia; Neural Magic Inc., United States
Large Language Models (LLMs) from the GPT family have become extremely popular, leading to a race towards reducing their inference costs to allow for efficient local computation. However, the vast majority of existing...
ChatRetriever: Adapting Large Language Models for Generalized and Robust Conversational Dense Retrieval
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mao, Kelong; Deng, Chenlong; Chen, Haonan; Mo, Fengran; Liu, Zheng; Sakai, Tetsuya; Dou, Zhicheng
Affiliations: Gaoling School of Artificial Intelligence, Renmin University of China, China; Université de Montréal, Québec, Canada; Beijing Academy of Artificial Intelligence, China; Waseda University, Tokyo, Japan
Conversational search requires accurate interpretation of user intent from complex multi-turn contexts. This paper presents ChatRetriever, which inherits the strong generalization capability of large language models t...
Towards Tool Use Alignment of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Zhi-Yuan; Shen, Shiqi; Shen, Guangyao; Zhi, Gong; Chen, Xu; Lin, Yankai
Affiliations: Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China; Tencent Inc., China
Recently, tool use with LLMs has become one of the primary research topics, as it can help LLMs generate truthful and helpful responses. Existing studies on tool use with LLMs primarily focus on enhancing the tool-calli...
Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Weir, Nathaniel; Sanders, Kate; Weller, Orion; Sharma, Shreya; Jiang, Dongwei; Jiang, Zhengping; Mishra, Bhavana Dalvi; Tafjord, Oyvind; Jansen, Peter; Clark, Peter; Van Durme, Benjamin
Affiliations: Johns Hopkins University, United States; Allen Institute for AI, United States; University of Arizona, United States
Recent language models enable new opportunities for structured reasoning with text, such as the construction of intuitive, proof-like textual entailment trees without relying on brittle formal logic (Tafjord et al., 2...
A Systematic Survey and Critical Review on Evaluating Large Language Models: Challenges, Limitations, and Recommendations
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Laskar, Md Tahmid Rahman; Alqahtani, Sawsan; Bari, M. Saiful; Rahman, Mizanur; Khan, Mohammad Abdullah Matin; Khan, Haidar; Jahan, Israt; Bhuiyan, Md Amran Hossen; Tan, Chee Wei; Parvez, Md Rizwan; Hoque, Enamul; Joty, Shafiq; Huang, Jimmy Xiangji
Affiliations: York University, Canada; Princess Nourah Bint Abdulrahman University, Saudi Arabia; Nanyang Technological University, Singapore; National Center for AI, Saudi Arabia; Qatar; Dialpad Canada Inc., Canada; Royal Bank of Canada, Canada; Salesforce Research, Singapore
Large Language Models (LLMs) have recently gained significant attention due to their remarkable capabilities in performing diverse tasks across various domains. However, a thorough evaluation of these models is crucia...
MMCode: Benchmarking Multimodal Large Language Models in Code Generation with Visually Rich Programming Problems
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Kaixin; Tian, Yuchen; Hu, Qisheng; Luo, Ziyang; Huang, Zhiyong; Ma, Jing
Affiliations: National University of Singapore, Singapore; The University of Hong Kong, Hong Kong; Nanyang Technological University, Singapore; Hong Kong Baptist University, Hong Kong
Programming often involves converting detailed and complex specifications into code, a process during which developers typically utilize visual aids to more effectively convey concepts. While recent developments in La...
Mitigate Extrinsic Social Bias in Pre-trained Language Models via Continuous Prompts Adjustment
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Dai, Yiwei; Gu, Hengrui; Wang, Ying; Wang, Xin
Affiliations: School of Artificial Intelligence, Jilin University, Changchun, China; College of Computer Science and Technology, Jilin University, Changchun, China
Although pre-trained language models (PLMs) have been widely used in natural language understanding (NLU), they are still exposed to fairness issues. Most existing extrinsic debiasing methods rely on manually curated...
Context-aware Watermark with Semantic Balanced Green-red Lists for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Guo, Yuxuan; Tian, Zhiliang; Song, Yiping; Liu, Tianlun; Ding, Liang; Li, Dongsheng
Affiliations: National University of Defense Technology, China; Zhejiang University, China
Watermarking enables people to determine whether the text is generated by a specific model. It injects a unique signature based on the "green-red" list that can be tracked during detection, where the words i...
AMR-Evol: Adaptive Modular Response Evolution Elicits Better Knowledge Distillation for Large Language Models in Code Generation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Luo, Ziyang; Li, Xin; Lin, Hongzhan; Ma, Jing; Bing, Lidong
Affiliations: Hong Kong Baptist University, Hong Kong; Alibaba DAMO Academy, China
The impressive performance of proprietary LLMs like GPT4 in code generation has led to a trend to replicate these capabilities in open-source models through knowledge distillation (e.g. Code Evol-Instruct). However, t...