
Refine Search Results

Document Type

  • 14,600 conference papers
  • 627 journal articles
  • 101 books
  • 37 theses

Collection

  • 15,364 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,996 Engineering
    • 10,331 Computer Science and Technology...
    • 5,391 Software Engineering
    • 1,449 Information and Communication Engineering
    • 957 Electrical Engineering
    • 878 Control Science and Engineering
    • 433 Bioengineering
    • 222 Cyberspace Security
    • 218 Chemical Engineering and Technology
    • 185 Mechanical Engineering
    • 177 Biomedical Engineering (may confer...
    • 141 Electronic Science and Technology (may...
    • 101 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,447 Natural Sciences
    • 1,138 Mathematics
    • 652 Physics
    • 503 Biology
    • 379 Statistics (may confer Science,...
    • 240 Systems Science
    • 231 Chemistry
  • 2,381 Management
    • 1,726 Library, Information and Archives Manage...
    • 742 Management Science and Engineering (may...
    • 235 Business Administration
    • 104 Public Administration
  • 1,823 Literature
    • 1,771 Foreign Languages and Literatures
    • 169 Chinese Language and Literature
  • 503 Medicine
    • 301 Clinical Medicine
    • 282 Basic Medicine (may confer Medicine...
    • 111 Public Health and Preventive Med...
  • 275 Law
    • 245 Sociology
  • 237 Education
    • 225 Education
  • 100 Agriculture
  • 93 Economics
  • 10 Art
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,563 natural language...
  • 1,792 natural language...
  • 950 computational li...
  • 752 semantics
  • 678 machine learning
  • 620 deep learning
  • 518 natural language...
  • 376 computational mo...
  • 368 accuracy
  • 355 training
  • 351 sentiment analys...
  • 349 large language m...
  • 337 feature extracti...
  • 313 data mining
  • 289 speech processin...
  • 262 transformers
  • 255 speech recogniti...
  • 234 neural networks
  • 217 iterative method...
  • 216 support vector m...

Institution

  • 85 carnegie mellon ...
  • 51 university of ch...
  • 45 tsinghua univers...
  • 44 carnegie mellon ...
  • 42 zhejiang univers...
  • 41 national univers...
  • 35 univ chinese aca...
  • 35 nanyang technolo...
  • 35 carnegie mellon ...
  • 34 university of sc...
  • 34 university of wa...
  • 33 alibaba grp peop...
  • 32 gaoling school o...
  • 32 stanford univers...
  • 30 tsinghua univ de...
  • 30 school of artifi...
  • 28 peking universit...
  • 27 harbin institute...
  • 27 language technol...
  • 26 univ sci & techn...

Author

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 31 smith noah a.
  • 29 lapata mirella
  • 28 schütze hinrich
  • 26 wen ji-rong
  • 26 liu zhiyuan
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 22 wang wei
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris

Language

  • 13,828 English
  • 1,418 Other
  • 123 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: Any field = "Conference on empirical methods in natural language processing"
15,365 records in total; showing 1291-1300
Language and Mental Health: Measures of Emotion Dynamics from Text as Linguistic Biosocial Markers
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Teodorescu, Daniela; Cheng, Tiffany; Fyshe, Alona; Mohammad, Saif M.
Affiliations: Univ Alberta Alberta Machine Intelligence Inst Amii Dept Comp Sci Edmonton AB Canada; Ludwig Maximilians Univ Munchen Ctr Informat & Language Proc MaiNLP Munich Germany; Carleton Univ Ottawa ON Canada; Univ Alberta Dept Psychol Edmonton AB Canada; Natl Res Council Canada Ottawa ON Canada; Univ Alberta Edmonton AB Canada
Research in psychopathology has shown that, at an aggregate level, the patterns of emotional change over time (emotion dynamics) are indicators of one's mental health. One's patterns of emotion change have tradi...
Mitigating the Alignment Tax of RLHF
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lin, Yong; Lin, Hangyu; Xiong, Wei; Diao, Shizhe; Liu, Jianmeng; Zhang, Jipeng; Pan, Rui; Wang, Haoxiang; Hu, Wenbin; Zhang, Hanning; Dong, Hanze; Pi, Renjie; Zhao, Han; Jiang, Nan; Ji, Heng; Yao, Yuan; Zhang, Tong
Affiliations: Princeton University Princeton Language and Intelligence United States; The Hong Kong University of Science and Technology Hong Kong; University of Illinois Urbana-Champaign United States; NVIDIA United States
LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting pretrained abilities, which is also known as the alignment t...
Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chiang, Cheng-Han; Chen, Wei-Chih; Kuan, Chun-Yi; Yang, Chienchou; Lee, Hung-Yi
Affiliations: National Taiwan University Taiwan; Mediatek Inc. Taiwan
Using large language models (LLMs) for automatic evaluation has become an important evaluation method in NLP research. However, it is unclear whether these LLM-based evaluators can be applied in real-world classrooms ...
Evaluating n-Gram Novelty of Language Models Using RUSTY-DAWG
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Merrill, William; Smith, Noah A.; Elazar, Yanai
Affiliations: New York University United States; Allen Institute for AI United States; University of Washington United States
How novel are texts generated by language models (LMs) relative to their training corpora? In this work, we investigate the extent to which modern LMs generate n-grams from their training data, evaluating both (i) the...
Language is Scary when Over-Analyzed: Unpacking Implied Misogynistic Reasoning with Argumentation Theory-Driven Prompts
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Muti, Arianna; Ruggeri, Federico; Al-Khatib, Khalid; Barrón-Cedeño, Alberto; Caselli, Tommaso
Affiliations: DIT Università di Bologna Forlì Italy; DISI Università di Bologna Bologna Italy; CLCG University of Groningen Groningen Netherlands
We propose misogyny detection as an Argumentative Reasoning task and we investigate the capacity of large language models (LLMs) to understand the implicit reasoning used to convey misogyny in both Italian and English...
Where am I? Large Language Models Wandering between Semantics and Structures in Long Contexts
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Koo, Seonmin; Kim, Jinsung; Jang, YoungJoon; Park, Chanjun; Lim, Heuiseok
Affiliations: Department of Computer Science and Engineering, Korea University, Republic of Korea; Upstage AI
As the utilization of large language models (LLMs) becomes more widespread, there is a growing demand for their ability to handle more complex and longer external knowledge across various use cases. Most existing eval...
The COT COLLECTION: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Kim, Seungone; Jool, Se June; Kim, Doyoung; Jang, Joel; Ye, Seonghyeon; Shin, Jamin; Seo, Minjoon
Affiliations: KAIST AI Seoul South Korea; NAVER AI Lab Bundangdong South Korea; Univ Washington Seattle WA 98195 USA
Language models (LMs) with fewer than 100B parameters are known to perform poorly on chain-of-thought (CoT) reasoning, in contrast to large LMs, when solving unseen tasks. In this work, we aim to equip smaller LMs with t...
Measuring Psychological Depth in Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Harel-Canada, Fabrice; Zhou, Hanyu; Muppalla, Sreya; Yildiz, Zeynep; Kim, Miryung; Sahai, Amit; Peng, Nanyun
Affiliations: University of California Los Angeles United States
Evaluations of creative stories generated by large language models (LLMs) often focus on objective properties of the text, such as its style, coherence, and diversity. While these metrics are indispensable, they do no...
ToolBeHonest: A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Yuxiang; Chen, Jing; Wang, Junjie; Liu, Yaxin; Yang, Cheng; Shi, Chufan; Zhu, Xinyu; Lin, Zihao; Wan, Hanwen; Yang, Yujiu; Sakai, Tetsuya; Feng, Tian; Yamana, Hayato
Affiliations: Waseda University Japan; Zhejiang University China; Tsinghua University China; CUHK Hong Kong; Virginia Tech United States; CUHK Shenzhen China
Tool-augmented large language models (LLMs) are rapidly being integrated into real-world applications. Due to the lack of benchmarks, the community has yet to fully understand the hallucination issues within these mod...
Towards Difficulty-Agnostic Efficient Transfer Learning for Vision-Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yang, Yongjin; Ko, Jongwoo; Yun, Se-Young
Affiliations: KAIST AI Republic of Korea
Vision-language models (VLMs) like CLIP have demonstrated remarkable applicability across a variety of downstream tasks, including zero-shot image classification. Recently, the use of prompts or adapters for efficient...