
Refine Search Results

Document Type

  • 14,600 conference papers
  • 625 journal articles
  • 101 books
  • 37 theses

Collection Scope

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,994 Engineering
    • 10,330 Computer Science and Technology...
    • 5,391 Software Engineering
    • 1,449 Information and Communication Engineering
    • 956 Electrical Engineering
    • 878 Control Science and Engineering
    • 433 Bioengineering
    • 222 Cyberspace Security
    • 218 Chemical Engineering and Technology
    • 185 Mechanical Engineering
    • 177 Biomedical Engineering (may confer...
    • 141 Electronic Science and Technology (may...
    • 101 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,447 Science
    • 1,138 Mathematics
    • 652 Physics
    • 503 Biology
    • 379 Statistics (may confer Science, ...
    • 240 Systems Science
    • 231 Chemistry
  • 2,381 Management
    • 1,726 Library, Information and Archives Manag...
    • 742 Management Science and Engineering (may...
    • 235 Business Administration
    • 104 Public Administration
  • 1,823 Literature
    • 1,771 Foreign Languages and Literatures
    • 169 Chinese Language and Literature
  • 504 Medicine
    • 300 Clinical Medicine
    • 282 Basic Medicine (may confer Medicine...
    • 111 Public Health and Preventive Medi...
  • 275 Law
    • 245 Sociology
  • 237 Education
    • 225 Education
  • 100 Agriculture
  • 93 Economics
  • 10 Art
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,563 natural language...
  • 1,791 natural language...
  • 950 computational li...
  • 753 semantics
  • 686 machine learning
  • 620 deep learning
  • 518 natural language...
  • 373 computational mo...
  • 369 accuracy
  • 356 training
  • 349 large language m...
  • 338 sentiment analys...
  • 328 feature extracti...
  • 311 data mining
  • 289 speech processin...
  • 262 transformers
  • 260 speech recogniti...
  • 236 neural networks
  • 218 iterative method...
  • 216 support vector m...

Institution

  • 85 carnegie mellon ...
  • 52 university of ch...
  • 45 tsinghua univers...
  • 44 carnegie mellon ...
  • 42 zhejiang univers...
  • 41 national univers...
  • 35 univ chinese aca...
  • 35 nanyang technolo...
  • 35 carnegie mellon ...
  • 34 university of sc...
  • 34 university of wa...
  • 33 alibaba grp peop...
  • 32 gaoling school o...
  • 32 stanford univers...
  • 30 tsinghua univ de...
  • 30 school of artifi...
  • 28 peking universit...
  • 27 harbin institute...
  • 27 language technol...
  • 26 univ sci & techn...

Author

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 26 wen ji-rong
  • 26 liu zhiyuan
  • 26 lapata mirella
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 13,826 English
  • 1,418 Other
  • 123 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: Any field = "Conference on empirical methods in natural language processing"
15,363 records; showing results 1331–1340
Code-Switched Language Identification is Harder Than You Think
18th Conference of the European Chapter of the Association for Computational Linguistics (EACL)
Authors: Burchell, Laurie; Birch, Alexandra; Thompson, Robert P.; Heafield, Kenneth. Univ Edinburgh, Sch Informat, Inst Language Cognit & Computat, 10 Crichton St, Edinburgh EH8 9AB, Midlothian, Scotland; Univ Cambridge, Dept Mat Sci & Met, 27 Charles Babbage Rd, Cambridge CB3 0FS, England
Code switching (CS) is a very common phenomenon in written and spoken communication but one that is handled poorly by many natural language processing (NLP) applications. Looking to the application of building CS corp...
Narrowing the Gap between Supervised and Unsupervised Sentence Representation Learning with Large Language Model
38th AAAI Conference on Artificial Intelligence (AAAI) / 36th Conference on Innovative Applications of Artificial Intelligence / 14th Symposium on Educational Advances in Artificial Intelligence
Authors: Li, Mingxin; Zhang, Richong; Nie, Zhijie; Mao, Yongyi. Beihang Univ, Sch Comp Sci & Engn, SKLSDE, Beijing, Peoples R China; Zhongguancun Lab, Beijing, Peoples R China; Beihang Univ, Shen Yuan Honors Coll, Beijing, Peoples R China; Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON, Canada
Sentence Representation Learning (SRL) is a fundamental task in natural language processing (NLP), with the Contrastive Learning of Sentence Embeddings (CSE) being the mainstream technique due to its superior performa...
Improving Adversarial Robustness in Vision-Language Models with Architecture and Prompt Design
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Bhagwatkar, Rishika; Nayak, Shravan; Bashivan, Pouya; Rish, Irina. Mila - Quebec AI Institute, Canada; Université de Montréal, Canada; McGill University, Canada
Vision-Language Models (VLMs) have seen a significant increase in both research interest and real-world applications across various domains, including healthcare, autonomous systems, and security. However, their growi...
Mitigating Catastrophic Forgetting in Language Transfer via Model Merging
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Alexandrov, Anton; Raychev, Veselin; Müller, Mark Niklas; Zhang, Ce; Vechev, Martin; Toutanova, Kristina. INSAIT, Sofia University "St. Kliment Ohridski", Bulgaria; LogicStar.ai; ETH Zurich, Switzerland; University of Chicago, United States; Together AI; Google DeepMind, United Kingdom
As open-weight large language models (LLMs) achieve ever more impressive performances across a wide range of tasks in English, practitioners aim to adapt these models to different languages. However, such language ada...
BLT: Can Large Language Models Handle Basic Legal Text?
6th Natural Legal Language Processing Workshop 2024, NLLP 2024, co-located with the 2024 Conference on Empirical Methods in Natural Language Processing
Authors: Blair-Stanek, Andrew; Holzenberger, Nils; Van Durme, Benjamin. Johns Hopkins University, United States; University of Maryland School of Law, United States; Télécom Paris - Institut Polytechnique de Paris, France
We find that the best publicly available LLMs like GPT-4 and Claude currently perform poorly on basic legal text handling. This motivates the creation of a benchmark consisting of examples that lawyers and paralegals ...
Self-training Large Language Models through Knowledge Detection
2024 Findings of the Association for Computational Linguistics, EMNLP 2024
Authors: Yeo, Wei Jie; Ferdinan, Teddy; Kazienko, Przemyslaw; Satapathy, Ranjan; Cambria, Erik. Singapore
Large language models (LLMs) often necessitate extensive labeled datasets and training compute to achieve impressive performance across downstream tasks. This paper explores a self-training paradigm, where the LLM aut...
AUTOHALLUSION: Automatic Generation of Hallucination Benchmarks for Vision-Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wu, Xiyang; Guan, Tianrui; Li, Dianqi; Huang, Shuaiyi; Liu, Xiaoyu; Wang, Xijun; Xian, Ruiqi; Shrivastava, Abhinav; Huang, Furong; Boyd-Graber, Jordan Lee; Zhou, Tianyi; Manocha, Dinesh. University of Maryland, College Park, United States
Large vision-language models (LVLMs) are prone to hallucinations, where certain contextual cues in an image can trigger the language module to produce overconfident and incorrect reasoning about abnormal or hypothetic...
Scalable and Domain-General Abstractive Proposition Segmentation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hosseini, Mohammad Javad; Gao, Yang; Baumgärtner, Tim; Fabrikant, Alex; Amplayo, Reinald Kim. Google DeepMind, United Kingdom; Ubiquitous Knowledge Processing Lab, Technical University of Darmstadt, Germany
Segmenting text into fine-grained units of meaning is important to a wide range of NLP applications. The default approach of segmenting text into sentences is often insufficient, especially since sentences are usually complex enoug...
Breaking Language Barriers in Multilingual Mathematical Reasoning: Insights and Observations
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Nuo; Zheng, Zinan; Wu, Ning; Gong, Ming; Zhang, Dongmei; Li, Jia. Hong Kong University of Science and Technology, Hong Kong; Microsoft, United States
Existing research predominantly focuses on developing powerful large language models (LLMs) for mathematical reasoning within monolingual languages, with few explorations in preserving efficacy in a multilingual conte...
Improving Few-Shot Cross-Domain Named Entity Recognition by Instruction Tuning a Word-Embedding based Retrieval Augmented Large Language Model
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Nandi, Subhadip; Agrawal, Neeraj. IIT Kanpur, India; IISc Bangalore, India
Few-Shot Cross-Domain NER is the process of leveraging knowledge from data-rich source domains to perform entity recognition on data-scarce target domains. Most previous state-of-the-art (SOTA) approaches use pre-trai...