
Refine Results

Document Type

  • 7,585 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses/dissertations

Holdings

  • 7,706 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 6,483 Engineering
    • 6,256 Computer Science and Technology...
    • 3,577 Software Engineering
    • 748 Information and Communication Engineering
    • 535 Control Science and Engineering
    • 272 Electrical Engineering
    • 212 Biological Engineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 86 Electronic Science and Technology (...
    • 74 Biomedical Engineering (...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,522 Management
    • 1,165 Library, Information and Archives Man...
    • 467 Management Science and Engineering (...
    • 134 Business Administration
  • 1,471 Literature
    • 1,464 Foreign Languages and Literatures
    • 161 Chinese Language and Literature
  • 1,446 Science
    • 776 Mathematics
    • 352 Physics
    • 249 Biology
    • 240 Statistics (...
    • 120 Chemistry
    • 101 Systems Science
  • 164 Law
    • 153 Sociology
  • 129 Medicine
    • 93 Clinical Medicine
    • 75 Basic Medicine (...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art
  • 1 Military Science

Topic

  • 1,181 natural language...
  • 872 computational li...
  • 619 natural language...
  • 283 semantics
  • 165 natural language...
  • 128 machine learning
  • 127 graphic methods
  • 123 iterative method...
  • 111 sentiment analys...
  • 110 speech recogniti...
  • 105 deep learning
  • 94 syntactics
  • 90 text processing
  • 86 speech processin...
  • 81 embeddings
  • 72 information retr...
  • 69 modeling languag...
  • 69 artificial intel...
  • 66 contrastive lear...
  • 63 zero-shot learni...

Institution

  • 74 Carnegie Mellon ...
  • 36 National Univers...
  • 34 Carnegie Mellon ...
  • 34 Language Technol...
  • 34 Institute for Na...
  • 33 University of Wa...
  • 33 School of Comput...
  • 32 Tsinghua Univers...
  • 31 University of Ch...
  • 30 Nanyang Technolo...
  • 30 Stanford Univers...
  • 29 Zhejiang Univers...
  • 27 Alibaba Grp Peop...
  • 26 Gaoling School o...
  • 26 Carnegie Mellon ...
  • 25 Harbin Institute...
  • 25 Peking Universit...
  • 25 Natl Univ Singap...
  • 24 Allen Inst Artif...
  • 23 The Chinese Univ...

Author

  • 42 Neubig, Graham
  • 39 Zhou, Guodong
  • 39 Smith, Noah A.
  • 36 Liu, Yang
  • 36 Lapata, Mirella
  • 34 Sun, Maosong
  • 32 Zhang, Min
  • 30 Liu, Qun
  • 30 Hovy, Eduard
  • 29 Zhao, Jun
  • 27 Schütze, Hinrich
  • 27 Liu, Zhiyuan
  • 26 Gurevych, Iryna
  • 25 Vulic, Ivan
  • 22 Huang, Xuanjing
  • 21 Chang, Kai-Wei
  • 21 Liu, Kang
  • 21 Zhang, Yue
  • 21 Zhang, Qi
  • 20 Wen, Ji-Rong

Language

  • 6,955 English
  • 722 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search query: "Any field = Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,707 records; showing results 71-80
Comparing a BERT Classifier and a GPT Classifier for Detecting Connective Language Across Multiple Social Media
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lukito, Josephine; Chen, Bin; Masullo, Gina M.; Stroud, Natalie Jomini. Affiliations: Center for Media Engagement, University of Texas Austin, United States; The University of Hong Kong, Hong Kong
This study presents an approach for detecting connective language (defined as language that facilitates engagement, understanding, and conversation) from social media *** developed and evaluated two types of classifiers...
DEFT-UCS: Data Efficient Fine-Tuning for Pre-Trained Language Models via Unsupervised Core-Set Selection for Text-Editing
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Das, Devleena; Khetan, Vivek. Affiliations: Georgia Institute of Technology, United States; Accenture Labs
Recent advances have led to the availability of many pre-trained language models (PLMs); however, a question that remains is how much data is truly needed to fine-tune PLMs for downstream tasks? In this work, we introd...
Academics Can Contribute to Domain-Specialized Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Dredze, Mark; Winata, Genta Indra; Kambadur, Prabhanjan; Wu, Shijie; Irsoy, Ozan; Lu, Steven; Dabravolski, Vadim; Rosenberg, David S.; Gehrmann, Sebastian. Affiliations: Bloomberg, United States; Johns Hopkins University, United States; Capital One, United States; Anthropic, United States
Commercially available models dominate academic leaderboards. While impressive, this has concentrated research on creating and adapting general-purpose models to improve NLP leaderboard standings for large language mo...
On Eliciting Syntax from Language Models via Hashing
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Yiran; Utiyama, Masao. Affiliation: Japan
Unsupervised parsing, also known as grammar induction, aims to infer syntactic structure from raw text. Recently, binary representation has exhibited remarkable information-preserving capabilities at both lexicon and ...
Factuality of Large Language Models: A Survey
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Yuxia; Wang, Minghan; Manzoor, Muhammad Arslan; Liu, Fei; Georgiev, Georgi; Das, Rocktim Jyoti; Nakov, Preslav. Affiliations: MBZUAI, United Arab Emirates; Monash University, Australia; Google, United States; Sofia University, Bulgaria
Large language models (LLMs), especially when instruction-tuned for chat, have become part of our daily lives, freeing people from the process of searching, extracting, and integrating information from multiple source...
Adaption-of-Thought: Learning Question Difficulty Improves Large Language Models for Reasoning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xu, Mayi; Li, Yongqi; Sun, Ke; Qian, Tieyun. Affiliations: School of Computer Science, Wuhan University, China; Intellectual Computing Laboratory for Cultural Heritage, Wuhan University, China
Large language models (LLMs) have shown excellent capability for solving reasoning problems. Existing approaches do not differentiate the question difficulty when designing prompting methods for them. Clearly, a simpl...
Enhancing Language Model Alignment: A Confidence-Based Approach to Label Smoothing
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Huang, Baihe; Sharma, Hiteshi; Mao, Yi. Affiliations: University of California Berkeley, United States; Microsoft Research, United States
In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities across various domains. Within the training pipeline of LLMs, the Reinforcement Learning with Human Feedback (RLHF) phase is cruc...
SCOI: Syntax-augmented Coverage-based In-context Example Selection for Machine Translation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tang, Chenming; Wang, Zhixiang; Wu, Yunfang. Affiliations: National Key Laboratory for Multimedia Information Processing, Peking University; MOE Key Laboratory of Computational Linguistics, Peking University; School of Computer Science, Peking University
In-context learning (ICL) greatly improves the performance of large language models (LLMs) on various downstream tasks, where the improvement highly depends on the quality of demonstrations. In this work, we introduc...
Can Large Language Models Learn Independent Causal Mechanisms?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gendron, Gaël; Nguyen, Bao Trung; Peng, Alex Yuxuan; Witbrock, Michael; Dobbie, Gillian. Affiliation: NAOInstitute, University of Auckland, New Zealand
Despite impressive performance on language modelling and complex reasoning tasks, Large Language Models (LLMs) fall short on the same tasks in uncommon settings or with distribution shifts, exhibiting a lack of genera...
Evalverse: Unified and Accessible Library for Large Language Model Evaluation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kim, Jihoo; Song, Wonho; Kim, Dahyun; Kim, Yunsu; Kim, Yungi; Park, Chanjun. Affiliation: Upstage AI
This paper introduces Evalverse, a novel library that streamlines the evaluation of Large Language Models (LLMs) by unifying disparate evaluation tools into a single, user-friendly framework. Evalverse enables individ...