Refine Search Results

Document Type

  • 14,416 conference papers
  • 650 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,207 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,942 Engineering
    • 10,282 Computer Science and Technology...
    • 5,394 Software Engineering
    • 1,487 Information and Communication Engineering
    • 960 Electrical Engineering
    • 898 Control Science and Engineering
    • 446 Bioengineering
    • 239 Cyberspace Security
    • 222 Chemical Engineering and Technology
    • 192 Mechanical Engineering
    • 173 Biomedical Engineering (may confer...
    • 144 Electronic Science and Technology (may confer...
    • 109 Safety Science and Engineering
    • 104 Transportation Engineering
  • 2,490 Natural Sciences
    • 1,167 Mathematics
    • 652 Physics
    • 518 Biology
    • 396 Statistics (may confer...
    • 241 Systems Science
    • 232 Chemistry
  • 2,426 Management
    • 1,757 Library, Information and Archives Man...
    • 763 Management Science and Engineering (may...
    • 239 Business Administration
    • 109 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 515 Medicine
    • 300 Clinical Medicine
    • 281 Basic Medicine (may confer...
    • 117 Public Health and Preventive Med...
  • 274 Law
    • 247 Sociology
  • 236 Education
    • 223 Education
  • 100 Agronomy
  • 97 Economics
  • 9 Art Studies
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,531 篇 natural language...
  • 1,755 篇 natural language...
  • 952 篇 computational li...
  • 736 篇 semantics
  • 686 篇 machine learning
  • 610 篇 deep learning
  • 520 篇 natural language...
  • 345 篇 computational mo...
  • 334 篇 training
  • 333 篇 sentiment analys...
  • 329 篇 accuracy
  • 323 篇 large language m...
  • 319 篇 feature extracti...
  • 311 篇 data mining
  • 290 篇 speech processin...
  • 265 篇 speech recogniti...
  • 251 篇 transformers
  • 237 篇 neural networks
  • 217 篇 iterative method...
  • 211 篇 support vector m...

Institution

  • 85 篇 carnegie mellon ...
  • 51 篇 university of ch...
  • 45 篇 carnegie mellon ...
  • 44 篇 tsinghua univers...
  • 42 篇 zhejiang univers...
  • 42 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 university of sc...
  • 34 篇 carnegie mellon ...
  • 33 篇 stanford univers...
  • 32 篇 gaoling school o...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 28 篇 peking universit...
  • 27 篇 language technol...
  • 26 篇 microsoft resear...

Author

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 27 篇 lapata mirella
  • 26 篇 wen ji-rong
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,595 English
  • 500 Other
  • 102 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,208 records; showing 4971-4980
iSentenizer: An incremental sentence boundary classifier
International Conference on Natural Language Processing and Knowledge Engineering
Authors: Wong, Fai; Chao, Sam (CIS, University of Macau, China)
In this paper, we revisit the topic of sentence boundary detection and propose an incremental approach to the problem. The boundary classifier is revised on the fly to adapt to text of a high variety of s...
Detailed Study of Deep Learning Models for Natural Language Processing
2nd IEEE International Conference on Advances in Computing, Communication Control and Networking, ICACCCN 2020
Authors: Gupta, Megha; Verma, Shailesh Kumar; Jain, Priyanshu (University School of Information Communication and Technology, Guru Gobind Singh Indraprastha University, New Delhi, India)
Natural language processing involves the computational processing and understanding of human languages. With the increase in computational power, deep learning models are being used for various NLP tasks. Further availabil...
INTENTIONQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ding, Wenxuan; Wang, Weiqi; Kwok, Sze Heng Douglas; Liu, Minghao; Fang, Tianqing; Bai, Jiaxin; Liu, Xin; Yu, Changlong; Li, Zheng; Luo, Chen; Yin, Qingyu; Yin, Bing; He, Junxian; Song, Yangqiu (Department of Computer Science and Engineering, HKUST, Hong Kong; *** Inc., Palo Alto, CA, United States)
Enhancing Language Models' (LMs) ability to understand purchase intentions in E-commerce scenarios is crucial for their effective assistance in various downstream tasks. However, previous approaches that distill i...
MemPrompt: Memory-assisted Prompt Editing with User Feedback
2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
Authors: Madaan, Aman; Tandon, Niket; Clark, Peter; Yang, Yiming (Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA, United States; Allen Institute for Artificial Intelligence, Seattle, WA, United States)
Large LMs such as GPT-3 are powerful, but can commit mistakes that are obvious to humans. For example, GPT-3 would mistakenly interpret "What word is similar to good?" to mean a homophone, while the user int...
ConTReGen: Context-driven Tree-structured Retrieval for Open-domain Long-form Text Generation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Roy, Kashob Kumar; Akash, Pritom Saha; Chang, Kevin Chen-Chuan; Popa, Lucian (University of Illinois Urbana-Champaign, United States; IBM Research, United States)
Open-domain long-form text generation requires generating coherent, comprehensive responses that address complex queries with both breadth and depth. This task is challenging due to the need to accurately capture dive...
SLING: Sino LINGuistic Evaluation of Large Language Models
2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
Authors: Song, Yixiao; Krishna, Kalpesh; Bhatt, Rajesh; Iyyer, Mohit (Department of Linguistics, UMass Amherst, United States; Manning College of Information and Computer Sciences, UMass Amherst, United States)
To understand what kinds of linguistic knowledge are encoded by pretrained Chinese language models (LMs), we introduce the benchmark of Sino LINGuistics (SLING), which consists of 38K minimal sentence pairs in Mandari...
Unsupervised Classification of Sentiment and Objectivity in Chinese Text
3rd International Joint Conference on Natural Language Processing, IJCNLP 2008
Authors: Zagibalov, Taras; Carroll, John (Department of Informatics, University of Sussex, Brighton BN1 9QH, United Kingdom)
We address the problem of sentiment and objectivity classification of product reviews in Chinese. Our approach is distinctive in that it treats both positive / negative sentiment and subjectivity / objectivity not as ...
Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual Style Transfer with Small Language Models
2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
Authors: Suzgun, Mirac; Melas-Kyriazi, Luke; Jurafsky, Dan (Stanford University, United States; Oxford University, United Kingdom)
We propose a method for arbitrary textual style transfer (TST), the task of transforming a text into any given style, utilizing general-purpose pre-trained language models. Our method, Prompt-and-Rerank, is based on a m...
SSP: Self-Supervised Prompting for Cross-Lingual Transfer to Low-Resource Languages using Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Rathore, Vipul; Deb, Aniruddha; Chandresh, Ankish; Singla, Parag; Mausam (Indian Institute of Technology, New Delhi, India)
Recently, very large language models (LLMs) have shown exceptional performance on several English NLP tasks with just in-context learning (ICL), but their utility in other languages is still underexplored. We investig...
When "A Helpful Assistant" Is Not Really Helpful: Personas in System Prompts Do Not Improve Performances of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zheng, Mingqian; Pei, Jiaxin; Logeswaran, Lajanugen; Lee, Moontae; Jurgens, David (Carnegie Mellon University, United States; Stanford University, United States; LG AI Research, Republic of Korea; University of Illinois Chicago, United States; University of Michigan, United States)
Prompting serves as the major way humans interact with Large Language Models (LLMs). Commercial AI systems commonly define the role of the LLM in system prompts. For example, ChatGPT uses "You are a helpful assist...