Consultation and Suggestions

Refine Search Results

Document Type

  • 7,585 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses

Collection Scope

  • 7,706 electronic documents
  • 1 print holding

Date Distribution

Discipline Classification

  • 6,483 Engineering
    • 6,256 Computer Science and Technology...
    • 3,577 Software Engineering
    • 748 Information and Communication Engineering
    • 535 Control Science and Engineering
    • 272 Electrical Engineering
    • 212 Bioengineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 86 Electronic Science and Technology (may...
    • 74 Biomedical Engineering (may...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,522 Management
    • 1,165 Library, Information and Archives Man...
    • 467 Management Science and Engineering (may...
    • 134 Business Administration
  • 1,471 Literature
    • 1,464 Foreign Languages and Literature
    • 161 Chinese Language and Literature
  • 1,446 Science
    • 776 Mathematics
    • 352 Physics
    • 249 Biology
    • 240 Statistics (may be awarded in Science,...
    • 120 Chemistry
    • 101 Systems Science
  • 164 Law
    • 153 Sociology
  • 129 Medicine
    • 93 Clinical Medicine
    • 75 Basic Medicine (may be awarded in Medicine...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art
  • 1 Military Science

Topics

  • 1,181 natural language...
  • 872 computational li...
  • 619 natural language...
  • 283 semantics
  • 165 natural language...
  • 128 machine learning
  • 127 graphic methods
  • 123 iterative method...
  • 111 sentiment analys...
  • 110 speech recogniti...
  • 105 deep learning
  • 94 syntactics
  • 90 text processing
  • 86 speech processin...
  • 81 embeddings
  • 72 information retr...
  • 69 modeling languag...
  • 69 artificial intel...
  • 66 contrastive lear...
  • 63 zero-shot learni...

Institutions

  • 74 Carnegie Mellon ...
  • 36 National Univers...
  • 34 Carnegie Mellon ...
  • 34 Language Technol...
  • 34 Institute for Na...
  • 33 University of Wa...
  • 33 School of Comput...
  • 32 Tsinghua Univers...
  • 31 University of Ch...
  • 30 Nanyang Technolo...
  • 30 Stanford Univers...
  • 29 Zhejiang Univers...
  • 27 Alibaba Grp Peop...
  • 26 Gaoling School o...
  • 26 Carnegie Mellon ...
  • 25 Harbin Institute...
  • 25 Peking Universit...
  • 25 Natl Univ Singap...
  • 24 Allen Inst Artif...
  • 23 The Chinese Univ...

Authors

  • 42 Neubig Graham
  • 39 Zhou Guodong
  • 39 Smith Noah A.
  • 36 Liu Yang
  • 36 Lapata Mirella
  • 34 Sun Maosong
  • 32 Zhang Min
  • 30 Liu Qun
  • 30 Hovy Eduard
  • 29 Zhao Jun
  • 27 Schütze Hinrich
  • 27 Liu Zhiyuan
  • 26 Gurevych Iryna
  • 25 Vulic Ivan
  • 22 Huang Xuanjing
  • 21 Chang Kai-Wei
  • 21 Liu Kang
  • 21 Zhang Yue
  • 21 Zhang Qi
  • 20 Wen Ji-Rong

Language

  • 6,955 English
  • 722 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search query: "Any field = Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,707 records; showing 231-240
Sort by:
SLANG: New Concept Comprehension of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mei, Lingrui; Liu, Shenghua; Wang, Yiwei; Bi, Baolong; Cheng, Xueqi. Affiliations: CAS Key Laboratory of AI Safety, Institute of Computing Technology, CAS, China; University of California, Los Angeles, United States; University of California, Merced, United States; University of Chinese Academy of Sciences, China
The dynamic nature of language, particularly evident in the realm of slang and memes on the Internet, poses serious challenges to the adaptability of Large Language Models (LLMs). Traditionally anchored to static data...
Sailor: Open Language Models for South-East Asia
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Dou, Longxu; Liu, Qian; Zeng, Guangtao; Guo, Jia; Zhou, Jiahui; Mao, Xin; Jin, Ziqi; Lu, Wei; Lin, Min. Affiliations: Sea AI Lab, Singapore; SUTD, Singapore
We present Sailor, a family of open language models ranging from 0.5B to 14B parameters, tailored for South-East Asian (SEA) languages. From Qwen1.5, Sailor models accept 200B to 400B tokens during continual pre-train...
Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Shimao; Gao, Changjiang; Zhu, Wenhao; Chen, Jiajun; Huang, Xin; Han, Xue; Feng, Junlan; Deng, Chao; Huang, Shujian. Affiliations: National Key Laboratory for Novel Software Technology, Nanjing University, China; China Mobile Research, Beijing, China
Recently, Large Language Models (LLMs) have shown impressive language capabilities, while most of them have very unbalanced performance across different languages. Multilingual alignment based on the translation paral...
A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ackerman, Samuel; Rabinovich, Ella; Farchi, Eitan; Anaby-Tavor, Ateret. Affiliations: IBM Research, United States
We evaluate the robustness of several large language models on multiple datasets. Robustness here refers to the relative insensitivity of the model's answers to meaning-preserving variants of their input. Benchmar...
Improving Discriminative Capability of Reward Models in RLHF Using Contrastive Learning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Lu; Zheng, Rui; Wang, Binghai; Jin, Senjie; Huang, Caishuang; Ye, Junjie; Zhang, Zhihao; Zhou, Yuhao; Xi, Zhiheng; Gui, Tao; Zhang, Qi; Huang, Xuanjing. Affiliations: School of Computer Science, Fudan University, China; Institute of Modern Languages and Linguistics, Fudan University, China; Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
Reinforcement Learning from Human Feedback (RLHF) is a crucial approach to aligning language models with human values and intentions. A fundamental challenge in this method lies in ensuring that the reward model accur...
One-to-many testing for code generation from (just) natural language
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Uniyal, Mansi; Singh, Mukul; Verbruggen, Gust; Gulwani, Sumit; Le, Vu
MBPP is a popular dataset for evaluating the task of code generation from natural language. Despite its popularity, there are three problems: (1) it relies on providing test cases to generate the right signature, (2) ...
Efficient Unseen Language Adaptation for Multilingual Pre-Trained Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Po-Heng; Chen, Yun-Nung. Affiliations: National Taiwan University, Taipei, Taiwan
Multilingual pre-trained language models (mPLMs) have demonstrated notable effectiveness in zero-shot cross-lingual transfer. They can be fine-tuned solely on tasks in the source language and subsequently applied ...
Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Land, Sander; Bartolo, Max. Affiliations: Cohere
The disconnect between tokenizer creation and model training in language models allows for specific inputs, such as the infamous _SolidGoldMagikarp token, to induce unwanted model behaviour. Although such 'glitch t...
ITINERA: Integrating Spatial Optimization with Large Language Models for Open-domain Urban Itinerary Planning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tang, Yihong; Wang, Zhaokai; Qu, Ao; Yan, Yihao; Wu, Zhaofeng; Zhuang, Dingyi; Kai, Jushi; Hou, Kebing; Guo, Xiaotong; Zhao, Jinhua; Zhao, Zhan; Ma, Wei. Affiliations: Tutu AI; University of Hong Kong, Hong Kong; Shanghai Jiao Tong University, China; Massachusetts Institute of Technology, United States; The Hong Kong Polytechnic University, Hong Kong
Citywalk, a recently popular form of urban travel, requires genuine personalization and understanding of fine-grained requests compared to traditional itinerary planning. In this paper, we introduce the novel task of ...
MAgÏC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xu, Lin; Hu, Zhiyuan; Zhou, Daquan; Ren, Hongyu; Dong, Zhen; Keutzer, Kurt; Ng, See-Kiong; Feng, Jiashi. Affiliations: National University of Singapore, Singapore; ByteDance, China; Stanford University, United States; UC Berkeley, United States
Large Language Models (LLMs) have significantly advanced natural language processing, demonstrating exceptional reasoning, tool usage, and memory capabilities. As their applications expand into multi-agent environment...