Refine Results

Document Type

  • 14,463 conference papers
  • 654 journal articles
  • 101 books
  • 40 theses/dissertations
  • 1 technical report

Collection

  • 15,258 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,944 Engineering
    • 10,283 Computer Science and Technology...
    • 5,408 Software Engineering
    • 1,463 Information and Communication Engineering
    • 954 Electrical Engineering
    • 880 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (can confer...
    • 142 Electronic Science and Technology (can confer...
    • 101 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (can confer Science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,416 Management
    • 1,748 Library, Information and Archives Manage...
    • 757 Management Science and Engineering (can...
    • 239 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 283 Basic Medicine (can confer Medicine...
    • 111 Public Health and Preventive Medi...
  • 276 Law
    • 248 Sociology
  • 237 Education
    • 224 Education
  • 100 Agronomy
  • 96 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,535 篇 natural language...
  • 1,768 篇 natural language...
  • 952 篇 computational li...
  • 740 篇 semantics
  • 681 篇 machine learning
  • 609 篇 deep learning
  • 520 篇 natural language...
  • 347 篇 computational mo...
  • 338 篇 training
  • 333 篇 accuracy
  • 331 篇 sentiment analys...
  • 329 篇 large language m...
  • 321 篇 feature extracti...
  • 311 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 252 篇 transformers
  • 235 篇 neural networks
  • 217 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 51 篇 university of ch...
  • 45 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 university of sc...
  • 34 篇 carnegie mellon ...
  • 33 篇 stanford univers...
  • 32 篇 gaoling school o...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 27 篇 language technol...
  • 27 篇 peking universit...
  • 26 篇 microsoft resear...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 27 篇 lapata mirella
  • 26 篇 wen ji-rong
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,662 English
  • 482 Other
  • 106 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian

Search query: "Any field = Conference on empirical methods in natural language processing"
15,259 records; showing 591-600

Sailor: Open Language Models for South-East Asia
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Dou, Longxu; Liu, Qian; Zeng, Guangtao; Guo, Jia; Zhou, Jiahui; Mao, Xin; Jin, Ziqi; Lu, Wei; Lin, Min (Sea AI Lab, Singapore; SUTD, Singapore)
We present Sailor, a family of open language models ranging from 0.5B to 14B parameters, tailored for South-East Asian (SEA) languages. From Qwen1.5, Sailor models accept 200B to 400B tokens during continual pre-train...

Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Meng, Tao; Mehrabi, Ninareh; Goyal, Palash; Ramakrishna, Anil; Galstyan, Aram; Zemel, Richard; Chang, Kai-Wei; Gupta, Rahul; Peris, Charith (University of California Los Angeles, United States; *** Inc., United States)
We propose a constraint learning schema for fine-tuning Large Language Models (LLMs) with attribute control. Given a training corpus and control criteria formulated as a sequence-level constraint on model outputs, our...

VE-KD: Vocabulary-Expansion Knowledge-Distillation for Training Smaller Domain-Specific Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gao, Pengju; Yamasaki, Tomohiro; Imoto, Kazunori (Toshiba Corporation, Tokyo, Japan)
We propose VE-KD, a novel method that balances knowledge distillation and vocabulary expansion with the aim of training efficient domain-specific language models. Compared with traditional pre-training approaches, VE-...

TEMA: Token Embeddings Mapping for Enriching Low-Resource Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zevallos, Rodolfo; Bel, Núria; Farrús, Mireia (Universitat Pompeu Fabra, Barcelona, Spain; Universitat de Barcelona, Barcelona, Spain)
The objective of the research we present is to remedy the problem of the low quality of language models for low-resource languages. We introduce an algorithm, the Token Embedding Mapping Algorithm (TEMA), that maps th...

Towards Online Continuous Sign Language Recognition and Translation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zuo, Ronglai; Wei, Fangyun; Mak, Brian (The Hong Kong University of Science and Technology, Hong Kong; Microsoft Research Asia, China)
Research on continuous sign language recognition (CSLR) is essential to bridge the communication gap between deaf and hearing individuals. Numerous previous studies have trained their models using the connectionist te...

Can We Edit Multimodal Large Language Models?
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Cheng, Siyuan; Tian, Bozhong; Liu, Qingbin; Chen, Xi; Wang, Yongheng; Chen, Huajun; Zhang, Ningyu (Zhejiang Univ, Hangzhou, Peoples R China; Zhejiang Univ Ant Grp Joint Lab Knowledge Graph, Hangzhou, Peoples R China; Donghai Lab, Zhoushan, Peoples R China; Tencent Platform & Content Grp, Hangzhou, Peoples R China; Zhejiang Lab, Hangzhou, Peoples R China)
In this paper, we focus on editing Multimodal Large Language Models (MLLMs). Compared to editing single-modal LLMs, multimodal model editing is more challenging, which demands a higher level of scrutiny and careful co...

Contextualized Sequence Likelihood: Enhanced Confidence Scores for Natural Language Generation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lin, Zhen; Trivedi, Shubhendu; Sun, Jimeng (University of Illinois Urbana-Champaign, United States; Carle's Illinois College of Medicine, University of Illinois Urbana-Champaign, United States)
The advent of large language models (LLMs) has dramatically advanced the state-of-the-art in numerous natural language generation tasks. For LLMs to be applied reliably, it is essential to have an accurate measure of...

OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Xukai; Liu, Ye; Zhang, Kai; Wang, Kehang; Liu, Qi; Chen, Enhong (State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, China)
Entity Linking (EL) is the process of associating ambiguous textual mentions to specific entities in a knowledge base. Traditional EL methods heavily rely on large datasets to enhance their performance, a dependency t...

CHAmbi: A New Benchmark on Chinese Ambiguity Challenges for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Qin; Cai, Sihan; Zhao, Jiaxu; Pechenizkiy, Mykola; Fang, Meng (College of Computer Science and Software Engineering, Shenzhen University, China; Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands; Department of Computer Science, University of Liverpool, United Kingdom)
Ambiguity is an inherent feature of language, whose management is crucial for effective communication and collaboration. This is particularly true for Chinese, a language with extensive lexical-morphemic ambiguity. De...

Zero-Resource Hallucination Prevention for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Luo, Junyu; Xiao, Cao; Ma, Fenglong (The Pennsylvania State University, United States; GE Healthcare, United States)
The prevalent use of large language models (LLMs) in various domains has drawn attention to the issue of "hallucination", which refers to instances where LLMs generate factually inaccurate or ungrounded info...