
Refine Results

Document Type

  • 260 conference papers
  • 94 journal articles

Collection

  • 354 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 238 Engineering
    • 174 Computer Science and Technology...
    • 160 Software Engineering
    • 63 Information and Communication Engineering
    • 22 Mechanical Engineering
    • 18 Control Science and Engineering
    • 12 Electronic Science and Technology (...
    • 11 Electrical Engineering
    • 9 Chemical Engineering and Technology
    • 8 Optical Engineering
    • 8 Biological Engineering
    • 6 Biomedical Engineering (...
    • 3 Instrument Science and Technology
    • 2 Power Engineering and Engineering Ther...
    • 1 Materials Science and Engineering (...
    • 1 Agricultural Engineering
  • 151 Natural Sciences
    • 107 Physics
    • 60 Mathematics
    • 32 Statistics (...
    • 19 Systems Science
    • 10 Chemistry
    • 10 Biology
    • 1 Geophysics
  • 27 Management
    • 22 Library, Information and Archives Manag...
    • 4 Management Science and Engineering (...
    • 3 Business Administration
  • 2 Law
    • 2 Sociology
  • 2 Literature
    • 2 Foreign Languages and Literatures
    • 1 Chinese Language and Literature
  • 1 Economics
    • 1 Applied Economics
  • 1 Agriculture
    • 1 Crop Science
  • 1 Medicine
  • 1 Art

Topics

  • 68 speech recogniti...
  • 41 training
  • 38 hidden markov mo...
  • 22 neural machine t...
  • 20 machine translat...
  • 19 decoding
  • 18 computer aided l...
  • 18 handwriting reco...
  • 15 feature extracti...
  • 15 transducers
  • 15 recurrent neural...
  • 14 vocabulary
  • 13 error analysis
  • 12 databases
  • 10 modeling languag...
  • 10 speech
  • 10 humans
  • 9 training data
  • 9 signal processin...
  • 9 optimization

Institutions

  • 52 human language t...
  • 40 apptek gmbh aach...
  • 40 human language t...
  • 32 human language t...
  • 31 human language t...
  • 26 apptek gmbh aach...
  • 21 human language t...
  • 16 human language t...
  • 14 apptek gmbh
  • 13 human language t...
  • 10 human language t...
  • 10 machine learning...
  • 9 spoken language ...
  • 9 human language t...
  • 8 computer science...
  • 8 human language t...
  • 6 human language t...
  • 6 human language t...
  • 6 human language t...
  • 5 a2ia sa

Authors

  • 220 ney hermann
  • 85 hermann ney
  • 70 schlüter ralf
  • 26 ralf schlüter
  • 21 ralf schluter
  • 20 wuebker joern
  • 19 zhou wei
  • 18 gao yingbo
  • 18 zeyer albert
  • 14 kim yunsu
  • 14 herold christian
  • 14 thulke david
  • 13 mansour saab
  • 13 zeineldeen moham...
  • 13 patrick doetsch
  • 13 peitz stephan
  • 13 huck matthias
  • 12 peter jan-thorst...
  • 12 yang zijian
  • 12 michel wilfried

Language

  • 349 English
  • 4 Other
  • 1 Chinese

Search criteria: Institution = "Human Language Technology and Pattern Recognition Group RWTH Aachen University Aachen"
354 records; showing 1-10
Document-Level Language Models for Machine Translation
8th Conference on Machine Translation, WMT 2023
Authors: Petrick, Frithjof; Herold, Christian; Petrushkov, Pavel; Khadivi, Shahram; Ney, Hermann. Affiliations: eBay Inc., Aachen, Germany; Human Language Technology and Pattern Recognition Group, RWTH Aachen University, Aachen, Germany
Despite the known limitations, most machine translation systems today still operate at the sentence level. One reason for this is that most parallel training data is only sentence-level aligned, without document-leve...
Comparison of Different Neural Network Architectures for Spoken Language Identification
15th ITG Conference on Speech Communication
Authors: Bazazo, Tala; Zeineldeen, Mohammad; Plahl, Christian; Schlüter, Ralf; Ney, Hermann. Affiliations: Human Language Technology and Pattern Recognition, RWTH Aachen University, Germany; eBay, Aachen, Germany
This paper compares different neural-network-based architectures on the spoken language identification task. To the best of our knowledge, such a comparison of different models on the same dataset and the same set of language...
Improving Long Context Document-Level Machine Translation
4th Workshop on Computational Approaches to Discourse, CODI 2023
Authors: Herold, Christian; Ney, Hermann. Affiliation: Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, Aachen D-52056, Germany
Document-level context for neural machine translation (NMT) is crucial to improving translation consistency and cohesion, the translation of ambiguous inputs, and several other linguistic phenomena. Many work...
Enhancing and Adversarial: Improve ASR with Speaker Labels
48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Authors: Zhou, Wei; Wu, Haotian; Xu, Jingjing; Zeineldeen, Mohammad; Lüscher, Christoph; Schlüter, Ralf; Ney, Hermann. Affiliations: RWTH Aachen University, Human Language Technology and Pattern Recognition, Computer Science Department, Aachen 52074, Germany; AppTek GmbH, Aachen 52062, Germany
ASR can be improved by multi-task learning (MTL) with domain enhancing or domain adversarial training, which are two opposite objectives aiming to increase/decrease domain variance towards domain-aware/agnostic ...
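The enhancing-vs-adversarial distinction in the abstract above comes down to the sign with which the domain classifier's gradient reaches the shared encoder: added, the encoder becomes domain-aware; subtracted (gradient reversal), it becomes domain-agnostic. A minimal numeric sketch, where the function name and the toy gradients are illustrative assumptions rather than the paper's actual setup:

```python
def shared_update(grad_asr, grad_domain, lam, adversarial):
    """Combine task gradients for the shared encoder parameters.

    Domain-enhancing MTL adds the domain-classifier gradient; adversarial
    training flips its sign (gradient reversal) so the encoder is pushed
    away from encoding domain information. `lam` weights the domain task.
    """
    sign = -1.0 if adversarial else 1.0
    return [ga + sign * lam * gd for ga, gd in zip(grad_asr, grad_domain)]


# Toy gradients w.r.t. two shared parameters (illustrative values).
g_asr = [0.5, -0.2]   # from the ASR loss
g_dom = [0.1, 0.4]    # from the domain-classifier loss

print(shared_update(g_asr, g_dom, lam=1.0, adversarial=False))  # enhancing
print(shared_update(g_asr, g_dom, lam=1.0, adversarial=True))   # adversarial
```

The two calls differ only in the sign applied to `g_dom`, which is the entire mechanical difference between the two MTL regimes described in the abstract.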
Lattice-Free Sequence Discriminative Training for Phoneme-Based Neural Transducers
48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Authors: Yang, Zijian; Zhou, Wei; Schlüter, Ralf; Ney, Hermann. Affiliations: RWTH Aachen University, Human Language Technology and Pattern Recognition, Computer Science Department, Aachen 52074, Germany; AppTek GmbH, Aachen 52062, Germany
Recently, RNN-Transducers have achieved remarkable results on various automatic speech recognition tasks. However, lattice-free sequence discriminative training methods, which obtain superior performance in hybrid mod...
Prompting and Fine-Tuning of Small LLMs for Length-Controllable Telephone Call Summarization
2nd International Conference on Foundation and Large Language Models, FLLM 2024
Authors: Thulke, David; Gao, Yingbo; Jalota, Rricha; Dugast, Christian; Ney, Hermann. Affiliations: AppTek GmbH, Aachen, Germany; RWTH Aachen University, Machine Learning and Human Language Technology Group, Germany
This paper explores the rapid development of a telephone call summarization system utilizing large language models (LLMs). Our approach involves initial experiments with prompting existing LLMs to generate summaries o...
Robust Knowledge Distillation from RNN-T Models with Noisy Training Labels Using Full-Sum Loss
48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Authors: Zeineldeen, Mohammad; Audhkhasi, Kartik; Baskar, Murali Karthick; Ramabhadran, Bhuvana. Affiliations: RWTH Aachen University, Human Language Technology and Pattern Recognition, Computer Science Department, Aachen 52074, Germany; Google LLC, New York, United States
This work studies knowledge distillation (KD) and addresses its constraints for recurrent neural network transducer (RNN-T) models. In hard distillation, a teacher model transcribes large amounts of unlabelled speech ...
Revisiting Checkpoint Averaging for Neural Machine Translation
2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, AACL-IJCNLP 2022
Authors: Gao, Yingbo; Herold, Christian; Yang, Zijian; Ney, Hermann. Affiliation: Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, Aachen D-52056, Germany
Checkpoint averaging is a simple and effective method to boost the performance of converged neural machine translation models. The calculation is cheap to perform, and the fact that the translation improvement almost come...
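Checkpoint averaging itself is easy to state: take the last few saved parameter snapshots and average them element-wise. A minimal sketch, with plain dicts of floats standing in for model tensors; the checkpoint format and parameter names are illustrative assumptions, not the paper's actual setup:

```python
def average_checkpoints(checkpoints):
    """Element-wise mean of a list of parameter dicts with identical keys."""
    if not checkpoints:
        raise ValueError("need at least one checkpoint")
    return {
        name: sum(ckpt[name] for ckpt in checkpoints) / len(checkpoints)
        for name in checkpoints[0]
    }


# Three toy snapshots of a two-parameter model, e.g. the last three
# checkpoints saved during training.
ckpts = [
    {"w": 0.9, "b": 0.3},
    {"w": 1.1, "b": 0.1},
    {"w": 1.0, "b": 0.2},
]
print(average_checkpoints(ckpts))  # element-wise mean of the snapshots
```

In a real framework the values would be tensors and the averaged dict would be loaded back into the model, but the averaging step is exactly this mean.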
Does Joint Training Really Help Cascaded Speech Translation?
2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
Authors: Tran, Viet Anh Khoa; Thulke, David; Gao, Yingbo; Herold, Christian; Ney, Hermann. Affiliation: Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, Aachen D-52056, Germany
Currently, in speech translation, the straightforward approach of cascading a recognition system with a translation system delivers state-of-the-art results. However, fundamental challenges such as error propagation ...
Right Label Context in End-to-End Training of Time-Synchronous ASR Models
2025 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2025
Authors: Raissi, Tina; Schlüter, Ralf; Ney, Hermann. Affiliations: Machine Learning and Human Language Technology Group, RWTH Aachen University, Germany; AppTek GmbH, Germany
Current time-synchronous sequence-to-sequence automatic speech recognition (ASR) models are trained using a sequence-level cross-entropy that sums over all alignments. Due to the discriminative formulation, incorpora...
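The "sums over all alignments" in the abstract above is computed with a standard forward recursion. A minimal sketch for a blank-free, time-synchronous topology where each frame emits one label, labels appear in order, and every label covers at least one frame; this generic setup is an illustrative assumption, not the paper's actual model:

```python
def full_sum(emit):
    """Sum of path probabilities over all monotonic alignments of
    S labels to T frames. emit[t][s] = probability of label s at frame t.
    """
    T, S = len(emit), len(emit[0])
    # alpha[t][s]: total probability of all partial alignments that
    # have emitted labels 0..s by frame t.
    alpha = [[0.0] * S for _ in range(T)]
    alpha[0][0] = emit[0][0]
    for t in range(1, T):
        for s in range(S):
            stay = alpha[t - 1][s]                        # repeat current label
            advance = alpha[t - 1][s - 1] if s else 0.0   # move to next label
            alpha[t][s] = emit[t][s] * (stay + advance)
    return alpha[-1][-1]


# Two labels over three frames with uniform 0.5 emissions: the two valid
# alignments (0,0,1) and (0,1,1) each have probability 0.125.
print(full_sum([[0.5, 0.5]] * 3))  # → 0.25
```

Training would take the log of this quantity as the sequence-level objective; real systems run the same recursion in log space over per-frame network outputs.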