
Refine Search Results

Document Type

  • 146 Conference papers
  • 68 Journal articles

Holdings

  • 214 Electronic documents
  • 0 Print holdings

Date Distribution

Subject Classification

  • 151 Engineering
    • 111 Computer Science and Technology...
    • 98 Software Engineering
    • 44 Information and Communication Engineering
    • 13 Control Science and Engineering
    • 12 Electrical Engineering
    • 11 Electronic Science and Technology (...
    • 8 Mechanical Engineering
    • 6 Biological Engineering
    • 5 Chemical Engineering and Technology
    • 5 Biomedical Engineering (...
    • 4 Optical Engineering
    • 2 Power Engineering and Engineering Therm...
  • 101 Natural Sciences
    • 75 Physics
    • 38 Mathematics
    • 19 Statistics (...
    • 12 Systems Science
    • 7 Biology
    • 5 Chemistry
    • 1 Geophysics
  • 17 Management
    • 11 Library, Information and Archives Man...
    • 4 Management Science and Engineering (...
    • 3 Business Administration
    • 2 Public Administration
  • 4 Medicine
    • 4 Clinical Medicine
    • 2 Basic Medicine (...
    • 2 Public Health and Preventive Med...
  • 3 Law
    • 2 Sociology
    • 1 Law
  • 1 Economics
    • 1 Applied Economics
  • 1 Education
    • 1 Physical Education
  • 1 Agronomy

Topics

  • 51 篇 speech recogniti...
  • 15 篇 hidden markov mo...
  • 15 篇 training
  • 13 篇 neural machine t...
  • 12 篇 machine translat...
  • 12 篇 transducers
  • 11 篇 computer aided l...
  • 11 篇 decoding
  • 9 篇 recurrent neural...
  • 8 篇 speech
  • 8 篇 feature extracti...
  • 8 篇 neural network
  • 8 篇 error analysis
  • 7 篇 modelling langua...
  • 6 篇 vocabulary
  • 6 篇 optimization
  • 6 篇 handwriting reco...
  • 6 篇 humans
  • 6 篇 automatic speech...
  • 5 篇 hierarchical sys...

Institutions

  • 40 篇 human language t...
  • 37 篇 apptek gmbh aach...
  • 32 篇 human language t...
  • 20 篇 human language t...
  • 10 篇 human language t...
  • 9 篇 human language t...
  • 8 篇 computer science...
  • 8 篇 human language t...
  • 7 篇 spoken language ...
  • 7 篇 apptek gmbh aach...
  • 6 篇 human language t...
  • 6 篇 human language t...
  • 6 篇 human language t...
  • 5 篇 human language t...
  • 4 篇 human language t...
  • 3 篇 human language t...
  • 3 篇 rwth aachen univ...
  • 3 篇 limsi cnrs spoke...
  • 3 篇 human language t...
  • 2 篇 computer vision ...

Authors

  • 141 篇 ney hermann
  • 55 篇 schlüter ralf
  • 36 篇 hermann ney
  • 16 篇 zeyer albert
  • 16 篇 zhou wei
  • 14 篇 gao yingbo
  • 14 篇 ralf schluter
  • 12 篇 ralf schlüter
  • 12 篇 mansour saab
  • 12 篇 zeineldeen moham...
  • 12 篇 michel wilfried
  • 12 篇 zens richard
  • 11 篇 herold christian
  • 10 篇 bahar parnia
  • 10 篇 peitz stephan
  • 9 篇 peter jan-thorst...
  • 9 篇 schluter ralf
  • 9 篇 freitag markus
  • 9 篇 wang weiyue
  • 8 篇 wuebker joern

Language

  • 214 English
Search criteria: "Institution = Human Language Technology and Pattern Recognition Group Computer Science"
214 records; showing 171-180
LATTICE-FREE SEQUENCE DISCRIMINATIVE TRAINING FOR PHONEME-BASED NEURAL TRANSDUCERS
arXiv, 2022
Authors: Yang, Zijian; Zhou, Wei; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52074 Aachen, Germany; AppTek GmbH, 52062 Aachen, Germany
Recently, RNN-Transducers have achieved remarkable results on various automatic speech recognition tasks. However, lattice-free sequence discriminative training methods, which obtain superior performance in hybrid mod...
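For orientation, the generic maximum mutual information (MMI) style sequence criterion behind this line of work is sketched below in LaTeX; this is the standard textbook form with an assumed acoustic scale kappa, not necessarily the exact objective of this paper. "Lattice-free" refers to computing the denominator sum over a compact phoneme-level search space on the fly instead of over decoded lattices.

    % Generic MMI-style sequence discriminative criterion (illustrative form).
    % X_r: acoustics of utterance r, W_r: reference transcript, kappa: acoustic scale.
    \[
      F_{\mathrm{MMI}}(\theta) \;=\; \sum_{r} \log
        \frac{p_{\theta}(X_r \mid W_r)^{\kappa}\, p(W_r)}
             {\sum_{W} p_{\theta}(X_r \mid W)^{\kappa}\, p(W)}
    \]
    % Lattice-free variants evaluate the denominator over a full phoneme-level
    % graph rather than over lattices produced by a first decoding pass.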
FULL-SUM DECODING FOR HYBRID HMM BASED SPEECH RECOGNITION USING LSTM LANGUAGE MODEL
arXiv, 2020
Authors: Zhou, Wei; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52074 Aachen, Germany; AppTek GmbH, 52062 Aachen, Germany
In hybrid HMM based speech recognition, LSTM language models have been widely applied and achieved large improvements. The theoretical capability of modeling any unlimited context suggests that no recombination should...
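As background to the recombination question raised in the snippet, the two decision rules being contrasted can be written as follows (illustrative notation, not taken from the paper): Viterbi decoding keeps only the best HMM alignment path per word sequence, while full-sum decoding sums over all alignment paths s.

    % Viterbi (max) approximation vs. full-sum over HMM alignment sequences s.
    \[
      \hat{W}_{\mathrm{Viterbi}} = \arg\max_{W}\; p(W)\, \max_{s} p(X, s \mid W),
      \qquad
      \hat{W}_{\mathrm{sum}} = \arg\max_{W}\; p(W)\, \sum_{s} p(X, s \mid W)
    \]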
Early stage LM integration using local and global log-linear combination
arXiv, 2020
Authors: Michel, Wilfried; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52056 Aachen, Germany; AppTek GmbH, 52062 Aachen, Germany
Sequence-to-sequence models with an implicit alignment mechanism (e.g. attention) are closing the performance gap towards traditional hybrid hidden Markov models (HMM) for the task of automatic speech recognition. One...
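For context, a log-linear combination of model scores at the label level has the generic form below; lambda_1 and lambda_2 are scaling factors. This is only the standard form: the local and global variants studied in the paper differ in where the combination and normalization are applied.

    % Generic label-level log-linear combination of an ASR model and an LM.
    % x: input, w: next label, h: label history, lambda_1/lambda_2: scales.
    \[
      \hat{w} = \arg\max_{w}\;\bigl[\,
        \lambda_{1}\,\log p_{\mathrm{ASR}}(w \mid h, x)
        + \lambda_{2}\,\log p_{\mathrm{LM}}(w \mid h) \,\bigr]
    \]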
Investigating methods to improve language model integration for attention-based encoder-decoder ASR models
arXiv, 2021
Authors: Zeineldeen, Mohammad; Glushko, Aleksandr; Michel, Wilfried; Zeyer, Albert; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52074 Aachen, Germany; AppTek GmbH, 52062 Aachen, Germany
Attention-based encoder-decoder (AED) models learn an implicit internal language model (ILM) from the training transcriptions. The integration with an external LM trained on much more unpaired text usually leads to be...
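A widely used decoding rule in this family fuses an external LM and subtracts an estimated internal LM; a generic form is given below. The scales and the way the ILM is estimated are exactly what such studies vary, so treat this as an illustration rather than the paper's specific method.

    % External LM fusion with internal LM (ILM) subtraction (generic form).
    \[
      \hat{W} = \arg\max_{W}\;\bigl[\,
        \log p_{\mathrm{AED}}(W \mid X)
        + \lambda_{\mathrm{ext}}\,\log p_{\mathrm{extLM}}(W)
        - \lambda_{\mathrm{ILM}}\,\log p_{\mathrm{ILM}}(W) \,\bigr]
    \]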
Automatic learning of subword dependent model scales
arXiv, 2021
Authors: Meyer, Felix; Michel, Wilfried; Zeineldeen, Mohammad; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52074 Aachen, Germany; AppTek GmbH, 52062 Aachen, Germany
To improve the performance of state-of-the-art automatic speech recognition systems it is common practice to include external knowledge sources such as language models or prior corrections. This is usually done via lo...
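To make the idea of learned, subword-dependent scales concrete, here is a minimal hypothetical sketch in PyTorch: instead of one grid-searched global LM scale, a trainable scale per subword is optimized by gradient descent on combined log-scores. All names, shapes, and the exact combination are assumptions for illustration, not the paper's recipe.

    # Hypothetical sketch: learn one log-linear LM scale per subword by gradient
    # descent rather than tuning a single global scale on a grid.
    import torch

    VOCAB_SIZE = 1000  # assumed subword vocabulary size

    class SubwordScales(torch.nn.Module):
        def __init__(self, vocab_size: int):
            super().__init__()
            # one trainable scale per subword for the external LM score
            self.lm_scale = torch.nn.Parameter(torch.ones(vocab_size))

        def forward(self, asr_logprob, lm_logprob, targets):
            # asr_logprob, lm_logprob: (batch, time, vocab) log-probabilities
            # targets: (batch, time) reference subword ids
            combined = asr_logprob + self.lm_scale.view(1, 1, -1) * lm_logprob
            logprob = torch.log_softmax(combined, dim=-1)   # renormalize
            nll = -logprob.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
            return nll.mean()

    # Toy usage with random scores standing in for real model outputs.
    model = SubwordScales(VOCAB_SIZE)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    asr = torch.randn(2, 5, VOCAB_SIZE).log_softmax(dim=-1)
    lm = torch.randn(2, 5, VOCAB_SIZE).log_softmax(dim=-1)
    tgt = torch.randint(0, VOCAB_SIZE, (2, 5))
    opt.zero_grad()
    loss = model(asr, lm, tgt)
    loss.backward()
    opt.step()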
Improving the Training Recipe for a Robust Conformer-based Hybrid Model
arXiv, 2022
Authors: Zeineldeen, Mohammad; Xu, Jingjing; Lüscher, Christoph; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52074 Aachen, Germany; AppTek GmbH, 52062 Aachen, Germany
Speaker adaptation is important to build robust automatic speech recognition (ASR) systems. In this work, we investigate various methods for speaker adaptive training (SAT) based on feature-space approaches for a conf...
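As a generic illustration of a feature-space speaker adaptation approach (not necessarily one of the methods compared in the paper), a fixed per-speaker embedding such as an i-vector can be appended to every acoustic frame before the network; the dimensions below are assumptions.

    # Hypothetical illustration of one feature-space SAT idea: append a fixed
    # per-speaker embedding (e.g., an i-vector) to every acoustic frame.
    import numpy as np

    def add_speaker_embedding(features: np.ndarray, spk_emb: np.ndarray) -> np.ndarray:
        """features: (T, F) frame-level features; spk_emb: (D,) speaker embedding."""
        tiled = np.tile(spk_emb, (features.shape[0], 1))    # (T, D)
        return np.concatenate([features, tiled], axis=1)    # (T, F + D)

    frames = np.random.randn(100, 80)    # e.g., 80-dim log-mel features (assumed)
    ivector = np.random.randn(50)        # assumed 50-dim speaker embedding
    print(add_speaker_embedding(frames, ivector).shape)     # (100, 130)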
Acoustic data-driven subword modeling for end-to-end speech recognition
arXiv, 2021
Authors: Zhou, Wei; Zeineldeen, Mohammad; Zheng, Zuoyun; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52074 Aachen, Germany; AppTek GmbH, 52062 Aachen, Germany
Subword units are commonly used for end-to-end automatic speech recognition (ASR), while a fully acoustic-oriented subword modeling approach is somewhat missing. We propose an acoustic data-driven subword modeling (AD...
MONOTONIC SEGMENTAL ATTENTION FOR AUTOMATIC SPEECH RECOGNITION
arXiv, 2022
Authors: Zeyer, Albert; Schmitt, Robin; Zhou, Wei; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52062 Aachen, Germany; AppTek GmbH, 52062 Aachen, Germany
We introduce a novel segmental-attention model for automatic speech recognition. We restrict the decoder attention to segments to avoid quadratic runtime of global attention, better generalize to long sequences, and e...
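To illustrate only the basic mechanism (the actual model additionally enforces monotonic segment boundaries), restricting attention to a segment means computing attention weights over a bounded slice of the encoder output instead of over all frames; the function below is a hypothetical numpy sketch.

    # Hypothetical sketch: attention restricted to one encoder segment, so the
    # per-step cost depends on the segment length, not the full input length.
    import numpy as np

    def segmental_attention(query, keys, values, seg_start, seg_end):
        """query: (d,); keys, values: (T, d); attend only to frames [seg_start, seg_end)."""
        k = keys[seg_start:seg_end]                      # (S, d) segment slice
        v = values[seg_start:seg_end]
        scores = k @ query / np.sqrt(query.shape[0])     # (S,) scaled dot products
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                         # softmax over the segment
        return weights @ v                               # (d,) context vector

    enc = np.random.randn(200, 64)                       # assumed encoder output
    ctx = segmental_attention(np.random.randn(64), enc, enc, seg_start=40, seg_end=60)
    print(ctx.shape)                                     # (64,)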
Librispeech transducer model with internal language model prior correction
arXiv, 2021
Authors: Zeyer, Albert; Merboldt, André; Michel, Wilfried; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52062 Aachen, Germany; AppTek GmbH, 52062 Aachen, Germany
We present our transducer model on Librispeech. We study variants to include an external language model (LM) with shallow fusion and subtract an estimated internal LM. This is justified by a Bayesian interpretation wh...
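The per-label score combination implied by shallow fusion with an internal LM prior correction can be sketched as below; the scale values and function names are placeholders, not numbers from the paper.

    # Hypothetical per-label beam-search score with shallow fusion and internal
    # LM (prior) subtraction; lam_ext and lam_ilm are placeholder scales.
    def combined_score(logp_transducer: float, logp_ext_lm: float, logp_ilm: float,
                       lam_ext: float = 0.6, lam_ilm: float = 0.4) -> float:
        return logp_transducer + lam_ext * logp_ext_lm - lam_ilm * logp_ilm

    # Toy example: pick the best of three candidate labels for one beam expansion.
    cands = [(-1.2, -2.0, -1.5), (-0.9, -3.1, -0.8), (-1.6, -1.2, -2.2)]
    print(max(range(len(cands)), key=lambda i: combined_score(*cands[i])))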
EFFICIENT SEQUENCE TRAINING OF ATTENTION MODELS USING APPROXIMATIVE RECOMBINATION
arXiv, 2021
Authors: Wynands, Nils-Philipp; Michel, Wilfried; Rosendahl, Jan; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52062 Aachen, Germany; AppTek GmbH, 52062 Aachen, Germany
Sequence discriminative training is a great tool to improve the performance of an automatic speech recognition system. It does, however, necessitate a sum over all possible word sequences, which is intractable to comp...
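To make the recombination idea concrete: hypotheses in the beam that are treated as equivalent (here, hypothetically, sharing their last few labels) are merged and their probabilities are summed in log space, which approximates the intractable sum over all word sequences. The equivalence criterion and bookkeeping below are illustrative assumptions, not the paper's method.

    # Hypothetical sketch of approximative recombination during beam search:
    # merge hypotheses with the same recent label context and log-add their scores.
    import math

    def log_add(a: float, b: float) -> float:
        """Numerically stable log(exp(a) + exp(b))."""
        if a == float("-inf"):
            return b
        m = max(a, b)
        return m + math.log(math.exp(a - m) + math.exp(b - m))

    def recombine(hyps, context_size=3):
        """hyps: list of (labels_tuple, log_prob); merge by last `context_size` labels."""
        groups = {}  # key -> [representative_labels, best_single_logp, total_logp]
        for labels, logp in hyps:
            key = labels[-context_size:]
            if key not in groups:
                groups[key] = [labels, logp, logp]
            else:
                entry = groups[key]
                if logp > entry[1]:                  # keep best-scoring representative
                    entry[0], entry[1] = labels, logp
                entry[2] = log_add(entry[2], logp)
        return [(rep, total) for rep, _, total in groups.values()]

    beam = [((1, 2, 3, 4), -2.0), ((9, 2, 3, 4), -2.3), ((1, 2, 5, 6), -1.7)]
    print(recombine(beam))  # the first two hypotheses share (2, 3, 4) and are merged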