
Refine Search Results

Document Type

  • 281 conference papers
  • 102 journal articles

Holdings

  • 383 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 261 engineering
    • 192 computer science and technology...
    • 176 software engineering
    • 66 information and communication engineering
    • 20 mechanical engineering
    • 19 biological engineering
    • 17 control science and engineering
    • 14 electrical engineering
    • 14 electronic science and technology (...
    • 10 optical engineering
    • 9 chemical engineering and technology
    • 7 biomedical engineering (...
    • 3 instrument science and technology
    • 3 power engineering and engineering...
    • 2 materials science and engineering (...
    • 2 civil engineering
  • 163 science
    • 109 physics
    • 66 mathematics
    • 36 statistics (...
    • 23 biology
    • 20 systems science
    • 10 chemistry
    • 1 geophysics
  • 37 management
    • 27 library, information and archives...
    • 10 management science and engineering (...
    • 4 business administration
  • 6 law
    • 5 sociology
    • 1 law
  • 2 literature
    • 2 foreign languages and literatures
    • 1 Chinese language and literature
  • 1 economics
    • 1 applied economics
  • 1 agriculture
  • 1 medicine
  • 1 art

Topics

  • 66 speech recogniti...
  • 40 training
  • 38 hidden markov mo...
  • 29 neural machine t...
  • 23 machine translat...
  • 20 decoding
  • 19 handwriting reco...
  • 18 computer aided l...
  • 16 recurrent neural...
  • 15 feature extracti...
  • 15 transducers
  • 14 vocabulary
  • 12 databases
  • 12 error analysis
  • 11 speech
  • 10 pattern recognit...
  • 10 humans
  • 9 training data
  • 9 optimization
  • 9 context

Institutions

  • 52 human language t...
  • 40 human language t...
  • 38 apptek gmbh aach...
  • 32 human language t...
  • 31 human language t...
  • 23 apptek gmbh aach...
  • 21 human language t...
  • 16 human language t...
  • 13 human language t...
  • 12 pattern recognit...
  • 10 human language t...
  • 9 spoken language ...
  • 9 human language t...
  • 8 computer science...
  • 8 human language t...
  • 6 pattern recognit...
  • 6 human language t...
  • 6 human language t...
  • 6 human language t...
  • 5 a2ia sa

Authors

  • 212 ney hermann
  • 80 hermann ney
  • 61 schlüter ralf
  • 22 ralf schluter
  • 21 ralf schlüter
  • 20 wuebker joern
  • 18 casacuberta fran...
  • 18 zeyer albert
  • 18 zhou wei
  • 16 gao yingbo
  • 14 kim yunsu
  • 14 herold christian
  • 13 mansour saab
  • 13 zeineldeen moham...
  • 13 patrick doetsch
  • 13 peitz stephan
  • 13 huck matthias
  • 12 peris álvaro
  • 12 peter jan-thorst...
  • 12 michel wilfried

Language

  • 381 English
  • 1 Spanish
  • 1 Chinese
Search criteria: Institution = "Human Language Technology and Pattern Recognition"
383 records; showing 341-350
Sort by:
Investigation of large-margin softmax in neural language modeling
arXiv, 2020
Authors: Huo, Jingjing; Gao, Yingbo; Wang, Weiyue; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, Aachen 52074, Germany; AppTek GmbH, Aachen 52062, Germany
To encourage intra-class compactness and inter-class separability among trainable feature vectors, large-margin softmax methods are developed and widely applied in the face recognition community. The introduction of t...
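The abstract above describes large-margin softmax methods, which penalize the target-class score to force intra-class compactness. As a rough illustration of the general idea (not the paper's specific formulation), here is a minimal NumPy sketch of the additive-margin variant (AM-softmax); the function name and the scale and margin defaults are illustrative assumptions, not values from the entry.

```python
import numpy as np

def am_softmax_loss(features, weights, labels, scale=30.0, margin=0.35):
    """Additive-margin softmax loss (illustrative sketch).

    features: (batch, dim) trainable feature vectors
    weights:  (dim, num_classes) class weight vectors
    labels:   (batch,) integer class labels
    """
    # L2-normalize features (rows) and class weights (columns),
    # so the logits become cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                                   # (batch, num_classes)
    rows = np.arange(len(labels))
    logits = scale * cos
    # subtract the additive margin from the target-class cosine only
    logits[rows, labels] = scale * (cos[rows, labels] - margin)
    # numerically stable cross-entropy over the margin-adjusted logits
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
```

Because the margin only lowers the target-class score, the loss with a positive margin is strictly larger than plain softmax cross-entropy on the same inputs, which is what pushes same-class features closer together.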
Confidence scores for acoustic model adaptation
International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Authors: Christian Gollan; Michiel Bacchiani. Human Language Technology and Pattern Recognition, Computer Science Department 6, RWTH Aachen University, Germany; Google Inc., New York, NY, USA
This paper focuses on confidence scores for use in acoustic model adaptation. Frame-based confidence estimates are used in linear transform (CMLLR and MLLR) and MAP adaptation. We show that adaptation approaches with ...
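The entry above uses frame-based confidence estimates to decide which data contributes to CMLLR/MLLR and MAP adaptation. A common, simple confidence measure is the maximum per-frame posterior, with low-confidence frames excluded from the adaptation statistics; this sketch assumes that estimator and a hypothetical threshold, not necessarily the paper's exact method.

```python
import numpy as np

def frame_confidences(posteriors):
    """Maximum posterior per frame as a simple confidence score.
    posteriors: (num_frames, num_states), rows sum to 1."""
    return posteriors.max(axis=1)

def select_adaptation_frames(posteriors, threshold=0.9):
    """Indices of frames confident enough to contribute to
    adaptation statistics (e.g. CMLLR transform estimation)."""
    conf = frame_confidences(posteriors)
    return np.nonzero(conf >= threshold)[0]
```

Thresholding unsupervised adaptation data this way trades the amount of adaptation material against the reliability of its automatically generated labels.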
On sampling-based training criteria for neural language modeling
arXiv, 2021
Authors: Gao, Yingbo; Thulke, David; Gerstenberger, Alexander; Tran, Khoa Viet; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, Aachen 52074, Germany; AppTek GmbH, Aachen 52062, Germany
As the vocabulary size of modern word-based language models becomes ever larger, many sampling-based training criteria are proposed and investigated. The essence of these sampling methods is that the softmax-related t...
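The abstract above concerns sampling-based training criteria that avoid normalizing over the full vocabulary. A minimal sketch of one such criterion, sampled softmax with a uniform proposal, is shown below; it is a simplification (real implementations typically use a unigram-based proposal and handle collisions between negatives and the target), and all names here are illustrative.

```python
import numpy as np

def sampled_softmax_loss(hidden, out_emb, labels, num_samples, rng):
    """Sampled-softmax sketch: normalize over the target word plus a
    small set of uniformly drawn negatives instead of the full
    vocabulary. hidden: (batch, dim), out_emb: (vocab, dim)."""
    vocab_size = out_emb.shape[0]
    # uniform proposal over the vocabulary (shared across the batch)
    negatives = rng.choice(vocab_size, size=num_samples, replace=False)
    true_logit = np.einsum('bd,bd->b', hidden, out_emb[labels])  # (batch,)
    neg_logits = hidden @ out_emb[negatives].T       # (batch, num_samples)
    # correct negative logits by -log q(k) for the uniform proposal
    neg_logits = neg_logits - np.log(num_samples / vocab_size)
    # cross-entropy over the reduced class set {target} plus negatives
    all_logits = np.concatenate([true_logit[:, None], neg_logits], axis=1)
    all_logits -= all_logits.max(axis=1, keepdims=True)
    log_probs = all_logits - np.log(np.exp(all_logits).sum(axis=1,
                                                           keepdims=True))
    return -log_probs[:, 0].mean()
```

The saving is that only num_samples + 1 output rows are touched per step instead of all vocab_size rows, which is where sampling-based criteria gain their speed at large vocabularies.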
On architectures and training for raw waveform feature extraction in ASR
arXiv, 2021
Authors: Vieting, Peter; Lüscher, Christoph; Michel, Wilfried; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, Aachen 52074, Germany; AppTek GmbH, Aachen 52062, Germany
With the success of neural network based modeling in automatic speech recognition (ASR), many studies investigated acoustic modeling and learning of feature extractors directly based on the raw waveform. Recently, one...
Open-Lexicon Language Modeling Combining Word and Character Levels
International Workshop on Frontiers in Handwriting Recognition
Authors: Michal Kozielski; Martin Matysiak; Patrick Doetsch; Ralf Schlüter; Hermann Ney. Human Language Technology and Pattern Recognition Group, RWTH Aachen University, Aachen, Germany; Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Nordrhein-Westfalen, DE
In this paper we investigate different n-gram language models that are defined over an open lexicon. We introduce a character-level language model and combine it with a standard word-level language model in a back off...
Two-way neural machine translation: A proof of concept for bidirectional translation modeling using a two-dimensional grid
arXiv, 2020
Authors: Bahar, Parnia; Brix, Christopher; Ney, Hermann. Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, Aachen 52074, Germany; AppTek GmbH, Aachen 52062, Germany
Neural translation models have proven to be effective in capturing sufficient information from a source sentence and generating a high-quality target sentence. However, it is not easy to get the best effect for bidire...
Self-Normalized Importance Sampling for Neural Language Modeling
arXiv, 2021
Authors: Yang, Zijian; Gao, Yingbo; Gerstenberger, Alexander; Jiang, Jintao; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, Aachen 52074, Germany; AppTek GmbH, Aachen 52062, Germany
To mitigate the problem of having to traverse over the full vocabulary in the softmax normalization of a neural language model, sampling-based training criteria are proposed and investigated in the context of large vo...
Language Modeling with Deep Transformers
arXiv, 2019
Authors: Irie, Kazuki; Zeyer, Albert; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, Aachen 52074, Germany; AppTek GmbH, Aachen 52062, Germany
We explore deep autoregressive Transformer models in language modeling for speech recognition. We focus on two aspects. First, we revisit Transformer model configurations specifically for language modeling. We show th...
Tight integrated end-to-end training for cascaded speech translation
arXiv, 2020
Authors: Bahar, Parnia; Bieschke, Tobias; Schlüter, Ralf; Ney, Hermann. Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, Aachen 52074, Germany; AppTek GmbH, Aachen 52062, Germany
A cascaded speech translation model relies on discrete and non-differentiable transcription, which provides a supervision signal from the source side and helps the transformation between source speech and target text....
Robust Knowledge Distillation from RNN-T Models with Noisy Training Labels Using Full-Sum Loss
arXiv, 2023
Authors: Zeineldeen, Mohammad; Audhkhasi, Kartik; Baskar, Murali Karthick; Ramabhadran, Bhuvana. Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, Aachen 52074, Germany; Google LLC, New York, United States
This work studies knowledge distillation (KD) and addresses its constraints for recurrent neural network transducer (RNNT) models. In hard distillation, a teacher model transcribes large amounts of unlabelled speech t...