Refine Search Results

Document Type

  • 35 journal articles
  • 25 conference papers

Holdings

  • 60 electronic documents
  • 0 print holdings

Subject Classification

  • 38 Engineering
    • 28 Computer Science and Technology...
    • 25 Software Engineering
    • 9 Information and Communication Engineering
    • 6 Electrical Engineering
    • 6 Biomedical Engineering (may be awa...
    • 4 Electronic Science and Technology (ma...
    • 3 Optical Engineering
    • 2 Mechanical Engineering
    • 2 Chemical Engineering and Technology
    • 1 Control Science and Engineering
    • 1 Biological Engineering
    • 1 Safety Science and Engineering
  • 32 Science
    • 26 Physics
    • 14 Mathematics
    • 10 Statistics (may be awarded in Scien...
    • 2 Chemistry
    • 1 Biology
    • 1 History of Science and Technology (by disci...
  • 10 Management
    • 7 Library, Information and Archives Manag...
    • 3 Management Science and Engineering (ma...
    • 1 Business Administration
    • 1 Public Administration
  • 3 Law
    • 3 Sociology
  • 3 Medicine
    • 3 Clinical Medicine
    • 2 Basic Medicine (may be awarded in Medi...
    • 2 Public Health and Preventive Medi...
    • 1 Pharmacy (may be awarded in Medicine, Sci...
  • 2 Education
    • 2 Psychology (may be awarded in Educatio...
  • 1 Philosophy
    • 1 Philosophy
  • 1 History
    • 1 World History
  • 1 Art

Topics

  • 11 speech recogniti...
  • 5 hidden markov mo...
  • 5 data models
  • 4 speech processin...
  • 4 training
  • 3 training data
  • 3 signal processin...
  • 2 conferences
  • 2 modeling languag...
  • 2 zero-shot learni...
  • 2 telephone sets
  • 2 bayes methods
  • 2 machine learning
  • 2 transducers
  • 1 reliability
  • 1 reproducibility
  • 1 function evaluat...
  • 1 clustering
  • 1 reverberation
  • 1 deep learning

Institutions

  • 18 apptek gmbh
  • 12 apptek gmbh aach...
  • 10 machine learning...
  • 5 apptek gmbh aach...
  • 4 machine learning...
  • 4 machine learning...
  • 4 language technol...
  • 3 paderborn univer...
  • 2 department of ma...
  • 2 computer science...
  • 2 department of qu...
  • 2 department of in...
  • 2 machine learning...
  • 2 machine learning...
  • 2 diagnostic image...
  • 2 machine learning...
  • 2 rwth aachen univ...
  • 2 machine learning...
  • 1 kenvak research ...
  • 1 department of ra...

Authors

  • 21 schlüter ralf
  • 19 ney hermann
  • 9 ralf schlüter
  • 8 raissi tina
  • 8 yang zijian
  • 6 hermann ney
  • 6 vieting peter
  • 6 lüscher christop...
  • 4 vulić ivan
  • 4 berger simon
  • 4 zijian yang
  • 4 xu jingjing
  • 4 zeineldeen moham...
  • 4 wan xingchen
  • 4 zhou han
  • 4 korhonen anna
  • 4 zhou wei
  • 3 schluter ralf
  • 3 beck eugen
  • 3 le-duc khai

Language

  • 55 English
  • 5 Other
Search criteria: Institution = "Machine Learning and Human Language Technology Group"
60 records in total; results 31-40 are shown below.
On the Relevance of Phoneme Duration Variability of Synthesized Training Data for Automatic Speech Recognition
arXiv, 2023
Authors: Rossenbach, Nick; Hilmes, Benedikt; Schlüter, Ralf. Machine Learning and Human Language Technology, Computer Science Department, RWTH Aachen University, Germany; AppTek GmbH, Germany
Synthetic data generated by text-to-speech (TTS) systems can be used to improve automatic speech recognition (ASR) systems in low-resource or domain mismatch tasks. It has been shown that TTS-generated outputs still d...
Survival of the Most Influential Prompts: Efficient Black-Box Prompt Search via Clustering and Pruning
arXiv, 2023
Authors: Zhou, Han; Wan, Xingchen; Vulić, Ivan; Korhonen, Anna. Language Technology Lab, University of Cambridge, United Kingdom; Machine Learning Research Group, University of Oxford, United Kingdom
Prompt-based learning has been an effective paradigm for large pretrained language models (LLMs), enabling few-shot or even zero-shot learning. Black-box prompt search has received growing interest recently for its dis...
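As a rough illustration of the clustering-and-pruning idea named in the title above, the sketch below scores a pool of candidate prompts with a black-box function, groups similar prompts, and keeps only the best-scoring member of each group. The dummy scoring function, the bag-of-words prompt representation, the Jaccard similarity, and all thresholds are assumptions made for this example, not details taken from the paper.

```python
# Illustrative sketch only: black-box prompt search with clustering and pruning.
# The scoring function is a stand-in for querying an LLM on a dev set; the
# bag-of-words embedding, Jaccard similarity, and thresholds are assumptions.
import random
from collections import Counter

CANDIDATE_PROMPTS = [
    "Classify the sentiment of the review:",
    "Decide if the following review is positive or negative:",
    "Label the review sentiment:",
    "Is this review positive?",
    "Rate the sentiment expressed below:",
    "Determine whether the review below is favorable:",
]

def black_box_score(prompt: str) -> float:
    """Stand-in for a black-box evaluation (e.g. dev-set accuracy of an LLM)."""
    random.seed(hash(prompt) % (2 ** 32))
    return random.random()

def embed(prompt: str) -> Counter:
    """Toy prompt representation: a bag of lower-cased tokens."""
    return Counter(prompt.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    """Jaccard similarity between two bag-of-words representations."""
    inter = sum((a & b).values())
    union = sum((a | b).values())
    return inter / union if union else 0.0

def cluster_and_prune(prompts, threshold=0.2, keep_per_cluster=1):
    """Greedy single-link clustering of prompts, then prune each cluster
    down to its best-scoring member(s)."""
    clusters = []
    for p in prompts:
        for c in clusters:
            if similarity(embed(p), embed(c[0])) >= threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    survivors = []
    for c in clusters:
        c.sort(key=black_box_score, reverse=True)
        survivors.extend(c[:keep_per_cluster])
    return survivors

if __name__ == "__main__":
    kept = cluster_and_prune(CANDIDATE_PROMPTS)
    print("surviving prompts:", kept)
    print("best prompt:", max(kept, key=black_box_score))
```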
AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning
arXiv, 2023
Authors: Zhou, Han; Wan, Xingchen; Vulić, Ivan; Korhonen, Anna. Language Technology Lab, University of Cambridge, United Kingdom; Machine Learning Research Group, University of Oxford, United Kingdom
Large pretrained language models are widely used in downstream NLP tasks via task-specific fine-tuning, but such procedures can be costly. Recently, Parameter-Efficient Fine-Tuning (PEFT) methods have achieved strong...
Chunked Attention-Based Encoder-Decoder Model for Streaming Speech Recognition
International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Authors: Mohammad Zeineldeen; Albert Zeyer; Ralf Schlüter; Hermann Ney. Computer Science Department, Machine Learning and Human Language Technology, RWTH Aachen University, Germany; AppTek GmbH, Germany
We study a streamable attention-based encoder-decoder model in which either the decoder, or both the encoder and decoder, operate on pre-defined, fixed-size windows called chunks. A special end-of-chunk (EOC) symbol a...
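To make the fixed-size chunk idea in the entry above concrete, the sketch below restricts single-head self-attention to a block-diagonal pattern so that each position only attends within its own chunk. The chunk size, the NumPy single-head formulation, and the omission of the end-of-chunk (EOC) symbol handling are simplifications assumed for illustration, not the authors' implementation.

```python
# Illustrative sketch: chunk-wise (block-diagonal) attention masking.
# Chunk size and the single-head NumPy attention are assumptions for this
# example; the EOC-symbol mechanism described in the paper is not modeled here.
import numpy as np

def chunk_attention_mask(seq_len: int, chunk_size: int) -> np.ndarray:
    """Boolean mask: position i may attend to position j only if both lie
    in the same fixed-size chunk."""
    chunk_ids = np.arange(seq_len) // chunk_size
    return chunk_ids[:, None] == chunk_ids[None, :]

def chunked_self_attention(q, k, v, mask):
    """Single-head scaled dot-product attention restricted by the chunk mask."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)  # forbid cross-chunk attention
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, dim, chunk_size = 8, 4, 3  # hypothetical sizes
    x = rng.normal(size=(seq_len, dim))
    mask = chunk_attention_mask(seq_len, chunk_size)
    print(mask.astype(int))                              # block-diagonal 0/1 pattern
    print(chunked_self_attention(x, x, x, mask).shape)   # (8, 4)
```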
On the Relevance of Phoneme Duration Variability of Synthesized Training Data for Automatic Speech Recognition
IEEE Workshop on Automatic Speech Recognition and Understanding
Authors: Nick Rossenbach; Benedikt Hilmes; Ralf Schlüter. Computer Science Department, Machine Learning and Human Language Technology, RWTH Aachen University, Germany; AppTek GmbH, Germany
Synthetic data generated by text-to-speech (TTS) systems can be used to improve automatic speech recognition (ASR) systems in low-resource or domain mismatch tasks. It has been shown that TTS-generated outputs still d...
Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments
arXiv, 2024
Authors: Zhou, Han; Wan, Xingchen; Liu, Yinhong; Collier, Nigel; Vulić, Ivan; Korhonen, Anna. Language Technology Lab, University of Cambridge, United Kingdom; Machine Learning Research Group, University of Oxford, United Kingdom
Large language models (LLMs) have shown promising abilities as cost-effective and reference-free evaluators for assessing language generation quality. In particular, pairwise LLM evaluators, which compare two generate...
Analyzing And Improving Neural Speaker Embeddings for ASR
arXiv, 2023
Authors: Lüscher, Christoph; Xu, Jingjing; Zeineldeen, Mohammad; Schlüter, Ralf; Ney, Hermann. Machine Learning and Human Language Technology, RWTH Aachen University, Aachen 52074, Germany; AppTek GmbH, Aachen 52062, Germany
Neural speaker embeddings encode the speaker's speech characteristics through a DNN model and are prevalent for speaker verification tasks. However, only a few inconclusive studies have investigated the usage of n...
Development of Hybrid ASR Systems for Low Resource Medical Domain Conversational Telephone Speech
arXiv, 2022
Authors: Lüscher, Christoph; Zeineldeen, Mohammad; Yang, Zijian; Raissi, Tina; Vieting, Peter; Le-Duc, Khai; Wang, Weiyue; Schlüter, Ralf; Ney, Hermann. Machine Learning and Human Language Technology, RWTH Aachen University, Aachen 52072, Germany; AppTek GmbH, Aachen 52062, Germany
Language barriers present a great challenge in our increasingly connected and global world. Especially within the medical domain, e.g. hospital or emergency room, communication difficulties and delays may lead to mal...
Investigating the Effect of Language Models in Sequence Discriminative Training for Neural Transducers
IEEE Workshop on Automatic Speech Recognition and Understanding
Authors: Zijian Yang; Wei Zhou; Ralf Schlüter; Hermann Ney. Computer Science Department, Machine Learning and Human Language Technology, RWTH Aachen University, Aachen, Germany; AppTek GmbH, Aachen, Germany
In this work, we investigate the effect of language models (LMs) with different context lengths and label units (phoneme vs. word) used in sequence discriminative training for phoneme-based neural transducers. Both la...
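For orientation, the formula below shows one standard lattice-style maximum mutual information (MMI) criterion in which an external language model enters both the numerator and the competing-hypothesis denominator; here X_r is the r-th acoustic observation sequence, W_r its reference transcription, p_theta the acoustic/transducer model, p_LM the language model, and kappa an acoustic scale. Whether this exact form matches the sequence discriminative training studied in the entry above is an assumption made for illustration.

```latex
% One standard MMI-style sequence discriminative criterion (illustrative form;
% the paper's exact objective and scaling may differ).
\[
  \mathcal{F}_{\mathrm{MMI}}(\theta)
  = \sum_{r=1}^{R}
    \log
    \frac{p_\theta(X_r \mid W_r)^{\kappa}\, p_{\mathrm{LM}}(W_r)}
         {\sum_{W} p_\theta(X_r \mid W)^{\kappa}\, p_{\mathrm{LM}}(W)}
\]
```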
End-To-End Training of a Neural HMM with Label and Transition Probabilities
IEEE Workshop on Automatic Speech Recognition and Understanding
Authors: Daniel Mann; Tina Raissi; Wilfried Michel; Ralf Schlüter; Hermann Ney. AppTek GmbH, Aachen, Germany; Machine Learning and Human Language Technology, Computer Science Department, RWTH Aachen University, Aachen, Germany
We investigate a novel modeling approach for end-to-end neural network training using hidden Markov models (HMM) where the transition probabilities between hidden states are modeled and learned explicitly. Most contem...
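As background for the "label and transition probabilities" named above, the textbook HMM factorization of the acoustic likelihood is given below, with x_1^T the acoustic feature sequence, W the label (word) sequence, and s_1^T a hidden state sequence; how the paper parameterizes the two factors with neural networks is not reproduced here.

```latex
% Textbook HMM factorization into per-frame label (emission) and transition
% probabilities; the neural parameterization used in the paper may differ.
\[
  p(x_1^T \mid W)
  = \sum_{s_1^T}
    \prod_{t=1}^{T}
    \underbrace{p(x_t \mid s_t, W)}_{\text{label probability}}
    \cdot
    \underbrace{p(s_t \mid s_{t-1}, W)}_{\text{transition probability}}
\]
```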