Large language models (LLMs) have been shown to perform well for many downstream tasks. Transfer learning can enable LLMs to acquire skills that were not targeted during pretraining. In financial contexts, LLMs can so...
ISBN (electronic): 9798350368741
ISBN (print): 9798350368758
Large language models (LLMs) have made significant advancements in natural language processing and are concurrently extending their language ability to other modalities, such as speech and vision. Nevertheless, most previous work focuses on prompting LLMs with perception abilities such as auditory comprehension, and an effective approach for augmenting LLMs with speech synthesis capabilities remains unclear. In this paper, we conduct a comprehensive empirical exploration of boosting LLMs with the ability to generate speech by combining the pre-trained LLMs LLaMA/OPT with the text-to-speech synthesis model VALL-E. We compare three integration methods between LLMs and speech synthesis models: directly fine-tuned LLMs, superposed layers of LLMs and VALL-E, and coupled LLMs and VALL-E using the LLM as a powerful text encoder. Experimental results show that directly fine-tuning LLMs with the LoRA method to boost speech synthesis capability does not work well, whereas superposing LLMs and VALL-E improves the quality of generated speech in both speaker similarity and word error rate (WER). Among the three methods, the coupled method, which leverages the LLM as the text encoder, achieves the best performance, outperforming the original speech synthesis model with consistently better speaker similarity and a significant (10.9%) WER reduction.
Authors:
Ghosh, Sagarika; Das, Soma; Chatterji, Sanjay; Pratihar, Sanjoy
CSE, Indian Institute of Information Technology Kalyani, Kalyani, West Bengal, India
CSE, University of Engineering and Management Jaipur, Jaipur, Rajasthan, India
CSE, Institute of Engineering and Management Kolkata, Kolkata, West Bengal, India
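To make the "coupled" integration described in the abstract above more concrete, here is a minimal sketch, assuming a frozen pre-trained LLM serves purely as a text encoder whose hidden states condition a separate speech-token decoder standing in for VALL-E. The SpeechTokenDecoder class, its dimensions, and the use of opt-125m are illustrative assumptions, not the paper's actual architecture.

```python
# Sketch of coupling a frozen LLM text encoder to a toy speech-token decoder.
# The decoder here is a placeholder for a VALL-E-style acoustic token model.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class SpeechTokenDecoder(nn.Module):
    """Toy autoregressive decoder over discrete acoustic codec tokens (hypothetical)."""
    def __init__(self, text_dim: int, n_codec_tokens: int = 1024, d_model: int = 512):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)      # project LLM states to decoder width
        self.tok_emb = nn.Embedding(n_codec_tokens, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_codec_tokens)

    def forward(self, text_states, codec_tokens):
        memory = self.text_proj(text_states)               # conditioning from the LLM encoder
        tgt = self.tok_emb(codec_tokens)
        return self.head(self.decoder(tgt, memory))        # logits over next codec token

# Frozen LLM used as the text encoder (small OPT checkpoint as a stand-in for LLaMA/OPT).
tok = AutoTokenizer.from_pretrained("facebook/opt-125m")
llm = AutoModel.from_pretrained("facebook/opt-125m")
for p in llm.parameters():
    p.requires_grad = False

text = "Large language models can also learn to speak."
with torch.no_grad():
    states = llm(**tok(text, return_tensors="pt")).last_hidden_state

decoder = SpeechTokenDecoder(text_dim=states.size(-1))
dummy_codec = torch.randint(0, 1024, (1, 50))              # placeholder acoustic tokens
logits = decoder(states, dummy_codec)
print(logits.shape)                                        # torch.Size([1, 50, 1024])
```

Under this reading, only the decoder is trained while the LLM stays frozen, which matches the abstract's framing of the LLM as a powerful text encoder rather than a fine-tuned speech generator.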
In contemporary software development and maintenance, the practice of reusing or copy-pasting code is prevalent to expedite processes. Consequently, numerous techniques have been developed to detect, compare, or match...
As higher education continues to place greater emphasis on students' writing skills, high-quality writing textbooks have become pivotal to the teaching process. This study employs a text complexity framework to ex...
Data-driven pre-trained language models typically perform shortcut learning wherein they rely on the spurious correlations between the data and the ground truth. This reliance can undermine the robustness and generali...
The rapid development of large language models (LLMs) such as GPT-3, GPT-4, LLaMA, and mBERT has significantly advanced the natural language processing (NLP) field across many widely spoken languages. However, the eff...
Recognizing emotions in dialogues is vital for effective human-computer interaction, yet remains a challenging task in natural language processing (NLP). Previous studies in Emotion Recognition in Conversation (ERC) h...
Large language models (LLMs) have shown impressive abilities in solving various natural language processing tasks and are now widely offered as services. LLM services enable users to accomplish tasks without requiring...
Large language models (LLMs) show strong performance in natural language processing tasks, but their application in the financial domain is limited. Current methods rely on large datasets and manual prompt engineering...
Explainable artificial intelligence (XAI) aims to ensure an AI system's decisions are transparent and understandable by humans, which is particularly important in potentially sensitive application scenarios in sur...