In recent years, the text-to-text transfer transformer (T5) neural model has proved very powerful in many text-to-text tasks, including text normalization and grapheme-to-phoneme conversion. In this paper, we fine-tuned the T5 model for homograph disambiguation, one of the essential components of text-to-speech (TTS) systems. To compare our results with those of other studies, we used a publicly available dataset of US English homographs, Wikipedia Homograph Data. Our results outperform previously published single-model approaches. We also provide a more detailed error analysis, examine model performance on different types of homographs, and study the impact of training-set size on homograph disambiguation.
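Fine-tuning T5 for this task hinges on casting homograph disambiguation as a text-to-text problem: the model must be told which word in the sentence to disambiguate, and it emits a sense or pronunciation label as plain text. A minimal sketch of one plausible input/target encoding — the marker tokens, task prefix, and label format here are illustrative assumptions, not the paper's exact scheme:

```python
def build_t5_example(sentence: str, start: int, end: int, word_id: str):
    """Build a (source, target) pair for T5 fine-tuning.

    The homograph span [start, end) is wrapped in marker tokens so the
    model knows which word to disambiguate; the target is the homograph's
    sense/pronunciation label (e.g. a word id from Wikipedia Homograph
    Data). The "disambiguate:" prefix and <hg> markers are hypothetical.
    """
    source = ("disambiguate: "
              + sentence[:start] + "<hg> " + sentence[start:end] + " </hg>"
              + sentence[end:])
    target = word_id
    return source, target

src, tgt = build_t5_example("I will read the book tomorrow.", 7, 11, "read_present")
# src == "disambiguate: I will <hg> read </hg> the book tomorrow."
```

Pairs in this shape can be fed directly to a standard sequence-to-sequence fine-tuning loop; at inference time, the generated label string is mapped back to the homograph's pronunciation.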
This paper proposes a two-stage text summarization model to improve the quality of summaries of risk-disclosure text in prospectuses. The model combines extraction-based and generation-based methods. First, the TextRank algo...
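The extractive first stage named in the abstract, TextRank, scores sentences by running a PageRank-style iteration over a sentence-similarity graph and keeping the top-ranked sentences. A self-contained sketch of that stage, using the word-overlap similarity from the original TextRank formulation (function names and parameters are illustrative, not the paper's implementation):

```python
import math
import re

def _tokens(s):
    """Lowercased word set of a sentence (punctuation stripped)."""
    return set(re.findall(r"\w+", s.lower()))

def sentence_similarity(s1, s2):
    """TextRank similarity: word overlap normalized by log sentence lengths."""
    w1, w2 = _tokens(s1), _tokens(s2)
    overlap = len(w1 & w2)
    if overlap == 0 or len(w1) < 2 or len(w2) < 2:
        return 0.0
    return overlap / (math.log(len(w1)) + math.log(len(w2)))

def textrank_extract(sentences, top_k=2, damping=0.85, iters=50):
    """Rank sentences by power iteration on the similarity graph;
    return the top_k sentences in their original document order."""
    n = len(sentences)
    sim = [[0.0 if i == j else sentence_similarity(sentences[i], sentences[j])
            for j in range(n)] for i in range(n)]
    scores = [1.0] * n
    for _ in range(iters):
        new_scores = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if sim[j][i] > 0:
                    rank += sim[j][i] / sum(sim[j]) * scores[j]
            new_scores.append((1 - damping) + damping * rank)
        scores = new_scores
    top = sorted(range(n), key=lambda i: scores[i], reverse=True)[:top_k]
    return [sentences[i] for i in sorted(top)]
```

In a two-stage pipeline of this kind, the sentences selected here would then be passed to the generation-based model, which rewrites them into a fluent summary.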
This paper discusses a study by the QH Research Group at the University of Zurich aimed at simplifying scientific text for Simpletext@CLEF-2023’s Task 3. Using the pre-trained MUSS and T5 models, we explored their ef...