Despite well-developed cutting-edge representation learning for language, most language representation models focus on specific levels of linguistic units. This work introduces universal language representatio...
ISBN:
(Print) 9781665461399
Pseudo-haptics refers to the simulation of haptic sensations without the use of haptic interfaces, using, for example, audiovisual feedback and kinesthetic cues. Given the COVID-19 pandemic and the shift to online learning, there has been recent interest in pseudo-haptics, as it can help facilitate psychomotor skills development away from simulation centers and laboratories. Here we present work-in-progress describing the study design of a pseudo-haptics approach to virtual anesthesia skills development. We anticipate this work will provide greater insight into pseudo-haptics and its application to anesthesia-based training.
This study presents our approach to automatic Vietnamese image captioning for the healthcare domain in the text processing tasks of the Vietnamese Language and Speech Processing (VLSP) Challenge 2021, as shown in Figure 1. In...
We identify two major steps in data analysis: data exploration, for understanding and observing patterns/relationships in data; and the construction, design, and assessment of various models to formalize these relationships....
The aspect-based sentiment analysis (ABSA) task consists of three typical subtasks: aspect term extraction, opinion term extraction, and sentiment polarity classification. These three subtasks are usually performed jointl...
Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual ima...
Pre-trained language models (PrLMs) have been shown to be powerful in enhancing a broad range of downstream tasks, including various dialogue-related ones. However, PrLMs are usually trained on general plain text with common l...
Attention scorers have achieved success in parsing tasks like semantic and syntactic dependency parsing. However, in tasks modeled as parsing, like structured sentiment analysis, "dependency edges" are ver...
Cardiovascular disease (CVD) is a leading cause of death worldwide, with millions dying each year. The identification and early diagnosis of CVD are critical in preventing adverse health outcomes. Hence, this study pr...
ISBN:
(Print) 9781713845393
The pre-trained language model (PrLM) demonstrates dominant performance in downstream natural language processing tasks, and the multilingual PrLM in particular takes advantage of language universality to alleviate the issue of limited resources for low-resource languages. Despite these successes, the performance of multilingual PrLMs is still unsatisfactory, as they focus only on plain text and ignore obvious universal linguistic structure clues. Existing PrLMs have shown that monolingual linguistic structure knowledge can bring about better performance. Thus we propose a novel multilingual PrLM that supports both explicit universal dependency parsing and implicit language modeling. Syntax, in the form of universal dependency parses, serves not only as a pre-training objective but also as a learned representation in our model, which brings unprecedented PrLM interpretability and convenience in downstream task use. Our model outperforms two popular multilingual PrLMs, multilingual BERT and XLM-R, on cross-lingual natural language understanding (NLU) benchmarks and linguistic structure parsing datasets, demonstrating the effectiveness and stronger cross-lingual modeling capability of our approach.
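The dual objective described in this abstract, pre-training on both implicit language modeling and explicit dependency parsing, can be sketched as a weighted sum of the two losses. The function name, the scalar inputs, and the fixed weighting scheme below are illustrative assumptions for exposition, not the paper's actual training code.

```python
# Minimal sketch of a joint pre-training objective: masked language modeling
# plus a universal dependency parsing loss. In a real model both terms would
# come from the network's forward pass; here they are plain floats.

def joint_pretraining_loss(mlm_loss: float, dep_parse_loss: float,
                           dep_weight: float = 0.5) -> float:
    """Weighted combination of the implicit LM objective (mlm_loss)
    and the explicit dependency-parsing objective (dep_parse_loss).
    dep_weight controls how strongly syntax supervision contributes."""
    return mlm_loss + dep_weight * dep_parse_loss

# Equal emphasis on both objectives:
loss = joint_pretraining_loss(mlm_loss=2.0, dep_parse_loss=1.0, dep_weight=1.0)
# loss == 3.0
```

In practice the weight would be tuned (or annealed) so that syntax supervision shapes the shared representations without crowding out the language-modeling signal.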