The ability to reason about events and their temporal relations is a key aspect of natural language understanding. In this paper, we investigate the ability of Large Language Models to resolve temporal references with...
The ADReSS-M Signal Processing Grand Challenge was held at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023. The challenge targeted difficult automatic prediction problems of great societal and medical relevance, namely, the detection of Alzheimer's Dementia (AD) and the estimation of cognitive test scores. Participants were invited to create models for the assessment of cognitive function based on spontaneous speech data. Most of these models employed signal processing and machine learning methods. The ADReSS-M challenge was designed to assess the extent to which predictive models built on speech in one language generalise to another language. The language data compiled and made available for ADReSS-M comprised English, for model training, and Greek, for model testing and validation. To the best of our knowledge, no previous shared research task investigated acoustic features of the speech signal or linguistic characteristics in the context of multilingual AD detection. This paper describes the context of the ADReSS-M challenge, its data sets, its predictive tasks, the evaluation methodology we employed, our baseline models and results, and the top five submissions. The paper concludes with a summary discussion of the ADReSS-M results and our critical assessment of the future outlook in this field.
ISBN:
(Print) 9783031850660; 9783031850677
We used deep learning methods to create a system for early detection of depression based on user comments on social networks. This approach exploits the large amounts of textual data available online to identify depression. The data set originates from the eRisk 2022 competition. Using natural language and statistical modeling techniques, our system analyzes user comments and detects signs of depression. We applied deep learning methods, in particular the BiLSTM (Bidirectional Long Short-Term Memory) model. Using this approach, we obtained an F-score of 42.96%.
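The abstract above names a BiLSTM text classifier but gives no architectural detail. A minimal sketch of such a model in PyTorch is shown below; all hyperparameters (vocabulary size, embedding and hidden dimensions, sequence length) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Binary text classifier: embedding -> BiLSTM -> linear head."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, 2)  # depressed / not depressed

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)           # (batch, seq, embed)
        _, (h_n, _) = self.lstm(embedded)              # h_n: (2, batch, hidden)
        features = torch.cat([h_n[0], h_n[1]], dim=1)  # final fwd + bwd states
        return self.head(features)                     # (batch, 2) logits

model = BiLSTMClassifier()
# 4 hypothetical comments, each tokenized to 50 integer ids
logits = model(torch.randint(0, 10000, (4, 50)))
print(logits.shape)
```

Concatenating the final forward and backward hidden states is one common way to pool a bidirectional LSTM for classification; the paper may use a different pooling scheme.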
Text sentiment analysis in natural language processing is an important means of understanding and interpreting human emotions in texts. It plays an important role in various applications, such as customer feedback ana...
ISBN:
(Print) 9798350359329; 9798350359312
Parameter-efficient, soft, prompt-based tuning methods have received increasing attention in various downstream tasks due to the high cost of traditional fine-tuning methods in pre-trained language models (PLMs). Prompt tuning (PT) is one such effective mechanism, which has achieved remarkable performance in transferring the acquired knowledge of a PLM to perform an unseen task within the same domain using task-specific prompts and informative instructions. Even though most prior work considers in-domain knowledge transfer using PT, much work remains to be done for PT-based out-of-domain knowledge transfer. In this study, we propose ProDepDet, a novel framework specifically designed to use a PLM's knowledge about structure and semantic modelling in multi-party conversations to perform the unseen, out-of-domain task of depression detection. To our knowledge, this study is the first attempt to adapt the acquired knowledge of a PLM for out-of-domain task modelling using PT-based cross-task transferability. Experiments on few-shot and full data settings across multiple benchmark datasets demonstrate the superiority of our PT framework in two downstream tasks: depressed utterance classification and depressed speaker identification.
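Prompt tuning, as described above, keeps the PLM frozen and learns only a small set of continuous prompt vectors prepended to the input. The sketch below shows that core mechanism in PyTorch; the prompt length, embedding dimension, and initialization scale are illustrative assumptions and do not describe ProDepDet's actual configuration.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepend trainable soft-prompt vectors to frozen input embeddings.

    Only `self.prompt` is trainable; the PLM that produced the input
    embeddings stays frozen, which is what makes PT parameter-efficient.
    """
    def __init__(self, embed_dim=768, prompt_len=20):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds):
        batch = input_embeds.size(0)
        # Broadcast the shared prompt across the batch, then concatenate
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

wrapper = SoftPromptWrapper()
x = torch.randn(2, 30, 768)  # stand-in for embeddings from a frozen PLM
out = wrapper(x)
print(out.shape)
```

The extended sequence (prompt tokens plus input tokens) is then fed through the frozen PLM, and only the 20 x 768 prompt matrix receives gradient updates.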
A very serious issue across social media platforms and cultural groups on the internet is the phenomenon of cyberbullying, an emerging form of victimization brought about by the digital age [1]. In this paper...
Natural Language Generation (NLG) acts as a bridge between input data and human communication. NLG systems are meant to generate human-understandable output, making it a useful technique mainly in report generation, au...
This paper explores the method of domain specialization for Large Language Models in the field of railway track management, as well as their potential and practical effectiveness in knowledge-based question answering ...
ISBN:
(Digital) 9798350354621
ISBN:
(Print) 9798350354638; 9798350354621
With the rapid advancement of artificial intelligence (AI), intelligent systems have been widely applied across various practical domains. During their experimental processes, data collection often occurs in complex environments, leading to challenges in ensuring data quality and integrity. Two significant issues are data redundancy and data noise, which can degrade model performance and impact the generalization ability and prediction accuracy of AI models. Traditional methods for handling these issues are either rule-based, relying heavily on domain knowledge, or machine learning-based, requiring large volumes of high-quality training data and significant computational resources. In this paper, we propose a novel approach leveraging Large Language Models (LLMs) combined with the Sequential Chain optimization algorithm to address data redundancy elimination and noise processing. By designing automated prompt templates tailored to experimental data characteristics and utilizing LLMs' extensive knowledge and reasoning capabilities, our approach improves the effectiveness of preprocessing tasks. Additionally, the Sequential Chain technique enhances LLM processing performance, reducing the hallucination phenomenon and ensuring higher accuracy in data preprocessing. Our experimental results demonstrate that LLM-based methods can achieve results comparable to traditional techniques while offering greater scalability and adaptability. Future work could focus on developing more efficient LLMs with lower computational requirements and refining prompt engineering techniques to reduce the time investment needed, making these advanced methods more accessible and practical for a broader range of applications.
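The sequential-chain idea above (each prompt step's output feeds the next step's template) can be sketched structurally as follows. The `call_llm` function is a hypothetical stand-in for any chat-completion client, stubbed here to echo its input so the demo runs offline; the prompt wording is illustrative and not the paper's actual templates.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM client; echoes the data line for the demo."""
    return prompt.splitlines()[-1]

def run_chain(record: str, steps) -> str:
    """Sequential chain: each step's output becomes the next step's input."""
    for template in steps:
        record = call_llm(template.format(data=record))
    return record

# Two-stage preprocessing chain: deduplicate, then denoise
steps = [
    "Remove duplicated fields from this record:\n{data}",
    "Strip noisy or malformed values from this record:\n{data}",
]
print(run_chain("temp=21.5, temp=21.5, status=###OK", steps))
```

With a real LLM behind `call_llm`, constraining each call to one narrow subtask is what the abstract credits with reducing hallucination relative to a single monolithic prompt.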
ISBN:
(Print) 9789819794393; 9789819794409
The professional nature and confidentiality of the power domain prevent the public from accurately assessing the authenticity of online electricity-related statements, fostering an environment conducive to the spread of electricity-related hate speech on social media. To address this challenge, we introduce a new hate speech detection task for the electric power domain. A dataset for electric power domain hate speech detection is constructed, consisting of 6,000 electricity-related Weibo posts. We propose a prompt learning approach for hate speech detection in the electric power domain, which integrates power domain knowledge, such as work scenarios and terminology. Subsequently, a prompt template is formulated to facilitate hate speech detection. Experimental results on the dataset indicate that the proposed prompt learning method surpasses the baseline model.
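A prompt template that injects domain knowledge, as the abstract describes, might be assembled like this. Both the term list and the template wording are illustrative assumptions, not the authors' actual prompt or lexicon.

```python
# Hypothetical power-domain term list injected into the prompt
DOMAIN_TERMS = ["outage", "meter reading", "tariff", "grid maintenance"]

# Cloze-style template: a masked-LM scores candidate words at [MASK]
TEMPLATE = (
    "Electric power domain terms: {terms}.\n"
    "Post: \"{post}\"\n"
    "Does this post contain electricity-related hate speech? [MASK]"
)

def build_prompt(post: str) -> str:
    """Fill the template with domain terms and the post to classify."""
    return TEMPLATE.format(terms=", ".join(DOMAIN_TERMS), post=post)

print(build_prompt("The repair crew ignored our outage for days!"))
```

In a typical prompt-learning setup, the filled template is passed to a masked language model and a verbalizer maps the predicted [MASK] token (e.g. "yes"/"no") to the hate-speech label.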