Mental health disorders, including non-suicidal self-injury (NSSI) and suicidal behavior, represent a growing global concern. Early detection of these conditions is crucial for timely intervention and prevention of adverse outcomes. In this study, we present Guardian-BERT (Guardian-Bidirectional Encoder Representations from Transformers), a novel approach for the early detection of NSSI and suicidal behavior in electronic health records (EHRs) using natural language processing (NLP) techniques for the Spanish language. Guardian-BERT employs a dual-domain adaptation strategy based on a pre-trained language model. The initial adaptation phase involves training on EHR discharge reports, enabling the model to learn the structure and linguistic patterns typical of clinical text. A second adaptation phase, using EHRs from the Psychiatry department of another hospital, refines the model's understanding of the specialized terminology and nuanced expressions used by mental health professionals. Empirical results show that Guardian-BERT outperforms existing pre-trained models and other supervised methods in detecting NSSI and suicidal behavior. The model achieves a more balanced trade-off between precision and recall, resulting in superior F-measure scores. Specifically, Guardian-BERT attains an F-measure of 0.95 for NSSI detection and 0.89 for suicidal behavior prediction. In addition to predictive performance, we investigated risk factors associated with these mental health conditions, identifying influences such as adverse personal circumstances and emotional distress. This analysis serves two key purposes: enhancing the interpretability of individual predictions by linking them to relevant risk factors, and enabling broader research through patient stratification and temporal studies of risk factor evolution. Our findings indicate that language technologies like Guardian-BERT offer valuable support for healthcare professionals by facilitating early detection and prevention.
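The dual-domain adaptation described above follows the general recipe of continued (masked-language-model) pretraining on successive in-domain corpora, followed by task fine-tuning. The sketch below illustrates that recipe with Hugging Face Transformers; the Spanish base checkpoint, corpus file names, and hyperparameters are placeholder assumptions, not details taken from the paper.

```python
# Sketch of a two-stage domain-adaptive pretraining pipeline followed by
# fine-tuning for detection. Checkpoint names, file paths, and
# hyperparameters are illustrative placeholders, not the paper's settings.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          AutoModelForSequenceClassification,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "dccuchile/bert-base-spanish-wwm-cased"  # assumed Spanish base model
tok = AutoTokenizer.from_pretrained(BASE)

def mlm_stage(model_path, corpus_file, out_dir):
    """One continued-pretraining (masked-LM) pass over an in-domain corpus."""
    model = AutoModelForMaskedLM.from_pretrained(model_path)
    ds = load_dataset("text", data_files=corpus_file)["train"]
    ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])
    collator = DataCollatorForLanguageModeling(tok, mlm_probability=0.15)
    Trainer(model=model,
            args=TrainingArguments(out_dir, num_train_epochs=3,
                                   per_device_train_batch_size=16),
            train_dataset=ds, data_collator=collator).train()
    model.save_pretrained(out_dir)
    return out_dir

# Stage 1: adapt to general EHR discharge reports; Stage 2: adapt to
# psychiatry EHRs from a second hospital (both corpora are placeholders).
stage1 = mlm_stage(BASE, "discharge_reports.txt", "ehr-adapted")
stage2 = mlm_stage(stage1, "psychiatry_ehr.txt", "psy-adapted")

# Final step: fine-tune the doubly adapted encoder as a binary classifier
# (e.g., NSSI vs. no NSSI) on labeled records.
clf = AutoModelForSequenceClassification.from_pretrained(stage2, num_labels=2)
```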
In this work, we provide a literature review of active learning (AL) for its applications in natural language processing (NLP). In addition to a fine-grained categorization of query strategies, we also investigate sev...
Authors:
Habash, Nizar
New York University Abu Dhabi, United Arab Emirates
The Arabic language continues to be the focus of an increasing number of projects in natural language processing (NLP) and computational linguistics (CL). This tutorial provides NLP/CL system developers and researcher...
We recently introduced DRaiL, a declarative neuro-symbolic modeling framework designed to support a wide variety of NLP scenarios. In this demo, we enhance DRaiL with an easy-to-use Python interface equipped with meth...
ISBN:
(Print) 9781955917094
This paper focuses on paraphrase generation, which is a widely studied natural language generation task in NLP. With the development of neural models, paraphrase generation research has exhibited a gradual shift to neural methods in recent years. This has provided architectures for contextualized representation of an input text and for generating fluent, diverse, and human-like paraphrases. This paper surveys various approaches to paraphrase generation with a main focus on neural methods.
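As a minimal illustration of the neural approaches this survey covers, the sketch below generates candidate paraphrases with a pretrained sequence-to-sequence checkpoint; the model name, prompt prefix, and decoding settings are assumptions for illustration rather than recommendations from the survey (a checkpoint fine-tuned for paraphrasing would normally be substituted).

```python
# Minimal neural paraphrase generation with a seq2seq checkpoint.
# The checkpoint name, prefix, and decoding settings are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL = "t5-base"  # placeholder; a paraphrase-tuned checkpoint would be swapped in
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

sentence = "The committee approved the new policy after a long debate."
# T5-style models are prompted with a task prefix; a fine-tuned paraphrase
# model would use whatever prefix (or none) it was trained with.
inputs = tok("paraphrase: " + sentence, return_tensors="pt")

# Sampling-based decoding yields several diverse candidate paraphrases.
outputs = model.generate(**inputs, do_sample=True, top_p=0.95,
                         num_return_sequences=3, max_new_tokens=40)
for o in outputs:
    print(tok.decode(o, skip_special_tokens=True))
```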
State-of-the-art deep-learning-based approaches to natural language processing (NLP) are credited with various capabilities that involve reasoning with natural language texts. In this paper we carry out a large-scale ...
ISBN:
(Print) 9798891760608
Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution. This poses important questions about the robustness of NLP models and their high accuracy, which may be artificially inflated due to their underlying sensitivity to systematic biases. Despite these challenges, there is a lack of comprehensive surveys on the generalization challenge from an OOD perspective in natural language understanding. Therefore, this paper aims to fill this gap by presenting the first comprehensive review of recent progress, methods, and evaluations on this topic. We further discuss the challenges involved and potential future research directions. By providing convenient access to existing work, we hope this survey will encourage future research in this area.
ISBN:
(Print) 9798891760608
Understanding the internal reasoning behind the predictions of machine learning systems is increasingly vital, given their rising adoption and acceptance. While previous approaches, such as LIME, generate algorithmic explanations by attributing importance to input features for individual examples, recent research indicates that practitioners prefer examining language explanations that explain sub-groups of examples (Lakkaraju et al., 2022). In this paper, we introduce MaNtLE, a model-agnostic natural language explainer that analyzes a set of classifier predictions and generates faithful natural language explanations of classifier rationale for structured classification tasks. MaNtLE uses multi-task training on thousands of synthetic classification tasks to generate faithful explanations. Our experiments indicate that, on average, MaNtLE-generated explanations are at least 11% more faithful compared to LIME and Anchors explanations across three tasks. Human evaluations demonstrate that users predict model behavior better using explanations from MaNtLE compared to other techniques.
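To make the setup concrete, the sketch below shows one plausible way a MaNtLE-style explainer could be fed a small batch of structured examples and classifier predictions serialized as text; the serialization scheme, checkpoint, and prompt are assumptions for illustration and not MaNtLE's released interface.

```python
# Sketch of the input a MaNtLE-style explainer consumes: a small set of
# structured examples plus the classifier's predictions, serialized to text
# for a seq2seq model. Serialization scheme and checkpoint are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

predictions = [
    ({"age": 23, "income": "low",  "owns_home": "no"},  "denied"),
    ({"age": 47, "income": "high", "owns_home": "yes"}, "approved"),
    ({"age": 35, "income": "high", "owns_home": "no"},  "approved"),
]

def serialize(examples):
    """Flatten (features, predicted label) pairs into one textual prompt."""
    rows = [" | ".join(f"{k} = {v}" for k, v in feats.items()) +
            f" -> prediction: {label}" for feats, label in examples]
    return "explain the classifier: " + " ; ".join(rows)

MODEL = "t5-base"  # placeholder seq2seq checkpoint, not the released MaNtLE model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

ids = tok(serialize(predictions), return_tensors="pt")
out = model.generate(**ids, max_new_tokens=30)
# A multi-task-trained explainer would emit a rule-like explanation here,
# e.g. relating high income to approval; a vanilla checkpoint will not.
print(tok.decode(out[0], skip_special_tokens=True))
```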
ISBN:
(Print) 9781948087841
In this paper, we provide empirical evidence, based on a rigorously studied mathematical model for bi-populated networks, that a glass ceiling within the field of NLP has developed since the mid-2000s.
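The bi-populated network models used for this kind of analysis are typically variants of biased preferential attachment; the sketch below simulates such a network with an assumed minority fraction and cross-group acceptance probability, then compares the minority's overall share with its share among the highest-degree nodes. All parameters and the attachment rule are illustrative, not the paper's model or data.

```python
import random

# Toy bi-populated preferential-attachment simulation (illustrative only):
# each new node gets a group label, attaches proportionally to degree, and
# re-draws the target with probability (1 - rho) when the groups differ.
def simulate(n=5000, minority_frac=0.3, rho=0.7, seed=0):
    rng = random.Random(seed)
    group = [0, 1]          # two seed nodes, one per group
    degree = [1, 1]
    for new in range(2, n):
        g = 1 if rng.random() < minority_frac else 0
        while True:
            target = rng.choices(range(new), weights=degree)[0]
            if group[target] == g or rng.random() < rho:
                break       # same-group links always accepted; cross-group with prob rho
        group.append(g)
        degree.append(1)
        degree[target] += 1
    return group, degree

group, degree = simulate()
top = sorted(range(len(degree)), key=degree.__getitem__, reverse=True)[:100]
share_overall = sum(group) / len(group)
share_top = sum(group[i] for i in top) / len(top)
print(f"minority share overall: {share_overall:.2f}, among top-100 degree: {share_top:.2f}")
```

A glass-ceiling effect shows up in such simulations when the minority's share among the top-degree nodes falls well below its overall share.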
The temporal aspect is one of the most challenging areas in Natural Language Interfaces to Databases (NLIDB). This paper addresses and examines how temporal questions are being studied and supported by the research community a...