Objectives This comparative analysis aims to assess the efficacy of encoder language models for clinical tasks in the Spanish language. The primary goal is to identify the most effective resources within this context.

Importance This study highlights a critical gap in NLP resources for the Spanish language, particularly in the clinical sector. Given the vast number of Spanish speakers globally and the increasing reliance on electronic health records, developing effective Spanish language models is crucial for both clinical research and healthcare delivery. Our work underscores the urgent need for specialized encoder models in Spanish that can handle clinical data with high accuracy, thus paving the way for advancements in healthcare services and biomedical research for Spanish-speaking populations.

Materials and Methods We examined 17 distinct corpora with a focus on clinical tasks. Our evaluation centered on Spanish Language models and Spanish Clinical Language models (both encoder-based). To ascertain performance, we meticulously benchmarked these models across a curated subset of the corpora. This extensive study involved fine-tuning over 3000 models.

Results Our analysis revealed that the best models are not clinical models but general-purpose models. Also, the biggest models are not always the best ones. The best-performing model, RigoBERTa 2, obtained an average F1 score of 0.880 across all tasks.

Discussion Our study demonstrates the advantages of dedicated encoder-based Spanish Clinical Language models over generative models. However, the scarcity of diverse corpora, mostly focused on NER tasks, underscores the need for further research. The limited availability of high-performing models emphasizes the urgency for development in this field.

Conclusion Through systematic evaluation, we identified the current landscape of encoder language models for clinical tasks in the Spanish language. While challenges remain, the availability of curated corpora and models offers ...
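As a rough illustration of the kind of fine-tuning run this benchmark repeats thousands of times, the sketch below fine-tunes a Spanish encoder checkpoint on a clinical text classification corpus with Hugging Face Transformers. The checkpoint identifier, data files, column names, label count, and hyperparameters are placeholders chosen for illustration; they are not the exact resources or settings used in the study.

```python
# Minimal sketch: fine-tuning an encoder model on a clinical classification corpus.
# All names below (checkpoint, CSV files, label count) are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

MODEL_ID = "PlanTL-GOB-ES/roberta-base-biomedical-clinical-es"  # placeholder checkpoint

# Hypothetical corpus with "text" and "label" columns.
dataset = load_dataset(
    "csv", data_files={"train": "train.csv", "validation": "dev.csv"}
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def tokenize(batch):
    # Truncate long clinical notes to the encoder's maximum context length.
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

args = TrainingArguments(
    output_dir="clinical-finetune",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```

In a benchmark of this scale, the same loop would be repeated per corpus, per model, and per hyperparameter configuration, with task-appropriate heads (e.g., token classification for NER) swapped in as needed.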
This paper conducts sentiment analysis and classification of tweets pertaining to the earthquake that struck Turkey in February 2023, utilizing deep learning techniques. The study aims to provide valuable insights int...
While encoder-only models such as BERT and ModernBERT are ubiquitous in real-world NLP applications, their conventional reliance on task-specific classification heads can limit their applicability compared to decoder-based large language models (LLMs). In this work, we introduce ModernBERT-Large-Instruct, a 0.4B-parameter encoder model that leverages its masked language modeling (MLM) head for generative classification. We design a simple approach, extracting all single-token answers from the FLAN dataset collection and repurposing standard MLM pre-training to mask only this single answer token. Our approach employs an intentionally simple training loop and inference mechanism that requires no heavy pre-processing, heavily engineered prompting, or architectural modifications. ModernBERT-Large-Instruct exhibits strong zero-shot performance on both classification and knowledge-based tasks, outperforming similarly sized LLMs on MMLU and achieving 93% of Llama3-1B's MMLU performance with 60% fewer parameters. We also demonstrate that, when fine-tuned, the generative approach using the MLM head matches or even surpasses traditional classification-head methods across diverse NLU tasks. This capability emerges specifically in models trained on contemporary, diverse data mixes, with models trained on lower-volume, less-diverse data yielding considerably weaker performance. Although preliminary, these results demonstrate the potential of using the original generative masked language modeling head over traditional task-specific heads for downstream tasks. Our work suggests that further exploration into this area is warranted, highlighting many avenues for future improvements.
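To make the generative-classification idea concrete, here is a minimal, hedged sketch of zero-shot classification with an encoder's MLM head: the input is wrapped in a prompt that ends in a mask token, and the prediction is the highest-scoring label word at that position. The checkpoint identifier, prompt template, and verbalizer words are assumptions made for illustration and are not taken from the paper.

```python
# Sketch: zero-shot classification via the MLM head of an encoder model.
# The checkpoint id, prompt wording, and label words are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_ID = "answerdotai/ModernBERT-Large-Instruct"  # assumed checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForMaskedLM.from_pretrained(MODEL_ID)
model.eval()

text = "The movie was a complete waste of time."
labels = ["good", "bad"]  # verbalizer: each label must map to a single token

# Prompt whose answer is a single masked token.
prompt = f"Review: {text}\nSentiment (good or bad): {tokenizer.mask_token}"

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    # Vocabulary logits at the masked position.
    mask_logits = model(**inputs).logits[0, mask_pos]

# Score only the single-token label words and pick the best one.
label_ids = [
    tokenizer(" " + w, add_special_tokens=False)["input_ids"][0] for w in labels
]
scores = mask_logits[0, label_ids]
print(labels[int(scores.argmax())])
```

Restricting the decision to single-token label words mirrors the single-token-answer constraint used when repurposing the FLAN collection for MLM training, as described above.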