ISBN (digital): 9798350385922
ISBN (print): 9798350385939; 9798350385922
Summarizing legal case judgments is a complex task in legal natural language processing (NLP), and there is a gap in understanding how various summarization models, both extractive and abstractive, perform within the domain of legal documents. With around 4 crore (40 million) pending cases in the Indian court system, this study addresses the laborious task of manually summarizing legal documents. It introduces both supervised and unsupervised models for extractive and abstractive summarization, showcasing their performance through evaluations using ROUGE metrics and BERTScore. BART, T5, PEGASUS, RoBERTa, Legal-PEGASUS, and Legal-BERT models are used for abstractive summarization. TextRank, LexRank, LSA, Summarizer BERT, and KL-Summ are used for extractive summarization. Longformer and BERT-Legal-PEGASUS are also considered for the summarization task. For legal document summarization, we also used GPT-4 and LLaMA-2, employing prompt engineering with both zero-shot and one-shot prompts to extract summaries. To the best of our knowledge, this is the first paper to use large language models such as GPT-4 and LLaMA-2 for legal text summarization. In addition, a user-friendly chatbot has been developed using the LLaMA model, specifically designed to respond to queries related to legal texts. A web application has also been created that allows users to upload legal documents for summarization; users can choose from several languages, including Telugu, Tamil, Kannada, Malayalam, and Hindi, and the summarized text is translated into the selected language.
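A minimal sketch of the summarize-then-score workflow described in this abstract, assuming a Hugging Face summarization checkpoint and the rouge-score package; the checkpoint name (nsi319/legal-pegasus) and the length limits are illustrative assumptions, not the authors' exact setup.

```python
# Sketch only: abstractive summarization of one judgment plus ROUGE-1/2/L scoring.
# The checkpoint name, length limits, and truncation choice are assumptions.
from transformers import pipeline        # pip install transformers
from rouge_score import rouge_scorer     # pip install rouge-score

judgment_text = "... full text of the legal judgment ..."
reference_summary = "... gold headnote / reference summary ..."

# Any seq2seq summarization checkpoint can be dropped in here.
summarizer = pipeline("summarization", model="nsi319/legal-pegasus")
candidate = summarizer(judgment_text, max_length=256, min_length=64,
                       do_sample=False, truncation=True)[0]["summary_text"]

# ROUGE-1/2/L, the same family of metrics reported above.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for name, score in scorer.score(reference_summary, candidate).items():
    print(f"{name}: P={score.precision:.3f} R={score.recall:.3f} F1={score.fmeasure:.3f}")
```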
As an emerging intelligent algorithm, natural language processing has many advantages and has been widely used. In terms of songwriting, the automatic speech synthesis technology based on the speech model has been gre...
Speech emotion recognition is the process of accurately interpreting an individual's emotion from their speech. This paper introduces a Python-based approach using natural language processing techniques for real-t...
ISBN (print): 9798400704369
Recent studies have found that many VQA models are influenced by biases, preventing them from effectively using multimodal information for reasoning. Consequently, these methods, which perform well on standard VQA datasets, exhibit underwhelming performance on the bias-sensitive VQA-CP dataset. Although numerous studies in the past have focused on mitigating biases in VQA models, most have only considered language bias. In this paper, we address the issue of bias in the VQA task by targeting the various sources of bias. Specifically, to counteract shortcut biases, we integrate a bias detector capable of capturing both vision and language biases, and we reinforce its ability to capture biases using a generative adversarial network and knowledge distillation. To combat distribution bias, we use a cosine classifier to obtain a cosine feature branch from the base model, training it with an adaptive angular margin loss based on answer frequency and difficulty, along with a supervised contrastive loss to enhance the model's classification ability in the feature space. In the prediction stage, we fuse the cosine features with the prediction of the base model to obtain the final prediction of our model. Finally, extensive experiments demonstrate that our approach SD-VQA achieves state-of-the-art performance on the VQA-CPv2 dataset without using any data balancing, and achieves competitive results on the VQAv2 dataset.
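As an illustration of the cosine feature branch mentioned above, here is a minimal PyTorch sketch of a cosine classifier with an additive angular margin. The fixed margin and scale are illustrative hyperparameters; the paper's adaptive margin (based on answer frequency and difficulty) and its supervised contrastive loss are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Cosine-similarity classifier with an ArcFace-style additive angular margin."""
    def __init__(self, feat_dim, num_answers, scale=16.0, margin=0.2):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_answers, feat_dim))
        self.scale = scale
        self.margin = margin

    def forward(self, features, labels=None):
        # Cosine similarity between L2-normalized features and class weights.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        if labels is not None:
            # During training, add the margin to the angle of the target class.
            theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
            target = F.one_hot(labels, num_classes=cos.size(1)).bool()
            cos = torch.where(target, torch.cos(theta + self.margin), cos)
        return self.scale * cos  # logits; train with cross-entropy

# Toy usage: fused multimodal features from the base VQA model -> answer logits.
feats = torch.randn(8, 512)
labels = torch.randint(0, 3000, (8,))
logits = CosineClassifier(512, 3000)(feats, labels)
loss = F.cross_entropy(logits, labels)
```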
This research introduces a new and improved model for a financial chatbot based on deep learning, machine learning, and natural language processing to provide suitable financial services and increase the financial lit...
Fake news has been evolving around us for a very long time. The gradual growth of social media platforms has provided an easily accessible platform for publishing news to an audience, whether that news may be true or...
The paper presents a prediction model for identifying suicide intent in social media messages, especially on Twitter and Reddit. A huge dataset of postings, including symptoms of suicide ideation or behaviour, was ana...
ISBN (print): 9798331528911; 9798331528928
This paper explores the architecture and functions of Intelligent Database Management Systems (IDBMS), which integrate advanced artificial intelligence (AI) technologies to enhance traditional database management. By addressing the limitations of conventional systems, IDBMS aim to improve query optimization, resource utilization, and user interaction through machine learning, predictive analytics, and natural language processing. The paper outlines the core components and architectural models of IDBMS, details their functionalities, presents case studies demonstrating their effectiveness, and discusses future trends and challenges in the field.
ISBN (print): 9789819770069; 9789819770076
Although large language models and temporal knowledge graphs each have significant advantages in the field of artificial intelligence, they also face certain challenges. However, through collaboration, large language models and temporal knowledge graphs can complement each other, addressing their respective shortcomings. This collaborative approach aims to harness the potential feasibility and practical effectiveness of large language models as external knowledge bases for temporal knowledge graph reasoning tasks. In our research, we have meticulously designed a synergized model that leverages the knowledge from the graph as prompts. The answers generated by the large language model undergo careful processing before being seamlessly incorporated into the training dataset. The ultimate goal is to significantly enhance the reasoning capabilities of temporal knowledge graphs. Experimental results underscore the positive impact of this synergized model on the completion tasks of temporal knowledge graphs, showcasing its potential to address gaps in knowledge and improve overall performance. While its influence on prediction tasks is relatively weak, the collaborative synergy demonstrates promising avenues for further exploration and development in the realm of AI research.
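The "knowledge from the graph as prompts" idea can be sketched roughly as follows. This is an illustrative Python outline, not the authors' implementation; call_llm is a hypothetical placeholder for whatever LLM endpoint is used, and the returned candidate facts would still need the careful processing the abstract describes before entering the training data.

```python
from typing import List, Tuple

Quad = Tuple[str, str, str, str]  # (subject, relation, object, timestamp)

def build_prompt(history: List[Quad], query: Tuple[str, str, str]) -> str:
    """Serialize known temporal facts and the incomplete quadruple into a prompt."""
    s, r, t = query
    facts = "\n".join(f"- ({hs}, {hr}, {ho}, {ht})" for hs, hr, ho, ht in history)
    return (
        "Known temporal facts:\n" + facts + "\n"
        f"Question: which entity completes ({s}, {r}, ?, {t})?\n"
        "Answer with the entity name only."
    )

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: plug in any LLM API or local model here."""
    raise NotImplementedError

def complete_quadruple(history: List[Quad], query: Tuple[str, str, str]) -> Quad:
    """Turn the model's answer into a candidate fact to vet before training."""
    answer = call_llm(build_prompt(history, query)).strip()
    s, r, t = query
    return (s, r, answer, t)
```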
Intention detection and slot filling are two main tasks in the field of natural language understanding in natural language processing. Since the two tasks are highly correlated, they are often modeled jointly...
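A minimal PyTorch sketch of such joint modeling: a shared BiLSTM encoder feeds an utterance-level intent head and a token-level slot-tagging head, and the two losses are summed so the correlated tasks regularize each other. All dimensions and the encoder choice are illustrative assumptions, not a specific published model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointIntentSlotModel(nn.Module):
    """Shared BiLSTM encoder with an intent head and a per-token slot head."""
    def __init__(self, vocab_size, num_intents, num_slot_tags, emb_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden, num_intents)   # utterance-level
        self.slot_head = nn.Linear(2 * hidden, num_slot_tags)   # token-level

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))          # (B, T, 2H)
        intent_logits = self.intent_head(states.mean(dim=1))     # pooled utterance
        slot_logits = self.slot_head(states)                     # per-token tags
        return intent_logits, slot_logits

# Joint training: the correlated tasks share the encoder and their losses are summed.
model = JointIntentSlotModel(vocab_size=5000, num_intents=10, num_slot_tags=20)
tokens = torch.randint(1, 5000, (4, 12))                         # toy batch
intent_logits, slot_logits = model(tokens)
loss = F.cross_entropy(intent_logits, torch.randint(0, 10, (4,))) \
     + F.cross_entropy(slot_logits.reshape(-1, 20), torch.randint(0, 20, (48,)))
```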