ISBN (Print): 9798350344868; 9798350344851
Performing logical reasoning based on prior knowledge is a crucial human cognitive ability and has been a long-standing objective in the field of artificial intelligence. Large language models based on the transformer architecture have become a common approach for logical reasoning over text. However, current language models often struggle to learn semantic information from logical expressions, resulting in underwhelming performance on logical reasoning tasks. In this paper, we propose a novel method to convert first-order logic (FOL) expressions into graph form and integrate them with embeddings from language models to enhance their reasoning ability. The proposed method learns directly from FOL formulas and generalizes to any scenario involving logical expressions. Experimental results demonstrate that the proposed method enhances the model's ability to learn logical semantic representations and thus brings a significant improvement in performance on complex reasoning tasks. The code is available at https://***/FOL-GNN.
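A minimal sketch of the kind of FOL-to-graph conversion described above, assuming a nested-tuple encoding of formulas and networkx for the graph; the node and edge conventions here are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: turning the FOL formula "forall x (Person(x) -> Mortal(x))" into a
# directed graph. The nested-tuple encoding and node/edge conventions are
# assumptions for illustration, not the paper's exact scheme.
import networkx as nx

formula = ("FORALL", "x", ("IMPLIES", ("Person", "x"), ("Mortal", "x")))

def fol_to_graph(expr, graph=None, parent=None):
    """Add one node per operator, predicate, or term, with an edge from its parent."""
    if graph is None:
        graph = nx.DiGraph()
    node_id = len(graph)  # each subexpression gets a fresh node id
    label = expr[0] if isinstance(expr, tuple) else expr
    graph.add_node(node_id, label=label)
    if parent is not None:
        graph.add_edge(parent, node_id)
    if isinstance(expr, tuple):
        for child in expr[1:]:
            fol_to_graph(child, graph, node_id)
    return graph

g = fol_to_graph(formula)
print(g.number_of_nodes(), "nodes;", [g.nodes[n]["label"] for n in g])
```

Such a graph could then be passed to a graph neural network whose output is combined with the language model embeddings; the paper's exact integration is not reproduced here.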
ISBN (Print): 9798350374353; 9798350374346
This research paper focuses on augmenting the process of learning a concept through the automated generation of notes and MCQs from educational videos. Our system works in three steps: (i) it transcribes and analyzes the video while extracting keywords and concepts, (ii) it generates detailed and informative notes tailored to the video content, and (iii) it creates Multiple Choice Questions (MCQs) based on the content. The system currently uses machine learning algorithms and natural language processing to carry out these steps. This research is relevant to online courses, flipped classrooms, and self-directed learning, and promotes a better understanding of concepts among learners.
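A minimal sketch of the three-step flow described above, assuming the transcript is already available as text; the keyword step uses TF-IDF as a stand-in, and the note and MCQ generation steps are shown only as prompts, since the paper's actual models and prompts are not specified.

```python
# Sketch of the transcript -> keywords -> notes/MCQ-prompts flow.
# TF-IDF keyword scoring and the prompt wording are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer

def extract_keywords(transcript: str, top_k: int = 5) -> list[str]:
    """Step (i): score terms in the transcript and keep the highest-weighted ones."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform([transcript])
    terms = vec.get_feature_names_out()
    scores = tfidf.toarray()[0]
    ranked = sorted(zip(terms, scores), key=lambda t: t[1], reverse=True)
    return [term for term, _ in ranked[:top_k]]

def build_prompts(transcript: str, keywords: list[str]) -> dict[str, str]:
    """Steps (ii) and (iii): prompts a generative model could answer; the model call itself is omitted."""
    topic = ", ".join(keywords)
    return {
        "notes": f"Write concise study notes on: {topic}.\nTranscript:\n{transcript}",
        "mcqs": f"Write 3 multiple-choice questions (4 options, mark the answer) about: {topic}.\nTranscript:\n{transcript}",
    }

transcript = "Gradient descent updates model parameters by moving against the gradient of the loss."
print(build_prompts(transcript, extract_keywords(transcript))["mcqs"])
```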
Simile tasks are challenging in natural language processing (NLP) because models require adequate world knowledge to produce predictions. In recent years, pre-trained language models (PLMs) have succeeded in NLP since...
Text summarization has developed rapidly as an important task in the field of natural language text generation. Driven by the practical need for Tibetan text summarization, some researchers have also begun to...
ISBN (Print): 9783031789762; 9783031789779
Language models now constitute essential tools for improving efficiency in many professional tasks such as writing, coding, or learning. For this reason, it is imperative to identify their inherent biases. In the field of natural language processing, five sources of bias are well identified: data, annotation, representation, models, and research design. This study focuses on biases related to geographical knowledge. We explore the connection between geography and language models by highlighting their tendency to misrepresent spatial information, leading to distortions in the representation of geographical distances. This study introduces four indicators to assess these distortions by comparing geographical and semantic distances. Experiments are conducted with these four indicators on eight widely used language models, and their implementations are available on GitHub (https://***/tetis-nlp/geographical-biases-in-llms). The results underscore the critical necessity of inspecting and rectifying spatial biases in language models to ensure accurate and equitable representations.
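As an illustration of comparing geographical and semantic distances, the sketch below computes the rank correlation between great-circle distances and cosine distances over city pairs. The coordinates are real, but the three-dimensional vectors are placeholders standing in for embeddings taken from a language model, and the indicator is an assumption in the spirit of the study rather than one of its four actual indicators (see the linked repository for those).

```python
# Sketch of a geography-vs-semantics indicator: rank correlation between
# great-circle distance and embedding cosine distance for city pairs.
# The 3-d "embeddings" are placeholders, not real language model vectors.
from itertools import combinations
from math import radians, sin, cos, asin, sqrt
import numpy as np
from scipy.stats import spearmanr

cities = {
    "Paris":  ((48.85, 2.35),   np.array([0.9, 0.1, 0.2])),
    "Berlin": ((52.52, 13.40),  np.array([0.8, 0.2, 0.3])),
    "Tokyo":  ((35.68, 139.69), np.array([0.1, 0.9, 0.7])),
}

def haversine(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    (lat1, lon1), (lat2, lon2) = map(lambda p: (radians(p[0]), radians(p[1])), (a, b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def cosine_distance(u, v):
    return 1 - float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

geo, sem = [], []
for c1, c2 in combinations(cities, 2):
    geo.append(haversine(cities[c1][0], cities[c2][0]))
    sem.append(cosine_distance(cities[c1][1], cities[c2][1]))

# A high correlation would mean the embeddings roughly preserve geography.
print(spearmanr(geo, sem)[0])
```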
Multilingual transformer models are considered a solution for natural language processing (NLP) in under-resourced languages like Tamil. Machine Reading Comprehension for the Tamil language was an under-studied ...
Scholarly resource retrieval suffers from information overload, low efficiency, and poor matching. To address this issue, we developed a personalized academic resource recommendation system base...
ISBN (Print): 9798400704901
Reddit's mission is to bring community, belonging, and empowerment to everyone in the world. This hands-on tutorial explores the immense potential of Artificial Intelligence (AI) to improve the accessibility of social media content for individuals with different disabilities, including hearing, visual, and cognitive impairments. We will design and implement a variety of AI-based approaches built on multimodal open-source Large Language Models (LLMs) to bridge the gap between research and real-world applications.
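A minimal sketch of one accessibility use case of the kind this tutorial targets: generating alt text for an image so screen readers can describe it. It uses an open-source image-captioning model via the Hugging Face pipeline API as a stand-in; the tutorial's actual multimodal LLMs and prompting strategies may differ.

```python
# Sketch: alt-text generation for a post image using an open captioning model.
# The model choice and the example path are assumptions for illustration.
from transformers import pipeline

# Downloads a small open-source image-captioning model on first run.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def alt_text_for(image_path: str) -> str:
    """Return a one-sentence description usable as alt text for screen readers."""
    result = captioner(image_path)
    return result[0]["generated_text"]

# Example (path is illustrative):
# print(alt_text_for("post_image.jpg"))
```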
The rapid development of Large Language Models (LLMs) has significantly advanced the field of natural language processing, including the automated generation of Multiple-Choice Questions (MCQs) from scientific literat...
Recent generative models demonstrate impressive performance in synthesizing photographic images, making it hard for humans to distinguish them from pristine ones, especially for realistic-looking synthetic facial image...