New solutions are needed to enhance infrastructure resilience in response to global warming and increasing natural disasters. Recent studies demonstrate the potential of natural language processing (NLP) for mining un...
Large Language Models (LLMs) have emerged as a strategic force in advancing natural language processing (NLP), enabling machines to translate languages as well as make logical deductions. The role of the o...
The goal of screening resumes is to determine the top applicants for a position and to inform users of their resume score and areas for improvement. The literature on existing approaches has been analyzed, and it has ...
ISBN:
(Print) 9798350358513; 9798350358520
Traditional manufacturing faces challenges adapting to dynamic environments and quickly responding to manufacturing changes. The use of multi-agent systems has improved adaptability and coordination but requires further advancements in rapid human instruction comprehension, operational adaptability, and coordination through natural language integration. Large language models like GPT-3.5 and GPT-4 enhance multi-agent manufacturing systems by enabling agents to communicate in natural language and interpret human instructions for decision-making. This research introduces a novel framework in which large language models enhance the capabilities of agents in manufacturing, making them more adaptable and capable of processing context-specific instructions. A case study demonstrates the practical application of this framework, showing how agents can effectively communicate, understand tasks, and execute manufacturing processes, including precise G-code allocation among agents. The findings highlight the importance of continuous large language model integration into multi-agent manufacturing systems and the development of sophisticated agent communication protocols for a more flexible manufacturing system.
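The G-code allocation mentioned in this abstract can be pictured, in a very simplified form, as splitting a program into blocks and distributing them among machine agents. The sketch below is a generic illustration, not the paper's framework: the block-splitting rule (splitting at M6 tool-change commands) and the agent names are invented for the example.

```python
# Hypothetical sketch of G-code allocation among agents.
# The splitting rule and agent names are assumptions for illustration.

def split_gcode(program: str) -> list[list[str]]:
    """Split a G-code program into blocks at M6 (tool change) commands."""
    blocks, current = [], []
    for line in program.strip().splitlines():
        line = line.strip()
        if line.startswith("M6") and current:
            blocks.append(current)  # close the block before the tool change
            current = []
        current.append(line)
    if current:
        blocks.append(current)
    return blocks

def allocate(blocks: list[list[str]], agents: list[str]) -> dict[int, str]:
    """Assign each block to an agent round-robin (toy allocation policy)."""
    return {i: agents[i % len(agents)] for i in range(len(blocks))}

program = """
G21 G90
G0 X0 Y0
M6 T1
G1 X10 Y10 F300
M6 T2
G1 X20 Y0 F300
"""
blocks = split_gcode(program)
plan = allocate(blocks, ["mill_agent", "lathe_agent"])
```

In the framework described by the abstract, the allocation decision would instead come from LLM-mediated negotiation between agents; the round-robin policy here only stands in for that step.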
ISBN:
(Print) 9798350349184; 9798350349191
Multimodal fake news propagation on social media has emerged as a primary concern for both the public and the government. Distinguishing fake news from genuine content is challenging due to their close resemblance. Moreover, fake news often presents information that contradicts real-world facts, underscoring the necessity of incorporating comprehensive open-domain knowledge. However, current knowledge-based detection methods struggle to improve two key areas in integrating open-domain knowledge for fake news detection: i) query acquisition for retrieval and ii) utilization of retrieved knowledge. The challenges stem from limitations in understanding the semantics and reasoning about the relationship between open-domain knowledge and the content of the news. With the emergence of the Large Language Model (LLM), natural language processing (NLP) has witnessed a revolution in which impressive semantic understanding and reasoning abilities are significantly improved. This paper proposes a novel Open-domain Knowledge Integrated (OKI) framework for multimodal fake news detection, featuring two LLM-based agents that collaboratively leverage open-domain knowledge. One agent generates appropriate queries to retrieve knowledge, while the other filters out irrelevant retrieved knowledge. Experimental results demonstrate a significant performance improvement of OKI over established baselines on the Weibo and Twitter datasets.
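The two-agent pipeline this abstract describes (one agent generating retrieval queries, another filtering the retrieved knowledge) can be sketched generically as below. This is not the OKI implementation: both agents are stubbed with simple keyword heuristics, and in a real system `generate_queries` and `is_relevant` would each prompt an LLM. All names and the toy corpus are invented for illustration.

```python
# Generic two-agent retrieve-and-filter sketch; LLM calls are stubbed.

def generate_queries(news_text: str, max_queries: int = 2) -> list[str]:
    # Stub "query agent": use the longest distinct words as query terms.
    words = sorted(set(news_text.lower().split()), key=len, reverse=True)
    return words[:max_queries]

def is_relevant(news_text: str, passage: str) -> bool:
    # Stub "filter agent": keep passages sharing any content word.
    news_words = {w for w in news_text.lower().split() if len(w) > 3}
    return any(w in passage.lower() for w in news_words)

def gather_knowledge(news_text, retrieve):
    queries = generate_queries(news_text)
    retrieved = [p for q in queries for p in retrieve(q)]
    return [p for p in retrieved if is_relevant(news_text, p)]

# Toy retriever over a tiny in-memory corpus.
corpus = {
    "earthquake": ["An earthquake struck the region on Monday."],
    "reports": ["Unrelated celebrity gossip item."],
}
def retrieve(query):
    return corpus.get(query, [])

kept = gather_knowledge("reports claim earthquake damage", retrieve)
```

The point of the two-stage design, as the abstract frames it, is that query generation and relevance filtering are separate reasoning problems, so each gets its own agent.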
ISBN:
(Print) 9798350359329; 9798350359312
BERT-based Pre-trained Language Models (PLMs) have become a crucial step in achieving the best results in various natural language processing (NLP) tasks. However, current progress, which mainly focuses on major languages such as English and Chinese, has not thoroughly investigated low-resource languages, particularly agglutinative languages like Mongolian, due to the scarcity of large-scale data resources and the difficulty of understanding agglutinative knowledge. In this paper, we propose a novel PLM for the Mongolian language that incorporates a novel three-stage agglutinative knowledge injection strategy. Specifically, early-stage injection converts a Mongolian word sequence into fine-grained sub-word tokens, each comprising a stem and some suffixes; middle-stage injection designs a morphological knowledge-based masking strategy to enhance the model's ability to learn agglutinative knowledge; late-stage injection requires the model not only to restore the masked tokens but also to predict the order of suffixes. To address the issue of data scarcity, we create a large-scale Mongolian PLM dataset and three downstream-task datasets: News Classification, Named Entity Recognition (NER), and Part-of-Speech (POS) prediction. The experimental results on three downstream tasks demonstrate that our method surpasses the traditional BERT approach and successfully learns agglutinative language knowledge in Mongolian.
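The first two injection stages described in this abstract, stem-plus-suffix segmentation followed by morphology-aware masking, can be illustrated with a toy sketch. This is not the paper's method: the suffix list and the example word are invented, and real Mongolian morphology is far richer than greedy suffix stripping.

```python
# Illustrative sketch of stem+suffix segmentation and suffix-level
# masking. Suffix inventory and example word are toy assumptions.
import random

SUFFIXES = ["lar", "da", "ni"]  # toy agglutinative suffixes

def segment(word: str) -> list[str]:
    """Greedily strip known suffixes from the end of a word,
    returning [stem, '##suffix', ...] in surface order."""
    parts = []
    changed = True
    while changed:
        changed = False
        for suf in SUFFIXES:
            if word.endswith(suf) and len(word) > len(suf):
                parts.insert(0, "##" + suf)
                word = word[: -len(suf)]
                changed = True
                break
    return [word] + parts

def mask_suffixes(tokens: list[str], rng: random.Random) -> list[str]:
    """Morphology-aware masking: randomly mask whole suffix tokens
    (marked '##'), never the stem."""
    return [("[MASK]" if t.startswith("##") and rng.random() < 0.5 else t)
            for t in tokens]

tokens = segment("nomlarda")  # toy word: stem "nom" + "##lar" + "##da"
masked = mask_suffixes(tokens, random.Random(1))
```

Masking whole suffixes rather than arbitrary sub-word pieces is what makes the objective morphological: the model must recover a grammatical unit, which is the intuition behind the middle- and late-stage injections the abstract describes.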
This work presents Real-Time Semantic Mapping (RTSM), a new approach to improving language comprehension in autonomous systems using a knowledge graph. RTSM relies solely on ConceptNet as its dataset and incorpora...
Previous work has showcased the intriguing capability of large language models (LLMs) in retrieving facts and processing context knowledge. However, only limited research exists on the layer-wise capability of LLMs to...
Natural language processing (NLP) has made significant breakthroughs, mainly in the creation of transformer-based language models, and has demonstrated great performance in a variety of language comprehension tasks. D...
At present, the knowledge graph of train control on-board equipment is in most cases presented as only a single model, the entity extraction process is complicated, and there is a lack of a Q&A system and ...