ISBN (print): 9798331540869; 9798331540852
The processing of metaphorical language has been one of the most challenging tasks in natural language processing (NLP). Recent work utilizing neural models has achieved notable results on metaphor detection. However, the intimate relationship between metaphorical language and sensory experience has not been given enough attention in NLP. This study proposes an innovative model for the task of Chinese metaphor detection by incorporating conceptual knowledge of sensory experiences into a neural network model. Experiments show that our model significantly outperforms state-of-the-art baseline models, contributing to the ongoing effort to incorporate neuro-cognitive data into NLP tasks. In addition, the effectiveness of our model deepens our understanding of metaphor by showing that sensory experiences form a crucial part of the embodied nature of metaphorical language.
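To make the fusion idea concrete, the following is a minimal, hypothetical sketch (not the authors' architecture): per-token contextual embeddings from any Chinese encoder are concatenated with sensory-norm features before a token-level metaphor classifier; all module names and dimensions here are assumptions.

import torch
import torch.nn as nn

class SensoryAwareMetaphorDetector(nn.Module):
    """Fuses contextual token embeddings with per-token sensory-norm features."""
    def __init__(self, hidden_size=768, sensory_dim=6, num_labels=2):
        super().__init__()
        # project sensory ratings (e.g., vision/touch/taste/smell/hearing/interoception)
        # into the encoder's hidden space before fusion
        self.sensory_proj = nn.Linear(sensory_dim, hidden_size)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_size * 2, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_labels),
        )

    def forward(self, token_embeddings, sensory_features):
        # token_embeddings: (batch, seq_len, hidden_size) from any Chinese encoder
        # sensory_features: (batch, seq_len, sensory_dim) looked up from a sensory lexicon
        fused = torch.cat([token_embeddings, self.sensory_proj(sensory_features)], dim=-1)
        return self.classifier(fused)  # per-token metaphorical/literal logits

# toy usage with random tensors standing in for real encoder output and norm lookups
model = SensoryAwareMetaphorDetector()
logits = model(torch.randn(2, 10, 768), torch.rand(2, 10, 6))
print(logits.shape)  # torch.Size([2, 10, 2])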
ISBN (print): 9798350386783; 9798350386776
The rapid advancement of artificial intelligence (AI) and machine learning (ML) technologies, particularly large language models (LLMs) such as the GPT series, has significantly impacted research and industrial applications. These models excel in various natural language processing (NLP) tasks, including text generation, comprehension, and translation. However, harnessing these capabilities for academic research still presents challenges, particularly for early-career researchers navigating extensive literature. In this paper, we introduce AcawebAgent, an AutoAgent specifically designed to support beginner researchers. It leverages the generation and analysis capabilities of LLMs to collect open academic knowledge from the web. AcawebAgent produces customized research reports that include in-depth overviews, practical applications, the latest developments, and future trajectories of specific research domains, significantly reducing the time and effort needed for comprehensive literature reviews and trend analyses.
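As a rough illustration only (not the AcawebAgent implementation), an agent loop of this kind might gather open web snippets for a topic and prompt an LLM to draft each report section the abstract mentions; search_web and llm are placeholder callables supplied by the reader.

SECTIONS = ["in-depth overview", "practical applications",
            "latest developments", "future trajectories"]

def build_research_report(topic, search_web, llm, max_snippets=10):
    # search_web(topic) -> list of text snippets; llm(prompt) -> generated text
    snippets = search_web(topic)[:max_snippets]
    context = "\n\n".join(snippets)
    report = {}
    for section in SECTIONS:
        prompt = (f"Using only the sources below, write the '{section}' section "
                  f"of a research report on {topic}.\n\nSources:\n{context}")
        report[section] = llm(prompt)
    return report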
ISBN (print): 9783031789519; 9783031789526
Class membership relations, which assign entities to a given class, form a backbone of knowledge graphs. As part of the knowledge engineering process, we propose a new method for evaluating the quality of these relations by processing descriptions of a given entity and class with a zero-shot chain-of-thought classifier that uses a natural language intensional definition of the class. We evaluate the method on two publicly available knowledge graphs, Wikidata and CaLiGraph, and seven large language models. Using the gpt-4-0125-preview large language model, the method's classification performance achieves a macro-averaged F1-score of 0.830 on data from Wikidata and 0.893 on data from CaLiGraph. Moreover, a manual analysis of the classification errors shows that 40.9% of errors were due to the knowledge graphs themselves, with 16.0% due to missing relations and 24.9% due to incorrectly asserted relations. These results show how large language models can assist knowledge engineers in the process of knowledge graph refinement. The code and data are available on GitHub (https://***/bradleypallen/evaluating-kg-class-memberships-using-llms).
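For illustration, a minimal sketch of the described zero-shot chain-of-thought membership check follows; the prompt wording and answer parsing are assumptions, and llm stands in for any chat-completion call (e.g., to gpt-4-0125-preview).

def check_class_membership(entity_description, class_name, class_definition, llm):
    prompt = (
        "You are evaluating a knowledge graph class membership assertion.\n"
        f"Class: {class_name}\n"
        f"Intensional definition of the class: {class_definition}\n"
        f"Entity description: {entity_description}\n"
        "Think step by step about whether the entity satisfies the definition, "
        "then answer on the last line with exactly 'MEMBER' or 'NOT A MEMBER'."
    )
    reasoning = llm(prompt)
    verdict = reasoning.strip().splitlines()[-1].upper()
    # note the substring check: 'MEMBER' also occurs inside 'NOT A MEMBER'
    return "NOT A MEMBER" not in verdict, reasoning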
ISBN (print): 9798350349122; 9798350349115
The effectiveness of Large Language Models (LLMs) in reasoning tasks is significantly influenced by the structure and formulation of the prompts. Contemporary research in prompt engineering aims to help LLMs better understand the paradigms of reasoning questions (e.g., CoT). However, these efforts have either struggled to effectively incorporate external knowledge into a single prompt or, when integrating entire-corpus information, have often failed to significantly enhance the reasoning capabilities of LLMs. This paper introduces a novel prompting method that incorporates implicit hints representing the logical combinatorial relationships between the known conditions of a reasoning problem, guiding LLMs to think correctly in the initial steps of reasoning. Extensive and comprehensive experimental results on four different reasoning datasets indicate that our proposed method improves accuracy while maintaining efficiency.
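A simplified sketch of hint-augmented prompting in this spirit is shown below; because the paper's hint-derivation procedure is not reproduced here, this version simply asks the model to propose condition combinations before solving, and llm is a placeholder completion function.

def solve_with_combination_hints(question, conditions, llm):
    numbered = "\n".join(f"({i + 1}) {c}" for i, c in enumerate(conditions))
    # first pass: elicit hints about which known conditions combine and what they yield
    hint_prompt = (
        f"Question: {question}\nKnown conditions:\n{numbered}\n"
        "Without solving the problem, state which conditions should be combined "
        "first and what each combination yields."
    )
    hints = llm(hint_prompt)
    # second pass: solve with the hints prepended to guide the initial reasoning steps
    solve_prompt = (
        f"Question: {question}\nKnown conditions:\n{numbered}\n"
        f"Hints about how the conditions combine:\n{hints}\n"
        "Now reason step by step and give the final answer."
    )
    return llm(solve_prompt)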
ISBN (print): 9789819794362; 9789819794379
The Semantic Dependency Graph is a framework for representing deep semantic knowledge through flexible graph structures. While recent work indicates that large language models (LLMs) have impressive language and knowledge understanding abilities, it remains unclear whether they can understand this deep semantic knowledge. To explore this question, we design four prompt-style probing tasks covering semantic structure and semantic relations, adapted to the inherent abilities of LLMs. To ensure thorough evaluation, we conduct extensive experiments in both in-context learning (ICL) and supervised fine-tuning (SFT) scenarios. Our findings indicate that understanding deep semantic knowledge requires a larger parameter scale, especially for high-order semantic structure knowledge and semantic relation knowledge. Furthermore, our experiments reveal that while LLMs perform well on the in-domain (ID) test set via SFT, their generalization to the out-of-domain (OOD) test set remains inadequate.
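As an illustration of what a prompt-style probe for semantic relation knowledge could look like in the ICL setting (the label set, demonstrations, and wording are assumptions rather than the paper's actual tasks):

ICL_EXAMPLES = [
    ("The chef sliced the bread.", "sliced", "chef", "Agent"),
    ("The bread was sliced thinly.", "sliced", "bread", "Patient"),
]

def probe_semantic_relation(sentence, head, dependent, llm):
    # demonstrations followed by the query, in a cloze-style prompt
    demos = "\n".join(
        f"Sentence: {s}\nHead: {h}\nDependent: {d}\nRelation: {r}"
        for s, h, d, r in ICL_EXAMPLES
    )
    prompt = (f"{demos}\nSentence: {sentence}\nHead: {head}\n"
              f"Dependent: {dependent}\nRelation:")
    return llm(prompt).strip()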
ISBN (digital): 9783031744853
ISBN (print): 9783031744846; 9783031744853
This paper introduces an approach that integrates natural language processing (NLP) and knowledge graphs with Reconfigurable Manufacturing Systems (RMS) to enhance flexibility and adaptability. We utilize a chatbot interface powered by GPT-4 and a structured knowledge base to simplify the complexities of manufacturing reconfiguration. This system not only boosts reconfiguration efficiency but also broadens access to advanced manufacturing technologies. We demonstrate our methodology through an application in capability matching, showcasing how it facilitates the identification of assets for new product requirements. Our results indicate that this integrated solution offers a scalable and user-friendly approach to overcoming adaptability challenges in modern manufacturing environments.
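A toy sketch of the capability-matching step follows; the data model is an assumption, and the paper's knowledge graph and GPT-4 chatbot layer are not reproduced here.

ASSET_CAPABILITIES = {  # hypothetical knowledge-base extract
    "robot_arm_A": {"pick_and_place", "screwdriving"},
    "cnc_mill_B": {"milling", "drilling"},
    "agv_C": {"transport"},
}

def match_capabilities(required):
    # map each required capability to the assets that can provide it;
    # an empty list flags a gap that would require reconfiguration or new assets
    matches = {}
    for capability in required:
        matches[capability] = [asset for asset, caps in ASSET_CAPABILITIES.items()
                               if capability in caps]
    return matches

print(match_capabilities({"milling", "transport", "laser_welding"}))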
ISBN (print): 9798331539894; 9798331539887
This study provides an in-depth examination of LLaMA 3's performance on a domain-specific task, namely classifying monetary policy texts. The aim is to categorize these texts as hawkish, dovish, or neutral by using a prompt to assess LLaMA 3's understanding of finance-related knowledge expected to have been acquired during its pretraining. Experimental results demonstrate that LLaMA 3 has indeed developed the commonsense knowledge necessary to interpret monetary policy, as it surpasses both majority and random baselines. To further understand its performance, we also analyze the consistency of its results across different model inputs, including the number of tokens, nouns, and verbs.
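A minimal sketch of such a zero-shot stance-classification set-up is given below; the actual prompt used in the study is not reproduced, and llm stands in for a LLaMA 3 inference call.

LABELS = ("hawkish", "dovish", "neutral")

def classify_monetary_policy(text, llm):
    prompt = (
        "Classify the stance of the following monetary policy statement as "
        "hawkish, dovish, or neutral. Answer with a single word.\n\n"
        f"Statement: {text}\nStance:"
    )
    answer = llm(prompt).strip().lower()
    # fall back to 'neutral' if the model's answer contains no known label
    return next((label for label in LABELS if label in answer), "neutral")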
This project constructs a subject knowledge map for instructional design based on natural language processing technology. This provides a new way of thinking and a new method for the teaching practice of this subject. First...
ISBN (print): 9798350344868; 9798350344851
Traditional multitask learning methods can typically only leverage shared knowledge within specific tasks or languages, resulting in a loss of either cross-language or cross-task knowledge. This paper proposes a general multilingual multitask model, named SkillNet-X, which enables a single model to tackle many different tasks in different languages. To this end, we define several language-specific skills and task-specific skills, each of which corresponds to a skill module. SkillNet-X sparsely activates the skill modules that are relevant to either the target task or the target language. Acting as knowledge transit hubs, skill modules are capable of absorbing task-related knowledge and language-related knowledge consecutively. We evaluate SkillNet-X on eleven natural language understanding datasets in four languages. Results show that SkillNet-X performs better than task-specific baselines and two multitask learning baselines. To investigate the generalization of our model, we conduct experiments on two new tasks and find that SkillNet-X significantly outperforms the baselines.
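A condensed sketch of sparse skill-module activation in this spirit follows; the module granularity, fusion by mean pooling, and sizes are assumptions rather than the paper's exact design.

import torch
import torch.nn as nn

class SparseSkillLayer(nn.Module):
    def __init__(self, skills, hidden_size=768):
        super().__init__()
        # one feed-forward module per skill, e.g. "lang_zh" or "task_ner"
        self.skills = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.GELU())
            for name in skills
        })

    def forward(self, hidden_states, active_skills):
        # only the modules relevant to the target language and task are run
        outputs = [self.skills[name](hidden_states) for name in active_skills]
        return torch.stack(outputs).mean(dim=0)

# toy usage: a Chinese NLI example activates the language and task skills only
layer = SparseSkillLayer(["lang_zh", "lang_en", "task_nli", "task_ner"])
hidden = torch.randn(2, 16, 768)
out = layer(hidden, active_skills=["lang_zh", "task_nli"])
print(out.shape)  # torch.Size([2, 16, 768])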
We live in a world of information, and with the ever-increasing rate of content growth, we have no choice but to use machine-based solutions to manage, classify, and use it. However, the produced content is often unst...