Large Language Models (LLMs) have gained significant popularity and are extensively utilized across various domains. Most LLM deployments occur within cloud data centers, where they encounter substantial response dela...
Electric Vehicles (EVs) are becoming increasingly popular in daily life, replacing traditional fuel vehicles to reduce carbon emissions and protect the environment. EVs need to be charged, but the number of ...
The proliferation of dirty data on Internet of Things (IoT) devices can undermine the accuracy of data-driven decision-making by affecting the distribution of original data. The Quality of Service (QoS) of data cleaning on these devices is heavily impacted by processing delay and accuracy. In this paper, we find that edge service placement is a key step aligned with data cleaning and consider the collaborative edge service placement with distributed data cleaning (SPDC) problem. To address this issue, we propose a novel distributed collaborative edge-based architecture that effectively balances the demands of storage, communication, computation, and load constraints. Experimental results show that the proposed approach significantly improves the accuracy of data cleaning by 0.31%-86.07% and reduces delay by 2.73%-58.71% compared to state-of-the-art baselines.
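The abstract does not specify how services are assigned to edge nodes, so the following is only a minimal illustrative sketch of collaborative placement under storage, compute, and load constraints. The greedy delay-based heuristic, the node/service attributes, and all names (EdgeNode, CleaningService, greedy_place) are assumptions for illustration, not the SPDC algorithm itself.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class EdgeNode:
    name: str
    storage: float      # remaining storage capacity
    compute: float      # remaining compute capacity
    base_delay: float   # network delay from IoT devices to this node
    load: int = 0       # number of cleaning services already hosted

@dataclass
class CleaningService:
    name: str
    storage_need: float
    compute_need: float

def greedy_place(services: List[CleaningService],
                 nodes: List[EdgeNode]) -> Dict[str, Optional[str]]:
    """Place each data-cleaning service on the feasible node with the lowest
    estimated delay; delay is inflated by current load to approximate the
    load-balancing constraint mentioned in the abstract (illustrative only)."""
    placement: Dict[str, Optional[str]] = {}
    for svc in services:
        feasible = [n for n in nodes
                    if n.storage >= svc.storage_need and n.compute >= svc.compute_need]
        if not feasible:
            placement[svc.name] = None  # no node can host this service
            continue
        best = min(feasible, key=lambda n: n.base_delay * (1 + 0.2 * n.load))
        best.storage -= svc.storage_need
        best.compute -= svc.compute_need
        best.load += 1
        placement[svc.name] = best.name
    return placement

# Toy usage with two edge nodes and two cleaning services.
nodes = [EdgeNode("edge-A", storage=10, compute=8, base_delay=5.0),
         EdgeNode("edge-B", storage=6, compute=12, base_delay=7.0)]
services = [CleaningService("dedup", 3, 2), CleaningService("outlier", 4, 5)]
print(greedy_place(services, nodes))
```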
Artificial Intelligence Generated Content (AIGC) has gained significant popularity for creating diverse content. Current AIGC models primarily focus on content quality within a centralized framework, resulting in a hi...
Named Entity Recognition (NER) is an important task in knowledge extraction, which targets extracting structural information from unstructured text. To fully employ the prior knowledge of pre-trained language models, some research works formulate the NER task into the machine reading comprehension form (MRC-form) to enhance model generalization over commonsense knowledge. However, this transformation still faces the data-hungry issue when training data for the specific NER task is limited. To address the low-resource issue in NER, we introduce a method named active multi-task-based NER (AMT-NER), a two-stage multi-task active learning training model. Specifically, a multi-task learning module is first introduced into AMT-NER to improve its representation capability in low-resource NER tasks. Then, a two-stage training strategy is proposed to optimize AMT-NER multi-task learning. An associated task of Natural Language Inference (NLI) is also employed to further enhance its commonsense knowledge. More importantly, AMT-NER introduces an active learning module with uncertainty-based selection to actively filter training data and help the NER model learn efficiently. Besides, we also find that different external supportive data under different pipelines improve model performance differently in NER tasks. Extensive experiments demonstrate the superiority of our method and support our finding that introducing external knowledge is significant and effective in MRC-form NER tasks.
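The abstract does not describe how the uncertainty-based selection is computed; a minimal sketch follows, assuming mean token-level entropy of the model's label distributions as the uncertainty signal. The function names (token_entropy, select_uncertain_samples) and the entropy-ranking scheme are hypothetical illustrations of active sample selection, not the AMT-NER implementation.

```python
import math
from typing import List

def token_entropy(prob_dist: List[float]) -> float:
    """Shannon entropy of one token's predicted label distribution."""
    return -sum(p * math.log(p) for p in prob_dist if p > 0)

def select_uncertain_samples(unlabeled_pool, model_probs, budget: int):
    """Rank unlabeled MRC-form NER queries by mean token entropy and return
    the `budget` most uncertain ones for annotation (illustrative only).

    unlabeled_pool : list of (question, context) pairs
    model_probs    : per-sample list of per-token label distributions
                     produced by the current NER model
    budget         : number of samples to label in this active-learning round
    """
    scored = []
    for sample, probs in zip(unlabeled_pool, model_probs):
        uncertainty = sum(token_entropy(p) for p in probs) / max(len(probs), 1)
        scored.append((uncertainty, sample))
    scored.sort(key=lambda x: x[0], reverse=True)  # most uncertain first
    return [sample for _, sample in scored[:budget]]

# Toy usage: the second sample has flatter predictions, so it is selected.
pool = [("Which person is mentioned?", "Alice met Bob."),
        ("Which person is mentioned?", "It rained all day.")]
probs = [[[0.90, 0.05, 0.05], [0.80, 0.10, 0.10]],
         [[0.40, 0.30, 0.30], [0.34, 0.33, 0.33]]]
print(select_uncertain_samples(pool, probs, budget=1))
```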