A key challenge in personalized product search is to capture users' preferences. Recent work has attempted to model sequences of users' historical behaviors, i.e., product purchase histories, to build user profiles and personalize results accordingly. Although these approaches have demonstrated promising retrieval performance, we notice that most of them focus solely on the intra-sequence interactions between items. However, since the amount of historical behavior data is usually small, the user profiles learned by these approaches can be very sensitive to the noise it contains. To tackle this problem, we propose incorporating out-of-sequence external information to enhance user modeling. More specifically, we inject external item-item relations (e.g., belonging to the same brand) and query-query relations (e.g., the semantic similarities between them) into the intra-sequence interaction to learn better user profiles. In addition, we devise two auxiliary decoders, with a historical item sequence reconstruction task and a global item similarity prediction task, to further improve the reliability of user modeling. Experimental results on two datasets, built from simulated and real user search logs respectively, show that the proposed personalized product search method outperforms existing approaches.
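To make the idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of how an external item-item relation signal could be injected as a bias into the self-attention over a purchase history. The module, tensor names, and the scalar-bias-per-relation design are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class RelationBiasedSelfAttention(nn.Module):
    def __init__(self, dim, num_relations):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # one learnable scalar bias per external relation type (e.g., "same brand");
        # index 0 means "no relation between this pair of items"
        self.rel_bias = nn.Embedding(num_relations + 1, 1)

    def forward(self, item_emb, rel_ids):
        # item_emb: (batch, seq_len, dim) embeddings of historical items
        # rel_ids:  (batch, seq_len, seq_len) external relation type between item pairs
        q, k, v = self.q(item_emb), self.k(item_emb), self.v(item_emb)
        scores = q @ k.transpose(-2, -1) / item_emb.size(-1) ** 0.5
        scores = scores + self.rel_bias(rel_ids).squeeze(-1)  # out-of-sequence signal
        attn = scores.softmax(dim=-1)
        return attn @ v  # relation-aware representations of the history

# toy usage: pool the relation-aware sequence into a user profile vector
enc = RelationBiasedSelfAttention(dim=64, num_relations=3)
items = torch.randn(2, 5, 64)
rels = torch.randint(0, 4, (2, 5, 5))
profile = enc(items, rels).mean(dim=1)  # (2, 64)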
Knowledge Graphs (KGs) often suffer from incompleteness, which motivates the task of Knowledge Graph Completion (KGC). Traditional KGC models mainly concentrate on static KGs with a fixed set of entities and relations, or on dynamic KGs with temporal characteristics, and falter in generalizing to constantly evolving KGs with possibly irregular entity drift. Thus, in this paper, we propose a novel embedding-based link prediction model, termed DCEL, to handle the incompleteness of KGs with entity drift. Unlike traditional link prediction, DCEL can generate precise embeddings for drifted entities without imposing any regular temporal characteristic. A drifted entity is added into the KG with its links to existing entities predicted in an incremental fashion, with no need to retrain the whole KG, for computational efficiency. DCEL takes full advantage of unstructured textual descriptions and is composed of four modules, namely MRC (Machine Reading Comprehension), RCAA (Relation Constraint Attentive Aggregator), RSA (Relation Specific Alignment), and RCEO (Relation Constraint Embedding Optimization). Specifically, the MRC module is first employed to extract short texts from long and redundant descriptions. Then, RCAA aggregates the embeddings of the drifted entity's textual description and the pre-trained word embeddings learned from a corpus into a single text-based entity embedding, while shielding the impact of noise and irrelevant information. After that, RSA aligns the text-based entity embedding to the graph-based space to obtain the corresponding graph-based entity embedding, and the learned embeddings are fed into a gate structure and optimized by RCEO to improve the accuracy of representation learning. Finally, the graph-based model TransE is used to perform link prediction for drifted entities. Extensive experiments conducted on benchmark datasets in terms of evaluat
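As a concrete reference point, the sketch below illustrates only the final TransE step: scoring candidate links for a drifted entity whose graph-space embedding is assumed to have already been produced by the upstream RSA/RCEO modules. The function and variable names are illustrative assumptions, not DCEL's implementation.

import numpy as np

def transe_score(h, r, t):
    # TransE scores a triple (h, r, t) by -||h + r - t||; higher means more plausible
    return -np.linalg.norm(h + r - t, ord=2)

def rank_links(drifted_emb, rel_emb, existing_embs):
    # score the drifted entity as head against every existing entity as tail,
    # without retraining any existing embeddings
    scores = [transe_score(drifted_emb, rel_emb, t) for t in existing_embs]
    return np.argsort(scores)[::-1]  # candidate tail indices, best first

# toy usage: 100 existing entities, 50-dimensional embeddings
rng = np.random.default_rng(0)
existing = rng.normal(size=(100, 50))
drifted = rng.normal(size=50)    # stand-in for the aligned text-derived embedding
relation = rng.normal(size=50)
top_candidates = rank_links(drifted, relation, existing)[:10]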
Distributed Collaborative Machine Learning (DCML) has emerged in artificial intelligence-empowered edge computing environments, such as the Industrial Internet of Things (IIoT), to process the tremendous data generated by smart devices. However, parallel DCML frameworks require resource-constrained devices to update entire Deep Neural Network (DNN) models and are vulnerable to reconstruction attacks. Meanwhile, serial DCML frameworks suffer from training efficiency problems due to their serial training nature. In this paper, we propose a Model Pruning-enabled Federated Split Learning framework (MP-FSL) to reduce resource consumption with a secure and efficient training scheme. Specifically, MP-FSL compresses DNN models by adaptive channel pruning and splits each compressed model into two parts that are assigned to the client and the server. Meanwhile, MP-FSL adopts a novel aggregation algorithm to aggregate the pruned heterogeneous models. We implement MP-FSL on a real FL platform to evaluate its performance. The experimental results show that MP-FSL outperforms state-of-the-art frameworks in model accuracy by up to 1.35%, while reducing storage and computational resource consumption by up to 32.2% and 26.73%, respectively. These results demonstrate that MP-FSL is a comprehensive solution to the challenges faced by DCML, offering both reduced resource consumption and enhanced model performance.
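The two mechanisms named above, channel pruning and client/server model splitting, can be illustrated with the hypothetical PyTorch sketch below. The L1-based importance criterion, the keep ratio, and the cut layer are assumptions for illustration, not MP-FSL's actual configuration.

import torch
import torch.nn as nn

def prune_conv_channels(conv, keep_ratio=0.75):
    # keep the output channels whose filters have the largest L1 norm
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep = importance.argsort(descending=True)[: int(conv.out_channels * keep_ratio)]
    pruned = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep]
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep]
    return pruned, keep

# toy model, pruned and then partitioned at a cut layer as in split learning
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
pruned_conv, kept = prune_conv_channels(model[0], keep_ratio=0.5)
model[0] = pruned_conv
model[4] = nn.Linear(len(kept), 10)              # keep downstream shapes consistent
client_part, server_part = model[:2], model[2:]  # client sends activations at the cut
smashed = client_part(torch.randn(1, 3, 32, 32))
logits = server_part(smashed)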
The advancement of the Internet of Medical Things (IoMT) has led to the emergence of various health and emotion care services, e.g., health monitoring. To cater to the increasing computational requirements of IoMT services, Mobile Edge Computing (MEC) has emerged as an indispensable technology in smart health. Benefiting from their cost-effective deployment, unmanned aerial vehicles (UAVs) equipped with MEC servers in Non-Orthogonal Multiple Access (NOMA) networks have emerged as a promising solution for providing smart health services in proximity to medical devices (MDs). However, the escalating number of MDs and the limited communication resources of UAVs give rise to a significant increase in transmission latency. Moreover, due to the limited communication range of UAVs, the geographically distributed MDs lead to workload imbalance among UAVs, which deteriorates the service response delay. To this end, this paper proposes a UAV-enabled Distributed computation Offloading and Power control method with Multi-Agent reinforcement learning, named DOPMA, for NOMA-based IoMT environments. Specifically, this paper introduces computation and transmission queue models to analyze the dynamic characteristics of task execution latency and energy consumption. Moreover, a reward function based on a credit assignment scheme is designed, considering both system-level rewards and rewards tailored to each MD, and an improved multi-agent deep deterministic policy gradient algorithm is developed to derive offloading and power control decisions independently. Extensive simulations demonstrate that the proposed method outperforms existing schemes, achieving a \(7.1\%\) reduction in energy consumption and a \(16\%\) decrease in average delay.
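A hedged Python sketch of the credit-assignment idea, blending a shared system-level reward with a reward tailored to each MD, is given below. The weighting scheme and the delay/energy cost terms are illustrative assumptions, not DOPMA's exact formulation.

def md_reward(delay_i, energy_i, w_delay=0.5, w_energy=0.5):
    # reward tailored to a single medical device: lower delay and energy -> higher reward
    return -(w_delay * delay_i + w_energy * energy_i)

def system_reward(delays, energies):
    # system-level reward shared by all agents (average cost across MDs)
    n = len(delays)
    return -(sum(delays) + sum(energies)) / n

def credit_assigned_reward(i, delays, energies, alpha=0.7):
    # each agent i receives a blend of the global signal and its own contribution,
    # so that individual offloading/power decisions remain distinguishable
    return alpha * system_reward(delays, energies) + (1 - alpha) * md_reward(delays[i], energies[i])

# toy usage for three MDs
delays, energies = [0.12, 0.30, 0.08], [0.5, 0.9, 0.4]
rewards = [credit_assigned_reward(i, delays, energies) for i in range(3)]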
Anchor-based Multi-view Subspace Clustering (AMSC) has become a popular tool for large-scale multi-view clustering. However, current AMSC approaches still have some limitations. First, they typically recover the anchor graph structure in the original linear space, restricting their feasibility in nonlinear scenarios. Second, they usually overlook the potential benefits of jointly capturing inter-view and intra-view information for enhancing anchor representation learning. Third, these approaches mostly perform anchor-based subspace learning with a specific matrix norm, neglecting the latent high-order correlation across different views. To overcome these limitations, this paper presents an efficient and effective approach termed Large-scale Tensorized Multi-view Kernel Subspace Clustering (LTKMSC). Different from existing AMSC approaches, our LTKMSC approach exploits both inter-view and intra-view awareness for anchor-based representation building. Concretely, low-rank tensor learning is leveraged to capture the high-order correlation (i.e., the inter-view complementary information) among distinct views, upon which the \(l_{1,2}\) norm is imposed to explore the intra-view anchor graph structure in each view. Moreover, kernel learning is employed to capture the nonlinear anchor-sample relationships embedded in multiple views. With the unified objective function formulated, an efficient optimization algorithm with low computational complexity is further designed. Extensive experiments on a variety of multi-view datasets confirm the efficiency and effectiveness of our approach compared with other competitive approaches.
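The building blocks mentioned above can be sketched in Python as follows: a kernelized anchor-sample affinity matrix per view, stacked into a 3-way tensor on which LTKMSC would additionally impose the low-rank tensor penalty and the \(l_{1,2}\) regularizer (both omitted here). The RBF kernel, its bandwidth, and the anchor selection are assumptions for illustration only.

import numpy as np

def rbf_anchor_graph(X, anchors, gamma=1.0):
    # X: (n_samples, d), anchors: (m_anchors, d); returns an (m_anchors, n_samples)
    # nonlinear anchor-sample affinity matrix, column-normalized per sample
    d2 = ((anchors[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return K / K.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)
views = [rng.normal(size=(200, 20)), rng.normal(size=(200, 30))]   # two toy views
idx = rng.choice(200, 15, replace=False)                           # shared anchor samples
anchors = [v[idx] for v in views]
Z = np.stack([rbf_anchor_graph(v, a) for v, a in zip(views, anchors)])  # (views, m, n) tensor
# LTKMSC would further enforce a low-rank constraint across the view mode of Z
# and an l_{1,2} penalty within each view's anchor graph before clustering.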