Knowledge Graphs have become a dominant research field in graph theory, but their incompleteness and sparsity hinder their application in various fields. Knowledge Graph Reasoning aims to alleviate these problems by deducing new knowledge or identifying false knowledge from existing knowledge. Recently, Graph Convolutional Network (GCN)-based methods have been among the most advanced approaches to knowledge graph reasoning, but they still suffer from problems such as incomplete neighbor-information aggregation and slow training. This paper proposes a knowledge graph reasoning model named GK for the link prediction task, which outperforms existing GCN-based methods by introducing Graphormer into knowledge reasoning. GK first treats a node and its surroundings as a hierarchical architecture, which enables the model to capture more useful reasoning information and thus improve prediction accuracy. In addition, to accelerate training on large-scale Knowledge Graphs, we present F-SPF, a faster shortest-path-finding method, for the edge-encoding process. Extensive experimental results show that GK achieves state-of-the-art prediction results among current GCN-based methods while improving training speed.
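The abstract does not detail how F-SPF works, but Graphormer-style edge encoding needs shortest paths between node pairs, and on an unweighted knowledge graph these can be computed by per-source breadth-first search. The sketch below is a minimal baseline of that step, assuming a plain adjacency-list graph (the adj dictionary and entity names are illustrative, not from the paper); F-SPF would be an accelerated replacement for this routine.

    from collections import deque

    def bfs_shortest_paths(adj, source):
        # Hop counts from `source` on an unweighted graph; unreachable
        # nodes are simply absent from the returned dictionary.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in dist:            # first visit = shortest hop count
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    # Toy knowledge-graph skeleton (entities only, relation labels dropped).
    adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
    print(bfs_shortest_paths(adj, "A"))      # {'A': 0, 'B': 1, 'C': 1, 'D': 2}

Running one such search per node yields the all-pairs distances the edge encoder consumes; that quadratic cost on large graphs is exactly what motivates a faster method.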
ISBN: (Print) 9780738110783
The proceedings contain 11 papers. The topics discussed include: accelerate distributed stochastic descent for nonconvex optimization with momentum; accelerating GPU-based machine learning in Python using MPI library: a case study with MVAPICH2-GDR; deep learning-based low-dose tomography reconstruction with hybrid-dose measurements; EventGraD: event-triggered communication in parallel stochastic gradient descent; a Benders decomposition approach to correlation clustering; high-bypass learning: automated detection of tumor cells that significantly impact drug response; deep generative models that solve PDEs: distributed computing for training large data-free models; automatic particle trajectory classification in plasma simulations; reinforcement learning-based solution to power grid planning and operation under uncertainties; and predictions of steady and unsteady flows using machine-learned surrogate models.
In the future heterogeneous integrated wireless network environment, the delay-limited network has the characteristics of distributed self-organization, multi-hop transmission, delay tolerance, and intermittent link connec...
Authenticating users based on their typing patterns has long been a goal for cybersecurity professionals and researchers. Although previous studies have addressed free-text authentication on mobile devices, none of them considered the necessity of continuous learning in these settings. In this work, we propose a variational autoencoder (VAE) model that deals with this issue. As a case study, we consider a scenario where the model is initially trained on data collected from the user in one language (English). The model is then expected to recognize the user when typing in another language (Korean). One way to adapt to that change is to retrain the model on a subset of the Korean data when it becomes available. At that point, two scenarios can arise: 1) the English data still exists, and the model is trained on the combination of English and Korean data; 2) the English data no longer exists, for security reasons or due to limited storage, so we use the decoder part of our VAE to generate data based on what has been learned and then retrain the model on the mix. The average Equal Error Rate achieved among 50 participants was 3.23% and 3.55% for scenarios 1 and 2, respectively (~14% less than the baseline case where the model is not retrained). These results demonstrate the need for continuous retraining of authentication models and highlight the efficiency of the proposed model and its ability to continuously learn, even without access to the previous training data.
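A minimal sketch of the core idea, assuming PyTorch and an illustrative 32-dimensional keystroke-timing feature vector (the layer sizes and dimensions are assumptions, not the architecture reported above): the decoder doubles as a generator, so scenario 2 can synthesize stand-ins for the discarded English data.

    import torch
    import torch.nn as nn

    class KeystrokeVAE(nn.Module):
        def __init__(self, in_dim=32, latent_dim=8):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
            self.mu = nn.Linear(64, latent_dim)
            self.logvar = nn.Linear(64, latent_dim)
            self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            return self.dec(z), mu, logvar

        @torch.no_grad()
        def generate(self, n):
            # Decode prior samples: synthetic feature vectors that substitute
            # for training data that is no longer stored (scenario 2).
            z = torch.randn(n, self.mu.out_features)
            return self.dec(z)

    vae = KeystrokeVAE()
    synthetic = vae.generate(256)     # mix with newly collected Korean samples
    print(synthetic.shape)            # torch.Size([256, 32])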
Summary form only given, as follows. The complete presentation was not made available for publication as part of the conference proceedings. Software systems must satisfy rapidly increasing demands imposed by emerging applications. For example, new AI applications, such as autonomous driving, require quick responses to an environment that is changing continuously. At the same time, software systems must be fault-tolerant in order to ensure a high degree of availability. As it stands, however, developing these new distributed software systems is extremely challenging even for expert software engineers due to the interplay of concurrency, asynchronicity, and failure of components. The objective of our research is to develop reusable solutions to the above challenges by means of novel programming models and frameworks that can be used to build a wide range of applications. This talk reports on our work on the design, implementation, and foundations of programming models and languages that enable the robust construction of large-scale concurrent and distributed software systems.
ISBN: (Print) 9780738111162
Aiming at the high complexity of parameter optimization for portfolio models, this paper designs a distributed high-performance portfolio optimization platform (HPPO) based on a parallel computing framework and an event-driven architecture. The platform consists of the data layer, the model layer, and the execution layer, built in a componentized, pluggable, and loosely coupled way. The platform uses parallelization to accelerate backtesting and parameter optimization of portfolio models over a given historical interval, and it can also connect portfolio models to real-time markets. Based on the HPPO platform, a parallel program is designed to optimize the parameters of the value-at-risk (VaR) model. The performance of the platform is evaluated by analyzing the experimental results and comparing against the open-source frameworks Zipline and Rqalpha.
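As an illustration of the kind of job such a platform parallelizes, the sketch below backtests one VaR parameter (the estimation-window length) per worker process. The historical-VaR formula and the synthetic returns are assumptions made for this example; the real platform would feed in market data through its data layer.

    import numpy as np
    from multiprocessing import Pool

    def historical_var(returns, confidence=0.99):
        # Historical value at risk: the loss threshold exceeded with
        # probability (1 - confidence) in the sample.
        return -np.percentile(returns, (1 - confidence) * 100)

    def backtest(window):
        # Toy objective: VaR over the last `window` days for one parameter
        # value. Random returns stand in for real market data.
        rng = np.random.default_rng(window)        # seeded for repeatability
        returns = rng.normal(0.0005, 0.01, 2500)   # ~10 years of daily returns
        return window, historical_var(returns[-window:])

    if __name__ == "__main__":
        windows = [125, 250, 500, 1000]            # candidate estimation windows
        with Pool() as pool:                       # one parallel backtest per parameter
            for window, var in pool.map(backtest, windows):
                print(f"window={window:4d}  99% VaR={var:.4f}")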
ISBN: (Print) 9781728146850
A detection and analysis framework for anomalous Internet crime data based on edge computing is designed in this paper. The edge server serves as both the edge data processing center and the data storage center: it receives data from edge devices, processes it, and returns the results to the devices. Edge computing complements cloud computing and cloud services, sits close to users and data sources, and provides a new model for intelligent computing; it is thus another new computing model after distributed computing, grid computing, and cloud computing. Inspired by these features, this paper proposes a novel crime data analysis framework. Numerical experiments demonstrate the satisfactory performance of the proposed method.
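As a concrete, heavily simplified instance of the receive-process-return loop described above (the wire format, the z-score rule, and the 3.0 threshold are all assumptions for illustration, not the paper's detector):

    import socket
    import statistics

    def anomaly_score(values):
        # Largest z-score in the batch; a stand-in for the real detector.
        mean = statistics.fmean(values)
        stdev = statistics.pstdev(values) or 1.0
        return max(abs(v - mean) / stdev for v in values)

    def serve(host="0.0.0.0", port=9000):
        # Edge-server loop: receive comma-separated readings from an edge
        # device, score them locally, and return the verdict immediately,
        # with no round trip to the cloud.
        with socket.create_server((host, port)) as srv:
            while True:
                conn, _ = srv.accept()
                with conn:
                    line = conn.recv(4096).decode().strip()
                    values = [float(x) for x in line.split(",")]
                    verdict = "anomalous" if anomaly_score(values) > 3.0 else "normal"
                    conn.sendall(verdict.encode())

    # serve()  # blocking loop; run on the edge node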
With the rapid development of 5G and the Internet of Things (IoT), mobile edge computing has gained considerable popularity in academic and industrial fields, as it provides physical resources closer to end users. Serv...
ISBN: (Print) 9781450370523
Containers have emerged as a new technology in clouds, replacing virtual machines (VMs) for distributed application deployment and operation. As an increasing number of new cloud-focused applications, such as deep learning and high-performance applications, start to rely on the high computing throughput of GPUs, efficiently supporting GPUs in container clouds becomes essential. While GPU virtualization has been extensively studied for VMs, limited work has been done for containers. One of the key challenges is the lack of support for GPU sharing between multiple concurrent containers. This limitation leads to low resource utilization when a GPU device cannot be fully utilized by a single application due to the burstiness of the GPU workload and limited memory bandwidth. To overcome this issue, we designed and implemented KubeShare, which extends Kubernetes to enable GPU sharing with fine-grained allocation. KubeShare is the first solution for Kubernetes to make GPU devices first-class resources for scheduling and allocation. Using real deep learning workloads, we demonstrate that KubeShare can significantly increase GPU utilization and roughly double overall system throughput, with less than 10% performance overhead during container initialization and execution.
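The core of fine-grained GPU sharing is packing fractional requests onto whole devices. The toy first-fit packer below is a sketch of that idea only; KubeShare's actual scheduler, its Kubernetes integration, and the job names shown here are not from the paper.

    def first_fit_gpu_pack(requests, num_gpus):
        # Pack fractional GPU requests (0 < r <= 1) onto whole GPUs, first fit.
        free = [1.0] * num_gpus               # remaining fraction per GPU
        placement = {}
        for name, req in requests.items():
            for gpu, cap in enumerate(free):
                if req <= cap + 1e-9:
                    free[gpu] -= req
                    placement[name] = gpu
                    break
            else:
                raise RuntimeError(f"no GPU can host {name} ({req})")
        return placement

    jobs = {"train-a": 0.5, "infer-b": 0.25, "train-c": 0.5, "infer-d": 0.25}
    print(first_fit_gpu_pack(jobs, num_gpus=2))
    # {'train-a': 0, 'infer-b': 0, 'train-c': 1, 'infer-d': 0}

Two half-GPU jobs and two quarter-GPU jobs fit on two devices here, whereas one-container-per-GPU allocation would need four.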
With the rapid development and popularization of internet technologies such as AI and 5G, organizations and individuals have suffered a dramatic increase in ransomware attacks, which has caused them substantial economic losses. Ransomware blackmails victims into paying a ransom by locking their devices or encrypting their files. Fast and effective ransomware classification and analysis of attack intents and patterns can improve the efficiency of security analysts and help discover ransomware variants earlier. We therefore propose a new ransomware classification method that uses entropy maps extracted from ransomware binary files. The entropy map retains fine-grained features of each ransomware family, which improves classification results. To address data imbalance among ransomware families, we propose a data augmentation method based on the Cycle-GAN network, combined with fine-tuning from transfer learning, to further improve the framework's classification results. We also introduce an attention mechanism into VGG-16 to enhance the network's feature extraction ability. Experimental results on 14 ransomware families show that the proposed method achieves the best performance, reaching 97.16% accuracy, which is better than other traditional ransomware visualization-based classification methods; it also retains the best classification performance compared with other neural networks.
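A minimal sketch of how an entropy map can be extracted from a binary, assuming a 256-byte block size and a 64-value row width (both illustrative; the paper's extraction parameters are not given in the abstract):

    import math

    def block_entropy(block):
        # Shannon entropy of one byte block, in bits (0 = constant, 8 = uniform).
        counts = [0] * 256
        for b in block:
            counts[b] += 1
        n = len(block)
        return -sum(c / n * math.log2(c / n) for c in counts if c)

    def entropy_map(path, block_size=256, width=64):
        # Slice the binary into fixed-size blocks, compute each block's
        # entropy, and lay the values out row by row as a 2-D grayscale map.
        with open(path, "rb") as f:
            data = f.read()
        values = [block_entropy(data[i:i + block_size])
                  for i in range(0, len(data), block_size)]
        return [values[i:i + width] for i in range(0, len(values), width)]

Scaled to pixel intensities, these rows form the image that the attention-augmented VGG-16 would classify.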