As a new stage in the evolution of the cloud computing paradigm, serverless computing abstracts away the underlying platform details, which makes it extremely challenging for users to choose a suitable serverless platform. To address this, targeting the jointcloud scenario of heterogeneous serverless platforms across multiple clouds, this paper presents FCloudless, a jointcloud collaborative mechanism that detects the full-lifecycle performance of serverless platforms across clouds. Based on a benchmark metric set that probes the performance-critical stages of the function lifecycle, we propose a performance optimization algorithm driven by the detected performance data: it accounts for all key stages that affect performance during a function's lifecycle and predicts overall performance by combining per-stage scores with dynamic weights. We evaluate FCloudless on AWS, AliYun, and Azure. The experimental results show that FCloudless can uncover the underlying performance of serverless platforms hidden in the black box, and that its optimization algorithm selects the optimal scheduling strategy for various applications in a jointcloud environment. FCloudless reduces runtime by 23.3% and 24.7% for cold and warm invocations, respectively, under cost constraints.
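The abstract does not give the exact scoring formula; as a rough, hypothetical illustration of combining per-stage lifecycle scores with dynamic weights and choosing a platform under a cost constraint, consider the Python sketch below (all stage names, scores, weights, and costs are invented, not FCloudless's measured metrics).

```python
# Hypothetical sketch: each platform receives a normalized score per lifecycle
# stage (cold start, execution, storage I/O, ...); dynamic weights encode how
# much each stage matters for the current workload.
from typing import Dict

def predict_overall(stage_scores: Dict[str, float],
                    stage_weights: Dict[str, float]) -> float:
    """Weighted combination of per-stage scores (higher is better)."""
    total = sum(stage_weights.values())
    return sum(stage_scores[s] * w for s, w in stage_weights.items()) / total

def pick_platform(platform_scores: Dict[str, Dict[str, float]],
                  stage_weights: Dict[str, float],
                  costs: Dict[str, float],
                  budget: float) -> str:
    """Highest-scoring platform whose estimated cost fits within the budget."""
    feasible = {p: s for p, s in platform_scores.items() if costs[p] <= budget}
    return max(feasible, key=lambda p: predict_overall(feasible[p], stage_weights))

# Invented probe data for three providers and a cold-start-sensitive workload.
probes = {
    "aws":    {"cold_start": 0.7, "execution": 0.9, "storage_io": 0.8},
    "aliyun": {"cold_start": 0.8, "execution": 0.7, "storage_io": 0.9},
    "azure":  {"cold_start": 0.6, "execution": 0.8, "storage_io": 0.7},
}
weights = {"cold_start": 0.5, "execution": 0.3, "storage_io": 0.2}
print(pick_platform(probes, weights,
                    costs={"aws": 1.0, "aliyun": 0.8, "azure": 0.9}, budget=1.0))
```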
In a diverse and self-governed multi-cloud context, service management and discovery are greatly challenged by the dynamic and evolving features of services. How to manage the features of cloud services and support accurate and efficient service discovery has become an open problem in the area of cloud computing. This paper proposes a field model of multi-cloud services and a corresponding service discovery method to address the issue. Different from existing research, our approach is inspired by the Bohr atom model. We use the abstractions of energy level and jumping mechanism to describe service status and variations, and thereby to support service demarcation and discovery. The contributions of this paper are threefold. First, we propose the abstraction of service energy level to represent the status of services, and a service jumping mechanism to capture dynamic and evolving features as the variation and re-demarcation of cloud services according to their energy levels. Second, we present the user acceptable service region to describe the services satisfying users' requests, together with a corresponding service discovery method, which can significantly decrease the service search scope and improve the speed and precision of service discovery. Third, a series of algorithms is designed to implement the generation of the field model, user acceptable service regions, the service jumping mechanism, and user-oriented service discovery. We have conducted extensive experiments on the QWS dataset to validate and evaluate our proposed models and algorithms. The results show that the field model can well support the representation of the dynamic and evolving aspects of services in a multi-cloud context and that the algorithms improve the accuracy and efficiency of service discovery.
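The abstract leaves the demarcation rules unspecified; the minimal Python sketch below only illustrates the general intuition of energy-level demarcation and a user acceptable region (the level mapping, thresholds, and service data are assumptions, not the paper's actual field model).

```python
# Illustrative sketch (not the paper's algorithm): services are placed on
# discrete "energy levels" by an aggregate QoS score; a user's acceptable
# region keeps only the levels within the requested range, shrinking the
# search scope before fine-grained matching.
from typing import Dict, List

def energy_level(qos_score: float, num_levels: int = 5) -> int:
    """Map a normalized QoS score in [0, 1] to a discrete energy level (1 = best)."""
    return num_levels - min(int(qos_score * num_levels), num_levels - 1)

def acceptable_region(services: List[Dict], max_level: int) -> List[Dict]:
    """Keep only services whose current energy level satisfies the user's threshold."""
    return [s for s in services if energy_level(s["qos"]) <= max_level]

services = [
    {"name": "svc-a", "qos": 0.92},
    {"name": "svc-b", "qos": 0.55},
    {"name": "svc-c", "qos": 0.78},
]
# A service "jumps" to another level when its measured QoS changes.
services[1]["qos"] = 0.81
print([s["name"] for s in acceptable_region(services, max_level=2)])
```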
Convolutional neural networks (CNNs) have been extensively used in artificial intelligence fields such as computer vision and natural language processing. Winograd-based fast convolution algorithms can effectively red...
With the development of nuclear energy techniques, pin-by-pin calculation using the SP3 method based on pin cell homogenization attracts researchers. This study proposed a pin-by-pin calculation method of rectangular ...
Controlled thermonuclear fusion has always been a dream pursued by mankind. However, the physical processes of controlled thermonuclear fusion are complex and require numerical simulations with high performance computing, and the data generated by these processes across spatial, temporal, and temperature scales are too large to be captured, managed, processed, and collated by mainstream software tools in a reasonable time frame to reach more aggressive targets for fusion physical design. At the same time, fusion ignition can fail because of the distortion of various key physical quantities; only by decomposing the process step by step and clarifying how the key physical quantities change during the fusion physics process can an effective mechanism be formed to prevent such distortion from causing ignition failure in experimental physics. Using big data in collaboration with artificial intelligence and high performance computing to drive the physical design of fusion is a novel avenue. Through data acquisition and pre-processing, small sample libraries can be created for deep learning. Once the supervised learning converges, and with solid/fluid computational methods incorporated and the network further layered and its cells expanded, the parameters of a physical model conforming to Lawson's criterion become experimental physical parameters. We find that the new approach is superior to the traditional approach in data capacity, model combination approach, type of material, calculation speed, optimization of design iteration counts, and so on.
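As a point of reference for the "conforming to Lawson's criterion" condition mentioned above, the sketch below checks a deuterium-tritium triple product n·T·τE against the commonly quoted ignition threshold of roughly 3×10^21 keV·s·m⁻³; the threshold and plasma parameters here are illustrative and are not taken from the paper.

```python
# Rough Lawson-criterion check for D-T fusion via the triple product
# n * T * tau_E (density [m^-3] * temperature [keV] * confinement time [s]).
DT_TRIPLE_PRODUCT_THRESHOLD = 3.0e21  # keV * s / m^3, commonly quoted ignition value

def satisfies_lawson(density_m3: float, temperature_keV: float, tau_e_s: float) -> bool:
    """True if the plasma parameters meet the approximate D-T ignition condition."""
    return density_m3 * temperature_keV * tau_e_s >= DT_TRIPLE_PRODUCT_THRESHOLD

# Illustrative parameters in the ballpark of an ignited D-T plasma.
print(satisfies_lawson(density_m3=1.0e20, temperature_keV=15.0, tau_e_s=3.0))
```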
Vision-Language Navigation (VLN) tasks require an agent to follow human language instructions to navigate in previously unseen environments. This challenging field involving problems in natural language processing, co...
Graph neural networks (GNNs) have been emerging as powerful learning tools for unstructured data and have been successfully applied to many graph-based application domains. Sampling-based GNN inference is commonly adopted in existing graph learning frameworks to handle large-scale graphs. However, this approach is restricted by redundant vertex embedding computation on the GPU and inefficient loading of vertex features from the CPU to the GPU. In this paper, we propose PCGraph, a system that supports adaptive GNN inference and feature partition caching. PCGraph significantly reduces vertex embedding computation time through adaptive GNN inference, which selects the optimal inference algorithm and minimizes vertex embedding computation. PCGraph also reduces redundant data transfer between the CPU and GPU by partitioning the target vertices and caching their corresponding feature partitions in turn. We evaluate PCGraph against two state-of-the-art industrial GNN frameworks, i.e., PyG and DGL, on a diverse array of benchmarks. Experimental results show that PCGraph reduces vertex embedding computation by up to 99% and data loading time by up to 98.5%, and achieves up to 360× speedup over the state-of-the-art baselines.
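As a loose illustration of the feature partition caching idea (not PCGraph's actual implementation, which operates on GPU memory and GNN layers), the Python sketch below groups target vertices into partitions and loads each partition's features once before inference; all helper names are hypothetical.

```python
# Simplified picture of feature partition caching: target vertices are grouped
# into partitions, and the features needed by one partition are loaded (and
# cached) together before inference, instead of being fetched per vertex.
from typing import Callable, Dict, List

def partition_vertices(targets: List[int], num_parts: int) -> List[List[int]]:
    """Naive round-robin partitioning of target vertices (stand-in for a real partitioner)."""
    parts = [[] for _ in range(num_parts)]
    for i, v in enumerate(targets):
        parts[i % num_parts].append(v)
    return parts

def infer_with_partition_cache(targets: List[int],
                               load_features: Callable[[List[int]], Dict[int, list]],
                               infer: Callable[[int, Dict[int, list]], float],
                               num_parts: int = 4) -> Dict[int, float]:
    results = {}
    for part in partition_vertices(targets, num_parts):
        cache = load_features(part)       # one bulk CPU->GPU transfer per partition
        for v in part:
            results[v] = infer(v, cache)  # reuse cached features within the partition
    return results

# Toy usage with in-memory dummy features and a trivial "model".
feats = {v: [float(v)] for v in range(8)}
out = infer_with_partition_cache(
    targets=list(range(8)),
    load_features=lambda part: {v: feats[v] for v in part},
    infer=lambda v, cache: sum(cache[v]),
)
print(out)
```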
Despite impressive capabilities and outstanding performance, deep neural networks (DNNs) have drawn increasing public concern about their security, due to their frequently occurring erroneous behaviors. Therefore, it is necessary to conduct systematic testing of DNNs before they are deployed to real-world applications. Existing testing methods have provided fine-grained metrics based on neuron coverage and proposed various approaches to improve such metrics. However, it has been gradually realized that higher neuron coverage does not necessarily imply a better ability to identify the defects that lead to errors. Besides, coverage-guided methods cannot hunt errors caused by a faulty training procedure, so the robustness improvement of DNNs obtained by retraining on these testing examples is unsatisfactory. To address this challenge, we introduce the concept of excitable neurons based on the Shapley value and design a novel white-box testing framework for DNNs, namely DeepSensor. It is motivated by our observation that neurons bearing larger responsibility for model loss changes under small perturbations are more likely related to incorrect corner cases caused by potential defects. By maximizing the number of excitable neurons concerning various wrong behaviors of models, DeepSensor can generate testing examples that effectively trigger more errors caused by adversarial inputs, polluted data, and incomplete training. Extensive experiments on both image classification models and speaker recognition models have demonstrated the superiority of DeepSensor. Compared with state-of-the-art testing approaches, DeepSensor can find more test errors caused by adversarial inputs (∼ ×1.2), polluted data (∼ ×5), and incompletely trained DNNs (∼ ×1.3). Additionally, it can help DNNs build a larger l2-norm robustness bound (∼ ×3) via retraining, according to CLEVER's certification. We further provide interpretable proofs for the effectiveness of DeepSensor via excitable neurons.
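The abstract does not describe how the per-neuron Shapley values are computed; as a stand-in for that intuition only, the sketch below ranks neurons by a finite-difference sensitivity of the loss to small activation perturbations, which is a simpler proxy and not DeepSensor's Shapley-value procedure.

```python
import numpy as np

def neuron_sensitivity(loss_fn, activations: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Finite-difference estimate of how much each neuron's activation affects the loss.

    loss_fn maps an activation vector to a scalar loss; larger scores mark
    neurons whose perturbation changes the loss most ("excitable" candidates).
    """
    base = loss_fn(activations)
    scores = np.zeros_like(activations)
    for i in range(activations.size):
        perturbed = activations.copy()
        perturbed[i] += eps
        scores[i] = abs(loss_fn(perturbed) - base) / eps
    return scores

# Toy example: with a linear "loss", the sensitivities recover |weights|.
w = np.array([0.1, -2.0, 0.5])
acts = np.array([1.0, 1.0, 1.0])
print(neuron_sensitivity(lambda a: float(w @ a), acts))
```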
Convolutional Neural Networks (CNNs), one of the most representative algorithms of deep learning, are widely used in various artificial intelligence applications. Convolution operations often take most of the computat...
Feature selection places an important role in improving the performance of outlier detection, especially for noisy data. Existing methods usually perform feature selection and outlier scoring separately, which would s...