ISBN (Digital): 9781728190747
ISBN (Print): 9781728183824
Mobile Edge Computing (MEC) enables mobile users to offload their computation loads to nearby edge servers, and is expected to be integrated into the 5G architecture to support a variety of low-latency applications and services. However, an edge server can quickly become overloaded when its computation resources are heavily requested, and would then fail to process all of its received computation loads in time. Unlike most existing schemes, which instruct the overloaded edge server to transfer computation loads to the remote cloud, we make use of the spare computation resources of other local edge servers while explicitly taking the risk of network link failures into account. We measure such link-failure risks with the financial risk-management metric of Conditional Value-at-Risk (CVaR) and incorporate it as a constraint on the offloading decisions through a Minimum Cost Flow (MCF) problem formulation. Numerical results validate that our risk-aware offloading scheme enhances the availability of the MEC service.
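To make the CVaR notion concrete, here is a minimal sketch (not the authors' formulation) of how the Conditional Value-at-Risk of an offloading loss could be estimated from Monte Carlo link-failure scenarios; the scenario model, per-link loads and failure probability are illustrative assumptions.

```python
# Estimating CVaR of offloading loss from sampled link-failure scenarios.
import numpy as np

def cvar(losses: np.ndarray, alpha: float = 0.95) -> float:
    """CVaR_alpha: mean of the losses at or above the alpha-quantile (VaR)."""
    var = np.quantile(losses, alpha)          # Value-at-Risk threshold
    tail = losses[losses >= var]              # worst (1 - alpha) tail
    return float(tail.mean())

rng = np.random.default_rng(0)
# Hypothetical Monte Carlo scenarios: loss (e.g., unprocessed load) when
# links to three peer edge servers each fail independently with probability p.
p_fail, n_scenarios = 0.05, 10_000
offloaded_load = np.array([3.0, 2.0, 4.0])    # load routed over each link
failures = rng.random((n_scenarios, 3)) < p_fail
losses = (failures * offloaded_load).sum(axis=1)

print(f"Expected loss: {losses.mean():.3f}")
print(f"CVaR@0.95:     {cvar(losses, 0.95):.3f}")
```

In an MCF-style formulation, an estimate like this would bound how much load may be routed over failure-prone links when the CVaR constraint is enforced.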
Blockchain is a new-era technology that secures various kinds of exchanges. It is built entirely around a distributed database, a framework that helps to eliminate failure at any single po...
ISBN (Print): 9783319999814; 9783319999807
Scalable web systems are directly related to distributed storage systems used to process large amounts of data (big data). An example of such a system is Hadoop with its many extensions supporting data storage, such as SQL-on-Hadoop systems and the "Parquet" file format. Another kind of system for storing and processing big data is the NoSQL database, such as HBase, used in applications requiring fast random access. The Kudu system was created to combine the advantages of Hadoop and HBase and to enable both effective data set analysis and fast random access. In this research, a performance analysis of the aforementioned systems was performed. The experiment was conducted in the Amazon Web Services public cloud environment, where a cluster of nine virtual machines was configured. A fragment of the public "Wikipedia Page Traffic Statistics" dataset containing about a billion rows was used. The measurement results confirm that the Kudu system is a promising alternative to the commonly used technologies.
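As an illustration of how such a comparison can be set up, here is a minimal, hypothetical timing harness in the spirit of the experiment; the Impala connection, table names and query are assumptions, not the paper's actual benchmark code.

```python
# Best-of-N wall-clock timing for identical SQL against different backends.
import time

def time_query(cursor, sql: str, runs: int = 3) -> float:
    """Return the best-of-N wall-clock time for a query, in seconds."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        cursor.execute(sql)
        cursor.fetchall()                     # force full result retrieval
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical usage: compare an analytical scan on Kudu vs. Parquet through
# Impala, which can query both storage engines with identical SQL.
# from impala.dbapi import connect
# cur = connect(host="impala-coordinator").cursor()
# sql = "SELECT page, SUM(hits) FROM pageviews GROUP BY page ORDER BY 2 DESC LIMIT 10"
# print("kudu:   ", time_query(cur, sql.replace("pageviews", "pageviews_kudu")))
# print("parquet:", time_query(cur, sql.replace("pageviews", "pageviews_parquet")))
```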
In recent years, the parallel computing community has shown increasing interest in leveraging cloud resources for executing parallel applications. Clouds exhibit several fundamental features of economic value, like on...
Today, in the digital era, the electronic circuit is the key component, and its design and testing are validated through simulators. Even though the use of a simulator is cost-effective, large-circuit simulation is quite t...
ISBN (Digital): 9781728190747
ISBN (Print): 9781728183824
In sub-second stream computing, the answer to a complex query usually depends on aggregation or join operations on streams, especially multi-way theta joins. Some attribute keys are not distributed uniformly, which is called the data intrinsic skew problem; examples include taxi license plates in GPS trajectories and transaction records, or stock codes in stock quotes and investment portfolios. In this paper, we define the concept of key redundancy for a single stream as the degree of data intrinsic skew, and joint key redundancy for multi-way streams. We present an execution model for multi-way stream theta joins with a fine-grained cost model to evaluate its performance. We propose a solution named Group Join (GroJoin) that exploits key redundancy during transmission and execution in a cluster. GroJoin adapts to data intrinsic skew through the grouping condition we derive, namely that the selectivity of the theta-join results should be smaller than 25%. Experiments are carried out with our MS-Generator to produce multi-way streams, and the simulation results show that GroJoin can reduce transmission overheads by up to 45% under different key redundancies and value-key proportionality coefficients, and reduce query delay by up to 70% under different key distributions. We further implement GroJoin for multi-way stream theta joins in Spark Streaming. The experimental results demonstrate that join latency is reduced by about 40%-50% after our optimization, at a very small computation cost.
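To illustrate the grouping idea, the sketch below batches tuples that share a skewed key so the key is transmitted once per batch rather than once per tuple; the redundancy formula is an illustrative assumption, not the paper's exact definition.

```python
# Grouping tuples by a skewed key before transmission (GroJoin-style idea).
from collections import defaultdict

def key_redundancy(keys: list) -> float:
    """Average number of tuples per distinct key (1.0 = no redundancy)."""
    return len(keys) / len(set(keys))

def group_by_key(batch: list[tuple]) -> dict:
    """Group (key, value) tuples so each key is transmitted only once."""
    groups = defaultdict(list)
    for key, value in batch:
        groups[key].append(value)
    return dict(groups)

batch = [("plate-42", 10.1), ("plate-42", 10.4), ("plate-7", 3.3),
         ("plate-42", 10.9)]
print(key_redundancy([k for k, _ in batch]))  # 2.0 -> skewed toward plate-42
print(group_by_key(batch))  # {'plate-42': [10.1, 10.4, 10.9], 'plate-7': [3.3]}
```

The higher the key redundancy, the more transmission the grouping saves, which is consistent with the reported gains growing with skew.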
The design of automatic test case generation technology is an important step in the implementation of automated software testing. It plays an important role in guiding later testing work and is the fundamental gua...
ISBN (Print): 9781728108582
Anomalies during system execution can be detected by automated analysis of logs generated by the system. However, large-scale systems can generate tens of millions of lines of logs within days. Centralized implementations of traditional machine learning algorithms are not scalable for such data. Therefore, we recently introduced a distributed log analysis framework for anomaly detection. In this paper, we introduce an extension of this framework, which can detect anomalies earlier via incremental analysis instead of the existing offline analysis approach. In the extended version, we periodically process the log data accumulated so far. We conducted controlled experiments based on a benchmark dataset to evaluate the effectiveness of this approach. We repeated our experiments with various periods, which determine the frequency of analysis as well as the size of the data processed each time. Results showed that our online analysis can improve anomaly detection time significantly while keeping the accuracy at the same level as that obtained with the offline approach. The only exceptional case, in which accuracy is compromised, occurs rarely: when the analysis is triggered before all the log data associated with a particular session of events have been collected.
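The following is a minimal sketch of the periodic incremental analysis described above: every fixed period, all log events accumulated so far are re-analysed instead of waiting for one offline pass. The detector itself is a placeholder frequency heuristic, not the framework from the paper.

```python
# Periodic incremental log analysis: re-analyse accumulated events each period.
import time
from collections import Counter

class IncrementalLogAnalyzer:
    def __init__(self, period_s: float):
        self.period_s = period_s
        self.events: list[str] = []
        self.last_run = time.monotonic()

    def ingest(self, event: str) -> None:
        """Buffer one event; trigger analysis when the period has elapsed."""
        self.events.append(event)
        if time.monotonic() - self.last_run >= self.period_s:
            self.analyze()
            self.last_run = time.monotonic()

    def analyze(self) -> None:
        # Placeholder heuristic: flag event types far rarer than the median
        # frequency of all event types seen so far.
        counts = Counter(self.events)
        median = sorted(counts.values())[len(counts) // 2]
        rare = [e for e, c in counts.items() if c * 10 < median]
        if rare:
            print(f"possible anomalies after {len(self.events)} events: {rare}")
```

A shorter period detects anomalies sooner but re-processes data more often, which is the frequency/size trade-off the experiments vary.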
ISBN (Digital): 9780738123943
ISBN (Print): 9781665415637
The COVID-19 global pandemic is an unprecedented health crisis. Many researchers around the world have produced an extensive collection of literature since the outbreak. Analysing this information to extract knowledge and provide meaningful insights in a timely manner requires a considerable amount of computational power. Cloud platforms are designed to provide this computational power in an on-demand and elastic manner. Specifically, hybrid clouds, composed of private and public data centers, are particularly well suited to deploying computationally intensive workloads in a cost-efficient, yet scalable manner. In this paper, we develop a system utilising the Aneka Platform-as-a-Service middleware, with parallel processing and multi-cloud capability, to accelerate the data processing pipeline and the machine-learning-based article categorisation process on a hybrid cloud. The results are then persisted for further referencing, searching and visualising. The performance evaluation shows that the system helps reduce processing time and achieves linear scalability. Beyond COVID-19, the application could be used directly for broader scholarly article indexing and analysis.
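Aneka itself is .NET-based middleware, so as a generic stand-in the sketch below shows the same fan-out pattern for article categorisation using Python's standard library; the classifier is a hypothetical placeholder, not the paper's pipeline.

```python
# Fan-out categorisation of independent articles across worker processes.
from concurrent.futures import ProcessPoolExecutor

def categorise(article: dict) -> tuple[str, str]:
    """Placeholder classifier: assign a topic label to one article."""
    text = article["abstract"].lower()
    label = "treatment" if "drug" in text or "vaccine" in text else "other"
    return article["id"], label

def run_pipeline(articles: list[dict], workers: int = 8) -> dict:
    # Articles are independent, hence embarrassingly parallel: throughput
    # scales close to linearly with the number of workers/cloud nodes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(categorise, articles))

if __name__ == "__main__":
    docs = [{"id": "a1", "abstract": "A vaccine candidate trial ..."},
            {"id": "a2", "abstract": "Epidemiological modelling of ..."}]
    print(run_pipeline(docs, workers=2))
```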
ISBN (Digital): 9781728168517
ISBN (Print): 9781728168524
The neutron flux distribution inside the core of large nuclear reactors is a function of space and time. An online Flux Mapping System (FMS) is needed to monitor the core during reactor operation. The FMS estimates the core flux distribution from the measurements of a few in-core detectors using an appropriate algorithm. Here, a distributed Artificial Neural Network (D-ANN) model is developed using a parallel-forward multi-layer perceptron architecture to capture the spatial core flux variation in a nuclear reactor. The proposed D-ANN model is tested with simulated test-case data of the Advanced Heavy Water Reactor (AHWR) for multiple operating conditions of the reactor. The model estimates the neutron flux at all horizontal mesh locations (2-D) from multiple networks distributed spatially across the AHWR core. The estimation error of the proposed D-ANN model is found to be significantly lower than that of a lumped ANN model. Validation exercises establish that the D-ANN model effectively captures the spatial variations in the reactor core and can therefore be utilized for efficient flux mapping. A real-time implementation of the D-ANN-based flux mapping method is also proposed.
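A minimal sketch of the distributed-ANN idea, assuming one small MLP per spatial region of the core, each mapping detector readings to the flux at that region's mesh points; the architecture, sizes and synthetic data are illustrative assumptions, not the AHWR model from the paper.

```python
# One MLP per core region; the full flux map concatenates regional estimates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_regions, n_detectors, n_mesh = 4, 6, 25      # per-region sizes (assumed)

# Synthetic training data: detector readings -> flux at each mesh point.
X = rng.random((500, n_detectors))
true_maps = [rng.random((n_detectors, n_mesh)) for _ in range(n_regions)]

region_nets = []
for W in true_maps:                            # one network per core region
    y = X @ W                                  # stand-in for simulated flux
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
    region_nets.append(net)

# Flux map = concatenation of all regional estimates for one reading vector.
reading = rng.random((1, n_detectors))
flux_map = np.hstack([net.predict(reading) for net in region_nets])
print(flux_map.shape)                          # (1, n_regions * n_mesh)
```

Splitting the core among small regional networks keeps each network's input-output mapping local and simple, which is one plausible reason a distributed model can outperform a single lumped ANN.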