ISBN (Print): 9781665435741
With the continuous development of data-centric Internet applications, data management solutions for massive data have gradually become a new research direction in vector map rendering. The introduction of distributed "cloud computing" technology has continuously improved the storage and management of massive data. Supported by a large number of distributed systems, the proposed system provides normal services to the outside world as well as fast data location services. The system performs well in data storage efficiency, node load balancing, and system scalability, and has an efficient data positioning mechanism. The basic design draws on the hash algorithm and the consistent hash algorithm, improving the scalability of the basic algorithm through a three-tier layout: a front-end server layer, a consistent hash layer, and a data node layer, which together maximize the consistency, availability, efficiency, and scalability of the system.
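The abstract describes the consistent-hash layer only at the architecture level. As a rough illustration of the idea (not the paper's actual implementation; all names here, such as ConsistentHashRing and the virtual-node count, are hypothetical), a minimal consistent-hash ring might look like:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes (a sketch)."""

    def __init__(self, nodes=(), vnodes=32):
        self.vnodes = vnodes
        self._keys = []   # sorted hash positions on the ring
        self._ring = []   # parallel list of (hash, node) pairs
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node):
        # each physical data node occupies several virtual positions,
        # which smooths the load distribution
        for i in range(self.vnodes):
            h = self._hash(f"{node}#{i}")
            idx = bisect.bisect(self._keys, h)
            self._keys.insert(idx, h)
            self._ring.insert(idx, (h, node))

    def remove_node(self, node):
        pairs = [(h, n) for h, n in self._ring if n != node]
        self._ring = pairs
        self._keys = [h for h, _ in pairs]

    def locate(self, key):
        """Return the data node responsible for `key` (first node
        clockwise from the key's hash position)."""
        h = self._hash(key)
        idx = bisect.bisect(self._keys, h) % len(self._keys)
        return self._ring[idx][1]
```

The property that motivates consistent hashing: when a node is removed, only the keys it owned are remapped, while all other keys keep their owners.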
ISBN (Print): 9781538637906
The rechargeable sensor network is promising for various applications. However, improving network performance is challenging, because the energy depletion of the sensor nodes will result in abnormal death of the nodes. In this paper, we propose a hybrid framework to model the abnormal death of the sensor nodes. Based on Markov fluid queue theory, the model consists of three parts: a Markov process that simulates the charging behavior, a queuing model that traces the working mechanism of rechargeable sensor nodes, and a continuous fluid process that indicates the energy level of the sensor nodes. The numerical results show that our model can effectively predict the probability of abnormal death and the stationary energy consumption of the sensor nodes.
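The paper's analysis is a closed-form Markov fluid queue; as a loose intuition for the three ingredients it names, a Monte-Carlo sketch (all parameters here, e.g. the charging probabilities and drain rates, are invented for illustration) could couple a two-state Markov chain with a fluid energy level:

```python
import random

def simulate_node(steps=10_000, p_charge=0.3, p_stop=0.2,
                  harvest=1.5, drain=1.0, capacity=50.0, seed=1):
    """Sketch of a rechargeable node: a two-state Markov chain switches
    between charging and working, while a fluid variable tracks the
    battery level; hitting zero counts as an abnormal death."""
    rng = random.Random(seed)
    energy = capacity
    charging = False
    deaths = 0
    for _ in range(steps):
        # Markov transitions between the charging and working states
        if charging and rng.random() < p_stop:
            charging = False
        elif not charging and rng.random() < p_charge:
            charging = True
        # fluid process: energy rises while charging, falls while working
        energy = min(energy + (harvest if charging else -drain), capacity)
        if energy <= 0:          # abnormal death: battery depleted mid-task
            deaths += 1
            energy = capacity    # restart the node for the next sample
    return deaths / steps
```

The returned fraction is a crude empirical stand-in for the death probability the paper derives analytically.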
In this paper, we propose system support for building adaptive migratory continuous media applications in distributed real-time environments. In future distributed computing environments, various objects in homes and ...
ISBN (Print): 9781538637906
In recent years, chaos-based image ciphers have been widely studied, and a growing number of schemes based on the permutation-substitution architecture have been proposed. To better meet the challenge of real-time secure image communication applications, this paper suggests a new image encryption scheme using parallel substitution. In the permutation stage, the Arnold cat map is employed to shuffle the pixel positions so as to erase the strong correlation between adjacent pixels. In the substitution stage, the scrambled image is first decomposed into eight bit-planes, which are then mixed in parallel with keystreams generated by the chaotic logistic map. Theoretically, the parallel substitution strategy runs eight times faster than the serial strategy on an 8-thread processor, as the volume of data processed by each substitution unit is 1/8 of that of the input image. Experimental results show that the proposed parallel scheme runs more than five times faster than the serial scheme. Extensive security analysis demonstrates the satisfactory security of the proposed scheme.
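The two named building blocks, the Arnold cat map and the logistic map, are standard; a serial toy version of the permutation-substitution pipeline (not the paper's parallel implementation; XOR-ing whole bytes is numerically equivalent to XOR-ing the eight bit-planes separately) might be:

```python
def arnold_cat(img, rounds=1):
    """Permutation stage: Arnold cat map (x,y) -> (x+y, x+2y) mod n
    on a square image given as a list of rows of pixel bytes."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

def logistic_keystream(x0, length, mu=3.99):
    """Chaotic logistic map x -> mu*x*(1-x), quantised to bytes."""
    ks, x = [], x0
    for _ in range(length):
        x = mu * x * (1 - x)
        ks.append(int(x * 256) & 0xFF)
    return ks

def substitute(img, x0):
    """Substitution stage: XOR every pixel with the keystream (a serial
    stand-in for the paper's eight parallel bit-plane units)."""
    n = len(img)
    ks = logistic_keystream(x0, n * n)
    flat = [p for row in img for p in row]
    enc = [p ^ k for p, k in zip(flat, ks)]
    return [enc[i * n:(i + 1) * n] for i in range(n)]

def encrypt(img, rounds=3, x0=0.37):
    return substitute(arnold_cat(img, rounds), x0)
```

Decryption reverses the pipeline: XOR with the same keystream undoes the substitution, and the cat map is inverted with the matrix [[2,-1],[-1,1]] mod n.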
ISBN (Print): 9781538637906
The extensive growth of smartphones has spawned the propagation of malicious applications. Due to the increasing use of polymorphic malware, detection is becoming more difficult. To this end, ensemble learning has been proposed to improve accuracy in malware detection without severely sacrificing time complexity. In this paper, we propose a hybrid detection system, TFBOOST, which incorporates the tensor filter algorithm into a boosting ensemble generalization architecture in order to improve detection efficacy. TFBOOST uses static analysis to extract features and a level-by-level boosting structure with a re-sampling process to diversify the base learners. Experimental results show that TFBOOST generally outperforms state-of-the-art ensemble algorithms, with higher detection precision and lower false positive rates. Finally, we visually interpret the high-level results of TFBOOST and conjecture that repackaged malware is the mainstay of potential malware.
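The tensor filter itself is not specified in the abstract, so it is not reproduced here; a generic level-by-level boosting loop with the re-sampling step the abstract mentions (a sketch, with stump base learners chosen purely for illustration) could look like:

```python
import math
import random

def train_stump(data):
    """Base learner: an axis-aligned threshold stump over labeled
    feature vectors [( [f0, f1, ...], +1/-1 ), ...]."""
    best = None
    for f in range(len(data[0][0])):
        for thr in sorted({x[f] for x, _ in data}):
            for sign in (1, -1):
                err = sum(1 for x, y in data
                          if (1 if sign * (x[f] - thr) >= 0 else -1) != y)
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    _, f, thr, sign = best
    return lambda x: 1 if sign * (x[f] - thr) >= 0 else -1

def boost(data, levels=5, seed=0):
    """Level-by-level boosting with re-sampling: each level trains its
    base learner on a bootstrap sample drawn with the current weights,
    so later learners focus on previously misclassified points."""
    rng = random.Random(seed)
    n = len(data)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(levels):
        sample = rng.choices(data, weights=w, k=n)    # re-sampling step
        h = train_stump(sample)
        err = sum(wi for (x, y), wi in zip(data, w) if h(x) != y)
        err = min(max(err, 1e-9), 1 - 1e-9)           # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        w = [wi * math.exp(-alpha * y * h(x)) for (x, y), wi in zip(data, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```

In TFBOOST the feature vectors would come from static analysis of APKs and the base-learner diversity from this same weighted bootstrap.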
ISBN (Print): 9781509036820
Graphics Processing Units (GPUs) have seen widespread adoption in the field of scientific computing, owing to the performance gains they provide on computation-intensive applications. In this paper, we present the design and implementation of a Hessenberg reduction algorithm immune to simultaneous soft errors, capable of taking advantage of hybrid GPU-CPU platforms. These soft errors are detected and corrected on the fly, preventing the propagation of the error to the rest of the data. Our design sits at the intersection of several fault-tolerance techniques and employs algorithm-based fault tolerance, diskless checkpointing, and reverse computation to achieve its goal. By utilizing the idle time of the CPUs and overlapping both host-side and GPU-side workloads, we minimize the resilience overhead. Experimental results have validated our design decisions, as our algorithm introduced less than 2% performance overhead compared to the optimized, but fault-prone, hybrid Hessenberg reduction.
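The full GPU-resident Hessenberg reduction is well beyond a snippet, but the core idea of algorithm-based fault tolerance, checksum-encoded matrices whose checksums survive the computation, can be illustrated on a plain matrix product (a textbook Huang-Abraham-style sketch, not the paper's scheme):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def add_checksum_row(A):
    """ABFT encoding of the left operand: append a row of column sums."""
    return A + [[sum(col) for col in zip(*A)]]

def add_checksum_col(B):
    """ABFT encoding of the right operand: append a column of row sums."""
    return [row + [sum(row)] for row in B]

def detect_and_correct(C, tol=1e-6):
    """C = checksum_row(A) x checksum_col(B) carries both checksums.
    A single corrupted entry is located by its failing row check and
    failing column check, then repaired from the row checksum."""
    n, p = len(C) - 1, len(C[0]) - 1
    bad_rows = [i for i in range(n) if abs(sum(C[i][:p]) - C[i][p]) > tol]
    bad_cols = [j for j in range(p)
                if abs(sum(C[i][j] for i in range(n)) - C[n][j]) > tol]
    if bad_rows and bad_cols:
        i, j = bad_rows[0], bad_cols[0]
        C[i][j] = C[i][p] - sum(C[i][k] for k in range(p) if k != j)
    return C
```

The same principle lets a fault injected mid-computation be corrected before it propagates, which is what "on the fly" means in the abstract.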
ISBN (Print): 9781538637906
Virtual Machine Migration (VMM) is a key technology in data centers. Due to the uncertainty of applications' resource demands, resource utilization is often badly imbalanced. In this paper, an Auto-regressive Moving Average (ARMA) model is proposed to predict the resource requirement of a given virtual machine, while the resource utilization rate of the physical machine is analyzed. Since existing VMM schemes basically follow one-by-one migration, which prevents the migration from achieving the global optimum and saving more energy, this paper introduces a migration cost matrix, recommends a set of migrations with the best performance from a global point of view, and improves the "three-step" method of the traditional migration scheme to achieve energy efficiency. Simulation experiments show that the proposed scheme can effectively reduce energy consumption and can improve the quality of service to a certain degree.
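The prediction step can be sketched with just the autoregressive part of an ARMA model fitted by least squares (a simplification: the paper's model and parameters are not specified, and the function names here are invented):

```python
import numpy as np

def fit_ar(series, p=2):
    """Least-squares fit of the AR part of an ARMA model:
    x_t ~ a_1*x_{t-1} + ... + a_p*x_{t-p} + c."""
    series = np.asarray(series, dtype=float)
    X = np.column_stack(
        [series[p - k - 1:len(series) - k - 1] for k in range(p)]
        + [np.ones(len(series) - p)])
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coef                      # [a_1, ..., a_p, c]

def predict_next(series, coef):
    """One-step-ahead forecast of the next resource sample, the quantity
    a migration planner would feed into its cost matrix."""
    p = len(coef) - 1
    lags = np.asarray(series[-1:-p - 1:-1], dtype=float)  # newest first
    return float(coef[:-1] @ lags + coef[-1])
```

Forecasting each VM's demand this way is what lets the scheme pick a whole set of migrations at once instead of reacting to overloads one by one.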
ISBN (Print): 9781538637906
Enterprise architects and information system designers need to understand and manage workflows, data flows, and social interactions to design tools and systems for well-coordinated organizational operations. However, the nature of organizations has drastically transformed over recent years due to the wide-scale use of new computing technologies. Disintegrated structures, large quantities of frequently generated data, and dubious system and interaction boundaries are some of the obvious identifiers of a modern enterprise, where poorly designed coordination can lead to serious privacy risks. Old coordination modeling frameworks do not sit well with the new organizational settings, and a need for alternative models and frameworks has been felt. In this paper, we propose a privacy-aware conceptual framework for understanding coordination by identifying and mapping work, data, and interaction patterns in organizational environments. These propositions are intended to help practitioners develop an updated understanding of coordination that serves privacy needs as well.
ISBN (Print): 9780769546766
Recently, the computational requirements for large-scale data-intensive analysis of scientific data have grown significantly. In High Energy Physics (HEP), for example, the Large Hadron Collider (LHC) produced 13 petabytes of data in 2010. This huge amount of data is processed on more than 140 computing centers distributed across 34 countries. The MapReduce paradigm has emerged as a highly successful programming model for large-scale data-intensive computing applications. However, current MapReduce implementations are developed to operate on single-cluster environments and cannot be leveraged for large-scale distributed data processing across multiple clusters. On the other hand, workflow systems are used for distributed data processing across data centers, but it has been reported that the workflow paradigm has some limitations for distributed data processing, such as reliability and efficiency. In this paper, we present the design and implementation of G-Hadoop, a MapReduce framework that aims to enable large-scale distributed computing across multiple clusters. G-Hadoop uses the Gfarm file system as its underlying file system and executes MapReduce tasks across distributed clusters. Experiments with the G-Hadoop framework on distributed clusters show encouraging results.
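G-Hadoop itself is a full framework, but the cross-cluster MapReduce idea it generalizes can be shown with a toy word count in which each shard stands in for the data held on one cluster's file system (Gfarm, in G-Hadoop's case); this is an illustration of the paradigm, not of G-Hadoop's code:

```python
from collections import Counter
from functools import reduce

def map_phase(records):
    """Map tasks run inside one cluster: emit local word counts."""
    return Counter(word for record in records for word in record.split())

def reduce_phase(partials):
    """Reduce step: merge the per-cluster partial results
    (the shuffle between clusters is implicit here)."""
    return reduce(lambda acc, c: acc + c, partials, Counter())

# each inner list stands in for one cluster's local shard
clusters = [
    ["lhc event data", "event data stream"],
    ["lhc detector data"],
]
totals = reduce_phase(map_phase(shard) for shard in clusters)
```

The point the paper makes is that stock Hadoop cannot span the cluster boundary that the `clusters` list models here, while its workflow-system alternatives pay in reliability and efficiency.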