Due to the strict energy limitation and the common vulnerability of WSNs, providing efficient and secure data gathering in WSNs has become an essential problem. Compressive data gathering, which is based on the recent ...
With the increase in energy of the Large Hadron Collider to a centre-of-mass energy of 13 TeV for Run 2, events with dense environments, such as in the cores of high-energy jets, became a focus for new physics searches as well as measurements of the Standard Model. These environments are characterized by charged-particle separations of the order of the tracking detector's sensor granularity. Basic track quantities are compared between 3.2 fb^(-1) of data collected by the ATLAS experiment and simulation of proton-proton collisions producing high-transverse-momentum jets at a centre-of-mass energy of 13 TeV. The impact of charged-particle separations and multiplicities on the track reconstruction performance is discussed. The track reconstruction efficiency in the cores of jets with transverse momenta between 200 and 1600 GeV is quantified using a novel, data-driven method. The method uses the energy loss, dE/dx, to identify pixel clusters originating from two charged particles. Of the charged particles creating these clusters, the measured fraction that fail to be reconstructed is 0.061 +/- 0.006 (stat.) +/- 0.014 (syst.) and 0.093 +/- 0.017 (stat.) +/- 0.021 (syst.) for jet transverse momenta of 200-400 GeV and 1400-1600 GeV, respectively.
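As a back-of-envelope illustration of the counting behind such a measurement (the actual analysis is far more involved), one can estimate the lost-track fraction from the number of dE/dx-identified two-particle clusters and the number of reconstructed tracks matched to them; the function and the numbers below are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the counting step behind the data-driven
# inefficiency measurement: given N pixel clusters identified (via a
# dE/dx roughly twice the single-particle value) as coming from two
# charged particles, and the number of reconstructed tracks matched to
# them, estimate the fraction of charged particles with no track.
import math

def lost_track_fraction(n_two_particle_clusters, n_matched_tracks):
    """Fraction of the 2*N expected charged particles that have no
    reconstructed track, with a simple binomial statistical error."""
    n_expected = 2 * n_two_particle_clusters
    n_lost = n_expected - n_matched_tracks
    frac = n_lost / n_expected
    stat_err = math.sqrt(frac * (1.0 - frac) / n_expected)
    return frac, stat_err

# Illustrative numbers only (not from the paper):
frac, err = lost_track_fraction(n_two_particle_clusters=10_000,
                                n_matched_tracks=18_800)
print(f"lost fraction = {frac:.3f} +/- {err:.3f} (stat.)")
```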
ISBN (print): 9783981537048
Data centers have been widely employed to offer reliable and agile on-demand web services. However, the dramatic increase in operational cost, largely due to power consumption, has posed a significant challenge to service providers as services expand in both scale and scope. In this paper, we study the problem of how to improve resource utilization and minimize power consumption in a data center with guaranteed quality-of-service (QoS). Unlike a common approach that separates requests with different QoS levels onto different servers, we devise an approach that packs requests of the same service, even with different QoS requirements, into the same server to improve resource usage. We also develop a novel method to improve system utilization without compromising QoS levels by removing requests that are likely to fail. Experimental results show the superiority of our approach over other widely applied approaches.
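The abstract does not spell out the packing algorithm, but the core idea, co-locating requests of the same service with different QoS levels on one server subject to its capacity, can be sketched as a simple first-fit-decreasing heuristic; the request/server model below is an illustrative assumption, not the paper's actual scheme.

```python
# Minimal first-fit-decreasing sketch of the packing idea: requests for
# the same service, even with different QoS levels, share a server as
# long as its capacity is respected. Names and the single-dimensional
# capacity model are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Server:
    capacity: float            # e.g. normalized CPU share
    load: float = 0.0
    requests: list = field(default_factory=list)

def pack(requests, capacity=1.0):
    """Place each (service, qos, demand) request on the first server
    with room, opening a new server only when none fits."""
    servers = []
    for req in sorted(requests, key=lambda r: -r["demand"]):
        for s in servers:
            if s.load + req["demand"] <= capacity:
                s.load += req["demand"]
                s.requests.append(req)
                break
        else:
            servers.append(Server(capacity, req["demand"], [req]))
    return servers

reqs = [{"service": "web", "qos": "gold", "demand": 0.5},
        {"service": "web", "qos": "silver", "demand": 0.3},
        {"service": "web", "qos": "bronze", "demand": 0.4}]
print(len(pack(reqs)), "server(s) used")   # -> 2 server(s) used
```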
Direct evidence is provided for the transition from surface conduction (SC) to electro-osmotic flow (EOF) above a critical channel depth (d) of a nanofluidic device. The dependence of the overlimiting conductance (OLC) on d is consistent with theoretical predictions, scaling as d^(-1) for SC and d^(4/5) for EOF, with a minimum around d = 8 μm. The propagation of transient deionization shocks is also visualized, revealing complex patterns of EOF vortices and unstable convection with increasing d. This unified picture of surface-driven OLC can guide further advances in electrokinetic theory, as well as engineering applications of ion concentration polarization in microfluidics and porous media.
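The two quoted scaling laws can be combined into a simple crossover estimate: if G(d) ≈ A·d^(-1) + B·d^(4/5), setting dG/dd = 0 gives a minimum at d* = (5A/4B)^(5/9). The prefactors in the sketch below are arbitrary, tuned only so the minimum lands near the observed d ≈ 8 μm; they are not from the paper.

```python
# Hedged sketch of the two OLC scaling regimes: a surface-conduction
# term falling as A/d plus an electro-osmotic term growing as B*d**(4/5).
# The analytic minimum is at d* = (5A/(4B))**(5/9).

def olc(d, A, B):
    """Combined overlimiting conductance (arbitrary units), d in microns."""
    return A / d + B * d**0.8

A = 1.0
B = 1.25 * A / 8.0**1.8            # chosen so the minimum sits at d* = 8 um
d_star = (5.0 * A / (4.0 * B))**(5.0 / 9.0)
print(f"crossover depth d* = {d_star:.2f} um, G(d*) = {olc(d_star, A, B):.3f}")
```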
ISBN (print): 9781479975617
Recent developments in gene sequencing technology have greatly accelerated the discovery of mutations that cause various genetic disorders. At the same time, a typical sequencing experiment generates a large number of candidate mutations, so detecting the single or few causative variants is still a formidable problem. Many computational methods have been proposed to assist this process, a large portion of which employ statistical learning in some form. Consequently, each newly designed algorithm is routinely compared to other competing systems in the hope of demonstrating advantageous performance. In this work we review and discuss several issues related to the current practice of evaluating mutation prioritization algorithms and suggest possible directions for improvement.
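As one concrete example of the kind of evaluation protocol under discussion (an illustration, not the specific practice the paper critiques), prioritization methods are often scored by the rank they assign to the known causative variant, summarized as top-k accuracy and median rank:

```python
# Rank-based evaluation of variant prioritization: for each case, rank
# candidate variants by predicted pathogenicity, record the rank of the
# known causative variant, and summarize across cases.
import statistics

def evaluate(cases, k=10):
    """cases: list of (scores, causative_index); higher score means
    more likely causative. Returns (top-k accuracy, median rank)."""
    ranks = []
    for scores, causative in cases:
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        ranks.append(order.index(causative) + 1)   # 1-based rank
    top_k = sum(r <= k for r in ranks) / len(ranks)
    return top_k, statistics.median(ranks)

# Toy data: two cases with 5 candidate variants each.
cases = [([0.1, 0.9, 0.3, 0.2, 0.4], 1),   # causative variant ranked 1st
         ([0.8, 0.2, 0.7, 0.9, 0.1], 2)]   # causative variant ranked 3rd
print(evaluate(cases, k=2))                # -> (0.5, 2.0)
```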
Fault detection is a task of major importance in industry nowadays, since it can considerably reduce the risk of accidents involving human lives, as well as production and, consequently, financial losses. The...
In the next generation of Cloud computing systems (the so-called Inter-Cloud), it is expected that multiple Cloud Service Providers (CSPs) will collaborate or cooperate to form the so-called Cloud Market, with the goal of advertising their services and associated prices to their end users. In this environment, mobile users will be able to negotiate and select the CSP that best suits their budgetary and technical needs in a more flexible manner. However, despite the benefit of having multiple CSPs to choose from, mobile users will have to cope with several issues, for instance, that of selecting a suitable CSP to handle their incoming service requests. To address this problem, this paper proposes an optimal Cloud Broker, whose role is to provide the best possible match between mobile users and CSPs. The Cloud Broker design is formulated using the framework of a Semi-Markov Decision Process (SMDP). Taking the average cost criterion as the optimality criterion, the Value Iteration Algorithm is used to find the optimal Broker policy. Numerical results are presented that demonstrate the effectiveness of our proposed Broker design in a scenario where a wireless service provider has its own Cloudlet and a Service Level Agreement (SLA) has been established with a public CSP to handle overload occurrences.
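For the average cost criterion, value iteration is typically run in its "relative" form. The sketch below shows relative value iteration on a small discrete-time model (a uniformized SMDP reduces to this case); the two-state transition and cost matrices are illustrative assumptions, not the paper's broker model.

```python
# Minimal relative value iteration for an average-cost decision process.
# A real SMDP would first be uniformized into an equivalent discrete-time
# MDP; the toy transition/cost model below is only an illustration.
import numpy as np

def relative_value_iteration(P, c, tol=1e-8, max_iter=10_000):
    """P[a][s, s'] transition probabilities, c[a][s] one-step costs.
    Returns (optimal average cost g, relative values h, greedy policy)."""
    n = P[0].shape[0]
    h = np.zeros(n)
    for _ in range(max_iter):
        Q = np.array([c[a] + P[a] @ h for a in range(len(P))])
        h_new = Q.min(axis=0)
        g = h_new[0]                  # state 0 as the reference state
        h_new = h_new - g             # keep values bounded
        if np.max(np.abs(h_new - h)) < tol:
            return g, h_new, Q.argmin(axis=0)
        h = h_new
    return g, h, Q.argmin(axis=0)

# Toy 2-state, 2-action example: action 0 = serve on the local Cloudlet,
# action 1 = offload to the public CSP (states = load levels).
P = [np.array([[0.9, 0.1], [0.5, 0.5]]),
     np.array([[0.2, 0.8], [0.1, 0.9]])]
c = [np.array([1.0, 4.0]), np.array([2.0, 2.5])]
g, h, policy = relative_value_iteration(P, c)
print(f"optimal average cost = {g:.3f}, policy = {policy}")
```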
Land use and land cover change (LUCC) analysis is necessary to explore the factors leading to heavy drought and rain-flood disasters in some districts of Sichuan province. A method based on RS, GIS, GPS and Google Earth (GE) is presented to establish a LUCC database for Sichuan province and Chengdu district. First, LUCC was interpreted from the new-date images and the land use and land cover database derived from TM in ***; some ground objects that could not be identified in the new-date images were interpreted using GE with higher-spatial-resolution images. Third, the newly interpreted LUCC was validated in the field with a handheld GPS receiver. The LUCC of Sichuan province was then updated. A comparative analysis of LUCC between Sichuan province and Chengdu district was conducted, and the results showed: (1) a large amount of farmland in Sichuan province, 84,573 ha, was occupied from 2000 to 2005, while construction land increased markedly, by 35,828 ha; the dynamic degree of construction land was 111.10‰ over 2000-2005. The LUCC demonstrated that the economy of Sichuan province continued to develop, cities were sprawling, and the urban heat island effect worsened from 2000 to 2005. (2) A large amount of farmland, 12,989 ha, was also occupied in Chengdu district from 2000 to 2005. The lost farmland was mainly converted to construction land, which accounted for 93%; the dynamic degree was 117.41‰ over 2000-2005, larger than that of Sichuan province.
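The per-mille figures are annual "dynamic degree" values; by the standard single land-use dynamic degree formula, K = (U_b - U_a)/(U_a * T), the quoted rate follows from the area change and a base-year area. The base area in the sketch below is back-computed to be consistent with the quoted 35,828 ha gain and 111.10‰ rate, purely for illustration; it is not given in the abstract.

```python
# Standard single land-use dynamic degree: annual rate of change of one
# land-use class relative to its base-year area.
def dynamic_degree(area_start_ha, area_end_ha, years):
    """K = (U_b - U_a) / U_a / T, returned as a fraction per year."""
    return (area_end_ha - area_start_ha) / area_start_ha / years

# Illustrative only: a base area of ~64,498 ha growing by the quoted
# 35,828 ha over 2000-2005 reproduces the quoted 111.10 per mille rate.
K = dynamic_degree(64_498, 100_326, 5)
print(f"dynamic degree = {K * 1000:.2f} per mille per year")
```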
Network congestion is one of the primary causes of performance degradation, performance variability and poor scaling in communication-heavy parallel applications. However, the causes and mechanisms of network congestion on modern interconnection networks are not well understood. We need new approaches to analyze, model and predict this critical behaviour in order to improve the performance of large-scale parallel applications. This paper applies supervised learning algorithms, such as forests of extremely randomized trees and gradient-boosted regression trees, to perform regression analysis on communication data and application execution time. Using data derived from multiple executions, we create models to predict the execution time of communication-heavy parallel applications. This analysis also identifies the features, and associated hardware components, that have the most impact on network congestion and, in turn, on execution time. The ideas presented in this paper have wide applicability: predicting the execution time on a different number of nodes, on different input datasets, or even for an unknown code; identifying the best configuration parameters for an application; and finding the root causes of network congestion on different architectures.
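Both named learners have standard scikit-learn implementations, which makes the modelling step easy to sketch; the feature names and synthetic data below are placeholders for real per-run communication counters, not the paper's dataset.

```python
# Sketch of the regression approach: fit extremely randomized trees and
# gradient-boosted regression trees on per-run communication features,
# predict execution time, and inspect feature importances to see which
# counters drive congestion. Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, GradientBoostingRegressor

rng = np.random.default_rng(0)
features = ["avg_bytes_per_link", "max_dilation", "stalls_per_packet",
            "msgs_per_node", "avg_hops"]
X = rng.random((200, len(features)))
y = 3 * X[:, 2] + X[:, 0] ** 2 + 0.1 * rng.standard_normal(200)  # toy time

for model in (ExtraTreesRegressor(n_estimators=300, random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X[:150], y[:150])                 # train on first 150 runs
    r2 = model.score(X[150:], y[150:])          # R^2 on held-out runs
    top = sorted(zip(features, model.feature_importances_),
                 key=lambda p: -p[1])[:2]
    print(type(model).__name__, f"R^2 = {r2:.2f}", "top features:", top)
```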