Distributed computing offers many opportunities for Modeling and Simulation (M&S). Grid computing approaches have been developed that can use multiple computers to reduce the processing time of an application. In terms of M&S, this means simulations can be run very quickly by distributing individual runs over locally or remotely available computing resources. Distributed simulation techniques allow us to link models together over a network, enabling the creation of large models and/or models that could not otherwise be developed due to data-sharing or model-reuse problems. Using real-world examples, this advanced tutorial discusses how both approaches can benefit M&S researchers and practitioners alike.
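As a minimal illustration of the grid-style approach described above, the sketch below distributes independent simulation replications over locally available cores using Python's standard library. The model function run_replication is a hypothetical stand-in for any simulation run and is not part of the tutorial itself.

```python
# Minimal sketch: farm out independent simulation replications to local cores.
# `run_replication` is a placeholder model; a grid setup would dispatch the
# same kind of independent runs to remote resources instead.
import random
from concurrent.futures import ProcessPoolExecutor


def run_replication(seed: int) -> float:
    """Run one independent replication of a toy stochastic model and
    return an illustrative performance measure (the sample mean)."""
    rng = random.Random(seed)
    samples = [rng.expovariate(1.0) for _ in range(10_000)]
    return sum(samples) / len(samples)


if __name__ == "__main__":
    seeds = range(20)  # 20 independent runs, one seed per run
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_replication, seeds))
    print(f"mean over {len(results)} replications: "
          f"{sum(results) / len(results):.4f}")
```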
Extensive data generated by peers of nodes in wireless sensor networks (WSNs) needs to be analysed and processed in order to extract information that is meaningful to the user. Data processing techniques that achieve ...
Data in many biological problems are often compounded by imbalanced class distribution. That is, the positive examples may be largely outnumbered by the negative examples. Many classification algorithms such as support v...
As the energy consumption of embedded multiprocessor systems becomes increasingly prominent, real-time energy-efficient scheduling in multiprocessor systems, which reduces system energy consumption while meeting real-time constraints, has become an urgent problem. For a multiprocessor with independent DVFS and DPM at each processor, this paper proposes an energy-efficient real-time scheduling algorithm named LRE-DVFS-EACH, based on LRE-TL, an optimal real-time scheduling algorithm for sporadic tasks. LRE-DVFS-EACH uses the concept of the TL plane and the idea of fluid scheduling to dynamically scale the voltage and frequency of the processors at the start of each TL plane as well as at the release time of a sporadic task within a TL plane. Consequently, LRE-DVFS-EACH obtains a reasonable tradeoff between real-time constraints and energy saving. LRE-DVFS-EACH also adapts to workload changes caused by the dynamic release of sporadic tasks, which yields further energy savings. Experimental results show that, compared with existing algorithms, LRE-DVFS-EACH not only guarantees the optimal feasibility of sporadic tasks but also achieves greater energy savings in all cases, especially under high workloads.
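A hedged sketch of the kind of frequency-selection step such a scheduler performs at a TL-plane boundary is given below. The demand model (the tasks' remaining local execution summed and divided by the plane length and processor count) and the function names are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a DVFS decision at a TL-plane boundary: pick the lowest
# discrete frequency level that still covers the plane's aggregate demand.
from dataclasses import dataclass
from typing import List


@dataclass
class Task:
    local_remaining: float  # execution time still owed in this TL plane (at f_max)


def select_frequency(tasks: List[Task], plane_length: float,
                     num_processors: int, levels: List[float]) -> float:
    """Return the lowest normalised frequency (0 < f <= 1) covering the demand."""
    demand = sum(t.local_remaining for t in tasks)
    required = demand / (plane_length * num_processors)
    for f in sorted(levels):
        if f >= required:
            return f
    return max(levels)  # fall back to the highest level if demand is too high


# Example: three tasks in a 10 ms TL plane on 2 processors -> 0.6
tasks = [Task(3.0), Task(4.0), Task(5.0)]
print(select_frequency(tasks, plane_length=10.0, num_processors=2,
                       levels=[0.4, 0.6, 0.8, 1.0]))
```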
Scientific workflows are common in biomedical research, particularly for molecular docking simulations such as those used in drug discovery. Such workflows typically involve data distribution between computationally demanding stages which are usually mapped onto large-scale compute resources. Volunteer or Desktop Grid (DG) computing can provide such infrastructure but has limitations resulting from the heterogeneous nature of the compute nodes. These constraints mean that reducing the makespan of a given workflow stage submitted to a DG becomes problematic. Late jobs can significantly affect the makespan, often completing long after the bulk of the computation has finished. In this paper we present a system capable of significantly reducing the makespan of a scientific workflow. Our system comprises a DG which is dynamically augmented with an Infrastructure as a Service (IaaS) Cloud. In this solution, the Cloud resources are used to process replicated late jobs. The core component of our system, termed the scheduler, implements an algorithm that performs late-job detection, Cloud resource management (instantiation and reuse), and job monitoring. We give a formal definition of this algorithm and evaluate our prototype using a production scientific workflow.
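The sketch below illustrates one plausible late-job detection rule of the kind such a scheduler could apply: once most of a stage's jobs have finished on the DG, any job that has been running for more than a multiple of the median completion time is flagged for replication on the Cloud. The thresholds and the launch_cloud_replica call are hypothetical, not taken from the paper.

```python
# Illustrative late-job detection: flag stragglers once most jobs have finished.
import statistics
from typing import Dict, Set


def find_late_jobs(completed: Dict[str, float], running: Dict[str, float],
                   completion_fraction: float = 0.8,
                   lateness_factor: float = 2.0) -> Set[str]:
    """Return the ids of running jobs considered late.

    completed: job id -> wall-clock time the job took
    running:   job id -> elapsed wall-clock time so far
    """
    total = len(completed) + len(running)
    if total == 0 or len(completed) / total < completion_fraction:
        return set()  # too early in the stage to call anything late
    median = statistics.median(completed.values())
    return {job for job, elapsed in running.items()
            if elapsed > lateness_factor * median}


# Hypothetical use inside the scheduler loop:
# for job in find_late_jobs(completed_times, running_times):
#     launch_cloud_replica(job)  # instantiate or reuse an IaaS instance
```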
Energy-efficient computing has now become a key challenge not only for data-center operations but also for many other energy-driven systems, with a focus on reducing all energy-related costs and operational expenses, as well as the corresponding environmental impacts. Intelligent machine-learning systems are typically performance driven. For instance, most non-parametric, model-free approaches are known to require high computational cost in order to find global optima. Designing more accurate machine-learning systems to satisfy market needs will hence lead to a higher likelihood of energy waste due to the increased computational cost. This paper thus introduces an energy-efficient framework for large-scale data modeling and classification. It can achieve a test error comparable to or better than state-of-the-art machine-learning models while maintaining a low computational cost when dealing with large-scale data. The effectiveness of the proposed approaches has been demonstrated by our experiments with two large-scale KDD datasets: Mtv-1 and Mtv-2.
Agent-based crowd simulation has been widely applied in the analysis of evacuation safety under disastrous and terrorist circumstances. In crowd simulation, the virtual environment plays an important role in influenci...
ISBN (print): 9789881701282
This paper presents a tool for the visualization and simulation of automated stowage plan generation for large containerships. The stowage plan is generated automatically by a heuristic algorithm. The allocation algorithm builds the stowage plan in three main stages: basic allocation, special-container allocation, and several stages of stability adjustment. The purpose of this study is to describe how visualization of the stowage plan and simulation of the allocation sequences can help in the formulation of new and better stowage allocation algorithms.
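A rough sketch of such a staged allocation pipeline is shown below; the three stage functions are placeholders for the heuristic rules, and the slot-assignment logic is purely illustrative rather than the tool's actual algorithm.

```python
# Illustrative three-stage stowage allocation pipeline with placeholder rules.
from typing import Callable, Dict, List

Plan = Dict[str, str]          # container id -> slot id (illustrative)
Stage = Callable[[Plan, List[str]], Plan]


def basic_allocation(plan: Plan, containers: List[str]) -> Plan:
    # assign ordinary containers to free slots (placeholder rule)
    return {**plan, **{c: f"slot-{i}" for i, c in enumerate(containers)}}


def special_allocation(plan: Plan, containers: List[str]) -> Plan:
    # reefer / dangerous-goods containers would be re-assigned here
    return plan


def stability_adjustment(plan: Plan, containers: List[str]) -> Plan:
    # swap slots until trim / heel / stability constraints are met
    return plan


def generate_stowage_plan(containers: List[str]) -> Plan:
    stages: List[Stage] = [basic_allocation, special_allocation,
                           stability_adjustment]
    plan: Plan = {}
    for stage in stages:
        plan = stage(plan, containers)
    return plan


print(generate_stowage_plan(["C1", "C2", "C3"]))
```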
ISBN (print): 9789881701282
Evolutionary algorithms can efficiently solve multi-objective optimization problems (MOPs) by obtaining diverse and near-optimal solution sets. However, the performance of multi-objective evolutionary algorithms (MOEAs) is often limited by how well their parameter settings suit a given optimization problem. Tuning the parameters is a crucial task which concerns resolving the conflicting goals of convergence and diversity. Moreover, parameter tuning is a time-consuming trial-and-error optimization process, which restricts the applicability of MOEAs for real-time decision support. To address this issue, we propose a self-adaptive mechanism (SAM) to exploit and optimize the balance between exploration and exploitation during the evolutionary search. This "explore first and exploit later" approach is realized through the automated and dynamic adjustment of the distribution index of the simulated binary crossover (SBX) operator. Our experimental results suggest that SAM can produce satisfactory results on different problem sets without having to predefine or pre-optimize the MOEA's parameters. SAM can effectively alleviate the tedious process of parameter tuning, thus making on-line decision support using MOEAs more feasible.
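As an illustration of how the SBX distribution index shapes this behaviour, the sketch below pairs a standard single-variable SBX crossover with a simple linear schedule that raises the index over the run (wide, explorative offspring early; offspring close to the parents later). The linear schedule is an assumption for illustration, not the adaptation rule proposed in the paper.

```python
# Illustrative "explore first, exploit later" control of the SBX distribution
# index: a low eta early in the run spreads offspring widely, a high eta later
# keeps them close to their parents.
import random


def sbx_eta(generation: int, max_generations: int,
            eta_min: float = 2.0, eta_max: float = 20.0) -> float:
    """Linearly increase the distribution index over the run (assumed schedule)."""
    t = generation / max_generations
    return eta_min + t * (eta_max - eta_min)


def sbx_crossover(x1: float, x2: float, eta: float) -> tuple:
    """Standard simulated binary crossover for a single real variable."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2)
    c2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2)
    return c1, c2


# Early generations spread offspring widely; later ones stay near the parents.
print(sbx_crossover(1.0, 3.0, eta=sbx_eta(5, 200)))    # explorative
print(sbx_crossover(1.0, 3.0, eta=sbx_eta(190, 200)))  # exploitative
```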