Cloud computing is an inflection point for customers to transfer the burden of infrastructure and platform management to a service provider. There is virtual centralization in cloud computing. This paper provides the state of th...
ISBN: 9781509024032 (print)
This paper presents our work on the simulation of large-scale black oil models on parallel computers. An in-house platform has been developed, and a black oil simulator has been implemented on top of it that can handle the standard black oil model and the oil-water model. Numerical methods and a new parallel preconditioner are introduced. The simulator uses MPI for communication among compute nodes and is capable of simulating black oil models with hundreds of millions of grid cells. Numerical experiments show that the simulator has excellent scalability and can accelerate simulations by factors in the thousands.
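The scalability claim above can be quantified with the standard speedup and parallel-efficiency metrics. A minimal sketch in plain Python; the node counts and wall-clock times are purely illustrative assumptions, not measurements from the paper:

```python
# Hypothetical wall-clock times (seconds) for a fixed-size model
# run on increasing node counts; these numbers are illustrative only.
timings = {1: 3600.0, 16: 240.0, 256: 18.0, 1024: 5.4}

def scalability(timings):
    """Return {nodes: (speedup, parallel efficiency)} relative to 1 node."""
    t1 = timings[1]
    return {p: (t1 / tp, t1 / (tp * p)) for p, tp in sorted(timings.items())}

for p, (s, e) in scalability(timings).items():
    print(f"{p:5d} nodes: speedup {s:8.1f}, efficiency {e:5.2f}")
```

Efficiency close to 1.0 at high node counts is what "excellent scalability" means in practice; it degrades as communication starts to dominate computation.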
ISBN: 9781509052561 (print)
Cloud computing is a paradigm of both parallel processing and distributed computing. It offers computing facilities as a utility service in a pay-per-use manner. Virtualization, self-service provisioning, elasticity and pay-per-use billing are its key features. It provides different types of resources over the Internet to perform user-submitted tasks. In a cloud environment, huge numbers of tasks are executed simultaneously, so effective task scheduling is required for the cloud system to perform well. Various cloud-based task scheduling algorithms are available that schedule users' tasks to resources for execution. Because of the novelty of cloud computing, traditional scheduling algorithms cannot satisfy the cloud's needs, and researchers are trying to modify traditional algorithms to fulfil cloud requirements such as rapid elasticity, resource pooling and on-demand self-service. In this paper, the current state of task scheduling algorithms is discussed and compared on the basis of various scheduling parameters: execution time, throughput, makespan, resource utilization, quality of service, energy consumption, response time and cost.
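Two of the comparison parameters named above, makespan and resource utilization, can be computed for any concrete schedule. A minimal sketch, assuming a simple greedy least-loaded-VM baseline (a common reference point, not any specific algorithm from the survey):

```python
def greedy_schedule(task_lengths, n_vms):
    """Assign each task to the currently least-loaded VM (longest task first)."""
    loads = [0.0] * n_vms
    for t in sorted(task_lengths, reverse=True):
        i = loads.index(min(loads))  # pick the least-loaded VM
        loads[i] += t
    makespan = max(loads)                           # finish time of the last VM
    utilization = sum(loads) / (n_vms * makespan)   # fraction of busy VM-time
    return makespan, utilization

mk, util = greedy_schedule([5, 3, 8, 2, 7, 4], n_vms=3)
```

Surveyed algorithms are typically judged by how much they shrink the makespan and raise utilization relative to such a baseline.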
Cloud computing aims at allowing access to large amounts of computing power in a fully virtualized manner, by aggregating resources and offering a single system view. More and more businesses and individuals are attrac...
ISBN: 9781467399166 (print)
While communication was the major concern of traditional networks, grid computing focuses on solving problems, by harnessing unused CPU cycles, that cannot be handled by stand-alone computers. A grid, being a geographically distributed network of computers, provides a transparent, coordinated, consistent and reliable computing medium to various applications. Owing to the heterogeneity of resources in a grid environment, job scheduling is difficult and therefore needs competent schedulers. The paper provides a comparative analysis of various ant colony optimization (ACO) variants based on their effectiveness in determining optimal or near-optimal solutions. The conclusions drawn from the survey illustrate ACO's effectiveness in solving various scheduling problems. However, uncertainty in convergence time and the early designation of the initial and extreme points hinder optimal scheduling. On the basis of the literature survey, not only are guidelines extracted for the ACO algorithm, but promising directions are also provided for future work.
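The core ACO idea behind all the surveyed variants can be illustrated on the simplest grid-scheduling case, independent jobs on identical machines. A toy sketch, not any specific variant from the survey; all parameter values (ant count, iterations, evaporation rate) are assumptions:

```python
import random

def aco_schedule(jobs, n_machines, ants=20, iters=50, rho=0.1, seed=1):
    """Minimal ACO sketch: pheromone tau[j][m] biases job j toward machine m."""
    rng = random.Random(seed)
    tau = [[1.0] * n_machines for _ in jobs]
    best, best_span = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            loads = [0.0] * n_machines
            assign = []
            for j, length in enumerate(jobs):
                # Selection probability combines pheromone with a
                # greedy heuristic that favors lightly loaded machines.
                weights = [tau[j][m] / (1.0 + loads[m]) for m in range(n_machines)]
                m = rng.choices(range(n_machines), weights=weights)[0]
                assign.append(m)
                loads[m] += length
            span = max(loads)
            if span < best_span:
                best, best_span = assign, span
        # Evaporate all trails, then reinforce the best-so-far assignment.
        for j in range(len(jobs)):
            for m in range(n_machines):
                tau[j][m] *= (1.0 - rho)
            tau[j][best[j]] += 1.0 / best_span
    return best, best_span
```

The "uncertainty in convergence time" noted above shows up here directly: how fast `tau` concentrates on a good assignment depends on `rho`, the ant count and the random seed.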
ISBN: 9781509024032 (print)
The performance of parallel computers grows rapidly. However, application software is lagging behind due to two bottlenecks: the "performance wall" and the "programmability wall". These bottlenecks have prevented a lot of application software from achieving good performance and fast development. A programming framework is considered an effective way to overcome them. In this paper, we present a prototype of JAUMIN, a programming framework for large-scale application software based on unstructured meshes. Some important technologies of JAUMIN are presented, including distributed data structures, data communication patterns and application programming interfaces. Finally, some applications built on JAUMIN are demonstrated to show that JAUMIN can greatly accelerate the development of application software and support effective simulation on petascale supercomputers.
The abundance of computing technologies and devices implies that we will live in a data-driven society in the coming years. But this data-driven society requires radically new technologies in the data center to deal with data manipulation, transformation, access control, sharing and placement, among others. We advocate in this paper for a new generation of Software Defined Data Management Infrastructures covering the entire life cycle of data. On the one hand, this will require new extensible programming abstractions and services for data management in the data center. On the other hand, it also implies opening up the control plane to data owners outside the data center so that they can manage the data life cycle. We present in this article the open challenges in data-driven software-defined infrastructures and a use case based on Software Defined Protection of data. (C) 2016 The Authors. Published by Elsevier B.V.
The rapid construction of electric power systems leads to a large increase in the number of network nodes and the volume of data, significantly complicating the network reconstruction problem. The traditional serial algorithms cannot reach a satisfactory computation speed, while previously proposed parallel algorithms are only applicable to specialized clusters. In this paper, we propose a parallel algorithm for distribution network reconstruction running on a cost-effective Hadoop cluster. Our algorithm follows the MapReduce distributed computing framework. It can process the data of each network node in parallel, thus accelerating power flow calculation. Moreover, our algorithm combines depth-first and breadth-first principles in branch traversals, which substantially improves the probability of finding the optimal solution. The feasibility and effectiveness of the proposed algorithm are verified on a Hadoop cluster.
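The map/reduce split described above can be sketched locally. The example below mimics a mapper/reducer pair with a thread pool standing in for Hadoop worker nodes; the feeder/branch-loss records and key names are hypothetical, not from the paper:

```python
from collections import defaultdict
from multiprocessing.dummy import Pool  # thread pool stands in for cluster nodes

# Hypothetical branch records: (feeder_id, branch_loss_kw). The real input
# would be distribution-network data split across HDFS blocks.
records = [("F1", 1.2), ("F2", 0.7), ("F1", 0.4), ("F2", 0.9), ("F1", 0.3)]

def map_phase(record):
    """Map: emit one (key, value) pair per branch, as a Hadoop mapper would."""
    feeder, loss = record
    return (feeder, loss)

def reduce_phase(pairs):
    """Reduce: aggregate values per key, mimicking a Hadoop reducer."""
    totals = defaultdict(float)
    for k, v in pairs:
        totals[k] += v
    return dict(totals)

with Pool(4) as pool:
    losses = reduce_phase(pool.map(map_phase, records))
```

The point of the pattern is that each record (here, each branch) is processed independently in the map phase, so the work parallelizes across however many nodes the cluster has.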
ISBN: 9781509045808 (print)
The proceedings contain 10 papers. The topics discussed include: the role of open source software in program analysis for reverse engineering; the malware detection challenge of accuracy; 3D gesture-based control system using Processing open source software; automated analysis of flow cytometry data: a systematic review of recent methods; toward extending Apache Thrift open source to alleviate SOAP service consumption; test suite effectiveness: an indicator for open source software quality; innovative methodology for elevating big data analysis and security; a new data-intensive task scheduling in OptorSim, an open source grid simulator; monitoring 'grid-cloud' model using complex event processing (CEP); and on tackling social engineering web phishing attacks utilizing software defined networks (SDN) approach.
The main intent of the research is to learn more about the ways in which PhD students in the software engineering field can prepare themselves for careers in a modern, demanding and fast-changing job market. Results of the first, diagnostic stage of the study are presented, based on a simple quantitative survey of a group of PhD students, a significant part of which works in the field of parallel, distributed and cloud computing. This group is currently enrolled in an international program within an Erasmus+ project of eleven universities from five European countries, Russia and Jordan. Processing and statistical analysis of the survey results made it possible to identify the groups of professional skills most significant for future work, to find out the PhD students' level of knowledge and mastery of these skills, and to evaluate the students' intention to obtain them. Conclusions and recommendations are presented based on a comparison of the list of skills ranked by the PhD students with a list of skills required by employers in areas close to software engineering.