An integrated generation, transmission, and energy storage planning model accounting for short-term constraints and long-term uncertainty is proposed. The model accurately quantifies the value of flexibility options in renewable power systems by representing short-term operation through unit commitment constraints. Long-term uncertainty is represented through a scenario tree. The resulting model is a large-scale multi-stage stochastic mixed-integer programming problem. To overcome the computational burden, a distributed computing framework based on the novel Column Generation and Sharing algorithm is proposed. The performance improvement of the proposed approach is demonstrated through case studies on the NREL 118-bus power system. The results confirm the added value of modeling short-term constraints and long-term uncertainty simultaneously. The computational case studies show that the proposed solution approach clearly outperforms the state of the art in terms of computational performance and accuracy. The proposed planning framework is used to assess the value of energy storage systems in the transition to a low-carbon power system.
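To make the scenario-tree idea concrete, here is a minimal sketch (not the paper's model) of how long-term uncertainty can be represented as a tree of probability-weighted nodes; the two-stage toy, its demand numbers, and the capacity decision are all illustrative assumptions.

```python
# Minimal scenario-tree sketch for a toy two-stage capacity decision.
# All names and numbers are illustrative, not from the paper.
from dataclasses import dataclass, field

@dataclass
class Node:
    prob: float              # unconditional probability of reaching this node
    demand: float            # realized demand (MW) at this node
    children: list = field(default_factory=list)

# Root (stage 0) branches into high/low demand futures (stage 1).
root = Node(prob=1.0, demand=100.0, children=[
    Node(prob=0.4, demand=150.0),   # high-demand scenario
    Node(prob=0.6, demand=90.0),    # low-demand scenario
])

def expected_shortfall(node, capacity):
    """Probability-weighted unmet demand summed over the whole tree."""
    short = node.prob * max(node.demand - capacity, 0.0)
    return short + sum(expected_shortfall(c, capacity) for c in node.children)

# Building 120 MW covers the root and the low scenario, not the high one:
print(expected_shortfall(root, 120.0))  # 0.4 * (150 - 120) = 12.0
```

A full planning model would wrap such a tree in a mixed-integer program with investment variables at each node; the sketch only shows how scenario probabilities weight outcomes.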
ISBN:
(Print) 9781728169811
To support large-scale intelligent applications, distributed machine learning based on JointCloud is an intuitive solution. However, distributed machine learning is difficult to train because the corresponding optimization solvers converge slowly and place heavy demands on computing and memory resources. To overcome these challenges, we propose a computing framework for the L-BFGS optimization algorithm based on a variance reduction method, which can use a fixed, large learning rate to linearly accelerate convergence. To validate our claims, we have conducted several experiments on multiple classical datasets. Experimental results show that the computing framework accelerates the training process of the solver and obtains accurate results for machine learning algorithms.
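The variance-reduction ingredient the abstract refers to can be sketched as an SVRG-style update (the actual coupling with L-BFGS is not reproduced here); the toy objective, data, and learning rate below are our own assumptions.

```python
# Hedged sketch of a variance-reduced SGD (SVRG-style) step, the ingredient
# combined with L-BFGS in the paper. Toy problem: minimize
# f(w) = mean_i (w - x_i)^2, whose minimizer is the data mean (2.5 here).
import random

data = [1.0, 2.0, 3.0, 4.0]
grad = lambda w, x: 2.0 * (w - x)                       # per-sample gradient
full_grad = lambda w: sum(grad(w, x) for x in data) / len(data)

random.seed(0)
w = 0.0
lr = 0.25                                               # fixed "big" learning rate
for epoch in range(30):
    w_snap, mu = w, full_grad(w)                        # snapshot + full gradient
    for _ in range(len(data)):
        x = random.choice(data)
        # variance-reduced estimate: unbiased, variance shrinks near w_snap
        g = grad(w, x) - grad(w_snap, x) + mu
        w -= lr * g

print(round(w, 3))  # converges to the data mean, 2.5
```

For this quadratic the correction term cancels the sampling noise entirely, which is why a fixed large step size converges; on general losses the variance is reduced rather than eliminated.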
ISBN:
(Print) 9783030576752; 9783030576745
Current scientific workflows are large and complex. They typically perform thousands of simulations whose results, combined with search and data analytics algorithms to infer new knowledge, generate a very large amount of data. To this end, workflows comprise many tasks, and some of them may fail. Most work on failure management in workflow managers and runtimes focuses on recovering from failures caused by resources (retrying or resubmitting the failed computation on other resources, etc.). However, some failures can be caused by the application itself (corrupted data, algorithms that do not converge under certain conditions, etc.), and these fault tolerance mechanisms are not sufficient to complete a workflow execution successfully. In these cases, developers have to add code to their applications to prevent and manage the possible failures. In this paper, we propose a simple interface and a set of transparent runtime mechanisms to simplify how scientists deal with application-based failures in task-based parallel workflows. We have validated our proposal with use cases from e-science and machine learning to show the benefits of the proposed interface and mechanisms in terms of programming productivity and performance.
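A hypothetical sketch of such an interface (the decorator name, policy values, and semantics are our assumptions, loosely inspired by the abstract, not the paper's actual API): tasks declare a per-task failure policy so application-level errors do not abort the whole workflow.

```python
# Hypothetical per-task failure-policy decorator for a task-based workflow.
# "retry" re-runs the task a bounded number of times; "ignore" substitutes a
# default result so downstream tasks can proceed.
import functools

def task(on_failure="retry", retries=2, default=None):
    def wrap(fn):
        @functools.wraps(fn)
        def run(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if on_failure == "ignore":
                        return default          # keep the workflow alive
            raise RuntimeError(f"task {fn.__name__} failed after retries")
        return run
    return wrap

@task(on_failure="ignore", default=float("nan"))
def analyze(x):
    if x < 0:
        raise ValueError("algorithm did not converge")  # application failure
    return x ** 0.5

print(analyze(-1.0))  # nan instead of crashing the whole workflow
```

The point of the sketch is the separation of concerns: the scientist states the policy declaratively, and the runtime (here, the decorator) handles the recovery mechanics.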
ISBN:
(Print) 9781728182667
Contemporary distributed computing systems (DCS) such as cloud data centers are large-scale, complex, heterogeneous, and distributed across multiple networks and geographical boundaries. At the same time, Internet of Things (IoT)-driven applications are producing huge amounts of data that require real-time processing and fast response. Managing these resources efficiently to provide reliable services to end users or applications is a challenging task. Existing Resource Management Systems (RMS) rely on either static or heuristic solutions, which are inadequate for such composite and dynamic systems. The advent of Artificial Intelligence (AI), driven by data availability and processing capabilities, has opened up possibilities for data-driven solutions to RMS tasks that are adaptive, accurate, and efficient. In this regard, this paper draws out the motivations and necessities for data-driven solutions in resource management. It identifies the associated challenges and outlines potential future research directions, detailing where and how to apply data-driven techniques to the different RMS tasks. Finally, it provides a conceptual data-driven RMS model for DCS and presents two real-world use cases (GPU frequency scaling and data center resource management from Google Cloud and Microsoft Azure) demonstrating the feasibility of AI-centric approaches.
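As a deliberately tiny, made-up illustration of what "data-driven" means in an RMS task (not from the paper): forecast the next utilization sample from recent history and act on the forecast instead of on a static threshold over the current value.

```python
# Illustrative data-driven scaling decision: naive trend forecast over a
# sliding window of CPU-utilization samples. Data and threshold are made up.
history = [40, 45, 55, 60, 70, 78]      # CPU utilization (%) samples

def predict_next(h, window=3):
    """Forecast the next sample as last value + mean of recent deltas."""
    deltas = [h[i + 1] - h[i] for i in range(len(h) - window, len(h) - 1)]
    return h[-1] + sum(deltas) / len(deltas)

forecast = predict_next(history)
action = "scale_up" if forecast > 80 else "hold"
print(round(forecast, 1), action)       # proactive rather than reactive
```

A production RMS would replace the naive trend model with a learned predictor, which is exactly the substitution the paper's data-driven framing argues for.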
ISBN:
(Print) 9781728173863
Digital image processing is a topical task in digital communication systems, IP telephony and video conferencing, digital television, and video surveillance. Digital processing of large video images takes a lot of time, especially in a real-time system, and processing speed plays an important role in recognizing objects in video images received from IP cameras in real time. This requires modern technologies and fast algorithms that accelerate digital image processing. Acceleration problems have not yet been fully resolved. Developing accelerated image-processing programs requires a good knowledge of parallel and distributed computing; the two areas are united by the fact that both parallel and distributed software consist of several processes that together solve one common problem. This article proposes an accelerated method for recognizing objects in video images received from IP cameras using parallel and distributed computing technologies.
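A minimal sketch of the parallelization pattern such methods rely on (our own illustration, not the article's code): split a frame into horizontal bands and filter the bands concurrently, then reassemble in order.

```python
# Split a fake 8x8 grayscale frame into row bands and threshold them in
# parallel. The frame data and band size are illustrative.
from concurrent.futures import ThreadPoolExecutor

frame = [[(r * 32 + c) % 256 for c in range(8)] for r in range(8)]

def threshold_band(rows):
    """Binarize one horizontal band of the frame."""
    return [[255 if px > 127 else 0 for px in row] for row in rows]

bands = [frame[i:i + 2] for i in range(0, len(frame), 2)]  # 4 bands of 2 rows
with ThreadPoolExecutor(max_workers=4) as pool:
    # Executor.map preserves band order, so the frame reassembles correctly.
    result = [row for band in pool.map(threshold_band, bands) for row in band]

print(len(result), result[0][0], result[4][0])
```

For CPU-bound per-pixel work, a process pool (or a distributed runtime across machines, as the article targets) would replace the thread pool; the band-split/reassemble structure stays the same.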
ISBN:
(Print) 9781728158761
This research compares traditional machine learning algorithms with deep learning technology. We report the design of our distributed computing convolutional neural network deep learning platform and its results in wafer defect classification. The results show that the classification accuracy and purity are better than those of traditional machine learning models such as Random Forest.
ISBN:
(Print) 9783030576752; 9783030576745
In the last few years, we have seen a significant increase both in the number and capabilities of mobile devices and in the number of applications that need ever more computing and storage resources. Currently, to deal with this growing need for resources, applications make use of cloud services. This raises some problems, namely high latency, considerable use of energy and bandwidth, and the unavailability of connectivity infrastructures. Given this context, for some applications it makes sense to do part, or all, of the computation locally on the mobile devices themselves. In this paper we present OREGANO, a framework for distributed computing on mobile devices, capable of processing batches or streams of data generated on mobile device networks without requiring centralized services. Contrary to the current state of the art, where computations and data are sent to worker mobile devices, OREGANO performs computations where the data is located, significantly reducing the amount of exchanged data.
ISBN:
(Print) 9781728171272
The energy consumption of modern data centers is growing rapidly due to the development of new services and the spread of ICT (Information & Communication Technologies) into all areas of human life. This creates a huge demand for new energy-efficient computing technologies that would simultaneously improve data-processing performance while meeting SLA requirements.
ISBN:
(Print) 9781728153179
This paper introduces a simple method of building a prototype Raspberry Pi cluster. The Pi cluster is a powerful, low-cost tool for teaching the complex concepts of parallel and distributed computing to undergraduate E&CS students. The performance of the presented Pi cluster is assessed using two computationally expensive applications: face recognition and image encryption. The paper explains how to compare the performance of the Pi cluster against a traditional high-performance cluster; the comparison is designed to help undergraduate students understand the state of the art in clusters. The paper also explains how the Pi clusters fit into the CS curriculum at Old Dominion University. The presented project-based learning is an effective teaching approach that helps struggling engineering and computer science students from minority and women groups at ODU.
ISBN:
(Print) 9781728110516
Executing complicated computations in parallel increases computing speed and improves the user experience. Decomposing a program into several small programs and running them on multiple parallel processors is modeled by a directed acyclic graph (DAG). Scheduling nodes to execute this task graph is an important problem whose solution speeds up computations. Since task scheduling in such graphs is NP-hard, various heuristic algorithms have been developed for node scheduling to contribute to quality service delivery. The present study proposes a heuristic algorithm named the looking-ahead sequencing algorithm (LASA) to cope with static scheduling in heterogeneous distributed computing systems, with the intention of minimizing the schedule length of the user application. In the proposed algorithm, looking ahead is used as a criterion for prioritizing tasks. Also, a property called Emphasized Processor has been added to emphasize task execution on a particular processor. The effectiveness of the algorithm is shown on several workflow-type applications, and its results are compared with those of two other heuristic and meta-heuristic algorithms.
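LASA itself is not reproduced here; as a hedged baseline, this sketches the generic list-scheduling pattern it builds on for heterogeneous processors: order the tasks topologically, then greedily map each to the processor giving the earliest finish time. The DAG, costs, and two-processor setup are made up, and communication costs are ignored for brevity.

```python
# Generic list scheduling on a toy heterogeneous DAG (not the paper's LASA).
# Each task has a per-processor cost and a list of predecessors.
tasks = {
    "A": ([2, 3], []),
    "B": ([3, 2], ["A"]),
    "C": ([4, 4], ["A"]),
    "D": ([2, 1], ["B", "C"]),
}
order = ["A", "B", "C", "D"]          # a topological order (the priority list)

proc_free = [0, 0]                    # next free time on each processor
finish = {}                           # task -> finish time

for t in order:
    costs, preds = tasks[t]
    ready = max((finish[p] for p in preds), default=0)
    # greedily pick the processor minimizing this task's finish time
    best = min(range(2), key=lambda p: max(proc_free[p], ready) + costs[p])
    start = max(proc_free[best], ready)
    finish[t] = start + costs[best]
    proc_free[best] = finish[t]

print(finish["D"])                    # schedule length (makespan): 7
```

Heuristics like LASA differ in how the priority list is built (here, the abstract's "looking ahead" criterion) and in constraints such as pinning a task to an emphasized processor, but the greedy mapping loop above is the common skeleton.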