To realize effective and real-time computing-power scheduling on the power grid cloud platform, this paper proposes a load forecasting and computing-power scheduling method for the power grid cloud platform based on...
In recent years, with the rapid development of technology, the informatization level of cloud-computing-based power systems has been continuously improving. More and more data have been accumulated in variou...
ISBN:
(Print) 9798350369458; 9798350369441
This paper provides a novel solution for developing a virtual keyboard and mouse (VKM) system that is easily manageable and portable. The traditional keyboards and mouse devices take up valuable desk space and are not easily customizable to different languages. On-screen keyboards and 3-D cameras are alternatives, but they also have drawbacks. Our proposed method makes use of computer vision techniques and calls for a mini-projector and a web camera as necessary hardware. The system tracks hand keypoints to detect real-time touch events and uses the Mediapipe tool to detect hands and keystrokes. The mouse functionality is also implemented by monitoring the finger hovering. Through experimentation, we show that our VKM solution can provide an accuracy of >90% for detecting the correct keystroke, with a typing speed of ~55 letters/min.
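The keystroke-detection pipeline described above can be sketched in a few lines. The following is a minimal illustration of fingertip tracking with MediaPipe Hands mapped onto a projected key grid; the grid dimensions, landmark choice, and hover logic are assumptions for illustration, not the authors' actual implementation.

# Hedged sketch: index-fingertip tracking with MediaPipe Hands over an
# assumed projected key grid. Key size and mapping are illustrative only.
import cv2
import mediapipe as mp

KEY_W, KEY_H = 80, 80          # assumed size of one projected key, in pixels
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

cap = cv2.VideoCapture(0)      # web camera observing the projected keyboard
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        tip = lm[8]            # landmark 8 = index fingertip
        x, y = int(tip.x * frame.shape[1]), int(tip.y * frame.shape[0])
        # Map the pixel position to a key cell; a real system would first
        # calibrate the camera against the projector's keyboard image.
        row, col = y // KEY_H, x // KEY_W
        print(f"hovering over key cell ({row}, {col})")
cap.release()

A touch event would additionally require distinguishing hovering from pressing, e.g., by checking the fingertip's proximity to the projection surface over consecutive frames.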
Conventional short-term load forecasting methods for the main network, which are mainly based on extracting load-change features, are limited by forecasting complexity and cannot adapt to multiple operation scenari...
ISBN:
(Print) 9798350371000; 9798350370997
Edge AI has been recently proposed to facilitate the training and deployment of Deep Neural Network (DNN) models in proximity to the sources of data. To enable the training of large models on resource-constrained edge devices and protect data privacy, parallel split learning is becoming a practical and popular approach. However, current parallel split learning neglects the resource heterogeneity of edge devices, which may lead to the straggler issue. In this paper, we propose EdgeSplit, a novel parallel split learning framework to better accelerate distributed model training on heterogeneous, resource-constrained edge devices. EdgeSplit enhances the efficiency of model training on less powerful edge devices by adaptively segmenting the model into varying depths. Our approach focuses on reducing total training time by formulating and solving a task scheduling problem, which determines the most efficient model partition points and bandwidth allocation for each device. We employ a straightforward yet effective alternating algorithm for this purpose. Comprehensive tests conducted with a range of DNN models and datasets demonstrate that EdgeSplit not only facilitates the training of large models on resource-restricted edge devices but also surpasses existing baselines in performance.
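As a rough illustration of the alternating idea, the sketch below alternates between (1) picking each device's best partition point under the current bandwidth shares and (2) re-allocating bandwidth in proportion to the activation volume each device must transmit. The cost model, the server speedup factor, and the proportional bandwidth rule are all assumptions; the paper's actual formulation may differ.

# Hedged sketch of an alternating split-point / bandwidth-allocation loop.
def device_time(split, t_layer, act_size, bw, server_speedup=10.0):
    # Client-side layers run locally, the activation at the split is sent
    # over the allocated bandwidth, the rest runs on the (faster) server.
    on_device = sum(t_layer[:split])
    on_server = sum(t_layer[split:]) / server_speedup
    comm = act_size[split - 1] / bw
    return on_device + comm + on_server

def alternate(t_layers, act_sizes, total_bw, iters=10):
    n = len(t_layers)
    bw = [total_bw / n] * n          # start from equal bandwidth shares
    splits = [1] * n
    for _ in range(iters):
        # Step 1: with bandwidth fixed, each device picks its best split.
        splits = [min(range(1, len(t) + 1),
                      key=lambda s, t=t, a=a, b=b: device_time(s, t, a, b))
                  for t, a, b in zip(t_layers, act_sizes, bw)]
        # Step 2: with splits fixed, re-allocate bandwidth in proportion to
        # the activation volume each device has to transmit.
        load = [a[s - 1] for a, s in zip(act_sizes, splits)]
        bw = [total_bw * l / sum(load) for l in load]
    return splits, bw

# Example: two devices, the second one twice as slow per layer.
t1 = [1.0, 1.0, 2.0, 2.0]            # per-layer compute time, device 1
t2 = [2.0, 2.0, 4.0, 4.0]            # per-layer compute time, device 2
acts = [8.0, 4.0, 2.0, 1.0]          # activation size after each layer
print(alternate([t1, t2], [acts, acts], total_bw=10.0))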
ISBN:
(Print) 9798350385939; 9798350385922
The growth of single-phase solar photovoltaic systems and their integration with the grid necessitates Low Voltage Ride-Through (LVRT) capability for grid stability. The grid code mandates LVRT capabilities for all distributed generation (DG), including units connected to low-voltage distribution grids. LVRT capability provides stable grid integration during transient conditions. This paper uses a variable power point tracking (VPPT) method along with reactive power injection. Unlike the conventional PV inverter, the proposed techniques mitigate the issues of high DC link voltage and excessive grid current during low-voltage grid faults. Two controllers are demonstrated for two different operations: a DC link voltage controller for power control and a proportional-integral (PI) controller with feed-forward for current control. Seamless LVRT capability is achieved through peak-value-based fault detection and orthogonal signal generation (OSG) based voltage sag detection. Simulation in MATLAB/Simulink of a 5 kW single-phase two-stage system verifies the effectiveness of the algorithm in successfully implementing LVRT capabilities. During a voltage sag, the system seamlessly transitions from MPPT mode to VPPT mode, maintaining the power balance, keeping the DC link voltage stable, and avoiding overcurrent tripping of the inverter. Additionally, the system demonstrates the capability of solar inverters to inject reactive power as required by grid codes.
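The OSG-based sag detection mentioned above can be illustrated with a quarter-cycle delay buffer: the delayed sample serves as the orthogonal component, so the instantaneous voltage amplitude is available without waiting for a full cycle. Sampling rate, nominal values, and the 90% threshold below are illustrative assumptions.

# Hedged sketch: OSG-based sag detection for a single-phase grid voltage.
import math
from collections import deque

F_GRID = 50.0                 # nominal grid frequency, Hz
FS = 10_000.0                 # sampling rate, Hz
QUARTER = int(FS / F_GRID / 4)
V_NOM = 230.0 * math.sqrt(2)  # nominal peak voltage
SAG_LEVEL = 0.9 * V_NOM       # assumed sag threshold (90% of nominal peak)

delay = deque([0.0] * QUARTER, maxlen=QUARTER)

def sag_detected(v_sample):
    v_beta = delay[0]          # sample from a quarter cycle ago (orthogonal)
    delay.append(v_sample)
    # For v = V*sin(wt), the delayed sample is -V*cos(wt), so hypot recovers
    # the amplitude V at every sampling instant.
    amplitude = math.hypot(v_sample, v_beta)
    return amplitude < SAG_LEVEL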
ISBN:
(Print) 9781728190549
Edge computing is a recent paradigm where the processing takes place close to the data sources. It therefore reduces latency and saves bandwidth compared to traditional cloud computing. The latter can continue to play a supportive role. Edge-cloud computing provides benefits in many use cases, including distributed computation algorithms, where the processing is divided into a number of tasks that are executed in parallel on different equipment. An important challenge is to allocate the appropriate resources to process the data that are continuously generated from user devices. The issue becomes more complicated when we take into account the variations in the volume of the generated data as a function of time. In this paper, we present a resource allocation algorithm for distributed computation with emphasis on machine learning algorithms. We consider that the resource requirements vary with time in a semi-static way that exhibits a daily pattern. We distinguish between periodic (expected) variations that occur during the day and sporadic variations due to unexpected events. We propose an Integer Linear Programming algorithm to allocate the periodic resource requirements. To handle the non-periodic requirements, we couple a suitable prediction algorithm with a reconfiguration algorithm that allocates the predicted required resources. Our results indicate that our proposal outperforms traditional allocation algorithms in terms of resource utilization, monetary cost, and achieved accuracy.
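The periodic allocation step lends itself to a compact ILP. The sketch below, written with PuLP, places tasks on nodes to minimize monetary cost under capacity constraints; the demands, capacities, and costs are made-up illustrative data, and the paper's actual formulation is likely richer.

# Hedged sketch: ILP placement of tasks on edge/cloud nodes, via PuLP.
import pulp

tasks = {"t1": 2, "t2": 3, "t3": 1}            # CPU demand per task (assumed)
nodes = {"edge": (4, 1.0), "cloud": (16, 3.0)} # (capacity, cost per CPU unit)

prob = pulp.LpProblem("periodic_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (list(tasks), list(nodes)), cat="Binary")

# Objective: total monetary cost of the placed demand.
prob += pulp.lpSum(tasks[t] * nodes[n][1] * x[t][n]
                   for t in tasks for n in nodes)
# Each task is placed on exactly one node.
for t in tasks:
    prob += pulp.lpSum(x[t][n] for n in nodes) == 1
# Node capacities must not be exceeded.
for n in nodes:
    prob += pulp.lpSum(tasks[t] * x[t][n] for t in tasks) <= nodes[n][0]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in tasks:
    chosen = next(n for n in nodes if x[t][n].value() > 0.5)
    print(t, "->", chosen)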
The evolution of the distributed computing paradigm resulted in new computing models such as grid and cloud computing. Furthermore, in these environments it is common to run complex parallel applications, thus maki...
With the continuous advancement of information management system construction, power informatization has gradually achieved office automation for power enterprises, providing decision-making support and services for power ent...
ISBN:
(Print) 9798350363074; 9798350363081
We develop a novel autoscaling service (autoscaler) to provide elasticity for Real-Time Online Interactive Applications (ROIA) running on clouds for thousands of concurrent users. High-performance ROIA include real-time 3D product configurators, multiplayer online gaming, digital twins for the Industry 4.0 market, and e-learning. Using our autoscaler for ROIA on clouds facilitates meeting high demands on Quality of Experience (QoE) and the economic utilization of cloud resources. Compared to existing autoscaling solutions (e.g., in Kubernetes), our autoscaler is based not on classical metrics (CPU/GPU load, memory usage, etc.), but rather on session slots, which limit the number of concurrent sessions a service instance can host. We design a novel autoscaling algorithm using linear regression of session-slot usage, and we mathematically analyze and experimentally evaluate the autoscaler's dynamic reaction to changing workload while avoiding overswinging (creating more service instances than needed). We also report our preliminary experimental results.
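The session-slot regression can be illustrated as follows: fit a line to recent slot usage, extrapolate to a forecast horizon, and provision enough instances to cover the forecast. The window, horizon, and per-instance slot capacity below are illustrative assumptions rather than the paper's tuned values.

# Hedged sketch: scaling decision from a linear-regression forecast of
# session-slot usage.
import numpy as np

SLOTS_PER_INSTANCE = 50   # assumed session slots per service instance
HORIZON = 60.0            # forecast 60 s ahead

def instances_needed(timestamps, used_slots):
    # Fit used_slots ~ a*t + b over the window, extrapolate to the horizon,
    # and return the number of instances required to cover the forecast.
    a, b = np.polyfit(timestamps, used_slots, deg=1)
    forecast = max(a * (timestamps[-1] + HORIZON) + b, 0.0)
    return int(np.ceil(forecast / SLOTS_PER_INSTANCE))

# Example: slot usage growing over the last five minutes.
t = np.arange(0, 300, 30, dtype=float)
usage = 100 + 0.8 * t
print(instances_needed(t, usage))  # instances to provision proactively

Clamping the forecast at zero and rounding up to whole instances keeps the decision conservative; guarding against scale-in until the trend persists would be one way to avoid the overswinging discussed above.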