ISBN:
(Print) 9798350362442; 9798350362435
In the last couple of decades, enterprises have relished and leveraged the capacity, performance, scalability, and quality of cloud computing services. However, a few years ago, the edge computing concept enabled data processing and application execution near compute and data resources, aiming to reduce latency and promote higher security and sovereignty regarding data transfers. The simultaneous use of these two service models by enterprises has lately resulted in the concept of the edge-to-cloud continuum, which combines edge and cloud technologies and promotes standards and algorithms for resource orchestration, data management, and the deployment of solutions across the connected edge and cloud resources. Taking a step further in this direction, this paper introduces a framework to realise the Cognitive Computing Continuum (CCC). The framework leverages AI techniques to address the need for optimal (edge and cloud) resource management and the dynamic scaling, elasticity, and portability of hyper-distributed data-intensive applications. The proposed ENACT framework enables the automated management of distributed (edge and cloud) resources and the development of hyper-distributed applications that can take advantage of distributed deployment and execution opportunities to optimise their behaviour in terms of execution time, resource utilisation, and energy efficiency.
ISBN:
(Print) 9798350368185; 9798350368178
Known image processing software systems use filtering methods such as the Gaussian filter, the median filter, and others, which often do not perform satisfactorily on certain types of noise. This leads to a partial loss of the useful signal and a deterioration in image quality. This work is devoted to improving the quality of image filtering under various kinds of noise. A model of the additive interaction of signals under additive impulse noise is proposed. A new method of least finite differences (MLFD) for image filtering has been developed, and an interactive web service implementing MLFD was created. Various filtering methods have been proposed to reduce the effect of impulse noise on images. The proposed data processing is based on a priori information about the type of noise. The web service is a flexible and efficient solution for filtering digital images that is not available in well-known graphics packages. The new web service is a good compromise between filtering quality and simplicity of the solution compared to complex and resource-intensive systems for graphic image processing.
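The abstract contrasts MLFD with classical filters such as the median filter; since MLFD itself is not specified here, a minimal 1-D sketch of the median-filter baseline shows why impulse noise is its natural target (the signal values are illustrative):

```python
# Baseline median filter named in the abstract's comparison; MLFD itself
# is not described in the abstract, so only the classical approach it
# improves upon is sketched here.

def median_filter_1d(signal, window=3):
    """Replace each sample with the median of its neighborhood.

    Effective against impulse (salt-and-pepper) noise because an
    isolated outlier never becomes the window median.
    """
    half = window // 2
    out = []
    for i in range(len(signal)):
        nbhd = sorted(signal[max(0, i - half):i + half + 1])
        out.append(nbhd[len(nbhd) // 2])
    return out

# A flat signal corrupted by a single impulse spike:
print(median_filter_1d([10, 10, 255, 10, 10]))  # the spike is suppressed
```

A Gaussian filter would instead spread the 255 spike into its neighbors, which is exactly the "partial loss of the useful signal" the abstract describes.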
ISBN:
(Print) 9798350386066; 9798350386059
Nowadays, load balancing is increasingly important for large data centers. Most data centers adopt a software-based load balancer (LB), which consumes too many server resources and does not scale with the increasing traffic volume. However, there is growing demand for layer-4 load balancers, which need to check more bits in packet headers. Although hardware-based load balancers can meet the requirements, they are much more expensive. In recent years, data centers have frequently updated their devices, including various kinds of switches. Thus, there exists a huge number of obsolete switches, some of which are in good condition and equipped with high-performance storage such as SRAM (Static Random Access Memory). In this paper, we propose BCLB (Bit-based Collaborative Load Balancer), a new cooperative LB that builds more powerful load balancing on top of existing switches. Unlike previous cooperative mechanisms that distribute rules to different LBs, BCLB lets switches cooperate based on bits: the switches along the data path check different bits and cooperatively cover all bits of the 5-tuple (296 bits in IPv6) in the layer-4 packet header. In this way, rule updating does not influence load balancing in BCLB. To optimize the performance of BCLB, we formulate it as an optimization problem and propose a dynamic programming algorithm to solve it. Finally, we conduct comprehensive simulations using real-world traffic datasets and a P4-based prototype; the results show that BCLB performs much better than previous rule-based cooperative schemes.
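The bit-based cooperation can be sketched as follows; the toy IPv4 encoding, the slice boundaries, and the XOR-fold used to combine per-switch contributions are illustrative assumptions for this sketch, not details taken from the paper:

```python
# Sketch of the bit-based cooperation idea: each switch on the path
# inspects only its own slice of the 5-tuple bits, and the slices
# together cover every bit. The encoding and combine rule below are
# illustrative stand-ins, not BCLB's actual design.

def five_tuple_bits(src_ip, dst_ip, src_port, dst_port, proto):
    """Pack a toy IPv4 5-tuple into one bit string (104 bits)."""
    return (f"{src_ip:032b}{dst_ip:032b}"
            f"{src_port:016b}{dst_port:016b}{proto:08b}")

def assign_backend(bits, switch_slices, n_backends):
    """Fold the per-switch contributions (one disjoint bit slice per
    switch) into a single backend choice."""
    acc = 0
    for start, end in switch_slices:       # one (start, end) per switch
        acc ^= int(bits[start:end], 2)
    return acc % n_backends

bits = five_tuple_bits(0x0A000001, 0x0A000002, 12345, 80, 6)
slices = [(0, 32), (32, 64), (64, 104)]    # three switches on the path
print(assign_backend(bits, slices, 4))
```

Because each switch's table keys on a fixed bit slice rather than on per-flow rules, updating the rule set on one device does not disturb the slices checked elsewhere, which mirrors the abstract's claim about rule updates.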
ISBN:
(Print) 9798350354416; 9798350354409
The regulation problem for a class of discrete-time nonlinear non-affine systems with partially known structures is investigated in this paper using a model-free adaptive control (MFAC) algorithm. The core idea is to first linearize the known parts of the system's mathematical model using traditional linearization methods, and then employ dynamic linearization technology to handle the unknown structure of the controlled system and the unmodeled dynamics generated by the traditional linearization, so as to achieve complementary advantages and collaborative control between data-driven control (DDC) methods and model-based control (MBC) strategies. Unlike the prototype MFAC algorithm, the control scheme devised in this paper fully utilizes the known structure of the system so that the control objective can be better realized. Finally, the monotonic convergence of the system tracking error is rigorously proved, and the superiority of the developed algorithm is demonstrated by simulation comparison results.
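For context, a minimal sketch of the prototype compact-form MFAC loop that the paper extends: a pseudo-partial-derivative (PPD) is estimated online from I/O data and drives the control update. The toy plant, the gain values, and the PPD reset rule are illustrative choices, not details from the paper.

```python
import math

# Prototype compact-form MFAC (the baseline the paper builds on); all
# gains and the plant below are illustrative, not from the paper.

def mfac_regulate(plant, y_ref, steps=60,
                  eta=1.0, mu=1.0, rho=0.6, lam=1.0):
    y_prev, u_prev, du_prev = 0.0, 0.0, 0.0
    phi = 1.0                      # pseudo-partial-derivative estimate
    y = plant(y_prev, u_prev)
    ys = []
    for _ in range(steps):
        dy = y - y_prev
        # PPD estimation via a projection-type update
        phi += eta * du_prev / (mu + du_prev**2) * (dy - phi * du_prev)
        if abs(phi) < 1e-5:
            phi = 1.0              # reset keeps the update well-posed
        # control update driven only by measured I/O data
        du = rho * phi / (lam + phi**2) * (y_ref - y)
        u = u_prev + du
        y_prev, u_prev, du_prev = y, u, du
        y = plant(y_prev, u_prev)
        ys.append(y)
    return ys

# Toy non-affine plant: the input enters through a tanh saturation.
plant = lambda y, u: 0.6 * y + math.tanh(u)
traj = mfac_regulate(plant, y_ref=1.0)
print(round(traj[-1], 3))          # settles near the setpoint 1.0
```

The paper's hybrid scheme would additionally feed the linearized known model structure into this loop, leaving only the residual unmodeled dynamics to the PPD estimate.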
ISBN:
(Print) 9798350358810; 9798350358803
Uncertain and evolving conditions on several time horizons can jeopardize the safety and mobility of goods and people on road networks. While sensor vehicle data aids in assessing system safety, uncertainties remain. New connected-vehicle (CV) technology offers increasingly higher resolutions of data for evaluating potential safety countermeasures. This paper develops a systemic risk and multi-criteria analysis of roadway network performance, utilizing CV data to inform parametric models. Its innovation is the use of CV data in a particular type of risk assessment, where risk is quantified as the disruption of priority orders of locations across a region. An included demonstration uses newly available connected-vehicle movement and event data from a transportation agency. The results guide the implementation of safety protocols, prioritizing access management at locations of vulnerable transport networks. The paper should be of interest to stakeholders including planners, engineers, and traffic management authorities. The approach and insights are relevant to monitoring for anomalies across a variety of systems and networks. In our analysis, the study identified significant variability in roadway performance, with mornings, weekends, and peak hours showing the highest disruption scores of 7, 6, and 6, respectively. The analysis revealed five segments (10, 11, 18, 19, 1) with robust performance, underscoring the importance of implementing access management techniques on these segments. These findings highlight mornings and weekends as critical times for targeted interventions to apply access management.
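Risk quantified as the disruption of a priority order can be sketched with a Kendall-tau-style pair count; the segment names and rankings below are illustrative, and the paper's exact scoring may differ:

```python
# Sketch of measuring ranking disruption: count how many pairs of
# locations swap priority order between a baseline ranking and a
# scenario ranking (the Kendall tau distance). The segments and
# rankings are made-up examples, not the paper's data.

def disruption_score(baseline_rank, scenario_rank):
    """Number of location pairs whose priority order flips between
    the baseline and the disrupted scenario (0 = identical order)."""
    pos = {loc: i for i, loc in enumerate(scenario_rank)}
    flips = 0
    n = len(baseline_rank)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = baseline_rank[i], baseline_rank[j]
            if pos[a] > pos[b]:    # a outranked b before, but not after
                flips += 1
    return flips

baseline = ["seg10", "seg11", "seg18", "seg19", "seg1"]
morning  = ["seg18", "seg10", "seg19", "seg11", "seg1"]
print(disruption_score(baseline, morning))  # -> 3 flipped pairs
```

Higher scores flag time windows (such as mornings in the abstract's findings) where the region's priority ordering is least stable.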
ISBN:
(Print) 9798350387186; 9798350387179
The integration of Field Programmable Gate Arrays (FPGAs) into cloud computing systems has become commonplace. As the operating systems used to manage these systems evolve, special consideration must be given to DRAM devices accessible by FPGAs. These devices may hold sensitive data that can become inadvertently exposed to adversaries after user logout. Although addressed in some cloud FPGA environments, automatic DRAM clearing after process termination is not included in popular FPGA runtime environments nor in most proposed cloud FPGA hypervisors. In this paper, we examine DRAM data persistence in AMD/Xilinx Alveo U280 nodes that are part of the Open Cloud Testbed (OCT). Our results indicate that DDR4 DRAM is not automatically cleared following user logout from an allocated node, and subsequent node users can easily obtain recognizable data from the DRAM following node reallocation more than 17 minutes later. This issue is particularly relevant for systems that support FPGA multi-tenancy.
Deep hashing retrieval has gained widespread use in big data retrieval due to its robust feature extraction and efficient hashing process. However, training advanced deep hashing models has become more expensive due t...
ISBN:
(Print) 9798350377712; 9798350377705
In this work, we propose a solution that leverages geospatial data to initialize a monocular visual-inertial navigation system. For visual-inertial navigation systems (VINS) operating on UAVs, the ability to perform initialization and relocalization in mid-air is essential. However, degenerate motion can cause VINS to lose scale, making traditional initialization algorithms less reliable. To address this issue, we fuse geographic information into the initialization process and utilize a learning-based feature-matching algorithm to associate this information with the inertial states. The proposed approach adapts to the degenerate motions of UAVs and significantly surpasses the estimation accuracy of conventional VINS initialization algorithms. Compared to methods that assist initialization using a laser range finder (LRF), the proposed method relies solely on low-cost satellite imagery and elevation information. We evaluate the proposed approach on a large-scale UAV dataset and compare it with existing methods. The results demonstrate the superior effectiveness of the proposed method.
ISBN:
(Print) 9798350359329; 9798350359312
Structural Health Monitoring (SHM) monitors the working states of building structures to ensure their reliable operation, but the high cost of SHM sensor nodes limits its large-scale deployment. In this paper, we propose a novel computational model that can generate "virtual" sensor nodes with reliable data output at zero deployment cost for cost-efficient SHM sensing. To achieve this, we combine a one-dimensional convolutional layer in a CNN with a Bi-LSTM model to capture temporal and spatial correlations in the data; to improve the quality of the generated data, we design a weighted smoothing algorithm that reduces noise while preserving the integrity of the data; and to enhance system robustness, we integrate an attention mechanism at the end of the model to assign different weights to each element, which can capture the limited but crucial information within the data. We evaluate our system on the dataset of the KW51 railroad bridge as an example. The results show that the generated virtual sensor node incurs zero deployment cost and its data is 94% close to the ground truth, making it helpful for facilitating the large-scale deployment of SHM systems.
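The weighted smoothing step might look like the following minimal sketch; the 3-tap center-weighted window is an assumed form, since the paper's actual weights are not given in the abstract:

```python
# Sketch of a weighted smoothing pass over generated sensor readings.
# The (0.25, 0.5, 0.25) center-weighted window is an illustrative
# choice: it damps noise while the center weight keeps each sample's
# own contribution dominant, preserving the signal's integrity.

def weighted_smooth(series, weights=(0.25, 0.5, 0.25)):
    """Smooth with a 3-tap weighted window; edge samples are kept raw."""
    out = list(series)
    for i in range(1, len(series) - 1):
        out[i] = (weights[0] * series[i - 1]
                  + weights[1] * series[i]
                  + weights[2] * series[i + 1])
    return out

# Alternating noise is flattened while the edges are untouched:
print(weighted_smooth([0.0, 1.0, 0.0, 1.0, 0.0]))
```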
ISBN:
(Print) 9798350322446
Although the volume of stored data doubles every year, storage capacity costs decline at a rate of less than 1/5 per year. At the same time, data is stored in multiple physical locations and remotely retrieved from multiple sites. Thus, minimizing data storage costs while maintaining data fidelity and efficient retrieval is still a key challenge in database systems. In addition to the raw big data, its associated metadata and indexes equally demand tremendous storage, which impacts the I/O footprint of data centers. In this vision paper, we propose a new signature-based compression (SIBACO) technique that is able to: (i) incrementally store big data in an efficient way; and (ii) improve the retrieval time for data-intensive applications. SIBACO achieves higher compression ratios by combining and compressing columns differently based on the type and distribution of the data, and can be easily integrated with column and hybrid stores. We evaluate our proposed tool using real datasets, showing that SIBACO outperforms "monolithic" compression schemes in terms of storage cost.
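Per-column strategy selection of the kind SIBACO describes can be sketched as follows; the cheap "signature" heuristics and the two encoders are illustrative stand-ins, since the abstract does not specify the actual scheme:

```python
# Sketch of choosing a compression strategy per column from a cheap
# "signature" of its type and distribution. The heuristics and encoders
# are illustrative, not SIBACO's actual design.

def rle_encode(col):
    """Run-length encoding: effective for low-cardinality columns."""
    runs = []
    for v in col:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def delta_encode(col):
    """Delta encoding: effective for sorted or near-sorted numerics."""
    return [col[0]] + [b - a for a, b in zip(col, col[1:])]

def compress_column(col):
    """Pick an encoder from the column's type and value distribution."""
    if len(set(col)) <= len(col) // 2:          # low cardinality
        return ("rle", rle_encode(col))
    if all(isinstance(v, int) for v in col):    # dense numeric values
        return ("delta", delta_encode(col))
    return ("raw", list(col))

print(compress_column(["US", "US", "US", "FR"]))   # repetitive strings
print(compress_column([1000, 1003, 1007, 1012]))   # increasing ints
```

A real columnar store applies the same idea at much larger scale; choosing the encoder per column, rather than one "monolithic" codec for the whole table, is what yields the higher compression ratios the abstract claims.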