As decision algorithms are iterated and refined, new decision problems emerge in a steady stream. This paper aims to propose a new decision algorithm to reduce the impact of such problems in practice. Firstly, six ...
ISBN:
(print) 9798350374025; 9798350374032
Foveated rendering (FR) improves the rendering performance of virtual reality (VR) by allocating less computational load to the peripheral field of view (FOV). Existing FR techniques are built on a radially symmetric regression model of human visual acuity. However, horizontal-vertical asymmetry (HVA) and vertical meridian asymmetry (VMA) in the cortical magnification factor (CMF) of the human visual system have been evidenced by retinotopy research in neuroscience, suggesting a radially asymmetric regression of visual acuity. In this paper, we begin with functional magnetic resonance imaging (fMRI) data, construct an anisotropic CMF model of the human visual system, and then introduce the first radially asymmetric regression model of rendering precision for FR applications. We conducted a pilot experiment to adapt the proposed model to VR head-mounted displays (HMDs). A user study demonstrates that retinotopic foveated rendering (RFR) provides participants with perceptually equal image quality compared to typical FR methods while reducing fragment shading by 27.2% on average, yielding roughly a 1/6 speedup in graphics rendering. We anticipate that our study will enhance the rendering performance of VR by bridging the gap between retinotopy research in neuroscience and computer graphics in VR.
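The paper's anisotropic CMF model is fitted from fMRI data and is not reproduced in the abstract. As a rough illustration of the idea only, the sketch below starts from the standard inverse-linear cortical magnification form M(e) = M0 / (1 + e / e2) and lets e2 vary with polar angle to encode the HVA (faster acuity falloff along the vertical meridian) and the VMA (faster falloff in the upper field). All constants and the angular modulation here are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def anisotropic_cmf(ecc_deg, polar_rad,
                    m0=17.3,           # foveal magnification, mm/deg (illustrative)
                    e2_horizontal=0.75,  # e2 on the horizontal meridian (assumed)
                    hva=0.20,            # HVA strength (assumed)
                    vma=0.10):           # VMA strength (assumed)
    """Cortical magnification M(e, theta) = m0 / (1 + e / e2(theta)).

    e2 shrinks along the vertical meridian (HVA) and shrinks further in the
    upper visual field (VMA), so acuity falls off faster in those directions.
    """
    # |sin(theta)| is 1 on the vertical meridian, 0 on the horizontal one.
    vertical_weight = np.abs(np.sin(polar_rad))
    # sin(theta) > 0 for the upper visual field in this convention.
    upper_weight = np.clip(np.sin(polar_rad), 0.0, None)
    e2 = e2_horizontal * (1.0 - hva * vertical_weight - vma * upper_weight)
    return m0 / (1.0 + ecc_deg / e2)

def shading_rate(ecc_deg, polar_rad):
    """Map relative magnification to a coarser shading rate in the periphery."""
    m = anisotropic_cmf(ecc_deg, polar_rad)
    m_fovea = anisotropic_cmf(0.0, polar_rad)
    return np.clip(m / m_fovea, 1e-3, 1.0)  # 1.0 = full-rate shading
```

A foveated renderer could map this normalized magnification to a per-tile variable-rate-shading tier, shading radially asymmetric peripheral regions more coarsely.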
ISBN:
(print) 9798350341034
Digitized histopathology glass slides, known as Whole Slide Images (WSIs), are often several gigapixels large and contain sensitive metadata, which makes distributed processing infeasible. Moreover, artifacts in WSIs may result in unreliable predictions when Deep Learning (DL) algorithms are applied to them directly. Therefore, preprocessing WSIs is beneficial, e.g., eliminating privacy-sensitive information, splitting a gigapixel medical image into tiles, and removing the diagnostically irrelevant areas. This work proposes a cloud service to parallelize the preprocessing pipeline for large medical images. The data and model parallelization will not only boost the end-to-end processing efficiency for histological tasks but also protect the WSI against reconstruction, since tiles are randomly distributed across processing nodes. Furthermore, the initial steps of the pipeline will be integrated into the Jupyter-based Virtual Research Environment (VRE) to enable image owners to configure and automate the execution process based on resource allocation.
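The pipeline's code is not given in the abstract. As a minimal sketch of the tiling-and-scattering step it describes, the snippet below uses the OpenSlide Python bindings to cut a WSI into fixed-size tiles, drop near-empty background tiles, and randomly assign the remaining tiles to worker nodes; the tile size, background threshold, and node list are assumptions for illustration.

```python
import random
import numpy as np
import openslide  # OpenSlide Python bindings for reading WSIs

TILE = 512          # tile edge length in pixels (assumed)
BG_THRESHOLD = 220  # mean intensity above this is treated as background (assumed)

def tile_and_scatter(wsi_path, nodes):
    """Split a WSI into tiles, drop background, and scatter tiles over nodes.

    Returns a list of (node, x, y) assignments; random scattering means no
    single node holds enough tiles to reassemble the full slide.
    """
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.dimensions  # level-0 size in pixels
    assignments = []
    for y in range(0, height - TILE + 1, TILE):
        for x in range(0, width - TILE + 1, TILE):
            tile = slide.read_region((x, y), 0, (TILE, TILE)).convert("RGB")
            if np.asarray(tile).mean() > BG_THRESHOLD:
                continue  # diagnostically irrelevant background
            assignments.append((random.choice(nodes), x, y))
    slide.close()
    return assignments
```

Note that reading pixel data via `read_region` also sidesteps the slide's vendor metadata block, which is where privacy-sensitive fields typically live.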
Recent advancements in the domain of human activity recognition (HAR) are increasingly aimed at developing methodologies, approaches, and models for real-time, multi-step activity recognition and analysis. This work p...
ISBN:
(print) 9798350341065; 9798350341058
Over the past decade, various methods for detecting side-channel leakage have been proposed and proven effective against CPU side-channel attacks. These methods are valuable in helping developers identify and patch side-channel vulnerabilities. Nevertheless, recent research has revealed the feasibility of exploiting side-channel vulnerabilities to steal sensitive information from GPU applications, which are beyond the reach of previous side-channel detection methods. Therefore, in this paper, we conduct an in-depth examination of various GPU features and present Owl, a novel side-channel detection tool targeting CUDA applications on NVIDIA GPUs. Owl is designed to detect and locate side-channel leakage in various types of CUDA applications. To track the execution of CUDA applications, we design a hierarchical tracing scheme and extend the A-DCFG (Attributed Dynamic Control Flow Graph) to handle the massively parallel execution in CUDA, keeping Owl's detection scalable. After the initial assessment and filtering, we run statistical tests on the differences between program traces to determine whether they are indeed caused by input variations, which in turn helps localize the side-channel leaks. We evaluate Owl's capability to detect side-channel leaks by testing it on Libgpucrypto, PyTorch, and nvJPEG, and we verify that our solution effectively handles a large number of threads. Owl has successfully identified hundreds of leaks within these applications. To the best of our knowledge, we are the first to implement side-channel leakage detection for general CUDA applications.
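The abstract does not spell out Owl's statistical machinery. A common way to test whether trace differences are input-dependent is a fixed-vs-random Welch's t-test, as in TVLA-style leakage assessment; the sketch below applies that idea to per-trace feature vectors (e.g., execution counts of A-DCFG edges). The feature extraction and the threshold are assumptions, not Owl's documented procedure.

```python
import numpy as np
from scipy import stats

T_THRESHOLD = 4.5  # conventional TVLA-style cutoff (assumed here)

def leaky_features(traces_fixed, traces_random, threshold=T_THRESHOLD):
    """Flag trace features whose distribution depends on the secret input.

    traces_fixed / traces_random: arrays of shape (n_runs, n_features), where
    each feature could be, e.g., the execution count of one A-DCFG edge under
    a fixed-key versus random-key input class.
    Returns indices of features whose |t| statistic exceeds the threshold.
    """
    t, _ = stats.ttest_ind(traces_fixed, traces_random,
                           axis=0, equal_var=False)  # Welch's t-test per feature
    return np.flatnonzero(np.abs(t) > threshold)
```

Features flagged this way point back to specific control-flow locations, which is what makes leak localization, rather than a mere yes/no verdict, possible.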
ISBN:
(print) 9798350386783; 9798350386776
The volume of data generated at the network edge is likely to rise considerably in the coming years due to the proliferation of devices that require processing at the edge, making data-processing support in edge storage frameworks imperative. The goal of this study is to find out how deep learning can be used to speed up operation processing. Effective resource management is essential in the age of edge computing to maximize the performance of storage edge networked frameworks. The purpose of this work is to investigate how resource management in such systems can be improved by utilizing CNN-CPU scheduling approaches. Our goal is to optimize job allocation, prioritization, and scheduling in storage edge settings by fusing Convolutional Neural Networks (CNNs) with CPU-based scheduling methods. The suggested method aims to improve overall system performance, reduce latency, and maximize resource use. We illustrate the efficacy of CNN-CPU scheduling in improving resource management in storage edge networked frameworks with experimental validation and performance analysis. This research contributes to advancing the capabilities of edge computing systems by leveraging deep learning techniques for efficient resource allocation and management. The proposed approach seeks to overcome the limitations of traditional scheduling methods by leveraging the power of deep learning: CNNs are renowned for their ability to extract meaningful features from data, and by harnessing this capability we can optimize resource utilization and minimize latency in storage edge networked frameworks. Through a combination of CNN-based feature extraction and CPU-based scheduling decisions, we strive to achieve efficient and intelligent resource management. This study investigates the potential of CNN-CPU scheduling techniques to address these challenges and enhance resource management in such frameworks. By integrating Convolutional Neural Networks (CNNs) with CPU-based sc
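The abstract does not specify the network architecture or the scheduling policy. The sketch below shows one plausible shape of such a design, assuming each task is described by a fixed-size feature window: a small 1-D CNN assigns every task a priority score, and a CPU-side scheduler dispatches tasks in descending score order via a heap. The `TaskScorer` architecture, feature layout, and dispatch rule are all hypothetical.

```python
import heapq
import torch
import torch.nn as nn

class TaskScorer(nn.Module):
    """Tiny 1-D CNN mapping a window of task features to a priority score."""
    def __init__(self, n_features=8, window=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time window
            nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, x):  # x: (batch, n_features, window)
        return self.net(x).squeeze(-1)

def schedule(tasks, scorer):
    """Dispatch tasks in descending CNN-predicted priority (CPU-side)."""
    with torch.no_grad():
        scores = scorer(torch.stack([t["features"] for t in tasks]))
    heap = [(-s.item(), i) for i, s in enumerate(scores)]  # max-heap via negation
    heapq.heapify(heap)
    order = []
    while heap:
        _, i = heapq.heappop(heap)
        order.append(tasks[i]["name"])  # highest score runs first
    return order
```

In a real deployment the scorer would be trained offline on labeled scheduling outcomes (e.g., latency of past allocations) and queried online at dispatch time.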
Artificial neural networks are widely used in various fields, such as intelligent road networks, Internet of Things, and smart medical systems due to their ability to process large amounts of data in parallel, store i...
ISBN:
(print) 9798350375367
Industrial Control Systems (ICS) manage physical processes through Programmable Logic Controllers (PLCs) and are widely used in critical infrastructure (CI). Because of their wide use in CI, PLCs are a target for cyberattacks that aim to disrupt CI. On CI and other networks, there is often a mix of Information Technology (IT) and Operational Technology (OT) systems. Detecting and classifying OT and IT assets on a network can help in providing targeted security measures for OT systems. Actively scanning for assets in ICS networks can lead to unexpected errors on PLCs: PLCs operate with limited resources and are notorious for producing errors or faults when presented with unexpected network packets. Special care must be taken when sending unsolicited network packets to these devices to avoid physical process interruption, catastrophic failures, or even human injury. In this paper, we explore a collection of prevalent Machine Learning (ML) algorithms, k-Nearest-Neighbors (KNN), Support-Vector-Machines (SVM), Random Forest, and AdaBoost, to differentiate ICS network assets from IT machines using only passive monitoring of network traffic flows. Furthermore, we integrate the proposed classification method into an autonomic architecture for continuous monitoring and identification of ICS assets in a SCADA network environment, both for offensive and defensive purposes.
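The abstract names the candidate classifiers but not the exact flow features. As a minimal sketch, the snippet below compares the four listed scikit-learn classifiers by cross-validated accuracy on a per-flow feature matrix; the feature choice (packet sizes, inter-arrival times, and similar passive flow statistics) and the synthetic stand-in data are assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: per-flow statistics (e.g., mean packet size, inter-arrival time variance);
# y: 1 for OT/ICS asset, 0 for IT machine. Synthetic stand-in data here.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = rng.integers(0, 2, size=400)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold CV accuracy
    print(f"{name}: {acc:.3f}")
```

Because the features come from passive flow monitoring, no packets ever need to be sent to the PLCs, which is precisely what makes this approach safe for fragile OT devices.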
The integration of deep learning and computer vision has led to groundbreaking innovations, notably in Image-to-Audio technologies. This transformative paradigm aims to provide comprehensible audio descriptions for vi...
ISBN:
(print) 9798350305708
Scheduling is undoubtedly one of the difficulties that the constantly evolving cloud must overcome. Scheduling describes a specific method of arranging the tasks that a computing device should complete in the correct order. In this study, we used FCFS and round-robin scheduling to construct a standard priority-seeking set of criteria for assignment execution and assessment. The method has been tested on the cloud, and the outcomes demonstrate that it outperforms several traditional scheduling techniques. In distributed computing systems a variety of scheduling techniques are used, task scheduling being one of them. For resource optimization, we specifically cover three methods: ant colony, analytic, and genetic algorithms. We developed a new precedence set of rules with a limited number of activities; in the future, we will take on additional jobs and work to reduce the execution time of the implementation. We may also extend this set of rules to the network environment and examine the time difference between the cloud and the network. The suggested approach is entirely based on queuing models. The workload, response time, and length of the common queue were all decreased by directing incoming requests to lightly loaded nodes. These results show that our version may increase the utilization of the global agenda while decreasing latency. The results of the experiments verified that the suggested version may lower power consumption, hence enhancing service quality inside the cloud architecture. Future iterations of the planned version would utilize cloud computing techniques built on parallel algorithms to speed up the activation of user requests. We recommend the adoption of a heterogeneous resource allocation method called Skewness-Avoidance Multi-Resource allocation (SAMR) to distribute resources based on the particular requirements of particular types of assets. Our approach comprises a set of VM provisioning guidelines to ensure that heterogeneous constraints are properly distributed to preven
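The paper's exact priority rules are not given in the abstract. As a minimal sketch of the two baselines it builds on, the snippet below simulates FCFS and round-robin dispatch over a batch of tasks with known burst times (all arriving at time zero, an assumption) and reports the average waiting time; the burst values and quantum are illustrative.

```python
from collections import deque

def fcfs_waiting(bursts):
    """Average waiting time under first-come-first-served."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)  # task waits for everything queued before it
        elapsed += b
    return sum(waits) / len(waits)

def round_robin_waiting(bursts, quantum=2):
    """Average waiting time under round-robin with a fixed time quantum."""
    queue = deque(range(len(bursts)))
    remaining = list(bursts)
    finish = [0] * len(bursts)
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)      # unfinished: back to the end of the queue
        else:
            finish[i] = clock
    # waiting time = completion time - arrival (0) - burst time
    return sum(finish[i] - bursts[i] for i in range(len(bursts))) / len(bursts)

bursts = [5, 3, 8, 2]  # illustrative CPU burst times
print("FCFS avg wait:", fcfs_waiting(bursts))
print("RR   avg wait:", round_robin_waiting(bursts))
```

Comparing the two averages on the same burst list shows the trade-off the paper exploits: round-robin bounds the wait of short tasks at the cost of extra context switches, while FCFS favors whoever arrived first.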