Endoscopic images usually have many overexposed regions due to strong and focused light sources and, consequently, physicians need to change the camera angle for a clear view from time to time. This work targets remov...
Social networks have evolved as a ubiquitous means of communication in our daily lives. Users consistently generate large amounts of data on social media platforms, which in turn has increased the complexit...
In this study, we investigate the characteristics of three-dimensional turbulent boundary layers influenced by transverse flow and pressure gradients. Our findings reveal that even without assuming an infinite sweep, ...
ISBN: (Print) 9798350323481
Memory swapping was long considered slow and evil, but swapping to ultra-low-latency storage such as Optane has become a promising way to save power and cost, helping densely populated edge servers overcome their DRAM capacity bottlenecks. However, the lack of integration between CPU scheduling and memory paging causes soft real-time tasks running on edge servers to miss deadlines under heavy memory multiplexing. We propose APP (Adaptive Page Pinning), a lightweight protection of working-set memory that ensures soft real-time tasks meet their deadlines without starving other non-real-time tasks. Experiments show that APP alleviates thrashing in memory-intensive tasks and upholds soft real-time task deadlines.
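The abstract does not detail APP's pinning policy, but the idea of protecting real-time working sets under a pin budget while reserving reclaimable memory for other tasks can be sketched as follows. This is a minimal illustrative policy, not the paper's mechanism; all names and parameters (`plan_pinning`, `pin_budget_pages`, `nonrt_reserve_pages`) are hypothetical.

```python
# Hypothetical sketch of an adaptive page-pinning plan in the spirit of APP.
# The actual kernel mechanism is not described in the abstract.

def plan_pinning(tasks, pin_budget_pages, nonrt_reserve_pages):
    """Decide how many pages of each task's working set to pin.

    tasks: list of dicts with keys
        'name', 'realtime' (bool), 'working_set' (pages)
    pin_budget_pages: total pages the system allows us to pin
    nonrt_reserve_pages: pages kept unpinned so non-real-time tasks
        are not starved of reclaimable memory
    """
    usable = max(0, pin_budget_pages - nonrt_reserve_pages)
    plan = {}
    # Pin soft real-time working sets first, smallest first, so the
    # budget protects as many task deadlines as possible.
    for t in sorted((t for t in tasks if t['realtime']),
                    key=lambda t: t['working_set']):
        pinned = min(t['working_set'], usable)
        plan[t['name']] = pinned
        usable -= pinned
    # Non-real-time tasks are never pinned; they use normal paging.
    for t in tasks:
        plan.setdefault(t['name'], 0)
    return plan
```

For example, with a 500-page budget and a 150-page reserve, two real-time tasks with 100- and 300-page working sets would get 100 and 250 pinned pages, and a batch task would get none.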
Recently, analyzing big data on the move is in increasing demand. This requires that the hardware resources be low-volume, low-power, lightweight, high-performance, and highly scalable, whereas the management software should be flexible and consume little hardware resources. To meet these requirements, we present a system named SOCA-DOM that encompasses a mobile system-on-chip array architecture and a two-tier "software-defined" resource manager named Chameleon. First, we design an Ethernet communication board to support an array of mobile system-on-chips. Second, we propose a two-tier software architecture for Chameleon to make it flexible. Third, we devise data, configuration, and control planes for Chameleon to make it "software-defined" and in turn consume hardware resources on demand. Fourth, we design an accurate synthetic metric that represents the computational power of a computing node. We employ 12 Apache Spark benchmarks to evaluate SOCA-DOM. Compared with Mesos, an existing resource manager, SOCA-DOM consumes up to 9.4x less CPU resources and 13.5x less memory. In addition, we show that a 16-node SOCA-DOM consumes up to 4x less energy than two standard Xeon servers. Based on the results, we conclude that an array architecture with fine-grained hardware resources and a software-defined resource manager works well for analyzing big data on the move.
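The abstract mentions a synthetic metric for a node's computational power but does not give its formula. One common way to build such a metric, shown here purely as an illustration (the function `node_power` and its weights are hypothetical, not Chameleon's actual formula), is a weighted geometric mean of hardware capacities normalized against a baseline node, so that no single resource dominates the score.

```python
from math import prod

# Hypothetical synthetic node-power metric: a weighted geometric mean of
# hardware features (e.g. cores, clock, memory bandwidth) normalized
# against a baseline node. Illustrative only; the paper's formula is not
# given in the abstract.

def node_power(features, baseline, weights):
    """features/baseline: dicts of raw capacities; weights: exponents
    summing to 1. Returns a dimensionless power score relative to the
    baseline node (baseline scores 1.0)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return prod((features[k] / baseline[k]) ** w
                for k, w in weights.items())
```

A convenient property of the geometric mean: a node that doubles the baseline in every dimension scores exactly 2.0, regardless of the weights.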
As a fundamental tool for graph analysis, random walks receive extensive attention in both industry and academia. For computing massive random walks, recent works show that GPUs are a good option for accelerating performance. However, due to the limited memory space of modern GPUs, it is infeasible to keep both the graph data and the walk index fully resident in GPU memory when running large-scale random walks. This necessitates an out-of-GPU-memory design, which inevitably induces large amounts of CPU-GPU data transmission traffic and thus hinders overall performance. In this paper, we develop LightTraffic, which optimizes data transmission between CPU and GPU memory under the constraint of GPU memory capacity with various system designs, including a memory-efficient scheme for partition-based management and multiple scheduling techniques. LightTraffic is a fully out-of-GPU-memory design, so it supports running large-scale random walks on GPUs. Experiments on our prototype show that LightTraffic outperforms various state-of-the-art CPU-based in-memory systems that also support large-scale random walks. For example, compared to the CPU-based systems FlashMob and ThunderRW, which are highly optimized for random walks, LightTraffic achieves 1.7–5.0× and 1.4–12.8× speedups, respectively. It also achieves up to an order of magnitude speedup over the GPU-based system Subway, which likewise supports large-scale random walks with an out-of-GPU-memory design for graph data.
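For readers unfamiliar with the workload, the core computation these systems accelerate is simple: from each start vertex, repeatedly hop to a uniformly random out-neighbor. A minimal CPU sketch over a CSR-format graph is below; it is illustrative only and omits everything that makes LightTraffic interesting (GPU partitioning, scheduling, CPU-GPU traffic optimization).

```python
import random

# Minimal unbiased random-walk kernel over a graph in CSR form
# (indptr/indices), one walk per start vertex. Illustrative sketch of
# the workload only, not LightTraffic's implementation.

def random_walks(indptr, indices, starts, walk_len, seed=0):
    rng = random.Random(seed)
    walks = []
    for s in starts:
        walk = [s]
        cur = s
        for _ in range(walk_len - 1):
            begin, end = indptr[cur], indptr[cur + 1]
            if begin == end:          # dead end: vertex has no out-edges
                break
            cur = indices[rng.randrange(begin, end)]
            walk.append(cur)
        walks.append(walk)
    return walks
```

At scale this loop is run for millions of walkers in parallel, which is why the memory layout of `indptr`/`indices` and of the walk index dominates performance.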
The problem of space debris represents a major topic of concern in astronomy as the threat of space junk continues to grow, and the accuracy of its tracking is greatly restricted by the insufficiency and limitations o...
Cloud computing has gained significant use over the last decade due to its several benefits, including cost savings associated with setup, deployments, delivery, physical resource sharing across virtual machines, and availability of on-demand cloud services. However, in addition to the usual threats present in almost every computing environment, cloud computing has also introduced a set of new threats, as consumers share physical resources due to physical co-location. Moreover, since there are a growing number of attacks directed at cloud environments (including dictionary attacks, replay code attacks, denial-of-service attacks, rootkit attacks, code injection attacks, etc.), customers require additional assurances before adopting cloud services. Furthermore, the continuous integration and continuous deployment of code fragments have made cloud services more prone to security breaches. In this study, a model based on the root of trust for continuous integration and continuous deployment is proposed, instead of relying only on a single sign-on authentication method that typically uses only an id and password. The underlying study opted for a hardware security module by utilizing the Trusted Platform Module (TPM), which is commonly available as a cryptoprocessor on the motherboards of personal computers and data center servers. The preliminary proof of concept demonstrated that the TPM features can be utilized through RESTful services to establish the root of trust for the continuous integration and continuous deployment pipeline and can additionally be integrated as a secure microservice feature in the cloud computing environment.
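The TPM primitive underlying such a root of trust is the Platform Configuration Register (PCR): it cannot be written directly, only *extended*, where the new value is the hash of the old value concatenated with a measurement. The sketch below simulates that extend chain in pure Python to show why a verifier can detect any tampered pipeline stage; talking to a real TPM (e.g. via the RESTful services the study describes) is omitted, and `expected_pcr` is an illustrative helper, not the study's API.

```python
import hashlib

# Simulation of the TPM PCR extend chain used by measured-boot /
# root-of-trust schemes: new_pcr = SHA-256(old_pcr || measurement).
# A verifier replays the extends over trusted reference measurements;
# any tampered stage changes the final PCR value.

def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr + measurement).digest()

def expected_pcr(stage_blobs):
    """Replay the chain: hash each pipeline stage, extend in order."""
    pcr = b'\x00' * 32            # SHA-256 PCRs start zeroed at reset
    for blob in stage_blobs:
        pcr = extend(pcr, hashlib.sha256(blob).digest())
    return pcr
```

Because each value depends on all previous measurements, the final PCR commits to the entire ordered pipeline, which is what lets a CI/CD verifier accept or reject a deployment with a single comparison.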
Vision-language pretraining (VLP) has been investigated to generalize across diverse downstream tasks for fundus image analysis. Although recent methods showcase promising achievements, they significantly rely on larg...
Polyp segmentation plays a vital role in accurately locating polyps at an early stage, which holds significant clinical importance for the prevention of colorectal cancer. Various polyp segmentation methods have been ...