In the context of the ongoing global COVID-19 epidemic and frequent virus mutations, vaccination is the key to epidemic prevention and control at this stage. In order to provide recomme...
The task of detecting fraud in credit card transactions is crucial to ensuring the security and stability of a financial system, as well as to reinforcing customer confidence in digital payment systems. Historically, credit...
Reasoning-based approaches have demonstrated powerful capability on the task of image-text matching. In this work, two issues are addressed for image-text matching. First, for reasoning processing, conventional ap...
ISBN (digital): 9781728168760
ISBN (print): 9781728168777
Data dependency, often presented as a directed acyclic graph (DAG), is a crucial piece of application semantics for the performance of data analytic platforms such as Spark. Spark comes with two built-in schedulers, namely the FIFO and Fair schedulers, which do not take advantage of data dependency structures. Recently proposed DAG-aware task scheduling approaches, notably GRAPHENE, have achieved significant performance improvements but paid little attention to cache management. The resulting data access patterns interact poorly with the built-in LRU caching, leading to significant cache misses and performance degradation. On the other hand, DAG-aware caching schemes, such as Most Reference Distance (MRD), are designed for the FIFO scheduler rather than for DAG-aware task schedulers. In this paper, we propose and develop a middleware, Dagon, which leverages the complexity and heterogeneity of DAGs to jointly perform task scheduling and cache management. Dagon relies on three key mechanisms: DAG-aware task assignment, which considers the dependency structure and heterogeneous resource demands to reduce potential resource fragmentation; sensitivity-aware delay scheduling, which prevents executors from waiting long for tasks insensitive to locality; and priority-aware caching, which makes cache eviction and prefetching decisions based on the stage priority determined by DAG-aware task assignment. We have implemented Dagon in Apache Spark. Evaluation on a testbed shows that Dagon improves job completion time by up to 42% and CPU utilization by up to 46%, compared to GRAPHENE plus MRD.
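The priority-aware caching idea in the abstract above can be sketched as follows. This is a minimal illustrative model, not Dagon's actual implementation: each cached block is tagged with the priority of the stage that will consume it next, and when the cache is full the block belonging to the lowest-priority stage is evicted first (in contrast to LRU, which ignores the DAG). Class and method names are hypothetical.

```python
class PriorityAwareCache:
    """Toy cache that evicts by consumer-stage priority rather than recency.

    This is an assumption-laden sketch of the idea described in the paper's
    abstract, not code from Dagon itself.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}  # block_id -> priority of the stage that needs it next

    def put(self, block_id, stage_priority):
        # When full, evict the block whose consuming stage has the
        # lowest priority in the DAG-derived stage ordering.
        if len(self.blocks) >= self.capacity and block_id not in self.blocks:
            victim = min(self.blocks, key=self.blocks.get)
            del self.blocks[victim]
        self.blocks[block_id] = stage_priority

    def contains(self, block_id):
        return block_id in self.blocks
```

For example, with a capacity of two, inserting blocks for stages with priorities 5, 1, and 3 evicts the priority-1 block, whereas LRU would have evicted the priority-5 block simply because it arrived first.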
To improve the availability of data in the cloud and avoid the risk of vendor lock-in, multi-cloud storage is attracting increasing attention. However, accessing data from the cloud usually has some disadvantages such as...
Person search aims to locate target individuals in large image databases captured by multiple non-overlapping cameras. Existing models primarily rely on spatial feature extraction to capture fine-grained local details...
Obstacle avoidance is one of the most complex tasks for autonomous driving systems, and one that many cutting-edge end-to-end learning-based methods have overlooked. The difficulties stem from the integrated process of de...
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge. By shifting the load of cloud computing to individu...
ISBN (print): 9781665432078
The cache can improve the DSP processor's access speed to external memory and address the "Storage Wall" problem. Designing an efficient and flexible cache module plays a vital role in improving the memory access efficiency and overall performance of the DSP. Based on an analysis of DSP storage-hierarchy design requirements, and in line with the actual pipeline structure of the DSP processor SWIFT developed independently by our laboratory, in this paper we design and implement the first-level instruction and data caches. For the L1 D-Cache, a conflict detection and handling module is designed to support four parallel Load/Store access requests. The size of the cache can be flexibly configured according to actual application requirements to achieve a balance between power consumption and latency. To write back the modified data left in the cache before the cache size is changed, the Cacheclean instruction is designed. Finally, module-level functional verification and logic synthesis are carried out. The results show that the cache functions as expected, and the critical path after logic synthesis optimization meets the requirement of 1 GHz frequency.
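The role of the Cacheclean instruction described above can be illustrated with a small behavioral model: before a write-back cache is resized, every dirty (modified) line must be flushed to backing memory, or the modifications are lost when the lines are discarded. This is an illustrative software model under stated assumptions, not the SWIFT hardware design; all names are hypothetical.

```python
class WriteBackCache:
    """Behavioral sketch of a resizable write-back cache.

    Models only the flush-before-resize invariant that the abstract's
    Cacheclean instruction enforces; no real cache indexing is modeled.
    """

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = {}   # index -> (data, dirty flag)
        self.memory = {}  # backing store

    def store(self, index, data):
        # A store hit leaves modified data in the cache and marks it dirty.
        self.lines[index] = (data, True)

    def cacheclean(self):
        # Write back every modified line, mirroring the Cacheclean
        # instruction issued before a size reconfiguration.
        for index, (data, dirty) in self.lines.items():
            if dirty:
                self.memory[index] = data
                self.lines[index] = (data, False)

    def resize(self, new_num_lines):
        self.cacheclean()   # flush first so no dirty data is lost
        self.lines.clear()  # old line storage is discarded on reconfigure
        self.num_lines = new_num_lines
```

In this model, storing a value, then resizing, leaves the value safely in backing memory even though the cache contents are discarded; omitting the `cacheclean()` call in `resize` would silently drop the modified data.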
As one of the fundamental tasks in computer vision, semantic segmentation plays an important role in real-world applications. Although numerous deep learning models have made notable progress on several mainstream dat...