With the development of 5G mobile networks, demand for responsive network services has grown. To serve large volumes of data traffic and reduce the backhaul load caused by frequent requests for the same data (content), files can be pre-stored at the base station by edge devices, so that users obtain requested data directly from the local cache rather than remotely. However, changes in content popularity are difficult to capture, so data is updated frequently over the backhaul. To reduce the number of backhaul transfers and provide caching services for users with specific needs, proactive caching can be performed with users without affecting user activity. We propose a content caching strategy based on mobility prediction and joint user prefetching (MPJUP). The strategy decides which devices should prefetch data by predicting each user's position over time from their mobility, and then partitions part of the cache space for prefetched data based on the user experience gain. In addition, we propose to reduce the backhaul load by having users and edge cache devices cooperatively prefetch data, cutting the number of content backhaul transfers. Experimental analysis shows that our method further reduces the average delay and backhaul load, and that the prefetching method also scales to larger networks.
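The abstract above combines two ingredients: a mobility predictor that anticipates where a user will be, and a cache partition reserved for prefetched content. A minimal sketch of that combination follows; the class, method names, and the cache-split ratio are all illustrative assumptions, not details taken from the MPJUP paper.

```python
from collections import Counter, defaultdict

class MobilityPrefetcher:
    """Toy sketch: a first-order Markov mobility predictor plus a
    partial cache partition for prefetched data. Names and the
    split ratio are illustrative, not from the paper."""

    def __init__(self, cache_size, prefetch_fraction=0.25):
        # cell -> Counter of observed next cells
        self.transitions = defaultdict(Counter)
        self.cache_size = cache_size
        # Reserve part of the cache for prefetched content.
        self.prefetch_slots = int(cache_size * prefetch_fraction)

    def observe_move(self, from_cell, to_cell):
        """Record one observed user movement between cells."""
        self.transitions[from_cell][to_cell] += 1

    def predict_next(self, current_cell):
        """Return the most frequent historical successor cell, if any."""
        successors = self.transitions.get(current_cell)
        return successors.most_common(1)[0][0] if successors else None

    def prefetch(self, current_cell, popularity_by_cell):
        """Choose the most popular items at the predicted next cell,
        filling only the prefetch partition of the cache."""
        target = self.predict_next(current_cell)
        if target is None:
            return []
        ranked = sorted(popularity_by_cell.get(target, {}).items(),
                        key=lambda kv: kv[1], reverse=True)
        return [item for item, _ in ranked[:self.prefetch_slots]]
```

A real system would replace the first-order Markov model with the paper's time-dependent position prediction and size the partition from the measured user experience gain; the sketch only shows how the two pieces plug together.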
ISBN:
(Print) 9780769561493
Maximizing the level of parallelism in applications requires minimizing overheads due to load imbalance and the waiting time caused by memory latencies. Compiler optimization is one of the most effective ways to tackle this problem: the compiler can detect data dependencies in an application and analyze specific sections of code for parallelization potential. However, these compiler techniques are usually applied at compile time and therefore rely on static analysis, which is insufficient for achieving maximum parallelism and the desired application scalability. One way to address this challenge is to use runtime methods, implemented by deferring a certain amount of code analysis to runtime. In this research, we improve the performance of parallel applications generated by the OP2 compiler by leveraging HPX, a C++ runtime system, to provide runtime optimizations: asynchronous tasking, loop interleaving, dynamic chunk sizing, and data prefetching. The results were evaluated using an Airfoil application, which showed a 40-50% improvement in parallel performance.
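Two of the runtime optimizations named above, asynchronous tasking and runtime chunk sizing, can be illustrated with a small sketch. This uses Python threads as a stand-in for HPX tasks, and the helper names (`chunk_ranges`, `parallel_for`) are illustrative, not part of OP2 or HPX.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_ranges(n_iters, n_workers, min_chunk=1):
    """Split the iteration space [0, n_iters) into chunks whose size is
    chosen at call time (runtime), not fixed at compile time."""
    chunk = max(min_chunk, (n_iters + n_workers - 1) // n_workers)
    return [(lo, min(lo + chunk, n_iters)) for lo in range(0, n_iters, chunk)]

def parallel_for(body, n_iters, n_workers=4):
    """Run body(i) for i in [0, n_iters): each chunk becomes one
    asynchronous task, and results are gathered in iteration order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [
            pool.submit(lambda lo=lo, hi=hi: [body(i) for i in range(lo, hi)])
            for lo, hi in chunk_ranges(n_iters, n_workers)
        ]
        results = []
        for f in futures:
            results.extend(f.result())
        return results
```

In an actual HPX-based runtime the chunk size would adapt to measured task durations rather than being derived only from the worker count, but the structure, loop iterations packaged as futures that are joined later, is the same.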