The global mobile traffic consumption has been growing sharply, driven by the surge of smart IoT devices and the adoption of 5G technology. Further, the COVID-19 pandemic has accelerated the digital transformation of work and study, leading to tremendous traffic demands. For these reasons, mobile operators face pressing resource management problems related to the storage and transfer overhead of the large amounts of metadata needed to analyse user demands and plan the infrastructure required to accommodate them. In this context, intelligent measurement collection, precise geo-spatial traffic analysis, and forecasting are becoming essential to ensuring high performance and cost-efficiency of cellular network deployments. Modelling dynamic traffic patterns, and especially the continuously changing dependencies between different base stations, which previous studies overlook, is challenging. Traditional algorithms struggle to process large volumes of data and to extract deep insights that help elucidate mobile traffic demands at fine granularity, as well as how these demands will evolve in the future. Therefore, in this thesis we harness the powerful hierarchical feature learning abilities of Deep Learning (DL) techniques in both the spatial and temporal domains and develop solutions for precise city-scale mobile traffic analysis and forecasting. Firstly, we design Spider, a mobile traffic measurement collection and reconstruction framework that reduces the cost of measurement collection and infers traffic consumption with high accuracy, despite working with sparse information. In particular, we train a reinforcement learning agent to selectively sample subsets of target mobile coverage areas and tackle the large action space problem specific to this setting. We then introduce a lightweight neural network model to reconstruct the traffic consumption based on historical sparse measurements. Our proposed framework outperforms existing solutions
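A minimal, self-contained sketch of the sampling-and-reconstruction idea outlined in this abstract. The grid size, sampling budget, epsilon-greedy cell selection, and neighbour-averaging reconstruction are illustrative assumptions standing in for Spider's reinforcement learning agent and lightweight neural network; they are not the thesis implementation.

```python
# Hypothetical sketch: selectively sample a city traffic grid, then reconstruct
# the unsampled cells from the sparse measurements.
import numpy as np

rng = np.random.default_rng(0)
GRID, BUDGET = 20, 80                   # 20x20 coverage grid, 80 probed cells per step

def sample_cells(value_estimates, eps=0.1):
    """Pick BUDGET cells to measure: mostly the highest-value cells, plus a few
    random cells for exploration (a crude stand-in for the RL agent)."""
    flat = value_estimates.ravel()
    n_random = int(eps * BUDGET)
    greedy = np.argsort(flat)[-(BUDGET - n_random):]           # top-value cells
    explore = rng.choice(flat.size, n_random, replace=False)   # random exploration
    mask = np.zeros(flat.size, dtype=bool)
    mask[greedy] = True
    mask[explore] = True
    return mask.reshape(value_estimates.shape)

def reconstruct(sparse, mask):
    """Fill unmeasured cells with the mean of measured neighbours
    (placeholder for the lightweight reconstruction network)."""
    out = sparse.copy()
    for i, j in zip(*np.where(~mask)):
        nbrs = [sparse[x, y]
                for x in range(max(0, i - 1), min(GRID, i + 2))
                for y in range(max(0, j - 1), min(GRID, j + 2))
                if mask[x, y]]
        out[i, j] = np.mean(nbrs) if nbrs else sparse[mask].mean()
    return out

true_traffic = rng.gamma(2.0, 2.0, size=(GRID, GRID))      # synthetic demand map
mask = sample_cells(true_traffic + rng.normal(0, 0.1, (GRID, GRID)))
sparse = np.where(mask, true_traffic, 0.0)
estimate = reconstruct(sparse, mask)
print("MAE on unmeasured cells:", np.abs(estimate - true_traffic)[~mask].mean())
```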
ISBN (digital): 9798350376968
ISBN (print): 9798350376975
This study focuses on the issue of ID management when researchers change their organizational affiliations, aiming at the effective utilization of research data in educational and research organizations. Although research data management systems are being developed at each organization, the continuous use of research data remains an issue for researchers. For example, when a researcher moves from one organization to another, the research data that was previously available to them becomes unavailable. There is therefore a need for a system that allows researchers to continue using their research data smoothly even after moving between organizations. To solve this problem, the authors propose a cross-organizational identity proofing system that ensures the continuity of IAL2/AAL2 and completes the ID transfer online. The idea is to use researcher ID numbers, ORCID, and public personal authentication to verify the user's identity when linking the IDs before and after the transfer. To facilitate this, the study performs identity verification using an Individual Number Card (a Japanese governmental system) to enable easy ID linkage between IdPs with high identity confirmation and authentication strength. The implementation has been carried out in the development environment of Orthros, an ID linkage service provided by the National Institute of Informatics (NII), utilizing the API service of xID Co., which is a solution for linking with the Individual Number Card. This approach is expected to overcome the ID management issues associated with researcher transfers and improve the efficiency of research data use.
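A hypothetical sketch of the linkage decision this abstract describes: approve an online ID transfer only when the ORCID iD and the Individual-Number-Card-verified identity match across the two IdPs and IAL2/AAL2 continuity holds. The data model, hash-based matching rule, and function names are illustrative assumptions, not the Orthros or xID Co. APIs.

```python
# Hypothetical cross-organizational ID-linkage check (not the actual system).
import hashlib
from dataclasses import dataclass

@dataclass
class IdpAccount:
    idp: str                 # e.g. "old-university.example" / "new-university.example"
    subject: str             # pairwise subject identifier at that IdP
    orcid: str               # researcher ID (ORCID iD) asserted by the account
    identity_hash: str       # digest of attributes verified via the Individual Number Card
    ial: int                 # identity assurance level achieved at proofing time
    aal: int                 # authentication assurance level of the login used

def verified_identity_hash(family_name: str, given_name: str, birthdate: str) -> str:
    """Digest of card-verified attributes so the two IdPs never exchange raw PII."""
    return hashlib.sha256(f"{family_name}|{given_name}|{birthdate}".encode()).hexdigest()

def link_accounts(old: IdpAccount, new: IdpAccount) -> dict:
    """Approve the transfer only when the ORCID iD and the card-verified
    identity match and both accounts preserve IAL2/AAL2 continuity."""
    if old.orcid != new.orcid:
        raise ValueError("ORCID iDs differ; refuse automatic linkage")
    if old.identity_hash != new.identity_hash:
        raise ValueError("card-verified identities differ; refuse automatic linkage")
    if min(old.ial, new.ial) < 2 or min(old.aal, new.aal) < 2:
        raise ValueError("IAL2/AAL2 continuity not satisfied")
    return {"linked_subjects": [old.subject, new.subject], "orcid": old.orcid}

# Usage with made-up attributes and the ORCID documentation example iD.
h = verified_identity_hash("Yamada", "Taro", "1990-01-01")
old = IdpAccount("old-university.example", "sub-123", "0000-0002-1825-0097", h, 2, 2)
new = IdpAccount("new-university.example", "sub-456", "0000-0002-1825-0097", h, 2, 2)
print(link_accounts(old, new))
```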
Delay tolerant networks may become unexpectedly partitioned due to node mobility or variation in signal strength. However, the models most widely used in related work are generally very simplistic. In order to exploit intelligent forwarding algorithms, a novel node mobility model for delay tolerant networks is presented to map reality with more accuracy, and several approaches are introduced to analyze the network structure, such as n-cliques, n-clans, degree, closeness and betweenness. Our results showed that centralization became smaller when wireless connections were taken into account, meaning that a considerable number of nodes became potential relays under the new network structure.
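An illustrative sketch (not the authors' code) of how the structural measures named above can be computed on a toy DTN contact graph with networkx. The edge list and the simple Freeman-style centralization index are assumptions; the index uses the exact star-graph normalization only for degree centrality.

```python
# Toy contact graph: nodes are mobile devices, edges are observed contacts.
import networkx as nx

G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5), (5, 6)])

def centralization(scores):
    """Freeman-style index of how strongly one node dominates the graph
    (0 = perfectly even, 1 = star-like), from normalized centrality scores."""
    n = len(scores)
    c_max = max(scores.values())
    return sum(c_max - c for c in scores.values()) / (n - 2)

for name, scores in [
    ("degree", nx.degree_centrality(G)),
    ("closeness", nx.closeness_centrality(G)),
    ("betweenness", nx.betweenness_centrality(G)),
]:
    print(f"{name:12s} centralization = {centralization(scores):.3f}")

# Cohesive subgroups: maximal cliques as a starting point for n-clique/n-clan analysis.
print("maximal cliques:", list(nx.find_cliques(G)))
```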
This article investigates a control system within the context of sixth-generation wireless networks. The remote control performance optimization confronts the technical challenges that arise from the intricate interac...
The growing amount of data and advances in data science have created a need for a new kind of cloud platform that provides users with flexibility, strong security, and the ability to couple with supercomputers and edg...
This one-day hybrid workshop builds on previous feminist CSCW workshops to explore feminist theoretical and methodological approaches that have provided us with useful tools to see things differently and make space fo...
Last-Level Cache (LLC) represents the bulk of a modern CPU processor’s transistor budget and is essential for application performance as LLC enables fast access to data in contrast to much slower main memory. Problematically, technology constraints make it infeasible to scale LLC capacity to meet the ever-increasing working set size of the applications. Thus, future processors will rely on effective cache management mechanisms and policies to get more performance out of the scarce LLC capacity. Applications with large working set size often exhibit streaming and/or thrashing access patterns at LLC. As a result, a large fraction of the LLC capacity is occupied by dead blocks that will not be referenced again, leading to inefficient utilization of the LLC capacity. To improve cache efficiency, the state-of-the-art cache management techniques employ prediction mechanisms that learn from the past access patterns with an aim to accurately identify as many dead blocks as possible. Once identified, dead blocks are evicted from LLC to make space for potentially high reuse cache blocks. In this thesis, we identify variability in the reuse behavior of cache blocks as the key limiting factor in maximizing cache efficiency for state-of-the-art predictive techniques. Variability in reuse prediction is inevitable due to numerous factors that are outside the control of LLC. The sources of variability include control-flow variation, speculative execution and contention from cores sharing the cache, among others. Variability in reuse prediction challenges existing techniques in reliably identifying the end of a block’s useful lifetime, thus causing lower prediction accuracy, coverage, or both. To address this challenge, this thesis aims to design robust cache management mechanisms and policies for LLC in the face of variability in reuse prediction to minimize cache misses, while keeping the cost and complexity of the hardware implementation low. To that end, we propose two cache man
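An illustrative sketch, not the thesis design: a toy set-associative LLC with a signature-based reuse predictor in the spirit of SHiP/dead-block prediction. The cache geometry, the saturating vote counters, and the access mix are assumptions; the point is only to show how predicted-dead blocks can be parked at the victim end so streaming data leaves the cache quickly.

```python
# Toy LLC model with signature-based dead-block prediction.
from collections import OrderedDict, defaultdict

WAYS, SETS, LINE = 4, 16, 64
reuse_votes = defaultdict(lambda: 1)          # signature -> saturating counter (0 = dead)
cache = [OrderedDict() for _ in range(SETS)]  # per-set recency order, front = next victim

def access(addr, signature):
    """Simulate one LLC access; returns True on a hit."""
    s = cache[(addr // LINE) % SETS]
    tag = addr // (LINE * SETS)
    if tag in s:
        s[tag] = (signature, True)                 # block proved useful: remember the reuse
        s.move_to_end(tag)                         # promote to most-recently-used
        reuse_votes[signature] = min(reuse_votes[signature] + 1, 3)
        return True
    if len(s) == WAYS:                             # set is full: evict the current victim
        _, (victim_sig, was_reused) = s.popitem(last=False)
        if not was_reused:                         # train on a dead block
            reuse_votes[victim_sig] = max(reuse_votes[victim_sig] - 1, 0)
    s[tag] = (signature, False)
    if reuse_votes[signature] == 0:                # predicted dead: park at the victim end
        s.move_to_end(tag, last=False)
    return False

# Interleave a streaming scan (unique lines, signature 0xA) with a small hot
# working set (8 lines, signature 0xB): after the first dead-block evictions the
# scan is inserted at the victim end while the hot lines stay resident.
hits = sum(access(0x10000 + LINE * (i // 2), 0xA) if i % 2 == 0
           else access(LINE * ((i // 2) % 8), 0xB)
           for i in range(4000))
print("hit rate:", hits / 4000)
```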
COVID-19 is a contagious infection that has severe effects on the global economy and our daily lives. Diagnosis of COVID-19 is of importance for consultants, patients, and ***. In this study, we use the deep learning network AlexNet as the backbone and enhance it in the following two aspects: 1) adding batch normalization to help accelerate the training, reducing the internal covariance shift; 2) replacing the fully connected layer in AlexNet with three classifiers: SNN, ELM, and ***. We thereby obtain three novel models from the deep COVID network (DC-Net) framework, named DC-Net-S, DC-Net-E, and DC-Net-R, respectively. In comparison, we find that the proposed DC-Net-R achieves an average accuracy of 90.91% on a private dataset (available upon email request) comprising 296 images, while its specificity reaches 96.13%, the best performance among all three proposed models. In addition, we show that our DC-Net-R also performs much better than other existing algorithms in the literature.
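A minimal PyTorch sketch of the DC-Net idea described above: an AlexNet-style backbone with batch normalization after each convolution, and a swappable classifier head in place of AlexNet's fully connected layers. The layer sizes and the simple linear head are assumptions standing in for the SNN/ELM/randomized classifiers; this is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

def conv_bn(c_in, c_out, **kw):
    """Convolution followed by batch normalization (the first enhancement)."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, **kw),
                         nn.BatchNorm2d(c_out),
                         nn.ReLU(inplace=True))

class DCNetSketch(nn.Module):
    def __init__(self, head: nn.Module):
        super().__init__()
        self.features = nn.Sequential(
            conv_bn(3, 64, kernel_size=11, stride=4, padding=2), nn.MaxPool2d(3, 2),
            conv_bn(64, 192, kernel_size=5, padding=2), nn.MaxPool2d(3, 2),
            conv_bn(192, 384, kernel_size=3, padding=1),
            conv_bn(384, 256, kernel_size=3, padding=1),
            conv_bn(256, 256, kernel_size=3, padding=1), nn.MaxPool2d(3, 2),
            nn.AdaptiveAvgPool2d((6, 6)),
        )
        self.head = head            # second enhancement: replaceable classifier

    def forward(self, x):
        return self.head(torch.flatten(self.features(x), 1))

# Stand-in head; SNN-, ELM-, or other randomized classifiers would slot in here.
model = DCNetSketch(nn.Linear(256 * 6 * 6, 2))
print(model(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 2])
```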
In this note, we investigate the robustness of Nash equilibria (NE) in multi-player aggregative games with coupling constraints. There are many algorithms for computing an NE of an aggregative game given a known aggre...
Several sources report that the electricity sector emits almost a quarter of the world's greenhouse gases each year. It is therefore one of the important sectors to take into account to limit global warming. Indian ...