ISBN (print): 9789819608041; 9789819608058
Federated learning (FL) is a distributed machine learning (ML) framework that trains a new global model by aggregating clients' locally trained models without sharing users' original data. Federated learning as a service (FLaaS) offers a privacy-preserving approach for training ML models on devices with varying computational resources. Most proposed FL-based methods train the same model on all client devices regardless of their computational resources. However, in practical Internet of Things (IoT) scenarios, IoT devices with limited computational resources may not be capable of training the models hosted by client devices with greater hardware performance. Moreover, most existing FL frameworks that aim to aggregate heterogeneous models are designed for independent and identically distributed (IID) data, which may make it hard to reach the target algorithm performance in non-IID scenarios. To address these problems in hierarchical networks, in this paper we propose HAF-Edge, a heterogeneous aggregation framework for hierarchical edge systems. In our proposed framework, we introduce a communication-efficient model aggregation method designed for FL systems with two-level model aggregation running at the edge and cloud levels. This approach enhances the convergence rate of the global model by leveraging selective knowledge transfer during the aggregation of heterogeneous models. To the best of our knowledge, this work is the first to address the problem of aggregating heterogeneous models within hierarchical FL systems spanning IoT, edge, and cloud environments. We conducted extensive experiments to validate the performance of our proposed method. The evaluation results demonstrate that HAF-Edge significantly outperforms state-of-the-art methods.
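The abstract does not give HAF-Edge's aggregation rule or its selective knowledge transfer; the two-level structure it describes, however, builds on FedAvg-style weighted averaging first at the edge and then at the cloud. A minimal sketch of that baseline, with hypothetical client parameters and data sizes:

```python
import numpy as np

def weighted_average(models, sizes):
    """FedAvg-style aggregation: average parameter vectors weighted by local data size."""
    total = sum(sizes)
    return sum((n / total) * m for m, n in zip(models, sizes))

# Level 1: each edge server aggregates the models of its attached IoT clients.
edge_groups = [
    ([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [10, 30]),  # edge 0: two clients
    ([np.array([5.0, 6.0])], [20]),                            # edge 1: one client
]
edge_models, edge_sizes = [], []
for client_models, client_sizes in edge_groups:
    edge_models.append(weighted_average(client_models, client_sizes))
    edge_sizes.append(sum(client_sizes))

# Level 2: the cloud aggregates the edge models, weighted by total edge data.
global_model = weighted_average(edge_models, edge_sizes)
```

HAF-Edge additionally has to reconcile models of different architectures at each level, which plain parameter averaging cannot do; the sketch only shows the hierarchical weighting that any such scheme sits on top of.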
ISBN (print): 9781665445139
Blockchain and the proof-of-work (PoW) distributed consensus protocol rely on peer-to-peer (P2P) networking. We build a PoW P2P simulator for the modeling and analysis of permissionless blockchain networking. Our simulator utilizes a built-in randomness generator for the simulations, has an easy-to-use interface and intuitive visualization, supports dynamic/programmable control and modification, and can generate simulation data for further processing. We publish our simulator as open source to facilitate its use in blockchain and P2P networking research, and especially recommend it for scalability research or preliminary testing. To highlight its features and capabilities, we use the simulator in this paper to analyze recent blockchain security research, including 51%, eclipse, partitioning, and DoS attacks.
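For flavor, here is a hypothetical toy model (not the published simulator's code) of the randomness at the core of such simulations: each block is won by a miner with probability proportional to its hash-rate share, which already makes visible why an attacker approaching 51% of the hash rate comes to dominate block production.

```python
import random

def simulate_blocks(hashrates, n_blocks, seed=7):
    """Toy PoW model: each block is won by miner i with probability
    proportional to its share of the total hash rate."""
    rng = random.Random(seed)          # fixed seed for reproducible runs
    miners = list(hashrates)
    weights = [hashrates[m] for m in miners]
    wins = {m: 0 for m in miners}
    for _ in range(n_blocks):
        wins[rng.choices(miners, weights=weights)[0]] += 1
    return wins

# An attacker with 30% of the hash rate wins roughly 30% of blocks;
# above 50% it would outpace the honest chain in expectation.
wins = simulate_blocks({"honest": 0.7, "attacker": 0.3}, 10_000)
```

A real P2P simulator additionally models topology, message propagation delay, and node churn, which is what makes eclipse and partitioning attacks expressible at all.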
This paper explores in depth the design of a computing-power trading platform applicable to government use, which aims to optimize the distribution and management of computing power resources through block...
Cloud computing services and gig economy platforms vary in flexibility, efficiency, and scalability due to their pricing schemes. Cloud computing services provide flexibility and cost control via pay-as-you-go, subscr...
ISBN (digital): 9781665481403
ISBN (print): 9781665481403
Sensor data is of crucial importance in many IoT scenarios. It is used for online monitoring as well as long-term data analytics, enabling countless use cases from damage prevention to predictive maintenance. Multivariate sensor time series data is acquired and initially stored close to the sensor, at the edge. It is also beneficial to summarize this data in windowed aggregations at different resolutions. A subset of the resulting aggregation hierarchy is typically sent to a cloud infrastructure, often via intermittent or low-bandwidth connections. Consequently, different views on the data exist on different nodes in the edge-to-cloud continuum. However, when querying this data, users are interested in a fast response and a complete, unified view on the data, regardless of which part of the infrastructure continuum they send the query to and where the data is physically stored. In this paper, we present a loosely coupled approach that enables fast range queries on a distributed and hierarchical sensor database. Our system only assumes the possibility of fast local range queries on a hierarchical sensor database. It does not require any shared state between nodes and thus degrades gracefully in case certain parts of the hierarchy are unreachable. We show that our system is suitable for driving interactive data exploration sessions on terabytes of data while unifying the different views on the data. Thus, our system can improve the data analysis experience in many geo-distributed scenarios.
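The distributed query protocol itself is not detailed in the abstract, but the underlying aggregation-hierarchy idea can be sketched on a single node: precompute windowed sums at coarser and coarser resolutions, then answer a range query from the fewest windows that tile it. Function names and the window factor below are illustrative, not the paper's API:

```python
def build_hierarchy(values, factor=10):
    """Precompute windowed sums at increasingly coarse resolutions.
    Level 0 is the raw data; each level above sums `factor` windows below it."""
    levels = [list(values)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([sum(prev[i:i + factor]) for i in range(0, len(prev), factor)])
    return levels

def range_sum(levels, lo, hi, factor=10):
    """Sum of values[lo:hi] using the fewest precomputed windows."""
    total = 0
    i = lo
    while i < hi:
        # Climb to the coarsest level whose window starting at i fits inside [i, hi).
        level, width = 0, 1
        while (i % (width * factor) == 0 and i + width * factor <= hi
               and level + 1 < len(levels)):
            level, width = level + 1, width * factor
        total += levels[level][i // width]
        i += width
    return total

values = list(range(1000))
levels = build_hierarchy(values)
q = range_sum(levels, 37, 813)
```

The same tiling argument is what lets a cloud node answer the coarse interior of a query locally and fetch only the fine-grained edges from the edge nodes.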
Mobile edge computing boosts the application performance on mobile devices by collaborating with cloud platforms. This paper studies the task offloading and computing resource allocation problem in a multibase, multis...
Agriculture consumes a significant proportion of water reserves in irrigated areas. Improving irrigation is becoming essential to reduce this high-water consumption by adapting supplies to crop needs and avoiding loss...
Indoor positioning systems (IPS) continue to encounter significant challenges in achieving meter-level accuracy, particularly in large and intricate environments such as airports, hospitals, and industrial sites. Desp...
This abstract discusses the transformative potential of Shor's algorithm and quantum cryptography in quantum computing. However, Shor's algorithm poses a threat to classical cryptographic systems such as RSA...
ISBN (print): 9798350300246
In-memory computing with resistive-switching random access memory (RRAM) crossbar arrays has the potential to overcome the major bottlenecks faced by digital hardware for data-heavy workloads such as deep learning. However, RRAM devices are subject to several non-idealities that cause significant inference accuracy drops compared with the software baseline accuracy. A critical one is the drift of the conductance states that appears immediately after the program-and-verify algorithms that are mandatory for accurate multi-level conductance operation. Support for drift models in state-of-the-art simulation tools for memristive computation-in-memory is currently at an early stage, since these tools overlook key device- and array-level parameters affecting drift resilience, such as the programming algorithm of the RRAM cells, the choice of target conductance states, and the weight-to-conductance mapping scheme. The goal of this paper is to fully expose these parameters to RRAM crossbar designers as a multi-dimensional optimization space for drift resilience. For this purpose, a simulation framework is developed, which provides suitable abstractions to propagate the effects of these RRAM crossbar configuration parameters to their ultimate implications for inference performance stability.
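The abstract does not specify which drift model the framework implements; a commonly used form in the memristive-device literature is the power law G(t) = G0 · (t/t0)^(−ν). A hedged sketch of how such drift propagates to a differentially mapped weight (the conductance targets and drift exponent ν below are hypothetical illustration values, not measured device data):

```python
def drifted_conductance(g0, t, t0=1.0, nu=0.05):
    """Power-law conductance drift: G(t) = G0 * (t / t0) ** (-nu), for t >= t0."""
    return g0 * (t / t0) ** (-nu)

# Differential weight mapping: w is proportional to G_pos - G_neg.
g_pos, g_neg = 50e-6, 10e-6            # siemens; hypothetical target states
w0 = g_pos - g_neg                     # weight right after program-and-verify
w_1h = (drifted_conductance(g_pos, 3600.0)
        - drifted_conductance(g_neg, 3600.0))  # weight one hour later
relative_error = (w_1h - w0) / w0      # equals (t/t0)**(-nu) - 1 here
```

With a single shared exponent, differential mapping only rescales every weight uniformly; in practice ν varies per device and per conductance state, which is exactly the kind of device- and array-level dependence the paper's configuration parameters are meant to expose.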