Wireless Sensor Networks (WSNs) consist of spatially distributed sensor nodes that collaborate with each other. However, WSNs are vulnerable because the wireless medium is unpredictable. Several conventional approaches ...
Nowadays, data parallelism is widely applied to train on large datasets in distributed deep learning clusters, but it suffers from costly global parameter updates at batch barriers. Performance imbalance among...
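As a point of reference, the following minimal NumPy sketch (not taken from the paper) illustrates the synchronous update pattern mentioned above: each worker computes a gradient on its own data shard, and all workers block at the batch barrier while the gradients are averaged into one global update. The toy least-squares model, the four-worker split, and the learning rate are illustrative assumptions.

```python
import numpy as np

def local_gradient(w, x_shard, y_shard):
    # Least-squares gradient computed independently on one worker's data shard.
    return 2.0 * x_shard.T @ (x_shard @ w - y_shard) / len(x_shard)

def sync_data_parallel_step(w, shards, lr=0.01):
    grads = [local_gradient(w, x, y) for x, y in shards]  # runs in parallel in practice
    g_global = np.mean(grads, axis=0)                     # batch barrier: global all-reduce
    return w - lr * g_global                              # identical update on every worker

rng = np.random.default_rng(0)
X, y = rng.normal(size=(256, 8)), rng.normal(size=256)
shards = [(X[i::4], y[i::4]) for i in range(4)]           # four hypothetical workers
w = np.zeros(8)
for _ in range(10):
    w = sync_data_parallel_step(w, shards)
```

The averaging step is where every worker must wait for the slowest one, which is why performance imbalance at the barrier is costly.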
ISBN (digital): 9798350369441
ISBN (print): 9798350369458
Unmanned Aerial Vehicles (UAVs) are widely used across various industries and in surveillance and communication services. However, concerns regarding their potential misuse have prompted the development of counter-drone measures. In this paper, we propose a counter-UAV approach centered on radio frequency (RF) signal sensing. Upon detection of an RF signal, our system employs a Short-Time Fourier Transform (STFT)-based spectrogram (SP) generation process. The SP is further refined through adaptive windowing and logarithmic tuning to extract multi-intensity features. To classify the complex RF time-domain signals and STFT spectrograms, we utilize two deep learning classifiers, RF-Network and SP-Network, enabling multi-class classification with deep neural networks (DNNs). To enhance the overall accuracy of our model, we leverage an ensemble neural network (EN-Net) that combines the predictions of the RF-Network and SP-Network classifiers. Fusing a single sensor's data in both the time and frequency domains provides complementary information to the DNNs, improving robustness, reducing overfitting, and increasing model performance. Our results demonstrate a notable improvement in accuracy: a 36% increase for multi-class models compared to single-class models. This demonstrates the effectiveness of our EN-Net model in addressing the security threats posed by UAVs through advanced RF signal analysis and classification.
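For concreteness, here is a hedged Python sketch of the two signal views the abstract describes: an STFT log-spectrogram computed from a raw RF capture, and a simple late-fusion ensemble that averages the class probabilities of two classifiers (stand-ins for RF-Network and SP-Network). The window length, epsilon, sample rate, and placeholder probabilities are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import stft

def log_spectrogram(rf_samples, fs, nperseg=256, eps=1e-8):
    # STFT magnitude on a log scale; the log compression exposes low-intensity features.
    _, _, Z = stft(rf_samples, fs=fs, nperseg=nperseg)
    return np.log10(np.abs(Z) + eps)

def ensemble_predict(p_rf, p_sp):
    # Late fusion: average the class probabilities of the time-domain and
    # spectrogram classifiers, then take the most likely class.
    return np.argmax((p_rf + p_sp) / 2.0, axis=-1)

fs = 1_000_000                                   # hypothetical 1 MS/s RF capture
rf = np.random.default_rng(0).normal(size=fs // 10)
sp = log_spectrogram(rf, fs)                     # input for the spectrogram branch
p_rf = np.array([[0.2, 0.7, 0.1]])               # placeholder softmax outputs
p_sp = np.array([[0.3, 0.5, 0.2]])
label = ensemble_predict(p_rf, p_sp)             # -> class 1
```

Averaging softmax outputs is only one common fusion rule; the paper's EN-Net may combine the two branches differently.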
Binary neural networks (BNNs) are widely used in speech recognition, image processing, and other fields to save memory and speed up computation. However, the accuracy of the existing binarization scheme in the realistic dat...
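As background, the following minimal sketch shows one common weight-binarization scheme (XNOR-Net-style sign plus a per-tensor scaling factor). The abstract does not specify which scheme the paper improves on, so this is context rather than the paper's method.

```python
import numpy as np

def binarize(weights):
    # Approximate W ~ alpha * sign(W), where alpha is the mean absolute weight;
    # this is the closed-form scale that minimizes the L2 approximation error.
    alpha = np.mean(np.abs(weights))
    return alpha, np.sign(weights)

w = np.random.default_rng(0).normal(size=(4, 4))
alpha, w_bin = binarize(w)
w_approx = alpha * w_bin   # 1-bit weights: roughly 32x less memory than float32
```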
ISBN (print): 9781665494571
The proceedings contain 343 papers. The topics discussed include: three-level compact caching for search engines based on solid state drives; a fine-grained volley gesture recognition method with direction independence; universal adversarial attack against 3d object tracking; efficient hardware redo logging for secure persistent memory; a cost-efficient metadata scheme for high-performance deduplication systems; high speed true random number generator controlled by logistic map; characterization and implication of edge WebAssembly runtimes; frend for edge servers: reduce server number! keeping service quality!; on-the-fly servers placement for online multiplayer games in the fog; advanced architecture design of high-radix router based on chiplet integration and IP reusability; embrace the conflicts: exploring the integration of single port memory in systolic array-based accelerators; visual sensitivity aware rate adaptation for video streaming via deep reinforcement learning; semi-supervised federated learning with non-IID data: algorithm and system design; on-demand intelligent routing algorithms for the deterministic networks; and distributed service placement in ultra-dense edge computing: a game-theoretical approach.
ISBN (digital): 9781665479271
ISBN (print): 9781665479271
As the focus on highly intelligent robots continues, a problem that cannot be ignored has emerged: resource constraints. Considering the trade-off between resource limitations and the level of intelligence, we focus on lightweight intelligence. This work is a further refinement of our previous work, a heterogeneous lightweight intelligent multi-robot system inspired by two natural creatures, the octopus and the ant. First, we propose a heterogeneous centralized-distributed architecture, which makes robot collaboration more flexible and non-redundant. Second, to reflect lightweight intelligence, we use the Raspberry Pi, a low-compute, low-power Internet of Things (IoT) device, as the processing platform, and we propose, for the first time, a quantitative definition of a lightweight intelligent system. Then, combining the centralized-distributed architecture with the lightweight computing platform, we propose an adapted algorithm called OCTOANTS and apply it to simultaneous localization and mapping (SLAM). The OCTOANTS architecture consists of one brain and eight tentacles, which can accomplish complex tasks through proper collaboration. Finally, we use heterogeneous cameras and heterogeneous algorithms to form a lightweight intelligent collaborative system that can run in the real world. On the low-end Raspberry Pi platform, our heterogeneous tentacles reach frame rates of 41 fps and 99.8 fps, respectively, while consuming only 2 W and 1.2 W. At the same time, our heterogeneous system is on average 7.2% more accurate than the state-of-the-art homogeneous system and can be applied to a wider range of application scenarios, demonstrating the superiority and feasibility of OCTOANTS.
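A purely illustrative sketch of the centralized-distributed pattern described above follows: several "tentacle" workers estimate poses and landmarks locally, and a single "brain" merges their results into one shared map. The class names, report fields, and merge rule are hypothetical; this is not the OCTOANTS implementation.

```python
from dataclasses import dataclass

@dataclass
class TentacleReport:
    robot_id: int
    pose: tuple        # (x, y, heading) estimated locally on the tentacle device
    landmarks: list    # landmark coordinates observed by this tentacle

class Brain:
    # Centralized node that fuses the lightweight local results into one shared map.
    def __init__(self):
        self.global_map = []

    def merge(self, reports):
        for report in reports:
            self.global_map.extend(report.landmarks)
        return self.global_map

brain = Brain()
reports = [TentacleReport(i, (0.0, float(i), 0.0), [(float(i), float(i) + 1.0)])
           for i in range(8)]          # one report per tentacle (eight in total)
shared_map = brain.merge(reports)
```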
ISBN (print): 9781728143514
Established edge computing paradigms tend to involve only powerful, server-like edge nodes, in static or semi-static topologies, within centrally controlled edge networks. In this paper, leveraging recent technological advancements and trends, we introduce a novel networking paradigm that employs resources provided by independent crowd peers, within a zone of local proximity, to establish collaborative networks for edge computing. We call this paradigm the Crowdsourced Edge. We detail the architecture of this novel paradigm, highlighting its unique characteristics and specific challenges, while also positioning it vis-a-vis existing edge computing concretisations. Finally, we demonstrate the functionality of the Crowdsourced Edge by presenting an ongoing use case regarding video-enhanced object search.
ISBN (digital): 9781728143514
ISBN (print): 9781728143514
In this paper, we present a distributed machine learning based intrusion detection system for the Internet of Things (IoT) that utilizes Blockchain technology. In particular, spectral partitioning is proposed to divide the IoT network into autonomous systems (AS), enabling traffic monitoring for intrusion detection (ID) to be performed by selected AS border area nodes in a distributed manner. The ID system is based on machine learning: a support-vector machine is trained on prominent IoT data sets to detect attackers. Furthermore, the integrity of the attacker list is ensured using Blockchain technology, which enables distributed sharing of attacker information among the AS border area nodes of the Blockchain network. Simulations are performed to evaluate different aspects of the proposed IoT system and demonstrate the potential of integrating machine learning based ID into a distributed, spectrally partitioned Blockchain network.
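To make the two ML building blocks named above concrete, here is a hedged sketch of spectral partitioning of a connectivity graph via the Fiedler vector, followed by a support-vector machine trained on labelled traffic features. The toy graph, feature dimensions, and labels are assumptions, not the paper's data or partitioning objective.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def spectral_bisection(adj):
    # Split the connectivity graph by the sign of the Fiedler vector,
    # i.e. the eigenvector of the second-smallest Laplacian eigenvalue.
    laplacian = np.diag(adj.sum(axis=1)) - adj
    _, vecs = eigh(laplacian)
    return vecs[:, 1] >= 0            # membership of the two partitions

adj = np.array([[0, 1, 1, 0, 0],      # toy 5-node IoT topology
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
partition = spectral_bisection(adj)

# A border node of each partition would then run a traffic classifier such as:
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 6))               # hypothetical flow features
y_train = (X_train[:, 0] > 0.5).astype(int)       # placeholder attack labels
detector = SVC(kernel="rbf").fit(X_train, y_train)
alerts = detector.predict(rng.normal(size=(5, 6)))
```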
ISBN (print): 9781665473156
Recently, it has become possible to run deep learning algorithms on edge devices such as microcontrollers, thanks to continuous improvements in neural network optimization techniques such as quantization and neural architecture search. Nonetheless, most of the embedded hardware available today still falls short of the requirements of running deep neural networks. As a result, specialized processors have emerged to improve the inference efficiency of deep learning algorithms. However, most are not designed for edge applications that require efficient, low-cost hardware. Therefore, we design and prototype a low-cost, configurable, sparse Neural Processing Unit (NPU). The NPU has a built-in buffer and a reshapable mixed-precision multiply-accumulator (MAC) array. The computing and memory resources of the NPU are parameterized, so different NPUs can be derived, and users can also configure the NPU at runtime to fully utilize its resources. In our experiments, a 200 MHz NPU with only 32 MACs is more than 32 times faster than the 400 MHz STM32H7 when inferring MobileNet-V1. Moreover, the derived NPUs can achieve roofline or even beyond-roofline performance: the buffer and reshapable MAC array push the NPU's attainable performance to the roofline, while sparsity support allows the NPU to go beyond it.
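As a back-of-the-envelope aid for the roofline claim, the sketch below computes the peak throughput implied by 32 MACs at 200 MHz and the attainable performance under the standard roofline model. The memory bandwidth figure is a made-up placeholder, since the paper's buffer bandwidth is not stated here.

```python
MACS, CLOCK_HZ = 32, 200e6                     # taken from the abstract
PEAK_OPS = 2 * MACS * CLOCK_HZ                 # 1 MAC = 2 ops -> 12.8 GOPS peak
MEM_BW_BYTES = 1.6e9                           # hypothetical buffer bandwidth (B/s)

def attainable_gops(arithmetic_intensity):
    # Roofline model: throughput is capped either by peak compute or by
    # memory bandwidth times arithmetic intensity (ops per byte moved).
    return min(PEAK_OPS, MEM_BW_BYTES * arithmetic_intensity) / 1e9

print(attainable_gops(2.0))    # 3.2 GOPS: bandwidth-bound region
print(attainable_gops(64.0))   # 12.8 GOPS: compute-bound, at the roof
```

Skipping zero-valued operands is what lets a sparse design exceed this dense-compute roof in effective throughput.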
ISBN (digital): 9798350369441
ISBN (print): 9798350369458
In recent years, the concept of the Internet of Things (IoT) has become ubiquitous across many applications. However, traditional localization methods face challenges in accurately measuring sensor node positions, particularly in scenarios involving wireless signal instability, energy harvesting, anchor mobility, latency, and dense environmental conditions. At the same time, these problems open new opportunities to apply machine learning (ML) and self-calibration to localization. To address these problems, we propose a new localization technique that reduces the localization error caused by the iterations of a triangulation method and drastically shortens the computation time for full network coverage, even in noisy environments. First, we present a localization technique based on a looped network. The link quality indicator (LQI) is used to formulate multiple feature vectors for the localization problem, which is solved using linear regression. Second, we examine the suitability of several algorithms under different trade-offs by testing localization accuracy for different network parameters, including anchor node density, additive noise, and wireless channel quality. The simulations show that adopting a support vector machine (SVM) and a nonlinear regression model with a radial basis function (RBF) kernel reduces the localization error to a root-mean-square error (RMSE) of 0.25 m. Furthermore, combining the two models improves localization accuracy by 48.5%.
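For intuition, here is a hedged sketch of the regression formulation described above: LQI-derived feature vectors mapped to node positions, compared for a linear model and an RBF-kernel support-vector regressor, with accuracy reported as RMSE. All data here is synthetic (a 1-D deployment with three hypothetical anchors); the paper's network parameters and its 0.25 m result are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 50.0, size=300)                 # ground-truth 1-D positions (m)
anchors = np.array([0.0, 25.0, 50.0])                        # three hypothetical anchor nodes
lqi = -np.abs(positions[:, None] - anchors) \
      + rng.normal(scale=1.0, size=(300, 3))                 # noisy distance-like LQI features

train, test = slice(0, 200), slice(200, 300)
models = {
    "linear regression": LinearRegression(),
    "SVR (RBF kernel)": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    model.fit(lqi[train], positions[train])
    rmse = np.sqrt(mean_squared_error(positions[test], model.predict(lqi[test])))
    print(f"{name}: RMSE = {rmse:.2f} m")
```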