Distributed computing is an active research field. Deep learning, the Internet of Things, and other technologies are advancing rapidly, demanding higher levels of computation, storage, and communication eff...
ISBN: (Print) 9798350377330; 9798350377323
Resource allocation is a key topic in distributed cloud systems, especially as a growing range of cloud products and service offerings adds decision complexity. Predicting usage and budgeting accordingly is a common challenge for cloud users. This paper studies industry practices on smart resource allocation in the cloud and proposes a novel data-driven approach that combines LSTM and K-means for cloud resource allocation.
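As a rough illustration of how clustering and forecasting can be combined for allocation, the sketch below groups synthetic tenant usage profiles with a minimal K-means and applies a per-cluster allocation rule. The dataset, dimensions, and the simple peak-plus-headroom policy standing in for the paper's LSTM forecaster are all assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain K-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each profile to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic hourly CPU-usage profiles for two tenant types (illustrative data)
rng = np.random.default_rng(1)
steady = 0.30 + 0.02 * rng.standard_normal((20, 24))
bursty = 0.10 + 0.80 * (np.arange(24) % 12 == 0) + 0.02 * rng.standard_normal((20, 24))
X = np.vstack([steady, bursty])

labels, centers = kmeans(X, k=2)

# Stand-in for the paper's LSTM forecaster: allocate each cluster its peak
# cluster-mean demand plus 20% headroom (a hypothetical allocation policy).
allocation = {j: float(centers[j].max()) * 1.2 for j in range(2)}
```

In a data-driven allocator along these lines, the clustering step groups tenants with similar usage shapes so that a forecaster can be specialized per group rather than per tenant.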
ISBN: (Print) 9798331521059
The proceedings contain 3 papers. The topics discussed include: energy-efficient hybrid cluster scheduling strategy for heterogeneous wireless sensor networks; graph-based anomaly detection and root cause analysis for microservices in cloud-native platform; and towards real-world deployment of NILM systems: challenges and practices.
ISBN: (Print) 9798350377712; 9798350377705
Localization is a critical technology for various applications ranging from navigation and surveillance to assisted living. Localization systems typically fuse information from sensors viewing the scene from different perspectives to estimate the target location while also employing multiple modalities for enhanced robustness and accuracy. Recently, such systems have employed end-to-end deep neural models trained on large datasets due to their superior performance and ability to handle data from diverse sensor modalities. However, such neural models are often trained on data collected from a particular set of sensor poses (i.e., locations and orientations). During real-world deployments, slight deviations from these sensor poses can result in extreme inaccuracies. To address this challenge, we introduce FlexLoc, which employs conditional neural networks to inject node perspective information to adapt the localization pipeline. Specifically, a small subset of model weights is derived from node poses at run time, enabling accurate generalization to unseen perspectives with minimal additional overhead. Our evaluations on a multimodal, multiview indoor tracking dataset showcase that FlexLoc improves the localization accuracy by almost 50% in the zero-shot case (no calibration data available) compared to the baselines. The source code of FlexLoc is available at https://***/nesl/FlexLoc.
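The core idea of deriving a small subset of weights from node poses at run time can be sketched as a layer whose weights come from a tiny hypernetwork conditioned on the pose vector. The class name, dimensions, and pose encoding below are illustrative assumptions, not the FlexLoc architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class PoseConditionedLinear:
    """Linear layer whose weight matrix and bias are generated at run time
    from the sensor pose by a small hypernetwork (illustrative sketch)."""

    def __init__(self, pose_dim, in_dim, out_dim):
        # hypernetwork parameters: map a pose vector to weights and bias
        self.Wh = 0.1 * rng.standard_normal((pose_dim, in_dim * out_dim))
        self.bh = 0.1 * rng.standard_normal((pose_dim, out_dim))
        self.in_dim, self.out_dim = in_dim, out_dim

    def __call__(self, x, pose):
        W = (pose @ self.Wh).reshape(self.in_dim, self.out_dim)
        b = pose @ self.bh
        return x @ W + b

layer = PoseConditionedLinear(pose_dim=6, in_dim=8, out_dim=4)
features = rng.standard_normal(8)
pose_a = np.array([0.0, 0.0, 1.5, 0.0, 0.0, 0.0])   # x, y, z, roll, pitch, yaw
pose_b = np.array([2.0, 1.0, 1.5, 0.0, 0.0, 1.57])

out_a = layer(features, pose_a)
out_b = layer(features, pose_b)
# same input features, different node pose -> different adapted outputs
```

Because only the small hypernetwork depends on the pose, a deployment at an unseen sensor pose changes just this subset of weights while the rest of the model stays fixed.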
Consensus filtering algorithms are commonly used for distributed state estimation. In addressing the challenge of unknown colored noise in distributed sensor networks, we propose a consensus filter based on Gaussian p...
ISBN: (Print) 9798350333398
The purpose of this paper is to solve the reconstruction problem of sparse vectors in distributed compressed sensing with Multiple Measurement Vectors (MMV). Many traditional methods rely on specific sparsity conditions, such as assuming that the signals of different channels are jointly sparse. This paper relaxes this restriction and assumes only some correlation between the signals of different channels. We use Gated Recurrent Units (GRU) to capture this correlation and reconstruct the signal. Moreover, previous reconstruction algorithms treat the signals of different channels equally, even though different channels may influence the reconstruction result differently. To address this, we design an attention mechanism that adaptively learns and adjusts the influence factors of the different channel signals, optimizing the reconstruction results. The proposed method applies only to the decoder and places no additional requirements on the encoder, which can therefore use conventional compressive sampling. Extensive experiments demonstrate the efficiency and robustness of the proposed method on both simulated and real data.
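The decoder-side combination of a recurrent unit per channel with attention-weighted fusion can be sketched as follows. The GRU forward pass is standard; the scoring function, dimensions, and random data are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (forward pass only, random untrained weights)."""
    def __init__(self, in_dim, hid_dim):
        s = 0.1
        self.Wz, self.Uz = s * rng.standard_normal((in_dim, hid_dim)), s * rng.standard_normal((hid_dim, hid_dim))
        self.Wr, self.Ur = s * rng.standard_normal((in_dim, hid_dim)), s * rng.standard_normal((hid_dim, hid_dim))
        self.Wn, self.Un = s * rng.standard_normal((in_dim, hid_dim)), s * rng.standard_normal((hid_dim, hid_dim))

    def __call__(self, x, h):
        z = sigmoid(x @ self.Wz + h @ self.Uz)        # update gate
        r = sigmoid(x @ self.Wr + h @ self.Ur)        # reset gate
        n = np.tanh(x @ self.Wn + (r * h) @ self.Un)  # candidate state
        return (1 - z) * h + z * n

def channel_attention(H, v):
    """Softmax influence factors over per-channel hidden states H."""
    scores = np.tanh(H) @ v
    w = np.exp(scores - scores.max())
    return w / w.sum()

# Per-channel measurement sequences: 3 channels, 5 steps, 4-dim measurements
Y = rng.standard_normal((3, 5, 4))
cell = GRUCell(in_dim=4, hid_dim=8)
H = np.zeros((3, 8))
for t in range(5):
    for c in range(3):
        H[c] = cell(Y[c, t], H[c])      # GRU tracks cross-step correlation

v = rng.standard_normal(8)
weights = channel_attention(H, v)       # adaptive per-channel influence
fused = weights @ H                     # attention-weighted channel fusion
```

A trained decoder along these lines would map `fused` through further layers to the reconstructed sparse vector; the point of the sketch is only that channels contribute with learned, unequal weights rather than equally.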
ISBN: (Digital) 9781665488792
ISBN: (Print) 9781665488792
This document introduces a tutorial planned for the DINPS 22 workshop. The tutorial will cover Holium, a protocol based on IPFS, IPLD and WebAssembly, dedicated to data transformation pipelines. It will be given by Philippe Metals, one of the protocol's designers and a core developer of its reference implementation.
ISBN: (Print) 9798350358513; 9798350358520
Data Analytics (DA) and Machine Learning (ML) algorithms today are inside many applications and systems. They provide state-of-the-art anomaly detection, pattern recognition, health monitoring, predictive maintenance, or help to optimize and calibrate parameter combinations. However, they usually require strong compute resources. It is therefore a challenge to deploy the corresponding algorithms on edge devices or in embedded systems, such as on cloud-decoupled drives and drive systems, as these usually have strict resource constraints. In this work, we look at the deployment of DA/ML algorithms on combinations of resource-constrained edge and device environments. We present our framework for the structured and guided definition, setup, distribution, and deployment of such DA/ML algorithms. A guided workflow leads a user through algorithm specification and configuration. While doing so, it extracts the information necessary for distributing and deploying the DA/ML algorithm to a suitable device-edge combination. With this solution, a DA/ML algorithm can be executed for sensor data processing and, in the ML context, for inference, on device-edge combinations without permanent cloud connection, e.g., on a drive/gateway combination on a cruise ship, so that it can be used for real-time anomaly detection or pattern analysis.
ISBN: (Print) 9798350387964; 9798350387957
Synchronization of sensor devices is crucial for concurrent data acquisition. Numerous protocols have emerged for this task, and for some multi-sensor setups to operate synchronized, a conversion between deployed protocols is needed. This paper presents a bare-metal implementation of a Tri-Level Sync signal generator on a microcontroller unit (MCU) synchronized to a master clock via the IEEE 1588 Precision Time Protocol (PTP). Cameras can be synchronized by locking their frame generators to the Tri-Level Sync signal. As this synchronization depends on a stable analog signal, a careful design of the signal generation based on a PTP-managed clock is required. The limited tolerance of a camera to clock frequency adjustments for continuous operation imposes rate limits on the PTP controller. Simulations using a software model demonstrate the controller instabilities that result from rate limiting. This problem is addressed by introducing a linear prediction mode to the controller, which estimates the realizable offset change during rate-limited frequency alignment. By adjusting the frequency in a timely manner, a large overshoot of the controller can be avoided. Additionally, a cascading controller design that decouples the PTP from the clock update rate proved to be advantageous to increase the camera's tolerable frequency change. This paper demonstrates that an MCU is a viable platform to perform PTP-synchronized Tri-Level Sync generation. Our open source implementation is available for use by the research community at https://***/IMS-AS-LUH/t41-tri-sync-ptp.
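The overshoot caused by rate limiting can be reproduced with a toy simulation: a proportional clock servo whose per-interval frequency change is slew-limited cannot ramp its correction back down fast enough and crosses zero. The gains, units, and controller structure below are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def servo(rate_limit_ppm, steps=200, kp=0.5):
    """Toy proportional clock servo with a slew-rate limit on frequency
    adjustments. Units: offset in microseconds, frequency in ppm; at a
    1 s sync interval, a 1 ppm frequency error moves the offset by 1 us."""
    offset_us, freq_ppm = 1000.0, 0.0
    trace = []
    for _ in range(steps):
        target_ppm = -kp * offset_us          # desired frequency correction
        step = np.clip(target_ppm - freq_ppm, -rate_limit_ppm, rate_limit_ppm)
        freq_ppm += step                      # rate-limited slew
        offset_us += freq_ppm                 # clock drifts by freq * dt
        trace.append(offset_us)
    return np.array(trace)

free = servo(rate_limit_ppm=1e9)    # effectively unlimited adjustments
tight = servo(rate_limit_ppm=50.0)  # camera-tolerable adjustment rate
# the unlimited servo converges monotonically toward zero offset, while the
# rate-limited one overshoots zero before eventually settling
```

A linear prediction mode in the sense of the paper would estimate where the offset will land by the end of the rate-limited frequency ramp and begin reversing the adjustment early, suppressing exactly the overshoot this sketch exhibits.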
ISBN: (Print) 9798350339864
Modern IEEE 802.11 (Wi-Fi) networks extensively rely on multiple-input multiple-output (MIMO) to significantly improve throughput. To correctly beamform MIMO transmissions, the access point needs to frequently acquire a beamforming matrix (BM) from each connected station. However, the size of the matrix grows with the number of antennas and subcarriers, resulting in an increasing amount of airtime overhead and computational load at the station. Conventional approaches come with either excessive computational load or loss of beamforming precision. For this reason, we propose SplitBeam, a new framework where we train a split deep neural network (DNN) to directly output the BM given the channel state information (CSI) matrix as input. The DNN is designed with an additional "bottleneck" layer to "split" the original DNN into a head model and a tail model, respectively executed by the station and the access point. The head model generates a compressed representation of the BM, which is then used by the AP to produce the BM using the tail model. We formulate and solve a bottleneck optimization problem (BOP) to keep computation, airtime overhead, and bit error rate (BER) below application requirements. We perform extensive experimental CSI collection with off-the-shelf Wi-Fi devices in two distinct environments and compare the performance of SplitBeam with the standard IEEE 802.11 algorithm for BM feedback and the state-of-the-art DNN-based approach LB-SciFi. Our experimental results show that SplitBeam reduces the beamforming feedback size and computational complexity by respectively up to 81% and 84% while maintaining BER within about 10^-3 of existing approaches. We also implement the SplitBeam DNNs on FPGA hardware to estimate the end-to-end BM reporting delay, and show that the latter is less than 10 milliseconds in the most complex scenario, which matches the target channel sounding period in realistic multiuser MIMO scenarios. To allow full reproducibility, we will r...
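The head/bottleneck/tail split can be sketched with two linear maps: the station compresses the CSI into a small quantized bottleneck vector that travels over the air, and the AP reconstructs the BM from it. All dimensions, the int8 quantization, and the random untrained weights below are illustrative assumptions, not the SplitBeam architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only (not the paper's): flattened CSI in, BM out
CSI_DIM, BOTTLENECK, BM_DIM = 512, 32, 256

W_head = 0.05 * rng.standard_normal((CSI_DIM, BOTTLENECK))  # runs on the station
W_tail = 0.05 * rng.standard_normal((BOTTLENECK, BM_DIM))   # runs on the AP

def head(csi):
    """Station side: compress CSI into the bottleneck representation,
    quantized to int8 to shrink the over-the-air feedback."""
    z = np.tanh(csi @ W_head)                 # bounded in (-1, 1)
    return np.round(z * 127).astype(np.int8)  # 1 byte per bottleneck element

def tail(z_q):
    """AP side: reconstruct the beamforming matrix from the bottleneck."""
    z = z_q.astype(np.float32) / 127.0
    return z @ W_tail

csi = rng.standard_normal(CSI_DIM)
feedback = head(csi)   # only BOTTLENECK bytes cross the air interface
bm = tail(feedback)
```

The bottleneck width is the knob the paper's bottleneck optimization problem tunes: here 32 bytes of feedback replace 512 raw CSI values, and widening or narrowing `BOTTLENECK` trades airtime overhead against reconstruction fidelity.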