This paper presents a learning-based control policy design for point-to-point vehicle positioning in the urban environment via BeiDou navigation. When navigating in urban canyons, the multipath effect is a form of interference that causes the navigation signal to drift and thus severely impacts vehicle localization, owing to the reflection and diffraction of the BeiDou signals. Accordingly, the authors formulate the navigation control system with unknown vehicle dynamics as an optimal control-seeking problem over a linear discrete-time system, and the point-to-point localization control is modeled and handled by leveraging off-policy reinforcement learning for feedback design. The proposed learning-based design guarantees optimality with prescribed performance and also stabilizes the closed-loop navigation system, without full knowledge of the vehicle dynamics. It is seen that the proposed method can withstand the impact of the multipath effect while satisfying the prescribed convergence rate. A case study demonstrates that the proposed algorithms effectively drive the vehicle to a desired setpoint under the multipath effect introduced by actual experiments of BeiDou navigation in the urban environment.
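The feedback-gain learning described above can be illustrated with a minimal off-policy Q-learning (policy-iteration) sketch for a discrete-time linear-quadratic problem. The dynamics, cost weights, exploration signal, and sample counts below are invented for illustration and are not the paper's actual vehicle model; the learner sees only state transitions, never the matrices A and B:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "unknown" vehicle dynamics: used only to generate data,
# never shown to the learner.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(2), np.array([[1.0]])

def sym_features(z):
    """Quadratic basis for z' H z with H symmetric (upper-triangle parameters)."""
    n = len(z)
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(n) for j in range(i, n)])

def unpack(theta, n):
    """Rebuild the symmetric H matrix from its upper-triangle parameters."""
    H, idx = np.zeros((n, n)), 0
    for i in range(n):
        for j in range(i, n):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    return H

# One exploratory batch (behavior policy = pure noise), reused off-policy.
X, U, Xn = [], [], []
x = np.array([1.0, -1.0])
for _ in range(200):
    u = rng.normal(scale=2.0, size=1)
    xn = A @ x + B @ u
    X.append(x); U.append(u); Xn.append(xn)
    x = xn

K = np.zeros((1, 2))                      # initial stabilizing gain, u = -K x
for _ in range(15):                       # policy iteration on the fixed batch
    Phi, c = [], []
    for x, u, xn in zip(X, U, Xn):
        z = np.concatenate([x, u])
        zn = np.concatenate([xn, -K @ xn])     # target-policy action at x'
        Phi.append(sym_features(z) - sym_features(zn))
        c.append(x @ Qc @ x + u @ Rc @ u)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    H = unpack(theta, 3)
    K = np.linalg.solve(H[2:, 2:], H[2:, :2])  # greedy improvement step
```

Each policy-iteration step fits the quadratic Q-function of the current gain from the same exploratory batch (hence off-policy) and then improves the gain greedily, so no explicit model of the dynamics is ever identified.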
Avatars, as promising digital representations and service assistants of users in Metaverses, can enable drivers and passengers to immerse themselves in 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks include a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources. It is inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising solution to decrease computation overhead and reduce task processing latency, while the high mobility of vehicles makes it challenging for vehicles to independently make avatar migration decisions based on current and future vehicle status. To address these challenges, in this paper we propose a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then design the multi-agent proximal policy optimization (MAPPO) approach as the MADRL algorithm for the avatar task migration problem. To overcome the slow convergence resulting from the curse of dimensionality and the non-stationarity caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach via sequential decision-making models for the efficient representation of relationships among agents. Finally, to motivate terrestrial or non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and to ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach
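A toy version of the per-vehicle migration decision can be sketched as follows; the latency model, uplink rates, and distance penalty are invented for illustration and stand in for the paper's actual task and channel models (each agent's reward would be the negative of this latency):

```python
TARGETS = ("local", "rsu", "uav")
PROC = {"local": 0.8, "rsu": 0.2, "uav": 0.3}   # seconds per unit of compute (assumed)
RATE = {"rsu": 40.0, "uav": 25.0}               # Mbit/s uplink (assumed)

def migration_latency(task_mbit, task_units, action, distance_km):
    """Transmission + processing delay of one avatar-task migration decision.
    Farther vehicles see a degraded link (illustrative linear penalty)."""
    if action == "local":
        tx = 0.0
    else:
        tx = (task_mbit / RATE[action]) * (1.0 + distance_km)
    return tx + task_units * PROC[action]

# a vehicle agent greedily picks the lowest-latency target for its task
size_mbit, units, dist_km = 6.0, 1.5, 0.4
choice = min(TARGETS, key=lambda a: migration_latency(size_mbit, units, a, dist_km))
```

In the paper's setting this decision is learned by MAPPO agents from partial observations rather than computed greedily, but the trade-off it captures (transmission cost of offloading vs. local processing cost) is the same.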
We study a solo adaptation-based protocol design problem for distributed synchronization of event-triggered linear multi-agent systems (MAS) over general directed graphs. We show that a solo adaptive gain, a locally s...
In this paper, an adaptive neural tracking control design is investigated for uncertain multiagent systems (MASs) with actuator dead zone and sensor faults. The adaptive control technique based on neural networks is e...
Multipath signal recognition is crucial to the ability of the BeiDou Navigation Satellite System (BDS) to provide high-precision absolute-position services. However, most existing approaches to this issue involve supervised machine learning (ML) methods, and it is difficult to move to unsupervised multipath signal recognition because of the limitations in signal labeling. Inspired by the powerful unsupervised feature extraction of autoencoders, we propose a new deep learning (DL) model for BDS signal recognition that places a long short-term memory (LSTM) module in series with a convolutional sparse autoencoder to create a new autoencoder architecture. First, we propose to capture the temporal correlations in long-duration BeiDou satellite time-series signals by using the LSTM module to mine the temporal change patterns in the time series. Second, we develop a convolutional sparse autoencoder method that learns a compressed representation of the input data, which then enables downscaled and unsupervised feature extraction from long-duration BeiDou satellite series signals. Finally, we add an l_(1/2) regularizer to the objective function of our DL model to remove redundant neurons from the neural network while ensuring recognition accuracy. We tested our proposed approach on a real urban canyon dataset, and the results demonstrated that our algorithm achieves better classification performance than two ML-based methods (e.g., 11% better than a support vector machine) and two existing DL-based methods (e.g., 7.26% better than convolutional neural networks).
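The LSTM-plus-convolutional-sparse-autoencoder data flow can be sketched in plain NumPy. All dimensions, random weights, the soft-threshold sparsity, and the exact form of the l_(1/2) term below are illustrative assumptions about shapes and data flow, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(2)

def lstm_forward(x_seq, W, U, b, h0, c0):
    """Minimal LSTM cell unrolled over a feature sequence (gate order: i, f, o, g)."""
    h, c, H = h0, c0, []
    for x in x_seq:
        z = W @ x + U @ h + b
        i, f, o, g = np.split(z, 4)
        i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
        H.append(h)
    return np.stack(H)                       # (T, hidden)

def conv1d_sparse_encode(H, kernels, thresh=0.5):
    """Valid 1-D convolution over time, then soft-thresholding for sparsity."""
    T, d = H.shape
    k = kernels.shape[1] // d
    codes = []
    for t in range(T - k + 1):
        a = np.tanh(kernels @ H[t:t + k].ravel())
        codes.append(np.sign(a) * np.maximum(np.abs(a) - thresh, 0.0))
    return np.stack(codes)                   # (T-k+1, n_kernels)

def l_half_penalty(weights, lam=1e-3):
    """l_(1/2) regularizer used to prune redundant neurons."""
    return lam * np.sum(np.sqrt(np.abs(weights)))

T, din, dh, nk, k = 40, 3, 8, 6, 5           # toy sizes (assumed)
x_seq = rng.normal(size=(T, din))            # e.g. C/N0, elevation, azimuth series
W = rng.normal(scale=0.3, size=(4 * dh, din))
U = rng.normal(scale=0.3, size=(4 * dh, dh))
H = lstm_forward(x_seq, W, U, np.zeros(4 * dh), np.zeros(dh), np.zeros(dh))
kernels = rng.normal(scale=0.3, size=(nk, k * dh))
code = conv1d_sparse_encode(H, kernels)      # sparse unsupervised representation
```

The LSTM mines temporal change patterns, the convolutional encoder compresses them, and the l_(1/2) penalty would be added to the reconstruction loss during training to drive redundant weights toward zero.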
Deep matrix factorization (DMF) has been demonstrated to be a powerful tool for capturing the complex hierarchical information of multi-view data. However, existing multi-view DMF methods mainly explore the consistency of multi-view data while neglecting the diversity among different views as well as the high-order relationships of the data, resulting in the loss of valuable complementary information. In this paper, we design a hypergraph regularized diverse deep matrix factorization (HDDMF) model for multi-view data representation, to jointly utilize multi-view diversity and a high-order manifold in a multi-layer factorization framework. A novel diversity enhancement term is designed to exploit the structural complementarity between different views of the data. Hypergraph regularization is utilized to preserve the high-order geometric structure of the data in each view. An efficient iterative optimization algorithm is developed to solve the proposed model with a theoretical convergence analysis. Experimental results on five real-world data sets demonstrate that the proposed method significantly outperforms state-of-the-art multi-view learning approaches.
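One concrete form a hypergraph regularization term can take is the smoothness penalty tr(V' L V) built from the normalized hypergraph Laplacian; whether HDDMF uses exactly this normalization is an assumption, but the construction itself is standard:

```python
import numpy as np

def hypergraph_laplacian(Hinc, w=None):
    """Normalized hypergraph Laplacian: L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2},
    where Hinc is the vertex-hyperedge incidence matrix and w the edge weights."""
    n, m = Hinc.shape
    w = np.ones(m) if w is None else w
    dv = Hinc @ w                          # vertex degrees
    de = Hinc.sum(axis=0)                  # hyperedge degrees
    Dv = np.diag(1.0 / np.sqrt(dv))
    return np.eye(n) - Dv @ Hinc @ np.diag(w / de) @ Hinc.T @ Dv

# toy incidence matrix: 4 samples, hyperedges {0,1,2} and {2,3} (invented data)
Hinc = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
L = hypergraph_laplacian(Hinc)
V = np.array([[1., 0.], [1., 0.], [1., 0.], [0., 1.]])   # a view-specific factor
penalty = float(np.trace(V.T @ L @ V))     # smoothness term added to the loss
```

Minimizing this penalty encourages samples sharing a hyperedge to have similar factor rows, which is how the high-order geometry of each view is preserved.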
ISBN: (Print) 9781713871378
Obtaining high-accuracy vehicle localization is essential for safe navigation in automated vehicles; however, complex urban environments result in inaccurate positioning because of multipath and non-line-of-sight errors, making precise positioning in complex environments an important and unsolved problem. Different from conventional model-based approaches, which require rigid assumptions on sensors and noise models, data-driven artificial intelligence approaches can provide new solutions for Global Navigation Satellite System (GNSS) positioning problems in multipath environments. Recent learning-based works employ the generalization ability of deep neural networks to model these complex errors in urban environments and provide effective positioning corrections for rough GNSS localizations by learning from a vehicle trajectory-based Reinforcement Learning (RL) environment. However, existing RL-based approaches employ insufficient features as input and cannot take advantage of the vehicle trajectory information. To tackle these issues, we propose a novel Deep Reinforcement Learning (DRL)-based positioning correction framework with fused features considering the vehicle trajectory information and GNSS features in a multi-input environment. To help the agent learn the correction policy efficiently with comprehensive inputs, we develop a new positioning correction RL environment with multi-input observations, which augments GNSS device readings of different satellites as complementary to the vehicle trajectory, and moreover employs a correction advantage reward to help the agent learn an efficient policy for positioning correction. To fully utilize the comprehensive multi-input observations in the proposed environment, we employ LSTM modules to separately extract features and fuse brief states from the input time-series data. Finally, to address long-term positioning accuracy, we construct the learning model based on an actor-critic DRL structure with the cumulative reward
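The two ingredients named above, a correction-advantage-style reward and a multi-input observation, can be sketched directly; the feature layout, satellite count, and padding scheme below are assumptions for illustration:

```python
import numpy as np

def correction_advantage_reward(p_rough, p_corrected, p_true):
    """Positive exactly when the correction moved the fix closer to ground truth."""
    return float(np.linalg.norm(p_true - p_rough)
                 - np.linalg.norm(p_true - p_corrected))

def build_observation(traj_window, sat_feats, max_sats=8):
    """Multi-input observation: a recent trajectory window fused with per-satellite
    rows (e.g. C/N0, elevation, pseudorange residual), zero-padded to max_sats."""
    pad = np.zeros((max_sats - sat_feats.shape[0], sat_feats.shape[1]))
    return np.concatenate([traj_window.ravel(),
                           np.vstack([sat_feats, pad]).ravel()])

p_true = np.array([0.0, 0.0])
p_rough = np.array([5.0, 3.0])            # multipath-biased GNSS fix
action = np.array([-4.0, -2.5])           # agent's learned correction
r = correction_advantage_reward(p_rough, p_rough + action, p_true)

obs = build_observation(np.zeros((4, 2)), np.ones((5, 3)))
```

In the paper the two observation branches are encoded by separate LSTMs before fusion; here they are simply concatenated to show the shapes involved.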
Automatic diagnosis of Coronavirus disease 2019 (COVID-19) using chest computed tomography (CT) images is of great significance for preventing its spread. However, it is difficult to precisely identify COVID-19 due to...
ISBN: (Print) 9780936406367
Time to First Fix (TTFF) is one of the most important measures of GNSS receiver performance. Under the hot-start mode, the receiver needs only 1 to 2 seconds for positioning. However, under the cold-start mode, it takes more than 20 seconds to receive the ephemeris needed to obtain an accurate satellite position. This is because current satellite orbit prediction methods cannot provide accurate position information based on an old ephemeris, which is detrimental to fast positioning. To address this challenge, we propose a novel Transformer-based model for long-term orbit corrections. The model makes full use of the time-series information of historical orbit errors and is able to capture long-term dependencies from orbit error series of different lengths. The predicted orbits can be corrected according to the predicted error components, thereby achieving accurate satellite orbit prediction. Owing to the complex temporal patterns in the orbit error series, the features and information hidden within them are often difficult to extract and utilise. To tackle these intricate temporal patterns, we introduce an inner series decomposition block to separate the seasonal part and the trend-cyclical part of the orbit error series, which also endows the model with an immanent decomposition capacity. In addition, to extract the cross-time dependency within the curves as well as the cross-dimension dependency between the curves, we propose a Dual-Branch Attention (DBA) layer. Integrating the proposed series decomposition block and the DBA layer into the Transformer, we build a Dual-Branch Transformer with series decomposition (DBSD-Transformer) for long-term orbit corrections. Experimental results on satellite orbit error data in 2022 from the BeiDou MEO-10 satellite show the effectiveness of our method. Compared to the prediction method based on the Kepler model, t
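A series decomposition block of the kind described (moving-average trend-cyclical part plus seasonal residual) is compact; the kernel size, edge padding, and the synthetic "orbit error" series below are assumptions:

```python
import numpy as np

def series_decomp(x, kernel=25):
    """Split a 1-D series into seasonal and trend-cyclical parts via a moving
    average; edges are padded by repeating the end values (odd kernel assumed)."""
    pad = kernel // 2
    xp = np.concatenate([np.repeat(x[:1], pad), x, np.repeat(x[-1:], pad)])
    trend = np.convolve(xp, np.ones(kernel) / kernel, mode="valid")
    return x - trend, trend               # (seasonal, trend), same length as x

# toy orbit-error series: slow drift + periodic component + noise (invented)
t = np.arange(200, dtype=float)
rng = np.random.default_rng(3)
x = 0.01 * t + np.sin(2 * np.pi * t / 24) + 0.05 * rng.normal(size=200)
seasonal, trend = series_decomp(x)
```

In the full model such a block is applied inside the Transformer so that attention operates on the seasonal residual while the trend is accumulated separately.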
ISBN: (Print) 9780936406367
The global navigation satellite system (GNSS) positioning performance is significantly degraded in urban canyons due to the blocking of direct signals and errors caused by reflected signals. Recent studies have used deep learning or machine learning methods to distinguish line-of-sight (LOS) and non-line-of-sight (NLOS) signals to solve multipath problems. However, these approaches still face challenges. The visibility of satellites is relatively stable over a short period, but the features of their signals change to varying degrees. Existing methods focus on identifying satellite visibility at a single epoch, which fails to capture the effect of the temporal features of satellite signals on visibility and ignores the complex associations between satellites across multiple continual epochs. To address this challenge, this paper develops a novel channel-independent patch transformer neural network with temporal information for improving the prediction of GNSS satellite visibility. Firstly, to capture the influence of individual satellite features on the classification of NLOS signals, we adopt the concept of independent channels to disentangle the various satellite features. In this way, we construct temporal variations for each feature and then independently assess the effect of these feature variations on satellite visibility. Secondly, to account for the association of satellites across multiple continuous epochs, we partition the constructed temporal window feature sequences into collections of sub-sequence-level patches. This patch-level structural design preserves the semantic associations of satellites across multiple epochs while maintaining the ability to attend across a sufficient number of epochs. Finally, based on the ideas of channel independence and patching, we develop a novel channel-independent patch transformer (CIPT) neural network with temporal information for predicting satellite visibility, which can not only learn the effect of individual f
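The channel-independent patching step can be sketched as follows; the patch length, stride, and feature layout are illustrative assumptions, not the paper's actual hyperparameters:

```python
import numpy as np

def channel_independent_patches(X, patch_len=4, stride=2):
    """X: (epochs T, features C) -> (C, n_patches, patch_len); one patch sequence
    per channel, so each satellite feature's temporal variation is modeled
    independently before any cross-channel mixing."""
    T, C = X.shape
    starts = range(0, T - patch_len + 1, stride)
    return np.stack([[X[s:s + patch_len, c] for s in starts] for c in range(C)])

# e.g. a 16-epoch window of 3 per-satellite features (C/N0, elevation, residual)
X = np.arange(48, dtype=float).reshape(16, 3)
P = channel_independent_patches(X)
```

Each per-channel patch sequence would then be fed to the transformer, so attention spans many epochs at patch granularity while channels stay disentangled.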