The future machine-type communication in Internet-of-Things (IoT) systems involves a massive number of devices sporadically communicating with a base station (BS) equipped with multiple antennas. Detecting active devices and estimating their associated channels are crucial but challenging due to the large number of potential devices and the small fraction of active devices. Existing studies assume high-resolution analog-to-digital converters (ADCs) at the BS, while there is growing interest in implementing low-resolution ADCs, particularly one-bit ADCs, in massive multiple-input multiple-output (MIMO) systems. This paper focuses on the joint one-bit active device detection and channel estimation problem. We consider the maximum-likelihood approach and propose a novel expectation maximization (EM) algorithm with acceleration. On the theoretical side, we provide convergence and computational complexity analyses for the accelerated EM algorithm. The proposed method, evaluated through numerical simulations, outperforms benchmark algorithms in terms of both estimation accuracy and computational complexity.
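As a toy illustration of the one-bit measurement model this abstract describes, the sketch below quantizes the real and imaginary parts of the received pilot signal to ±1. All dimensions, the activity probability, and the noise level are hypothetical, and the simple correlation statistic at the end is a stand-in for, not a reproduction of, the paper's accelerated EM detector.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L, M = 100, 64, 8   # devices, pilot length, BS antennas (hypothetical)
eps = 0.05             # device activity probability (hypothetical)

# Sporadic activity and Rayleigh fading channels
active = rng.random(N) < eps
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
X = H * active[:, None]                  # rows of inactive devices are all zero

# Gaussian pilot matrix and noisy received signal before quantization
S = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2 * L)
Z = S @ X + 0.05 * (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M)))

# One-bit ADCs: only the signs of the real and imaginary parts survive
Y = np.sign(Z.real) + 1j * np.sign(Z.imag)

# Crude per-device correlation statistic (not the paper's EM detector)
scores = np.linalg.norm(S.conj().T @ Y, axis=1)
```

The challenge the paper addresses is visible here: `Y` retains only two bits per complex sample, so the likelihood of the data under the channel model is highly nonlinear, which is what motivates the EM treatment.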
This paper investigates multiple access schemes for uplink and downlink transmissions in cellular networks with massive Internet of Things (IoT) devices. Recall that single-carrier frequency division multiple access and orthogonal frequency division multiple access, which are orthogonal multiple access (OMA) schemes, have conventionally been adopted for uplink and downlink transmissions in narrow-band IoT, respectively. Unlike these OMA schemes, we propose two non-orthogonal multiple access (NOMA) schemes for cellular IoT with short-packet transmissions. In particular, a generalized expectation consistent signal recovery-based algorithm is proposed to estimate active devices, channel state information, and data in uplink transmission, where all active devices are allowed to transmit their pilots and data through the same resource block without authorization. On the other hand, the active devices estimated during uplink transmission are grouped for downlink transmission with a trade-off between performance and detection complexity. Additionally, the data error rates are analysed for both uplink and downlink transmissions with low-resolution analog-to-digital converters (ADCs), revealing the effects of critical parameters such as the estimation error, ADC bits, packet length, and message bits. Both simulation and analytical results demonstrate the excellent performance of the proposed NOMA schemes and algorithms, especially for active device, channel, and data estimation. More importantly, the obtained results show that the data error rate performance of downlink NOMA is superior to that of OMA when the message bits of devices in one group are selected following the proposed strategy.
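The error-rate analysis above depends on the number of ADC bits. As a minimal self-contained sketch (ours, not taken from the paper), a B-bit mid-rise uniform quantizer applied per real dimension looks like this:

```python
import numpy as np

def adc_quantize(x, bits, vmax=1.0):
    """B-bit mid-rise uniform quantizer with clipping range [-vmax, vmax].

    For complex baseband samples, apply it separately to the real and
    imaginary parts, as a pair of low-resolution ADCs would.
    """
    step = 2.0 * vmax / (2 ** bits)
    idx = np.floor(x / step)
    idx = np.clip(idx, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return (idx + 0.5) * step

x = np.linspace(-0.99, 0.99, 201)
y3 = adc_quantize(x, bits=3)   # 8 levels, step 0.25, error at most step/2 in range
```

With `bits=1` the same function reduces to `0.5 * vmax * sign(x)`, i.e. the one-bit ADC case; raising `bits` shrinks the in-range quantization error geometrically, which is the knob whose effect the paper's analysis characterizes.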
Massive machine-type communications (mMTC) is envisioned to be one of the pivotal scenarios in fifth-generation (5G) wireless communication, where the explosively emerging Internet-of-Things (IoT) applications have triggered the demand for services with low latency and massive connectivity. To this end, the grant-free random access paradigm has been proposed as a promising enabler that simplifies the connection procedure and significantly reduces access latency. In this paper, we propose to leverage the burgeoning reconfigurable intelligent surface (RIS) for grant-free massive access at millimeter-wave (mmWave) frequencies to further boost access performance. By attaching independently controllable phase shifts and reconfiguring and refracting the propagation of incident electromagnetic waves, the deployed RISs can provide additional diversity gain and enhance access channel quality. On this basis, to address the challenging active device detection (ADD) and channel estimation (CE) problem, we develop a joint ADD and CE (JADDCE) method that builds on the existing approximate message passing (AMP) algorithm with expectation maximization (EM) to exploit the structured common sparsity in traffic behaviors and cascaded channels. Finally, simulations are carried out to demonstrate the superiority of the proposed scheme.
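The common sparsity that JADDCE exploits — a device is either silent or active across all antennas simultaneously — can be illustrated with a group-sparse recovery sketch. The proximal-gradient (ISTA) loop below is a deliberately simplified stand-in for the AMP-EM machinery in the paper, with hypothetical dimensions and a hand-picked regularization weight:

```python
import numpy as np

def row_soft_threshold(X, tau):
    """Shrink each row of X toward zero by tau in l2 norm (group soft threshold)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def ista_row_sparse(Y, S, tau=0.05, n_iter=300):
    """Recover a row-sparse X from Y ≈ S @ X via proximal gradient descent."""
    step = 1.0 / np.linalg.norm(S, 2) ** 2
    X = np.zeros((S.shape[1], Y.shape[1]), dtype=complex)
    for _ in range(n_iter):
        X = row_soft_threshold(X - step * S.conj().T @ (S @ X - Y), step * tau)
    return X

rng = np.random.default_rng(1)
N, L, M, K = 60, 30, 8, 3                      # hypothetical sizes, K active devices
S = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2 * L)
X_true = np.zeros((N, M), dtype=complex)
active = rng.choice(N, K, replace=False)
X_true[active] = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

X_hat = ista_row_sparse(S @ X_true, S)
detected = np.argsort(np.linalg.norm(X_hat, axis=1))[-K:]
```

Because every antenna shares the same on/off pattern, the row-wise threshold pools evidence across all `M` columns, which is exactly why multiple BS antennas make activity detection easier.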
With the rapid development of the Internet of Things (IoT), the upcoming sixth-generation (6G) wireless network is required to support grant-free random access for a massive number of sporadic-traffic devices. In particular, at the beginning of each time slot, the base station (BS) performs joint activity detection and channel estimation (JADCE) based on the received pilot sequences sent from active devices. Due to the deployment of a large-scale antenna array and the existence of a massive number of IoT devices, conventional JADCE approaches usually have high computational complexity and need long pilot sequences. To address these challenges, this paper proposes a novel deep learning framework for JADCE in 6G wireless networks, which contains a dimension reduction module, a deep learning network module, an active device detection module, and a channel estimation module. Then, prior-feature learning followed by an adaptive-tuning strategy is proposed, where an inner network composed of expectation maximization (EM) and back-propagation is introduced to jointly tune the precision and learn the distribution parameters of the device state matrix. Finally, by designing the inner layer-by-layer and outer layer-by-layer training method, a feature-aided adaptive-tuning deep learning network is built. Both theoretical analysis and simulation results confirm that the proposed deep learning framework has low computational complexity and needs only short pilot sequences in practical scenarios.
Massive multiple-input multiple-output (MIMO) has great potential to improve spectrum efficiency in fifth-generation (5G) wireless communication systems. However, this efficiency is severely reduced by the heavy overhead of device detection and channel estimation for the large number of small data packets in the uplink channel. In this paper, we propose a novel transmission scheme that superimposes the training symbols for active device detection and channel estimation on the data symbols in the uplink transmission to improve efficiency. More specifically, to mitigate the cross interference among the superimposed signals, we propose to split the transmission into a training phase and a traffic phase, and then superimpose the training phase of the next transmission on the traffic phase of the current transmission. Furthermore, we optimize the power allocation ratio between the training phase and the traffic phase to obtain the best overall performance. Analytical and simulation results show that, with the help of spatial isolation among devices, the proposed transmission scheme can significantly improve transmission efficiency compared with existing schemes.
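The power-split trade-off can be illustrated with a toy effective-SNR model (the model and every parameter below are illustrative, not the paper's actual derivation): spending a fraction `rho` of the power on training reduces the channel-estimation error, but leaves less power for data, so the effective data SNR has an interior maximum.

```python
import numpy as np

def effective_snr(rho, snr, pilot_len):
    """Toy effective data SNR when a fraction rho of the power trains the channel.

    Channel-estimation MSE shrinks as rho * snr * pilot_len grows; the residual
    estimation error then acts as extra noise on the data symbols, which get
    the remaining (1 - rho) share of the power.
    """
    mse = 1.0 / (1.0 + rho * snr * pilot_len)
    return (1.0 - rho) * snr * (1.0 - mse) / (1.0 + (1.0 - rho) * snr * mse)

rhos = np.linspace(0.01, 0.99, 99)
gains = np.array([effective_snr(r, snr=10.0, pilot_len=16) for r in rhos])
best_rho = rhos[np.argmax(gains)]
```

A simple grid search suffices here because the objective is one-dimensional; the endpoint values are both poor (all-training or all-data), which is what makes the allocation ratio worth optimizing.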
ISBN (print): 9781728172361
We consider the problem of joint activity detection and channel estimation (JADCE) for massive grant-free random access with sporadic-traffic devices. Due to the deployment of a large-scale antenna array and a massive number of Internet-of-Things (IoT) devices in sixth-generation (6G) wireless networks, conventional JADCE approaches usually incur exceedingly high computational complexity and require long pilot sequences. To address these challenges, this paper develops a deep learning-based JADCE framework which consists of a dimension reduction module, a deep learning network module, an active device detection module, and a channel estimation module. In particular, a deep learning network is developed for the recovery of the device state matrix based on a designed denoiser, which can adaptively adjust the density parameters characterizing the device state matrix and effectively adapt to general complex channel settings with a finite amount of training data. Both theoretical analysis and simulation results confirm that the proposed deep learning framework is computationally efficient and achieves excellent performance in practical scenarios.
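A denoiser that adaptively adjusts the density parameter of the device state matrix can be sketched, under a Bernoulli-Gaussian prior assumption that is ours rather than the paper's, as the scalar MMSE shrinkage below; `eps` plays the role of the (learnable) activity density:

```python
import numpy as np

def bg_mmse_denoise(r, sigma2, eps, var_x=1.0):
    """MMSE estimate of x from r = x + noise, noise ~ N(0, sigma2),
    where x = 0 with prob. 1 - eps and x ~ N(0, var_x) with prob. eps."""
    v = var_x + sigma2
    # Posterior activity probability from the likelihood ratio of the components
    log_ratio = (np.log(eps / (1.0 - eps))
                 + 0.5 * np.log(sigma2 / v)
                 + 0.5 * r ** 2 * (1.0 / sigma2 - 1.0 / v))
    pi = 1.0 / (1.0 + np.exp(-log_ratio))
    return pi * (var_x / v) * r   # shrink toward zero when activity is unlikely
```

In an unfolded network of the kind the paper describes, parameters such as `eps` and the per-layer noise variance `sigma2` would become trainable and be tuned by back-propagation instead of being fixed by hand.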
ISBN (print): 9781728109626
Active device detection is a precondition for realizing grant-free random access in beyond-fifth-generation (B5G) cellular Internet-of-Things (IoT). However, due to the deployment of a large antenna array and the existence of a huge number of IoT devices, activity detection usually has high computational complexity and needs long pilot sequences. To overcome these challenges, we first propose a dimension reduction method that projects the original device state matrix onto a much lower-dimensional space. Then, we develop an optimized design framework with a logarithmic smoothing objective function and a coupled full-column-rank constraint. Under that framework, we transform the matrix of interest into a positive semidefinite matrix and propose a Riemannian trust-region algorithm to solve the problem in the complex field. Simulation results show that the proposed algorithm outperforms state-of-the-art algorithms in terms of device detection performance.