In the compressed sensing (CS) approach to unsourced random access (URA), each user's transmitted sequence consists of shorter sub-sequences encoded via sparse regression codes (SPARCs) over several consecutive sub-slots. The approximate message passing (AMP) technique is employed to decode the SPARCs. Stitching together each user's sub-messages decoded over distinct sub-slots requires an outer parity-check code that adds redundant bits interconnecting the sub-messages. This letter introduces a novel CS-based URA scheme which is free from the outer code. In the proposed scheme, the encoded sub-sequence of the first sub-slot acts as a temporary user identifier and also customises the sensing matrix used to encode the subsequent sub-messages. The technique allows each user to send several sub-messages (data streams) per sub-slot. Simulation results indicate that the proposed scheme outperforms the existing CS-based algorithms on the Gaussian channel. It is also superior to the other state-of-the-art URA schemes when the number of active users exceeds 200. The proposed analysis framework closely predicts the obtained numerical results.
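The stitching-free idea above can be illustrated with a minimal sketch: the first sub-slot's decoded bits act as a seed from which both transmitter and receiver deterministically derive the same user-specific sensing matrix, so later sub-messages are tied to the right user without outer parity bits. The function name, matrix alphabet (±1), and hashing step are illustrative assumptions, not the letter's exact construction.

```python
import hashlib
import random

def sensing_matrix(identifier_bits: str, rows: int, cols: int):
    """Derive a user-specific pseudo-random sensing matrix from the bits
    decoded in the first sub-slot (the temporary user identifier).
    Illustrative sketch only; the paper's actual matrix design may differ."""
    # Hash the identifier to a reproducible seed.
    seed = int(hashlib.sha256(identifier_bits.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    # +/-1 entries stand in for a random (e.g. Gaussian) sensing matrix.
    return [[rng.choice((-1, 1)) for _ in range(cols)] for _ in range(rows)]

# The receiver regenerates the identical matrix from the decoded identifier,
# so sub-messages encoded with it are implicitly associated with that user.
A_tx = sensing_matrix("0110", rows=4, cols=8)
A_rx = sensing_matrix("0110", rows=4, cols=8)
assert A_tx == A_rx
```

Because the matrix is a deterministic function of the identifier, no redundancy needs to be spent interconnecting sub-messages across sub-slots.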
ISBN:
(Print) 9781479934096
We demonstrate how successive refinement ideas can be used in point-to-point lossy compression problems in order to reduce complexity. We show two examples, the binary-Hamming and quadratic-Gaussian cases, in which a layered code construction results in a low-complexity scheme that attains optimal performance. For example, when the number of layers grows with the block length n, we show how to design an O(n log(n)) algorithm that asymptotically achieves the rate-distortion bound. We then show that the same scheme, used with a fixed number of layers, achieves successive refinement in the classical sense, while at the same time the second-order performance (i.e. dispersion) is also tight.
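The layered construction can be sketched in miniature with residual quantization: each layer re-quantizes the error left by the previous layers on a finer grid, so distortion drops layer by layer while earlier layers remain valid coarse descriptions. This is only a toy illustration of the successive-refinement principle for the quadratic-Gaussian case; the step sizes and uniform quantizer are assumptions, not the paper's code construction.

```python
import random

def refine(samples, layers, step=1.0):
    """Successively refine a reconstruction of `samples`: layer k quantizes
    the current residual with a uniform step of step / 2**k. Returns the
    final reconstruction and its mean squared error (distortion)."""
    recon = [0.0] * len(samples)
    for layer in range(layers):
        q = step / (2 ** layer)  # finer grid at each layer
        recon = [r + q * round((x - r) / q) for x, r in zip(samples, recon)]
    mse = sum((x - r) ** 2 for x, r in zip(samples, recon)) / len(samples)
    return recon, mse

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(1000)]
_, d1 = refine(xs, layers=1)
_, d3 = refine(xs, layers=3)
assert d3 < d1  # more layers -> strictly lower distortion
```

Each layer only processes the residual of the previous one, which is what keeps the per-layer work small and lets the overall complexity stay low even as the number of layers grows.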