Details
ISBN (digital): 9798350394474
ISBN (print): 9798350394481
An ultra-low power DFT architecture suitable for MIMO-OFDM applications is introduced in this study. Several radix-dependent (8/16/32) optimization methods are applied to create variable lengths of 4k/8k points, a necessity in today's 4G and 5G communication technologies, which suffer from a number of power and performance concerns with DFT processors. These technologies are widely used but have a higher energy footprint than others. Consequently, this work uses a novel approach called "SBOX" together with parallelism to boost performance and reduce power consumption, both essential for an advanced DFT architecture. Parallelism and the proposed SBOX model allow for fewer multiplications and faster processing. Cadence Allegro 17.2 is used for the design and verification of the full implementation, covering simulation, synthesis, layout, and power analysis. The 8-point S-box DFT occupies 105.15 µm², requires 7 mW of power, and achieves a throughput of 59.33 Gbps. The 16-point S-box DFT achieves a throughput of 57.33 Gbps with a footprint of 123.35 µm² and 9 mW of power. The 32-point S-box DFT achieves 56.45 Gbps on a chip that consumes 10.24 mW and occupies 132.35 µm². With MIMO in 5G technology, this layout becomes practical for use in telecommunications. Performance is enhanced over existing architectures thanks to the optimized radix-8 S-box DFT.
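The abstract does not disclose the SBOX internals, so they are not reproduced here. As a generic illustration of the principle behind radix-dependent optimization (decomposing the transform to cut the number of complex multiplications relative to a direct DFT), here is a minimal radix-2 sketch; the function names are illustrative, not from the paper:

```python
import cmath

def dft_naive(x):
    """Direct N-point DFT: O(N^2) complex multiplications."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft_radix2(x):
    """Radix-2 decimation-in-time FFT: O(N log N) multiplications.
    Higher radices (4/8/16/32) merge more butterflies per stage and
    cut the multiplication count further, which is the general idea
    behind radix-dependent optimization."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]  # one twiddle multiply per butterfly
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out
```

An 8-point radix-2 FFT uses 12 twiddle multiplications versus 64 for the direct DFT; radix-8 reduces this further by exploiting trivial ±1, ±j twiddles.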
Details
ISBN (digital): 9798331531539
ISBN (print): 9798331531546
Deep learning has been promising in recent years, learning complicated patterns and making data-driven decisions. Because of their computational and memory requirements, most deep learning training workloads utilize GPUs. Challenges such as growing data sets, model size, and computation depend both on the characteristics of the deep neural network being trained and on the resources of the GPUs used for that training. This study investigates those factors and traces how they shift. Since deep learning training is ultimately about performance, this research designed and implemented a predictive model to estimate end-to-end training time from the resources supplied by the user. The factors evaluated are model-specific parameters, including batch size, matrix size, and kernel dimensions, in addition to variables such as GPU memory bandwidth, core count, and clock speed. The research offers practical insights for deep learning practitioners by benchmarking GPUs such as NVIDIA's Tesla P100, K80, and M40 across several neural network architectures, including AlexNet, GoogLeNet, and VGG16. Furthermore, by creating predictive models of deep neural network training time on particular GPU setups, the research attempts to close the gap between theoretical expectations and real-world performance, offering guidance for cost-effective GPU selection and optimized resource allocation in deep learning workflows. The analysis of results for the different models on different GPUs shows an approximately 5-20% prediction error, with mitigation techniques to guide users toward appropriate resource allocation.
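The abstract does not specify the form of the predictive model. As a minimal sketch of how a training-step time might be estimated from model-specific work and GPU resource factors, the snippet below uses a roofline-style bound (step time limited by the slower of compute and memory traffic); the function name, the utilization factor, and all numbers are hypothetical, not taken from the paper:

```python
def predict_step_time(flops_per_sample, batch_size, bytes_per_sample,
                      peak_flops, mem_bandwidth, efficiency=0.5):
    """Roofline-style estimate of one training-step time in seconds.

    The step is modeled as the slower of two bounds:
      - compute time: total FLOPs divided by achievable compute rate
      - memory time: total bytes moved divided by achievable bandwidth
    `efficiency` is an assumed utilization factor (hypothetical, tunable
    per GPU), since real workloads rarely reach peak specifications.
    """
    compute_time = flops_per_sample * batch_size / (peak_flops * efficiency)
    memory_time = bytes_per_sample * batch_size / (mem_bandwidth * efficiency)
    return max(compute_time, memory_time)
```

End-to-end training time then follows by multiplying the per-step estimate by the number of steps (dataset size / batch size × epochs); a fitted regression over benchmark measurements, as the paper evidently builds, would replace the fixed efficiency factor with learned coefficients.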
Details
ISBN (print): 9781479914920
LDPC codes have been used intensively in various wireless communication applications due to their improved BER performance. The present paper summarizes state-of-the-art applications of short-length LDPC codes and proposes FPGA-based, application-specific hardware architectures for short-length LDPC decoders. The decoding algorithms considered for implementation are belief propagation and the min-sum algorithm. Because of its better BER performance, the proposed architecture makes use of the parallel computation capabilities offered by FPGA technology in order to implement the belief propagation algorithm. In spite of the iterative nature and increased computational complexity of LDPC decoding, the proposed architecture achieves the high throughput mandatory in real-time applications and data transmission. The belief-propagation-based LDPC decoder architecture relies on an approximation of the hyperbolic arctangent function for the check-node update.
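The check-node update at the heart of both algorithms can be sketched as follows. The exact arctanh approximation used in the proposed hardware is not reproduced here; the first function applies the exact tanh/atanh rule that such an approximation targets, and the second the min-sum simplification (function names are illustrative):

```python
import math

def check_node_bp(llrs):
    """Exact belief-propagation check-node update.

    For each edge i, the outgoing LLR is
        L_out_i = 2 * atanh( prod_{j != i} tanh(L_j / 2) ).
    Hardware implementations typically approximate the tanh/atanh pair,
    e.g. with piecewise-linear or lookup-table schemes.
    """
    outputs = []
    for i in range(len(llrs)):
        p = 1.0
        for j, l in enumerate(llrs):
            if j != i:
                p *= math.tanh(l / 2.0)
        outputs.append(2.0 * math.atanh(p))
    return outputs

def check_node_minsum(llrs):
    """Min-sum approximation: the output magnitude is the minimum of the
    other incoming magnitudes, and the sign is the product of their signs."""
    outputs = []
    for i in range(len(llrs)):
        sign = 1.0
        mag = float('inf')
        for j, l in enumerate(llrs):
            if j != i:
                sign *= 1.0 if l >= 0 else -1.0
                mag = min(mag, abs(l))
        outputs.append(sign * mag)
    return outputs
```

Min-sum replaces the transcendental functions with comparisons, which is cheaper in hardware but overestimates reliability, hence the BER advantage of the belief propagation variant that motivates the arctanh approximation.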
Details
ISBN (print): 9781424487790
Automatic single- and three-phase re-closing of power lines after a short-circuit shutdown is a very effective way to improve the reliability of power delivery. Re-closing while the fault is not yet cleared can be dangerous for some electrical equipment. To prevent re-closing onto a short circuit, a method allowing stable arc detection has been developed. The method is based on the real-time estimation of parameters in voltage signals and an analysis of the estimation error. A neural network is proposed to solve the problem in real time.
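The abstract gives only the outline of the method. As a hedged sketch of what real-time parameter estimation with an error measure could look like, the snippet below fits a fundamental-frequency phasor to voltage samples by least squares and reports the RMS residual; the function name, sampling setup, and the 50 Hz default are illustrative assumptions, not the paper's exact procedure:

```python
import math

def estimate_phasor(samples, fs, f0=50.0):
    """Least-squares fit of v[n] ~ A*cos(w*n/fs) + B*sin(w*n/fs), w = 2*pi*f0.

    Returns (A, B, rms_residual). A clean sinusoid yields a near-zero
    residual, while arc distortion inflates it; a residual-based error
    feature of this kind could feed a classifier such as the neural
    network mentioned in the paper (illustrative sketch only).
    """
    w = 2.0 * math.pi * f0
    # Accumulate the 2x2 normal equations for the two-parameter fit.
    Scc = Scs = Sss = Scv = Ssv = 0.0
    for n, v in enumerate(samples):
        c = math.cos(w * n / fs)
        s = math.sin(w * n / fs)
        Scc += c * c; Scs += c * s; Sss += s * s
        Scv += c * v; Ssv += s * v
    det = Scc * Sss - Scs * Scs
    A = (Sss * Scv - Scs * Ssv) / det
    B = (Scc * Ssv - Scs * Scv) / det
    # RMS of the fitting error: the estimation-error quantity analyzed above.
    res = [v - (A * math.cos(w * n / fs) + B * math.sin(w * n / fs))
           for n, v in enumerate(samples)]
    rms = math.sqrt(sum(r * r for r in res) / len(samples))
    return A, B, rms
```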