ISBN (digital): 9798350352030
ISBN (print): 9798350352047
High-dimensional analog circuit sizing with machine learning-based surrogate models suffers from the high sampling cost of evaluating expensive black-box objective functions in huge design spaces. This work addresses the sampling-efficiency challenge by carefully reducing the dimensionality of the input spaces, enabling efficient optimization for automated analog circuit sizing. We propose a latent space optimization method that includes an iteratively updated generative model based on a variational autoencoder, which embeds the solution manifold of analog circuits into a low-dimensional, continuous space where the latent variables are optimized using Bayesian optimization. The effectiveness of the proposed method has been verified on two real-world analog circuits with 18 and 59 design variables. Compared with BO in the original high-dimensional spaces or in latent low-dimensional spaces produced by other embedding strategies, the proposed method achieves 23% to 73% improvements in optimization performance within the same runtime limitations. We also conduct a technology migration experiment using the pre-trained variational autoencoder model, which demonstrates the necessity of pre-training and the scalability of the proposed method.
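The optimization loop described above can be sketched minimally as follows. This is an illustration only, not the paper's implementation: a fixed random linear map stands in for the trained VAE decoder, and random search stands in for Bayesian optimization, just to show how candidates are proposed in the low-dimensional latent space but evaluated in the full design space.

```python
import numpy as np

# Minimal sketch of latent-space optimization. The paper trains a VAE and runs
# Bayesian optimization on the latent variables; here a fixed random linear map
# stands in for the decoder and random search stands in for BO.
rng = np.random.default_rng(0)
D, d = 18, 4                             # original / latent dimensionality

decoder = 0.1 * rng.normal(size=(d, D))  # stand-in for the trained VAE decoder

def objective(x):
    # stand-in for an expensive black-box circuit evaluation (e.g. a simulator)
    return float(np.sum((x - 0.5) ** 2))

best_z, best_y = None, float("inf")
for _ in range(200):                     # proposal loop (BO in the paper)
    z = rng.normal(size=d)               # candidate in the low-dim latent space
    x = z @ decoder                      # decode to the full 18 design variables
    y = objective(x)
    if y < best_y:
        best_z, best_y = z, y
```

In the actual method the decoder is retrained iteratively as new evaluations arrive, so the latent space tracks the promising region of the design manifold.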
Depth completion aims to predict dense depth maps with sparse depth measurements from a depth sensor. Currently, Convolutional Neural Network (CNN) based models are the most popular methods applied to depth completion...
Coarse-Grained Reconfigurable Architecture (CGRA) is a domain-specific reconfigurable architecture. Generally, a CGRA consists of IO, memory, coarse-grained processing elements (PEs), and interconnect. Usually, the ALU in a PE contains a relatively complete set of operations, and most interconnects adopt neighbor-to-neighbor (N2N) [1], switch-based [2], or combined connection-box and switch-box (CB-SB) [3] patterns. However, complex operation sets and switch-based/CB-SB fully-connected interconnects provide sufficient reconfigurability at the cost of resource overhead. Thus, it is important to build a parameterized CGRA architecture that achieves a balance among hardware overhead, flexibility, and performance through automatic design space exploration (DSE).
As design goes into multi-billion transistors, the synthesis runtime becomes an important issue, particularly for design verification and prototyping, as one may run the synthesis many times with design change. Module...
Neural Radiance Field (NeRF) is a state-of-the-art algorithm in the field of novel view synthesis and has the potential to be used in AR/VR. However, NeRF inference is time-consuming. Motivated by resource-constrained scenarios on edge and mixed-reality devices, our essential idea is to bridge this gap while improving throughput and power consumption. This paper proposes a high-performance FPGA-based accelerator with a fully pipelined design tailored to the vanilla NeRF algorithm. We also design a mechanism that monitors the output of the rendering module to reduce operations. Experimental results show that our accelerator achieves 3.63× energy efficiency over a GPU (NVIDIA V100) implementation and a 1.31× speedup over a state-of-the-art ASIC design when running at the same clock frequency as the ASIC.
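One common way to "monitor the rendering output to reduce operations" in NeRF-style volume rendering is early ray termination: once accumulated transmittance falls below a threshold, the remaining samples along a ray contribute almost nothing, so later network evaluations can be skipped. The abstract does not specify the paper's exact mechanism, so the sketch below only illustrates this general idea; all names and thresholds are made up.

```python
import numpy as np

# Illustrative early ray termination for NeRF-style volume rendering (an
# assumption about the kind of output monitoring meant; not the paper's design).
def render_ray(densities, colors, deltas, t_min=1e-3):
    """Composite samples front-to-back, stopping when transmittance dies out."""
    T = 1.0                       # accumulated transmittance along the ray
    rgb = np.zeros(3)
    evaluated = 0                 # how many samples were actually processed
    for sigma, c, d in zip(densities, colors, deltas):
        evaluated += 1
        alpha = 1.0 - np.exp(-sigma * d)
        rgb += T * alpha * c      # standard alpha compositing
        T *= 1.0 - alpha
        if T < t_min:             # output monitor: remaining samples negligible
            break
    return rgb, evaluated

densities = np.full(64, 5.0)      # a dense medium: the ray saturates quickly
colors = np.tile([1.0, 0.5, 0.2], (64, 1))
deltas = np.full(64, 0.1)
rgb, n = render_ray(densities, colors, deltas)   # n well below 64 here
```

In hardware, such a monitor lets the pipeline retire a ray early and reassign its compute slots, which is where the operation savings come from.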
The COordinate Rotation DIgital Computer (CORDIC) algorithm simplifies elementary functions into bit-shift operations and additions. However, the number of iterations grows with the required accuracy and causes long latency. In this paper, we propose a decision-based CORDIC hardware for arc tangent calculation, which introduces a comparator to determine whether the rotation in each iteration is necessary. By bypassing the redundant rotations, the proposed CORDIC achieves higher accuracy. Besides, its regular structure is also friendly to a folded implementation. Experiments show that, compared with the conventional CORDIC, the decision-based CORDIC hardware improves accuracy by 28.1%. Synthesis results in a 65 nm technology show that the proposed unfolded hardware consumes an area of 0.774 mm² and a power of 0.644 mW, improvements of about 13.7% and 3% over the conventional CORDIC, while the folded structure has an area of 0.164 mm² and a power of 0.243 mW.
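A vectoring-mode CORDIC for arc tangent rotates the vector (x, y) until y approaches zero, accumulating the applied micro-rotation angles. The sketch below adds a comparator-style decision that bypasses a micro-rotation when its step would overshoot the remaining |y|; this is one plausible reading of the decision mechanism, not the paper's exact bypass condition, and it uses floating point rather than fixed-point hardware arithmetic.

```python
import math

# Vectoring-mode CORDIC for arctan with a hypothetical "decision" step that
# skips micro-rotations which would overshoot |y| (an illustrative condition,
# not necessarily the paper's comparator rule).
def cordic_atan(y, x, n=24):
    """Return (angle, skipped): atan(y/x) for x > 0, and how many
    micro-rotations the decision comparator bypassed."""
    angle = 0.0
    skipped = 0
    for i in range(n):
        step = 2.0 ** (-i)
        if abs(y) < x * step / 2:   # decision: this rotation is redundant
            skipped += 1
            continue
        d = 1.0 if y > 0 else -1.0  # rotate so as to drive y toward 0
        x, y = x + d * y * step, y - d * x * step
        angle += d * math.atan(step)
    return angle, skipped
```

For y = x the very first micro-rotation lands exactly on the answer and every later rotation is bypassed, which shows how skipping redundant rotations avoids the ping-pong overshoot of the conventional iteration.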
There are already some works on accelerating transformer networks with field-programmable gate arrays (FPGAs). However, many accelerators focus only on attention computation or suffer from fixed data streams without flexibility. Moreover, their hardware performance is limited without schedule optimization and full use of hardware resources. In this article, we propose a flexible and efficient FPGA-based overlay processor, named FET-OPU. Specifically, we design an overlay architecture for general acceleration of transformer networks. We propose a unique matrix multiplication unit (MMU), which consists of a processing element (PE) array based on a modified DSP-packing technique and a FIFO array for data caching and rearrangement. An efficient non-linear function unit (NFU) is also introduced, which can calculate arbitrary single-input non-linear functions. We also customize an instruction set for our overlay architecture, dynamically controlling data flows by instructions generated on the software side. In addition, we introduce a two-level compiler and optimize the parallelism and memory allocation schedule. Experimental results show that our FET-OPU achieves 7.33-21.27× speedup and 231× less energy consumption compared with CPU, and 1.56-4.08× latency reduction with 5.85-66.36× less energy consumption compared with GPU. Furthermore, we observe 1.56-8.21× better latency and 5.28-6.24× less energy consumption compared with previously customized FPGA/ASIC accelerators, and can be 2.05× faster than NPE with 5.55× less energy consumption.
Transformer-based models suffer from a large number of parameters and high inference latency; their deployment is not green due to the potential environmental damage caused by high inference energy consumption. In addition, it is difficult to deploy such models on devices, especially on resource-constrained devices such as FPGAs. Various model pruning methods have been proposed to shrink model size and resource consumption so as to fit the models on hardware. However, such methods often introduce floating point operations (FLOPs) as a proxy for hardware performance, which is not accurate. Furthermore, structural pruning methods always follow a single head-wise or layer-wise pattern, which fails to compress the models to the extreme. To resolve the above issues, we propose a green BERT deployment method on FPGA via hardware-aware and hybrid pruning, named g-BERT. Specifically, two hardware-aware metrics are derived via High-Level Synthesis (HLS) to evaluate the latency and power consumption of inference on FPGA, which can be optimized directly during pruning. Moreover, we simultaneously consider pruning of heads and of full encoder layers. To efficiently find the optimal structure, g-BERT applies differentiable neural architecture search (NAS) with a special 0–1 loss function. Compared with BERT-base, g-BERT achieves $2.1\times$ speedup, $1.9\times$ power consumption reduction and $1.8\times$ model size reduction with comparable accuracy, on par with state-of-the-art methods.
Temporal Coarse-Grained Reconfigurable Architecture (CGRA) is a typical category of CGRA that supports single-cycle context switching and time-multiplexes hardware resources to perform both spatial and temporal computation. Compared with spatial CGRAs, it can be used in area- and power-budget-constrained scenarios, at the cost of throughput. Therefore, achieving the minimum Initiation Interval (II) for higher throughput is the main objective of many works on temporal CGRA mapping.
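As background, the classical lower bound on the II in modulo scheduling combines a resource bound (ResMII) and a recurrence bound (RecMII). Whether the works the abstract refers to use exactly this bound is an assumption, and the numbers below are purely illustrative.

```python
import math

# Classical modulo-scheduling lower bound: II >= max(ResMII, RecMII).
# ResMII: operations competing for PEs; RecMII: dependence cycles in the DFG.
def min_ii(num_ops, num_pes, cycles):
    """cycles: list of (total latency, total dependence distance) per DFG cycle."""
    res_mii = math.ceil(num_ops / num_pes)                                 # resource bound
    rec_mii = max((math.ceil(lat / dist) for lat, dist in cycles), default=1)  # recurrence bound
    return max(res_mii, rec_mii)
```

For instance, 10 operations on 4 PEs with dependence cycles of latency/distance 3/1 and 5/2 give ResMII = 3 and RecMII = 3, so no valid mapping can achieve II below 3; mappers then search upward from this bound.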
Existing accelerators for transformer networks on field-programmable gate arrays (FPGAs) either focus only on attention computation or suffer from fixed data streams without flexibility. Moreover, compression and approximation methods for transformer networks have the potential for further optimization. In this article, we propose a low-latency FPGA-based overlay processor, named LTrans-OPU, for general acceleration of transformer networks. Specifically, we design a domain-specific overlay architecture, including a computation unit for matrix multiplication of arbitrary dimensions. An instruction set customized for our overlay architecture is also introduced, dynamically controlling data flows by generated instructions. In addition, we introduce a hybrid pruning method common to various transformer networks, along with an efficient non-linear function approximation method. Experimental results show that our design is rather competitive and has low latency. LTrans-OPU achieves 11.10-32.20× speedup compared with CPU and 2.44-6.18× latency reduction compared with GPU. We also observe 2.36-12.43× lower latency compared with customized FPGA/ASIC accelerators, and can be 3.10× faster than NPE.