ISBN (print): 9780769547497
Non-rigid registration is crucial in imaging, in particular to correct deformations produced during image acquisition and to improve the accuracy of datasets. However, conventional imaging systems lack the speed and computational bandwidth required for additional non-rigid registration of the deformed images, so such functionality is usually unavailable in time-critical settings. The expensive computations and memory-intensive characteristics of non-rigid image registration algorithms such as the Demons algorithm further limit the realization of such systems. In response, we propose an alternative and efficient custom hardware-based Demons registration algorithm that uses a pipelined streaming model to minimize memory fetches during computation. Designed for highly customizable hardware, our design requires only a single pass over the images to compute the Demons kernel. Implementation results on the Xilinx ML605 FPGA system are presented and quantitatively evaluated, in terms of clock cycle counts, against a software-based implementation.
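As an illustration of the per-pixel kernel referred to above, the following is a minimal NumPy sketch of the classic Thirion Demons update; the function name, the Gaussian smoothing step, and all parameters are illustrative, and this is not the hardware pipeline described in the paper.

```python
# Minimal sketch of the classic (Thirion) Demons force computation, assuming
# NumPy arrays `fixed` and `moving` of equal shape. The paper streams pixels
# through an FPGA pipeline; here the same per-pixel kernel is shown vectorized
# for clarity, with an illustrative Gaussian regularisation step.
import numpy as np
from scipy.ndimage import gaussian_filter

def demons_update(fixed, moving, eps=1e-9):
    """One Demons iteration: a displacement field driving `moving` toward `fixed`."""
    diff = moving - fixed                      # intensity mismatch per pixel
    gy, gx = np.gradient(fixed)                # gradient of the fixed image
    denom = gx**2 + gy**2 + diff**2 + eps      # Demons normalisation term
    ux = diff * gx / denom                     # x-component of the update
    uy = diff * gy / denom                     # y-component of the update
    # Regularise the field with Gaussian smoothing (standard Demons step).
    return gaussian_filter(ux, sigma=1.0), gaussian_filter(uy, sigma=1.0)
```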
ISBN (print): 9783030042127; 9783030042110
To address the shortcomings of traditional single-pass clustering algorithms, such as low accuracy and a large amount of computation, a novel Storm-based parallel single-pass clustering algorithm is proposed to discover hot events in the food field. To solve the problem of data inconsistency in parallel computing, a method of dynamically acquiring cluster increments and introducing random delays is adopted to improve the single-pass algorithm. To validate the performance of the proposed method, a case study on news event classification is carried out. Simulation results show that, compared with the traditional single-pass algorithm, the proposed algorithm effectively reduces cluster repetition in the clustering results and greatly improves the accuracy and efficiency of clustering.
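For reference, a minimal sketch of the baseline single-pass clustering step that such methods build on is shown below; the Storm topology, the dynamic cluster increments, and the random delays from the paper are not modeled, and the similarity threshold `theta` is an assumed parameter.

```python
# Baseline single-pass clustering: each document is compared once against the
# existing cluster centres; it either joins the most similar cluster or opens
# a new one. Document vectors are assumed to be L2-normalised features.
import numpy as np

def single_pass_cluster(vectors, theta=0.6):
    """Label each vector in arrival order; open a new cluster when no centre is similar enough."""
    centers, counts, labels = [], [], []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        if centers:
            # Cosine similarity against every existing centre (v assumed unit length).
            sims = [float(c @ v) / (np.linalg.norm(c) + 1e-12) for c in centers]
            best = int(np.argmax(sims))
        if not centers or sims[best] < theta:
            centers.append(v.copy())            # start a new cluster at this document
            counts.append(1)
            labels.append(len(centers) - 1)
        else:
            counts[best] += 1                   # incremental mean update of the centre
            centers[best] = centers[best] + (v - centers[best]) / counts[best]
            labels.append(best)
    return labels, centers
```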
ISBN (print): 9781538676721
With its rapid development and popularization, the Internet has become the most convenient way to publish and obtain information, leading to a rapidly increasing quantity and variety of data. Finding potentially valuable information in these data is difficult, and doing so is the primary problem of data mining. Mining a company's hot events from Internet news can effectively reflect how its business is performing. We therefore propose a method for discovering and extracting hot events from Internet news. In the proposed method, we modify the single-pass clustering algorithm by using a Gaussian kernel, rather than the global cluster, to update the clustering centers. The result is a dynamic incremental clustering algorithm that does not require the number of clusters to be initialized. The Top-N hot events can then be obtained from the clustering centers. Experimental comparison shows that the improved algorithm has higher clustering efficiency than the classic algorithm. Case studies from the Shanghai pilot free-trade zone (FTZ) also show the effectiveness of our proposed method.
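The abstract does not give the exact Gaussian-kernel update rule, so the sketch below is one plausible reading: the nearest center is moved toward a new point with a weight given by a Gaussian of their distance, so nearby points influence the center more than distant ones. The parameters `sigma`, `dist_threshold`, and `eta` are hypothetical.

```python
# Hedged sketch of a Gaussian-kernel centre update for single-pass clustering.
# The weighting (new point weighted by a Gaussian of its distance to the
# current centre) is an assumed illustration, not the paper's exact rule.
import numpy as np

def assign_and_update(x, centers, sigma=1.0, dist_threshold=2.0, eta=0.5):
    """Assign vector x to the nearest centre (or open a new cluster) and update that centre."""
    x = np.asarray(x, dtype=float)
    if not centers:
        centers.append(x.copy())
        return 0
    dists = [np.linalg.norm(x - c) for c in centers]
    k = int(np.argmin(dists))
    if dists[k] > dist_threshold:                    # too far from every existing cluster
        centers.append(x.copy())
        return len(centers) - 1
    w = np.exp(-dists[k] ** 2 / (2 * sigma ** 2))    # Gaussian kernel weight of the new point
    centers[k] = centers[k] + eta * w * (x - centers[k])  # kernel-weighted centre update
    return k
```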
This paper develops a novel limited-memory method to solve dynamic optimization problems. The memory requirements for such problems often present a major obstacle, particularly for problems with PDE constraints such as optimal flow control, full waveform inversion, and optical tomography. In these problems, PDE constraints uniquely determine the state of a physical system for a given control; the goal is to find the value of the control that minimizes an objective. While the control is often low dimensional, the state is typically more expensive to store. This paper suggests using randomized matrix approximation to compress the state as it is generated and shows how to use the compressed state to reliably solve the original dynamic optimization problem. Concretely, the compressed state is used to compute approximate gradients and to apply the Hessian to vectors. The approximation error in these quantities is controlled by the target rank of the sketch. This approximate first- and second-order information can readily be used in any optimization algorithm. As an example, we develop a sketched trust-region method that adaptively chooses the target rank using a posteriori error information and provably converges to a stationary point of the original problem. Numerical experiments with the sketched trust-region method show promising performance on challenging problems such as the optimal control of an advection-reaction-diffusion equation and the optimal control of fluid flow past a cylinder.
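As a rough illustration of the compress-as-generated idea (not the paper's implementation), the sketch below accumulates a standard single-view randomized sketch of the state trajectory, one snapshot at a time, and recovers a low-rank approximation A ≈ Q X from it. The ranks `k` and `l`, the function name, and the assumption that exactly `T` snapshots of length `n` are supplied are all illustrative.

```python
# Streaming low-rank compression of a state trajectory: the state matrix A
# (n x T, one column per time step) is never stored; only two small sketches
# Y = A @ Omega and W = Psi @ A are accumulated during the forward pass.
# Reconstruction uses the standard single-view scheme A ~ Q @ pinv(Psi @ Q) @ W.
import numpy as np

def compress_and_recover(snapshots, n, T, k=10, l=25, seed=0):
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((T, k))        # right test matrix
    Psi = rng.standard_normal((l, n))          # left test matrix
    Y = np.zeros((n, k))                       # range sketch      Y = A @ Omega
    W = np.zeros((l, T))                       # co-range sketch   W = Psi @ A
    for t, a_t in enumerate(snapshots):        # single pass over the generated states
        a_t = np.asarray(a_t, dtype=float)
        Y += np.outer(a_t, Omega[t])
        W[:, t] = Psi @ a_t
    Q, _ = np.linalg.qr(Y)                     # orthonormal basis for the range of A
    X = np.linalg.lstsq(Psi @ Q, W, rcond=None)[0]
    return Q, X                                # A is approximated by Q @ X (rank <= k)
```

In a setting like the one described above, approximate gradients and Hessian-vector products would then be evaluated against the compact factors Q and X rather than the full trajectory.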
The future of high-performance computing, specifically on future Exascale computers, will presumably see memory capacity and bandwidth fail to keep pace with the data generated, for instance, from massively parallel partial differential equation (PDE) systems. Current strategies proposed to address this bottleneck entail the omission of large fractions of data, as well as the incorporation of in situ compression algorithms to avoid overuse of memory. To ensure that post-processing operations are successful, this must be done in a way that stores a sufficiently accurate representation of the solution. Moreover, in situations where the input/output system becomes a bottleneck in analysis, visualization, etc., or the execution of the PDE solver is expensive, the number of passes made over the data must be minimized. To address this problem, this work focuses on the utility of pass-efficient, parallelizable, low-rank matrix decomposition methods in compressing high-dimensional simulation data from turbulent flows. A particular emphasis is placed on using a coarse representation of the data - compatible with the PDE discretization grid - to accelerate the construction of the low-rank factorization. This includes the presentation of a novel single-pass matrix decomposition algorithm for computing the so-called interpolative decomposition. The methods are described extensively, and numerical experiments on two turbulent channel flow datasets are performed. In the first (unladen) channel flow case, compression factors exceeding 400 are achieved while maintaining accuracy with respect to first- and second-order flow statistics. In the particle-laden case, compression factors of 100 are achieved and the compressed data are used to recover particle velocities. These results show that such compression methods can enable efficient computation of various quantities of interest in both the carrier and disperse phases. (C) 2020 Elsevier Inc. All rights reserved.
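For context, the following is a generic randomized column interpolative decomposition (ID) built from a small row sketch, in the spirit of the pass-efficient methods discussed here; it is not the paper's single-pass algorithm, it assumes the selected columns of A remain accessible, and the rank `k` and oversampling size are illustrative.

```python
# Generic randomized column ID: A ~ A[:, cols] @ P, so only k physical columns
# of A plus a small k x n coefficient matrix need to be stored. Column indices
# are chosen by column-pivoted QR applied to a short row sketch of A.
import numpy as np
from scipy.linalg import qr, solve_triangular

def randomized_column_id(A, k, oversample=10, seed=0):
    m, n = A.shape
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((k + oversample, m)) @ A       # short row sketch of A
    _, R, piv = qr(S, mode='economic', pivoting=True)      # column-pivoted QR of the sketch
    cols = piv[:k]                                         # indices of the retained columns
    T = solve_triangular(R[:k, :k], R[:k, k:])             # interpolation coefficients
    P = np.zeros((k, n))
    P[:, piv[:k]] = np.eye(k)                              # retained columns reproduce themselves
    P[:, piv[k:]] = T                                      # remaining columns are interpolated
    return cols, P                                         # A is approximated by A[:, cols] @ P
```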
Vehicle-to-infrastructure (V2I) communication is one of the effective ways to solve the perception problem of intelligent connected vehicles, and its core is to fuse the information sensed by vehicle sensors with that sensed by infrastructure sensors. However, accurately matching the objects detected by the vehicle with the multiple objects detected by the infrastructure remains a challenge. This paper presents an object association matching method to fuse object information from vehicle sensors and roadside sensors, enabling the matching and fusion of information from multiple targets. The proposed object association matching algorithm consists of three steps. First, the deployment method for vehicle sensors and roadside sensors is designed. Then, the laser point cloud data from the roadside sensors are processed using the DBSCAN algorithm to extract object information on the road. Finally, an improved single-pass algorithm for object association matching is proposed, which selects matched targets by setting a change threshold. To validate the effectiveness and feasibility of the proposed method, real-vehicle experiments are conducted. Furthermore, the improved single-pass algorithm is compared with the classical Hungarian algorithm, the Kuhn-Munkres (KM) algorithm, and the nearest neighbor (NN) algorithm. The experimental results demonstrate that the improved single-pass algorithm achieves a target trajectory matching accuracy of 0.937, which is 6.60%, 1.85%, and 2.07% higher than the above-mentioned algorithms, respectively. In addition, this paper investigates the curvature of the target vehicle trajectory data after fusing vehicle sensing information and roadside sensing information; the curvature mean, curvature variance, and curvature standard deviation are analyzed. The experimental results illustrate that the fused target information is more accurate and effective. The method proposed in this study contributes to the advancement of the theoretical system of V2I cooperation.
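As a simplified illustration of threshold-based single-pass association (not the paper's improved algorithm), the sketch below matches each vehicle-detected object to the nearest unmatched roadside detection and accepts the match only if the distance stays below a threshold; the 2-D positions and the 3 m threshold are assumptions.

```python
# Single pass over the vehicle-side detections: each is associated with the
# closest still-unmatched roadside detection, and the pairing is accepted only
# when the positional change is below `max_dist` (assumed 3 m here).
import numpy as np

def single_pass_associate(vehicle_objs, roadside_objs, max_dist=3.0):
    """Return {vehicle index: roadside index} for accepted matches."""
    matches, used = {}, set()
    for i, v in enumerate(vehicle_objs):               # one pass over vehicle objects
        best_j, best_d = None, max_dist
        for j, r in enumerate(roadside_objs):
            if j in used:
                continue
            d = float(np.linalg.norm(np.asarray(v, float) - np.asarray(r, float)))
            if d < best_d:                             # nearest candidate within threshold
                best_j, best_d = j, d
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches
```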