A critical problem in financial engineering is the correct valuation of options and other derivative securities. The Monte Carlo (MC) method is an important computational tool for valuing multi-asset European options, but its convergence rate is very slow, so quasi-Monte Carlo methods and the associated parallel computing techniques are becoming an important approach to this problem. In this paper, we use a number-theoretic method, the H-W method, to generate uniformly distributed point sets for computing the value of the multi-asset European option. The method turns out to be very effective, and the computing time is greatly shortened. Compared with other methods, it requires fewer points and is especially suitable for high-dimensional problems.
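To make the quasi-Monte Carlo idea concrete, here is a minimal sketch that prices a multi-asset European basket call from a low-discrepancy point set. It uses SciPy's scrambled Halton sequence as a stand-in for the paper's H-W number-theoretic construction, and assumes independent lognormal assets; all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm, qmc

def qmc_basket_call(s0, k, r, sigma, t, n_points=2**14):
    """Quasi-MC price of a European basket call on independent GBM assets."""
    d = len(s0)
    u = qmc.Halton(d=d, scramble=True).random(n_points)  # low-discrepancy points
    z = norm.ppf(u)                                      # map to standard normals
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st.mean(axis=1) - k, 0.0)        # basket payoff
    return np.exp(-r * t) * payoff.mean()                # discounted estimate

price = qmc_basket_call(s0=np.array([100.0] * 10), k=100.0,
                        r=0.05, sigma=np.array([0.2] * 10), t=1.0)
print(f"QMC basket call estimate: {price:.4f}")
```

Because the Halton points fill the 10-dimensional unit cube far more evenly than pseudo-random draws, far fewer points are needed for a given accuracy, which is the advantage the abstract claims for number-theoretic point sets in high dimensions.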
ISBN (Print): 9781905088423
The paper describes a thermo-mechanical analysis of a concrete containment used in a study of service-life prolongation of the Temelin nuclear power plant in the Czech Republic. In this case, a staggered coupling approach was used. The complex geometry of the containment wall led to a three-dimensional model that captured a concrete segment with steel reinforcement and tendon ducts. To account for the complex mechanical behaviour of concrete, several material models were used to describe concrete creep, ageing and damage. The severe demands that the problem placed on computer memory and speed led to parallelisation based on domain decomposition. The results of the sequential and parallel analyses are described and discussed.
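As a language-neutral illustration of the staggered coupling idea (not the paper's FEM solvers), the following toy sketch alternates a 1D heat-conduction pass with a mechanical pass that converts the temperature field into thermal strain; all material values are made up.

```python
import numpy as np

def thermal_step(T, alpha_diff, dx, dt):
    """One explicit finite-difference step of 1D heat conduction."""
    T_new = T.copy()
    T_new[1:-1] += alpha_diff * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T_new

def mechanical_step(T, alpha_exp, T_ref=20.0):
    """Free thermal strain driven by the current temperature field."""
    return alpha_exp * (T - T_ref)

n, dx, dt = 50, 0.02, 1e-4
T = np.full(n, 20.0); T[0] = 60.0                 # heated boundary
for _ in range(1000):                             # staggered loop: heat, then mechanics
    T = thermal_step(T, alpha_diff=1e-2, dx=dx, dt=dt)
    eps_th = mechanical_step(T, alpha_exp=1.2e-5)
print(f"max thermal strain: {eps_th.max():.2e}")
```

The point of the staggered (loosely coupled) scheme is that each physics field is solved by its own solver within a time step, with the thermal result passed one-way into the mechanical pass.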
ISBN (Print): 9780819483805
As scientific research deepens, remote sensing images often contain huge amounts of information; they are typically high-dimensional and very large. To extract ground information from the images more accurately, remote sensing image processing involves several steps aimed at better image restoration and information refinement. Processing such images frequently runs into difficulties such as slow computation and heavy resource consumption. For this reason, applying parallel computing to remote sensing image processing is essential. The parallel computing method presented in this paper does not require rewriting the original algorithm. Under a distributed framework, the method allocates the original algorithm efficiently across the multiple computing cores of the processing computer. Because the method makes full use of the computing resources, the computation time decreases linearly with the number of computing threads. Moreover, the method fully preserves the integrity of the remote sensing image data. To validate the feasibility of the method, we applied it to a radiation simulation in remote sensing image processing. We conducted several experiments and collected statistical results, integrating parallel computing into the core of the original algorithm, a large wide-kernel convolution. The experimental results showed that computing efficiency improved linearly: the number of computing cores was proportional to the reduction rate in computing time. At the same time, the computed results were identical to the original results.
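A minimal sketch of the tile-parallel idea: split the image into row bands with halo overlap equal to the kernel radius, convolve each band in a separate process, and stitch the results, which match the serial convolution exactly. The kernel, sizes, and worker count are illustrative, not the paper's.

```python
import numpy as np
from multiprocessing import Pool
from scipy.ndimage import convolve

KERNEL = np.ones((9, 9)) / 81.0                   # example smoothing kernel
R = KERNEL.shape[0] // 2                          # halo radius

def convolve_band(args):
    band, top_halo, bot_halo = args
    out = convolve(band, KERNEL, mode="nearest")
    return out[top_halo:band.shape[0] - bot_halo]  # drop the halo rows

def parallel_convolve(image, n_workers=4):
    bounds = np.linspace(0, image.shape[0], n_workers + 1, dtype=int)
    jobs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        a, b = max(lo - R, 0), min(hi + R, image.shape[0])
        jobs.append((image[a:b], lo - a, b - hi))  # band plus halo overlap
    with Pool(n_workers) as pool:
        return np.vstack(pool.map(convolve_band, jobs))

if __name__ == "__main__":
    img = np.random.rand(2048, 2048)
    out = parallel_convolve(img)
    assert np.allclose(out, convolve(img, KERNEL, mode="nearest"))
```

The final assertion checks the property the abstract emphasises: the parallel result is identical to the serial one, because each band carries enough halo rows that no output pixel ever sees a tile boundary.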
ISBN (Print): 0819426490
Synthetic aperture radar (SAR) data processing has matured over the past decade, with processing approaches that include traditional time-domain methods, popular and efficient frequency-domain methods, and relatively new and more precise chirp-scaling methods. These approaches have been used in various processing applications to achieve varying degrees of efficiency and accuracy. One common trait among all SAR data processing algorithms, however, is their iterative and repetitive nature, which makes them amenable to parallel computing implementation. With SAR's contribution to remote sensing now well established, the processing throughput demand has steadily increased with each new mission. Parallel implementation of SAR processing algorithms is therefore an important means of attaining the high data processing throughput needed to keep up with ever-increasing science demand. This paper concerns the parallel computing implementation of a mode of data collection called ScanSAR, which has the unique advantage of yielding wide swath coverage in a single pass. This mode has been demonstrated on SIR-C and is being used operationally for the first time on Radarsat. The burst nature of ScanSAR data makes it a natural candidate for parallel implementation. The paper describes such an implementation experience at the Alaska SAR Facility for Radarsat ScanSAR mode data, along with a practical concurrent processing technique that allows further improvement in throughput at a slight increase in system cost.
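A schematic sketch of the burst-level parallelism described here, with the per-burst focusing reduced to a placeholder FFT rather than a real SAR processor: since ScanSAR bursts are mutually independent, they map directly onto worker processes.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def focus_burst(burst):
    """Placeholder 'focusing' step: a range FFT, not a real SAR processor."""
    return np.abs(np.fft.fft(burst, axis=1))

def process_scansar(bursts, n_workers=4):
    # independent bursts map directly onto worker processes
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(focus_burst, bursts))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = [rng.standard_normal((64, 1024)) + 1j * rng.standard_normal((64, 1024))
           for _ in range(16)]                    # 16 synthetic bursts
    images = process_scansar(raw)
    print(len(images), images[0].shape)
```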
ISBN (Print): 9783319462547; 9783319462530
Big data analytics enables uncovering hidden and useful information for better decisions. Our research area covers big data visualization based on dimensionality reduction methods. These are time- and resource-consuming processes, so in this paper we look for computing methods and environments that execute the tasks and deliver results faster. In this research we use the random projection method to reduce the dimensionality of the initial data. We investigate how parallel computing based on OpenMP and MPI technologies can increase the performance of these dimensionality reduction processes. The results show a significant improvement in performance when executing MPI code on a computer cluster. However, a greater number of cores does not always lead to higher speed.
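A minimal sketch of the random projection step itself; the matrix product below is the workload that OpenMP/MPI parallelise in the paper. A Gaussian random matrix scaled by 1/sqrt(k) approximately preserves pairwise distances (the Johnson-Lindenstrauss property). Sizes and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 10_000, 1_000, 50
X = rng.standard_normal((n, d))               # original high-dimensional data
R = rng.standard_normal((d, k)) / np.sqrt(k)  # Gaussian random projection matrix
Y = X @ R                                     # reduced k-dimensional embedding

# pairwise distances are preserved up to small distortion
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Y[0] - Y[1]))
```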
Sequential-modular-based process flowsheeting software remains an indispensable tool for process design, control, and optimization. Yet, as the process industry advances in intelligent operation and maintenance, conventional sequential-modular-based process-simulation techniques present challenges regarding computationally intensive calculations and significant central processing unit (CPU) time requirements, particularly in large-scale design and optimization tasks. To address these challenges, this paper proposes a novel process-simulation parallel computing framework (PSPCF). This framework achieves layered parallelism in recycling processes at the unit operation level. Notably, PSPCF introduces a groundbreaking concept of formulating simulation problems as task graphs and utilizes Taskflow, an advanced task graph computing system, for hierarchical parallel scheduling and the execution of unit operation tasks. PSPCF also integrates an advanced work-stealing scheme to automatically balance thread resources with the demanding workload of unit operation tasks. For evaluation, both a simpler parallel column process and a more complex cracked gas separation process were simulated on a flowsheeting platform using PSPCF. The framework demonstrates significant time savings, achieving over 60% reduction in processing time for the simpler process and a 35%–40% speed-up for the more complex separation process.
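Taskflow is a C++ system, so as a language-neutral sketch of the task-graph idea the following Python snippet runs a hypothetical flowsheet as a dependency graph: each unit operation is submitted as soon as its upstream units finish, letting independent branches (such as parallel columns) execute concurrently. The unit names and topology are invented, and the thread pool here stands in for Taskflow's work-stealing scheduler.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

# unit operation -> upstream units it depends on (hypothetical flowsheet)
GRAPH = {
    "feed": [], "column_a": ["feed"], "column_b": ["feed"],
    "mixer": ["column_a", "column_b"],
}

def run_unit(name):
    print(f"solving unit operation: {name}")   # stand-in for a unit-op solve
    return name

def run_flowsheet(graph, n_threads=4):
    done, launched, pending = set(), set(), {}
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        while len(done) < len(graph):
            # submit every unit whose upstream dependencies have finished
            for unit, deps in graph.items():
                if unit not in launched and all(d in done for d in deps):
                    launched.add(unit)
                    pending[pool.submit(run_unit, unit)] = unit
            finished, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in finished:
                done.add(pending.pop(fut))
    return done

run_flowsheet(GRAPH)
```

Here column_a and column_b run concurrently because neither depends on the other, which is the layered parallelism at the unit-operation level that the framework exploits.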
ISBN (Print): 9781467371063
The computing speed and memory of a single PC are often limiting factors in large-scale electromagnetic simulation by the finite element method (FEM); parallel processing is an important means of overcoming such limits. This paper first illustrates the domain decomposition method (DDM), which decomposes the domain by node partitioning and is well suited to parallel computing. A 2D electrostatic model was built and decomposed by the DDM, and the FEM linear system of equations was solved using a parallel conjugate gradient (CG) method on a distributed parallel system composed of 6 PCs; the effective speed-up reached 97.5%, which was satisfying. Especially for large-scale simulations with more than millions of degrees of freedom, parallel processing greatly reduces computing time and increases computing speed; it is the basis for large-scale 3D electromagnetic parallel computing.
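For reference, a minimal serial conjugate gradient solver of the kind used for the FEM system; in the paper's setting, the matrix-vector product and inner products are the operations distributed over the 6 PCs. The test matrix below is a random SPD stand-in, not an FEM stiffness matrix.

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for a symmetric positive definite system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x                       # residual
    p = r.copy()                        # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                      # the operation distributed across PCs
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 200
M = rng.random((n, n))
A = M @ M.T + n * np.eye(n)             # random SPD stand-in for a stiffness matrix
b = rng.random(n)
print(np.linalg.norm(A @ cg(A, b) - b))
```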
ISBN (Print): 9783037856499
To meet the requirement of quickly solving the water quality equation for an unexpected water pollution incident, this paper studies a parallel algorithm under the Java Parallel Processing Framework (JPPF): the iterative process is dynamically decomposed into calculation tasks, which are distributed via the JPPF API to parallel nodes for computation. Simulation of the one-dimensional water quality equation shows that the parallel computing method reduces the time complexity from O(n²) to O(n log n), yielding not only a significant improvement in calculation speed but also higher reliability and stability.
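A language-neutral sketch (Python rather than JPPF's Java API) of the decomposition idea: each iteration of an explicit 1D advection-diffusion update is split into chunk tasks that could be farmed out to parallel nodes. The coefficients, grid size, and threading backend are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

N, D, U, DX, DT = 10_000, 1.0, 0.5, 1.0, 0.2   # grid size, made-up coefficients

def update_chunk(c, lo, hi):
    """Explicit advection-diffusion update on interior cells [lo, hi)."""
    i = np.arange(max(lo, 1), min(hi, N - 1))
    return (c[i] + DT * (D * (c[i + 1] - 2 * c[i] + c[i - 1]) / DX**2
                         - U * (c[i + 1] - c[i - 1]) / (2 * DX)))

def step(c, pool, n_tasks=4):
    """One iteration, decomposed into independent chunk tasks."""
    bounds = np.linspace(0, N, n_tasks + 1, dtype=int)
    parts = pool.map(lambda ab: update_chunk(c, *ab),
                     zip(bounds[:-1], bounds[1:]))
    c_new = c.copy()
    c_new[1:-1] = np.concatenate(list(parts))
    return c_new

c = np.exp(-((np.arange(N) - 2000.0) / 50.0) ** 2)   # initial pollutant pulse
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(100):
        c = step(c, pool)
print(f"peak concentration after 100 steps: {c.max():.3f}")
```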
ISBN (Print): 9781424479290
A correctly beating heart is important for ensuring adequate circulation of blood throughout the body. Normal heart rhythm is produced by the orchestrated conduction of electrical signals throughout the heart. Cardiac electrical activity is the result of a series of complex biochemical-mechanical reactions involving the transport and distribution of ionic flows through a variety of biological ion channels. Cardiac arrhythmias are caused by direct alteration of ion channel activity, which results in changes in the action potential (AP) waveform. In this work, we developed a whole-heart simulation model using massively parallel computing with GPGPU and OpenGL. The simulation algorithm was implemented in several versions for comparison, including a conventional CPU version and two GPU versions based on the Nvidia CUDA platform. OpenGL was used for the visualization/interaction platform because it is an open, lightweight standard universally supported by various operating systems. The experimental results show that the GPU-based simulation outperforms the conventional CPU-based approach and significantly improves simulation speed. By adopting modern computer architecture, this investigation enables real-time simulation and visualization of electrical excitation and conduction in the large and complicated 3D geometry of a real-world human heart.
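As an illustration of the kind of stencil computation the GPU versions accelerate (not the paper's cell model), here is a FitzHugh-Nagumo reaction-diffusion sketch on a 2D grid: each node's update depends only on its neighbours, which is exactly the structure that maps one grid node per CUDA thread. Parameters are conventional FHN values.

```python
import numpy as np

def laplacian(v):
    """5-point stencil with periodic wrap, standing in for tissue diffusion."""
    return (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
            np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)

n, dt, d = 200, 0.05, 0.2
a, b, eps = 0.7, 0.8, 0.08
v = np.zeros((n, n)); w = np.zeros((n, n))
v[:, :5] = 1.0                                   # stimulate one edge

for _ in range(2000):
    dv = v - v**3 / 3 - w + d * laplacian(v)     # fast (voltage) variable
    dw = eps * (v + a - b * w)                   # slow (recovery) variable
    v += dt * dv
    w += dt * dw
print(f"mean excitation after 2000 steps: {v.mean():.3f}")
```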
ISBN (Print): 9780769549392; 9781467353212
Kernel density estimation is nowadays a very popular tool for nonparametric probability density estimation. One of its most important disadvantages is the computational complexity of the calculations required, especially for large data sets. One way to accelerate these computations is parallel computing on multi-core platforms. In this paper we parallelize two kernel estimation methods, univariate and multivariate kernel estimation, from the field of computational econometrics on a multi-core platform using different programming frameworks: Pthreads, OpenMP, Intel Cilk++, Intel TBB, SWARM and FastFlow. The purpose of this paper is to present an extensive quantitative (i.e., performance) and qualitative (i.e., ease of programming) study of these multi-core programming frameworks for the two kernel estimation methods.
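A minimal sketch of the parallelisation pattern for univariate Gaussian KDE: the evaluation grid is split across workers, each of which sums kernel contributions from the full sample. The frameworks compared in the paper parallelise this same loop in C/C++; the bandwidth rule, sample, and worker count here are illustrative.

```python
import numpy as np
from multiprocessing import Pool

DATA = np.random.default_rng(0).standard_normal(20_000)  # reproducible sample
H = 1.06 * DATA.std() * len(DATA) ** -0.2                # Silverman bandwidth

def kde_chunk(xs):
    """Gaussian KDE evaluated at grid points xs against the full sample."""
    z = (xs[:, None] - DATA[None, :]) / H
    return np.exp(-0.5 * z**2).mean(axis=1) / (H * np.sqrt(2 * np.pi))

if __name__ == "__main__":
    grid = np.linspace(-4, 4, 400)
    with Pool(4) as pool:                                # grid split over workers
        density = np.concatenate(pool.map(kde_chunk, np.array_split(grid, 4)))
    print(f"density integrates to ~ {density.sum() * (grid[1] - grid[0]):.3f}")
```

Splitting the evaluation grid rather than the sample keeps each task independent with no reduction step, which is why this loop parallelises so cleanly across all the compared frameworks.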