Modern-day proteomics generates ever more complex data, causing the storage and processing requirements for such data to outgrow the capacity of most desktop computers. To cope with the increased computational demands, distributed architectures have gained substantial popularity in recent years. In this review, we provide an overview of the current techniques for distributed computing, along with examples of how these techniques are currently being employed in the field of proteomics. We thus underline the benefits of distributed computing in proteomics, while also pointing out the potential issues and pitfalls involved.
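The review surveys distributed-computing techniques rather than any single API, so the following is a purely illustrative sketch of the embarrassingly parallel structure typical of proteomics workloads: independent spectra scored on separate worker processes. The `score_spectrum` function, the toy data, and the pool size are hypothetical stand-ins, not anything from the review.

```python
# Illustrative only: distributing a per-spectrum scoring task over local
# worker processes. In a real deployment, the Pool would be replaced by a
# cluster scheduler or a distributed framework of the kind the review covers.
from multiprocessing import Pool

def score_spectrum(spectrum):
    """Hypothetical stand-in for a CPU-heavy search-engine scoring step."""
    mz_values, intensities = spectrum
    return sum(intensities) / max(len(intensities), 1)

if __name__ == "__main__":
    # Toy spectra; in practice these would be read from mzML/MGF files.
    spectra = [([100.0 + i, 200.0 + i], [1.0, 2.0, 3.0]) for i in range(1000)]
    with Pool(processes=8) as pool:
        # Each spectrum is independent, so map() scales with worker count.
        scores = pool.map(score_spectrum, spectra)
    print(f"scored {len(scores)} spectra")
```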
ISBN (print): 9781728155845
More than a hundred years ago, classical physics was at the height of its power, with just a few unexplained phenomena; these, however, led to a revolution and the development of modern physics. The breakthrough was made possible by studying nature under extreme conditions, which finally led to an understanding of relativistic and quantum behavior. Today, computing is in a similar position: it is a sound success story with exponentially growing utilization, but, as it moves toward extreme utilization conditions, a growing number of difficulties and unexpected issues arise that cannot be explained within the classic computing paradigm. The paper draws attention to the fact that, under extreme conditions, computing behavior can differ from that under normal conditions, and pinpoints certain unnoticed or neglected features that enable both the explanation of the new phenomena and the enhancement of some computing features. Moreover, an implementation idea for a new, modern computing paradigm is proposed.
X-rays offer high penetration with the potential for tomography of centimetre-sized specimens, but synchrotron beamlines often provide illumination that is only millimetres wide. Here, an approach termed Tomosaic is demonstrated for tomographic imaging of large samples that extend beyond the illumination field of view of an X-ray imaging system. The package includes software modules for image stitching and calibration, while making use of existing modules available in other packages for alignment and reconstruction. The approach is compatible with conventional beamline hardware and provides a dose-efficient method of data acquisition. By using parallelization on a distributed computing system, it offers a solution for handling teravoxel-sized or larger datasets that cannot be processed on a single workstation in a reasonable time. Using experimental data, the package is shown to provide good-quality three-dimensional reconstruction of centimetre-sized samples at sub-micrometre pixel size.
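Tomosaic's own modules are not reproduced here; purely as an illustration of the stitching step the abstract describes, a mosaic of overlapping projection tiles can be assembled along the lines below. The regular grid layout, known offsets, fixed overlap width, and simple overwrite paste are all simplifying assumptions, not the package's actual calibration or blending algorithm.

```python
# Illustrative tile stitching for a 2D grid of overlapping projections.
# Tomosaic handles alignment/calibration; here the tile offsets are assumed
# known and a plain overwrite paste stands in for proper blending.
import numpy as np

def stitch_grid(tiles, overlap):
    """tiles: 2D list of equally sized arrays; overlap: pixels shared by
    neighbouring tiles along each axis."""
    th, tw = tiles[0][0].shape
    rows, cols = len(tiles), len(tiles[0])
    step_y, step_x = th - overlap, tw - overlap
    mosaic = np.zeros((step_y * (rows - 1) + th,
                       step_x * (cols - 1) + tw), dtype=tiles[0][0].dtype)
    for r in range(rows):
        for c in range(cols):
            y, x = r * step_y, c * step_x
            mosaic[y:y + th, x:x + tw] = tiles[r][c]
    return mosaic

# Toy usage: a 2x2 grid of 256-pixel tiles with a 32-pixel overlap.
tiles = [[np.random.rand(256, 256) for _ in range(2)] for _ in range(2)]
print(stitch_grid(tiles, overlap=32).shape)  # (480, 480)
```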
Simulation of cm-scale tumor growth has generally been constrained by the computational cost of numerically solving the associated equations, with models limited to representing mm-scale or smaller tumors. While this work has proven useful for the study of small tumors and micro-metastases, a biologically relevant simulation of cm-scale masses, as would typically be detected and treated in patients, has remained an elusive goal. This study presents a distributed-computing (parallelized) implementation of a mixture model of tumor growth to simulate 3D cm-scale vascularized tissue at sub-mm resolution. The numerical solving scheme utilizes a two-stage parallelization framework: the solution is written for GPU computation using the CUDA framework, which handles all Multigrid-related computations, while the Message Passing Interface (MPI) handles the distribution of information across multiple processes, freeing the program from the RAM and processing limitations of single systems. On each node, Nvidia's CUDA library allows for fast, GPU-bound processing of the model data, reducing the number of systems required. The results show that a combined MPI-CUDA implementation enables continuum modeling of cm-scale tumors at reasonable computational cost. Further work to calibrate model parameters to particular tumor conditions could enable the simulation of patient-specific tumors for clinical application.
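The authors' solver is not available from the abstract, so the sketch below only illustrates the two-stage pattern it describes: MPI splits the domain into slabs exchanged between ranks, while the per-slab relaxation (a Jacobi step standing in for the paper's Multigrid smoother) runs on the GPU. mpi4py and CuPy are assumed stand-ins for the paper's MPI/CUDA code, and the grid size, iteration count, and 1D diffusion problem are illustrative choices.

```python
# Illustrative MPI+GPU domain decomposition: each rank owns a slab of a 1D
# diffusion grid, relaxes it on the GPU, and exchanges one-cell halos with
# its neighbours. Run with e.g.: mpiexec -n 4 python slab_demo.py
from mpi4py import MPI
import numpy as np
import cupy as cp

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1024                        # interior cells per rank (illustrative)
u = cp.zeros(n_local + 2)             # +2 ghost cells, resident on the GPU
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(100):
    # Halo exchange through host buffers (avoids requiring CUDA-aware MPI).
    recv_left, recv_right = np.zeros(1), np.zeros(1)
    comm.Sendrecv(cp.asnumpy(u[-2:-1]), dest=right,
                  recvbuf=recv_left, source=left)
    comm.Sendrecv(cp.asnumpy(u[1:2]), dest=left,
                  recvbuf=recv_right, source=right)
    if left != MPI.PROC_NULL:
        u[0] = float(recv_left[0])
    if right != MPI.PROC_NULL:
        u[-1] = float(recv_right[0])
    # GPU-bound Jacobi relaxation (stand-in for the Multigrid smoother).
    u[1:-1] = 0.5 * (u[:-2] + u[2:])
    if rank == 0:
        u[1] = 1.0                    # keep a fixed Dirichlet source pinned

print(f"rank {rank}: slab mean = {float(u[1:-1].mean()):.6f}")
```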
Plagioclase microlites in a magma nucleate and grow in response to melt supersaturation (Δφ_plag). The resultant frozen plagioclase crystal size distribution (CSD) preserves the history of decompression pathways (dP/dt). SNGPlag is a numerical model that calculates the equilibrium composition of a decompressing magma and nucleates and grows plagioclase in response to an imposed Δφ_plag. Here, we test a new version of SNGPlag calibrated for use with basaltic andesite magmas and model dP/dt for the ca. 12.6 ka Curacautin eruption of Llaima volcano, Chile. Instantaneous nucleation (N_plag) and growth (G_plag) rates of plagioclase were computed using the experimental results of Shea and Hammer (J Volcanol Geotherm Res 260:127-145, 10.1016/***.2013.04.018, 2013) and used for SNGPlag modeling of the basaltic andesite composition. The maximum N_plag of 6.1 × 10⁵ cm⁻³ h⁻¹ is achieved at a Δφ_plag of 44%, and the maximum G_plag of 27.4 μm h⁻¹ is achieved at a Δφ_plag of 29%. Our modeled log dP/dt_avg values range from 2.69 ± 0.09 to 6.89 ± 0.96 MPa h⁻¹ (1σ), with an average duration of decompression from 0.87 ± 0.25 to 16.13 ± 0.29 h, assuming a starting pressure P_i of 110-150 MPa. These rates are similar to those derived from mafic decompression experiments for other explosive eruptions. Using assumptions for lithostatic pressure gradients (dP/dz), we calculate ascent rates of <1-6 m s⁻¹. We conducted a second set of Monte Carlo simulations using a P_i of 15-30 MPa to investigate the influence of shallower decompression, resulting in log dP/dt_avg from 2.86 ± 0.49 to 6.00 ± 0.86 MPa h⁻¹. The dP/dt modeled here is two orders of magnitude lower than that calculated by Valdivia et al. (Bull Volcanol, 10.1007/s00445-021-01514-8, 2022) for the same eruption using a bubble number density speedometer, suggesting that homogeneous nucleation raises dP/dt by orders of magnitude.
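The SNGPlag Monte Carlo runs themselves are not reproduced here; as a minimal sketch of the arithmetic linking the quoted quantities, the snippet below converts a sampled dP/dt into a decompression duration and an ascent rate via a lithostatic gradient dP/dz = ρg. A constant decompression rate, a single crustal density, and uniform sampling over the abstract's quoted ranges are simplifying assumptions, not the authors' actual procedure.

```python
# Minimal Monte Carlo sketch relating decompression rate (dP/dt), starting
# pressure (P_i), decompression duration, and ascent rate. Constant-rate
# decompression and dP/dz = rho * g are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
rho, g = 2700.0, 9.81                  # crustal density (kg m^-3), gravity
dPdz = rho * g / 1e6                   # lithostatic gradient, MPa m^-1 (~0.026)

P_i = rng.uniform(110.0, 150.0, n)     # starting pressure, MPa
log_dPdt = rng.uniform(2.69, 6.89, n)  # log10 decompression rate, MPa h^-1
dPdt = 10.0 ** log_dPdt

duration_h = P_i / dPdt                # hours to decompress to ~0 MPa
ascent_m_s = dPdt / dPdz / 3600.0      # dz/dt = (dP/dt)/(dP/dz), in m s^-1

print(f"median duration: {np.median(duration_h):.2f} h")
print(f"median ascent rate: {np.median(ascent_m_s):.2f} m/s")
```

At the low end of the sampled range (log dP/dt = 2.69, i.e. about 490 MPa h⁻¹), this conversion gives roughly 5 m s⁻¹, consistent with the <1-6 m s⁻¹ ascent rates quoted in the abstract.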