The seminal Marshall-Olkin distribution is given by the following d-dimensional frailty model: independent exponentially distributed random variables represent the arrival times of shocks that destroy subgroups of a vector with d components. Sampling the resulting random vector of extinction times along the lines of this stochastic representation is conceptually straightforward. Firstly, all possible shocks are sampled. Secondly, the extinction time of each component is computed. However, implementing this sampling scheme involves 2^d - 1 random variables and requires finding the minimum of a set with 2^(d-1) elements for each component. Thus, the overall effort increases exponentially in the number of considered components, preventing the general model from being used in high-dimensional simulation studies. This problem is overcome for the subfamily of exchangeable Marshall-Olkin distributions, for which a sampling algorithm with polynomial effort in d, with an upper bound of O(d^3), is presented. In a second step, it is shown how hierarchical models are constructed from exchangeable structures and how models with alternative univariate marginal laws are obtained. The presented algorithm is then adapted to sample such structures as well. Finally, a small case study in the context of portfolio credit risk is presented.
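To make the exponential blow-up concrete, here is a minimal Python sketch of the naive stochastic representation described above: one exponential shock per nonempty subset, and each component dies at the first shock that hits it. The rate function and the exchangeable example rate are illustrative assumptions, not the paper's parameterization.

```python
import itertools
import random


def marshall_olkin_naive(d, rate):
    """Naive Marshall-Olkin sampler: draw one exponential shock arrival time
    per nonempty subset of {0, ..., d-1}; component k's extinction time is
    the first shock hitting k. Effort is exponential in d (2^d - 1 draws).
    `rate(S)` returns the shock intensity lambda_S > 0 for subset S."""
    tau = [float("inf")] * d
    for r in range(1, d + 1):
        for subset in itertools.combinations(range(d), r):
            shock_time = random.expovariate(rate(subset))
            for k in subset:
                tau[k] = min(tau[k], shock_time)
    return tau


# Illustrative exchangeable case: lambda_S depends only on |S|.
print(marshall_olkin_naive(5, rate=lambda S: 1.0 / len(S)))
```

Already at d = 30 this loop would draw over a billion exponentials, which is exactly why a polynomial O(d^3) algorithm for the exchangeable subfamily matters.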
The accuracy of hydrologic and hydrodynamic models, used to study urban hydrology and predict urban flooding, depends on the availability of high-resolution terrain and infrastructure data. Unfortunately, cities often do not have or cannot release complete infrastructure data, and high-resolution terrain data products are not available everywhere. In this study, we quantify how the accuracy and precision of urban hydrologic and hydrodynamic models vary as a function of data completeness and model resolution. For this aim, we apply the one-dimensional (1D) and coupled one- and two-dimensional (1D-2D) versions of the U.S. Environmental Protection Agency's Storm Water Management Model (SWMM) in an urban catchment in the city of Phoenix, Arizona, where we have collected detailed infrastructure data, a high-resolution 0.3-m LiDAR-based digital elevation model, and catchment property data. We tested several model configurations assuming different levels of (i) availability of stormwater infrastructure data (ranging from 5% to 75% of attribute values missing) and (ii) terrain aggregation (i.e., 4.6 m and 9.7 m grids). These configurations were generated through random Monte Carlo sampling for SWMM 1D and through selective sampling with four cases for SWMM 1D-2D. We ran simulations under the 50-year return period design storm and compared simulated flood metrics against a reference configuration with the highest resolution and complete data. The study found that the model may over- or underestimate flood volume and duration at different levels of missing data, depending on the parameter considered (roughness, diameter, or depth), and that model performance is more sensitive to missing data downstream, closer to the outfall, than to missing data upstream. Errors in flood depth, area, and volume estimation are functions of both data completeness and model resolution. Missing feature data leads to overestimation of flood depth, while lower model resolution results in underestimation.
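The Monte Carlo configurations could be generated along the following lines; this Python sketch masks a random fraction of one attribute (roughness, diameter, or depth) with a default fill value. The record layout and fill rule are assumptions for illustration, not SWMM's input schema.

```python
import random


def mask_attributes(conduits, attribute, frac_missing, fill_value, seed=0):
    """Replace `attribute` with a default value in a random fraction of
    records, mimicking incomplete infrastructure data (hypothetical layout)."""
    rng = random.Random(seed)
    masked = [dict(c) for c in conduits]      # copy: leave the input intact
    for c in rng.sample(masked, round(frac_missing * len(masked))):
        c[attribute] = fill_value             # stands in for a missing record
    return masked


# One realization with 25% of pipe roughness values missing.
network = [{"id": i, "roughness": 0.013, "diameter": 0.45} for i in range(40)]
realization = mask_attributes(network, "roughness", 0.25, fill_value=0.01)
```

Repeating this draw over seeds and missing fractions (5% to 75%) and re-running the model yields the ensemble of flood metrics compared against the complete-data reference.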
Background: An Ancestral Recombination Graph (ARG) is a phylogenetic structure that encodes both duplication events, such as mutations, and genetic exchange events, such as recombinations; it thus captures the (genetic) dynamics of a population evolving over generations. Results: In this paper, we identify a structure-preserving and samples-preserving core of an ARG G and call it the minimal descriptor ARG of G. The structure-preserving characteristic ensures that all the branch lengths of the marginal trees of the minimal descriptor ARG are identical to those of G, and the samples-preserving property asserts that the patterns of genetic variation in the samples of the minimal descriptor ARG are exactly the same as those of G. We also prove that even an unbounded G has a finite minimal descriptor that continues to preserve certain (graph-theoretic) properties of G, and for an appropriate class of ARGs, both our estimate (Eqn 8) and our empirical observations indicate that the expected reduction in the number of vertices is exponential. Conclusions: Based on the definition of this lossless and bounded structure, we derive local properties of the vertices of a minimal descriptor ARG, which lend themselves naturally to the design of efficient sampling algorithms. We further show that a class of minimal descriptors, that of binary ARGs, models the standard coalescent exactly (Thm 6).
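For context on Thm 6, here is a short Python sketch of the standard coalescent that binary minimal descriptors are stated to model exactly: with k active lineages, the next coalescence arrives after an Exp(k(k-1)/2) waiting time and merges two lineages chosen uniformly at random. The event-list output format is an illustrative choice.

```python
import random


def standard_coalescent(n, seed=None):
    """Simulate the standard coalescent for n samples. Returns a list of
    coalescence events (time, left_child, right_child); internal nodes are
    labeled n, n+1, ... as they are created."""
    rng = random.Random(seed)
    lineages = list(range(n))                  # leaf labels 0 .. n-1
    next_id, t, events = n, 0.0, []
    while len(lineages) > 1:
        k = len(lineages)
        t += rng.expovariate(k * (k - 1) / 2)  # coalescence rate: k choose 2
        a, b = rng.sample(lineages, 2)
        lineages.remove(a)
        lineages.remove(b)
        lineages.append(next_id)
        events.append((t, a, b))
        next_id += 1
    return events


print(standard_coalescent(5, seed=1))
```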
Image fusion is a classical problem in the field of image processing whose solutions are usually not unique. Common image fusion methods can only generate a fixed fusion result from the source image pairs, tend to be applicable only to a specific task, and have high computational costs. Hence, in this paper, a two-stage unsupervised universal image fusion method with a generative diffusion model is proposed, termed UUD-Fusion. In the first stage, a strategy based on initial fusion results is devised to reduce the computational effort. In the second stage, two novel sampling algorithms based on the generative diffusion model are designed: the fusion sequence generation algorithm (FSGA) searches for a series of solutions in the solution space by iterative sampling, and the fusion image enhancement algorithm (FIEA) greatly improves the quality of the fused images. Qualitative and quantitative evaluations on multiple datasets with different modalities demonstrate the great versatility and effectiveness of UUD-Fusion. It is capable of solving different fusion problems, including multi-focus, multi-exposure, infrared-visible, and medical image fusion tasks, and it is superior to current state-of-the-art methods. Our code is publicly available at https://***/xiangxiang-wang/UUD-Fusion.
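As a rough illustration of iterative sampling with a diffusion model (not UUD-Fusion's actual FSGA or FIEA, whose details the abstract does not give), the sketch below runs a DDPM-style reverse loop nudged toward an initial fusion result. The denoiser, noise schedule, and guidance rule are all placeholders.

```python
import torch


@torch.no_grad()
def guided_reverse_sampling(denoiser, x_init, betas, guidance=0.1):
    """Schematic reverse diffusion: start from noise and denoise step by
    step, pulling each iterate toward an initial fusion result x_init.
    `denoiser(x, t)` is assumed to predict the noise in x at step t."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(x_init)
    for t in reversed(range(len(betas))):
        eps = denoiser(x, t)                              # predicted noise
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])   # DDPM posterior mean
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
        x = x + guidance * (x_init - x)                   # pull toward x_init
    return x


# Usage sketch:
# fused = guided_reverse_sampling(model, init_fusion,
#                                 torch.linspace(1e-4, 0.02, 1000))
```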
The use of mathematical models for design space characterization has become commonplace in pharmaceutical quality-by-design, providing a systematic risk-based approach to assurance of quality. This paper presents a methodology to complement sampling algorithms by computing the largest box inscribed within a given probabilistic design space at a desired reliability level. Such an encoding of the samples yields an operational envelope that can be conveniently communicated to process operators as independent ranges in process parameters. The first step involves training a feed-forward multi-layer perceptron as a surrogate of the sampled probability map. This surrogate is then embedded into a design centering problem, formulated as a semi-infinite program and solved using a cutting-plane algorithm. Effectiveness and computational tractability are demonstrated on the case study of a batch reactor with two critical process parameters.
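A toy sketch of the cutting-plane idea under simplifying assumptions: probe the current box for points where a surrogate reliability map drops below the target level, and move the offending box face inward so the violation lands on the boundary. The random probing and face-shrinking rule are stand-ins for the paper's semi-infinite program and MLP surrogate.

```python
import numpy as np


def largest_box_heuristic(surrogate, center, half, alpha,
                          n_probe=4000, iters=50, seed=0):
    """Shrink an axis-aligned box [center - half, center + half] until the
    surrogate reliability stays above alpha at all sampled probe points.
    Returns lower and upper bounds: independent per-parameter ranges."""
    rng = np.random.default_rng(seed)
    half = half.astype(float)
    for _ in range(iters):
        probes = center + (2 * rng.random((n_probe, len(center))) - 1) * half
        vals = np.apply_along_axis(surrogate, 1, probes)
        if vals.min() >= alpha:
            break                              # no violation found: accept box
        worst = probes[vals.argmin()]          # most violating probe (the "cut")
        i = int((np.abs(worst - center) / half).argmax())
        half[i] = max(abs(worst[i] - center[i]), 1e-9)  # move face to the cut
    return center - half, center + half


# Toy reliability map, highest at the origin; target reliability 0.8.
lo, hi = largest_box_heuristic(lambda x: float(np.exp(-np.sum(x**2))),
                               center=np.zeros(2), half=2.0 * np.ones(2),
                               alpha=0.8)
print(lo, hi)
```

The returned bounds play the role of the operational envelope: independent ranges in the process parameters that can be communicated directly to operators.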
When kernel methods are applied to defect detection, training samples must be selected, because kernel methods rest on statistical learning theory; to extract the defects, the pre-image is calculated. In this paper, a sampling algorithm based on kernel alignment is designed to improve computational efficiency, where kernel alignment measures the similarity between different kernel functions and matrices. A local linear algorithm is proposed to calculate the pre-image. Once the 0–1 difference image is obtained, one algorithm determines whether defects are present, and another computes the center coordinates and areas of the defects in the 0–1 image. This method improves detection accuracy because it removes the effect of recovery errors. Experiments on a dataset of printing products show that the detection results are more accurate than those obtained using the difference matrix.
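Kernel alignment itself is a standard quantity (the Frobenius-inner-product cosine between two kernel matrices, following Cristianini et al.), so a small sketch can ground the sampling idea; the label-kernel target below is a common alignment reference, not necessarily the paper's exact construction.

```python
import numpy as np


def rbf_kernel(X, gamma=0.5):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))


def kernel_alignment(K1, K2):
    """Alignment A(K1, K2) = <K1, K2>_F / (||K1||_F * ||K2||_F); values near
    1 mean the two kernels judge sample similarity alike."""
    return np.sum(K1 * K2) / np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))


# Hypothetical use for training-sample selection: prefer subsets whose kernel
# stays well aligned with the ideal label kernel y y^T.
X = np.random.default_rng(0).normal(size=(60, 4))
y = np.sign(X[:, 0])
print(kernel_alignment(rbf_kernel(X), np.outer(y, y)))
```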