The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization over the past decade. Because of the high computational cost of large ensembles, EnKF is limited to small ensemble sizes in practice. This leads to spurious correlations in the covariance structure, causing incorrect updates or possible divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious-correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four thresholding functions are considered for thresholding the forecast covariance and gain matrices: the hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water-flooding cases (in petroleum reservoirs) with different levels of heterogeneity/nonlinearity. Besides adaptive thresholding, standard distance-dependent localization and the bootstrap Kalman gain are also implemented for comparison. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding and that it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks. (C) 2014 Elsevier Ltd. All rights reserved.
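A minimal sketch of the elementwise thresholding rules the abstract names, applied to the entries of a covariance (or gain) matrix. The lasso rule coincides with soft thresholding here, and the SCAD shape parameter a = 3.7 is the conventional choice from the statistics literature; the adaptive selection of the level lam itself is not shown.

```python
import numpy as np

def hard_threshold(x, lam):
    # Keep entries whose magnitude exceeds lam; zero out the rest.
    return np.where(np.abs(x) > lam, x, 0.0)

def soft_threshold(x, lam):
    # Shrink every entry toward zero by lam (lasso-style shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def scad_threshold(x, lam, a=3.7):
    # Smoothly Clipped Absolute Deviation: soft shrinkage near zero,
    # a linear transition in the middle, and the identity for large
    # entries (so strong correlations are left unbiased).
    ax = np.abs(x)
    return np.where(
        ax <= 2 * lam,
        soft_threshold(x, lam),
        np.where(ax <= a * lam,
                 ((a - 1) * x - np.sign(x) * a * lam) / (a - 2),
                 x))
```

Applied entrywise to a forecast covariance estimate, small (likely spurious) sample correlations are zeroed while large ones pass through nearly or exactly unchanged, which is the behavior the paper exploits.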
In this paper, a new adaptive denoising method is presented based on Stein's unbiased risk estimate (SURE) and on a new class of thresholding functions. We first present a new class of thresholding functions that have a continuous derivative, whereas the derivative of the standard soft-thresholding function is not continuous. These thresholding functions make it possible to construct adaptive algorithms for wavelet-shrinkage denoising. Using them, a new adaptive denoising method based on SURE is presented. Several numerical examples are given. The results indicate that, for denoising applications, the proposed method is very effective at adaptively finding the optimal solution in the mean-square-error (MSE) sense, and that it gives better MSE performance than conventional wavelet-shrinkage methods.
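As a point of reference for the SURE-based selection the abstract describes, here is a sketch of the classical SURE formula for plain soft thresholding (the paper's contribution is a smoother thresholding class, not shown here), with the level chosen by minimizing the estimated risk over the candidate set of coefficient magnitudes:

```python
import numpy as np

def sure_soft(x, lam, sigma=1.0):
    # Stein's unbiased risk estimate of the MSE of soft thresholding
    # at level lam, for observations x = theta + N(0, sigma^2) noise.
    n = x.size
    return (n * sigma**2
            - 2 * sigma**2 * np.sum(np.abs(x) <= lam)
            + np.sum(np.minimum(x**2, lam**2)))

def sure_threshold(x, sigma=1.0):
    # Pick the level minimizing SURE over the candidates {|x_i|}.
    cands = np.abs(x)
    risks = np.array([sure_soft(x, t, sigma) for t in cands])
    return cands[np.argmin(risks)]
```

Because SURE is an unbiased estimate of the true MSE, minimizing it over lam adaptively tracks the MSE-optimal threshold for each set of wavelet coefficients; the paper's continuously differentiable thresholding functions additionally allow gradient-based optimization of this risk.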
ISBN: (print) 0780302273
The authors show how the optimum hard-limiter can be found. They also show what the optimum operating point of this type of nonlinear function should be by illustrating the performance of this optimum hard-limiter when it is used with a simple neural network in content-addressable memories. It is demonstrated that there is a narrow band of values for normal operation of the hard-limiting function, beyond which the network cannot accurately recall any of the stored patterns. Mathematical analysis of the theoretical bounds of this parameter shows that this band narrows if the network is expected to work with noisier data. The network suffers no deterioration in recall quality under small deviations in the threshold when the noise ratio in the test patterns is low; however, the margin of safe operation narrows when the noise ratio of the test patterns is high. Other types of nonlinear functions with offsets have been shown to improve the ability of this type of neural network to accurately recover the original patterns.
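A toy sketch of the setting described: a Hopfield-style content-addressable memory with Hebbian storage and a hard-limiter whose operating point theta is an explicit parameter. The specific network and optimum theta from the paper are not reproduced; this only illustrates how recall depends on the limiter's threshold.

```python
import numpy as np

def train_hebbian(patterns):
    # Hebbian outer-product storage of +/-1 patterns, zero diagonal.
    P = np.array(patterns, dtype=float)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, theta=0.0, n_iter=10):
    # Synchronous recall through a hard-limiter at operating point
    # theta: a unit fires (+1) iff its net input reaches theta.
    s = np.array(probe, dtype=float)
    for _ in range(n_iter):
        s = np.where(W @ s >= theta, 1.0, -1.0)
    return s
```

Sweeping theta on noisy probes reproduces the qualitative finding of the abstract: recall succeeds only within a band of threshold values, and that band shrinks as the fraction of flipped bits in the probe grows.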
We consider structured optimization problems defined as the sum of a smooth, convex function and a proper, lower semicontinuous (l.s.c.), convex (typically nonsmooth) function in reflexive variable-exponent Lebesgue spaces L^{p(·)}(Ω). Due to their intrinsic space-variant properties, such spaces can naturally serve as solution spaces and be combined with space-variant functionals for the solution of ill-posed inverse problems. For this purpose, we propose and analyze two instances (primal and dual) of proximal-gradient algorithms in L^{p(·)}(Ω), where the proximal step, rather than depending on the natural (nonseparable) L^{p(·)}(Ω) norm, is defined in terms of its modular function, which, thanks to its separability, allows for the efficient computation of the algorithmic iterates. Convergence in function values is proved for both algorithms, with convergence rates depending on problem/space smoothness. To show the effectiveness of the proposed modeling, numerical tests highlighting the flexibility of the space L^{p(·)}(Ω) are presented for exemplar deconvolution and mixed-noise-removal problems. Finally, a numerical comparison of the convergence speed and computational cost of both algorithms with analogous ones defined in standard L^p(Ω) spaces is presented.
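For orientation, a sketch of the standard Euclidean proximal-gradient iteration (ISTA) that the abstract's algorithms generalize, on the lasso model min 0.5||Ax − b||² + lam||x||₁, whose prox is the separable soft-thresholding map. The paper's contribution, replacing this Euclidean prox with one defined via the separable modular of L^{p(·)}(Ω), is not reproduced here.

```python
import numpy as np

def ista(A, b, lam, step, n_iter=500):
    # Proximal-gradient iteration: gradient step on the smooth term
    # 0.5*||Ax - b||^2, then the prox of lam*||x||_1, which for the
    # Euclidean metric is elementwise soft thresholding.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

The key structural point mirrored here is separability: because the prox splits across coordinates, each iterate costs only an elementwise operation, which is exactly the property the modular-based prox preserves in L^{p(·)}(Ω) while the norm-based prox would not.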