Production optimization is an effective technique to maximize oil recovery or the net present value in reservoir development. Recently, the stochastic simplex approximation gradient (StoSAG) optimization algorithm has drawn significant attention within the family of optimization algorithms, showing high search quality on large-scale engineering problems. However, its optimization performance and features are not fully understood. This study evaluated and analyzed the influence of several key parameters of the StoSAG optimization process, including the ensemble size used to estimate the approximate gradient, the step size, the cut number, the perturbation size, and the initial position, using 47 mathematical benchmark functions. Statistical analysis was employed to reduce the influence of the algorithm's randomness. The quality of the optimization results, the convergence behavior, and the computational time consumption were analyzed and compared, and a parameter selection strategy was presented. The results showed that a larger ensemble size was not always favorable for obtaining better optimization results. Increasing the search step size helped the algorithm escape from local optima, but a large step size needed to be matched with a large cut number. Increasing the cut number improved the local search ability but also made the algorithm more likely to fall into a local optimum. A random initial position was beneficial for finding the global optimum. Moreover, the effectiveness of the parameter selection strategy was tested on a classical reservoir production optimization example. The final net present value (NPV) for water-flooding reservoir production optimization increased substantially, indicating the excellent performance of StoSAG once the key parameters were adjusted.
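As a rough illustration of the quantities discussed above, the following Python sketch shows one common StoSAG-style formulation: an approximate gradient averaged over an ensemble of Gaussian perturbations, used in a simple ascent loop with step-size cutting. The function names, the perturbation model, and the cutting rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stosag_gradient(objective, u, ensemble_size=10, perturbation_size=0.1, rng=None):
    """Stochastic simplex approximate gradient of `objective` at control vector `u`,
    averaged over an ensemble of Gaussian perturbations."""
    rng = np.random.default_rng() if rng is None else rng
    j0 = objective(u)
    grad = np.zeros_like(u)
    for _ in range(ensemble_size):
        du = perturbation_size * rng.standard_normal(u.shape)
        # one-sided simplex contribution: du * (J(u + du) - J(u)) / |du|^2
        grad += du * (objective(u + du) - j0) / np.dot(du, du)
    return grad / ensemble_size

def maximize(objective, u0, step_size=0.5, cut_number=5, max_iter=50, **kwargs):
    """Gradient ascent with step cutting: halve the trial step up to `cut_number`
    times whenever it fails to improve the objective."""
    u, best = u0.copy(), objective(u0)
    for _ in range(max_iter):
        g = stosag_gradient(objective, u, **kwargs)
        step, accepted = step_size, False
        for _ in range(cut_number):
            trial = u + step * g / (np.linalg.norm(g) + 1e-12)
            if objective(trial) > best:
                u, best, accepted = trial, objective(trial), True
                break
            step *= 0.5
        if not accepted:
            break
    return u, best
```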
Image analysis tasks such as size measurement and landmark-based registration require the user to select control points in an image. The output of such algorithms depends on the choice of control points. Since the choice of points varies from one user to the next, the requirement for user input introduces variability into the output of the algorithm. In order to test and/or optimize such algorithms, it is necessary to assess the multiplicity of outputs generated by the algorithm in response to a large set of inputs; however, the input of data requires substantial time and effort from multiple users. In this paper we describe a method to automate the testing and optimization of algorithms using "virtual operators," which consist of a set of spatial distributions describing how actual users select control points in an image. In order to construct the virtual operator, multiple users must repeatedly select control points in the image on which testing is to be performed. Once virtual operators are generated, control points for initializing the algorithm can be generated from them using a random number generator. Although an initial investment of time is required from the users in order to construct the virtual operator, testing and optimization of the algorithm can be done without further user interaction. We illustrate the construction and use of virtual operators by testing and optimizing our prostate boundary segmentation algorithm. The algorithm requires the user to select four control points on the prostate as input. (C) 2003 American Association of Physicists in Medicine.
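The construction of a virtual operator and the sampling of synthetic control points can be sketched roughly as follows, assuming each control point's selections are summarized by a 2-D Gaussian; the names `build_virtual_operator`, `sample_control_points`, and the example `segment` routine are hypothetical, not the paper's code.

```python
import numpy as np

def build_virtual_operator(selections):
    """Fit one 2-D Gaussian per control point from repeated user selections.

    `selections` has shape (n_trials, n_points, 2): many users' repeated
    picks of the same control points in one image.
    """
    means = selections.mean(axis=0)                                   # (n_points, 2)
    covs = [np.cov(selections[:, k, :].T) for k in range(selections.shape[1])]
    return means, covs

def sample_control_points(means, covs, rng=None):
    """Draw one synthetic set of control points from the virtual operator."""
    rng = np.random.default_rng() if rng is None else rng
    return np.array([rng.multivariate_normal(m, c) for m, c in zip(means, covs)])

# Example use: test a segmentation routine on many synthetic initializations.
# means, covs = build_virtual_operator(recorded_selections)
# outputs = [segment(image, sample_control_points(means, covs)) for _ in range(1000)]
```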
An optical extensometer was tested using artificially deformed images with a known strain field. A real image series from a tensile test was used to obtain realistic deformation parameters, including spatial and temporal strain characteristics, changes in tonal pixel properties due to deformation, and the effect of nonuniform illumination. These parameters are used to artificially deform a real image taken from an object with a random speckle pattern. The signal-to-noise ratio of the resulting artificially deformed images is varied by applying a blurring pillbox filter and additive Gaussian noise to them. The optical extensometer uses digital image correlation to track homologous points of the object, and further to measure strains. The strain measurement algorithm includes a heuristic to dynamically control the template size in image correlation. Furthermore, several other methods to improve the accuracy-complexity ratio of the algorithm exist. The effects of different parameters and heuristics on the accuracy of the algorithm as well as its robustness against blur and noise are studied. Results show that the proposed test method is practical, and the heuristics improve the accuracy and robustness of the algorithm. (c) 2008 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2993319]
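The image degradation step described above (pillbox blur plus additive Gaussian noise) can be reproduced approximately as in the sketch below; the kernel construction, parameter names, and clipping range are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import convolve

def pillbox_kernel(radius):
    """Uniform disk ('pillbox') blur kernel of the given pixel radius."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def degrade(image, blur_radius=2, noise_sigma=5.0, rng=None):
    """Lower the SNR of a test image with pillbox blur plus additive Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = convolve(image.astype(float), pillbox_kernel(blur_radius), mode="reflect")
    noisy = blurred + rng.normal(0.0, noise_sigma, image.shape)
    return np.clip(noisy, 0, 255)  # assumes an 8-bit grey-level image
```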
An iterative algorithm for the decomposition of data series into trend and residual (including seasonal) components is proposed. The algorithm is based on approaches proposed by the authors in several previous studies and provides unbiased estimates of the trend and seasonal components for data with a strong trend containing different periodic (including seasonal) variations, as well as observational gaps and omissions. The main idea of the algorithm is that both the trend and the seasonal components should be estimated from a signal that is maximally cleaned of all other variations, which are treated as noise: when estimating the trend component, the seasonal variation is noise, and vice versa. The iterative approach allows a priori information to be used more completely in optimizing the models of both the trend and the seasonal components. The approximation procedure provides maximum flexibility and is fully controllable at all stages of the process. In addition, it naturally handles missing observations and defective measurements without filling these dates with artificially simulated values. The algorithm was tested on data on changes in the concentration of CO2 in the atmosphere at four stations belonging to different latitudinal zones. The choice of these data is explained by features that complicate the use of other methods, namely, high interannual variability, high-amplitude seasonal variations, and gaps in the series of observed data. The algorithm made it possible to obtain trend estimates (which are of particular importance for studying the characteristics and searching for the causes of global warming) for any time interval, including those that are not multiples of an integer number of years. The rate of increase in the atmospheric CO2 content has also been analyzed. It has been reliably established that, around 2016, the rate of CO2 accumulation in the atmosphere stabilized and
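A minimal sketch of the alternating idea follows, assuming a monthly series on a pandas DatetimeIndex, a rolling-mean trend estimator, and a per-month seasonal average; the authors' actual approximation procedure is more elaborate, and all names here are illustrative.

```python
import pandas as pd

def decompose(series, n_iter=10, trend_window=25):
    """Iteratively split a series (gaps allowed as NaN) into trend + seasonal + residual.

    The trend is estimated from the seasonally corrected signal, the seasonal
    component from the detrended signal, and the two steps are repeated.
    """
    seasonal = pd.Series(0.0, index=series.index)
    for _ in range(n_iter):
        # Trend from the signal cleaned of the current seasonal estimate;
        # a centered rolling mean that tolerates missing values.
        trend = (series - seasonal).rolling(trend_window, center=True,
                                            min_periods=trend_window // 2).mean()
        # Seasonal from the detrended signal: average per calendar month,
        # recentred so that the seasonal component is zero on average.
        resid = series - trend
        monthly = resid.groupby(resid.index.month).transform("mean")
        seasonal = monthly - monthly.mean()
    return trend, seasonal, series - trend - seasonal
```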
An adaptive model is proposed to describe time-varying seasonal effects. The seasonal average function is constructed using an iterative algorithm that provides a clean decomposition of the signal into generalized trend, seasonal, and residual components. By a trend, we mean long-term evolutionary changes in the average signal level, both unidirectional and chaotic, in the form of a slow random drift. The algorithm yields unbiased estimates of each signal component even in the presence of a significant number of missing observations, and the series length is not required to be a multiple of an integer number of years. In contrast to the usual "Climate Normals" (CN) model, the adaptive model of seasonal effects assumes a continuous, slow change in the properties of the seasonal component over time. The degree of allowable year-to-year variability in the seasonal effects is a tunable parameter of the model. In particular, this makes it possible to show the growth of the amplitude of seasonal fluctuations over time as a continuous (smooth) function without tying these changes to predetermined calendar epochs. The algorithm was tested on the atmospheric CO2 concentration monitoring series at the Barrow, Mauna Loa, Tutuila, and South Pole stations, located at different latitudes. The form of the seasonal variation was estimated, and the average amplitude of the seasonal variation and the rate of its change at each station were calculated. Noticeable differences in the dynamics of the studied parameters between stations are demonstrated. The mean amplitude of the seasonal variation in CO2 concentration at the Barrow, Mauna Loa, Tutuila, and South Pole stations in the epoch 2010-2019 was estimated as 18.15, 7.08, 1.30, and 1.26 ppm, respectively, and the average rate of increase in the amplitude of the seasonal variation in CO2 concentration over the interval 1976-2019 is 0.085, 0.0100, 0.0165, and 0.
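One simple way to realize a slowly drifting seasonal component with a tunable degree of year-to-year variability is an exponentially weighted update per phase of the cycle, sketched below; the `memory` parameter and the update rule are illustrative assumptions, not the authors' model.

```python
import numpy as np

def adaptive_seasonal(detrended, period=12, memory=0.9):
    """Seasonal component that is allowed to drift slowly from year to year.

    `memory` in (0, 1] controls the allowed year-to-year variability:
    values near 1 approach a fixed 'climate normal', smaller values adapt faster.
    Missing values (NaN) simply do not update the running estimate.
    """
    detrended = np.asarray(detrended, dtype=float)
    state = np.zeros(period)                 # running estimate per phase of the cycle
    seen = np.zeros(period, dtype=bool)
    seasonal = np.full_like(detrended, np.nan)
    for t, v in enumerate(detrended):
        k = t % period
        if np.isfinite(v):
            state[k] = v if not seen[k] else memory * state[k] + (1 - memory) * v
            seen[k] = True
        if seen[k]:
            seasonal[t] = state[k]
    return seasonal - np.nanmean(seasonal)   # recentre so the seasonal is zero-mean
```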
ISBN (print): 9781538636909
Imaging algorithms are key factors in the performance of synthetic aperture radar (SAR) imaging. Most existing testing methods need field environments and field data, which dramatically reduces the effectiveness of software testing. This paper presents a novel testing method for narrow-band SAR imaging algorithms consisting of four parts: system parameter verification, echo data simulation, algorithm simulation with real-data verification, and evaluation of the imaging results. The scheme fully validates the correctness and feasibility of imaging algorithms and greatly improves the validity of the testing work. Finally, the correctness and effectiveness of the proposed method are verified by simulation experiments and real data results.
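The four-part scheme can be pictured as a pipeline like the following sketch, in which the parameter checks, the single point-target echo model, and the focusing metric are hypothetical stand-ins for the paper's components.

```python
import numpy as np

def verify_parameters(params):
    """Sanity checks on system parameters (hypothetical constraints)."""
    assert params["prf"] > params["doppler_bandwidth"], "PRF must cover the Doppler bandwidth"
    assert params["fs"] > params["chirp_bandwidth"], "sampling rate must cover the chirp bandwidth"

def simulate_point_target_echo(params, n_range=512, n_azimuth=512):
    """Toy raw echo of a single point target: a separable 2-D chirp phase history."""
    tau = (np.arange(n_range) - n_range / 2) / params["fs"]
    eta = (np.arange(n_azimuth) - n_azimuth / 2) / params["prf"]
    kr, ka = params["range_chirp_rate"], params["azimuth_chirp_rate"]
    return np.exp(1j * np.pi * kr * tau[None, :] ** 2) * np.exp(1j * np.pi * ka * eta[:, None] ** 2)

def evaluate_image(image):
    """Crude focusing metric: peak-to-mean intensity ratio of the magnitude image."""
    mag = np.abs(image)
    return mag.max() / mag.mean()

def test_imaging_algorithm(imaging_algorithm, params, real_echo=None):
    """Run the four stages: parameter check, echo simulation, imaging, evaluation."""
    verify_parameters(params)
    results = {"simulated": evaluate_image(imaging_algorithm(simulate_point_target_echo(params), params))}
    if real_echo is not None:
        results["real"] = evaluate_image(imaging_algorithm(real_echo, params))
    return results
```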
Earlier, it was shown that conventional algorithms for solving the inverse VES problem cannot achieve the accuracy required for precision monitoring of a geoelectric section, and regularized algorithms were proposed to improve the accuracy and stability of solving the inverse VES problem. In this paper, we test the resistivity contrast stabilization algorithm on synthetic data. For modeling, a geoelectric section is used, similar to the section of the Garm test site both in the set of layers and their resistivities, and in the characteristics of seasonal variations, as well as noise. It is shown that regularization of the inverse problem greatly reduces errors. The most significant effect is achieved by suppressing the buildup of resistivity. Estimates are obtained for the accuracy in solving the inverse problem, which can be achieved when working with experimental data.
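A generic form of such regularization is a damped Gauss-Newton step that penalizes departures of the log-resistivities from a reference model, which suppresses spurious growth of resistivity contrast; the sketch below is a textbook Tikhonov-style update, not the authors' specific stabilization algorithm.

```python
import numpy as np

def regularized_step(jacobian, residual, log_rho, log_rho_ref, alpha=1.0):
    """One Gauss-Newton update for the layer log-resistivities with a penalty
    that keeps the model close to a reference, damping resistivity buildup.

    Solves (J^T J + alpha * I) dm = J^T r - alpha * (m - m_ref) and returns m + dm.
    """
    jtj = jacobian.T @ jacobian
    rhs = jacobian.T @ residual - alpha * (log_rho - log_rho_ref)
    dm = np.linalg.solve(jtj + alpha * np.eye(jtj.shape[0]), rhs)
    return log_rho + dm
```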
Previous work by the author has shown that the entropy of an image's histogram can be used to control the acquisition variables (brightness, contrast, shutter speed) of a camera/digitiser combination in situations where the imaging conditions are changing. Although the control leads to histograms that satisfy pragmatic expectations of what a 'good' histogram should look like (i.e. filling the dynamic range of the digitiser without too much saturation), it avoids the problem of what we mean by a good histogram in the machine vision context and whether the control produces images that have these histograms. In this work a good image is defined to be one where the subsequent analysis algorithms work well. Three different algorithms, each containing many diverse components, are tested on sets of images with different acquisition parameters. As well as acquiring at different parameters, a simulation of the image acquisition process is derived and validated to assist evaluation. Test results show that near-optimal performance is obtained with maximum entropy and it is concluded that this measure is a suitable one for control of image acquisition. (C) 2002 Elsevier Science B.V. All rights reserved.
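Histogram entropy and a simple entropy-maximizing choice of an acquisition parameter can be computed as in the sketch below; the `acquire` callback, the single shutter variable, and the 8-bit range are illustrative assumptions, not the author's control scheme.

```python
import numpy as np

def histogram_entropy(image, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of an image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def tune_shutter(acquire, shutter_values):
    """Pick the shutter setting whose image has maximum histogram entropy.

    `acquire(shutter)` is assumed to return a grey-level image at that setting.
    """
    scores = {s: histogram_entropy(acquire(s)) for s in shutter_values}
    return max(scores, key=scores.get)
```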
ISBN (print): 9789532330953
The paper describes a simulator for testing parameter estimation algorithms for power systems. The simulator allows the test conditions to be defined: the amplitude and phase of the signal, the presence of a decaying DC component, the number of samples, the harmonic content, and changes in the frequency of the fundamental harmonic. The test criteria are the error of the estimated value with respect to the reference value and the convergence time of the estimate. As an illustration of algorithm operation, the time waveform of the input signal and the time responses of the estimated amplitude and phase are shown. Algorithms for amplitude and phase estimation are implemented in the simulator: for amplitude estimation, an algorithm based on the Fourier bandwidth filter and an algorithm based on the least squares method; for phase estimation, a zero-crossing algorithm and an algorithm based on the Fourier bandwidth filter. All implemented algorithms are described together with their mathematical models. The simulator was built in Matlab and can be extended with the addition of noise, the implementation of other algorithms, and the development and testing of new algorithms. To demonstrate the usability of the simulator, results obtained from testing are shown.
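A sketch of the kind of test such a simulator performs is given below in Python rather than Matlab: a synthetic signal with a decaying DC component and harmonics, and a full-cycle Fourier estimate of the fundamental's amplitude and phase. The signal parameters and function names are assumptions, not the simulator's actual code.

```python
import numpy as np

def test_signal(t, amplitude=100.0, phase=0.3, f0=50.0,
                dc=20.0, dc_tau=0.05, harmonics=((3, 10.0), (5, 5.0))):
    """Synthetic fault-like signal: fundamental + decaying DC component + harmonics."""
    x = amplitude * np.cos(2 * np.pi * f0 * t + phase) + dc * np.exp(-t / dc_tau)
    for order, amp in harmonics:
        x += amp * np.cos(2 * np.pi * order * f0 * t)
    return x

def fourier_amplitude_phase(x, samples_per_cycle):
    """Full-cycle Fourier filter: amplitude and phase of the fundamental,
    estimated from the last full cycle of samples."""
    n = samples_per_cycle
    k = np.arange(n)
    win = x[-n:]
    re = 2.0 / n * np.sum(win * np.cos(2 * np.pi * k / n))
    im = 2.0 / n * np.sum(win * np.sin(2 * np.pi * k / n))
    return np.hypot(re, im), np.arctan2(im, re)

# Example: a 50 Hz system sampled at 32 samples per cycle over 0.2 s.
fs, spc = 50.0 * 32, 32
t = np.arange(0, 0.2, 1 / fs)
amp, ph = fourier_amplitude_phase(test_signal(t), spc)
```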