An impedance analyzer is described which uses direct sampling of the stimulus and response signals into a laboratory data acquisition system. The analyzer uses subsampling in order to extend its range to frequencies higher than the sampling frequency. An algorithm which computes the optimum sampling rate is described. The analyzer can be used in the frequency range from above 10^5 Hz to below 10^-2 Hz. In the range from 50 kHz to 0.01 Hz the relative amplitude error was found to be less than 0.01% and the phase error to be less than 0.1 degree.
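The subsampling idea above can be sketched numerically: a stimulus above the sampling rate folds down to a predictable alias frequency, which is what the analyzer measures. This is a minimal illustration of the aliasing relation only; the paper's optimum-rate algorithm is not reproduced here, and the example frequencies are assumptions.

```python
# Sketch of how subsampling maps a stimulus frequency above the sampling
# rate into the baseband (aliasing relation only; illustrative values).

def alias_frequency(f_signal, f_sample):
    """Apparent (aliased) frequency of a tone sampled at rate f_sample."""
    k = round(f_signal / f_sample)      # nearest multiple of the sample rate
    return abs(f_signal - k * f_sample)

# A 100 kHz stimulus sampled at only 9.7 kHz appears at a 3 kHz alias:
print(alias_frequency(100e3, 9.7e3))    # 3000.0
# A tone already below Nyquist maps to itself:
print(alias_frequency(3e3, 10e3))       # 3000.0
```

Choosing the sampling rate so that the alias lands at a convenient, noise-free baseband frequency is the essence of the optimum-rate computation the abstract mentions.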
We study the fundamental problem of the exact and efficient generation of random values from a finite and discrete probability distribution. Suppose that we are given n distinct events with associated probabilities p_1, ..., p_n. First, we consider the problem of sampling from the distribution where the i-th event has probability proportional to p_i. Second, we study the problem of sampling a subset which includes the i-th event independently with probability p_i. For both problems we present, on two different classes of inputs (sorted and general probabilities), efficient data structures consisting of a preprocessing and a query algorithm. Varying the allotted preprocessing time yields a trade-off between preprocessing and query time, which we prove to be asymptotically optimal everywhere.
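As a baseline for the first problem (sampling an index with probability proportional to p_i), the classic preprocessing/query split looks as follows: O(n) preprocessing builds prefix sums, and each query is an O(log n) binary search. This is only the textbook structure, not the paper's optimal trade-off curve.

```python
import bisect
import itertools
import random

class ProportionalSampler:
    """O(n) preprocessing, O(log n) query via prefix sums + binary search."""

    def __init__(self, weights):
        # prefix sums partition [0, total) into intervals of length p_i
        self.prefix = list(itertools.accumulate(weights))
        self.total = self.prefix[-1]

    def sample(self, rng=random):
        u = rng.random() * self.total   # uniform in [0, total)
        return bisect.bisect_right(self.prefix, u)

s = ProportionalSampler([1.0, 3.0, 6.0])
rng = random.Random(0)
counts = [0, 0, 0]
for _ in range(10_000):
    counts[s.sample(rng)] += 1
# counts come out roughly proportional to the weights 1 : 3 : 6
```

The paper's contribution is precisely to beat this fixed trade-off: spending more (or less) preprocessing time buys correspondingly faster (or slower) queries, optimally at every point.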
The best calculation of concentration profiles, isoconcentration surfaces or Gibbsian interfacial excesses from three-dimensional atom-probe microscopy data requires a compromise between spatial positioning error and statistical sampling error. For example, sampling from larger spatial regions decreases the statistical error, but increases the error in spatial positioning. Finding the appropriate balance for a particular calculation can be tricky, especially when the three-dimensional nature of the data presents an infinite number of degrees of freedom in defining surfaces, and when the statistical error changes from one region of a sample to another due to differences in collection efficiency or atomic density. We present some strategies for approaching these problems, focusing on efficient algorithms for generating different spatial samplings. We present a unique double-splat algorithm, in which an initial, fine-grained sampling is taken to convert the data to a regular grid, followed by a second, variable-width splat to spread the effective sampling distance to any value desired. The first sampling is time-consuming for a large dataset, but needs to be performed only once. The second splat is done on a regular grid, so it is efficient, and can be repeated as many times as necessary to find the correct balance of statistical and positioning error. The net effect is equivalent to a Gaussian spreading of each data point, without the necessity of calculating Gaussian coefficients for millions of data points. We show examples of isoconcentration surfaces calculated under different circumstances from the same dataset. (C) 2002 Published by Elsevier Science B.V.
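A one-dimensional sketch of the double-splat idea: the expensive first pass bins the raw positions onto a regular grid once; the cheap, repeatable second pass spreads each grid cell by any desired width. The grid size and the box kernel used for the second splat are illustrative assumptions, not the paper's choices.

```python
def first_splat(positions, n_bins, lo, hi):
    """One-time fine-grained binning of raw data onto a regular grid."""
    grid = [0.0] * n_bins
    width = (hi - lo) / n_bins
    for x in positions:
        i = min(int((x - lo) / width), n_bins - 1)
        grid[i] += 1.0
    return grid

def second_splat(grid, half_width):
    """Cheap, repeatable smoothing on the regular grid (box kernel here)."""
    n = len(grid)
    out = [0.0] * n
    for i, v in enumerate(grid):
        lo_i = max(0, i - half_width)
        hi_i = min(n, i + half_width + 1)
        share = v / (hi_i - lo_i)       # spread the cell's mass evenly
        for j in range(lo_i, hi_i):
            out[j] += share
    return out

grid = first_splat([0.1, 0.11, 0.5, 0.9], n_bins=10, lo=0.0, hi=1.0)
smoothed = second_splat(grid, half_width=1)   # widen as needed, cheaply
```

Because the second splat operates on the regular grid rather than on millions of raw points, it can be rerun with different widths until the statistical/positioning balance is right, which is the efficiency argument the abstract makes.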
This paper studies the classic maximum entropy sampling problem (MESP), which aims to select the most informative principal submatrix of a prespecified size from a covariance matrix. By investigating its Lagrangian dual and primal characterization, we derive a novel convex integer program for MESP and show that its continuous relaxation yields a near-optimal solution. The results motivate us to develop a sampling algorithm and derive its approximation bound for MESP, which improves the best known bound in the literature. We then provide an efficient deterministic implementation of the sampling algorithm with the same approximation bound. In addition, we investigate the widely used local search algorithm and prove its first known approximation bound for MESP. The proof techniques further inspire an efficient implementation of the local search algorithm. Our numerical experiments demonstrate that these approximation algorithms can efficiently solve medium-size and large-scale instances to near-optimality. Finally, we extend the analyses to the A-optimal MESP, in which the objective is to minimize the trace of the inverse of the selected principal submatrix.
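The local search the abstract analyzes can be sketched as a swap heuristic: starting from any size-s subset, exchange one chosen index for one unchosen index whenever the swap increases log det C[S,S]. The toy covariance matrix and the starting subset below are illustrative assumptions; the paper's contribution is the approximation bound that makes this heuristic principled.

```python
import math

def logdet(matrix):
    """log-determinant of a small positive-definite matrix (Gaussian elimination,
    no pivoting needed for positive-definite input)."""
    a = [row[:] for row in matrix]
    n = len(a)
    acc = 0.0
    for k in range(n):
        acc += math.log(a[k][k])
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
    return acc

def submatrix(C, S):
    return [[C[i][j] for j in S] for i in S]

def local_search(C, s):
    """Swap one chosen index for one unchosen index while it helps."""
    n = len(C)
    S = list(range(s))                  # arbitrary starting subset
    improved = True
    while improved:
        improved = False
        for i in list(S):
            for j in range(n):
                if j in S:
                    continue
                T = [j if k == i else k for k in S]
                if logdet(submatrix(C, T)) > logdet(submatrix(C, S)) + 1e-12:
                    S = T
                    improved = True
    return sorted(S)

# toy 3x3 covariance matrix; select the 2 most informative indices
C = [[4.0, 0.2, 0.1],
     [0.2, 1.0, 0.0],
     [0.1, 0.0, 3.0]]
best = local_search(C, 2)               # [0, 2] for this matrix
```

Each swap only needs the change in log-determinant, which efficient implementations compute incrementally rather than from scratch as done here.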
This brief addresses the issue of monitoring physical spatial phenomena of interest using information collected by a resource-constrained network of mobile, wireless, and noisy sensors that can take discrete measurements as they navigate through the environment. We first propose a novel and efficient optimality criterion for designing a sampling strategy that finds the most informative locations for future observations, so as to minimize the uncertainty at all unobserved locations of interest. The resulting solution is proven to lie within known bounds, and the computational complexity of the proposed criterion is shown to be practically feasible. We then prove that, under a certain monotonicity condition, the approximate entropy at the locations obtained by our proposed algorithm is within 1 - (1/e) of the optimum, which is then utilized as a stopping criterion for the sampling algorithm. The criterion enables the prediction results to stay within user-defined accuracies by controlling the number of mobile sensors. The effectiveness of the proposed method is illustrated using a previously published data set.
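The 1 - (1/e) factor cited above is the classic Nemhauser-Wolsey guarantee for greedy maximization of a monotone submodular function. A generic greedy sketch follows, using set cover as the illustrative submodular objective (an assumption: the brief's actual objective is an entropy criterion, not coverage).

```python
def greedy_submodular(candidates, gain, k):
    """Pick k elements, each time taking the largest marginal gain."""
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: gain(chosen, c))
        chosen.append(best)
    return chosen

# illustrative coverage objective: each candidate sensor "sees" some cells
coverage = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6, 7},
}

def marginal_gain(chosen, c):
    seen = set().union(*(coverage[x] for x in chosen)) if chosen else set()
    return len(coverage[c] - seen)

picked = greedy_submodular(list(coverage), marginal_gain, 2)
# greedy takes s3 first (4 new cells), then s1 (3 new cells)
```

For any monotone submodular gain, this greedy loop achieves at least 1 - (1/e) of the optimal objective value, which is what makes such a bound usable as a stopping criterion.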
Among the factors affecting the physical and mental health of vocational college students, the sense of inferiority plays a very important role in cultivating students with sound physical and mental health. The inverse random under-sampling algorithm is improved on the basis of ensemble learning, which can improve the performance of the classifier, and a stacking-ensemble inverse random under-sampling algorithm, SIRUS, is proposed. Among the factors studied in this paper, the important individual subjective factor is self-attribution, the important social objective factor is social support, and the demographic variables are the only ones showing a significant difference.
Chromy (1979) proposed an unequal probability sampling algorithm, which is the default sequential method used in the SURVEYSELECT procedure of the SAS software. In this article, we demonstrate that Chromy sampling is equivalent to pivotal sampling. This makes it possible to estimate the variance unbiasedly for the randomized version of the method programmed in the SURVEYSELECT procedure.
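A sketch of sequential pivotal sampling, the method the article shows Chromy sampling to be equivalent to: given inclusion probabilities summing to an integer sample size, two "fighting" units repeatedly merge their probability mass until one of them is fixed at 0 or 1. The update probabilities below follow the standard pivotal rules; the example probabilities are illustrative.

```python
import random

def pivotal_sample(pi, rng=random):
    """Sequential pivotal sampling for unequal inclusion probabilities."""
    pi = list(pi)
    n = len(pi)
    i = 0                               # index of the current "fighter"
    for j in range(1, n):
        a, b = pi[i], pi[j]
        if a + b <= 1.0:
            # one unit is eliminated (prob. 0), the other keeps a + b
            if rng.random() < b / (a + b):
                pi[i], pi[j] = 0.0, a + b
            else:
                pi[i], pi[j] = a + b, 0.0
        else:
            # one unit is selected (prob. 1), the other keeps a + b - 1
            if rng.random() < (1.0 - a) / (2.0 - a - b):
                pi[i], pi[j] = a + b - 1.0, 1.0
            else:
                pi[i], pi[j] = 1.0, a + b - 1.0
        # whichever unit is still unresolved fights the next one
        i = i if 0.0 < pi[i] < 1.0 else j
    return [k for k in range(n) if pi[k] >= 0.5]

rng = random.Random(1)
sample = pivotal_sample([0.5, 0.3, 0.7, 0.5], rng)   # always 2 units here
```

Each unit k ends up in the sample with probability exactly pi[k], and the fixed sample size (here 2, the sum of the inclusion probabilities) is preserved by construction.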
A balanced sampling design is defined by the property that the Horvitz-Thompson estimators of the population totals of a set of auxiliary variables equal the known totals of these variables. Therefore the variances of estimators of totals of all the variables of interest are reduced, depending on the correlations of these variables with the controlled variables. In this paper, we develop a general method, called the cube method, for selecting approximately balanced samples with equal or unequal inclusion probabilities and any number of auxiliary variables.
When auxiliary information is available at the design stage, samples may be selected by means of balanced sampling. Deville and Tillé proposed in 2004 a general algorithm to perform balanced sampling, named the cube method. In this paper, we are interested in a particular case of the cube method named pivotal sampling, first described by Deville and Tillé in 1998. We show that this sampling algorithm, when applied to units ranked in a fixed order, is equivalent to Deville's systematic sampling, in the sense that both algorithms lead to the same sampling design. This characterization enables the computation of the second-order inclusion probabilities for pivotal sampling. We show that pivotal sampling makes it possible to exploit an appropriate ordering of the units to achieve a variance reduction, while limiting the loss of efficiency if the ordering is not appropriate.
In financial risk management, modelling the dependency within a random vector X is crucial; a standard approach is the use of a copula model. Say the copula model can be sampled through realizations of Y having copula function C: had the marginals of Y been known, sampling X^(i), the i-th component of X, would follow directly by composing Y^(i) with its cumulative distribution function (c.d.f.) and the inverse c.d.f. of X^(i). In this work, the marginals of Y are not explicit, as in a factor copula model. We design an algorithm which samples X through an empirical approximation of the c.d.f. of the Y-marginals. To be able to handle complex distributions for Y or rare-event computations, we allow Markov Chain Monte Carlo (MCMC) samplers. We establish convergence results whose rates depend on the tails of X, Y and the Lyapunov function of the MCMC sampler. We present numerical experiments confirming the convergence rates and also revisit a real data analysis from financial risk management.
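The two-stage construction above can be sketched for one component: estimate the Y-marginal's c.d.f. empirically from pilot draws, then map a fresh draw of Y to X by composing the empirical c.d.f. with the known inverse c.d.f. of the target marginal. Here Y is a toy Gaussian and X has an Exponential(1) marginal; both choices are illustrative assumptions, as is the plain i.i.d. sampler standing in for an MCMC one.

```python
import bisect
import math
import random

def empirical_cdf(samples):
    """Empirical c.d.f. built once from pilot draws of a Y-marginal."""
    xs = sorted(samples)
    n = len(xs)
    # dividing by n + 1 keeps the value strictly below 1
    return lambda y: bisect.bisect_right(xs, y) / (n + 1)

def exp_inverse_cdf(u, rate=1.0):
    """Inverse c.d.f. of the (known) Exponential target marginal."""
    return -math.log(1.0 - u) / rate

rng = random.Random(42)
# stage 1: learn the marginal c.d.f. of Y^(1) from pilot draws
pilot = [rng.gauss(0.0, 1.0) for _ in range(5000)]
F_hat = empirical_cdf(pilot)

# stage 2: push a fresh draw of Y^(1) through F_hat, then through the
# inverse c.d.f. of the target marginal to obtain a component of X
y = rng.gauss(0.0, 1.0)
x = exp_inverse_cdf(F_hat(y))
```

Applied componentwise to a dependent draw of Y, this transformation preserves the copula of Y while imposing the desired marginals on X, up to the empirical-c.d.f. approximation error the paper's convergence rates quantify.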