We give sufficient robust stability conditions for matrix polytopes of linear systems with impulsive influence. The methods of our study are based on logarithmic matrix measure theory and linear operator theory in Banach spaces. Our results reduce the robust stability problem to the feasibility problem for a system of linear matrix inequalities in the class of positive definite matrices.
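The abstract above reduces robust stability to an LMI feasibility problem; as a loose illustration of the logarithmic-measure idea (not the paper's LMI condition), the sketch below uses the measure induced by the infinity norm, which is convex, so negativity at every vertex of a matrix polytope implies negativity on the whole polytope. The matrices are hypothetical examples.

```python
# Logarithmic matrix measure induced by the infinity norm:
#   mu_inf(A) = max_i ( a_ii + sum_{j != i} |a_ij| )
# mu_inf is convex in A, so mu_inf(A_k) < 0 at every vertex A_k of a
# matrix polytope implies mu_inf < 0 on the whole polytope -- a
# (conservative) sufficient condition for stability of x' = A x.

def mu_inf(A):
    """Logarithmic measure (infinity norm) of a square matrix given as a list of rows."""
    n = len(A)
    return max(A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

# Two hypothetical vertices of a matrix polytope.
A1 = [[-3.0, 1.0], [0.5, -2.0]]
A2 = [[-4.0, 0.5], [1.0, -2.5]]

for A in (A1, A2):
    print(mu_inf(A))  # both negative => every convex combination is stable
```

Note that this test is only sufficient: a polytope can be robustly stable even when some vertex has a nonnegative measure, which is why LMI-based conditions such as those in the paper are less conservative.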
A well-defined subsample of 128 subadult (3-5 years) polar bears (Ursus maritimus) from 19 sampling years within the period 1984-2006 was investigated for perfluoroalkyl contaminants (PFCs). Linear regression analysis of logarithmic-transformed median concentrations showed significant annual increases for PFOS (4.7%), PFNA (6.1%), PFUnA (5.9%), PFDA (4.3%), PFTrA (8.5%), PFOA (2.3%), and PFDoA (5.2%). For four of the PFCs, a LOESS smoother model provided significantly better descriptions, revealing steeper linear annual increases for PFOSA of 9.2% after 1990 and between 18.6 and 27.4% for PFOS, PFDA, and PFTrA after 2000. Concentrations of ΣPFCs, by 2006, exceeded the concentrations of all conventional OHCs (organohalogen compounds), of which several have been documented to correlate with a number of negative health effects. If the PFC concentrations in polar bears continue to increase with the steepest observed trends, then the no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) detected for rats and monkeys will be exceeded in 2014-2024. In addition, the rapidly increasing concentrations of PFCs are likely to cause cumulative and combined effects on the polar bear, compounding the already detected threats from OHCs.
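The annual percentage increases reported above come from fitting a line to log-transformed concentrations: if ln(c) ≈ a + b·t with t in years, the annual increase is (e^b − 1)·100%. A minimal sketch with synthetic data (not the paper's measurements):

```python
# Annual percent increase from a log-linear trend fit.
# If ln(c_t) ~= a + b * t, the yearly multiplicative change is exp(b),
# i.e. an annual increase of (exp(b) - 1) * 100 percent.
import math

years = [0, 1, 2, 3, 4, 5]
conc  = [10.0, 10.5, 11.0, 11.6, 12.1, 12.8]  # hypothetical median concentrations

y = [math.log(c) for c in conc]
n = len(years)
xbar = sum(years) / n
ybar = sum(y) / n
# Ordinary least-squares slope on the log scale.
b = (sum((x - xbar) * (v - ybar) for x, v in zip(years, y))
     / sum((x - xbar) ** 2 for x in years))

print(f"annual increase: {(math.exp(b) - 1) * 100:.1f}%")
```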
The interference between coherent and squeezed vacuum light effectively produces path-entangled N00N states with very high fidelities. We show that the phase sensitivity of the above interferometric scheme with parity detection saturates the quantum Cramér-Rao bound, which reaches the Heisenberg limit when the coherent and squeezed vacuum light are mixed in roughly equal proportions. For the same interferometric scheme, we draw a detailed comparison between parity detection and a symmetric-logarithmic-derivative-based detection scheme suggested by Ono and Hofmann.
This paper employs an extended Kaya identity as the scheme and utilizes the Logarithmic Mean Divisia Index (LMDI II) as the decomposition technique to analyze CO2 emissions trends in China. The change in CO2 emissions intensity from 1995 to 2010 is decomposed into the effects of industrial structure, energy intensity, energy structure, and carbon emission factors. Results show that changes in energy intensity significantly decrease carbon emissions intensity, whereas changes in industrial structure and energy structure do not reduce it effectively. Policy will need to significantly optimize energy structure and adjust industrial structure if China's emission reduction targets in 2020 are to be reached. This requires a change in China's economic development path and energy consumption path for optimal outcomes.
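LMDI decomposition attributes the change in an aggregate to its multiplicative factors using logarithmic-mean weights. The sketch below uses the simpler LMDI-I additive form on hypothetical Kaya-style data (the paper uses LMDI II, which differs only in the weight function); the key property shown is that the factor effects sum exactly to the total change.

```python
# LMDI-I additive decomposition of C = activity * intensity * factor.
# Effect of factor x: L(C_T, C_0) * ln(x_T / x_0), where L is the
# logarithmic mean. The effects sum exactly to C_T - C_0.
import math

def logmean(a, b):
    """Logarithmic mean of two positive numbers."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

# Hypothetical base-year and target-year factor values.
base   = {"activity": 100.0, "intensity": 0.8, "factor": 2.5}
target = {"activity": 140.0, "intensity": 0.6, "factor": 2.4}

c0 = base["activity"] * base["intensity"] * base["factor"]        # 200.0
cT = target["activity"] * target["intensity"] * target["factor"]  # 201.6

w = logmean(cT, c0)
effects = {k: w * math.log(target[k] / base[k]) for k in base}

print(effects)
print(sum(effects.values()), cT - c0)  # effects sum to the total change
```

Here the falling energy intensity contributes a negative effect even though total emissions rose, which is exactly the kind of attribution the abstract reports for China.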
No, not that W. I won't be drawn into presidential politics here. The W I want to discuss is something else entirely: the Lambert W function, a mathematical contrivance that has been getting a fair amount of attention lately. The buzz began in the world of computer-algebra systems such as Macsyma, Maple and Mathematica, but word of W has also been spreading through journal articles, preprints, conference presentations and Internet news groups. The W function even has its own poster (see http://***/LambertW). The concept at the root of W can be traced back through more than two centuries of the mathematical literature, but the function itself has had a name only for the past 10 years or so. (A few years longer if you count a name used within the Maple software but otherwise unpublished.) When it comes to mathematical objects, it turns out that names are more important than you might guess.
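The Lambert W function is defined implicitly as the solution of w·e^w = x; it has no closed form in elementary functions, which is why the computer-algebra systems mentioned above compute it iteratively. A minimal sketch of the principal branch for x ≥ 0 via Halley's iteration (one standard approach; the starting guess is an assumption, not a tuned choice):

```python
# Principal branch of the Lambert W function: solve w * exp(w) = x
# for x >= 0 using Halley's iteration.
import math

def lambert_w(x, tol=1e-12):
    """Return w with w * exp(w) == x, for x >= 0."""
    w = math.log1p(x)  # cheap starting guess, exact at x = 0
    for _ in range(100):
        ew = math.exp(w)
        f = w * ew - x
        # Halley step for f(w) = w * exp(w) - x.
        step = f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w(1.0)         # the "omega constant", roughly 0.5671
print(w, w * math.exp(w))  # w * e^w recovers 1.0
```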
The positional isomers of monounsaturated long-chain fatty compounds containing allylic hydroxy groups (shown for trans double bonds) are distinguished by H-1-NMR spectroscopy through the chemical shift differences of the olefinic protons. These differences are expressed as rational functions (the differences being proportional to the negative third power of the position of the unsaturation) or logarithmic functions (the differences being proportional to the position of the unsaturation raised to a power). The present results on more sensitive H-1-NMR spectra complement previous work on the C-13-NMR spectra of these compounds. Theoretical models for explaining shifts in C-13-NMR therefore also apply to H-1-NMR.
The photocurrent generated in single-walled carbon nanotube bundles upon camera flash illumination has been studied under different ambient pressures and light intensities. The results show that the intensity of the photocurrent depends closely on the ambient pressure and light intensity. As the ambient pressure is reduced, the photocurrent exhibits logarithmic growth behaviour. Meanwhile, the photocurrent increases with increasing light intensity. In this work, a dynamic model is employed to unveil the origins of the observed photocurrent. The observed photocarrier lifetime (~10 ms) is much shorter than the time required for gas molecular desorption or photodesorption (seconds or longer). Our results are consistent with the model of Schottky barriers being responsible for photocurrent generation.
In this article, we consider the proximal point method with Bregman distance applied to linear programming problems, and study the dual sequence obtained from the optimal multipliers of the linear constraints of each subproblem. We establish the convergence of this dual sequence, as well as convergence rate results for the primal sequence, for a suitable family of Bregman distances. These results are obtained by studying first the limiting behavior of a certain perturbed dual path and then the behavior of the dual and primal paths.
We apply the exponential weight algorithm, introduced by Littlestone and Warmuth [26] and by Vovk [35], to the problem of predicting a binary sequence almost as well as the best biased coin. We first show that for the case of the logarithmic loss, the derived algorithm is equivalent to the Bayes algorithm with the Jeffreys prior, which was studied by Xie and Barron [38] under probabilistic assumptions. We derive a uniform bound on the regret which holds for any sequence. We also show that if the empirical distribution of the sequence is bounded away from 0 and from 1, then, as the length of the sequence increases to infinity, the difference between this bound and a corresponding bound on the average-case regret of the same algorithm (which is asymptotically optimal in that case) is only 1/2. We show that this gap of 1/2 is necessary by calculating the regret of the min-max optimal algorithm for this problem and showing that the asymptotic upper bound is tight. We also study the application of this algorithm to the square loss and show that the algorithm derived in this case is different from the Bayes algorithm and is better than it for worst-case prediction. (C) 2003 Elsevier Science (USA).
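The Bayes algorithm with the Jeffreys prior for a Bernoulli source is the Krichevsky-Trofimov predictor: after seeing t bits of which `ones` were 1, predict P(next = 1) = (ones + 1/2) / (t + 1). A minimal sketch (illustrative sequence, not from the paper) comparing its cumulative log loss to the best biased coin chosen in hindsight; the gap is the regret, which stays near (1/2)·ln(n) plus a constant:

```python
# Krichevsky-Trofimov (Jeffreys-prior Bayes) predictor for binary
# sequences under logarithmic loss, and its regret against the best
# fixed biased coin in hindsight.
import math

def kt_logloss(seq):
    """Cumulative log loss (nats) of the KT predictor on a 0/1 sequence."""
    loss, ones = 0.0, 0
    for t, bit in enumerate(seq):
        p1 = (ones + 0.5) / (t + 1)        # Jeffreys-prior predictive probability
        loss -= math.log(p1 if bit == 1 else 1.0 - p1)
        ones += bit
    return loss

def best_coin_logloss(seq):
    """Log loss of the best fixed biased coin chosen in hindsight."""
    n, k = len(seq), sum(seq)
    if k in (0, n):
        return 0.0
    p = k / n
    return -(k * math.log(p) + (n - k) * math.log(1.0 - p))

seq = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
regret = kt_logloss(seq) - best_coin_logloss(seq)
print(regret)  # on the order of 0.5 * ln(len(seq)) plus a constant
```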