One commonly used technique for tackling nonconvex optimization problems in which all nonlinear terms are univariate is piecewise linear approximation, by which the nonlinear terms are reformulated. The performance of the linearization technique depends primarily on the numbers of variables and constraints required to formulate a piecewise linear function. The state-of-the-art linearization method introduces $2\lceil \log_2 m \rceil$ inequality constraints, where $m$ is the number of line segments in the constructed piecewise linear function. This study proposes an effective alternative logarithmic scheme that incurs no inequality constraints. The price of requiring more continuous variables than the state-of-the-art method is more than offset by the inclusion of a system of equality constraints in canonical form and the absence of any inequality constraint. Our numerical experiments demonstrate that the developed scheme is computationally superior, and that its advantage grows with $m$.
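To make the setting concrete, the sketch below builds an $m$-segment piecewise linear approximation of a univariate nonlinear term and reports the $\lceil \log_2 m \rceil$ binary variables a logarithmic MIP encoding of the segment choice would require. The example function, interval and breakpoint grid are assumptions for illustration, not the formulation proposed in the study.

```python
# Hypothetical illustration: piecewise linear approximation of a univariate
# nonlinear term f(x) = x*log(x); a logarithmic MIP formulation would encode
# the choice among m segments with ceil(log2(m)) binary variables.
import math
import numpy as np

def build_pwl(f, lo, hi, m):
    """Return breakpoints (xs, ys) of an m-segment piecewise linear approximation."""
    xs = np.linspace(lo, hi, m + 1)          # m segments -> m + 1 breakpoints
    ys = np.array([f(x) for x in xs])
    return xs, ys

def eval_pwl(xs, ys, x):
    """Evaluate the piecewise linear interpolant at x (convex-combination form)."""
    i = np.searchsorted(xs, x, side="right") - 1
    i = min(max(i, 0), len(xs) - 2)           # clamp to a valid segment
    lam = (x - xs[i]) / (xs[i + 1] - xs[i])   # weight of the right breakpoint
    return (1 - lam) * ys[i] + lam * ys[i + 1]

f = lambda x: x * math.log(x)                 # assumed example nonlinear term
m = 16
xs, ys = build_pwl(f, 1.0, 10.0, m)
print("binary variables in a logarithmic MIP encoding:", math.ceil(math.log2(m)))
print("f(4.3) =", f(4.3), " PWL approx =", eval_pwl(xs, ys, 4.3))
```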
The massive amount of data and large variety of data distributions in the big data era call for access methods that are efficient in both query processing and index management, and over both practical and worst-case workloads. To address this need, we revisit two classic multidimensional access methods: the R-tree and the space-filling curve. We propose a novel R-tree packing strategy based on space-filling curves. This strategy produces R-trees with an asymptotically optimal I/O complexity for window queries in the worst case. Experiments show that our R-trees are highly efficient in querying both real and synthetic data of different distributions. The proposed strategy is also simple to parallelize, since it relies only on sorting. We propose a parallel algorithm for R-tree bulk-loading based on the proposed packing strategy and analyze its performance under the massively parallel communication model. To handle dynamic data updates, we further propose index update algorithms that process data insertions and deletions without compromising the optimal query I/O complexity. Experimental results confirm the effectiveness and efficiency of the proposed R-tree bulk-loading and updating algorithms over large data sets.
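The following is a minimal sketch of the general idea of packing R-tree leaves by sorting on a space-filling-curve key; it uses Morton (Z-order) interleaving purely as an illustrative curve and is not the paper's specific packing strategy or I/O analysis.

```python
# Illustrative sketch (not the paper's exact strategy): bulk-load R-tree leaves
# by sorting points on a space-filling-curve key and packing them in order.
from typing import List, Tuple

Point = Tuple[int, int]

def z_order_key(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y to obtain a Morton (Z-order) key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def pack_leaves(points: List[Point], capacity: int) -> List[dict]:
    """Sort points along the curve and cut them into full leaves (one MBR per leaf)."""
    ordered = sorted(points, key=lambda p: z_order_key(*p))
    leaves = []
    for i in range(0, len(ordered), capacity):
        chunk = ordered[i:i + capacity]
        xs, ys = zip(*chunk)
        leaves.append({"mbr": (min(xs), min(ys), max(xs), max(ys)),
                       "entries": chunk})
    return leaves

# Hypothetical usage with a small synthetic data set.
pts = [(5, 9), (2, 3), (7, 1), (6, 6), (0, 8), (4, 4), (9, 2), (3, 7)]
for leaf in pack_leaves(pts, capacity=3):
    print(leaf["mbr"], leaf["entries"])
```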
Let $f$ be a real-valued function continuous on $[1, \infty)$ and let $s(x) = \int_1^x f(t)\,dt$. If the improper integral $\int_1^\infty f(t)\,dt = s$ exists, then the limit $\lim_{x \to \infty} \frac{1}{\log x} \int_1^x \frac{s(t)}{t}\,dt = s$ (*) also exists. The converse implication, however, is not always true; it can be recovered by imposing additional conditions on $s(x)$. In this paper, we prove theorems by which convergence of the improper integral can be retrieved from the existence of the limit (*). We also present the logarithmic method of integrability of order 2 and prove Abel- and Tauberian-type theorems for this method.
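As a numerical illustration of the Abelian direction of (*), take $f(t) = 1/t^2$ (an assumed example, not from the paper): the improper integral equals 1, and the logarithmic average of $s(t)$ approaches the same value.

```python
# Numerical illustration (not from the paper): for f(t) = 1/t^2 the improper
# integral equals 1, and the logarithmic average (*) of s(t) converges to the
# same value, consistent with the Abelian direction of the result.
import math
from scipy.integrate import quad

f = lambda t: 1.0 / t**2
s = lambda x: quad(f, 1.0, x)[0]          # s(x) = integral_1^x f(t) dt

def log_mean(x: float) -> float:
    """(1/log x) * integral_1^x s(t)/t dt, the quantity in (*)."""
    val, _ = quad(lambda t: s(t) / t, 1.0, x)
    return val / math.log(x)

for x in (10.0, 100.0, 1000.0):
    print(f"x = {x:7.0f}   s(x) = {s(x):.6f}   log-mean = {log_mean(x):.6f}")
# Both columns approach the limit s = 1 as x grows.
```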
Background: COVID-19 is the most informative pandemic in history. These unprecedented recorded data give rise to some novel concepts, discussions and models. Macroscopic modeling of the period of hospitalization is one of these new issues. Methods: Modeling of the lag between diagnosis and death is carried out using two classes of macroscopic analytical methods: correlation-based methods built on the Pearson, Spearman and Kendall correlation coefficients, and two types of logarithmic methods. In addition, eight weighted-average methods are applied to smooth the time series before calculating the distance, and the five lags with the smallest distance are considered. All computations are performed in MATLAB R2015b. Results: The lengths of hospitalization for fatal cases in the USA, Italy and Germany are 2-10, 1-6 and 5-19 days, respectively. Overall, this length in the USA is 2 days longer than in Italy and 5 days shorter than in Germany. Conclusion: We take the distance between diagnosis and death as the length of hospitalization. There is a negative association between the length of hospitalization and the case fatality rate. Therefore, estimating the length of hospitalization with these macroscopic mathematical methods can serve as an indicator of how successfully a country is fighting the ongoing pandemic.
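A minimal sketch of a correlation-based lag estimate of the kind described, assuming synthetic case and death series and a simple moving-average smoother; it is not the study's exact procedure or data.

```python
# Illustrative sketch (not the study's exact procedure): estimate the
# diagnosis-to-death lag as the shift that maximizes the Pearson correlation
# between the daily-cases series and the back-shifted daily-deaths series.
import numpy as np

def best_lag(cases: np.ndarray, deaths: np.ndarray, max_lag: int = 30) -> int:
    scores = []
    for k in range(1, max_lag + 1):
        c, d = cases[:-k], deaths[k:]           # deaths shifted back by k days
        scores.append((np.corrcoef(c, d)[0, 1], k))
    return max(scores)[1]

# Synthetic example: deaths follow cases with a true lag of 7 days plus noise.
rng = np.random.default_rng(0)
t = np.arange(200)
cases = 1000 * np.exp(-((t - 80) / 30.0) ** 2) + rng.normal(0, 5, t.size)
deaths = 0.02 * np.roll(cases, 7) + rng.normal(0, 1, t.size)
smooth = lambda x, w=7: np.convolve(x, np.ones(w) / w, mode="valid")  # moving average
print("estimated lag (days):", best_lag(smooth(cases), smooth(deaths)))
```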
In this paper we find the solutions $(n, m, a)$ of the Diophantine equation $L_n - L_m = 2 \cdot 3^a$, where $L_n$ and $L_m$ are Lucas numbers with $a \ge 0$ and $n > m \ge 0$. To prove our theorem, we use lower bounds for linear forms in logarithms and the Baker-Davenport reduction method in Diophantine approximation.
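A bounded brute-force search such as the sketch below can list small solutions of the equation, but it cannot certify completeness; that is precisely what the lower bounds for linear forms in logarithms and the Baker-Davenport reduction provide. The search range is an assumption.

```python
# Illustrative brute-force search (bounded range only): list small solutions of
# L_n - L_m = 2 * 3^a with n > m >= 0 and a >= 0.  A finite search cannot prove
# completeness; bounding the exponents requires the linear-forms-in-logarithms
# machinery used in the paper.
def lucas(limit: int):
    seq = [2, 1]                              # L_0 = 2, L_1 = 1
    while len(seq) < limit:
        seq.append(seq[-1] + seq[-2])
    return seq

def is_two_times_power_of_three(x: int) -> bool:
    if x <= 0 or x % 2:
        return False
    x //= 2
    while x % 3 == 0:
        x //= 3
    return x == 1

L = lucas(60)
for n in range(1, 60):
    for m in range(n):
        if is_two_times_power_of_three(L[n] - L[m]):
            d, a = (L[n] - L[m]) // 2, 0
            while d > 1:
                d //= 3
                a += 1
            print(f"L_{n} - L_{m} = 2 * 3^{a}")
```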
In this paper, we find the non-negative integer solutions $(n, m, a)$ of the Diophantine equation $F_n - F_m = 3^a$, where $F_n$ and $F_m$ are Fibonacci numbers. To prove our theorem, we use lower bounds for linear forms in logarithms.
ISBN (print): 9781509066810
A new method to improve the filter characteristics and to control the response of the LPF using a Cross defected ground structure (CDGS) and the logarithmic method is proposed in this work. The DGS topologies etched in the ground plane, together with their coupling, provide slow-wave characteristics [1, 2]. To control the operating frequency range of the filter and its response, the Chebyshev rule, a first-order suspended-layer technique and the logarithmic method are employed. The proposed filters are partially measured, simulated, optimized and compared. The lowpass filters are designed at cutoff frequencies of 2.0 GHz, 3 GHz and 4.5 GHz, which makes them suitable for GSM900, radar and wireless local area network applications. The proposed filters exhibit low insertion loss and high return loss in the passband, a wide stopband, and very sharp roll-off near the passband edge. The proposed topologies are designed, optimized and simulated on an RO4003 substrate using AWR software. The measurements show good agreement with the simulations; the minor discrepancies can be attributed to manufacturing tolerances and reflections from the SMA connectors.
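For context, the sketch below computes the standard Chebyshev lowpass prototype values and scales them to L/C elements, the usual starting point for such designs; the order, ripple, cutoff frequency and impedance are assumptions, and this is not the authors' DGS-specific procedure.

```python
# Hedged sketch: standard Chebyshev lowpass prototype g-values and their
# scaling to L/C element values.  Order, ripple, cutoff and Z0 are assumed
# for illustration and are not taken from the paper.
import math

def chebyshev_g(n: int, ripple_db: float):
    """Return prototype values g_1..g_{n+1} for an n-th order Chebyshev LPF."""
    beta = math.log(1.0 / math.tanh(ripple_db / 17.37))
    gamma = math.sinh(beta / (2 * n))
    a = [math.sin((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    b = [gamma**2 + math.sin(k * math.pi / n) ** 2 for k in range(1, n + 1)]
    g = [2 * a[0] / gamma]
    for k in range(2, n + 1):
        g.append(4 * a[k - 2] * a[k - 1] / (b[k - 2] * g[-1]))
    g.append(1.0 if n % 2 else 1.0 / math.tanh(beta / 4) ** 2)
    return g

# Example: 5th-order, 0.1 dB ripple, f_c = 2.0 GHz, Z0 = 50 ohm.
n, ripple, fc, z0 = 5, 0.1, 2.0e9, 50.0
g = chebyshev_g(n, ripple)
wc = 2 * math.pi * fc
for k, gk in enumerate(g[:n], start=1):
    if k % 2:   # odd elements taken as series inductors
        print(f"g{k} = {gk:.4f}  ->  L = {gk * z0 / wc * 1e9:.3f} nH")
    else:       # even elements taken as shunt capacitors
        print(f"g{k} = {gk:.4f}  ->  C = {gk / (z0 * wc) * 1e12:.3f} pF")
```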
Purpose of the article: The article strives to identify quantitative factors that positively or negatively influence the performance of enterprises and thus also their value (the so-called value drivers). The research was performed on processing-industry enterprises in the Czech Republic, which contribute about 25 per cent of the gross value added in the CR.
Methodology/methods: The principal research method is the logarithmic decomposition of the ROE. The analysis used data from processing-industry enterprises from the years 2007–2011, aggregated for the purposes of the calculation. Because the Grubbs test confirmed the existence of extreme outliers in the sample, a 5% Winsorized mean was used in the aggregation. This approach makes it possible to identify the most important factors responsible for changes in enterprise performance.
Scientific aim: The aim of the article is to identify value drivers of processing-industry enterprises in the CR, i.e. factors that are responsible for the growth or decline in enterprise value. In view of the positive correlation between the value of an enterprise and its return on equity, the return on equity decomposition was used in the analysis.
Findings: The results show that the performance of industrial enterprises in the CR decreased as early as 2008, the reason being lower profit rates (measured by operating margin). The decrease in profit rates was also evident in the negative effect of the debt load on the decrease in the ROE between 2008 and 2009, in spite of decreasing interest rates. This leads to the unambiguous conclusion that industrial companies failed to adjust their cost structure to the decrease in demand, which caused a marked decline in their performance.
Conclusions: The research results identified the decisive factors influencing the change in the performance of enterprises in the years 2007 to 2011, which can be considered positive or negative value drivers.
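A minimal sketch of logarithmic ROE decomposition in its generic textbook form, with hypothetical aggregated ratios; the article's actual factor set, aggregation and data differ.

```python
# Hedged sketch of the logarithmic decomposition of ROE (generic textbook form,
# not the article's exact factor set or data).  ROE is written as the product
# tax burden * interest burden * operating margin * asset turnover * leverage,
# and the change in ROE is attributed to each factor in proportion to the
# logarithm of that factor's index between the two years.
import math

def log_decompose(factors_t0: dict, factors_t1: dict) -> dict:
    roe0 = math.prod(factors_t0.values())
    roe1 = math.prod(factors_t1.values())
    d_roe = roe1 - roe0
    scale = d_roe / math.log(roe1 / roe0)       # assumes roe1 != roe0, both > 0
    return {k: scale * math.log(factors_t1[k] / factors_t0[k]) for k in factors_t0}

# Hypothetical aggregated ratios for two consecutive years.
y2007 = {"EAT/EBT": 0.80, "EBT/EBIT": 0.90, "EBIT/Sales": 0.10,
         "Sales/Assets": 1.20, "Assets/Equity": 2.00}
y2008 = {"EAT/EBT": 0.79, "EBT/EBIT": 0.85, "EBIT/Sales": 0.07,
         "Sales/Assets": 1.15, "Assets/Equity": 2.10}
influences = log_decompose(y2007, y2008)
for name, inf in influences.items():
    print(f"{name:14s} contribution to change in ROE: {inf:+.4f}")
print("sum of contributions:", round(sum(influences.values()), 6),
      "= total change:", round(math.prod(y2008.values()) - math.prod(y2007.values()), 6))
```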