Authors:
Miao, Yu; Yin, Qing
Henan Normal Univ, Coll Math & Informat Sci, Xinxiang 453007, Henan Province, Peoples R China
Henan Normal Univ, Henan Engn Lab Big Data Stat Anal & Optimal Control, Xinxiang 453007, Henan Province, Peoples R China
In this paper, we consider the linear autoregressive model with varying coefficient theta_n tending to the unit root. Cramér's moderate deviations for the least-squares estimator of the parameter theta_n are discussed.
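The setting of this abstract can be sketched numerically. The following is a minimal illustration of an AR(1) process whose coefficient approaches the unit root, together with its least-squares estimator; the parameterisation theta_n = 1 - c/n and the constant c = 2.0 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ls_estimate_ar1(n, c=2.0, seed=0):
    """Simulate X_t = theta_n * X_{t-1} + eps_t and return (theta_n, LS estimate)."""
    rng = np.random.default_rng(seed)
    theta_n = 1.0 - c / n                 # coefficient tending to the unit root
    x = np.zeros(n + 1)
    for t in range(1, n + 1):
        x[t] = theta_n * x[t - 1] + rng.standard_normal()
    # Least squares: theta_hat = sum x_t x_{t-1} / sum x_{t-1}^2
    theta_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    return theta_n, theta_hat

theta_n, theta_hat = ls_estimate_ar1(5000)
print(theta_n, theta_hat)
```

Moderate-deviation results of the kind discussed here quantify how fast the probability of a large gap between theta_hat and theta_n decays as n grows.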
This article forecasts consumer price index (CPI) inflation in the United Kingdom using random generalised network autoregressive (RaGNAR) processes. More specifically, we fit generalised network autoregressive (GNAR)...
We introduce two general classes of reflected autoregressive processes, INGAR(+) and GAR(+). Here, INGAR(+) can be seen as the counterpart of INAR(1) with general thinning and reflection being imposed to keep the process non-negative; GAR(+) relates to AR(1) in an analogous manner. The two processes INGAR(+) and GAR(+) are shown to be connected via a duality relation. We proceed by presenting a detailed analysis of the time-dependent and stationary behavior of the INGAR(+) process, and then exploit the duality relation to obtain the time-dependent and stationary behavior of the GAR(+) process.
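A toy simulation in the spirit of the reflected integer-valued process described above: binomial thinning of the previous value plus an integer innovation that may be negative, with reflection at zero keeping the process non-negative. The thinning probability, innovation law, and initial value are our own illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def simulate_reflected_inar(n, alpha=0.6, seed=1):
    """Reflected INAR(1)-type path: X_t = max(0, alpha o X_{t-1} + eps_t)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=int)
    x[0] = 5
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning alpha o X_{t-1}
        innovation = rng.poisson(2) - 2            # centred innovation, may be negative
        x[t] = max(0, survivors + innovation)      # reflection at zero
    return x

path = simulate_reflected_inar(1000)
print(path.min(), path[:5])
```

Without the reflection step the recursion could leave the non-negative integers, which is exactly what the (+) construction rules out.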
ISBN:
(print) 9781509066315
In this paper, we study finite-sample properties of the least squares estimator in first order autoregressive processes. By leveraging a result from decoupling theory, we derive upper bounds on the probability that the estimate deviates by at least a positive epsilon from its true value. Our results consider both stable and unstable processes. Afterwards, we obtain problem-dependent non-asymptotic bounds on the variance of this estimator, valid for sample sizes greater than or equal to seven. Via simulations we analyze the conservatism of our bounds, and show that they reliably capture the true behavior of the quantities of interest.
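The quantity bounded in this abstract, the probability that the least-squares estimate deviates from the true value by at least epsilon, can be approximated by Monte Carlo. The choices of theta, sample size, and epsilon below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def deviation_probability(theta=0.5, n=100, eps=0.1, trials=2000, seed=2):
    """Monte Carlo estimate of P(|theta_hat - theta| >= eps) for a stable AR(1)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x = np.zeros(n + 1)
        for t in range(1, n + 1):
            x[t] = theta * x[t - 1] + rng.standard_normal()
        theta_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
        hits += abs(theta_hat - theta) >= eps
    return hits / trials

p = deviation_probability()
print(p)
```

Upper bounds of the kind derived in the paper dominate this empirical frequency uniformly over the stated sample sizes, for both stable and unstable coefficient values.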
In this paper, we study non-asymptotic deviation bounds of the least squares estimator for Gaussian AR(n) processes. By relying on martingale concentration inequalities and a tail bound for χ²-distributed variables, we provide a concentration bound for the sample covariance matrix of the process output. With this, we present a problem-dependent finite-time bound on the deviation probability of any fixed linear combination of the estimated parameters of the AR(n) process. We discuss extensions and limitations of our approach.
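The object studied here, the deviation of a fixed linear combination of the estimated AR(n) parameters, can be illustrated for n = 2. The coefficients and the combination vector c below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
theta = np.array([0.5, -0.3])          # a stable AR(2)
n = 5000
x = np.zeros(n + 2)
for t in range(2, n + 2):
    x[t] = theta[0] * x[t - 1] + theta[1] * x[t - 2] + rng.standard_normal()

# Regressors Phi_t = (x_{t-1}, x_{t-2}); least-squares estimate of theta
Phi = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
theta_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]

c = np.array([1.0, -1.0])              # fixed linear combination of interest
print(abs(c @ (theta_hat - theta)))
```

The finite-time bounds in the paper control the probability that this scalar deviation exceeds a given threshold, with constants depending on the problem through the covariance of the regressors Phi_t.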
ISBN:
(digital) 9781728157511
ISBN:
(print) 9781728157528
Bubble Entropy is a recently proposed entropy metric that is almost free of parameters. To date, only an operational formulation of Bubble Entropy was available. In this paper, we derive analytical formulations of Bubble Entropy for embedding dimensions m ≤ 3 when the time series is generated by a stationary Gaussian autoregressive process. These formulations match the estimates obtained using Monte Carlo simulations. The study allows further investigation of the mathematical properties of Bubble Entropy on stationary autoregressive processes.
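A sketch of the operational (swap-counting) formulation referred to above: embed the series in dimension m, count the bubble-sort swaps needed to order each embedded vector, take a Rényi-2 entropy of the swap-count distribution at dimensions m and m+1, and normalise. The normalisation follows the commonly cited definition of Bubble Entropy; treat the details as our reading, not the paper's exact text.

```python
import numpy as np

def swap_counts(x, m):
    """Bubble-sort swap count for each length-m embedding vector of x."""
    counts = []
    for i in range(len(x) - m + 1):
        v = list(x[i:i + m])
        swaps = 0
        for a in range(m):                # bubble sort, counting swaps
            for b in range(m - 1 - a):
                if v[b] > v[b + 1]:
                    v[b], v[b + 1] = v[b + 1], v[b]
                    swaps += 1
        counts.append(swaps)
    return np.array(counts)

def renyi2(counts):
    """Renyi entropy of order 2 of the empirical swap-count distribution."""
    _, freq = np.unique(counts, return_counts=True)
    p = freq / freq.sum()
    return -np.log(np.sum(p ** 2))

def bubble_entropy(x, m=3):
    h_m = renyi2(swap_counts(x, m))
    h_m1 = renyi2(swap_counts(x, m + 1))
    return (h_m1 - h_m) / np.log((m + 1) / (m - 1))

# Stationary Gaussian AR(1) surrogate of the processes studied in the paper
rng = np.random.default_rng(3)
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
b = bubble_entropy(x, m=3)
print(round(b, 3))
```

The analytical formulations derived in the paper give closed-form counterparts of this estimate for small m, against which such Monte Carlo values can be checked.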
Jeffrey's divergence (JD), which is the symmetric version of the Kullback-Leibler divergence, has been used in a wide range of applications, from change detection to clutter homogeneity analysis in radar processing. It has been calculated between the joint probability density functions of successive values of autoregressive (AR) processes. In this case, the JD is a linear function of the number of variates considered. Knowing the derivative of the JD with respect to the number of variates is hence enough to compare noise-free AR processes. However, the processes can be disturbed by additive uncorrelated white noises. In this paper, we suggest comparing two noisy first-order AR processes. For this purpose, the JD is expressed from the JD between noise-free AR processes and the bias the noises induce. After a transient period, the derivative of this bias with respect to the variate number becomes constant, as does the derivative of the JD. The resulting asymptotic JD increment is then used to compare noisy AR processes. Some examples illustrate this theoretical analysis. (C) 2018 Elsevier B.V. All rights reserved.
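The noise-free building block of this abstract can be computed directly: the JD between the joint distributions of k successive values of two zero-mean Gaussian AR(1) processes, using the standard Gaussian KL formula and the AR(1) covariance s2/(1-a^2) * a^|i-j|. The coefficients below are illustrative; the noisy case treated in the paper adds a bias term on top of this.

```python
import numpy as np

def ar1_cov(a, s2, k):
    """Covariance matrix of k successive values of a stationary Gaussian AR(1)."""
    idx = np.arange(k)
    return (s2 / (1.0 - a ** 2)) * a ** np.abs(idx[:, None] - idx[None, :])

def kl_gauss(S1, S2):
    """KL divergence between zero-mean Gaussians N(0, S1) and N(0, S2)."""
    k = S1.shape[0]
    S2inv = np.linalg.inv(S2)
    return 0.5 * (np.trace(S2inv @ S1) - k
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

def jeffreys(a1, a2, s2=1.0, k=20):
    S1, S2 = ar1_cov(a1, s2, k), ar1_cov(a2, s2, k)
    return kl_gauss(S1, S2) + kl_gauss(S2, S1)

# The JD grows roughly linearly in the variate number k, as stated above:
jd_20, jd_40 = jeffreys(0.3, 0.7, k=20), jeffreys(0.3, 0.7, k=40)
print(jd_20, jd_40)
```

The asymptotic increment (jd_40 - jd_20)/20 approximates the constant derivative of the JD with respect to the variate number that the paper uses as a comparison criterion.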
ISBN:
(digital) 9781728113982
ISBN:
(print) 9781728113999
The characterizations of nonanticipative rate distortion function (NRDF) on a finite horizon are generalized to nonstationary multivariate Gaussian order L autoregressive, AR(L), source processes, with respect to mean square error (MSE) distortion functions. It is shown that the optimal reproduction distributions are induced by a reproduction process, which is a linear function of the state of the source, its best mean-square error estimate, and a Gaussian random process.
ISBN:
(print) 9781538654286
High-dimensional time series data exist in numerous areas such as finance, genomics, healthcare, and neuroscience. An unavoidable aspect of all such datasets is missing data, and dealing with this issue has been an important focus in statistics, control, and machine learning. In this work, we consider a high-dimensional estimation problem where a dynamical system, governed by a stable vector autoregressive model, is randomly and only partially observed at each time point. Our task amounts to estimating the transition matrix, which is assumed to be sparse. In such a scenario, where covariates are highly interdependent and partially missing, new theoretical challenges arise. While transition matrix estimation in vector autoregressive models has been studied previously, the missing data scenario requires separate efforts. Moreover, while transition matrix estimation can be studied from a high dimensional sparse linear regression perspective, the covariates are highly dependent and existing results on regularized estimation with missing data from i.i.d. covariates are not applicable. At the heart of our analysis lie 1) a novel concentration result when the innovation noise satisfies the convex concentration property, and 2) a new quantity for characterizing the interactions of the time-varying observation process with the underlying dynamical system.
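A toy version of this setting: a stable VAR(1) with a sparse transition matrix, observed through an i.i.d. Bernoulli mask so that each coordinate is seen with probability p. The rescaled least-squares step below is a naive illustration of the estimation problem (unbiasedness correction for the masking), not the paper's regularized estimator; all dimensions and constants are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, p = 5, 4000, 0.8
A = np.diag(np.full(d, 0.4))           # sparse, stable transition matrix
A[0, 1] = 0.3

X = np.zeros((n + 1, d))               # latent VAR(1): X_t = A X_{t-1} + eps_t
for t in range(1, n + 1):
    X[t] = A @ X[t - 1] + 0.5 * rng.standard_normal(d)

mask = rng.random((n + 1, d)) < p      # partial observation at each time point
Y = X * mask                           # zero-filled observed data

# Masking correction: E[Y_t Y_{t-1}^T] = p^2 E[X_t X_{t-1}^T]; same-time
# off-diagonals scale by p^2 but diagonals only by p.
S01 = (Y[1:].T @ Y[:-1]) / n / p**2
S00 = (Y[:-1].T @ Y[:-1]) / n / p**2
S00[np.diag_indices(d)] *= p
A_hat = S01 @ np.linalg.inv(S00)
print(np.abs(A_hat - A).max())
```

With highly dependent covariates and missing entries, controlling the error of such moment estimates is exactly where the paper's concentration results come in.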
ISBN:
(print) 9781728112954
The task of similarity identification is to identify items in a database which are similar to a given query item for a given metric. The identification rate of a compression scheme characterizes the minimum rate that can be achieved which guarantees reliable answers with respect to a given similarity threshold [1]. In this paper, we study a prediction-based quadratic similarity identification for autoregressive processes. We use an ideal linear predictor to remove linear dependencies in autoregressive processes. The similarity identification is conducted on the residuals. We show that the relation between the distortion of query and database processes and the distortion of their residuals is characterized by a sequence of eigenvalues. We derive the identification rate of our prediction-based approach for autoregressive Gaussian processes. We characterize the identification rate for the special case where only the smallest value in the sequence of eigenvalues is required to be known and derive its analytical upper bound by approximating a sequence of matrices with a sequence of Toeplitz matrices.