The degree of the Grassmannian with respect to the Plücker embedding is well-known. However, the Plücker embedding, while ubiquitous in pure mathematics, is almost never used in applied mathematics. In appli...
Interpreting scattered acoustic and electromagnetic wave patterns is a computational task that enables remote imaging in a number of important applications, including medical imaging, geophysical exploration, sonar an...
Various methods can obtain certified estimates for roots of polynomials. Many applications in science and engineering additionally utilize the value of functions evaluated at roots. For example, critical values are ob...
Let G be a simple graph and let L(G) denote the line graph of G. A p-independent set in G is a set of vertices S ⊆ V (G) such that the subgraph induced by S has maximum degree at most p. The p-independence number of G...
Fixed-point fast sweeping methods are a class of explicit iterative methods developed in the literature to efficiently solve steady state solutions of hyperbolic partial differential equations (PDEs). As other types o...
Feature learning is thought to be one of the fundamental reasons for the success of deep neural networks. It is rigorously known that in two-layer fully-connected neural networks under certain conditions, one step of gradient descent on the first layer can lead to feature learning, characterized by the appearance of a separated rank-one component, a "spike," in the spectrum of the feature matrix. However, with a constant gradient descent step size, this spike only carries information from the linear component of the target function, and therefore learning non-linear components is impossible. We show that with a learning rate that grows with the sample size, such training in fact introduces multiple rank-one components, each corresponding to a specific polynomial feature. We further prove that the limiting large-dimensional and large-sample training and test errors of the updated neural networks are fully characterized by these spikes. By precisely analyzing the improvement in the training and test errors, we demonstrate that these non-linear features can enhance learning.
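The spiked-spectrum phenomenon this abstract describes can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption rather than the paper's setting: the dimensions, the ReLU activation, the quadratic target, and the simple choice of a learning rate that grows with the sample size (η = n). The point is only that one large-step gradient update on the first layer injects outlier singular values into an otherwise bulk-distributed weight spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 100, 150              # samples, input dim, hidden width (illustrative)

X = rng.standard_normal((n, d)) / np.sqrt(d)
beta = rng.standard_normal(d)
y = X @ beta + 0.5 * (X @ beta) ** 2  # target with linear and non-linear parts

W = rng.standard_normal((d, m)) / np.sqrt(d)   # first-layer weights
a = rng.standard_normal(m) / np.sqrt(m)        # second layer, held fixed

# one gradient step on W for the squared loss (1/2n) * ||relu(X W) a - y||^2
H = np.maximum(X @ W, 0.0)
resid = H @ a - y
grad = X.T @ ((resid[:, None] * (X @ W > 0)) * a[None, :]) / n

eta = float(n)                        # learning rate growing with sample size
W1 = W - eta * grad

s0 = np.linalg.svd(W, compute_uv=False)
s1 = np.linalg.svd(W1, compute_uv=False)
print("top singular values before the step:", s0[:3])
print("top singular values after the step :", s1[:3])
# after the large step, separated outlier singular values ("spikes") appear
```

With a constant-order learning rate the update would be absorbed into the bulk of the spectrum; the growing step size is what makes the low-rank components stand out.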
Let $G$ be a simple graph, and let $p$, $q$, and $r$ be non-negative integers. A \emph{$p$-independent} set in $G$ is a set of vertices $S \subseteq V(G)$ such that the subgraph induced by $S$ has maximum degree at mo...
In scientific applications, linear systems are typically well-posed, and yet the coefficient matrices may be nearly singular in that the condition number κ(A) may be close to 1/εw, where εw denotes the unit roundoff of the working precision. Accurate and efficient solutions to such systems pose daunting challenges. It is well known that iterative refinement (IR) can make the forward error independent of κ(A) if κ(A) is sufficiently smaller than 1/εw and the residual is computed in higher precision. Recently, Carson and Higham [SISC, 39(6), 2017] proposed a variant of IR called GMRES-IR, which replaced the triangular solves in IR with left-preconditioned GMRES using the LU factorization of A as the preconditioner. GMRES-IR relaxed the requirement on κ(A) in IR, but it requires triangular solves to be evaluated in higher precision, complicating its application to large-scale sparse systems. We propose a new iterative method, called Forward-and-Backward Stabilized Minimal Residual or FBSMR, by conceptually hybridizing right-preconditioned GMRES (RP-GMRES) with quasi-minimization. We develop FBSMR based on a new theoretical framework of essential-forward-and-backward stability (EFBS), which extends the backward error analysis to consider the intrinsic condition number of a well-posed problem. We stabilize the forward and backward errors in RP-GMRES to achieve EFBS by evaluating a small portion of the algorithm in higher precision while evaluating the preconditioner in lower precision. FBSMR can achieve optimal accuracy in terms of both forward and backward errors for well-posed problems with unpolluted matrices, independently of κ(A). With low-precision preconditioning, FBSMR can reduce the computational, memory, and energy requirements over direct methods with or without IR. FBSMR can also leverage parallelization-friendly classical Gram-Schmidt in Arnoldi iterations without compromising EFBS. We demonstrate the effectiveness of FBSMR using both random and realistic li
This paper focuses on the homogenization of high-contrast dielectric elastomer composites, which are materials that deform in response to electrical stimulation. The considered heterogeneous material, consisting of an...
There is a wide availability of methods for testing normality under the assumption of independent and identically distributed data. When data are dependent in space and/or time, however, assessing and testing the marg...