Rough set theory places great importance on approximation accuracy, which is used to gauge how well a rough set model describes a target concept. However, traditional approximation accuracy has limitations since it varies with changes in the target concept and cannot evaluate the overall descriptive ability of a rough set model. To overcome this, two types of average approximation accuracy that objectively assess a rough set model's ability to approximate all information granules are proposed. The first is the relative average approximation accuracy, which is based on all sets in the universe and has several basic properties. The second is the absolute average approximation accuracy, which is based on undefinable sets and has yielded significant results. We also explore the relationship between these two types of average approximation accuracy. Moreover, the average approximation accuracy has practical applications in addressing missing attribute values in incomplete information tables.
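The abstract does not spell out the two averaging formulas, so the following is only a hedged sketch: it computes the classical Pawlak approximation accuracy |R_*(X)|/|R^*(X)| on a toy universe and then naively averages it over every non-empty subset, which is one plausible reading of a "relative average" taken over all sets. The partition, the averaging rule, and the function names are illustrative assumptions, not the paper's definitions.

```python
from itertools import combinations

def blocks_of(universe, key):
    """Partition the universe into equivalence classes induced by `key`."""
    classes = {}
    for u in universe:
        classes.setdefault(key(u), set()).add(u)
    return list(classes.values())

def accuracy(target, blocks):
    """Classical Pawlak approximation accuracy |lower(X)| / |upper(X)|."""
    lower = set().union(*[b for b in blocks if b <= target])
    upper = set().union(*[b for b in blocks if b & target])
    return len(lower) / len(upper) if upper else 1.0   # the empty set is exactly definable

# Illustrative "relative average": mean accuracy over all non-empty subsets
universe = {1, 2, 3, 4, 5, 6}
blocks = blocks_of(universe, key=lambda u: u % 3)      # classes {3, 6}, {1, 4}, {2, 5}
subsets = [set(c) for r in range(1, len(universe) + 1)
           for c in combinations(sorted(universe), r)]
avg = sum(accuracy(s, blocks) for s in subsets) / len(subsets)
print(f"average approximation accuracy over {len(subsets)} subsets: {avg:.3f}")
```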
There is a long history, as well as a recent explosion of interest, in statistical and generative modeling approaches based on score functions - derivatives of the log-likelihood of a distribution. In seminal works, H...
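As a quick, hedged illustration of the opening definition only (not taken from the excerpted work): the score of a distribution is the derivative of its log-density, s(x) = d/dx log p(x). The minimal sketch below evaluates the analytic score of a 1-D Gaussian and checks it against a central finite difference; the distribution and all numbers are made up for illustration.

```python
import numpy as np

# Score function of a density p: s(x) = d/dx log p(x).
# For a Gaussian N(mu, sigma^2) the score is (mu - x) / sigma**2.
mu, sigma = 1.0, 2.0

def log_p(x):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def score(x):
    return (mu - x) / sigma ** 2

x, h = 0.3, 1e-5
finite_diff = (log_p(x + h) - log_p(x - h)) / (2 * h)   # numerical check of the derivative
print(score(x), finite_diff)                            # both ≈ 0.175
```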
Authors:
Puri, Chetan; Reddy, K.T.V.
Department of Computer Science and Design, Wardha, India
Department of Artificial Intelligence and Data Science, Wardha, India
Fetal growth restriction and preterm delivery continue to be major worldwide health concerns, with serious consequences for the health of mothers and babies. Prompt and accurate prediction of these issues is ba...
Coronary artery disease stands as one of the primary contributors to global mortality rates. The automated identification of coronary artery stenosis from X-ray images plays a critical role in the diagnostic process f...
In this paper, we consider the unified optimal subsampling estimation and inference on the low-dimensional parameter of main interest in the presence of the nuisance parameter for low/high-dimensional generalized linear models (GLMs) with massive data. We first present a general subsampling decorrelated score function to reduce the influence of the less accurate nuisance parameter estimation with its slow convergence rate. The consistency and asymptotic normality of the resultant subsample estimator from a general decorrelated-score subsampling algorithm are established, and two optimal subsampling probabilities are derived under the A- and L-optimality criteria to downsize the data volume and reduce the computational burden. The proposed optimal subsampling probabilities provably improve the asymptotic efficiency of the subsampling schemes in low-dimensional GLMs and perform better than the uniform subsampling scheme in high-dimensional GLMs. A two-step algorithm is further proposed for implementation, and the asymptotic properties of the corresponding estimators are also given. Simulations show satisfactory performance of the proposed estimators, and two applications to census income and Fashion-MNIST datasets also demonstrate their practical applicability.
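The decorrelated-score construction and the exact A-/L-optimal weights are not reproduced here; as a hedged sketch of the general two-step optimal-subsampling recipe only, the snippet below runs plain logistic regression with L-optimality-style probabilities π_i ∝ |y_i − p̂_i|·‖x_i‖ computed from a uniform pilot, then refits with inverse-probability weights. It ignores the nuisance-parameter decorrelation that is the paper's focus; the data, sample sizes, and the fit_logistic helper are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "massive" logistic-regression data
n, d = 100_000, 5
X = rng.normal(size=(n, d))
beta_true = np.linspace(0.5, -0.5, d)
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def fit_logistic(Xs, ys, w=None, iters=25):
    """Weighted Newton iterations for a logistic-regression MLE."""
    w = np.ones(len(ys)) if w is None else w
    beta = np.zeros(Xs.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-Xs @ beta))
        grad = Xs.T @ (w * (ys - mu))
        hess = (Xs * (w * mu * (1 - mu))[:, None]).T @ Xs
        beta += np.linalg.solve(hess, grad)
    return beta

# Step 1: pilot estimate from a small uniform subsample
pilot = rng.choice(n, 2_000, replace=False)
beta_pilot = fit_logistic(X[pilot], y[pilot])

# Step 2: L-optimality-style probabilities pi_i ∝ |y_i - p_i| * ||x_i||,
# mixed with a uniform component for numerical stability
p_hat = 1 / (1 + np.exp(-X @ beta_pilot))
scores = np.abs(y - p_hat) * np.linalg.norm(X, axis=1)
pi = 0.9 * scores / scores.sum() + 0.1 / n
idx = rng.choice(n, 5_000, replace=True, p=pi)

# Inverse-probability-weighted refit on the informative subsample
beta_sub = fit_logistic(X[idx], y[idx], w=1 / pi[idx])
print(np.round(beta_sub, 3))
print(np.round(beta_true, 3))
```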
In this paper, we propose a new recommendation algorithm for addressing the problem of two-sided online matching markets with complementary preferences and quota constraints, where agents' preferences are unknown a priori and must be learned from data. The presence of mixed quota and complementary preference constraints can lead to instability in the matching process, making this problem challenging to solve. To overcome this challenge, we formulate the problem as a bandit learning framework and propose the Multi-agent Multi-type Thompson Sampling (MMTS) algorithm. The algorithm combines the strengths of Thompson Sampling for exploration with a new double matching technique to provide a stable matching outcome. Our theoretical analysis demonstrates the effectiveness of MMTS: it achieves stability and has a total Õ(Q√(K_max T)) Bayesian regret with high probability, which exhibits linearity with respect to the total firm quota Q, the square root of the maximum number of available workers of each type, √K_max, and the time horizon T. In addition, simulation studies demonstrate MMTS' effectiveness in various settings. Code used in our experiments is available at https://***/Likelyt/Double-Matching. Copyright 2024 by the author(s)
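MMTS itself (two-sided stability, complementary preferences, the double matching step) is not reproduced here; the hedged sketch below shows only the Thompson Sampling exploration ingredient in a toy one-firm setting: Beta posteriors over Bernoulli match utilities, with a quota of Q workers selected per round. All quantities and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: one firm with quota Q picks Q workers per round from K candidates.
# MMTS-style exploration is approximated here by plain Beta-Bernoulli Thompson Sampling.
K, Q, T = 8, 3, 2_000
true_means = rng.uniform(0.2, 0.9, size=K)   # unknown match utilities
alpha = np.ones(K)                           # Beta posterior: successes + 1
beta = np.ones(K)                            # Beta posterior: failures + 1

regret = 0.0
best = np.sort(true_means)[-Q:].sum()        # best achievable per-round utility
for t in range(T):
    theta = rng.beta(alpha, beta)                  # sample utilities from the posterior
    chosen = np.argsort(theta)[-Q:]                # fill the quota with the top-Q draws
    rewards = rng.binomial(1, true_means[chosen])  # observe noisy match feedback
    alpha[chosen] += rewards
    beta[chosen] += 1 - rewards
    regret += best - true_means[chosen].sum()

print(f"cumulative regret over {T} rounds: {regret:.1f}")
```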
We propose a two-stage memory retrieval dynamics for modern Hopfield models, termed U-Hop, with enhanced memory capacity. Our key contribution is a learnable feature map Φ which transforms the Hopfield energy function into kernel space. This transformation ensures convergence between the local minima of the energy and the fixed points of the retrieval dynamics within the kernel space. Consequently, the kernel norm induced by Φ serves as a novel similarity measure. It utilizes the stored memory patterns as learning data to enhance memory capacity across all modern Hopfield models. Specifically, we accomplish this by constructing a separation loss LΦ that separates the local minima of the kernelized energy by separating the stored memory patterns in kernel space. Methodologically, the U-Hop memory retrieval process consists of: (Stage I) minimizing the separation loss for a more uniform memory (local-minimum) distribution, followed by (Stage II) standard Hopfield energy minimization for memory retrieval. This results in a significant reduction of possible metastable states in the Hopfield energy function, thus enhancing memory capacity by preventing memory confusion. Empirically, with real-world datasets, we demonstrate that U-Hop outperforms all existing modern Hopfield models and SOTA similarity measures, achieving substantial improvements in both associative memory retrieval and deep learning tasks. Code is available at GitHub; future updates are on arXiv. Copyright 2024 by the author(s)
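A hedged NumPy sketch of the underlying retrieval step only: softmax-based modern Hopfield retrieval in which the similarity is computed through a feature map Φ. A fixed tanh map stands in for the learned Φ, and Stage I (separation-loss training) is omitted entirely, so this is not U-Hop itself; β, the pattern sizes, and the exact update form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Stored memory patterns: rows of Xi, shape (M, d), unit-normalized
M, d, beta = 16, 32, 8.0
Xi = rng.normal(size=(M, d))
Xi /= np.linalg.norm(Xi, axis=1, keepdims=True)

def phi(x):
    """Placeholder feature map; U-Hop instead learns Φ by minimizing a separation loss."""
    return np.tanh(x)

def retrieve(query, steps=3):
    """Softmax retrieval where similarity is measured through the feature map Φ."""
    x = query
    for _ in range(steps):
        x = Xi.T @ softmax(beta * phi(Xi) @ phi(x))  # convex combination of stored patterns
    return x

# A noisy copy of pattern 0 should be pulled back toward pattern 0
noisy = Xi[0] + 0.1 * rng.normal(size=d)
out = retrieve(noisy)
print("retrieved pattern index:", int(np.argmax(Xi @ out)))   # expect 0
```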
Heterogeneous effect estimation is crucial in causal inference, with applications across medicine and social science. Many methods for estimating conditional average treatment effects (CATEs) have been proposed, but ...
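The excerpt does not describe the paper's estimator, so as a hedged baseline illustration only, here is a standard T-learner sketch for CATEs, τ̂(x) = μ̂₁(x) − μ̂₀(x), on synthetic data with a known heterogeneous effect; the scikit-learn forests and all settings are assumptions, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Synthetic data with a known heterogeneous effect tau(x) = x[:, 0]
n = 4_000
X = rng.normal(size=(n, 3))
T = rng.binomial(1, 0.5, size=n)          # randomized binary treatment
tau = X[:, 0]                             # true CATE
y = X.sum(axis=1) + T * tau + rng.normal(scale=0.5, size=n)

# T-learner: fit separate outcome models for treated and control units
mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 1], y[T == 1])
mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 0], y[T == 0])
cate_hat = mu1.predict(X) - mu0.predict(X)

print("corr(estimated, true CATE):", round(np.corrcoef(cate_hat, tau)[0, 1], 2))
```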
In the realm of artificial intelligence (AI), a notable challenge has surfaced: adversarial attacks. These attacks involve altering input data to mislead AI models. Developing defensive measures against adversarial at...
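As one concrete, well-known instance of the idea described above (and not a method from this paper), the sketch below applies a fast-gradient-sign-style perturbation, x_adv = x + ε·sign(∂L/∂x), to a toy linear classifier; the model, ε, and dimensions are made-up illustrations.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy linear classifier p(y=1|x) = sigmoid(w·x); the clean input is classified correctly.
d = 50
w = rng.normal(size=d)
x = 0.1 * w                                # an input the model is confident about
y = 1                                      # its true label

# FGSM-style perturbation: x_adv = x + eps * sign(dL/dx) for cross-entropy loss L
p = sigmoid(w @ x)
grad_x = (p - y) * w                       # gradient of binary cross-entropy w.r.t. x
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

print("clean       p(true class) =", round(sigmoid(w @ x), 4))
print("adversarial p(true class) =", round(sigmoid(w @ x_adv), 4))
```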
In classical statistics, the population mean is estimated using determinate, crisp data values when auxiliary information is known. These estimates can often be biased. The main objective of this study is to introduce ...
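The study's own estimators are not shown in the excerpt; the hedged sketch below only illustrates the classical crisp setting it contrasts against: the ratio estimator ȳ_R = ȳ·(X̄/x̄) with a known auxiliary mean, compared by Monte Carlo with the plain sample mean to show its small bias and reduced variance. The population model and sample sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical ratio estimator of a population mean with a known auxiliary mean:
#   y_bar_R = y_bar * (X_bar / x_bar), which is biased in general for finite samples.
N, n = 10_000, 50
x_pop = rng.gamma(shape=2.0, scale=3.0, size=N)        # auxiliary variable, mean known
y_pop = 2.0 * x_pop + rng.normal(scale=2.0, size=N)    # study variable, correlated with x
X_bar, Y_bar = x_pop.mean(), y_pop.mean()

est_plain, est_ratio = [], []
for _ in range(5_000):
    idx = rng.choice(N, n, replace=False)
    y_bar, x_bar = y_pop[idx].mean(), x_pop[idx].mean()
    est_plain.append(y_bar)
    est_ratio.append(y_bar * X_bar / x_bar)

print(f"true mean: {Y_bar:.3f}")
print(f"plain sample mean:  bias {np.mean(est_plain) - Y_bar:+.4f}  var {np.var(est_plain):.4f}")
print(f"ratio estimator:    bias {np.mean(est_ratio) - Y_bar:+.4f}  var {np.var(est_ratio):.4f}")
```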