Depression is a prevalent mental health condition with severe impacts on physical and social health. It is costly and difficult to detect, requiring substantial time from trained mental health professionals. To alleviate thi...
Continual learning (CL) is a fundamental topic in machine learning, where the goal is to train a model with continuously incoming data and tasks. Due to the memory limit, we cannot store all the historical data, and therefore confront the "catastrophic forgetting" problem, i.e., the performance on previous tasks can substantially decrease because the information from the earlier period is missing in the later period. Though a number of elegant methods have been proposed, the catastrophic forgetting phenomenon still cannot be well avoided in practice. In this paper, we study the problem from the gradient perspective, where our aim is to develop an effective algorithm to calibrate the gradient in each updating step of the model; namely, our goal is to guide the model to be updated in the right direction when a large amount of historical data is unavailable. Our idea is partly inspired by the seminal stochastic variance reduction methods (e.g., SVRG and SAGA) for reducing the variance of gradient estimation in stochastic gradient descent algorithms. Another benefit is that our approach can be used as a general tool, which can be incorporated with several existing popular CL methods to achieve better performance. We also conduct a set of experiments on several benchmark datasets to evaluate the performance in practice. Copyright 2024 by the author(s)
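The variance-reduction idea this abstract cites (SVRG) can be sketched on a toy least-squares problem. Everything below — the problem setup, step size, and loop counts — is an illustrative assumption, not the paper's method; it only shows the gradient-calibration mechanism being referenced.

```python
import numpy as np

# Toy least-squares problem (illustrative, not from the paper):
# minimize f(w) = (1/n) * sum_i 0.5 * (a_i . w - b_i)^2
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true

def grad_i(w, i):
    # Gradient of the i-th component function.
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    # Gradient of the full objective.
    return A.T @ (A @ w - b) / n

# SVRG: periodically take a snapshot and compute its full gradient;
# each stochastic step is then corrected with the snapshot's gradients,
# keeping the estimate unbiased while shrinking its variance.
w = np.zeros(d)
lr = 0.01
for epoch in range(30):
    w_snap = w.copy()
    mu = full_grad(w_snap)                        # anchor: full gradient
    for _ in range(2 * n):
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_snap, i) + mu  # calibrated gradient
        w -= lr * g

print(float(np.linalg.norm(w - w_true)))
```

The calibrated estimate `grad_i(w, i) - grad_i(w_snap, i) + mu` is the SVRG correction: its expectation over `i` equals the true gradient, but its variance vanishes as `w` approaches `w_snap`. The paper's contribution, per the abstract, is adapting this style of calibration to CL updates when historical data is mostly unavailable.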
This work studies embedding of arbitrary VC classes in well-behaved VC classes, focusing particularly on extremal classes. Our main result expresses an impossibility: such embeddings necessarily require a significant ...
Authors:
Lokesh, Gudivada; Baseer, K.K.
Dept. of Computer Science and Engineering, Tirupati, India
Department of Data Science, Tirupati, India
Clouds are highly customizable infrastructures that offer a platform as a service and let customers subscribe to resources matching their requirements on a pay-as-you-go basis. The straightforward service-oriented cloud computing model ...
The development of technologies such as big data and blockchain has brought convenience to life, but at the same time privacy and security issues are becoming more and more prominent. The K-anonymity algorithm is an effective and low-computational-complexity privacy-preserving algorithm that can safeguard users' privacy by anonymizing big data. However, the algorithm currently suffers from the problem of focusing only on improving user privacy while ignoring data utility. In addition, ignoring the impact of quasi-identifier attributes on sensitive attributes causes the usability of the processed data for statistical analysis to be reduced. Based on this, we propose a new K-anonymity algorithm to solve the privacy security problem in the context of big data while guaranteeing improved data utility. First, we construct a new information loss function based on the quantity of information. Considering that different quasi-identifier attributes have different impacts on sensitive attributes, we set weights for each quasi-identifier attribute when designing the information loss function. In addition, to reduce information loss, we improve K-anonymity in two ways. First, we make the loss of information smaller than in the original table while guaranteeing privacy, based on common artificial intelligence algorithms, i.e., the greedy algorithm and the 2-means clustering algorithm. In addition, we improve the 2-means clustering algorithm by designing a mean-center method to select the initial centers of clustering. Then, we design the K-anonymity algorithm of this scheme based on the constructed information loss function, the improved 2-means clustering algorithm, and the greedy algorithm, which reduces the information loss. Finally, we experimentally demonstrate the effectiveness of the algorithm in improving the effect of 2-means clustering and reducing information loss.
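The weighted information-loss and mean-center initialization ideas described in this abstract can be sketched as follows. The records, weights, and range-based loss below are hypothetical illustrations, not the paper's exact definitions: the sketch assumes numeric quasi-identifiers, a normalized-range loss weighted per attribute, and a 2-means split whose two initial centers are the points farthest from and closest to the global mean.

```python
import numpy as np

# Toy records with two numeric quasi-identifiers (age, zip-like code).
records = np.array([
    [25, 13001],
    [27, 13010],
    [52, 98101],
    [55, 98120],
])

# Assumed per-attribute weights reflecting each quasi-identifier's
# impact on the sensitive attribute (hypothetical values).
weights = np.array([0.7, 0.3])

def info_loss(group, full, w):
    # Weighted normalized-range loss for generalizing each record in
    # `group` to the group's min-max interval per quasi-identifier.
    span_g = group.max(axis=0) - group.min(axis=0)
    span_f = full.max(axis=0) - full.min(axis=0)
    return float(np.sum(w * span_g / span_f))

def two_means(data, iters=10):
    # Mean-center initialization: seed the two centers as the points
    # farthest from and closest to the global mean, then run Lloyd steps.
    mean = data.mean(axis=0)
    dist = np.linalg.norm(data - mean, axis=1)
    centers = np.stack([data[dist.argmax()], data[dist.argmin()]])
    for _ in range(iters):
        lab = np.argmin(
            np.linalg.norm(data[:, None] - centers[None], axis=2), axis=1)
        centers = np.stack([data[lab == k].mean(axis=0) for k in (0, 1)])
    return lab

labels = two_means(records.astype(float))
loss = sum(info_loss(records[labels == k], records, weights) for k in (0, 1))
print(loss)  # far below 1.0, the loss of generalizing all rows as one group
```

Each resulting group holds two records, so publishing the generalized intervals satisfies 2-anonymity, while the per-attribute weights let the loss function penalize generalization of the quasi-identifiers that matter most for the sensitive attribute.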
Differential analysis of bulk RNA-seq data often suffers from lack of good controls. Here, we present a generative model that replaces controls, trained solely on healthy tissues. The unsupervised model learns a low-d...
This research addresses the pressing global demand for food by leveraging cutting-edge deep learning techniques for automating plant disease detection. Focusing on tomato and potato leaf diseases, the study utilized t...
The automated design of analog circuits presents a significant challenge due to the complexity of circuit topology and parameter selection. Traditional evolutionary algorithms, such as Genetic Programming (GP), have s...
Skip connections (SCs) are commonly employed in neural networks to facilitate gradient-based training and often lead to improved performance in deep learning. To implement SCs, a user writes custom modules along with ...
Graphs have long been a widely used tool to model data with real-life relationships. To discover the important contents in a graph, many graph neural networks (GNNs) have been proposed. Neverthel...