Inverter circuits are widely used in power electronics applications such as electric motor control, induction heating or different Alternating Current (AC) loads. The control signal applied to the switching elements c...
Locally linear model tree (LoLiMoT) and piecewise linear network (PLN) learning algorithms are two approaches to local linear neuro-fuzzy modeling. While both methods belong to the class of growing tree learning algorithms, they follow different logics. PLN learning relies solely on the training data: it requires a rich training data set but no division test, so it is much faster than LoLiMoT, yet it may create adjacent neurons that lead to a singular regression matrix. LoLiMoT, on the other hand, almost always achieves an acceptable output error, but it often needs more rules. In this paper, to exploit the complementary strengths of both algorithms, the piecewise linear model tree (PiLiMoT) learning algorithm is introduced. In essence, PiLiMoT is a combination of LoLiMoT and PLN learning. The initially proposed algorithm is improved by adding the ability to merge previously divided local linear models and by utilizing a simulated-annealing stochastic decision process to select a local model for splitting. Compared to LoLiMoT and PLN learning, the proposed improved learning algorithm constructs models with fewer rules at comparable modeling errors. The algorithms are compared through a case study of nonlinear function approximation, and the obtained results demonstrate the advantages of the combined modified method.
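The simulated-annealing split selection mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes only that each local linear model carries a local error measure and that a temperature parameter trades off exploration (near-uniform choice among models) against exploitation (always splitting the worst-fitting model, as plain LoLiMoT does). The function name and the softmax-style weighting are illustrative choices:

```python
import math
import random

def sa_select_split(local_errors, temperature):
    """Pick the index of the local model to split next.

    Weights each model by exp(error / temperature), shifted by the
    maximum error for numerical stability. High temperature gives a
    near-uniform (exploratory) choice; as the temperature cools, the
    worst-fitting model is chosen almost deterministically, recovering
    LoLiMoT's greedy splitting rule.
    """
    m = max(local_errors)
    weights = [math.exp((e - m) / temperature) for e in local_errors]
    # Roulette-wheel sampling over the weights.
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(local_errors) - 1
```

With a very low temperature the model with the largest local error is selected essentially every time, while a high temperature lets poorly ranked models occasionally be split, which is the stochastic element the abstract attributes to PiLiMoT.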
This paper studies the relationship between generalization and privacy preservation in iterative learning algorithms by two sequential steps. We first establish an alignment between generalization and privacy preserva...
Fair supervised learning algorithms assigning labels with little dependence on a sensitive attribute have attracted great attention in the machine learning community. While the demographic parity (DP) notion has been ...
Data is evolving with the rapid progress of population and communication for various types of devices such as networks, cloud computing, Internet of Things (IoT), actuators, and sensors. The increment of data and comm...
Warfarin dosing remains challenging due to narrow therapeutic index and highly individual variability. Incorrect warfarin dosing is associated with devastating adverse events. Remarkable efforts have been made to deve...
We introduce a new framework for studying meta-learning methods using PAC-Bayesian theory. Its main advantage over previous work is that it allows for more flexibility in how the transfer of knowledge between tasks is...
We consider core-periphery structured graphs, which are graphs with a group of densely and sparsely connected nodes, respectively, referred to as core and periphery nodes. The so-called core score of a node is related...
Meta-learning has emerged as an effective methodology for modelling several real-world tasks and problems, owing to its extraordinary effectiveness in the low-data regime. In many scenarios, from the classification of rare diseases to language modelling of uncommon languages, large datasets are rarely available. Similarly, in broader settings such as self-driving, an autonomous vehicle needs to be trained to handle every situation well. This requires training the ML model on a variety of tasks with good-quality data. But often we find that the data distribution across tasks is skewed, i.e., the data follows a long-tail distribution. The model then performs well on some tasks and poorly on others, leading to robustness issues. Meta-learning has recently emerged as a learning paradigm that can effectively learn from one task and generalize that learning to unseen tasks. However, meta-learning models are often difficult to train due to stability issues. Negative transfer (cite: To Transfer or Not To Transfer), which is commonly seen in transfer learning, is one of the main reasons for this instability. Just as negative transfer can hinder performance in transfer learning when tasks are too dissimilar, the understudied effects of task interactions can affect performance in meta-learning as well. It is therefore useful to study the task distribution of meta-train and meta-test tasks and to leverage any external source of information about these tasks, which can help us create more informed mini-batches instead of the status quo of randomly selecting tasks for the mini-batch. In this study, we aim to exploit external knowledge of task relations to improve training stability via effective mini-batching of tasks. We hypothesize that selecting a diverse set of tasks in a mini-batch leads to a better estimate of the full gradient and hence to a reduction of noise in training.
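The diversity-driven mini-batching idea can be sketched in a few lines. This is a hypothetical illustration, not the study's method: it assumes tasks come with embeddings derived from some external source of task relations, and uses greedy farthest-point selection (one of several possible diversity heuristics) to build a mini-batch that spans the task distribution rather than sampling tasks uniformly at random:

```python
import math

def diverse_minibatch(task_embeddings, batch_size):
    """Greedily pick `batch_size` mutually distant task indices.

    Starts from task 0 and repeatedly adds the task whose minimum
    Euclidean distance to the already-chosen tasks is largest, so the
    mini-batch covers spread-out regions of task space instead of a
    single cluster.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    chosen = [0]
    while len(chosen) < batch_size:
        best, best_d = None, -1.0
        for i in range(len(task_embeddings)):
            if i in chosen:
                continue
            d = min(dist(task_embeddings[i], task_embeddings[j]) for j in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen
```

Under the stated hypothesis, gradients averaged over such a batch should better approximate the full gradient over all tasks than gradients from a randomly drawn batch that happens to concentrate on one task cluster.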
Background Health professionals can use natural language processing (NLP) technologies when reviewing electronic health records (EHR). Machine learning free-text classifiers can help them identify problems and make cr...