The Internet of Things (IoT) has tremendously impacted people’s daily lives in recent years. Researchers and corporations also show interest in diverse IoT applications, such as the Internet of Medical Things (IoMT)....
The adoption of electric vehicles (EVs) is an effective way to mitigate the current energy and environmental crises. However, limited access to and availability of charging infrastructure are restricting the EV market de...
Electrical energy consumption is continually increasing, which requires a corresponding increase in supply. One solution is to predict electricity energy consumption using Artificial Intelligenc...
GPT-based models have enabled the creation of natural language chatbots that support both Inquiry-Based and Structured Learning approaches. This study offers a direct comparison of these two paradigms within a UNIX Sh...
The maximal coding rate reduction (MCR2) objective for learning structured and compact deep representations is drawing increasing attention, especially after its recent usage in the derivation of fully explainable and highly effective deep network architectures. However, it lacks a complete theoretical justification: only the properties of its global optima are known, and its global landscape has not been studied. In this work, we give a complete characterization of the properties of all its local and global optima, as well as other types of critical points. Specifically, we show that each (local or global) maximizer of the MCR2 problem corresponds to a low-dimensional, discriminative, and diverse representation, and furthermore, each critical point of the objective is either a local maximizer or a strict saddle point. Such a favorable landscape makes MCR2 a natural choice of objective for learning diverse and discriminative representations via first-order optimization methods. To validate our theoretical findings, we conduct extensive experiments on both synthetic and real data sets. Copyright 2024 by the author(s)
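For reference, the MCR2 objective discussed above is commonly written as follows (a sketch of the standard formulation from the MCR2 literature; the symbols Z, Π_j, d, m, ε are the usual ones and are not taken from this abstract, so the paper's own notation may differ):

```latex
% Representations Z = [z_1, ..., z_m] \in R^{d x m}, with diagonal membership
% matrices \Pi_j selecting the samples of class j, j = 1, ..., k.
\Delta R(Z, \Pi, \epsilon)
  = \underbrace{\frac{1}{2} \log\det\!\Big(I + \frac{d}{m\epsilon^2} Z Z^{\top}\Big)}_{R(Z,\epsilon):\ \text{coding rate of all samples}}
  \;-\; \underbrace{\sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2m}
      \log\det\!\Big(I + \frac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^2} Z \Pi_j Z^{\top}\Big)}_{R_c(Z,\epsilon \mid \Pi):\ \text{sum of per-class coding rates}}
```

Maximizing the gap ΔR expands the volume of the whole representation while compressing each class, which is the "diverse and discriminative" structure whose local and global optima the abstract characterizes.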
Federated Learning (FL) is an emerging privacy-preserving distributed computing paradigm that enables numerous clients to collaboratively train machine learning models without the need for transmitting the private dat...
Simulation-based learning is one of the most powerful tools to support the educational process, offering opportunities to learn from experiences that closely resemble real-world environments. In th...
The digital era has made the seamless sharing and storage of media such as images on cloud platforms an integral part of our lives. However, user privacy and data security in these repositories remain a significant concern. W...
Sim-to-real transfer, which trains RL agents in the simulated environments and then deploys them in the real world, has been widely used to overcome the limitations of gathering samples in the real world. Despite the ...
The notion of margin loss has been central to the development and analysis of algorithms for binary classification. To date, however, there remains no consensus as to the analogue of the margin loss for multiclass classification. In this work, we show that a broad range of multiclass loss functions, including many popular ones, can be expressed in the relative margin form, a generalization of the margin form of binary losses. The relative margin form is broadly useful for understanding and analyzing multiclass losses as shown by our prior work (Wang and Scott, 2020, 2021). To further demonstrate the utility of this way of expressing multiclass losses, we use it to extend the seminal result of Bartlett et al. (2006) on classification-calibration of binary margin losses to multiclass. We then analyze the class of Fenchel-Young losses, and expand the set of these losses that are known to be classification-calibrated.
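To make the "relative margin form" mentioned above concrete, a standard binary margin loss has the form L(y, f(x)) = φ(y f(x)), and the multiclass analogue rewrites a loss as a function of the pairwise score differences f_y(x) − f_{y'}(x). The cross-entropy identity below is a well-known illustration of this rewriting (the general form is only a sketch of the idea; the paper's precise definition may differ):

```latex
% Binary margin form: a loss of the score-times-label product.
L(y, f(x)) = \varphi\big(y f(x)\big), \qquad y \in \{-1, +1\}

% Multiclass example: cross-entropy expressed via relative margins
% f_y(x) - f_{y'}(x), where f_y(x) is the score of the true class y.
-\log \frac{e^{f_y(x)}}{\sum_{y'} e^{f_{y'}(x)}}
  = \log\Big(1 + \sum_{y' \neq y} e^{-\left(f_y(x) - f_{y'}(x)\right)}\Big)
```

The loss thus depends on f(x) only through the vector of relative margins, which is the property the abstract exploits to extend binary classification-calibration results to the multiclass setting.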