In this manuscript, we introduce and study some pentapartitioned neutrosophic probability distributions. The study is carried out by generalizing some classical probability distributions, such as the Poisson distribution...
Deep latent variable models learn condensed representations of data that, hopefully, reflect the inner workings of the studied phenomena. Unfortunately, these latent representations are not statistically identifiable,...
Fix an integer s ≥ 2. Let P be a set of n points and let L be a set of lines in a linear space such that no line in L contains more than (n - 1)/(s - 1) points of P. Suppose that for every s-set S in P, there is a pa...
Author: Moschella, Luca (GLADIA research lab, Department of Computer Science; Faculty of Information Engineering, Informatics and Statistics, Italy)
As Neural Networks (NNs) permeate various scientific and industrial domains, understanding the universality and reusability of their representations becomes crucial. At their core, these networks create intermediate neural representations of the input data, referred to as latent spaces, and subsequently leverage them to perform specific downstream tasks. This dissertation focuses on the universality and reusability of neural representations. Do the latent representations crafted by a NN remain exclusive to a particular trained instance, or can they generalize across models, adapting to factors such as randomness during training, model architecture, or even data domain? This adaptive quality introduces the notion of Latent Communication – a phenomenon that describes when representations can be unified or reused across neural spaces. A salient observation from our research is the emergence of similarities in latent representations, even when these originate from distinct or seemingly unrelated NNs. By exploiting a partial correspondence between the two data distributions that establishes a semantic link, we found that these representations can either be projected into a universal representation (Moschella*, Maiorca*, et al., 2023), coined the Relative Representation, or be directly translated from one space to another (Maiorca* et al., 2023). Intriguingly, this holds even when the transformation relating the spaces is unknown (Cannistraci, Moschella, Fumero, et al., 2024) and when the semantic bridge between them is minimal (Cannistraci, Moschella, Maiorca, et al., 2023). Latent Communication allows for a bridge between independently trained NNs, irrespective of their training regimen, architecture, or the data modality they were trained on – as long as the semantic content of the data stays the same (e.g., images and their captions). This holds true for generation, classification, and retrieval downstream tasks; in supervised, weakly supervised, and unsupervised settings; and
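The Relative Representation idea summarized above can be sketched minimally: re-encode each latent vector by its cosine similarities to a shared set of anchor samples, which makes the representation invariant to orthogonal changes of basis in the latent space (e.g., rotations induced by a different training seed). Function names, dimensions, and the random data below are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def relative_representation(z, anchors):
    """Re-encode latents z (n, d) as cosine similarities to a set of
    anchor latents (k, d), yielding an (n, k) relative representation."""
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    a_n = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return z_n @ a_n.T

rng = np.random.default_rng(0)
z = rng.normal(size=(5, 8))        # absolute latents from one model
anchors = rng.normal(size=(3, 8))  # shared anchor samples

rel = relative_representation(z, anchors)

# An orthogonal change of basis leaves the relative representation
# unchanged, since cosine similarity is preserved by rotations.
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
rel_rotated = relative_representation(z @ q, anchors @ q)
```

Because `(zQ)(aQ)ᵀ = z a ᵀ` for any orthogonal `Q`, two latent spaces that differ only by such a transformation map to the same relative coordinates, which is one way representations from independently trained models can be compared.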
Management of solid waste is a major challenge in densely populated urban areas. Such areas also have other predominant health management issues and improper waste management contributes to that. With the rise of COVI...
In recent years, communication technologies have been growing significantly, and Cognitive Radio (CR) networks are an expert system for adjusting the radio spectrum. However, wireless communication diverse scenarios and distingui...
Non-stationary count time series characterized by features such as abrupt changes and fluctuations about the trend arise in many scientific domains including biophysics, ecology, energy, epidemiology, and social scien...
Solving Partially Observable Markov Decision Processes (POMDPs) in continuous state, action and observation spaces is key for autonomous planning in many real-world mobility and robotics applications. Current approach...
Modern healthcare often utilises radiographic images alongside textual reports for diagnostics, encouraging the use of Vision-Language Self-Supervised Learning (VL-SSL) with large pre-trained models to learn versatile medical vision representations. However, most existing VL-SSL frameworks are trained end-to-end, which is computation-heavy and can lose vital prior information embedded in pre-trained encoders. To address both issues, we introduce the backbone-agnostic Adaptor framework, which preserves medical knowledge in pre-trained image and text encoders by keeping them frozen, and employs a lightweight Adaptor module for cross-modal learning. Experiments on medical image classification and segmentation tasks across three datasets reveal that our framework delivers competitive performance while cutting trainable parameters by over 90% compared to current pre-training approaches. Notably, when fine-tuned with just 1% of data, Adaptor outperforms several Transformer-based methods trained on full datasets in medical image segmentation.
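The frozen-encoder design described above can be sketched in a few lines: both pre-trained encoders are kept fixed, and only a small cross-modal Adaptor module holds trainable parameters. Dimensions, names, and the random stand-in weights below are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical embedding sizes for the two frozen encoders and the
# shared space the Adaptor projects into.
IMG_DIM, TXT_DIM, SHARED = 768, 512, 256
rng = np.random.default_rng(0)

# Stand-ins for frozen pre-trained encoders (random projections here);
# these weights would never be updated during training.
W_img = rng.normal(size=(IMG_DIM, IMG_DIM))
W_txt = rng.normal(size=(TXT_DIM, TXT_DIM))

# The lightweight Adaptor: the only trainable parameters.
A_img = 0.01 * rng.normal(size=(IMG_DIM, SHARED))
A_txt = 0.01 * rng.normal(size=(TXT_DIM, SHARED))

def forward(x_img, x_txt):
    """Encode each modality with its frozen encoder, then project both
    into a shared space through the trainable Adaptor."""
    return (x_img @ W_img) @ A_img, (x_txt @ W_txt) @ A_txt

img_emb, txt_emb = forward(rng.normal(size=(4, IMG_DIM)),
                           rng.normal(size=(4, TXT_DIM)))
```

Keeping the encoder weights out of the optimizer is what yields the large reduction in trainable parameters: only the projection matrices of the Adaptor are learned, while the pre-trained medical knowledge in the backbones is left untouched.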
The purpose of this paper is to introduce a new class of bilevel problems in the framework of a real Hilbert space. In addition, we introduce an inertial iterative method with a regularization term, and we establish th...