COVID-19 is an infectious disease caused by the most recently discovered coronavirus. Despite four years of study on the characteristics of this virus since its emergence in 2019, many questions have been left unsol...
In addressing the challenge of image similarity estimation on the MNIST dataset, our research diverges from conventional Siamese network methodologies by incorporating the Vision Transformer (ViT) architecture. Departing fr...
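The Siamese comparison this abstract builds on can be sketched in a few lines: two inputs pass through the same shared encoder and their embedding distance serves as the similarity score. The linear `embed` stand-in, the dimensions, and the random inputs below are illustrative assumptions, not the authors' model (which uses a ViT backbone).

```python
import numpy as np

def embed(x, W):
    # Toy stand-in for the shared encoder (a single linear projection);
    # in the described approach this role is played by a shared ViT backbone.
    return x @ W

def pair_distance(x1, x2, W):
    # Euclidean distance between the two shared-weight embeddings,
    # the usual similarity score in a Siamese comparison.
    e1, e2 = embed(x1, W), embed(x2, W)
    return float(np.linalg.norm(e1 - e2))

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 16))   # flattened 28x28 MNIST image -> 16-dim embedding
a = rng.normal(size=784)
b = rng.normal(size=784)

print(pair_distance(a, a, W))    # identical inputs -> distance 0.0
print(pair_distance(a, b, W))    # distinct inputs -> positive distance
```

Because both branches share weights, identical inputs always map to identical embeddings, which is the property a Siamese similarity estimator relies on.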
In today's fast-paced world, individuals are increasingly vulnerable to high levels of stress, which consequently raise the risk of developing depression. Depression may manifest itself via various symptoms, ran...
Findings in chest x-ray studies can be automatically detected and localized using artificial intelligence (AI) in healthcare. To detect the location of findings, additional annotation in the form of bounding boxes i...
The phenomenon of atmospheric haze arises due to the scattering of light by minute particles suspended in the atmosphere. This optical effect gives rise to visual degradation in images and videos. The degradation is primarily influenced by two key factors: atmospheric attenuation and scattered light. Scattered light overlays the image with a whitish veil, while attenuation diminishes the image's inherent contrast. Efforts to enhance image and video quality necessitate the development of dehazing techniques capable of mitigating the adverse impact of haze. This scholarly endeavor presents a comprehensive survey of recent advancements in the domain of dehazing techniques, encompassing both conventional methodologies and those founded on machine learning principles. Traditional dehazing techniques leverage a haze model to deduce a dehazed rendition of an image or frame. In contrast, learning-based techniques employ sophisticated mechanisms such as Convolutional Neural Networks (CNNs) and various deep Generative Adversarial Networks (GANs) to create models that can discern dehazed representations by learning intricate parameters such as transmission maps, atmospheric light conditions, or their combined effects. Furthermore, some learning-based approaches facilitate the direct generation of dehazed outputs from hazy inputs by learning the non-linear mapping between the two. This review delves into a comprehensive examination of datasets utilized within learning-based dehazing methodologies, elucidating their characteristics and relevance. Furthermore, a systematic exposition of the merits and demerits inherent in distinct dehazing techniques is presented. The discourse culminates in a synthesis of the primary challenges confronted by prevailing dehazing techniques. The assessment of dehazed image and frame quality is facilitated through the application of rigorous evaluation metrics, a discussion of which is incorporated. To provide empiri
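The haze model that traditional dehazing techniques invert can be sketched as follows. The standard atmospheric scattering model writes the hazy observation as I = J·t + A·(1 − t), where J is the scene radiance, t the transmission map, and A the atmospheric light; the sketch below, including the `t0` clamp and the synthetic round trip, is an illustrative assumption rather than any specific method from the survey.

```python
import numpy as np

def dehaze(I, A, t, t0=0.1):
    # Invert the haze model I = J*t + A*(1 - t):
    # recover scene radiance J = (I - A) / max(t, t0) + A.
    # t is clamped at t0 to avoid amplifying noise where haze is dense.
    t = np.maximum(t, t0)
    return (I - A) / t + A

# Round trip: synthesize a hazy image from a known J, then recover it.
rng = np.random.default_rng(1)
J = rng.uniform(0.0, 1.0, size=(4, 4))   # clear scene radiance
t = rng.uniform(0.5, 1.0, size=(4, 4))   # transmission map (attenuation)
A = 0.9                                   # atmospheric light (scattering term)
I = J * t + A * (1 - t)                   # hazy observation
J_hat = dehaze(I, A, t)
print(np.allclose(J, J_hat))              # → True
```

The two degradation factors named in the abstract appear directly in the model: attenuation as the multiplicative `t`, and scattered light as the additive `A * (1 - t)` veil. Learning-based methods estimate `t` and `A` (or the full mapping from I to J) rather than assuming they are known.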
Real-world data always exhibit an imbalanced and long-tailed distribution, which leads to poor performance for neural network-based models. Existing methods mainly tackle this problem by reweighting the loss function or rebalancing the samples. However, one crucial aspect overlooked by previous research studies is the imbalanced feature space problem caused by the imbalanced angle distribution. In this paper, the authors shed light on the significance of the angle distribution in achieving a balanced feature space, which is essential for improving model performance under long-tailed scenarios. However, it is challenging to effectively balance both the classifier norms and angle distribution due to problems such as the low feature norm. To tackle these challenges, the authors first thoroughly analyse the classifier and feature space by decoupling the classification logits into three key components: classifier norm (i.e., the magnitude of the classifier vector), feature norm (i.e., the magnitude of the feature vector), and cosine similarity between the classifier vector and feature vector. In this way, the authors analyse the change of each component in the training process and reveal three critical problems that should be solved, that is, the imbalanced angle distribution, the lack of feature discrimination, and the low feature norm. Based on this analysis, the authors propose a novel loss function that incorporates hyperspherical uniformity, additive angular margin, and feature norm regularisation. Each component of the loss function addresses a specific problem and synergistically contributes to achieving a balanced classifier and feature space. The authors conduct extensive experiments on three popular benchmark datasets including CIFAR-10/100-LT, ImageNet-LT, and iNaturalist. The experimental results demonstrate that the authors' loss function outperforms several previous state-of-the-art methods in addressing the challenges posed by imbalanced and long-tailed datasets, t
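The logit decoupling described above can be sketched numerically: a classification logit z = w·f factors exactly into the three components the authors analyse, z = ‖w‖ · ‖f‖ · cos θ. The function name, dimensions, and random vectors below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def decouple_logit(w, f):
    # Decompose the logit z = w.f into the three components discussed above:
    # classifier norm ||w||, feature norm ||f||, and the cosine similarity
    # between classifier and feature vectors, so z = ||w|| * ||f|| * cos(theta).
    w_norm = float(np.linalg.norm(w))
    f_norm = float(np.linalg.norm(f))
    cos_sim = float(w @ f) / (w_norm * f_norm)
    return w_norm, f_norm, cos_sim

rng = np.random.default_rng(2)
w = rng.normal(size=64)   # classifier vector for one class
f = rng.normal(size=64)   # feature vector for one sample

w_norm, f_norm, cos_sim = decouple_logit(w, f)
print(np.isclose(w_norm * f_norm * cos_sim, w @ f))  # → True
```

Viewing training dynamics through these three factors separately is what lets the analysis attribute poor tail-class performance to the angle distribution and low feature norms rather than to the raw logits alone.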
The proliferation of Wireless Sensor Networks (WSN) in various applications has necessitated the exploration of network architectures that can ensure efficient, scalable, and reliable communication. This study present...
The utilization of Data-Driven Machine Learning (DDML) models in the healthcare sector poses unique challenges due to the crucial nature of clinical decision-making and its impact on patient outcomes. A primary concer...
Context: In the public health domain, there is no shortage of failed Information Systems projects. In addition to overblown budgets and missed deadlines (ad nauseam), technical issues exist. These include poor usabil...