ISBN:
(Print) 9783031734762; 9783031734779
Zero-shot learning (ZSL) addresses the challenge of classifying unseen test images without explicit training on those samples. By learning from visual and semantic embedding vectors (feature vectors), ZSL can identify and classify the abundant unlabeled images that are available. Information-rich visual features extracted from images play a crucial role in ZSL. This paper proposes a hybrid feature approach that integrates low-level (LL) and high-level (HL) features extracted from images. Gray Level Co-occurrence Matrix (GLCM) and Gabor features are employed to obtain LL texture features, while HL features are derived from the ResNet-50 model, renowned for capturing complex hierarchical representations. These hybrid visual features are then mapped to semantic features using a linear mapping, where the semantic features are embedding vectors of labels generated by the fastText model. Experiments on the AWA2 and SUN datasets are conducted to evaluate the proposed approach's effectiveness. The hybrid feature approach demonstrates enhanced quality in zero-shot image classification, effectively classifying images that the model has not seen during training.
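The pipeline the abstract describes — low-level texture features mapped to semantic label embeddings by a linear map — can be sketched roughly as follows. All dimensions, the tiny hand-rolled GLCM, and the random stand-ins for ResNet-50 and fastText vectors are toy assumptions, not the paper's actual setup:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Tiny horizontal-offset GLCM with two Haralick-style statistics
    (contrast, energy) -- a minimal stand-in for the LL extractor."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1          # count co-occurring gray-level pairs
    p = glcm / glcm.sum()        # normalize to joint probabilities
    idx = np.arange(levels)
    contrast = ((idx[:, None] - idx[None, :]) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return np.array([contrast, energy])

# Hypothetical toy setup: hybrid visual features X mapped to
# fastText-like label embeddings S via a least-squares linear map W.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))     # 50 images, 6-dim hybrid features
S = rng.normal(size=(50, 4))     # 4-dim semantic label embeddings
W, *_ = np.linalg.lstsq(X, S, rcond=None)

img = rng.random((16, 16))
feat = glcm_features(img)
print(feat.shape)   # (2,)
print(W.shape)      # (6, 4)
```

At test time, an unseen image's visual features would be projected through W and matched to the nearest label embedding.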
A light field is usually represented as a set of multi-view images captured from a two-dimensional (2-D) array of viewpoints and requires a large amount of data compared with a standard 2-D image. We propose a 2-D com...
ISBN:
(Print) 9781665475921
This demo paper presents a real-time learned image codec on FPGA. Using a Xilinx VCU128, the proposed system achieves 720P@30fps encoding and decoding, 7.76x faster than prior work.
ISBN:
(Print) 9783031585340; 9783031585357
Image segmentation is a crucial step in image processing, with various applications in biomedical image analysis. Segmentation of magnetic resonance images of the brain is one such key area: it separates the various tissues in the brain and detects tumor regions. In this paper, an unsupervised rough spatial ensemble kernelized fuzzy clustering segmentation algorithm is presented for automated segmentation of magnetic resonance images of the brain. The proposed algorithm integrates Rough Fuzzy C-Means clustering with the kernel method, using a novel ensemble kernel that combines spherical, Gaussian, and Cauchy kernels to improve segmentation performance. The proposed algorithm outperforms existing clustering algorithms across a wide range of magnetic resonance images of the brain, as confirmed by both quantitative results and visual inspection.
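The ensemble kernel idea — combining spherical, Gaussian, and Cauchy kernels — can be sketched as below, together with the kernel-induced distance used in kernelized fuzzy c-means. The equal mixing weights and the bandwidth are assumptions; the paper's actual combination may differ:

```python
import numpy as np

def ensemble_kernel(x, y, sigma=1.0, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted combination of spherical, Gaussian, and Cauchy kernels
    (a sketch; the weights here are assumed, not from the paper)."""
    d = np.linalg.norm(x - y)
    r = d / sigma
    spherical = (1 - 1.5 * r + 0.5 * r ** 3) if r < 1 else 0.0
    gaussian = np.exp(-d ** 2 / (2 * sigma ** 2))
    cauchy = 1.0 / (1.0 + d ** 2 / sigma ** 2)
    w1, w2, w3 = weights
    return w1 * spherical + w2 * gaussian + w3 * cauchy

# Kernel-induced squared distance used inside kernelized fuzzy c-means:
#   d_K(x, v)^2 = K(x, x) - 2 K(x, v) + K(v, v)
x, v = np.array([0.2, 0.4]), np.array([0.3, 0.1])
dk2 = ensemble_kernel(x, x) - 2 * ensemble_kernel(x, v) + ensemble_kernel(v, v)
print(dk2 > 0)
```

In the clustering loop, this distance would replace the Euclidean distance when updating fuzzy memberships and cluster centers.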
ISBN:
(Print) 9798350349405; 9798350349399
An electrocardiogram (ECG) captures the heart's electrical signal to assess various heart conditions. In practice, ECG data is stored as either digitized signals or printed images. Despite the emergence of numerous deep learning models for digitized signals, many hospitals prefer image storage due to cost considerations. Recognizing the unavailability of raw ECG signals in many clinical settings, we propose VizECGNet, which uses only printed ECG graphics to determine the prognosis of multiple cardiovascular diseases. During training, cross-modal attention modules (CMAM) integrate information from the two modalities (image and signal), while self-modality attention modules (SMAM) capture the inherent long-range dependencies in the ECG data of each modality. Additionally, we utilize knowledge distillation to improve the similarity between the two distinct predictions from each modality stream. This multi-modal deep learning architecture enables the use of ECG images alone during inference. With image input, VizECGNet achieves higher precision, recall, and F1-score than signal-based ECG classification models, with improvements of 3.50%, 8.21%, and 7.38%, respectively.
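The cross-modal attention idea — image tokens attending to signal tokens — can be sketched with plain scaled dot-product attention. This is a minimal illustration, not the paper's CMAM, which would add learned query/key/value projections and multiple heads; all shapes here are invented:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # stabilize the exponent
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(img_tokens, sig_tokens):
    """Image tokens (queries) attend to signal tokens (keys/values):
    a bare-bones sketch of cross-modal fusion between the two streams."""
    d = img_tokens.shape[-1]
    scores = img_tokens @ sig_tokens.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ sig_tokens

rng = np.random.default_rng(1)
img_tokens = rng.normal(size=(5, 8))   # 5 image patches, 8-dim
sig_tokens = rng.normal(size=(7, 8))   # 7 signal segments, 8-dim
fused = cross_modal_attention(img_tokens, sig_tokens)
print(fused.shape)  # (5, 8)
```

During training both streams exist; at inference only the image stream is needed, which is what makes the distillation between the two predictions useful.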
ISBN:
(Print) 9781665475921
Glass reflection is a problem when taking photos through glass windows or showcases. Since the visual quality of a captured image can be enhanced by removing reflection, we develop an intelligent reflection-elimination imaging device based on a polarizer that minimizes the reflection effect on images. The system mainly consists of a polarizing module, an image analysis module, and a reflection removal module. Users can hold the device and capture images with minimal reflection, day or night. The demo video is available at: https://***/10.6084/***.19687830.v1.
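The physical principle such a device exploits is that light reflected off glass is strongly polarized, so rotating an analyzer attenuates the reflected component by cos²(θ − φ) (Malus's law). A toy sweep for the angle that crosses the reflection, with an assumed reflection polarization angle of 30°:

```python
import numpy as np

# Assumed polarization angle of the glass reflection (illustrative only).
phi_r = np.deg2rad(30.0)

# Sweep analyzer angles 0..179 degrees; by Malus's law the transmitted
# reflected intensity is proportional to cos^2(theta - phi_r).
thetas = np.deg2rad(np.arange(0, 180))
reflected = np.cos(thetas - phi_r) ** 2
best = thetas[np.argmin(reflected)]   # analyzer angle crossing the reflection
print(np.rad2deg(best))
```

The best angle sits 90° from the reflection's polarization axis; the actual device would search for it by analyzing captured frames rather than knowing φ in advance.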
Tracking image sources and verifying copyright information is crucial in digital media communication. Digital image watermarking technology, widely used for copyright protection and source tracking, faces challenges i...
ISBN:
(Print) 9783031804373; 9783031804380
Text-to-image generation is a cutting-edge technology that enables computers to generate images from textual descriptions. While this technology has been extensively researched and applied to English text, applying it to Arabic text is still in its early stages. The Arabic language is particularly challenging due to its right-to-left writing system and extensive vocabulary of 1.3 million words. In this paper, we explore generating images from Arabic text descriptions. First, we fine-tune a transformer-based model pre-trained on Arabic text to transform the text information into affine transformations within the DF-GAN generator. Second, we present a text transformer that incorporates LSTM layers to address the limitation of unrecognized words. Third, a mask predictor is trained within the generator using a weakly supervised method and incorporated into the affine transformation for more effective integration of image and text features. In addition, we add the DAMSM loss as a regularization term to achieve convergence and stability during training. Experiments on two challenging datasets, CUB and Oxford-Flower, show that our architecture can generate high-quality images that faithfully represent the Arabic textual descriptions. We believe scaling this task could enable critical applications in fields such as Arabic visual learning, e-commerce, advertising, and entertainment.
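The DF-GAN-style conditioning mentioned above injects the sentence embedding into the generator as a channel-wise affine transformation. A rough sketch, where the two linear layers producing scale (gamma) and shift (beta) and all dimensions are hypothetical stand-ins for the paper's learned modules:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear layers mapping the sentence embedding to
# channel-wise scale (gamma) and shift (beta) parameters.
txt_dim, channels = 16, 4
W_gamma = rng.normal(size=(txt_dim, channels)) * 0.1
W_beta = rng.normal(size=(txt_dim, channels)) * 0.1

def affine_modulate(feat, txt_emb):
    """Text-conditioned affine modulation of a generator feature map."""
    gamma = txt_emb @ W_gamma           # (channels,)
    beta = txt_emb @ W_beta             # (channels,)
    return feat * (1 + gamma) + beta    # broadcast over spatial dims

feat = rng.normal(size=(8, 8, channels))  # H x W x C feature map
txt_emb = rng.normal(size=(txt_dim,))     # Arabic sentence embedding
out = affine_modulate(feat, txt_emb)
print(out.shape)  # (8, 8, 4)
```

Stacking several such modulated blocks lets the text condition every stage of generation, which is where the paper's mask predictor is folded in.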
Effective image coding techniques are crucial for digital image storage and transmission. Traditional methods struggle to maintain high visual quality at low bitrates. In this paper, we present MobileViT-GAN, a novel ...
ISBN:
(Print) 9781665475921
This paper presents a concise end-to-end visual-analysis-motivated super-resolution model, VASR, for image reconstruction. Compatible with the existing machine vision feature coding framework, the features extracted by the machine vision task model are amplified via super-resolution to reconstruct the original image for human vision. Experimental results show that, without additional bit-streams, VASR effectively reconstructs images from the extracted machine features, achieving good results on the COCO, OpenImages, TVD, and DIV2K datasets.
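The core idea — spatially amplifying a low-resolution machine-vision feature map back toward image resolution — can be illustrated with the simplest possible upsampler. This nearest-neighbour toy is only a stand-in for VASR's learned super-resolution network:

```python
import numpy as np

def upsample_nearest(feat, scale=4):
    """Nearest-neighbour upsampling of a 2-D feature map: a toy stand-in
    for the learned super-resolution amplification applied to machine
    features before reconstructing the image for human vision."""
    return feat.repeat(scale, axis=0).repeat(scale, axis=1)

feat = np.arange(16.0).reshape(4, 4)    # low-res machine feature map
recon = upsample_nearest(feat)
print(recon.shape)  # (16, 16)
```

A learned model would replace the repeat with convolutions that hallucinate plausible high-frequency detail, which is why no extra bit-stream is needed.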