Triple-level cell (TLC) NAND flash is increasingly adopted to build solid-state drives (SSDs) for modern computer systems. While TLC NAND flash effectively improves storage density, it faces severe reliability issues; in particular, the pages exhibit different raw bit error rates (RBERs). Integrating a strong low-density parity-check (LDPC) code helps to improve reliability but suffers from prolonged and proportional read latency due to multiple read retries for worse pages. A straightforward idea is that dispersing page-size data across several pages of different types can achieve a lower average RBER and reduce the read latency. However, directly implementing this simple idea in the flash translation layer (FTL) induces a read amplification issue, as one logical page residing in more than one physical page brings several read operations. In this paper, we propose the Dynamic Request Interleaving (DIR) technique for improving the performance of TLC NAND flash-based SSDs, in particular aged ones with large RBERs. DIR exploits the observation that the latency of an I/O request is determined, without considering the queuing time, by the access of the slowest device page, i.e., the page that has the highest RBER. By grouping consecutive logical pages that have high locality and interleaving their encoded data across different types of device pages that have different RBERs, DIR effectively reduces the number of read retries for LDPC with limited read amplification. To meet the requirement of allocating hybrid page types for interleaved data, we also design a page-interleaving-friendly page allocation scheme, which splits all the planes into multi-plane regions for storing the interleaved data and single-plane regions for storing normal data. The pages in the multi-plane region can be read/written in parallel by the proposed multi-plane command, avoiding the read amplification issue. Based on the DIR scheme and the proposed page allocation scheme, we build a DIR-enabled FTL, which integrates the pro…
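The dispersal idea behind DIR can be sketched in a few lines. The page types and per-type RBER values below are illustrative assumptions, not measurements from the paper:

```python
# Toy sketch of the page-interleaving idea (not the authors' FTL code).
# The per-page-type raw bit error rates are illustrative assumptions.
RBER = {"LSB": 1e-4, "CSB": 5e-4, "MSB": 9e-4}

def interleave(pages, types=("LSB", "CSB", "MSB")):
    """Spread each logical page's encoded data evenly across one device page of each type."""
    layout = []
    for page in pages:
        chunk = len(page) // len(types)
        layout.append([(t, page[i * chunk:(i + 1) * chunk]) for i, t in enumerate(types)])
    return layout

def avg_rber(entry):
    """Average error rate seen by one logical page under this layout."""
    return sum(RBER[t] for t, _ in entry) / len(entry)

pages = ["AAAAAA", "BBBBBB"]   # two logical pages of encoded data
layout = interleave(pages)
mean = avg_rber(layout[0])     # sits between the best and worst page types
```

A grouped request's latency is set by its slowest page, so lowering the worst-case RBER seen by each logical page is what cuts LDPC read retries; the paper's multi-plane command is what keeps the extra physical accesses parallel rather than serialized.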
Dear Editor, This letter presents a distributed adaptive second-order latent factor (DAS) model for addressing the issue of high-dimensional and incomplete data representation. Compared with first-order optimizers, a second-order optimizer has a stronger ability to approach a better solution when dealing with non-convex optimization problems, thus obtaining better performance in extracting the latent factors (LFs) that well represent the known information in high-dimensional and incomplete data.
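As a rough illustration of why second-order information helps, the toy below factorizes an incomplete matrix and scales each gradient by an inverse diagonal curvature estimate. This is a stand-in under our own assumptions, not the DAS model itself:

```python
# Toy latent-factor model on incomplete data with curvature-scaled updates.
# Illustrative only; sizes, learning rate, and damping are assumptions.
import random
random.seed(0)

known = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (2, 1): 1.0}  # sparse known entries
R, C, K = 3, 2, 2
P = [[random.random() for _ in range(K)] for _ in range(R)]   # row latent factors
Q = [[random.random() for _ in range(K)] for _ in range(C)]   # column latent factors

def predict(i, j):
    return sum(P[i][k] * Q[j][k] for k in range(K))

def loss():
    return sum((v - predict(i, j)) ** 2 for (i, j), v in known.items())

def step(lr=0.1, damp=1e-2):
    for (i, j), v in known.items():
        err = v - predict(i, j)
        for k in range(K):
            # divide the gradient by a diagonal curvature estimate, a crude
            # second-order-style scaling that self-tunes the per-coordinate step
            P[i][k] += lr * err * Q[j][k] / (Q[j][k] ** 2 + damp)
            Q[j][k] += lr * err * P[i][k] / (P[i][k] ** 2 + damp)

before = loss()
for _ in range(100):
    step()
```

The curvature scaling makes the step size per coordinate roughly self-tuning, which is the intuition behind preferring second-order optimizers for such non-convex factorization problems.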
Although ray tracing produces high-fidelity, realistic images, it is considered computationally burdensome when implemented in a high-rendering-rate system. Perception-driven rendering techniques generate images with minimal noise and distortion that are generally acceptable to the human visual system, thereby reducing rendering costs. In this paper, we introduce a perception-entropy-driven temporal reusing method to accelerate real-time ray tracing. We first build a just noticeable difference (JND) model to represent the uncertainty of ray samples and image-space masking effects. Then, we expand the shading gradient through gradient max-pooling and gradient filtering to enlarge the visual receptive field. Finally, we dynamically optimize reusable time segments to improve the accuracy of temporal reusing. Compared with Monte Carlo ray tracing, our algorithm improves frames per second (fps) by 1.93× to 2.96× at 8 to 16 samples per pixel, significantly accelerating the Monte Carlo ray tracing process while maintaining visual quality.
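A minimal sketch of the reuse-length idea: keep a running mean over past samples, but drop the history once the frame-to-frame change exceeds a JND-like threshold. The threshold and signal values are illustrative, not from the paper:

```python
# Toy temporal reusing on a 1-D signal: accumulate samples across frames,
# but reset the reusable segment when the change becomes "perceptible".
# The jnd threshold is an illustrative assumption.

def temporal_reuse(frames, jnd=0.5):
    history, n = 0.0, 0
    out = []
    for f in frames:
        if n and abs(f - history) > jnd:  # perceptible change: discard stale history
            n = 0
        n += 1
        history += (f - history) / n      # running mean over the reused segment
        out.append(history)
    return out

smooth = temporal_reuse([1.0, 1.1, 0.9, 5.0, 5.2])  # abrupt jump at frame 4
```

Resetting the counter shortens the reusable segment exactly when reuse would blur a visible change, which is the role the dynamically optimized time segments play in the paper.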
Thyroid nodules, a common disorder of the endocrine system, require accurate segmentation in ultrasound images for effective diagnosis and treatment. However, achieving precise segmentation remains a challenge due to various factors, including scattering noise, low contrast, and limited resolution in ultrasound images. Although existing segmentation models have made progress, they still suffer from several limitations, such as high error rates, low generalizability, overfitting, and limited feature learning capability. To address these challenges, this paper proposes a Multi-level Relation Transformer-based U-Net (MLRT-UNet) to improve thyroid nodule segmentation. The MLRT-UNet leverages a novel Relation Transformer, which processes images at multiple scales, overcoming the limitations of traditional encoding methods. This transformer integrates both local and global features effectively through self-attention and cross-attention units, capturing intricate relationships within the images. The approach also introduces a Co-operative Transformer Fusion (CTF) module to combine multi-scale features from different encoding layers, enhancing the model's ability to capture complex patterns in the data. Moreover, the Relation Transformer block enhances long-distance dependencies during the decoding process, improving segmentation accuracy. Experimental results show that the MLRT-UNet achieves high segmentation accuracy, reaching 98.2% on the Digital Database Thyroid Image (DDT) dataset, 97.8% on the Thyroid Nodule 3493 (TG3K) dataset, and 98.2% on the Thyroid Nodule 3K (TN3K) dataset. These findings demonstrate that the proposed method significantly enhances the accuracy of thyroid nodule segmentation, addressing the limitations of existing models.
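The cross-attention unit mentioned above can be illustrated with a minimal scaled dot-product attention in pure Python. Shapes and values are made up; this is not the MLRT-UNet implementation:

```python
# Minimal scaled dot-product cross-attention: each query attends over all
# keys and returns a weighted combination of the values.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values)) for j in range(len(values[0]))])
    return out

# One query that matches the first key more strongly than the second.
attn = cross_attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [0.0]])
```

Stacking such units across scales, with a fusion module combining the encoder layers, is the overall shape the abstract describes.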
Constructing an effective common latent embedding by aligning the latent spaces of cross-modal variational autoencoders (VAEs) is a popular strategy for generalized zero-shot learning (GZSL). However, due to the lack of fine-grained instance-wise annotations, existing VAE methods can easily suffer from the posterior collapse problem. In this paper, we propose an innovative asymmetric VAE network by aligning enhanced feature representation (AEFR) for GZSL. Distinguished from general VAE structures, we design two asymmetric encoders for visual and semantic observations and one decoder for visual reconstruction. Specifically, we propose a simple yet effective gated attention mechanism (GAM) in the visual encoder for enhancing the information interaction between observations and latent variables, effectively alleviating the possible posterior collapse problem. In addition, we propose a novel distributional decoupling-based contrastive learning (D2-CL) method to guide learning of classification-relevant information while aligning the representations at the taxonomy level in the latent representation space. Extensive experiments on publicly available datasets demonstrate the state-of-the-art performance of our method. The source code is available at https://***/seeyourmind/AEFR.
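A gated attention mechanism of the general kind described can be illustrated as an element-wise sigmoid gate over features. This is a sketch under our own assumptions, not the AEFR code:

```python
# Element-wise sigmoid gating: the gate decides, per dimension, how much of
# the feature passes through. Feature and logit values are illustrative.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated(features, gate_logits):
    return [f * sigmoid(g) for f, g in zip(features, gate_logits)]

# First dimension's gate saturates near 1; the second is suppressed near 0.
out = gated([2.0, 2.0], [10.0, -10.0])
```

In the paper the gate instead modulates the interaction between observations and latent variables inside the visual encoder, which is what counteracts posterior collapse.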
The evaluation system for small arms firing has an important role in the military domain. A partially automated evaluation system has been conducted and performed at the ground level. Automating such a system with artificial intelligence is a much-needed step. This paper focuses on designing and developing an AI-based small arms firing evaluation system for the military environment. Initially, image processing techniques are used to calculate the target firing score. Additionally, firing errors during shooting are detected using a machine learning algorithm. However, consistency in firing requires abundant practice and updated analysis of previous results. Accuracy and precision are the basic requirements of a good shooter. To test the shooting skill of combatants, firing practices are held by military personnel at frequent intervals that include 'grouping' and 'shoot to hit' scores. A shortage of skilled personnel and a lack of personal interest lead to inefficient evaluation of a firer's standard. This paper introduces a system that automatically fetches the target data and evaluates the standard based on fuzzy logic. Moreover, it predicts shooter performance based on a linear regression model. Further, it compares results with recognized patterns to analyze individual expertise and suggest improvements based on previous values. The work is developed as a Small Arms Firing Skill Evaluation System, which makes the whole process of firing and target evaluation faster with better accuracy. The experiment has been conducted on real-time scenarios in the military field and shows promising results for evaluating the system automatically.
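The two quantitative pieces of such a pipeline, fuzzy grading of a firing score and a linear-regression trend over past sessions, can be sketched as follows. The membership breakpoints and grade names are illustrative assumptions, not the paper's rule base:

```python
# Fuzzy grading via triangular membership functions, plus a least-squares
# trend line over past session scores. All breakpoints are assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_grade(score):
    grades = {"poor": tri(score, -1, 0, 50),
              "average": tri(score, 30, 55, 80),
              "expert": tri(score, 60, 100, 101)}
    return max(grades, key=grades.get)   # grade with highest membership

def fit_line(ys):
    """Least-squares slope/intercept over session indices 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

grade = fuzzy_grade(58)            # falls in the "average" band
slope, b = fit_line([40, 45, 52, 58])  # scores trending upward
nxt = slope * 4 + b                # extrapolated next-session score
```

A positive slope signals improving consistency; comparing the extrapolation against recognized patterns is where the suggestion step described above would plug in.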
Plant disease detection has played a significant role in combating plant diseases that pose a threat to global agriculture and food security. Detecting these diseases early can help mitigate their impact and ensure healthy crop yields. Machine learning algorithms have emerged as powerful tools for accurately identifying and classifying a wide range of plant diseases from trained image datasets of affected plants. These algorithms, including deep learning algorithms, have shown remarkable success in recognizing disease patterns and early signs of plant diseases. Beyond early detection, there are other potential benefits of machine learning algorithms in overall plant disease management, such as soil and climatic condition predictions for plants, pest identification, proximity detection, and many others. Over the years, research has focused on using machine learning algorithms for plant disease detection. However, little is known about the extent to which the research community has explored machine learning algorithms to cover other significant areas of plant disease management. In view of this, we present a cross-comparative review of machine learning algorithms and applications designed for plant disease detection, with a specific focus on four (4) economically important plants: apple, cassava, cotton, and ***. We conducted a systematic review of articles published between 2013 and 2023 to explore trends in the research community over the years. After filtering a number of articles based on our inclusion criteria, including articles that present individual prediction accuracy for classes of disease associated with the selected plants, 113 articles were considered. From these articles, we analyzed the state-of-the-art techniques, challenges, and future prospects of using machine learning for disease identification of the selected plants. Findings from our review show that deep learning and other algorithms performed significantly well in detecting plant diseases. In addition, we fou…
The Internet of Things (IoT) has taken the interconnected world by storm. Due to their immense applicability, IoT devices are being scaled at exponential proportions. However, very little focus has been given to securing such devices. As these devices are constrained in numerous aspects, it leaves network designers and administrators with no choice but to deploy them with minimal or no security at all. We have seen distributed denial-of-service attacks being raised using such devices during the infamous Mirai botnet attack in 2016. Here we propose a lightweight authentication protocol to provide proper access to such devices. We have considered several aspects while designing our authentication protocol, such as scalability, movement, user registration, and device registration. To define the architecture, we used a three-layered model consisting of cloud, fog, and edge computing. We have also proposed several pre-existing cipher suites based on post-quantum cryptography for evaluation and comparison. We also provide a fail-safe mechanism for a situation where an authenticating server might fail, and the deployed IoT devices can self-organize to keep providing services with no human intervention. We find that our protocol works the fastest when using ring learning with errors. We prove the safety of our authentication protocol using the automated validation of Internet security protocols and applications tool. In conclusion, we propose a safe, hybrid, and fast authentication protocol for authenticating IoT devices in a fog computing environment.
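For illustration only, a challenge-response handshake with a pre-shared key shows the general shape of a lightweight device-authentication exchange. HMAC-SHA256 here is a classical stand-in, not one of the post-quantum suites the abstract evaluates:

```python
# Toy challenge-response device authentication with a pre-shared key.
# NOT the proposed protocol: HMAC-SHA256 stands in for the real primitives.
import hmac, hashlib, secrets

PSK = b"pre-shared-device-key"   # hypothetical key provisioned at device registration

def server_challenge():
    return secrets.token_bytes(16)            # fresh nonce per authentication attempt

def device_response(psk, challenge):
    return hmac.new(psk, challenge, hashlib.sha256).digest()

def server_verify(psk, challenge, response):
    expected = hmac.new(psk, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

c = server_challenge()
ok = server_verify(PSK, c, device_response(PSK, c))        # genuine device
bad = server_verify(PSK, c, device_response(b"wrong", c))  # impostor is rejected
```

A real deployment along the lines of the abstract would replace the HMAC with the post-quantum primitives under evaluation (e.g., ring learning with errors) and add the registration, fog-layer, and server fail-over machinery described above.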
The essence of music is inherently multi-modal, with audio and lyrics going hand in hand. However, very little research has studied the intricacies of the multi-modal nature of music, and its relation with ...
Binary neural networks have become a promising research topic due to their advantages of fast inference speed and low energy consumption. However, most existing studies focus on binary convolutional neural networks, while less attention has been paid to binary graph neural networks. A common drawback of existing studies on binary graph neural networks is that they still include many inefficient full-precision operations when multiplying three matrices and are therefore not efficient enough. In this paper, we propose a novel method, called re-quantization-based binary graph neural networks (RQBGN), for binarizing graph neural networks. Specifically, re-quantization, a necessary procedure contributing to the further reduction of superfluous inefficient full-precision operations, quantizes the results of multiplication between any two matrices during the process of multiplying three matrices. To address the challenges introduced by re-quantization, in RQBGN we first study the impact of different computation orders to find an effective one and then introduce a mixture of experts to increase the model capacity. Experiments on five benchmark datasets show that performing re-quantization in different computation orders significantly impacts the performance of binary graph neural network models, and that RQBGN can outperform other baselines and achieve state-of-the-art performance.
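The re-quantization step can be illustrated on a toy three-matrix product: binarize the intermediate result of the first multiplication so the second multiplication also operates on ±1 values. Sign binarization here is our simplification, not the exact RQBGN scheme:

```python
# Toy re-quantization in a three-matrix product (A @ X) @ W: the intermediate
# product is binarized before the second multiplication. Values illustrative.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def binarize(M):
    """Sign binarization: map every entry to +1 or -1."""
    return [[1.0 if x >= 0 else -1.0 for x in row] for row in M]

A = [[1.0, -1.0], [1.0, 1.0]]   # binarized adjacency-like matrix
X = [[0.3, -0.2], [0.8, 0.1]]   # full-precision node features
W = [[1.0, -1.0], [-1.0, 1.0]]  # binarized weights

H = binarize(matmul(A, X))      # re-quantize the intermediate result
out = matmul(H, W)
```

Because every value flowing into the second multiplication is ±1, it can use the same cheap binary operations as the first one; removing such full-precision intermediates is the point of re-quantization.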