Brain tumor classification is crucial for personalized treatment. While deep learning-based Artificial Intelligence (AI) models can automatically analyze tumor images, fine details of small tumor regions may be overlooked during global feature extraction. Therefore, we propose a brain tumor Magnetic Resonance Imaging (MRI) classification model based on a global-local parallel dual-branch structure. The global branch employs ResNet50 with a Multi-Head Self-Attention (MHSA) mechanism to capture global contextual information from whole brain images, while the local branch utilizes VGG16 to extract fine-grained features from segmented brain tumor regions. The features from both branches are processed through a designed attention-enhanced feature fusion module to filter and integrate important features. Additionally, to address sample imbalance in the dataset, we introduce a category attention block to improve the recognition of minority classes. Experimental results indicate that our method achieved a classification accuracy of 98.04% and a micro-average Area Under the Curve (AUC) of 0.989 in the classification of three types of brain tumors, surpassing several existing pre-trained Convolutional Neural Network (CNN) models. Furthermore, feature interpretability analysis validated the effectiveness of the proposed method. This suggests that the method holds significant potential for brain tumor image classification.
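For readers who want a concrete picture of the dual-branch design, the following is a minimal PyTorch-style sketch assuming a ResNet50 global branch, a VGG16 local branch, and a simple gated fusion; the module layout, feature sizes, and fusion gate are illustrative assumptions, not the authors' exact implementation (the MHSA and category attention blocks are omitted).

import torch
import torch.nn as nn
from torchvision import models

class DualBranchClassifier(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # Global branch: ResNet50 backbone over the whole brain image (MHSA block omitted here).
        resnet = models.resnet50(weights=None)
        self.global_branch = nn.Sequential(*list(resnet.children())[:-1])   # -> (B, 2048, 1, 1)
        # Local branch: VGG16 features over the segmented tumor region.
        vgg = models.vgg16(weights=None)
        self.local_branch = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))  # -> (B, 512, 1, 1)
        # Attention-style fusion: learn per-channel weights over the concatenated features.
        fused_dim = 2048 + 512
        self.fusion_gate = nn.Sequential(nn.Linear(fused_dim, fused_dim), nn.Sigmoid())
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, whole_image, tumor_region):
        g = self.global_branch(whole_image).flatten(1)    # global contextual features
        l = self.local_branch(tumor_region).flatten(1)    # fine-grained local features
        fused = torch.cat([g, l], dim=1)
        fused = fused * self.fusion_gate(fused)           # filter and integrate important features
        return self.classifier(fused)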
Image bitmaps, i.e., data containing pixels and visual perception, have been widely used in emerging applications for pixel operations while consuming large amounts of memory space and energy. Compared with legacy DRAM (dynamic random access memory), non-volatile memories (NVMs) are suitable for bitmap storage due to their salient features of high density and intrinsic durability. However, writing NVMs suffers from higher energy consumption and latency compared with read operations. Existing precise or approximate compression schemes in NVM controllers show limited performance for bitmaps due to the irregular data patterns and variance in similarity. We observe pixel-level similarity when writing bitmaps due to the analogous contents in adjacent pixels. By exploiting the pixel-level similarity, we propose SimCom, an approximate similarity-aware compression scheme in the NVM module controller, to efficiently compress data for each write access. The idea behind SimCom is to compress continuous similar words into pairs of base words with runs. The storage costs for small runs are further mitigated by reusing the least significant bits of base words. SimCom adaptively selects an appropriate compression mode for various bitmap formats, thus achieving an efficient trade-off between quality and memory performance. We implement SimCom on GEM5/zsim with NVMain and evaluate the performance with real-world image/video workloads. The results demonstrate the efficacy and efficiency of SimCom with an efficient quality-performance trade-off.
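As a rough illustration of the base-word-plus-run encoding described above, the sketch below collapses runs of similar 32-bit words into (base word, run length) pairs; the similarity rule (matching upper bits), the word width, and the encoding layout are assumptions for illustration rather than the actual controller logic.

# Illustrative run-length compression of similar 32-bit words (assumed similarity rule:
# two words are "similar" if their upper 24 bits match, so only the low bits differ).
SIM_MASK = 0xFFFFFF00

def compress_similar(words):
    """Collapse runs of similar consecutive words into (base_word, run_length) pairs."""
    pairs = []
    for w in words:
        if pairs and (pairs[-1][0] & SIM_MASK) == (w & SIM_MASK):
            base, run = pairs[-1]
            pairs[-1] = (base, run + 1)      # extend the current run
        else:
            pairs.append((w, 1))             # start a new run with this base word
    return pairs

def decompress_similar(pairs):
    """Approximate decompression: each word in a run is reconstructed from its base."""
    return [base for base, run in pairs for _ in range(run)]

# Example: four nearly identical pixel words compress to a single (base, 4) pair.
pixels = [0x11223344, 0x11223345, 0x11223346, 0x11223347, 0xAABBCCDD]
print([(hex(b), r) for b, r in compress_similar(pixels)])   # [('0x11223344', 4), ('0xaabbccdd', 1)]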
Real-time systems involve tasks that may voluntarily suspend their execution as they await specific events or resources. Such self-suspension can introduce further delays and unpredictability in scheduling, making the...
The scaler and scheduler of a serverless system are the two cornerstones that ensure service quality and efficiency. However, existing scalers and schedulers are constrained by static thresholds, scaling latency, and si...
An abundance of data has been generated from various embedded devices, applications, and systems, and requires cost-efficient storage services. Data deduplication removes duplicate chunks and has become an important technique for storage systems to improve space efficiency. However, the stored unique chunks are heavily fragmented, which decreases restore performance and incurs high overheads for garbage collection. Existing schemes fail to achieve an efficient trade-off among deduplication, restore, and garbage collection performance, because they fail to explore and exploit the physical locality of different chunks. In this paper, we trace the storage patterns of the fragmented chunks in backup systems and propose a high-performance deduplication system, called HiDeStore. The main insight is to enhance the physical locality of new backup versions during the deduplication phase, which identifies and stores hot chunks in active containers. The chunks not appearing in new backups become cold and are gathered together in archival containers. Moreover, we remove expired data with an isolated container deletion scheme, avoiding the high overheads of expired data detection. Compared with state-of-the-art schemes, HiDeStore improves deduplication and restore performance by up to 1.4x and 1.6x, respectively, without decreasing the deduplication ratios or incurring high garbage collection overheads.
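The hot/cold container placement can be pictured with the toy sketch below, where chunks of the newest backup stay together in active containers and chunks that stop appearing are gathered into archival containers; the data structures and the hotness test are simplified assumptions, not HiDeStore's actual on-disk layout.

# Illustrative sketch: hot chunks of the newest backup live in active containers,
# while chunks that stop appearing are gathered into archival containers.
class DedupStore:
    def __init__(self):
        self.seen = set()     # fingerprints already stored (deduplication index)
        self.active = []      # chunks belonging to the newest backup, kept physically together
        self.archival = []    # cold chunks gathered for older backup versions

    def write_backup(self, fingerprints, previous_backup):
        # Store only chunks never seen before (deduplication).
        for fp in fingerprints:
            self.seen.add(fp)
        # Active containers now hold exactly the chunks of the newest backup.
        self.active = list(dict.fromkeys(fingerprints))
        # Chunks of the previous backup that no longer appear become cold.
        cold = [fp for fp in previous_backup if fp not in set(fingerprints)]
        if cold:
            self.archival.append(cold)

store = DedupStore()
store.write_backup(["a", "b", "c", "d"], previous_backup=[])
store.write_backup(["a", "b", "x"], previous_backup=["a", "b", "c", "d"])
print(store.active)    # ['a', 'b', 'x']  -- hot chunks of the newest backup
print(store.archival)  # [['c', 'd']]     -- cold chunks gathered together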
The visual noise of each light intensity area is different when an image is rendered by Monte Carlo methods. However, existing denoising algorithms have limited denoising performance under complex lighting conditions and easily lose detailed features. Therefore, we propose a rendered image denoising method with filtering guided by lighting information. Firstly, we design an image segmentation algorithm based on lighting information to segment the image into different illumination areas. Secondly, we establish a parameter prediction model guided by lighting information for filtering (PGLF) to predict the filtering parameters of different illumination areas. For different illumination areas, we use these filtering parameters to construct area filters, and the filters are guided by the lighting information to perform sub-area filtering. Finally, the filtering results are fused with auxiliary features to output denoised images, improving the overall denoising effect of the image. On the physically based rendering tool (PBRT) scenes and the Tungsten dataset, the experimental results show that, compared with other guided filtering denoising methods, our method improves the peak signal-to-noise ratio (PSNR) metric by 4.2164 dB on average and the structural similarity index (SSIM) metric by 7.8% on average. This shows that our method can better reduce the noise in complex lighting scenes and improve the image quality.
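The segment-then-filter idea can be sketched as below: pixels are grouped into illumination areas by luminance and each area is denoised with its own filter strength; the thresholds, the Gaussian filters standing in for the predicted area filters, and the blending step are illustrative guesses, not the PGLF model.

# Illustrative sketch: split a rendered image into illumination areas by luminance,
# then denoise each area with its own filter strength (a Gaussian blur stands in for the area filter).
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_by_illumination(image, thresholds=(0.33, 0.66), sigmas=(2.0, 1.0, 0.5)):
    """image: float array in [0, 1], shape (H, W, 3)."""
    luminance = image @ np.array([0.299, 0.587, 0.114])   # per-pixel brightness
    areas = np.digitize(luminance, thresholds)             # 0 = dark, 1 = mid, 2 = bright
    result = np.zeros_like(image)
    for area_id, sigma in enumerate(sigmas):
        # Filter the whole image with the strength assigned to this area,
        # then keep only the pixels that belong to the area.
        filtered = gaussian_filter(image, sigma=(sigma, sigma, 0))
        mask = (areas == area_id)[..., None]
        result = np.where(mask, filtered, result)
    return result

noisy = np.random.rand(64, 64, 3).astype(np.float32)
denoised = denoise_by_illumination(noisy)
print(denoised.shape)  # (64, 64, 3)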
Dear Editor, This letter investigates predefined-time optimization problems (OPs) of multi-agent systems (MASs), where each agent of the MASs is subject to inequality constraints, and the team objective function accounts for impulse effects. Firstly, to address the inequality constraints, the penalty method is introduced. Then, a novel optimization strategy is developed, which only requires that the team objective function be strongly convex.
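The penalty method mentioned in the letter replaces the inequality-constrained problem with an unconstrained one that charges a cost for constraint violation; the quadratic penalty form and the toy problem below are illustrative assumptions, not the letter's exact construction.

# Illustrative penalty reformulation: minimize f(x) subject to g(x) <= 0 becomes
# minimize f(x) + rho * max(0, g(x))**2 for a large penalty weight rho.
def f(x):
    return (x - 3.0) ** 2          # toy strongly convex team objective

def g(x):
    return x - 1.0                 # toy inequality constraint: x <= 1

def penalized(x, rho=100.0):
    return f(x) + rho * max(0.0, g(x)) ** 2

# Simple grid-search check: the penalized minimum sits near the constraint boundary x = 1.
xs = [i / 1000.0 for i in range(-2000, 4000)]
x_best = min(xs, key=penalized)
print(round(x_best, 2))  # approximately 1.02, approaching 1 as rho grows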
Video salient object detection (VSOD) aims at locating the most attractive objects in a video by exploring the spatial and temporal cues. It poses a challenging task in computer vision, as it involves processing complex spatial data that is also influenced by temporal dynamics. Despite the progress made in existing VSOD models, they still struggle in scenes of great background diversity within and between frames. Additionally, they encounter difficulties related to accumulated noise and high time consumption during the extraction of temporal features over a long-term sequence. We propose a multi-stream temporal enhanced network (MSTENet) to address these problems. It investigates saliency cues collaboration in the spatial domain with a multi-stream structure to deal with the great background diversity challenge. A straightforward, yet efficient approach for temporal feature extraction is developed to avoid the accumulative noises and reduce time consumption. The distinction between MSTENet and other VSOD methods stems from its incorporation of both foreground supervision and background supervision, facilitating enhanced extraction of collaborative saliency cues. Another notable differentiation is the innovative integration of spatial and temporal features, wherein the temporal module is integrated into the multi-stream structure, enabling comprehensive spatial-temporal interactions within an end-to-end framework. Extensive experimental results demonstrate that the proposed method achieves state-of-the-art performance on five benchmark datasets while maintaining a real-time speed of 27 fps (Titan XP). Our code and models are available at https://***/RuJiaLe/MSTENet.
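The combined foreground and background supervision can be illustrated with the short sketch below, where the background map is supervised against the complement of the saliency ground truth; the loss combination and tensor shapes are assumptions for illustration, not MSTENet's actual training objective.

# Illustrative dual-supervision loss: the network predicts a foreground saliency map and a
# background map, and each is supervised separately (background target = 1 - foreground target).
import torch
import torch.nn.functional as F

def dual_supervision_loss(fg_logits, bg_logits, saliency_gt):
    """fg_logits, bg_logits, saliency_gt: tensors of shape (B, 1, H, W), gt values in {0, 1}."""
    fg_loss = F.binary_cross_entropy_with_logits(fg_logits, saliency_gt)
    bg_loss = F.binary_cross_entropy_with_logits(bg_logits, 1.0 - saliency_gt)
    return fg_loss + bg_loss

fg = torch.randn(2, 1, 64, 64)
bg = torch.randn(2, 1, 64, 64)
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(dual_supervision_loss(fg, bg, gt).item())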
Existing methods in article recommendation fail to fully use the article information, or pay insufficient attention to the correlations among articles and "User-Article"s, resulting in inaccurate recommendation perf...
Predicting the metastatic direction of primary breast cancer (BC) can assist physicians in precise treatment and strict follow-up, effectively improving the prognosis. The clinical data of 293,946 patients with ...