The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical imaging. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers. The proposed methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and a third dataset. Performance was assessed using metrics such as the Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values, indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing the other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the...
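The abstract reports Dice and Jaccard as the headline metrics and a 2.5D input scheme, but gives no implementation details. Purely as an illustration, a minimal NumPy sketch of the two overlap metrics and one common way to assemble 2.5D inputs (stacking adjacent axial slices as channels) might look as follows; the function names, the `context` parameter, and the boundary-clamping policy are assumptions, not the authors' code.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard Index (intersection over union) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

def make_25d_input(volume, slice_idx, context=1):
    """Stack 2*context+1 adjacent axial slices as channels, clamping
    indices at the volume boundaries (one common 2.5D scheme)."""
    idxs = np.clip(np.arange(slice_idx - context, slice_idx + context + 1),
                   0, volume.shape[0] - 1)
    return volume[idxs]  # shape: (2*context + 1, H, W)
```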
Deep Metric Learning (DML) has long attracted the attention of the machine learning community as a key objective. Existing solutions concentrate on fine-tuning the pre-trained models on conventional image datasets. As...
Domain adaptive semantic segmentation enables robust pixel-wise understanding in real-world driving scenes. Source-free domain adaptation, as a more practical technique, addresses the concerns of data privacy and storage limitations in typical unsupervised domain adaptation methods, making it especially relevant in the context of intelligent vehicles. It utilizes a well-trained source model and unlabeled target data to achieve adaptation in the target domain. However, in the absence of source data and target labels, current solutions cannot sufficiently reduce the impact of domain shift and fully leverage the information from the target data. In this paper, we propose an end-to-end source-free domain adaptation semantic segmentation method via Importance-Aware and Prototype-Contrast (IAPC) learning. The proposed IAPC framework effectively extracts domain-invariant knowledge from the well-trained source model and learns domain-specific knowledge from the unlabeled target domain. Specifically, considering the domain shift in the source model's predictions on the target domain, we put forward an importance-aware mechanism on the biased target prediction probability distribution to extract domain-invariant knowledge from the source model. We further introduce a prototype-contrast strategy, which includes a prototype-symmetric cross-entropy loss and a prototype-enhanced cross-entropy loss, to learn target intra-domain knowledge without relying on labels. Comprehensive experiments on two domain adaptive semantic segmentation benchmarks demonstrate that the proposed end-to-end IAPC solution outperforms existing state-of-the-art methods. The source code is publicly available at https://***/yihong-97/Source-free-IAPC.
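The abstract names a prototype-symmetric cross-entropy loss but does not define it; the exact formulation is in the linked repository. As a hedged sketch of the general idea only — soft class prototypes built from unlabeled target features, plus a symmetric cross-entropy between network predictions and prototype-based assignments — one might write something like the following; the temperature `tau`, the cosine-similarity assignment, and all names here are assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def class_prototypes(feats, probs):
    """Soft class prototypes: probability-weighted mean of pixel features.
    feats: (N, D) target pixel features; probs: (N, C) softmax predictions."""
    weights = probs / probs.sum(dim=0, keepdim=True).clamp_min(1e-7)  # (N, C)
    return weights.t() @ feats                                        # (C, D)

def prototype_symmetric_ce(feats, probs, prototypes, tau=0.1):
    """Symmetric cross-entropy between the network's prediction and a
    prototype-based soft assignment (cosine similarity with temperature)."""
    sim = F.normalize(feats, dim=1) @ F.normalize(prototypes, dim=1).t()
    proto_probs = F.softmax(sim / tau, dim=1)                         # (N, C)
    ce_ab = -(proto_probs.detach() * probs.clamp_min(1e-7).log()).sum(1).mean()
    ce_ba = -(probs.detach() * proto_probs.clamp_min(1e-7).log()).sum(1).mean()
    return ce_ab + ce_ba
```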
Federated learning (FL) has emerged as a promising paradigm for enabling the collaborative training of models without centralized access to the raw data on local devices. In the typical FL paradigm (e.g., FedAvg), mod...
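The fragment cites FedAvg as the typical FL paradigm. For orientation, a minimal sketch of one FedAvg aggregation step — a dataset-size-weighted average of client model weights, with raw data never leaving the clients — could look like this; it is illustrative only and omits client sampling, local training, and communication.

```python
import torch

def fedavg(client_states, client_sizes):
    """One FedAvg aggregation round over PyTorch state dicts, weighted
    by each client's local dataset size."""
    total = float(sum(client_sizes))
    keys = client_states[0].keys()
    return {k: sum((n / total) * s[k].float()
                   for s, n in zip(client_states, client_sizes))
            for k in keys}

# Toy usage: aggregate two clients holding 1200 and 800 samples.
# new_state = fedavg([model_a.state_dict(), model_b.state_dict()], [1200, 800])
```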
Redundancy elimination techniques are extensively investigated to reduce storage overheads for cloud-assisted health systems. Deduplication eliminates the redundancy of duplicate blocks by storing one physical instance referenced by multiple duplicates. Delta compression is usually regarded as a complementary technique to deduplication for further removing the redundancy of similar blocks, but our observations indicate that this assumption does not hold when data have sparse duplicate blocks. In addition, many overlapped deltas arise in the resemblance detection process of post-deduplication delta compression, which hinders the efficiency of delta compression, and the index phase of resemblance detection queries abundant non-similar blocks, resulting in inefficient system performance. Therefore, a multi-feature-based redundancy elimination scheme, called MFRE, is proposed to solve these problems. A similarity feature and a temporal locality feature are exploited to assist redundancy elimination, where the similarity feature well expresses the duplicate attribute. Then, similarity-based dynamic post-deduplication delta compression and temporal-locality-based dynamic delta compression discover more similar base blocks to minimise overlapped deltas and improve compression efficiency. Moreover, a clustering method based on block relationships and a feature index strategy based on Bloom filters reduce I/O overheads and improve system throughput. Experiments demonstrate that, compared to the state-of-the-art method, the proposed method improves the compression ratio and system throughput by 9.68% and 50%, respectively.
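MFRE's exact features and index structures are not specified in the abstract. As a toy sketch of the post-deduplication pipeline it builds on — exact deduplication by strong hash, then resemblance detection by similarity features to pick a delta-compression base — something like the following could serve; the min-hash-style feature and the plain-dict index are stand-ins (the paper uses Bloom-filter-based feature indexing and block-relationship clustering, not reproduced here).

```python
import hashlib

def block_fingerprint(block: bytes) -> str:
    """Strong hash for exact deduplication."""
    return hashlib.sha256(block).hexdigest()

def similarity_feature(block: bytes, n_features: int = 3, window: int = 48) -> int:
    """Toy resemblance feature: min-hash over sliding windows. Similar
    blocks tend to collide on this feature, flagging delta candidates."""
    mins = []
    for i in range(n_features):
        salt = i.to_bytes(2, "big")
        mins.append(min(
            hashlib.blake2b(salt + block[j:j + window], digest_size=8).digest()
            for j in range(0, max(1, len(block) - window), window)))
    return hash(tuple(mins))

store, feature_index = {}, {}

def write_block(block: bytes):
    """Classify an incoming block: exact duplicate, similar (delta against
    a stored base block), or unique. A real system would pre-filter
    non-similar blocks with a Bloom filter before touching the index."""
    fp = block_fingerprint(block)
    if fp in store:
        return ("dedup", fp)
    feat = similarity_feature(block)
    base_fp = feature_index.get(feat)
    store[fp] = block
    feature_index.setdefault(feat, fp)   # first block with this feature is the base
    if base_fp is not None:
        return ("delta", base_fp)
    return ("unique", fp)
```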
With the emergence of the Transformer architecture, the accuracy of deep learning within the domain of facial emotion recognition has seen further enhancement. However, Transformer comes with increased training comple...
Cloud computing is a dynamic and rapidly evolving field, where the demand for resources fluctuates constantly. This paper delves into the imperative need for adaptability in the allocation of resources to applications and services within cloud computing environments. The motivation stems from the pressing issue of accommodating fluctuating levels of user demand efficiently. By adhering to the proposed resource allocation method, we aim to achieve a substantial reduction in energy consumption. This reduction hinges on the precise and efficient allocation of resources to the tasks that require them most, aligning with the broader goal of sustainable and eco-friendly cloud computing practices. To enhance the resource allocation process, we introduce a novel knowledge-based optimization algorithm. In this study, we rigorously evaluate its efficacy by comparing it to existing algorithms, including the Flower Pollination Algorithm (FPA), Spark Lion Whale Optimization (SLWO), and the Firefly Algorithm. Our findings reveal that our proposed algorithm, the Knowledge Based Flower Pollination Algorithm (KB-FPA), consistently outperforms these conventional methods in both resource allocation efficiency and energy consumption reduction. This paper underscores the profound significance of resource allocation in the realm of cloud computing. By addressing the critical issue of adaptability and energy efficiency, it lays the groundwork for a more sustainable future in cloud computing systems. Our contribution to the field lies in the introduction of a new resource allocation strategy, offering the potential for significantly improved efficiency and sustainability within cloud computing infrastructures.
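The knowledge-based component of KB-FPA is not described in enough detail in the abstract to reproduce. As a baseline reference point only, a compact sketch of the standard Flower Pollination Algorithm applied to a task-to-machine assignment could look like this; `energy_cost`, the population size, and the Lévy-flight parameters are all assumptions.

```python
import math
import random

def levy_step(beta=1.5):
    """Approximate Lévy-flight step (Mantegna's algorithm)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def fpa_allocate(energy_cost, n_tasks, n_machines, pop=20, iters=200, p_switch=0.8):
    """Baseline FPA over task->machine assignments; energy_cost(assignment)
    returns the value to minimize. The knowledge base of KB-FPA would plug
    in here (e.g., seeding or guiding the population) but is not shown."""
    def clamp(x):
        return min(n_machines - 1, max(0, int(round(x))))
    flowers = [[random.randrange(n_machines) for _ in range(n_tasks)]
               for _ in range(pop)]
    best = min(flowers, key=energy_cost)
    for _ in range(iters):
        for i, f in enumerate(flowers):
            if random.random() < p_switch:   # global pollination toward best
                cand = [clamp(x + levy_step() * (b - x)) for x, b in zip(f, best)]
            else:                            # local pollination between peers
                a, b = random.sample(flowers, 2)
                eps = random.random()
                cand = [clamp(x + eps * (y - z)) for x, y, z in zip(f, a, b)]
            if energy_cost(cand) < energy_cost(f):
                flowers[i] = cand
        best = min(flowers + [best], key=energy_cost)
    return best

# Toy usage with a hypothetical per-machine power table:
# POWER = [1.0, 1.4, 0.8]
# best = fpa_allocate(lambda a: sum(POWER[m] for m in a), n_tasks=10, n_machines=3)
```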
Aerial imagery analysis is critical for many research fields. However, obtaining frequent high-quality aerial images is not always accessible due to its high effort and cost requirements. One solution is to use the Gr...
Supervised learning is a fundamental framework used to train machine learning systems. A supervised learning problem is often formulated using an i.i.d. assumption that restricts model attention to a single relevant s...
Image classification in computer vision has seen a tremendous amount of success in recent years. Deep learning has played a pivotal role in achieving human-level performance in many image recognition challenges and benchmarks. Yet even though image classification has been so successful, closely related domains have not taken full advantage of the efforts put into the development of image classification methods. One such closely related field is object detection. Object detection, or localisation, is a computer vision problem whose solutions have not yet reached human-level performance. Many challenges arise when developing object detection models for newly generated domains, one of which is the labelling of datasets. Dataset preparation is one of the most cumbersome and expensive tasks to accomplish while developing an object detection model. Although image classifiers are used as feature extractors in object detection training regimes, their localisation abilities are barely studied. In this paper, we propose an object detection training regime that does not rely on bounding-box-labelled datasets, and is hence unsupervised in nature, being based solely on trained image classifiers. We build on our hypothesis that "if an image classifier is able to predict what object is in the input image, then it must have information about where the object is; we just need a mechanism to extract that information from it". Precisely, we divide the input image into patches of the same size and employ a parameter-restricted convolutional classifier on each patch to predict whether it contains the object or not; we call this our patch-based image classifier (the object here is the prediction of the trained image classifier). Training the patch-based classifier is not straightforward, as there are no true labels for the patches against which we could minimise a binary cross-entropy. Therefore, we propose a loss function weighted by an importance map, which we generate using Grad-CAM.
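The abstract truncates right at the importance-map generation (most likely Grad-CAM, given the wording), so the exact loss formulation is not recoverable here. One plausible, hedged reading — pooling a Grad-CAM map of the trained classifier down to the patch grid and using it as a soft target for the patch classifier's binary cross-entropy — might be sketched as follows; the pooling, normalization, and tensor shapes are assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def importance_weighted_bce(patch_logits, importance_map):
    """Supervise per-patch logits with a pixel-level importance map
    (e.g., Grad-CAM of the trained image classifier). High-importance
    patches act as soft positives, low-importance ones as soft
    negatives -- no bounding-box labels required.
    patch_logits: (N, 1, h, w) patch-grid logits.
    importance_map: (N, 1, H, W) non-negative importance scores."""
    # Pool the pixel-level importance map to one score per patch cell.
    targets = F.adaptive_avg_pool2d(importance_map, patch_logits.shape[-2:])
    # Rescale to [0, 1] so the scores can serve as soft BCE targets.
    targets = (targets - targets.min()) / (targets.max() - targets.min() + 1e-7)
    return F.binary_cross_entropy_with_logits(patch_logits, targets)
```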