In recent times, artificial intelligence (AI) and machine learning (ML) technologies have demonstrated remarkable capabilities in addressing real-time needs and challenges across multiple domains and fields. In the wo...
In this paper, an application-based solution is implemented for a smart irrigation system using IoT and sensors. The proposed model provides the user with a prediction of the time before which the plant must be sprayed, to compar...
High-dimensional and incomplete (HDI) matrices are primarily generated in all kinds of big-data-related practical applications. A latent factor analysis (LFA) model is capable of conducting efficient representation learning on an HDI matrix, and its hyper-parameter adaptation can be implemented through a particle swarm optimizer (PSO) to meet scalability requirements. However, conventional PSO suffers from premature convergence, which leads to accuracy loss in the resultant LFA model. To address this thorny issue, this study merges the information of each particle's state migration into its evolution process, following the principle of a generalized momentum method, to improve its search ability, thereby building a state-migration particle swarm optimizer (SPSO) whose theoretical convergence is rigorously proved in this study. SPSO is then incorporated into an LFA model to implement efficient hyper-parameter adaptation without accuracy loss. Experiments on six HDI matrices indicate that an SPSO-incorporated LFA model outperforms state-of-the-art LFA models in prediction accuracy for the missing data of an HDI matrix, with competitive computational efficiency. Hence, SPSO ensures efficient and reliable hyper-parameter adaptation in an LFA model, enabling practical and accurate representation learning for HDI matrices.
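To make the state-migration idea concrete, below is a minimal, hedged sketch of a PSO velocity update augmented with a momentum-style migration term. The abstract does not specify the exact SPSO update rule; the coefficient `beta` and the `x - x_prev` migration term are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: one PSO update step with an extra momentum-style term that
# folds each particle's recent state migration (x - x_prev) into its velocity.
# The exact SPSO rule is not given in the abstract; `beta` and the migration
# term below are illustrative assumptions.
def spso_step(x, x_prev, v, pbest, gbest,
              w=0.7, c1=1.5, c2=1.5, beta=0.1, rng=None):
    """x, x_prev, v, pbest: (n_particles, dim); gbest: (dim,)."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = (w * v
             + c1 * r1 * (pbest - x)     # cognitive pull toward personal best
             + c2 * r2 * (gbest - x)     # social pull toward global best
             + beta * (x - x_prev))      # state-migration (momentum) term
    return x + v_new, v_new
```

In the LFA setting, each particle's position would encode candidate hyper-parameters (for example, a learning rate and a regularization coefficient), with fitness measured by the validation error of the resulting model.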
The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical imaging. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers. The proposed methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and ***. Performance was assessed using metrics such as the Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values, indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the…
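As a rough illustration of the 2.5D formulation, the sketch below stacks a target slice and its neighbouring slices from both PET and CT volumes into one channel axis, so that 2D convolutions still receive local 3D context, and computes the Dice metric used for evaluation. The slab width of three slices and the dual-modality channel layout are assumptions, not the paper's exact configuration.

```python
import numpy as np

# Hedged sketch of the 2.5D input formulation: stack a target slice and its
# neighbours from both modalities as channels so 2D convolutions see local 3D
# context. The slab width (3 slices) and PET/CT channel layout are assumptions.
def slab_2p5d(pet, ct, z, half=1):
    """pet, ct: (D, H, W) volumes. Returns a (2*(2*half+1), H, W) input slab."""
    zs = np.clip(np.arange(z - half, z + half + 1), 0, pet.shape[0] - 1)
    return np.concatenate([pet[zs], ct[zs]], axis=0)

def dice(pred, gt, eps=1e-6):
    """Dice Similarity Coefficient for boolean masks, as used for evaluation."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```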
This study introduces CLIP-Flow, a novel network for generating images from a given image or text. To effectively utilize the rich semantics contained in both modalities, we designed a semantics-guided methodology for image- and text-to-image synthesis. In particular, we adopted Contrastive Language-Image Pretraining (CLIP) as an encoder to extract semantics and StyleGAN as a decoder to generate images from such semantics. Moreover, to bridge the embedding space of CLIP and the latent space of StyleGAN, Real NVP is employed and modified with activation normalization and invertible convolution. As the images and text in CLIP share the same representation space, text prompts can be fed directly into CLIP-Flow to achieve text-to-image synthesis. We conducted extensive experiments on several datasets to validate the effectiveness of the proposed image-to-image synthesis method. In addition, we tested on the public dataset Multi-Modal CelebA-HQ for text-to-image synthesis. We validated that our approach can generate high-quality text-matching images, and it is comparable with state-of-the-art methods, both qualitatively and quantitatively.
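A minimal sketch of the bridging component follows, assuming a Real NVP-style affine coupling layer that maps a 512-dimensional CLIP embedding toward a StyleGAN latent code. The dimensions and MLP design are illustrative, and the activation normalization layers mentioned above are omitted here.

```python
import torch
import torch.nn as nn

# Hedged sketch of one Real NVP affine coupling layer bridging a CLIP
# embedding and a StyleGAN latent code. Dimensions and the MLP design are
# assumptions; the activation normalization the abstract mentions is omitted.
class AffineCoupling(nn.Module):
    def __init__(self, dim=512, hidden=1024):
        super().__init__()
        # Predicts per-dimension log-scale and shift for the second half.
        self.net = nn.Sequential(nn.Linear(dim // 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_s) + t                      # invertible affine map
        return torch.cat([x1, y2], dim=-1), log_s.sum(-1)   # output, log-det

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=-1)
```

Because each coupling layer is exactly invertible, a stack of them can map CLIP embeddings to latent codes for synthesis and back, which is what makes a flow a natural bridge between the two spaces.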
Learning network dynamics from the empirical structure and spatio-temporal observation data is crucial to revealing the interaction mechanisms of complex networks in a wide range of domains. However, most existing methods only aim at learning the network dynamic behaviors generated by a specific ordinary differential equation instance, making them ineffective for new ones, and they generally require dense observations. Observed data, especially from emerging network dynamics, are usually difficult to obtain, which hinders model learning. Therefore, learning accurate network dynamics with sparse, irregularly-sampled, partial, and noisy observations remains a fundamental challenge. We introduce a new concept, the stochastic skeleton, and its neural implementation, neural ODE processes for network dynamics (NDP4ND), a new class of stochastic processes governed by stochastic, data-adaptive network dynamics, to overcome this challenge and learn continuous network dynamics from scarce observations. Intensive experiments on various network dynamics in ecological population evolution, phototaxis movement, brain activity, epidemic spreading, and real-world empirical systems demonstrate that the proposed method has excellent data adaptability and computational efficiency, and can adapt to unseen emerging network dynamics, producing accurate interpolation and extrapolation while reducing the ratio of required observation data to only about 6% and improving the learning speed for new dynamics by three orders of magnitude.
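The sketch below shows the generic form such models build on: node states evolving as dx_i/dt = f(x_i) + Σ_j A_ij g(x_i, x_j), integrated with a simple Euler scheme. The stochastic skeleton and neural-process machinery of NDP4ND are omitted; the MLPs f and g and the fixed-step integrator are simplifying assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of neural network dynamics: self-dynamics f plus pairwise
# coupling g weighted by the adjacency matrix. NDP4ND's stochastic and
# data-adaptive components are omitted for brevity.
class NetworkODE(nn.Module):
    def __init__(self, dim=1, hidden=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.g = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, x, adj):
        """x: (N, dim) node states; adj: (N, N) adjacency. Returns dx/dt."""
        n = x.shape[0]
        pairs = torch.cat([x.unsqueeze(1).expand(n, n, -1),    # x_i at [i, j]
                           x.unsqueeze(0).expand(n, n, -1)],   # x_j at [i, j]
                          dim=-1)
        coupling = (adj.unsqueeze(-1) * self.g(pairs)).sum(dim=1)  # sum over j
        return self.f(x) + coupling

def euler_rollout(model, x0, adj, dt=0.01, steps=100):
    """Integrate the learned dynamics forward with a fixed-step Euler scheme."""
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * model(x, adj)
        xs.append(x)
    return torch.stack(xs)
```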
Pixel-level structure segmentation has attracted considerable attention, playing a crucial role in autonomous driving within the metaverse and enhancing comprehension in light field-based machine vision. However, current light field modeling methods fail to integrate appearance and geometric structural information into a coherent semantic space, thereby limiting the capability of light field transmission for visual tasks. In this paper, we propose a general light field modeling method for pixel-level structure segmentation, comprising a generative light field prompting encoder (LF-GPE) and a prompt-based masked light field pretraining (LF-PMP) paradigm. Our LF-GPE, serving as a light field backbone, can extract both appearance and geometric structural cues simultaneously. It aligns these features into a unified visual space, facilitating semantic alignment. Meanwhile, our LF-PMP, during the pretraining phase, integrates a mixed light field and a multi-view light field reconstruction. It prioritizes the geometric structural properties of the light field, enabling the light field backbone to accumulate a wealth of prior knowledge. We evaluate our pretrained LF-GPE on two downstream tasks: light field salient object detection and semantic segmentation. Experimental results demonstrate that LF-GPE can effectively learn high-quality light field features and achieve highly competitive performance in pixel-level segmentation tasks.
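As a loose illustration of masked light field pretraining, the sketch below masks random sub-aperture views and scores a backbone on reconstructing them, which pushes the model to exploit cross-view geometric structure. The mask ratio, zero-masking, and L2 loss are assumptions; the actual LF-PMP paradigm is richer than this.

```python
import torch

# Hedged sketch of masked multi-view pretraining: random sub-aperture views
# are zeroed out and the backbone is scored on reconstructing them, forcing
# it to exploit cross-view geometric structure. Mask ratio and L2 loss are
# assumptions, not the paper's exact recipe.
def mask_views(lf, ratio=0.5):
    """lf: (V, C, H, W) sub-aperture views. Returns masked copy and view mask."""
    mask = torch.rand(lf.shape[0]) < ratio
    masked = lf.clone()
    masked[mask] = 0.0
    return masked, mask

def reconstruction_loss(backbone, lf):
    masked, mask = mask_views(lf)
    pred = backbone(masked)              # assumed to output (V, C, H, W)
    return ((pred[mask] - lf[mask]) ** 2).mean()
```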
Authors:
A.E.M. Eljialy, Mohammed Yousuf Uddin, Sultan Ahmad
Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia; Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia, and also with the University Center for Research and Development (UCRD), Department of Computer Science and Engineering, Chandigarh University, Punjab, India
Intrusion detection systems (IDSs) are deployed to detect anomalies in real time. They classify a network's incoming traffic as benign or anomalous (attack). An efficient and robust IDS in software-defined networks is an indispensable component of network security. The main challenges of such an IDS are achieving zero or extremely low false positive rates and high detection rates. Internet of Things (IoT) networks run on devices with minimal resources, which makes deploying traditional IDSs in IoT networks infeasible. Machine learning (ML) techniques are extensively applied to build robust IDSs, and many researchers have utilized different ML methods and techniques to address the above challenges. The development of an efficient IDS starts with a good feature selection process to avoid overfitting the ML model. This work proposes a multiple feature selection process followed by classification. In this study, a software-defined networking (SDN) dataset is used to train and test the proposed model. The model applies multiple feature selection techniques to select high-scoring features from a larger set of features; highly relevant features for anomaly detection are selected on the basis of their scores to generate the candidate dataset. Multiple classification algorithms are then applied to the candidate dataset to build models. The proposed model exhibits considerable improvement in the detection of attacks, with high accuracy and low false positive rates, even with only a few selected features.
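A minimal sketch of such a multiple-feature-selection pipeline: two scoring methods each rank the features, the union of their high scorers forms the candidate dataset, and a classifier is trained on it. The choice of chi-squared and mutual-information scores, k = 10, and the random forest are illustrative assumptions, as the abstract does not fix these details.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.model_selection import train_test_split

# Hedged sketch: two scoring methods each select high-scoring features, their
# union forms the candidate dataset, and a classifier is trained on it.
# chi2 (requires non-negative features), mutual information, k=10, and the
# random forest are illustrative assumptions not fixed by the abstract.
def candidate_features(X, y, k=10):
    """X: (n_samples, n_features) non-negative numpy array; y: labels."""
    keep = (SelectKBest(chi2, k=k).fit(X, y).get_support()
            | SelectKBest(mutual_info_classif, k=k).fit(X, y).get_support())
    return X[:, keep], keep

def train_ids(X, y):
    Xc, keep = candidate_features(X, y)
    Xtr, Xte, ytr, yte = train_test_split(Xc, y, test_size=0.2, stratify=y)
    clf = RandomForestClassifier(n_estimators=200).fit(Xtr, ytr)
    return clf, keep, clf.score(Xte, yte)   # accuracy on held-out traffic
```

Taking the union rather than the intersection of the selected features keeps any feature either method considers relevant, at the cost of a slightly larger candidate dataset.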
In blockchain networks, transactions can be transmitted through channels. The existing transmission methods depend on their routing information. If a node randomly chooses a channel to transmit a transaction, the transmission may be aborted due to insufficient funds (also called balance) or a low transmission rate. To increase the success rate and reduce transmission delay across all transactions, this work proposes a transaction transmission model for blockchain channels based on non-cooperative game theory. Channel balance, channel states, and transmission probability are fully considered. This work then presents an optimized channel transaction transmission algorithm. First, channel balances are analyzed and suitable channels are selected if their balance is sufficient. Second, a Nash equilibrium point is found by using an iterative sub-gradient method, and its related channels are then used to transmit transactions. The proposed method is compared with two state-of-the-art approaches: SilentWhispers and SpeedyMurmurs. Experimental results show that the proposed method improves the transmission success rate, reduces transmission delay, and effectively decreases transmission overhead in comparison with its two competitive peers.
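The two-stage procedure can be sketched as follows: first drop channels whose balance cannot cover the transaction, then run a projected sub-gradient iteration on the transmission probabilities until they stabilize, approximating the Nash equilibrium point. The quadratic congestion-delay cost (whose gradient is p_i / rate_i) is an assumption for illustration, not the paper's payoff function.

```python
import numpy as np

# Hedged sketch of the two-stage procedure: (1) keep only channels whose
# balance covers the transaction amount; (2) adjust transmission
# probabilities with a projected sub-gradient iteration until they stabilize.
def project_simplex(p):
    """Euclidean projection of p onto the probability simplex."""
    u = np.sort(p)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(p)) + 1) > 0)[0][-1]
    return np.maximum(p + (1.0 - css[rho]) / (rho + 1), 0.0)

def choose_channels(balances, rates, amount, steps=200, lr=0.05):
    idx = np.flatnonzero(balances >= amount)   # stage 1: sufficient balance
    p = np.full(len(idx), 1.0 / len(idx))      # start from a uniform choice
    for _ in range(steps):                     # stage 2: sub-gradient descent
        p = project_simplex(p - lr * p / rates[idx])
    return idx, p                              # channels and their usage probs
```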
Speculative execution attacks can leak arbitrary program data under malicious speculation, presenting a severe security threat. Based on two key observations, this paper presents a software-transparent defense mechanism called speculative secret flow tracking (SSFT), which is capable of defending against all cache-based speculative execution attacks with a low performance overhead. First, we observe that the attacker must use array or pointer variables in the victim code to access arbitrary memory data. Therefore, we propose a strict definition of secret data to reduce the amount of data to be protected. Second, if a load is not data-dependent or control-dependent on secrets, its speculative execution will not leak any secrets. Accordingly, this paper introduces the concept of speculative secret flow to analyze how secret data are obtained and propagated during speculative execution. By tracking speculative secret flow in hardware, SSFT can identify all unsafe speculative loads (USLs) that are dependent on secrets. Then, SSFT exploits three different methods to constrain USLs' speculative execution and prevent them from leaking secrets into the cache and translation lookaside buffer (TLB) states. This paper evaluates the performance of SSFT on the SPEC CPU 2006 workloads, and the results show that SSFT is effective and its performance overhead is very low. To defend against all speculative execution attack variants, SSFT only incurs an average slowdown of 4.5% (Delay USL-L1Miss) or 3.8% (Invisible USLs) compared to a non-secure processor. Our analysis also shows that SSFT maintains a low hardware overhead.
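A loose software model of the tracking rule may help fix intuition: a register becomes tainted once its value derives from secret data, and a speculative load whose address register is tainted is flagged as a USL. SSFT itself tracks this in hardware at the micro-op level; the instruction encoding below is purely an illustrative assumption.

```python
# Hedged, purely illustrative software model of speculative secret flow
# tracking. Real SSFT operates on hardware micro-architectural state; the
# dict-based instruction encoding here is an assumption.
def track_speculative_flow(instructions, secret_regs):
    tainted = set(secret_regs)
    unsafe_loads = []
    for i, ins in enumerate(instructions):
        if ins["op"] == "load":
            if ins["speculative"] and ins["addr_reg"] in tainted:
                unsafe_loads.append(i)       # USL: would leak via cache/TLB state
            if ins["addr_reg"] in tainted:
                tainted.add(ins["dst"])      # loaded value depends on a secret
        elif ins["op"] == "alu" and tainted & set(ins["srcs"]):
            tainted.add(ins["dst"])          # taint propagates through ALU ops
    return unsafe_loads
```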