Artificial Intelligence (AI) is transforming numerous domains, including bioinformatics and information extraction systems, by advancing data processing capabilities, enhancing precision, and facilitating automation. ...
Behavior-Driven Development (BDD) user stories are widely used in agile methods for capturing user requirements and acceptance criteria due to their simplicity and clarity. However, the concise structure of BDD-based ...
In the contemporary landscape, autonomous vehicles (AVs) have emerged as a prominent technological advancement globally. Despite their widespread adoption, significant hurdles remain, with security standing out as a c...
Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing power. Despite the popularity of ML techniques, only a few research studies have focused on the application of ML, especially supervised learning techniques, in Requirements Engineering (RE) activities to solve the problems that occur in the RE process. The authors focus on a systematic mapping of past work to investigate those studies that applied supervised learning techniques to RE activities over the period 2002–…. The authors aim to investigate the research trends, main RE activities, ML algorithms, and data sources that were studied during this period. …-five research studies were selected based on our exclusion and inclusion criteria. The results show that the scientific community used 57 ML algorithms. Of those, researchers most often used the following five ML algorithms in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-nearest neighbour Classifier, and Random Forest. The results also show that researchers used these algorithms in eight major RE activities: requirements analysis, failure prediction, effort estimation, quality, traceability, business rules identification, content classification, and detection of problems in requirements written in natural language. The selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.
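As an illustrative aside, the classifiers named in this mapping study (Decision Tree, SVM, Naïve Bayes, KNN, Random Forest) are typically applied to RE text such as requirement statements. The sketch below is a minimal, hypothetical scikit-learn pipeline for classifying requirements with one of these algorithms; the example sentences, labels, and pipeline settings are assumptions for illustration, not taken from any of the surveyed studies.

    # Hypothetical sketch: classifying requirement statements as functional vs. non-functional
    # with a TF-IDF + Naive Bayes pipeline (one of the algorithms named in the mapping study).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    requirements = [
        "The system shall allow users to reset their password via email.",
        "The system shall respond to search queries within 2 seconds.",
        "The administrator shall be able to deactivate user accounts.",
        "The application shall encrypt all stored personal data.",
    ]
    labels = ["functional", "non-functional", "functional", "non-functional"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(requirements, labels)
    print(model.predict(["The system shall log out idle users after 15 minutes."]))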
We present Q-Cogni, an algorithmically integrated causal reinforcement learning framework that redesigns Q-Learning to improve the learning process with causal inference. Q-Cogni achieves improved policy quality and l...
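The snippet names Q-Learning as the base algorithm that Q-Cogni redesigns; the causal-inference component is not described in the excerpt, so the sketch below only shows the standard tabular Q-Learning update such a framework builds on. The environment interface (reset, actions, step) and the hyperparameters are illustrative assumptions, not part of Q-Cogni.

    # Minimal tabular Q-Learning loop (the baseline that Q-Cogni extends);
    # the causal-reasoning layer of Q-Cogni itself is not reproduced here.
    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        q = defaultdict(float)  # (state, action) -> estimated return
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                actions = env.actions(state)
                if random.random() < epsilon:
                    action = random.choice(actions)          # explore
                else:
                    action = max(actions, key=lambda a: q[(state, a)])  # exploit
                next_state, reward, done = env.step(state, action)
                best_next = max((q[(next_state, a)] for a in env.actions(next_state)), default=0.0)
                q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
                state = next_state
        return q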
We present a novel attention-based mechanism to learn enhanced point features for point cloud processing tasks, e.g., classification and segmentation. Unlike prior studies, which were trained to optimize the weights of a pre-selected set of attention points, our approach learns to locate the best attention points to maximize the performance of a specific task, e.g., point cloud classification. Importantly, we advocate the use of a single attention point to facilitate semantic understanding in point feature learning. Specifically, we formulate a new and simple convolution, which combines convolutional features from an input point and its corresponding learned attention point (LAP). Our attention mechanism can be easily incorporated into state-of-the-art point cloud classification and segmentation networks. Extensive experiments on common benchmarks, such as ModelNet40, ShapeNet Part, and S3DIS, all demonstrate that our LAP-enabled networks consistently outperform the respective original networks, as well as other competitive alternatives that employ multiple attention points, either pre-selected or learned under our LAP framework.
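The abstract describes a convolution that fuses features of an input point with features at its learned attention point (LAP). The paper's actual architecture is not given in the excerpt; the PyTorch sketch below only illustrates the general idea of predicting a per-point attention offset, gathering features from the nearest real point to that attended location, and fusing the two feature vectors. All layer sizes, the offset head, and the nearest-neighbour gathering step are assumptions for illustration.

    # Illustrative sketch (not the paper's implementation): each point predicts an
    # attention offset, features are gathered from the nearest point to the attended
    # location, and the two feature vectors are fused by a small shared MLP.
    import torch
    import torch.nn as nn

    class LAPConv(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.offset = nn.Linear(in_dim, 3)          # predicts the attention-point offset
            self.fuse = nn.Sequential(
                nn.Linear(2 * in_dim, out_dim), nn.ReLU(),
                nn.Linear(out_dim, out_dim),
            )

        def forward(self, xyz, feats):
            # xyz: (B, N, 3) coordinates, feats: (B, N, C) per-point features
            attn_xyz = xyz + self.offset(feats)          # learned attention point per input point
            idx = torch.cdist(attn_xyz, xyz).argmin(dim=-1)   # nearest real point to each attention point
            attn_feats = torch.gather(
                feats, 1, idx.unsqueeze(-1).expand(-1, -1, feats.size(-1))
            )
            return self.fuse(torch.cat([feats, attn_feats], dim=-1))

    # Example: a batch of 2 clouds, 1024 points, 32-dimensional features
    x, f = torch.rand(2, 1024, 3), torch.rand(2, 1024, 32)
    print(LAPConv(32, 64)(x, f).shape)  # -> torch.Size([2, 1024, 64])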
This paper comprehensively analyzes the Manta Ray Foraging Optimization (MRFO) algorithm and its integration into diverse academic fields. Introduced in 2020, MRFO stands as a novel metaheuristic algorithm, drawing inspiration from manta rays' unique foraging behaviors, specifically cyclone, chain, and somersault foraging. These biologically inspired strategies allow for effective solutions to intricate physical problems. With its potent exploitation and exploration capabilities, MRFO has emerged as a promising solution for complex optimization problems, and its utility and benefits have found traction in numerous academic fields. Since its inception in 2020, a plethora of MRFO-based research has been featured in esteemed international journals such as IEEE, Wiley, Elsevier, Springer, MDPI, Hindawi, and Taylor & Francis, as well as at international conference proceedings. This paper consolidates the available literature on MRFO applications, covering various adaptations such as hybridized, improved, and other MRFO variants, alongside optimization problems. The observed trends indicate that 12%, 31%, 8%, and 49% of MRFO studies are distributed across these four categories, respectively.
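To make the three foraging behaviors concrete, the sketch below is a simplified, hedged implementation of the cyclone, chain, and somersault moves for minimizing a test function. The coefficients follow common presentations of the 2020 MRFO algorithm and may differ in detail from the original paper; the population size, iteration count, and test function are arbitrary illustrative choices.

    # Simplified sketch of the MRFO foraging moves (chain, cyclone, somersault).
    import numpy as np

    def mrfo(f, dim, lb, ub, pop=30, iters=200, somersault=2.0):
        rng = np.random.default_rng(0)
        X = rng.uniform(lb, ub, (pop, dim))
        best = X[np.argmin([f(x) for x in X])].copy()
        for t in range(1, iters + 1):
            for i in range(pop):
                r = rng.random(dim)
                prev = X[i - 1] if i > 0 else best
                if rng.random() < 0.5:                      # cyclone foraging
                    r1 = rng.random(dim)
                    beta = 2 * np.exp(r1 * (iters - t + 1) / iters) * np.sin(2 * np.pi * r1)
                    ref = best if t / iters > rng.random() else rng.uniform(lb, ub, dim)
                    X[i] = ref + r * (prev - X[i]) + beta * (ref - X[i])
                else:                                       # chain foraging
                    alpha = 2 * r * np.sqrt(np.abs(np.log(r + 1e-12)))
                    X[i] = X[i] + r * (prev - X[i]) + alpha * (best - X[i])
                X[i] = np.clip(X[i], lb, ub)
                if f(X[i]) < f(best):
                    best = X[i].copy()
            for i in range(pop):                            # somersault foraging around the best
                r2, r3 = rng.random(dim), rng.random(dim)
                X[i] = np.clip(X[i] + somersault * (r2 * best - r3 * X[i]), lb, ub)
                if f(X[i]) < f(best):
                    best = X[i].copy()
        return best, f(best)

    print(mrfo(lambda x: np.sum(x ** 2), dim=5, lb=-10, ub=10))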
Recent years have witnessed the rapid growth of social network services. Real-world social networks are huge and changing over time. Consequently, the problems in this area have become more complex. Community detectio...
Audio Deepfakes are highly realistic fake audio recordings produced by AI tools that clone human voices. Advancements in Text-Based Speech Generation (TTS) and Vocal Conversion (VC) technologies have made it easier to create realistic synthetic and imitative speech, making audio Deepfakes a common and potentially dangerous form of deception. Well-known people, such as politicians and celebrities, are often targeted: fake recordings make them appear to say controversial things, causing trouble on social media. Even children's voices are cloned to scam parents into ransom payments. Therefore, developing effective algorithms to distinguish Deepfake audio from real audio is critical to preventing such frauds. Various Machine Learning (ML) and Deep Learning (DL) techniques have been created to identify audio Deepfakes. However, most of these solutions are trained on datasets in English, Portuguese, French, and Spanish, raising concerns about their accuracy for other languages. The main goal of the research presented in this paper is to evaluate the effectiveness of deep learning neural networks in detecting audio Deepfakes in the Urdu language. Since there is no suitable dataset of Urdu audio available for this purpose, we created our own dataset (URFV) containing both genuine and fake audio recordings. The original/real Urdu audio recordings were gathered from random YouTube podcasts, and Deepfake versions were generated using the RVC model. Our dataset has three versions, with clips of 5, 10, and 15 seconds. We built various deep learning neural networks (RNN+LSTM, CNN+attention, TCN, CNN+RNN) to detect Deepfake audio made through imitation or synthetic techniques. The proposed approach extracts Mel-Frequency Cepstral Coefficient (MFCC) features from the audios in the dataset. When tested and evaluated, our models' accuracy across datasets was noteworthy: 97.78% (5s), 98.89% (10s), and 98.33% (15s) were remarkable results for the RNN+LSTM ...
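The pipeline described here (MFCC features fed to a recurrent classifier) can be sketched in a few lines. The example below uses librosa for MFCC extraction and a small Keras LSTM binary classifier; the file paths, sampling rate, clip length, and layer sizes are placeholders rather than the URFV or RNN+LSTM configuration reported in the paper.

    # Illustrative MFCC + LSTM audio classifier (real vs. Deepfake); all settings are assumptions.
    import numpy as np
    import librosa
    import tensorflow as tf

    def mfcc_features(path, sr=16000, n_mfcc=40, duration=5.0):
        y, _ = librosa.load(path, sr=sr, duration=duration)
        y = librosa.util.fix_length(y, size=int(sr * duration))   # pad/trim to a fixed clip length
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # (n_mfcc, frames)
        return mfcc.T                                             # (frames, n_mfcc) for the LSTM

    def build_model(frames, n_mfcc=40):
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(frames, n_mfcc)),
            tf.keras.layers.LSTM(128),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),       # 1 = Deepfake, 0 = real
        ])

    # Hypothetical usage with a list of (path, label) pairs named `samples`:
    # X = np.stack([mfcc_features(p) for p, _ in samples])
    # y = np.array([label for _, label in samples])
    # model = build_model(frames=X.shape[1])
    # model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(X, y, epochs=20, validation_split=0.2)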
The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical imaging. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers. The proposed methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and …; performance was assessed using metrics such as the Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values, indicating superior segmentation performance. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the ...
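Two of the ideas in this abstract are easy to show concretely: the Dice/Jaccard overlap metrics used for evaluation, and a 2.5D input built by stacking neighbouring slices as channels. The sketch below is a simplified illustration; the tensor shapes and the number of neighbouring slices are assumptions, not the paper's configuration.

    # Illustrative helpers: Dice/Jaccard on binary masks and a simple 2.5D slice stack.
    import numpy as np

    def dice(pred, target, eps=1e-7):
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    def jaccard(pred, target, eps=1e-7):
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        return (inter + eps) / (union + eps)

    def make_25d_input(volume, z, context=1):
        # Stack 2*context+1 neighbouring axial slices as channels for slice z.
        zs = np.clip(np.arange(z - context, z + context + 1), 0, volume.shape[0] - 1)
        return np.stack([volume[k] for k in zs], axis=0)   # (channels, H, W)

    volume = np.random.rand(64, 128, 128)                   # toy volume (D, H, W)
    print(make_25d_input(volume, z=10).shape)               # (3, 128, 128)
    mask = np.random.rand(128, 128) > 0.5
    print(dice(mask, mask), jaccard(mask, ~mask))           # ~1.0 and ~0.0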