Networks based on backscatter communication provide wireless data transmission in the absence of a power source. A backscatter device receives a radio frequency (RF) source and creates a backscattered signal that delivers data; this enables new services in battery-less domains with massive Internet-of-Things (IoT) deployments. Backscattering is highly energy-efficient in the context of massive IoT; in particular, long-range (LoRa) backscattering facilitates large IoT services. A backscatter network supports both timeslot- and contention-based transmission. Timeslot-based transmission ensures data delivery, but is not scalable to different numbers of transmitting devices. When contention-based transmission is used, collisions occur. To reduce collisions and increase transmission efficiency, the number of devices transmitting data must be controlled. To control device activation, the RF source range can be modulated by adjusting the RF source power during LoRa backscattering. This reduces the number of transmitting devices, and thus collisions and retransmissions, thereby improving transmission efficiency. We performed extensive simulations to evaluate the performance of our method.
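The abstract's core idea, throttling how many backscatter devices are active by shrinking the RF source range, and thereby reducing contention collisions, can be illustrated with a toy simulation. Everything below is a sketch under stated assumptions: the power-to-range mapping, the slotted-contention collision rule, and all parameter values are invented for illustration and are not the paper's actual protocol or propagation model.

```python
import random

def active_devices(distances, tx_power_dbm, k=30.0):
    """Devices activate only if they fall inside the RF source range.
    Toy model: range (m) grows with transmit power (illustrative
    mapping, not a real propagation model)."""
    rf_range = k * (10 ** (tx_power_dbm / 20.0))
    return [d for d in distances if d <= rf_range]

def collision_rate(n_devices, n_slots, trials=2000, seed=1):
    """Fraction of transmissions lost to collisions under slotted
    contention: a slot succeeds only if exactly one device picks it."""
    rng = random.Random(seed)
    lost = total = 0
    for _ in range(trials):
        slots = [0] * n_slots
        for _ in range(n_devices):
            slots[rng.randrange(n_slots)] += 1
        lost += sum(c for c in slots if c > 1)
        total += n_devices
    return lost / total if total else 0.0

rng = random.Random(7)
distances = [rng.uniform(1, 200) for _ in range(50)]
few = len(active_devices(distances, tx_power_dbm=5))    # small range
many = len(active_devices(distances, tx_power_dbm=20))  # large range
```

Lowering the source power shrinks the activation range, so fewer devices contend per round and the collision rate drops, which is the mechanism the abstract describes.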
Cyberbullying, a critical concern for digital safety, necessitates effective linguistic analysis tools that can navigate the complexities of language use in online environments. To tackle this challenge, our study introduces a new approach employing the Bidirectional Encoder Representations from Transformers (BERT) base model (cased), originally pretrained in English. The model is uniquely adapted to recognize the intricate nuances of Arabic online communication, a key aspect often overlooked in conventional cyberbullying detection systems. The model is an end-to-end solution that has been fine-tuned on a diverse dataset of Arabic social media (SM) tweets. Evaluation results on a diverse Arabic dataset collected from the 'X platform' demonstrate a notable increase in detection accuracy and sensitivity compared to existing methods. E-BERT shows a substantial improvement in performance, evidenced by an accuracy of 98.45%, precision of 99.17%, recall of 99.10%, and an F1 score of 99.14%. The proposed E-BERT not only addresses a critical gap in cyberbullying detection in Arabic online forums but also sets a precedent for applying cross-lingual pretrained models in regional language applications, offering a scalable and effective framework for enhancing online safety across Arabic-speaking communities.
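The accuracy, precision, recall, and F1 figures the abstract reports are standard confusion-matrix statistics. A minimal, framework-free sketch of how such metrics are computed for a binary bullying/non-bullying labeling follows; the confusion counts are made up for illustration and are not the paper's evaluation data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative counts only -- not the paper's data.
m = classification_metrics(tp=480, fp=10, fn=8, tn=502)
```

Note that precision and recall can both exceed accuracy (or vice versa) depending on class balance, which is why abstracts like this one report all four.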
The Travelling Salesman Problem (TSP) is a discrete hybrid optimization problem considered NP-hard. It aims to discover the shortest Hamiltonian route that visits each city precisely once and then returns to the starting point, making it the shortest route possible. This paper employed a Farmland Fertility Algorithm (FFA), inspired by agricultural land fertility, together with a hyper-heuristic technique based on the Modified Choice Function (MCF). The neighborhood search operator can use this strategy to automatically select the best heuristic method for making the best decision. Lin-Kernighan (LK) local search has been incorporated to increase the efficiency and performance of the suggested approach. 71 TSPLIB datasets have been compared with different algorithms to prove the proposed algorithm's performance and efficiency. The results indicated that the proposed algorithm outperforms comparable methods in terms of average mean computation time, average percentage deviation (PDav), and tour length.
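The hyper-heuristic idea, scoring low-level neighborhood moves and applying the currently best-scoring one, can be sketched as follows. This is a deliberately simplified stand-in: the scoring rule is a bare-bones choice function rather than the full MCF, the local moves are 2-opt and swap rather than Lin-Kernighan, and the instance is a toy ring of cities, not a TSPLIB dataset.

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts, rng):
    """Reverse a random segment of the tour."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def swap(tour, pts, rng):
    """Exchange two random cities."""
    t = tour[:]
    i, j = rng.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t

def hyper_heuristic(pts, iters=3000, seed=3):
    """Greedy choice-function sketch: keep a smoothed score (recent
    improvement) per heuristic and apply the best-scoring one."""
    rng = random.Random(seed)
    heuristics = [two_opt, swap]
    scores = [1.0] * len(heuristics)
    tour = list(range(len(pts)))
    best = tour_length(tour, pts)
    for _ in range(iters):
        k = max(range(len(heuristics)), key=lambda h: scores[h])
        cand = heuristics[k](tour, pts, rng)
        gain = best - tour_length(cand, pts)
        scores[k] = 0.9 * scores[k] + 0.1 * gain  # exponential smoothing
        if gain > 0:                              # accept improvements only
            tour, best = cand, best - gain
    return tour, best

# Toy instance: 12 cities on a circle, so the optimum is the ring order.
pts = [(math.cos(2 * math.pi * i / 12), math.sin(2 * math.pi * i / 12))
       for i in range(12)]
random.Random(0).shuffle(pts)
tour, length = hyper_heuristic(pts)
```

The real MCF additionally mixes in heuristic-pair synergy and elapsed-time terms; this sketch keeps only the "reward what recently improved the tour" core.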
Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing power. Despite the popularity of ML techniques, only a few research studies have focused on the application of ML, especially supervised learning techniques, in Requirements Engineering (RE) activities to solve the problems that occur in RE phases. The authors focus on a systematic mapping of past work to investigate those studies that focused on the application of supervised learning techniques in RE activities during the period from 2002 onward. The authors aim to investigate the research trends, main RE activities, ML algorithms, and data sources that were studied during this period. …-five research studies were selected based on our exclusion and inclusion criteria. The results show that the scientific community used 57 algorithms. Among those algorithms, researchers mostly used the five following ML algorithms in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-nearest neighbour Classifier, and Random Forest. The results show that researchers used these algorithms in eight major RE activities. Those activities are requirements analysis, failure prediction, effort estimation, quality, traceability, business rules identification, content classification, and detection of problems in requirements written in natural language. The selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT devices. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this paper. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system that can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT applications. The evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its potential. With an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier with 94.11% accuracy. Moreover, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are used. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
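The genetic-algorithm side of such a system, adaptively selecting a subset of active features so that detection stays accurate while on-device load drops, can be illustrated with a small, self-contained GA over bitmask feature subsets. The per-feature usefulness scores, cost penalty, and population settings below are all illustrative assumptions, not the GAADPSDNN configuration.

```python
import random

# Hypothetical per-feature usefulness scores (stand-ins for measured
# validation-accuracy contributions on real traffic data).
USEFULNESS = [0.30, 0.05, 0.22, 0.01, 0.18, 0.02, 0.25, 0.03]
COST_PER_FEATURE = 0.04  # penalty modelling on-device compute load

def fitness(mask):
    """Reward useful features, penalize the size of the active set."""
    score = sum(u for u, bit in zip(USEFULNESS, mask) if bit)
    return score - COST_PER_FEATURE * sum(mask)

def evolve(n_features=8, pop_size=20, generations=40, seed=5):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]            # one-point crossover
            i = rng.randrange(n_features)
            child[i] ^= rng.random() < 0.2       # occasional bit-flip
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the cost term penalizes every active feature, the GA converges toward masks that drop the low-usefulness features, which is the "adaptive selection of active features" the abstract credits for the reduced computational load.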
This paper explores the global spread of the COVID-19 virus since 2019, impacting 219 countries worldwide. Despite the absence of a definitive cure, the utilization of artificial intelligence (AI) methods for disease diagnosis has demonstrated commendable effectiveness in promptly diagnosing patients and curbing infection transmission. The study introduces a deep learning-based model tailored for COVID-19 detection, leveraging three prevalent medical imaging modalities: computed tomography (CT), chest X-ray (CXR), and ultrasound. Various deep transfer learning convolutional neural network (CNN) models have undergone assessment for each imaging modality. For each imaging modality, this study has selected the two most accurate models based on evaluation metrics such as accuracy and loss. Additionally, efforts have been made to prune unnecessary weights from these models to obtain more efficient and sparse models. By fusing these pruned models, enhanced performance has been achieved. The models have undergone rigorous training and testing using publicly available real-world medical datasets, focusing on classifying these datasets into three distinct categories: Normal, COVID-19 Pneumonia, and non-COVID-19 Pneumonia. The primary objective is to develop an optimized and swift model through strategies like transfer learning, ensemble learning, and reducing network complexity, making it easier for storage and transfer. The results of the trained network on test data exhibit promising outcomes. The accuracy of these models on the CT scan, X-ray, and ultrasound datasets stands at 99.4%, 98.9%, and 99.3%, respectively. Moreover, these models' sizes have been substantially reduced and optimized by 51.93%, 38.00%, and 69.07%, respectively. This study proposes a computer-aided coronavirus detection system based on three standard medical imaging techniques. The intention is to assist radiologists in accurately and swiftly diagnosing the disease, especially during the screening process.
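The pruning step the abstract describes, zeroing low-magnitude weights so the network becomes sparse and cheaper to store, can be sketched generically. This is plain unstructured magnitude pruning on a random weight list, an illustration of the technique class, not the paper's actual CNN or compression pipeline.

```python
import random

def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest
    absolute value (classic unstructured magnitude pruning)."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return weights[:]
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

rng = random.Random(42)
weights = [rng.gauss(0, 1) for _ in range(1000)]
pruned = prune_by_magnitude(weights, sparsity=0.5)
zeros = sum(1 for w in pruned if w == 0.0)
```

Sparse weight tensors compress well (zeros can be stored implicitly), which is how pruning translates into the size reductions of 38-69% the abstract reports.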
The growing field of urban monitoring has increasingly recognized the potential of utilizing autonomous technologies, particularly drone swarms. The deployment of intelligent drone swarms offers promising solutions for enhancing the efficiency and scope of urban condition monitoring. In this context, this paper introduces an innovative algorithm designed to navigate a swarm of drones through urban landscapes for monitoring tasks. The primary challenge addressed by the algorithm is coordinating drone movements from one location to another while circumventing obstacles, such as buildings. The algorithm incorporates three key components to optimize obstacle detection, navigation, and energy efficiency within a drone swarm. First, the algorithm utilizes a method to calculate the position of a virtual leader, acting as a navigational beacon to influence the overall direction of the swarm. Second, the algorithm identifies observers within the swarm based on the current situation. To further refine obstacle avoidance, the third component involves the calculation of angular velocity using fuzzy logic. This approach considers the proximity of detected obstacles through operational rangefinders and the target's location, allowing for a nuanced and adaptable computation of angular velocity. The integration of fuzzy logic enables the drone swarm to adapt to diverse urban conditions dynamically, ensuring practical obstacle avoidance. The proposed algorithm demonstrates enhanced performance in obstacle detection and navigation accuracy through comprehensive simulations. The results suggest that the intelligent obstacle avoidance algorithm holds promise for the safe and efficient deployment of autonomous mobile drones in urban monitoring applications.
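The fuzzy-logic component maps rangefinder distance and target bearing to an angular-velocity command. Below is a minimal sketch of that idea using triangular membership functions and a weighted-average (Sugeno-style) defuzzification; the membership breakpoints and output gains are invented for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_angular_velocity(obstacle_dist, target_bearing):
    """Close obstacles dominate and force a hard evasive turn; with no
    obstacle nearby, the drone turns gently toward the target bearing
    (radians, positive = left). Gains are illustrative."""
    near = tri(obstacle_dist, -1.0, 0.0, 5.0)   # strong below ~5 m
    far = tri(obstacle_dist, 2.0, 10.0, 1e9)    # dominant beyond ~10 m
    avoid = 1.5                                 # rad/s evasive turn rate
    seek = 0.4 * target_bearing                 # proportional seeking
    total = near + far
    return (near * avoid + far * seek) / total if total else 0.0
```

Because the two rules blend smoothly in the 2-10 m overlap region, the command degrades gracefully as an obstacle approaches instead of switching abruptly, which is the adaptability the abstract attributes to the fuzzy controller.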
The gaming industry, encompassing both console and mobile platforms, has experienced remarkable growth in recent years. Assessing emotional responses during gameplay presents a significant challenge. This study employ...
Real-time systems experience many safety and performance issues at run time due to different uncertainties in the environment. Systems are now becoming highly interactive and must be able to execute in a changing environment without experiencing any failure. A real-time system can have multiple modes of operation, such as safety and performance. The system can satisfy its safety and performance requirements by switching between the modes at run time. It is essential for designers to ensure that a multi-mode real-time system operates in the expected mode at run time. In this paper, we present a verification model that identifies the expected mode at run time and checks whether the multi-mode real-time system is operating in the correct mode or not. To determine the expected mode, we present a monitoring module that checks the environment of the system, identifies different real-world occurrences as events, determines their properties, and creates an event-driven dataset for failure analysis. The dataset consumes less memory than the raw input data obtained from the monitored environment. The event-driven dataset also facilitates onboard decision-making because it allows the system to perform a safety analysis by determining the probability of failure in each environmental situation. We use the probability of failure of the system to determine the safety mode in different environmental situations. To demonstrate the applicability of our proposed scheme, we design and implement a real-time traffic monitoring system that has two modes: safety and performance. The experimental analysis of our work shows that the verification model can identify the expected operating mode at run time based on the safety (probability of failure) and performance (usage) requirements of the system, and allows the system to operate in performance mode (in 3295 out of 3421 time intervals) and safety mode (in 126 out of 3421 time intervals). The experimental resul…
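The monitoring module's decision rule, estimating a probability of failure per environmental situation from an event-driven dataset and switching to safety mode when that probability crosses a threshold, can be sketched as follows. The event records, situation keys, and threshold value are illustrative assumptions, not the paper's traffic-system data.

```python
from collections import defaultdict

def failure_probability(events):
    """Aggregate an event-driven dataset into P(failure | situation)."""
    total = defaultdict(int)
    failed = defaultdict(int)
    for situation, is_failure in events:
        total[situation] += 1
        failed[situation] += int(is_failure)
    return {s: failed[s] / total[s] for s in total}

def select_mode(situation, probs, threshold=0.1):
    """Operate in safety mode when estimated failure risk is high;
    unseen situations default to safety (conservative fallback)."""
    return "safety" if probs.get(situation, 1.0) >= threshold else "performance"

# Hypothetical event log: (environmental situation, failure observed?)
events = ([("clear", False)] * 95 + [("clear", True)] * 5
          + [("fog", False)] * 6 + [("fog", True)] * 4)
probs = failure_probability(events)
```

Storing only per-situation counters instead of raw sensor streams is also what makes the event-driven dataset memory-light, as the abstract notes.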
As a pivotal enabler of intelligent transportation systems (ITS), the Internet of Vehicles (IoV) has aroused extensive attention from academia and industry. The exponential growth of computation-intensive, latency-sensitive, and privacy-aware vehicular applications in IoV results in the transformation from cloud computing to edge computing, which enables tasks to be offloaded to edge nodes (ENs) closer to vehicles for efficient execution. In the ITS environment, however, due to dynamic and stochastic computation offloading requests, it is challenging to efficiently orchestrate offloading decisions for application requirements. How to accomplish complex computation offloading of vehicles while ensuring data privacy remains challenging. In this paper, we propose an intelligent computation offloading with privacy protection scheme, named COPP. In particular, an Advanced Encryption Standard (AES)-based encryption method is utilized to implement privacy protection. Furthermore, an online offloading scheme is proposed to find optimal offloading policies. Finally, experimental results demonstrate that COPP significantly outperforms benchmark schemes in terms of both delay and energy consumption.
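The delay/energy trade-off at the heart of any offloading policy can be sketched with a standard toy cost model: execute locally (spend CPU time and local power) or transmit to an edge node (spend uplink time and radio power, then compute remotely). This covers only the offloading-decision side, not COPP's AES encryption or its online learning scheme, and every parameter name and value below is an illustrative assumption.

```python
def offload_cost(task_bits, local_cps, edge_cps, uplink_bps,
                 tx_power_w, local_power_w, cycles_per_bit=100,
                 w_delay=0.5, w_energy=0.5):
    """Weighted delay+energy cost of local vs. edge execution for one
    task. Toy model: edge path = transmission then remote compute;
    the vehicle spends energy only while computing locally or
    transmitting."""
    cycles = task_bits * cycles_per_bit
    local_delay = cycles / local_cps
    local_energy = local_delay * local_power_w
    tx_delay = task_bits / uplink_bps
    edge_delay = tx_delay + cycles / edge_cps
    edge_energy = tx_delay * tx_power_w
    local = w_delay * local_delay + w_energy * local_energy
    edge = w_delay * edge_delay + w_energy * edge_energy
    return ("edge", edge) if edge < local else ("local", local)

# Heavy task, good uplink: offloading wins.
d1, _ = offload_cost(8_000_000, local_cps=1e8, edge_cps=1e10,
                     uplink_bps=2e7, tx_power_w=0.5, local_power_w=2.0)
# Tiny task, poor uplink: local execution wins.
d2, _ = offload_cost(10_000, local_cps=1e8, edge_cps=1e10,
                     uplink_bps=1e3, tx_power_w=0.5, local_power_w=2.0)
```

An online scheme like the one the abstract mentions would repeatedly make this decision as channel rates and edge load change, rather than once with fixed parameters.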