We developed an information system using an object-oriented programming language and a distributed database (DDB) consisting of multiple interconnected databases across a computer network, managed by a distributed database management system …
The development of Internet of Things (IoT) technology is leading to a new era of smart applications such as smart transportation, smart buildings, and others. Together, these applications act as the building blocks of IoT-enabled smart cities. The high volume and high velocity of data generated by various smart city applications are sent to flexible and efficient cloud computing resources for processing. However, there is high computation latency due to the presence of a remote cloud server. Edge computing, which brings the computation close to the data source, is introduced to overcome this problem. In an IoT-enabled smart city environment, one of the main concerns is to consume the least amount of energy while executing tasks that satisfy the delay constraint. Efficient resource allocation at the edge helps to address this concern. In this paper, an energy and delay minimization problem in a smart city environment is formulated as a bi-objective edge resource allocation problem. First, we present a three-layer network architecture for IoT-enabled smart cities. Then, we design a learning automata-based edge resource allocation approach built on this three-layer architecture to solve the bi-objective minimization problem. Learning Automata (LA) is a reinforcement-based adaptive decision-maker that helps to find the best task and edge resource pairing. An extensive set of simulations is performed to demonstrate the applicability and effectiveness of the LA-based approach in the IoT-enabled smart city environment.
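The learning-automaton idea in this abstract can be sketched minimally. The snippet below is a linear reward-inaction (L_RI) automaton choosing among a few hypothetical edge nodes; the reward signal (pretending node 1 best balances energy and delay), the learning rate, and the action count are illustrative assumptions, not the paper's actual scheme.

```python
import random

class LearningAutomaton:
    """Minimal L_RI learning automaton over a fixed set of actions."""

    def __init__(self, n_actions, lr=0.1):
        self.n = n_actions
        self.lr = lr
        self.p = [1.0 / n_actions] * n_actions  # action probabilities

    def choose(self, rng=random):
        # Sample an action index according to the probability vector p.
        r, acc = rng.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return self.n - 1

    def reward(self, action):
        # L_RI update: shift probability mass toward the rewarded action;
        # on penalty, do nothing (reward-inaction).
        for i in range(self.n):
            if i == action:
                self.p[i] += self.lr * (1.0 - self.p[i])
            else:
                self.p[i] *= (1.0 - self.lr)

random.seed(0)  # deterministic run for illustration
la = LearningAutomaton(3)
for _ in range(200):
    a = la.choose()
    if a == 1:  # assumed environment: edge node 1 gives the best energy/delay trade-off
        la.reward(a)
print(max(range(3), key=lambda i: la.p[i]))  # index of the preferred edge node
```

In the paper's setting the reward would come from observed energy consumption and delay, not a hard-coded condition.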
Desertification greatly affects land deterioration, farming efficiency, economic growth, and health, especially in Gulf nations. Climate change has worsened desertification, making developmental issues in the area even more difficult. This research presents an enhanced framework utilizing the Internet of Things (IoT) for ongoing monitoring, data gathering, and analysis to evaluate desertification patterns. The framework utilizes Bayesian Belief Networks (BBN) to categorize IoT data, while a low-latency processing method on edge computing platforms enables effective detection of desertification trends. The classified data is subsequently analyzed using an Artificial Neural Network (ANN) optimized with a Genetic Algorithm (GA) for forecasting decisions. Using cloud computing infrastructure, the ANN-GA model examines intricate data connections to forecast desertification risk elements. Moreover, the Autoregressive Integrated Moving Average (ARIMA) model is employed to predict desertification over varied time intervals. Experimental simulations illustrate the effectiveness of the suggested framework, attaining enhanced performance on essential metrics: Temporal Delay (103.68 s); Classification Efficacy, with Sensitivity (96.44 %), Precision (95.56 %), Specificity (96.97 %), and F-Measure (96.69 %); Predictive Efficiency, with Accuracy (97.76 %) and Root Mean Square Error (RMSE) (1.95 %); along with Reliability (93.73 %) and Stability (75 %). The results of classification effectiveness and prediction performance emphasize the framework's ability to detect high-risk zones and predict the severity of desertification. This innovative method improves the comprehension of desertification processes and encourages sustainable land management practices, reducing the socio-economic impacts of desertification and bolstering at-risk ecosystems. The results of the study hold considerable importance for enhancing regional efforts in combating desertification, ensuring food security, and formulating …
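The ARIMA forecasting step can be illustrated with a stripped-down sketch. The snippet below fits an ARIMA(1,1,0)-style model by hand: first-difference the series, estimate an AR(1) coefficient by least squares, then roll the forecast forward. The model order, the toy "risk index" series, and the fitting shortcut are illustrative assumptions; the paper's actual orders and data are not specified here.

```python
def arima_110_forecast(series, horizon):
    """Forecast `horizon` steps ahead with a hand-rolled ARIMA(1,1,0)-style model."""
    # d=1: work on first differences of the series.
    diffs = [b - a for a, b in zip(series, series[1:])]
    # p=1: least-squares AR(1) coefficient on the differenced series.
    x, y = diffs[:-1], diffs[1:]
    phi = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    # Roll forward: each forecast difference decays by phi, then re-integrate.
    level, d = series[-1], diffs[-1]
    out = []
    for _ in range(horizon):
        d = phi * d
        level += d
        out.append(level)
    return out

trend = [10.0, 12.0, 13.0, 13.5, 13.75]  # hypothetical risk index flattening out
print(arima_110_forecast(trend, 3))
```

A production system would use a fitted ARIMA implementation (e.g. statsmodels) with orders chosen by model selection rather than this fixed (1,1,0) sketch.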
In the machine learning (ML) paradigm, data augmentation serves as a regularization approach for creating ML models. An increase in the diversification of training samples increases the generalization capabilities, which enhances the prediction performance of classifiers when tested on unseen examples. Deep learning (DL) models have a lot of parameters, and they frequently overfit. Hence, to avoid overfitting, data plays a major role to augment the latest improvements in DL. However, reliable data collection is a major limiting factor. Commonly, this problem is undertaken by combining augmentation of data, transfer learning, dropout, and methods of normalization. In this paper, we introduce the application of data augmentation in the field of image classification using Random Multi-model Deep Learning (RMDL), which uses the association approaches of multi-DL to yield random models for classification. We present a methodology for using Generative Adversarial Networks (GANs) to generate images for data augmentation. Through experiments, we discover that samples generated by GANs, when fed into RMDL, improve both accuracy and model robustness. Results across both the MNIST and CIFAR-10 datasets show that the error rate with the proposed approach has been decreased with different random models.
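The core augmentation step the abstract describes, adding GAN-generated samples to the real training set, can be sketched as below. The generator here is a random-noise stub standing in for a trained GAN's generator network; the per-class sample counts and the label-inheritance policy are illustrative assumptions, not the paper's procedure.

```python
import random

def fake_generator(n, shape=(28, 28)):
    """Stand-in for a trained GAN generator: n random-noise 'images'."""
    rng = random.Random(0)
    return [[[rng.random() for _ in range(shape[1])] for _ in range(shape[0])]
            for _ in range(n)]

def augment(real_images, real_labels, generator, per_class_extra):
    """Append `per_class_extra` synthetic samples per class to the real set."""
    images, labels = list(real_images), list(real_labels)
    for label in sorted(set(real_labels)):
        for img in generator(per_class_extra):
            images.append(img)
            labels.append(label)  # synthetic sample inherits its class label
    return images, labels

real_x = fake_generator(10)
real_y = [i % 2 for i in range(10)]  # two toy classes, 5 samples each
aug_x, aug_y = augment(real_x, real_y, fake_generator, per_class_extra=5)
print(len(aug_x), len(aug_y))  # 10 real + 2 classes * 5 synthetic = 20 each
```

In the paper's pipeline a class-conditional generator would produce the per-class samples, and the augmented set would then be fed to the RMDL ensemble for training.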
The increasing use of cloud-based image storage and retrieval systems has made ensuring security and efficiency crucial. The security enhancement of image retrieval and image archival in cloud computing has received considerable attention in transmitting data and ensuring data confidentiality among cloud servers and users. Various traditional image retrieval techniques regarding security have been developed in recent years, but they do not apply to large-scale environments. This paper introduces a new approach called Triple Network-based Adaptive Grey Wolf (TN-AGW) to address these challenges. The TN-AGW framework combines the adaptability of the Grey Wolf Optimization (GWO) algorithm with the resilience of a Triple Network (TN) to enhance image retrieval in cloud servers while maintaining robust security measures. By using adaptive mechanisms, TN-AGW dynamically adjusts its parameters to improve the efficiency of image retrieval processes, reducing latency and resource utilization. Specifically, the image retrieval process is performed by a triple network, and the parameters employed in the network are optimized by Adaptive Grey Wolf (AGW) optimization. Imputation of missing values, Min–Max normalization, and Z-score standardization are used to preprocess the images. Feature extraction is undertaken by a modified convolutional neural network (MCNN) approach. Input images are taken from datasets such as the Landsat 8 dataset, and the Moderate Resolution Imaging Spectroradiometer (MODIS) dataset is employed for image retrieval. Performance is evaluated in terms of accuracy, precision, recall, specificity, F1-score, and false alarm rate (FAR); the value of accuracy reaches 98.1 %, precision 97.2 %, recall 96.1 %, and specificity 917.2 %, respectively. Also, the convergence speed is enhanced in this TN-AGW approach. Therefore, the proposed TN-AGW approach achieves greater efficiency in image retrieval than other existing …
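The three preprocessing steps named in the abstract can be sketched in a few lines, shown here on a flat list of pixel values for brevity. Mean imputation is an assumption, since the abstract does not state which imputation rule is used.

```python
def impute_mean(values):
    """Replace missing entries (None) with the mean of the known values."""
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

def min_max(values):
    """Min-Max normalization: rescale values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def z_score(values):
    """Z-score standardization: zero mean, unit (population) variance."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

pixels = [0.0, 50.0, None, 100.0]   # toy values with one missing entry
filled = impute_mean(pixels)        # None -> 50.0
scaled = min_max(filled)            # [0.0, 0.5, 0.5, 1.0]
standardized = z_score(filled)
print(scaled)
```

On real images the same operations would be applied per channel or per band (e.g. over Landsat 8 pixel arrays) rather than on a flat list.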
Vehicular Named Data Networks (VNDN) are a content-centric approach for vehicle networks. The fundamental principle of addressing the content rather than the host suits the vehicular environment. There are numerous challe...
In the field of Human Activity Recognition (HAR), the precise identification of human activities from time-series sensor data is a complex yet vital task, given its extensive applications across various industries. Th...
Recent advancements in deep neural networks (DNNs) have made them indispensable for numerous commercial applications. These include healthcare systems and self-driving cars. Training DNN models typically demands subst...
The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical imaging. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers. The proposed methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer architectures. The models were trained and tested on three diverse datasets, including HeckTor2022 and AutoPET2023. Performance was assessed using metrics such as Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values, indicating superior segmentation performance. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the …
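The 2.5D configuration the study favors is commonly implemented by feeding the 2D network a small stack of adjacent axial slices, so the model sees local 3D context at 2D cost. The sketch below shows that slicing step; the stack size of 3 and the clamp-at-border padding are illustrative assumptions, not necessarily the paper's settings.

```python
def to_2p5d_stacks(volume, context=1):
    """Turn a list of 2D slices into one (2*context+1)-slice stack per slice.

    Each stack can be fed to a 2D network as a multi-channel input so the
    model sees the slices above and below the one being segmented.
    """
    stacks = []
    last = len(volume) - 1
    for i in range(len(volume)):
        # Clamp indices at the volume ends so border slices still get a
        # full-size stack (edge slices are repeated).
        idx = [min(max(i + k, 0), last) for k in range(-context, context + 1)]
        stacks.append([volume[j] for j in idx])
    return stacks

volume = [[[s]] for s in range(5)]       # five tiny 1x1 "slices" for illustration
stacks = to_2p5d_stacks(volume, context=1)
print(len(stacks), len(stacks[0]))       # 5 stacks, 3 slices each
```

In practice each stack would be a (2*context+1, H, W) array per modality, with PET and CT channels concatenated before entering the UNet Transformer.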
ChatGPT is a powerful artificial intelligence (AI) language model that has demonstrated significant improvements in various natural language processing (NLP) tasks. However, like any technology, it presents potential security risks that need to be carefully evaluated and addressed. In this survey, we provide an overview of the current state of research on the security of using ChatGPT, covering bias, disinformation, ethics, misuse, attacks, and privacy. We review and discuss the literature on these topics and highlight open research questions and future directions. With this survey, we aim to contribute to the academic discourse on AI security, enriching the understanding of potential risks and mitigations. We anticipate that this survey will be valuable for various stakeholders involved in AI development and usage, including AI researchers, developers, policy makers, and end-users.