Nowadays, IT departments face numerous problems which considerably affect their performance. Several factors such as technological obsolescence that shortens the useful life of equipment, amount of data that deman...
Effective monitoring of the environment over a large area will require mobilization of a considerable amount of information. Otherwise, the use of traditional methods would prove costly and would take up so much ...
Users can use online data computing services and computational resources from a distance in cloud computing environments. Task scheduling is a crucial part of cloud computing since it necessitates the creation of dependable and effective techniques for allocating tasks to resources. To achieve optimal performance, tasks must be allocated to resources accurately. By optimizing task scheduling, cloud computing solutions can decrease processing times, boost efficiency, and improve overall system performance. To address these challenges, this paper proposes an improved version of Henry gas solubility optimization, presented as the Henry Gas-Harris Hawks-Comprehensive Opposition (HGHHC) method. This method is based on two elements: comprehensive opposition-based learning (COBL) and Harris Hawks Optimization (HHO). The HHO algorithm is employed as a local search strategy in the proposed algorithm to improve the quality of the obtained solutions. COBL improves the less effective solutions by analyzing their opposites and selecting the better option in each case, which makes it easier to refine weak solutions and increases the overall effectiveness of the search. The suggested technique was tested using CloudSim on the NASA, HPC2N, and Synthetic datasets. For makespan (MKS), it achieved values of 34.30, 72.95, and 28.67, respectively. Regarding resource utilization (RU), the corresponding values were 16.92, 28.72, and 25.58. Therefore, the simulated makespan and resource usage of the proposed HGHHC algorithm were better than those of previous approaches. This highlights the effectiveness of hybrid meta-heuristic algorithms in balancing exploration and exploitation and preventing them from getting stuck in local optima.
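The abstract does not give the exact COBL update, but the basic opposition rule it builds on is well known: for a candidate x within bounds [lb, ub], the opposite point is lb + ub - x, and the better of the pair is kept. The Python sketch below illustrates only that generic refinement step on toy data; the fitness function, bounds, and population are hypothetical, and the authors' full comprehensive-opposition scheme and the HHO local search are not reproduced.

```python
import numpy as np

def opposite(x, lb, ub):
    """Standard opposition of a candidate within bounds [lb, ub]."""
    return lb + ub - x

def obl_refine(population, fitness_fn, lb, ub):
    """Keep each candidate or its opposite, whichever has the lower fitness
    (lower is better, e.g. makespan). Illustrative only, not the paper's COBL."""
    refined = []
    for x in population:
        x_opp = opposite(x, lb, ub)
        refined.append(x if fitness_fn(x) <= fitness_fn(x_opp) else x_opp)
    return np.array(refined)

# Toy usage: 5 candidate solutions encoded as 4-dimensional real vectors in [0, 10].
rng = np.random.default_rng(0)
pop = rng.uniform(0.0, 10.0, size=(5, 4))
better = obl_refine(pop, fitness_fn=lambda v: float(np.sum(v ** 2)), lb=0.0, ub=10.0)
print(better)
```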
The demand for efficient job scheduling in cloud computing has grown significantly with the rise of dynamic and heterogeneous cloud environments. While effective in simpler systems, traditional scheduling algorithms fail to meet the complex requirements of modern cloud infrastructures. These limitations motivate the need for AI-driven solutions that offer adaptability, scalability, and energy efficiency. This paper comprehensively reviews AI-based job scheduling techniques, addressing several key research gaps in current approaches. The existing methods face challenges such as resource heterogeneity, energy consumption, and real-time adaptability in multi-cloud systems. Accordingly, AI-based job scheduling in cloud computing is summarized here across machine learning, optimization techniques, heuristic techniques, and hybrid AI models. This paper pointedly underlines the strengths and weaknesses of various approaches through deep comparative analysis and focuses on how AI can overcome the shortcomings of traditional algorithms. It is worth noting that AI-driven models provide several important improvements, for example in resource allocation, cost efficiency, energy consumption, and the handling of complex dependencies between jobs and of system faults. In the end, AI-driven job scheduling appears to be a promising avenue toward effectively responding to the booming demands of cloud infrastructures. Future research should concentrate on three major directions: scalability, better integration of AI with traditional scheduling methods, and the use of other emerging technologies such as edge computing and blockchain for better optimization of cloud-based job scheduling. The paper underscores the need for more adaptive, secure, and energy-efficient scheduling frameworks to meet the evolving challenges of cloud environments.
In modern cloud-based computing through pooled resources, service providers must ensure resource accessibility. The migration of workloads to the cloud necessitates careful planning, including the provision of a sufficient number of readily available virtual machines (VMs). This paper addresses the NP-hard problem of load distribution by proposing an advanced scheduling technique designed to tackle this issue directly. The main goal of the investigation is to optimize the assignment of tasks among VMs, ensuring an evenly distributed workload throughout the entire system. We propose a new method to enhance the distribution of work in cloud-based structures, leveraging insights from spider monkey foraging habits. The proposed optimization technique aims to increase efficiency by strategically distributing jobs to the VMs with the least workload. The algorithm demonstrates robust performance in simulations evaluating load distribution, response time, and efficiency across several task types. The suggested load distribution technique demonstrates substantial enhancements compared to current methods, achieving 85% effectiveness in distributing the workload across 20 concurrent tasks. The proposed method outperforms existing algorithms, such as Improved Ant Colony Optimization and Particle Swarm Optimization-Artificial Bee Colony, which achieve load-balancing rates of 70% and 75%, respectively. This paper elucidates the intricacies of workload distribution in cloud-based computing systems while proposing a comprehensive method to improve resource consumption and overall system efficiency, thereby advancing distributed computing environments.
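As a rough illustration of the core idea of sending each job to the least-loaded VM, the sketch below implements a plain greedy baseline in Python; the task lengths and VM capacities are made-up numbers, and the spider-monkey foraging behaviour that the paper layers on top of this rule is not modelled.

```python
import heapq

def assign_least_loaded(task_lengths, vm_capacities):
    """Greedy baseline: always place the next task on the VM whose current
    normalized load is lowest. Illustrative only, not the paper's algorithm."""
    heap = [(0.0, i) for i in range(len(vm_capacities))]  # (current load, VM index)
    heapq.heapify(heap)
    placement = {}
    for task, length in enumerate(task_lengths):
        load, vm = heapq.heappop(heap)
        placement[task] = vm
        heapq.heappush(heap, (load + length / vm_capacities[vm], vm))
    return placement

# Hypothetical example: 20 concurrent tasks over 4 VMs with different capacities.
print(assign_least_loaded([8, 5, 13, 2] * 5, [10.0, 8.0, 12.0, 6.0]))
```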
Landslides pose a significant threat to humans as well as the environment. Rapid and precise mapping of landslide extent is necessary for understanding their spatial distribution, assessing susceptibility, and developing early warning systems. Traditional landslide mapping methods rely on labor-intensive field studies and manual mapping using high-resolution imagery, which are both costly and time-consuming. While machine learning-based automated mapping methods exist, they have limited transferability due to the low availability of training data and their inability to handle out-of-distribution scenarios. This study introduces ML-CASCADE, a user-friendly open-source tool designed for real-time landslide mapping. It is a semi-automated tool that requires the user to create landslide and non-landslide samples using pre- and post-landslide Sentinel-2 imagery to train a machine learning model. The model training features include Sentinel-2 data, terrain data, vegetation indices, and a bare soil index. ML-CASCADE is developed as an easy-to-use application on top of Google Earth Engine and supports both pixel- and object-based classification methods. We validate the landslide extents developed using ML-CASCADE against independent expert-developed inventories. ML-CASCADE not only identifies landslide extent accurately but can also map a complex cluster of landslides within 5 min and a simple landslide within 2 min. Due to its ease of use, speed, and accuracy, ML-CASCADE will serve as a critical operational asset for landslide risk management.
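For readers unfamiliar with the training features mentioned above, the sketch below shows how standard vegetation and bare soil indices can be computed from Sentinel-2 reflectance arrays with NumPy; the channel ordering and the synthetic tiles are assumptions for illustration, and this is not the tool's actual Google Earth Engine code.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

def bsi(swir1, red, nir, blue, eps=1e-6):
    """Bare soil index: ((SWIR1 + Red) - (NIR + Blue)) / ((SWIR1 + Red) + (NIR + Blue))."""
    num = (swir1 + red) - (nir + blue)
    den = (swir1 + red) + (nir + blue) + eps
    return num / den

# Hypothetical pre/post-event reflectance tiles; channel order assumed [Red, NIR, Blue, SWIR1]
# (Sentinel-2 bands B4, B8, B2, B11).
rng = np.random.default_rng(1)
pre, post = (rng.uniform(0.0, 0.5, size=(4, 64, 64)) for _ in range(2))
dndvi = ndvi(post[1], post[0]) - ndvi(pre[1], pre[0])            # vegetation-loss signal
dbsi = bsi(post[3], post[0], post[1], post[2]) - bsi(pre[3], pre[0], pre[1], pre[2])
```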
Under the current wave of Industry 4.0, the automated production processes of smart manufacturing plants are becoming increasingly complex and efficient. The increasing proportion of heat consumption in the production process has become an important factor affecting production costs and environmental sustainability. This paper explores how to realize financial optimization of the automated production processes of intelligent manufacturing plants by combining cloud computing technology with a thermal energy consumption optimization strategy. It introduces the background of automated production processes in intelligent manufacturing plants and explains the importance of thermal energy consumption in them as well as the challenges it currently poses. The research focuses on how to use the powerful computing power and big data analysis capabilities of cloud computing platforms to monitor and manage heat consumption in real time. The cloud computing platform serves as the core of data processing and analysis, supporting large-scale data storage, processing, and analysis tasks. The thermal management system is responsible for collecting heat usage data from the production process and analyzing it through the cloud computing platform. By establishing a heat consumption model, the heat demand under different production conditions is predicted, so that accurate supply and optimal distribution of thermal energy can be achieved. The results of the study show that by implementing cloud computing and thermal optimization strategies, the thermal energy consumption of smart manufacturing plants is significantly reduced and production costs are lowered, indicating that these strategies achieve not only financial optimization but also improved environmental sustainability for smart manufacturing plants.
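The abstract describes a heat consumption model that predicts demand under different production conditions; a minimal stand-in for such a model, assuming a simple linear relationship between a few hypothetical production variables and hourly heat demand, could look like the scikit-learn sketch below. The real system would train on data collected by the thermal management system and run on the cloud platform.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical hourly records: (production rate, ambient temperature, active machines) -> heat demand (kWh).
X = np.array([[120, 18, 6], [150, 15, 7], [90, 22, 5], [200, 10, 9], [170, 12, 8]], dtype=float)
y = np.array([310, 395, 240, 540, 455], dtype=float)

# Fit a toy heat consumption model and forecast demand for a new production condition.
model = LinearRegression().fit(X, y)
forecast = model.predict(np.array([[160.0, 14.0, 7.0]]))
print(f"Predicted heat demand: {forecast[0]:.1f} kWh")
```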
This paper introduces an innovative load balancing algorithm for blockchain-enabled cloud computing environments. The proposed scheme leverages blockchain technology's decentralized architecture to dynamically and efficiently distribute workloads across virtual machines (VMs). This approach optimizes resource utilization and enhances the performance of cloud services. By integrating smart contracts and employing a meticulous VM selection process, our method effectively addresses the challenges associated with traditional load balancing techniques, which often struggle to adapt to dynamic, heterogeneous workloads. Furthermore, our algorithm promotes transparency and security in task allocation and execution, capitalizing on blockchain's inherent features of immutability and consensus. The effectiveness of the proposed scheme is demonstrated through rigorous simulation using the CloudSim toolkit, showcasing significant improvements over existing methods in terms of makespan, execution time, resource utilization, and throughput. These results underline the potential of our proposed solution to revolutionize cloud computing infrastructure management, making it more adaptable, efficient, and resilient to varying computing demands.
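The evaluation metrics named above (makespan, resource utilization, throughput) can be computed from any task-to-VM assignment; the sketch below shows one conventional way to do so in Python, with hypothetical task lengths and VM speeds. It is not the paper's exact metric definitions, nor its blockchain and smart-contract logic.

```python
def schedule_metrics(assignments, task_lengths, vm_mips):
    """Compute makespan, average resource utilization, and throughput for a given
    task-to-VM assignment (illustrative definitions, not the paper's)."""
    finish = {vm: 0.0 for vm in vm_mips}
    for task, vm in assignments.items():
        finish[vm] += task_lengths[task] / vm_mips[vm]   # execution time on that VM
    makespan = max(finish.values())
    utilization = sum(finish.values()) / (len(vm_mips) * makespan)
    throughput = len(assignments) / makespan
    return makespan, utilization, throughput

# Hypothetical example: 4 tasks (lengths in MI) on 2 VMs (speeds in MIPS).
m, u, t = schedule_metrics({0: "vm0", 1: "vm1", 2: "vm0", 3: "vm1"},
                           {0: 4000, 1: 6000, 2: 2000, 3: 8000},
                           {"vm0": 1000.0, "vm1": 2000.0})
print(m, u, t)
```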
Nowadays, cloud computing (CC) has been utilized broadly owing to the services it provides, which can be accessed from any location at any time according to the customer's requirements. A huge amount of data is transmitted from user to host and from host to customer in the cloud environment, and placing the virtual machine (VM) on a suitable host while transferring this data is a challenging task. In this research, a harmonic migration algorithm (HMA) is developed by combining the migration algorithm (MA) and harmonic analysis (HA) for migrating a VM from an overloaded to an under-loaded physical machine (PM) and enabling or disabling the VM through switching strategies in CC. The tasks are allocated to the corresponding VMs in a round-robin (RR) manner, and subsequently the load of each VM is predicted through a gated recurrent unit (GRU). The HMA technique migrates the VM when the predicted load is higher than the threshold value, and it also enables or disables the VM when necessary. Thus, the performance of the developed HMA is improved over previous schemes for 100, 200, 300, and 400 tasks across varying iterations. The predicted load, makespan, and resource utilization of the developed HMA are 0.148, 0.327 s, and 0.482%, respectively, for 100 tasks at iteration 100.
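A minimal sketch of the threshold-driven migration decision described above is given below, assuming the GRU's output is already available as a per-host predicted load; the threshold values and host names are illustrative, not taken from the paper.

```python
def choose_migrations(predicted_load, upper=0.8, lower=0.3):
    """Illustrative threshold rule in the spirit of the abstract: hosts whose predicted
    load exceeds `upper` are paired with hosts below `lower` as migration targets.
    The GRU load predictor is replaced here by precomputed predictions."""
    overloaded = [pm for pm, load in predicted_load.items() if load > upper]
    underloaded = sorted((pm for pm, load in predicted_load.items() if load < lower),
                         key=predicted_load.get)
    return list(zip(overloaded, underloaded))

# Hypothetical predicted loads per physical machine (fractions of capacity).
print(choose_migrations({"pm0": 0.92, "pm1": 0.25, "pm2": 0.55, "pm3": 0.10}))
```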
Cyber security must be implemented when using cloud computing to identify and protect against malevolent intrusions and to strengthen an organization's capacity against cyberattacks. Detecting network intrusions with zero false alarms is a challenge. A number of intrusion detection systems (IDS) for cloud computing (CC) environments have been put forward recently. The existing IDS exhibit significant false positive rates, poor classification accuracy, and over-fitting. Therefore, a Double Fuzzy Clustering-Driven Context Neural Network for Intrusion Detection in cloud computing (DFCCNN-BWOA-IDC) is proposed in this paper. Initially, the input data is gleaned from the DARPA dataset. The input data is pre-processed using the sequential pre-processing through orthogonalization (SPORT) method to replace missing values and remove duplicate values. After that, the pre-processed data is fed to the recursive feature elimination (RFE) approach for selecting optimal features. Then the selected features are supplied to the DFCCNN to categorize the data as Normal or Anomaly. Finally, the Beluga Whale Optimization algorithm (BWOA) is used to enhance the weight parameters of the DFCCNN classifier, which precisely detects the attacks. The proposed DFCCNN-BWOA-IDC approach is implemented in MATLAB. The DFCCNN-BWOA-IDC method achieves a better accuracy of 98.89%, which is 15.98%, 13.59%, and 19.53% higher than existing approaches such as intrusion detection in CC with the help of a hybrid deep learning approach (DKNN-CRDO-IDC), an intrusion detection scheme under hybrid teacher learning optimization facilitating a deep RNN in web and cloud computing (TL-DRNN-IDC), and an intrusion detection scheme utilizing deep learning and the Capuchin Search Algorithm for cloud and IoT (CNN-CapSA-IDC), respectively.
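To make the feature selection step concrete, the sketch below runs recursive feature elimination with scikit-learn on synthetic data and trains a simple classifier on the selected features; it is only a stand-in for the pipeline described above, since the SPORT pre-processing, the DFCCNN classifier, and the BWOA weight tuning are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the pre-processed DARPA records (binary: normal vs. anomaly).
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Recursive feature elimination keeps the 10 most useful features,
# then a plain classifier (here logistic regression) is trained on them.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10).fit(X_tr, y_tr)
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_tr), y_tr)
print("Accuracy:", accuracy_score(y_te, clf.predict(selector.transform(X_te))))
```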