Cloud computing, which acts as a service tool for internet users, has numerous data sources, and user data should be stored and shared securely. The OSI layers play an important role in data communication in the cloud; if they are inadequately secured, vulnerabilities can compromise data integrity and security in cloud systems. However, existing works did not address data security at each OSI layer. Therefore, a novel OSI-network-layers-based secure data sharing scheme for cloud computing utilizing STXORSK-QC and DHDECCT-MAC is proposed in this paper. Primarily, the data owner registers in the Application Layer and logs in to access the cloud. Then, with the help of preprocessing, optimal feature selection using BD-LOA, and classification using BR-LSTM, the URL link is verified in the Presentation Layer, and the data is uploaded to the cloud via the legitimate site. Using STXORSK-QC, the data is secured in the Authorization Layer. Then, in the Network Layer, the user's IP address is spoofed by the Knuth shuffle technique. The data is uploaded to the Physical Layer using 2CGHA after the loads of multiple requests are balanced using BD-LOA in the Transport Layer. In the meantime, user verification is performed with the DHDECCT-MAC algorithm. The verified user checks the data to be downloaded in the Data Link Layer and downloads it if it has not been attacked. The experimental results showed that the proposed system uploaded data with a 98.03% security level and classified data attacks with 99.15% accuracy, outperforming prevailing techniques. Further, compared to conventional methods, the proposed technique used less memory: 1,428,142,563 kb for encryption and 1,584,278,963 kb for decryption. Thus, the proposed approach achieves superior performance in terms of security level, attack detection, and computational overhead.
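The Network Layer step relies on the Knuth (Fisher-Yates) shuffle. The abstract gives no implementation details, so the following is a minimal sketch assuming the shuffle permutes the octets of an IPv4 address with a key-seeded PRNG, making the permutation reproducible; the helper names `knuth_shuffle` and `spoof_ip` are hypothetical, not the paper's API.

```python
import random

def knuth_shuffle(items, seed):
    """In-place Fisher-Yates (Knuth) shuffle driven by a seeded PRNG,
    so the same seed always reproduces the same permutation."""
    rng = random.Random(seed)
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)              # pick a position in items[0..i]
        items[i], items[j] = items[j], items[i]
    return items

def spoof_ip(ip, seed=42):
    """Permute the four octets of an IPv4 address (hypothetical helper)."""
    octets = ip.split(".")
    return ".".join(knuth_shuffle(octets, seed))

print(spoof_ip("192.168.10.25"))  # permuted octets, deterministic per seed
```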
With the rapid advancement of intelligent devices and cloud services, a novel edge-cloud computing paradigm is emerging, finding widespread adoption in numerous advanced applications. Despite its considerable convenience and benefits, edge-cloud computing raises security and privacy concerns. Although many cryptographic solutions have been proposed for the Internet of Things and cloud services, ensuring diverse access control in an untrusted edge-cloud environment and realizing flexible revocation and efficient outsourcing remain challenging. In this article, we propose a certificateless attribute-based matchmaking encryption scheme (CRO-ABME) that supports fine-grained bilateral access control, attribute and identity revocation, and cryptographic workload outsourcing. Leveraging CRO-ABME, we design an edge-cloud data sharing system that ensures secure data uploading with privacy protection between end-users, such that only authorized matchers can access the data in edge-cloud computing. Furthermore, rigorous security proofs for CRO-ABME are provided, and experimental analyses demonstrate the efficiency and flexibility of our proposed scheme.
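The defining feature of matchmaking encryption is that access control is bilateral: the sender constrains who may decrypt, and the receiver constrains whose ciphertexts it will accept. The cryptography of CRO-ABME cannot be reconstructed from the abstract, so the sketch below illustrates only the bilateral matching logic, with attribute sets standing in for credentials and simple AND policies; every name here is hypothetical.

```python
def satisfies(policy, attributes):
    """Model a policy as a set of required attributes (an AND policy);
    it is satisfied when the holder possesses every required attribute."""
    return policy <= attributes

def bilateral_match(sender_attrs, sender_policy, receiver_attrs, receiver_policy):
    """Matchmaking succeeds only if BOTH directions hold: the receiver
    satisfies the sender's policy AND the sender satisfies the receiver's."""
    return (satisfies(sender_policy, receiver_attrs)
            and satisfies(receiver_policy, sender_attrs))

# A doctor-patient style example: access is granted only on a mutual match.
ok = bilateral_match(
    sender_attrs={"patient", "clinic-A"}, sender_policy={"doctor", "clinic-A"},
    receiver_attrs={"doctor", "clinic-A"}, receiver_policy={"patient"},
)
print(ok)  # True
```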
As an Internet-based computing model, cloud computing realizes the elastic scaling and efficient utilization of resources by centralizing computing resources (such as servers, storage, and networks) into resource pools and providing services to users on demand. Task scheduling directly affects the operational efficiency, load balancing, and energy consumption of the entire system. To improve the execution efficiency of task scheduling in cloud computing systems, a Cycloid spiral X mayfly algorithm (CXMA) based on an improved dance damping ratio and dance mode is proposed. First, an elementary function is used to improve the dance damping ratio, which effectively improves the convergence stability of the algorithm and better balances global exploration against local exploitation, so that the algorithm can locate the optimal solution more accurately while maintaining high search diversity. Building on the improved dance damping ratio, a basic mathematical function is used to refine the mayfly's dance mode; by optimizing the search behavior of individual mayflies, the search efficiency and solution accuracy of the mayfly algorithm (MA) are significantly improved, along with its robustness and adaptability. Simulation experiments test the total cost, time cost, load cost, and price cost of the system under both large-scale and small-scale tasks. Compared with other swarm intelligence optimization algorithms, the experimental results show that CXMA has significant advantages in searching for the optimal task scheduling strategy: in terms of total cost, CXMA is 6.7% lower than ACO, 0.7% lower than CDO, 3.7% lower than WOA, 4.0% lower than BOA, 2.6% lower than AOA, 1.6% lower than SOA, and 3.0% lower than RSO.
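For context, in the standard mayfly algorithm the best males perform a nuptial dance, v ← v + d·r with r ∈ [-1, 1], where the dance coefficient d is damped over iterations; CXMA's contribution is to replace the damping schedule and dance mode with improved elementary-function forms. The abstract does not specify those functions, so the sketch below uses the standard geometric damping d_t = d0·δ^t and marks where CXMA's variant would be substituted; all parameter values are assumptions.

```python
import random

def nuptial_dance_step(velocity, d0, delta, t):
    """One nuptial-dance velocity update for a best male mayfly.
    Standard MA damps the dance coefficient geometrically:
        d_t = d0 * delta**t
    CXMA would swap this line for its improved elementary-function
    damping schedule (not specified in the abstract)."""
    d_t = d0 * (delta ** t)            # damped dance coefficient
    r = random.uniform(-1.0, 1.0)      # random dance direction
    return velocity + d_t * r

v = 0.3
for t in range(5):
    v = nuptial_dance_step(v, d0=1.0, delta=0.9, t=t)
    print(f"iteration {t}: velocity = {v:.4f}")
```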
Terrace agriculture plays a vital role in mountainous regions by preventing soil erosion, optimizing land use, and supporting local ecosystems. However, research on the global distribution of terraces is limited by the lack of a unified automatic identification model. Despite the rapid advancement of deep-learning architectures in recent years, their performance in extracting terrace maps still needs investigation. To address this limitation, this study compares the performance of eight state-of-the-art deep learning models: UNet, HRNet, DeepLabv3+, TransUNet, Segmenter, PVT v2, Swin-Unet, and PerSAM. Sentinel-2 imagery was selected for its spectral properties, while Digital Elevation Model (DEM) imagery was chosen for detailed topographic information. UNet outperformed the others in terrace identification, achieving an overall accuracy of 92.8% and a mean Intersection over Union (MIoU) of 75.9%. The entire data processing workflow, using Google Earth Engine for data acquisition, Google Drive for storage, Google Earth Pro for computational capabilities, and a T4 GPU among the cloud computing resources, requires approximately 625 h. As a result, the Global Terrace Map (GTM) was generated at 10-meter resolution for 2022. The total terrace area was estimated at 853,161 km2, accounting for about 5.1% of global cropland. The countries identified with the most extensive terraced areas are China (298,908 km2, 18% of national cropland), Ethiopia (127,266 km2, 47%), Kenya (36,385 km2, 37%), India (34,485 km2, 2%), and the Democratic Republic of the Congo (31,422 km2, 21%). This pioneering global terrace map is anticipated to bridge a significant data gap in the field of resilient agriculture, offering invaluable insights into the spatial distribution and attributes of terraced farming systems, along with their roles in enhancing food security and promoting environmental sustainability.
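The data-acquisition stage of such a workflow can be approximated with the Google Earth Engine Python API. The snippet below is a minimal sketch, not the study's actual pipeline: it assumes an authenticated Earth Engine account, builds a cloud-filtered Sentinel-2 surface-reflectance median composite for 2022 stacked with SRTM elevation, and exports it to Google Drive at 10 m; the region of interest, band choice, and export names are placeholders.

```python
import ee

ee.Initialize()  # assumes prior `earthengine authenticate`

region = ee.Geometry.Rectangle([102.0, 24.0, 103.0, 25.0])  # placeholder AOI

# Sentinel-2 surface reflectance, 2022 median composite (cloud-filtered).
s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
        .filterBounds(region)
        .filterDate("2022-01-01", "2022-12-31")
        .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
        .median())

dem = ee.Image("USGS/SRTMGL1_003")  # SRTM DEM for topographic context

stack = s2.select(["B2", "B3", "B4", "B8"]).addBands(dem.rename("elevation"))

task = ee.batch.Export.image.toDrive(
    image=stack.clip(region),
    description="terrace_inputs_2022",  # placeholder export name
    scale=10,                           # 10 m output resolution
    region=region,
    maxPixels=1e9,
)
task.start()
```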
Resource estimation is essential in cloud computing to minimize operational costs, optimize performance, and enhance user satisfaction. This study proposes a comprehensive framework for virtual machine optimization in cloud environments, focusing on predictive resource management to improve resource efficiency and system performance. The framework integrates real-time monitoring, advanced resource management techniques, and machine-learning-based predictions. A simulated environment is deployed using PROXMOX, with Prometheus for monitoring and Grafana for visualization and alerting. By leveraging machine learning models, including Random Forest regression and LSTM, the framework predicts resource usage from historical data, enabling precise and proactive resource allocation. Results indicate that the Random Forest model achieves superior accuracy with a mean absolute percentage error (MAPE) of 2.65%, significantly outperforming the LSTM's 17.43%. These findings underscore the reliability of Random Forest for resource estimation. This research demonstrates the potential of predictive analytics in advancing cloud resource management, contributing to more efficient and scalable cloud computing practices.
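As a rough illustration of the prediction stage, the sketch below fits a Random Forest regressor on lagged utilization samples and scores it with MAPE. The synthetic trace (standing in for Prometheus history), the lag window, and the hyperparameters are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)

# Synthetic daily-periodic CPU-utilization trace (5-minute samples).
t = np.arange(2000)
cpu = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, t.size)

# Supervised framing: predict the next sample from the previous `lags`.
lags = 12
X = np.stack([cpu[i:i + lags] for i in range(cpu.size - lags)])
y = cpu[lags:]

split = int(0.8 * len(y))  # chronological train/test split
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = mean_absolute_percentage_error(y[split:], pred) * 100
print(f"MAPE: {mape:.2f}%")
```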
This paper introduces a bifurcated vehicle control architecture integrating edge and cloud computing. It aims to enhance maritime remote control operations, addressing the slow-changing dynamics due to climate and water flow, and limited network coverage. Our approach utilizes Deep Reinforcement Learning (DRL) to adapt to non-ideal, model-free scenarios. The edge server provides immediate control commands based on resident models, while the cloud server employs lifelong learning to adapt to changes, continuously improving model accuracy. Specifically, we adopt the Soft Actor-Critic (SAC) algorithm under a discrete-time system, where the entropy term in its loss function encourages the agent to explore the environment and exhibits greater stability when the controlled process dynamics change. Building on SAC, we propose the concept of Version of Model (VoM) and develop a weighted sampling strategy that adjusts the sampling probability for lifelong learning on the cloud server. We also introduce a dual-window updating strategy that surpasses standard updates by using a sliding-and-freeze window mechanism, minimizing unnecessary adjustments. The bifurcated control method, in conjunction with the dual-window update strategy, ensures timely and accurate process control, with effective correction for deviations. Finally, the efficacy of our approach is validated through simulations using a pendulum model and a Remotely Operated Vehicle (ROV).
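The weighted sampling idea tags each stored transition with the Version of Model (VoM) under which it was collected and biases replay toward experience gathered under recent dynamics. The paper's exact weighting is not given in the abstract, so the sketch below assumes an exponential recency weight over VoM tags; the function names and decay rate are placeholders.

```python
import random

def sample_batch(replay_buffer, current_vom, batch_size, decay=0.8):
    """Sample transitions with probability proportional to decay**age,
    where age = current_vom - transition's VoM tag. Newer model versions
    dominate the batch, so lifelong learning tracks drifting dynamics."""
    weights = [decay ** (current_vom - vom) for vom, _ in replay_buffer]
    return random.choices([tr for _, tr in replay_buffer],
                          weights=weights, k=batch_size)

# Transitions tagged with the model version under which they were collected.
buffer = [(vom, f"transition-{i}") for i, vom in
          enumerate([0, 0, 1, 1, 2, 2, 3, 3])]
batch = sample_batch(buffer, current_vom=3, batch_size=4)
print(batch)  # mostly transitions from VoM 2-3
```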
Fog computing bridges IoT applications and cloud computing, providing low-latency services through local computation and storage. Despite its advantages, challenges such as the efficient scheduling and placement of IoT applications on fog-cloud nodes hinder its widespread adoption. This manuscript presents a Performance Enhancement Algorithm for scheduling IoT applications in a fog-cloud environment. The algorithm comprises four key procedures that schedule application modules across the available infrastructure devices. Merge sort is fused with the heterogeneous shortest module first (HSMF) strategy as the key to improving performance. The effectiveness of the proposed algorithm was evaluated using the iFogSim simulator, and the results demonstrate significant improvements, with total network usage improved by 66.31% over HSMF, 88.40% over edge-wards, and 98.66% over the cloud-only method. It also significantly improves execution time in most network configurations. Our research contributes a very reliable means of placing applications in the fog-cloud infrastructure, as the fusion algorithm makes it stable and scalable for CPU-bound tasks in fog and cloud computing environments.
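The fusion idea — merge-sort the modules by length, then place each on the heterogeneous device that finishes it soonest — can be sketched as follows. Module lengths in MI, device speeds in MIPS, and all names are illustrative assumptions rather than iFogSim's actual interfaces.

```python
def merge_sort(xs, key=lambda x: x):
    """Classic top-down merge sort (stable, O(n log n))."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid], key), merge_sort(xs[mid:], key)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def hsmf_schedule(modules, devices):
    """Heterogeneous Shortest Module First: take modules in ascending
    length order, assign each to the device with the earliest finish."""
    ready = {name: 0.0 for name in devices}            # device ready times
    plan = []
    for name, length in merge_sort(modules, key=lambda m: m[1]):
        best = min(devices, key=lambda d: ready[d] + length / devices[d])
        ready[best] += length / devices[best]          # MI / MIPS -> seconds
        plan.append((name, best, ready[best]))
    return plan

devices = {"fog-1": 2800.0, "fog-2": 1500.0, "cloud": 10000.0}  # MIPS
modules = [("camera", 1200), ("filter", 400), ("analytics", 9000)]  # MI
for module, device, finish in hsmf_schedule(modules, devices):
    print(f"{module} -> {device}, finishes at {finish:.3f}s")
```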
Cloud computing is an Internet-based computing paradigm in which virtual servers or workstations are offered as platforms, software, infrastructure, and resources. Task scheduling is considered one of the major NP-hard problems in cloud environments, posing several challenges to efficient resource allocation. Many metaheuristic algorithms have been employed to address task scheduling as a discrete optimization problem, giving rise to several proposals. However, these algorithms have inherent limitations due to local optima and convergence to poor results. This paper suggests a hybrid strategy for scheduling independent tasks on heterogeneous cloud resources by combining the Butterfly Optimization Algorithm (BOA) and the Flower Pollination Algorithm (FPA). Although BOA suffers from local optima and loss of diversity, which may cause premature convergence of the swarm, our hybrid approach overcomes these weaknesses by exploiting a mutualism-based mechanism. Indeed, the proposed hybrid algorithm outperforms existing methods across different task quantities with better scalability. Experiments are conducted within the CloudSim simulation framework with many task instances. Statistical analysis is performed to test the significance of the obtained results and confirms that the suggested algorithm is effective at solving cloud-based task scheduling issues. The study's findings indicate that the hybrid metaheuristic algorithm is a promising approach to improving resource utilization and optimizing cloud task scheduling.
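A mutualism-based mechanism (in the style of symbiotic-organisms search) pairs two candidate solutions and moves both toward the best-known one through a shared mutual vector. Since the abstract omits the exact formulation, the sketch below follows the standard SOS mutualism update with assumed benefit factors and bounds; decoding continuous positions to task-to-VM assignments is likewise an assumption.

```python
import random

def mutualism(xi, xj, best, lb=0.0, ub=1.0):
    """Standard mutualism phase: both candidates drift toward `best`
    guided by their mutual (mean) vector; benefit factors BF1/BF2 in {1, 2}."""
    mutual = [(a + b) / 2 for a, b in zip(xi, xj)]
    bf1, bf2 = random.choice((1, 2)), random.choice((1, 2))
    new_i = [min(ub, max(lb, a + random.random() * (g - m * bf1)))
             for a, g, m in zip(xi, best, mutual)]
    new_j = [min(ub, max(lb, b + random.random() * (g - m * bf2)))
             for b, g, m in zip(xj, best, mutual)]
    return new_i, new_j

# Continuous positions are later decoded to task->VM indices, e.g. by
# scaling: vm = int(pos * num_vms), clipped to the valid range.
xi = [random.random() for _ in range(5)]
xj = [random.random() for _ in range(5)]
best = [0.2, 0.8, 0.5, 0.1, 0.9]
print(mutualism(xi, xj, best))
```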
Cloud computing (CC) is a fast-emerging field that enables consumers to access network resources on demand. However, ensuring a high level of security in CC environments remains a significant challenge. Traditional encryption algorithms are often inadequate for protecting confidential data, especially digital images, from complex cyberattacks. The increasing reliance on cloud storage and transmission of digital images has made it essential to develop strong security measures that prevent unauthorized access and guarantee the integrity of sensitive information. This paper presents a novel Crayfish Optimization based Pixel Selection using Block Scrambling Based Encryption Approach (CFOPS-BSBEA) technique that offers a unique solution to improve security in cloud environments. By integrating steganography and encryption, the CFOPS-BSBEA technique provides a robust approach to securing digital images. Our key contribution lies in the development of a three-stage process that optimally selects pixels for steganography, encodes secret images using Block Scrambling Based Encryption, and embeds them in cover images. The CFOPS-BSBEA technique leverages the strengths of both steganography and encryption for a secure and effective approach to digital image protection. The Crayfish Optimization algorithm selects the most suitable pixels for steganography, ensuring that the secret image is embedded in a way that minimizes detection. The Block Scrambling Based Encryption algorithm then encodes the secret image, providing an additional layer of security. Experimental results show that the CFOPS-BSBEA technique outperforms existing models in terms of security performance. The proposed approach has significant implications for the secure storage and transmission of digital images in cloud environments, and its originality and novelty make it an attractive contribution to the field. Furthermore, the CFOPS-BSBEA technique has the potential to inspire further research.
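Block scrambling itself is straightforward to illustrate: the image is tiled into fixed-size blocks and the blocks are permuted with a key-seeded PRNG; unscrambling applies the inverse permutation. The sketch below shows only this scrambling layer under an assumed block size and key handling, not the full CFOPS-BSBEA pipeline.

```python
import numpy as np

def scramble_blocks(img, block=8, key=1234):
    """Permute non-overlapping block x block tiles using a key-seeded PRNG.
    Assumes the image height and width are divisible by `block`."""
    h, w = img.shape
    cols = w // block
    tiles = [img[r:r + block, c:c + block]
             for r in range(0, h, block) for c in range(0, w, block)]
    perm = np.random.default_rng(key).permutation(len(tiles))
    out = np.zeros_like(img)
    for dst, src in enumerate(perm):   # output tile dst <- input tile src
        r, c = divmod(dst, cols)
        out[r * block:(r + 1) * block, c * block:(c + 1) * block] = tiles[src]
    return out, perm

def unscramble_blocks(scr, perm, block=8):
    """Invert the permutation: scrambled tile dst returns to position src."""
    h, w = scr.shape
    cols = w // block
    tiles = [scr[r:r + block, c:c + block]
             for r in range(0, h, block) for c in range(0, w, block)]
    out = np.zeros_like(scr)
    for dst, src in enumerate(perm):
        r, c = divmod(src, cols)
        out[r * block:(r + 1) * block, c * block:(c + 1) * block] = tiles[dst]
    return out

img = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
scr, perm = scramble_blocks(img)
assert np.array_equal(unscramble_blocks(scr, perm), img)  # round-trip check
```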
Cloud computing has become a popular and well-known computing paradigm for delivering services to different organizations. The main benefits of the cloud computing paradigm, including on-demand services, a pay-per-use policy, and rapid elasticity, make it an emerging technology that continues to lead with new methods. Cloud systems have become more challenging than other systems because of their wide range of clients and the variety of services they offer. A cloud data center consists of many physical machines (PMs) hosting virtual machines (VMs), load balancers, switches, storage, etc. Because of inappropriate resource use and inefficient scheduling, these data centers consume a lot of energy. In this paper, a multi-objective optimization model called Adaptive Remora Optimization (AROA) is proposed, which comprises sub-models, viz., priority calculation, task clustering, probability definition, and task-VM mapping using a search mode based on Remora optimization, to optimize energy consumption and execution time. CloudSim is used for the implementation of the proposed optimization technique. In simulation, the energy consumption is 0.695 kWh and the execution time is 179.14 s. The results obtained by AROA are compared with existing approaches to prove the efficacy of the proposed method; the results show that the proposed AROA algorithm outperforms the existing approaches.
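AROA's two objectives can be folded into a single weighted fitness over a candidate task-to-VM mapping. The abstract does not define the exact model, so the sketch below assumes a simple makespan term plus a busy-power energy term; all constants and names are illustrative.

```python
def fitness(mapping, task_mi, vm_mips, vm_watts, w_time=0.5, w_energy=0.5):
    """Weighted multi-objective fitness for a task->VM mapping.
    mapping[i] = index of the VM that runs task i. Lower is better."""
    busy = [0.0] * len(vm_mips)                  # per-VM busy time (s)
    for task, vm in enumerate(mapping):
        busy[vm] += task_mi[task] / vm_mips[vm]  # MI / MIPS -> seconds
    makespan = max(busy)                         # execution-time objective
    energy_kwh = sum(t * p for t, p in zip(busy, vm_watts)) / 3.6e6
    return w_time * makespan + w_energy * energy_kwh

task_mi = [4000, 12000, 8000, 2000]              # task lengths (MI)
vm_mips = [1000.0, 2500.0]                       # VM speeds
vm_watts = [120.0, 220.0]                        # assumed busy power draw
print(fitness([0, 1, 1, 0], task_mi, vm_mips, vm_watts))
```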