ISBN:
(Print) 9798350350227; 9798350350210
Distributed authentication plays a crucial role in cloud storage, requiring users to authenticate their identity to a set of key management servers. This process guarantees that only authenticated users gain access to the cryptographic keys stored on these servers for decrypting outsourced data. However, existing distributed authentication schemes rely solely on passwords, posing a critical weakness given the recent surge in password-based attacks. Once passwords are compromised, adversaries can impersonate users, gaining unauthorized access to key management servers and acquiring cryptographic keys. To address this vulnerability, a recent multi-factor scheme introduces a distributed authentication mechanism for each popular factor, such as passwords, Universal 2nd Factor (U2F), and email. Users can select multiple factors and use the corresponding mechanisms when authenticating to servers. Nevertheless, a naive combination of these authentication mechanisms incurs substantial overhead. In this paper, we propose a combined verification mechanism for passwords and U2F, allowing servers to verify both factors efficiently. Building on this mechanism, we present a two-factor distributed authentication scheme for cloud storage, dubbed TDAC. In TDAC, users can authenticate to key management servers with a single password entry and a touch on their U2F token. Experiments show that TDAC is efficient in terms of computation and communication overhead.
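The abstract does not spell out the combined verification construction, but the core idea of checking both factors in a single pass can be illustrated. Below is a minimal sketch, assuming the server stores a salted password verifier and the U2F token's ECDSA P-256 public key, and that a touch produces a signature over the server's challenge; the function names (register, verify_combined) are hypothetical, not from TDAC.

```python
# Hypothetical sketch of a combined password + U2F verification step.
# Assumptions (not from the paper): the key server stores a salted password
# hash and the U2F token's ECDSA P-256 public key; a U2F touch produces a
# signature over the server's challenge.
import hashlib
import hmac
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature


def register(password: str):
    """Server-side enrollment: derive a password verifier, record the token key."""
    salt = os.urandom(16)
    verifier = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    token_key = ec.generate_private_key(ec.SECP256R1())  # stands in for the U2F device
    return salt, verifier, token_key


def verify_combined(salt, verifier, token_pub, password, challenge, signature) -> bool:
    """Accept only if BOTH factors check out."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    password_ok = hmac.compare_digest(candidate, verifier)
    try:
        token_pub.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        u2f_ok = True
    except InvalidSignature:
        u2f_ok = False
    return password_ok and u2f_ok


# Demo: one password entry plus one "touch" (signature) authenticates the user.
salt, verifier, token = register("correct horse battery staple")
challenge = os.urandom(32)
sig = token.sign(challenge, ec.ECDSA(hashes.SHA256()))
print(verify_combined(salt, verifier, token.public_key(),
                      "correct horse battery staple", challenge, sig))  # True
```

In a distributed deployment, each key management server would run a check like verify_combined against its own share of the verification material; the sketch above shows only a single server's view.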
In Multi-Access Edge Computing systems, a UE submits tasks to edge nodes; these edge nodes may become overwhelmed, resulting in processing lag or task dropouts. This is an issue because, particularly i...
With the rapid development of sensors, wireless communication, and computing technology, the Internet of Things (IoT) has gained significant research interest. In particular, increasing people-oriented applications and sy...
In the landscape of cloud-driven environments, the convergence of artificial intelligence (AI) workloads with edge computing architectures holds promise for optimizing computational efficiency and minimizing latency. ...
ISBN:
(Print) 9798350348033
In this paper, we use solar energy resources. It is well known that electricity is usually produced from coal, and the environment takes millions of years to recover from the effects of coal burning, which harms it severely. Today, electricity has become an important part of everyone's daily life. Solar energy is a renewable resource that is now widely available and has no harmful effects on the environment. Buildings account for approximately 33% of total energy use and 40% of direct and indirect energy-related global CO2 emissions. As energy consumption increases worldwide, managing energy use by encouraging on-site renewable generation can help meet this demand. This paper proposes a deep learning method based on the Discrete Wavelet Transform and Long Short-Term Memory (DWT-LSTM) together with a time-scheduling method to address the growing need for building energy modeling and management. The combination of the deep learning algorithm, energy storage, and energy scheduling reduces energy demand and sends the excess generated energy to the grid.
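As a sketch of how a DWT-LSTM forecaster can be structured: decompose the load signal into wavelet sub-bands, then train a small LSTM per band and recombine the per-band forecasts. The abstract gives no architecture details, so the wavelet choice ('db4'), window length, and layer sizes below are illustrative assumptions, not the paper's configuration.

```python
# Illustrative DWT-LSTM pipeline sketch (assumed architecture, synthetic data).
import numpy as np
import pywt
import torch
import torch.nn as nn

# Synthetic building-load signal standing in for real consumption data.
t = np.linspace(0, 8 * np.pi, 1024)
load = np.sin(t) + 0.3 * np.sin(8 * t) + 0.1 * np.random.randn(t.size)

# 1) Discrete Wavelet Transform: split the load into an approximation (trend)
#    band and detail (fluctuation) bands, each forecast separately.
coeffs = pywt.wavedec(load, "db4", level=2)   # [cA2, cD2, cD1]

class SubbandLSTM(nn.Module):
    """One small LSTM forecaster per wavelet sub-band."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])           # next-step prediction

def windows(series, w=16):
    """Slice a 1-D series into (window, next-value) training pairs."""
    xs = np.stack([series[i:i + w] for i in range(len(series) - w)])
    ys = series[w:]
    return (torch.tensor(xs, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(ys, dtype=torch.float32).unsqueeze(-1))

# 2) Train one LSTM per sub-band; per-band forecasts would be recombined
#    with pywt.waverec to obtain the predicted load.
for band in coeffs:
    x, y = windows(band)
    model = SubbandLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(f"sub-band len={len(band)}  final MSE={loss.item():.4f}")
```

The rationale for the decomposition is that the smooth approximation band and the noisy detail bands have very different dynamics, so a dedicated small model per band typically forecasts each more accurately than one model on the raw signal.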
A large number of roadside Millimeter-Wave Radars (MMRs) serve Intelligent Transportation Systems (ITS), where adaptive calibration is a crucial foundation for long-term roadside ITS services. In distributed radar pai...
Authors:
Indumathi, V.; Ashokkumar, C.
School of Computing, College of Engineering and Technology, SRM Institute of Science and Technology, Department of Computing Technologies, Kattankulathur, Chennai, India
This research presents an innovative deep learning-based predictive maintenance model designed for smart automotive systems, utilizing the EnsembleAE-Boost (EAE-Boost) algorithm. The primary objective of the proposed ...
ISBN:
(Print) 9798350326970
Performance anomalies can manifest as irregular execution times or abnormal execution events for many reasons, including network congestion and resource contention. Detecting such anomalies in real time by analyzing the details of performance traces at scale is impractical due to the sheer volume of data that High-Performance Computing (HPC) applications produce. In this paper, we propose formulating HPC performance anomaly detection as a signal-processing problem in which anomalies can be treated as noise. We evaluate the proposed method against two other commonly used anomaly detection techniques of varying complexity, based on their detection accuracy and scalability. Since real-time in-situ anomaly detection at large scale requires lightweight methods that can handle a large volume of streaming data, we find that our proposed method provides the best trade-off. We then implement the proposed method in CHIMBUKO, the first online, distributed, and scalable workflow-level performance trace analysis framework. We compare our signal-based anomaly detection algorithm with the two other methods using a score combining accuracy (F1 score) and detection overhead. Our experiments demonstrate that our approach achieves a 99% improvement on the benchmark datasets and a 93% improvement on CHIMBUKO traces.
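To make the "anomalies as noise" framing concrete, here is a minimal sketch (not the paper's algorithm): treat per-call execution times as a 1-D signal, recover the baseline with a median filter, and flag samples whose residual exceeds a robust threshold. The filter width and the 6-MAD threshold are assumptions chosen for illustration.

```python
# "Anomalies as noise" sketch: filter out the baseline signal, then flag
# points where the residual (the "noise") is unusually large.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(0)
exec_times = rng.normal(10.0, 0.5, 2000)        # nominal function timings (ms)
exec_times[[300, 1200, 1750]] += 8.0            # injected contention spikes

baseline = medfilt(exec_times, kernel_size=31)  # low-pass: the underlying signal
residual = exec_times - baseline                # high-frequency part: the noise

# Robust threshold via the median absolute deviation (MAD);
# 1.4826 * MAD estimates the standard deviation for Gaussian data.
mad = np.median(np.abs(residual - np.median(residual)))
anomalies = np.flatnonzero(np.abs(residual) > 6 * 1.4826 * mad)
print("anomalous call indices:", anomalies)     # expect ~[300, 1200, 1750]
```

A fixed-width median filter and a running MAD can both be maintained incrementally over a sliding window, which is what makes this family of methods cheap enough for in-situ streaming detection.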
ISBN:
(Print) 9798350377859; 9798350377842
With the continuous advancement of wireless communication technologies, the negative impact of base station (BS) impairments on network performance is becoming increasingly significant. In this paper, we explore how task offloading and resource allocation optimisation can be achieved through Unmanned Aerial Vehicle (UAV)-assisted edge computing in the context of BS impairments. This study proposes a UAV-assisted vehicular edge computing architecture that aims to provide the required Quality of Service (QoS) for computationally intensive and delay-sensitive applications. We provide an in-depth study of the task offloading and resource allocation problem, aiming to maximise the benefit the vehicle gains from offloading tasks. This benefit is quantified as the weighted sum of task completion time and energy consumption. The result is a mixed integer nonlinear programming (MINLP) problem that jointly optimises the task offloading decision, the uplink transmission power of the mobile vehicle, and the computational resource allocation on the UAV. Given the complexity of the problem, finding an optimal solution is both difficult and unrealistic for large-scale networks. To address this challenge, we employ a decomposition strategy that splits the original problem into two subproblems: a resource allocation (RA) problem that fixes the task offloading decision, and a task offloading (TO) problem that optimises the optimal value function corresponding to the RA problem. We solve the RA problem using convex optimisation techniques and apply a genetic algorithm to the TO problem. Simulation results show that our proposed algorithm is close to the optimal solution in terms of performance and significantly improves the vehicle's offloading benefits compared to conventional methods. This suggests that UAV-assisted edge computing can effectively optimise task offloading and resource allocation.
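The decomposition can be sketched with toy models. Assuming the RA subproblem reduces to splitting the UAV's CPU budget F across offloaded tasks to minimise the total compute time sum c_i/f_i (whose convex solution is f_i proportional to sqrt(c_i)), a simple genetic algorithm can then search the binary TO vector against the RA optimal-value function. All task sizes, rates, and cost weights below are made-up stand-ins, not the paper's model.

```python
# Decomposition sketch: closed-form convex RA inside a genetic-algorithm TO search.
import numpy as np

rng = np.random.default_rng(1)
N, F = 8, 10e9                                  # tasks, UAV CPU budget (cycles/s)
c = rng.uniform(0.5e9, 2e9, N)                  # task workloads (cycles)
d = rng.uniform(0.2e6, 1e6, N)                  # task input sizes (bits)
r = 20e6                                        # uplink rate (bits/s)
f_local = 1e9                                   # vehicle CPU (cycles/s)

def ra_value(x):
    """Optimal RA cost for offloading vector x (convex subproblem, closed form)."""
    off = x.astype(bool)
    cost = np.sum(c[~off] / f_local)            # tasks executed locally
    if off.any():
        s = np.sqrt(c[off])
        f = F * s / s.sum()                     # f_i ~ sqrt(c_i) minimises sum c_i/f_i
        cost += np.sum(d[off] / r + c[off] / f) # upload time + remote compute time
    return cost

def ga_to(pop=40, gens=60, pm=0.1):
    """Genetic algorithm over binary TO decisions; fitness = -ra_value."""
    P = rng.integers(0, 2, (pop, N))
    for _ in range(gens):
        fit = np.array([-ra_value(x) for x in P])
        P = P[np.argsort(fit)[-pop // 2:]]      # keep the better half
        kids = []
        while len(kids) < pop - len(P):
            a, b = P[rng.integers(len(P), size=2)]
            cut = rng.integers(1, N)
            child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
            child ^= rng.random(N) < pm                    # bit-flip mutation
            kids.append(child)
        P = np.vstack([P, kids])
    best = min(P, key=ra_value)
    return best, ra_value(best)

x, cost = ga_to()
print("offload decisions:", x, " total cost (s): %.3f" % cost)
```

The design point the decomposition exploits is that for any fixed offloading vector the inner problem is convex and cheap to solve exactly, so the expensive combinatorial search only ranges over the binary outer decisions.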
The ever-increasing ubiquity of data and computational resources over the last decade has propelled a notable transition in the machine learning paradigm towards more distributed approaches. Such a transition seeks to not ...