The primary objective of fog computing is to minimize the reliance of IoT devices on the cloud by leveraging the resources of the fog network. Typically, IoT devices offload computation tasks to the fog to meet various task requirements, such as latency in task execution and computation costs. Selecting a fog node that meets these task requirements is therefore a crucial challenge. To choose an optimal fog node, access to each node's resource availability information is essential. Existing approaches often assume state availability or depend on a subset of state information to design mechanisms tailored to different task requirements. This paper proposes OptiFog, a cluster-based fog computing architecture for acquiring state information, followed by an optimal fog node selection and task offloading mechanism. Additionally, a continuous-time Markov chain (CTMC) based stochastic model for predicting the resource availability of fog nodes is proposed. This model avoids the need to frequently synchronize the resource availability status of fog nodes while keeping the state information up to date. Extensive simulation results show that OptiFog lowers task execution latency considerably and schedules almost all tasks at the fog layer, compared to the existing state of the art.
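The abstract does not reproduce the CTMC itself; below is a minimal sketch of one plausible form, modeling a fog node's busy CPU cores as a birth-death chain with an assumed task arrival rate `lam` and per-core service rate `mu`, and predicting availability at a future time from the transient distribution. All rates and state definitions are assumptions, not the paper's model.

```python
# Hypothetical sketch of CTMC-based resource availability prediction.
# State = number of busy cores on a fog node; `lam` and `mu` are assumed.
import numpy as np
from scipy.linalg import expm

def availability(cores=4, lam=3.0, mu=1.0, t=2.0, busy_now=2):
    """P(at least one core is free at time t | busy_now cores busy now)."""
    n = cores + 1                       # states 0..cores busy
    Q = np.zeros((n, n))                # generator (rate) matrix
    for i in range(cores):
        Q[i, i + 1] = lam               # an offloaded task occupies a core
    for i in range(1, n):
        Q[i, i - 1] = i * mu            # any of the i busy cores finishes
    np.fill_diagonal(Q, -Q.sum(axis=1))
    p0 = np.zeros(n)
    p0[busy_now] = 1.0
    p_t = p0 @ expm(Q * t)              # transient distribution at time t
    return 1.0 - p_t[-1]                # probability the node is not fully busy

print(f"P(free core in 2s): {availability():.3f}")
```

Because the chain evolves analytically between synchronization points, a scheduler can consult such a prediction instead of polling every node, which is the synchronization saving the abstract claims.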
The right partner and high innovation speed are crucial for a successful research and development (R&D) alliance in the high-tech industry. Does homogeneity or heterogeneity between partners benefit innovation spe...
Algorithms are central objects of every nontrivial computer application, but their analysis and design are a great challenge. While traditional methods involve mathematical and empirical approaches, there exists a thir...
The existing cloud model is unable to handle the abundant Internet of Things (IoT) services placed by end users, owing to its centralized nature and its great distance from the end user. The edge and fog computing a...
False Data Injection Attacks (FDIA) pose a significant threat to the stability of smart grids. Traditional Bad Data Detection (BDD) algorithms, deployed to remove low-quality data, can easily be bypassed by these attacks, which require minimal knowledge about the parameters of the power bus system. This makes it essential to develop defence approaches that are generic and scalable to all types of power systems. Deep learning algorithms provide state-of-the-art detection for FDIA while requiring no knowledge about system parameters. However, very few works in the literature evaluate these models for FDIA detection at the level of an individual node in the power system. In this paper, we compare several recent deep learning models that have proven highly accurate at detecting the exact location of the attacked node: convolutional neural networks (CNN), Long Short-Term Memory (LSTM), attention-based bidirectional LSTM, and hybrid models. We then compare their performance against a baseline multi-layer perceptron (MLP). All models are evaluated on the IEEE-14 and IEEE-118 bus systems in terms of row accuracy (RACC), computational time, and the memory required for training. Each model was further investigated through a manual grid search to determine its optimal architecture, including the number of layers and the number of neurons per layer. Based on the results, the CNN model exhibited consistently high performance with a very short training time. LSTM achieved the second-highest accuracy, but required a higher training time on average. The attention-based LSTM model achieved a high accuracy of 94.53 during hyperparameter tuning, while the CNN model achieved a moderately lower accuracy with only one-fourth of the training time. Finally, the performance of each model was quantified on different variants of the dataset, which varied in their l2-norm. Based on the results, LSTM, CNN obta...
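As an illustration of the per-node (multi-label) detection task the paper evaluates, here is a minimal sketch of a 1-D CNN locator for the IEEE-14 system together with the row accuracy (RACC) metric. The layer sizes, feature count, and decision threshold are assumptions, not the grid-search results reported above.

```python
# Hypothetical sketch: per-bus FDIA localization on IEEE-14 as multi-label
# classification. One sigmoid output per bus; RACC counts a sample as correct
# only if the entire 14-bus label row is predicted exactly.
import numpy as np
import tensorflow as tf

N_BUSES, N_FEATS = 14, 54   # 54 measurements per sample is an assumption

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu", input_shape=(N_FEATS, 1)),
    tf.keras.layers.Conv1D(64, 3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(N_BUSES, activation="sigmoid"),  # one flag per bus
])
model.compile(optimizer="adam", loss="binary_crossentropy")

def row_accuracy(y_true, y_prob, thresh=0.5):
    """RACC: fraction of samples whose full label row is exactly correct."""
    y_pred = (y_prob >= thresh).astype(int)
    return float(np.mean(np.all(y_pred == y_true, axis=1)))
```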
This paper presents a comprehensive dataset of LoRaWAN technology path loss measurements collected in an indoor office environment, focusing on quantifying the effects of environmental factors on signal propagation. U...
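A dataset like this is typically summarized by fitting the log-distance path loss model PL(d) = PL(d0) + 10·n·log10(d/d0) + X_sigma. The sketch below shows the standard least-squares fit; the measurement arrays are placeholders, not values from the paper.

```python
# Hypothetical sketch: fitting the log-distance path loss model to
# (distance, path-loss) pairs such as those in an indoor LoRaWAN dataset.
import numpy as np

d = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])      # metres (placeholder)
pl = np.array([40.1, 46.3, 54.8, 61.2, 68.9, 75.5])  # dB (placeholder)

d0 = 1.0                                   # reference distance (1 m)
X = np.column_stack([np.ones_like(d), 10 * np.log10(d / d0)])
(pl_d0, n), *_ = np.linalg.lstsq(X, pl, rcond=None)
sigma = np.std(pl - X @ np.array([pl_d0, n]))        # shadowing std dev

print(f"PL(d0)={pl_d0:.1f} dB, exponent n={n:.2f}, sigma={sigma:.2f} dB")
```

The fitted exponent n and shadowing deviation sigma are the quantities through which environmental factors like walls and furniture show up in such measurements.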
Integrating Software Defined Networking (SDN) with Network Function Virtualization (NFV) makes for a compelling architecture, because the former virtualizes the control plane and the latter virtualizes the data plane. As...
ASR is an effective approach that converts human speech into computer actions or text. It involves extracting and determining the noise features, the audio model, and the language model. The extraction and det...
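As context for the feature extraction step this abstract mentions, a common ASR front-end computes MFCCs. The sketch below uses librosa with assumed parameters; the paper's actual feature pipeline is not given in this preview.

```python
# Hypothetical front-end sketch: MFCC feature extraction, a typical first
# step in an ASR pipeline. librosa and these parameters are assumptions.
import librosa
import numpy as np

def extract_mfcc(path, sr=16000, n_mfcc=13):
    """Load audio and return a (frames, 2*n_mfcc) matrix of MFCCs + deltas."""
    y, _ = librosa.load(path, sr=sr)          # resample to 16 kHz mono
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    delta = librosa.feature.delta(mfcc)       # first-order dynamics
    return np.vstack([mfcc, delta]).T         # stack static + delta features
```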
Very recent attacks like LadderLeak have demonstrated the feasibility of recovering a private key via side-channel attacks using just one bit of the secret nonce. ECDSA nonce bias can be exploited in many ways. Some attacks on ECDSA ...
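For background on why nonce leakage is fatal: in the simplest case of a fully leaked nonce, the ECDSA signing equation s = k^-1 (h + r*d) mod n solves directly for the private key d. The lattice attacks this abstract surveys recover k from far less (a single biased bit) and then reduce to the same algebra. A toy sketch with made-up numbers, not the attack from the paper:

```python
# Illustrative sketch only: given a fully known ECDSA nonce k, solve
#   s = k^-1 (h + r*d) mod n   for   d = r^-1 (s*k - h) mod n.
def recover_key(r, s, h, k, n):
    """Solve the ECDSA signing equation for d given a leaked nonce k."""
    return (pow(r, -1, n) * (s * k - h)) % n   # pow(x, -1, n): Python 3.8+

# Toy check on a small prime modulus (placeholders, not a real curve):
n, d, k, h, r = 101, 37, 15, 88, 73
s = (pow(k, -1, n) * (h + r * d)) % n          # build a consistent signature
assert recover_key(r, s, h, k, n) == d
```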
As the use of big data and its potential benefits become more widespread, public and private organizations around the world have realized the imperative of incorporating comprehensive and robust technologies into thei...