This research study focuses on the prediction of inflation rates using two distinct models, Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM), tailored to accommodate the temporal resolutions of monthly and quarterly inflation data. While a comprehensive evaluation across varying time horizons was not conducted, the results suggest that the ARIMA model excels in short-term forecasting, as evidenced by its impressive performance on a 2021 holdout dataset. Conversely, the LSTM model displays potential for medium-term predictions. This study underscores the significance of aligning forecasting models with data characteristics, emphasizes the importance of temporal resolution, and discusses potential avenues for improvement, such as multivariate modeling or alternative techniques like Prophet. The findings have broad implications for economists, policymakers, and analysts seeking precise inflation forecasts, shedding light on the nuanced interplay between models and data in the domain of inflation prediction.
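As a toy illustration of the autoregressive core underlying ARIMA (not the paper's exact model, data, or orders), an AR(1) term can be fit by ordinary least squares and iterated forward to produce short-term forecasts. The synthetic series below is an assumption for demonstration only:

```python
import numpy as np

def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1], the AR core of ARIMA."""
    x_prev, x_next = series[:-1], series[1:]
    A = np.column_stack([np.ones_like(x_prev), x_prev])
    (c, phi), *_ = np.linalg.lstsq(A, x_next, rcond=None)
    return c, phi

def forecast_ar1(last, c, phi, steps):
    """Iterate the fitted recurrence forward for multi-step forecasts."""
    out = []
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Synthetic monthly inflation-like series: AR(1) with phi = 0.8 around 2%
rng = np.random.default_rng(0)
x = [2.0]
for _ in range(199):
    x.append(0.4 + 0.8 * x[-1] + rng.normal(scale=0.1))
x = np.array(x)

c, phi = fit_ar1(x)
print(phi)  # estimated persistence, typically close to the true 0.8
print(forecast_ar1(x[-1], c, phi, steps=3))
```

A full ARIMA additionally differences the series (the "I" term) and models moving-average errors; this sketch shows only why the model suits short horizons: each forecast step decays toward the series mean.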
The smart grid is a modern solution to generate, distribute, and use energy effectively and efficiently. Ensuring the stability of the smart grid is critical to guarantee safe and consistent operation. This study prop...
Solar cells are becoming one of the most vibrant power-generation options among all the renewable energy resources, demanding the attention of researchers pursuing the goal of increased stability and efficiency. To rei...
ISBN (digital): 9798350387414
ISBN (print): 9798350387421
As the Internet of Things (IoT) is poised to become a global phenomenon, it is imperative to schedule the transmissions of IoT devices effectively and in a fair way. Leveraging Long Range (LoRa) technology, we can achieve transmissions that consume minimal power while covering vast distances, aligning with the requirements of IoT devices. However, the proximity of multiple devices within the same area often leads to packet interference and collisions. To address this, our study introduces a pioneering scheduling method utilizing a constellation of Low Earth Orbit (LEO) satellites to manage and streamline the transmission of data from End Devices (EDs). This method employs two LEO satellites: the first satellite assigns the sequence for EDs to dispatch their packets, and the second collects these packets in the predetermined sequence before forwarding them to the LoRa Network Server (LNS). For urgent (URG) communications, EDs can alert the first satellite, which then coordinates with the LNS to schedule these priority transmissions. The LNS generates a schedule that is relayed to the second satellite, informing EDs with URG packets of their specific transmission times and channels. This scheduling approach is designed to optimize channel usage effectively while accommodating the transmission of urgent data.
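A minimal sketch of the scheduling idea, under stated assumptions: urgent (URG) end devices are placed ahead of the others, and slots are assigned round-robin across the available channels. The function and device names are hypothetical; this is not the paper's satellite protocol, only an illustration of priority-first slot/channel assignment:

```python
def build_schedule(normal_eds, urgent_eds, n_channels, slot_s=1.0):
    """Toy slot/channel assignment: URG end devices are scheduled first,
    then the remaining EDs, round-robin over the available channels."""
    order = list(urgent_eds) + [e for e in normal_eds if e not in urgent_eds]
    schedule = {}
    for i, ed in enumerate(order):
        schedule[ed] = {"slot": (i // n_channels) * slot_s,
                        "channel": i % n_channels}
    return schedule

sched = build_schedule(["ed1", "ed2", "ed3", "ed4"], ["ed3"], n_channels=2)
print(sched["ed3"])  # the urgent device gets the first slot on channel 0
```

In the paper's design this assignment is computed by the LNS and relayed through the second LEO satellite; the sketch only shows why serializing transmissions into disjoint slot/channel pairs avoids packet collisions.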
In recent years, the popularity of network intrusion detection systems (NIDS) has surged, driven by the widespread adoption of cloud technologies. Given the escalating network traffic and the continuous evolution of cyber threats, the need for a highly efficient NIDS has become paramount for ensuring robust network security. Typically, intrusion detection systems utilize either a pattern-matching system or leverage machine learning for anomaly detection. While pattern-matching approaches tend to suffer from a high false positive rate (FPR), machine learning-based systems, such as SVM and KNN, predict potential attacks by recognizing distinct features. However, these models often operate on a limited set of features, resulting in lower accuracy and higher FPR. In our research, we introduced a deep learning model that harnesses the strengths of a Convolutional Neural Network (CNN) combined with a Bi-directional LSTM (Bi-LSTM) to learn spatial and temporal data features. The model, evaluated using the NSL-KDD dataset, exhibited a high detection rate with a minimal false positive rate. To enhance accuracy, K-fold cross-validation was employed in training the model. This paper showcases the effectiveness of the CNN with Bi-LSTM algorithm in achieving superior performance across metrics like accuracy, F1-score, precision, and recall. The binary classification model trained on the NSL-KDD dataset demonstrates outstanding performance, achieving a high accuracy of 99.5% after 10-fold cross-validation, with an average accuracy of 99.3%. The model exhibits remarkable detection rates (0.994) and a low false positive rate (0.13). In the multiclass setting, the model maintains exceptional precision (99.25%), reaching a peak accuracy of 99.59% for $\text{k-value}=10$. Notably, the Detection Rate for $\text{k-value}=10$ is 99.43%, and the mean False Positive Rate is calculated as 0.214925.
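The k-fold cross-validation procedure highlighted in the abstract can be sketched independently of the CNN/Bi-LSTM architecture. This is a generic illustration of how the dataset is partitioned into k train/validation splits (the function name and k = 10 mirror the abstract; everything else is an assumption):

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Yield (train, validation) index splits for k-fold cross-validation:
    each sample appears in exactly one validation fold."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

splits = list(kfold_indices(100, k=10))
print(len(splits))        # 10 folds
print(len(splits[0][1]))  # each validation fold holds 10 of the 100 samples
```

Reported accuracies such as "99.5% after 10-fold cross-validation, with an average of 99.3%" correspond to the best and mean validation scores across these ten splits.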
This paper proposes a new authentication method that enables mutual verification between two different PUFs without a server. The output signal (response) of one PUF is hidden in the opponent's input signal (chall...
Parallel to the evolution of recent trends in the cybersecurity industry and the increase in cyberattacks over the last few years, there is renewed interest in the application of software-defined techniques to enforce...
ISBN (print): 9798400708268
This study uses cryptography to tackle the important problem of data security in cloud computing. The Dual Key Encryption (DKE) method uses two keys for both encryption and decryption: a public key for the first encryption round before cloud upload, and a user-only private key for the second round. Decryption proceeds in the reverse order. When tested and simulated with different file sizes on a CloudAnalyst simulator, DKE operates faster and more efficiently than cryptographic methods such as Triple DES (3DES), AES, RSA, and DES. Thanks to developments in information technology, cloud computing has made several services available online; nevertheless, assuring data security remains very difficult. In this context, cryptography is essential and has led to the creation of the novel DKE technique, which is not only far more efficient than well-known cryptographic techniques such as 3DES but also improves data security.
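The two-round structure described above can be sketched with a deliberately insecure XOR-keystream stand-in for each round. This is an illustration of round ordering only, assuming hypothetical key material; it is NOT the paper's cryptography, which uses real public/private-key algorithms:

```python
import os

def xor_round(data, key):
    """Toy one-round 'encryption' by XOR keystream (illustration only;
    NOT secure -- DKE itself relies on real cryptographic primitives)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

public_key = os.urandom(16)   # round 1: applied before cloud upload
private_key = os.urandom(16)  # round 2: held only by the user

plaintext = b"cloud record"
ct = xor_round(xor_round(plaintext, public_key), private_key)
# Decryption runs the two rounds in reverse order
pt = xor_round(xor_round(ct, private_key), public_key)
print(pt == plaintext)  # True
```

The point of the double round is that data at rest in the cloud is never recoverable from the public-key round alone; the second key never leaves the user.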
With the development of plans to replace conventional meters with smart meters to enhance the functionality and performance of the power system, forecasting power consumption is still a challenge due to nonlinear patterns, different types of noise in the datasets, and sometimes a lack of data. Artificial intelligence plays an important role in utilizing the data sent from smart meters, particularly in the area of short-term load forecasting, which is required to meet the needs of consumers. Based on such insight, this study suggests a hybrid model based on a one-dimensional convolutional neural network and gated recurrent units (CONV1D-GRU) to forecast hourly power consumption. The outcome demonstrates that the suggested model is capable of capturing the patterns in historical data pertaining to hourly power use. The suggested model outperformed the competing strategies on the mean absolute percentage error (MAPE%) and coefficient of determination $(R^{2})$ metrics.
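The two evaluation metrics named in the abstract have standard definitions, sketched below on made-up numbers (the data are an assumption; only the formulas are standard):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def r2(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = [100.0, 110.0, 120.0, 130.0]      # hypothetical hourly loads
yhat = [102.0, 108.0, 121.0, 129.0]   # hypothetical forecasts
print(round(mape(y, yhat), 3))  # → 1.355
print(round(r2(y, yhat), 3))    # → 0.98
```

Lower MAPE% and higher $(R^{2})$ (closer to 1) indicate a better forecast, which is the sense in which the CONV1D-GRU model "outperforms" the baselines.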
Using artificial intelligence (AI) to anticipate electrical consumption is one of the established applications of this technology. AI models are analyzed and trained using data related to electricity consumption, including consumption itself, temperatures, working hours, the number of inhabitants, etc. AI models can help reduce electricity expenses and consumption by identifying peak-use periods and creating more efficient energy plans. Another use of artificial intelligence is to advise users on how to use less energy in a more efficient manner. Based on such insight, this study suggests a hybrid model based on a temporal convolutional network and multilayer perceptron (TCN-MLP) to forecast hourly power consumption. The results show that the recommended model can account for the patterns in the historical data relating to hourly power usage. The TCN-MLP hybrid model has proved better than competing approaches on the coefficient of determination $(R^{2})$ and mean absolute percentage error (MAPE%) metrics.
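The defining building block of a temporal convolutional network is the dilated causal convolution, which lets the model see far into the past without leaking future values into a forecast. A minimal single-channel sketch (kernel, dilation, and series are assumptions, not the paper's trained model):

```python
import numpy as np

def causal_dilated_conv(x, weights, dilation):
    """1-D causal convolution with dilation, the TCN building block.
    Output at t uses only x[t], x[t-d], x[t-2d], ... (no future leakage)."""
    k = len(weights)
    pad = dilation * (k - 1)              # left-pad so output stays causal
    xp = np.concatenate([np.zeros(pad), np.asarray(x, float)])
    return np.array([
        sum(weights[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = [1.0, 2.0, 3.0, 4.0, 5.0]
# Kernel [0.5, 0.5] with dilation 2: output[t] = 0.5*x[t] + 0.5*x[t-2]
print(causal_dilated_conv(x, [0.5, 0.5], 2))
```

Stacking such layers with growing dilations (1, 2, 4, ...) gives an exponentially large receptive field; in the hybrid model, an MLP head then maps these temporal features to the hourly consumption forecast.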