As the global migration to post-quantum cryptography (PQC) progresses, Korea established the Post-Quantum Cryptography Research Center to acquire PQC technology, and the center leads the KpqC Competition. In February 2022, the KpqC Competition issued a call for proposals for PQC algorithms, and by November 2022, 16 candidates (7 KEMs and 9 DSAs) had been selected for the first round. Round 1 submissions are currently being evaluated with respect to security, efficiency, and scalability in various environments. At this stage, it is appropriate to analyze the software of the first-round submissions in order to improve their quality. In this paper, we present performance and implementation-security analysis results based on a dependency-free approach to external libraries: we build extensive tests with no external dependencies by replacing libraries that complicate the build process with hard-coded equivalents. From the performance perspective, we report profiling results, execution time, and memory usage for each KpqC candidate. From the implementation-security perspective, we examine bugs and errors in the actual implementations using Valgrind, a metamorphic-testing methodology that provides wide test coverage, and checks for constant-time implementation against timing attacks. Until the KpqC standard algorithms are announced, we argue that continuous integration of such extensive tests will lead to high-quality, clean code for the KpqC candidates.
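As a rough illustration of the kind of constant-time check applied to implementations, the sketch below runs a crude fixed-versus-random timing test in the spirit of dudect-style leakage detection; the `timing_leakage_check` helper and the toy routine it times are hypothetical and do not reproduce the exact tooling used in the paper.

```python
import time
import numpy as np

def timing_leakage_check(func, fixed_input, random_inputs, runs=2000):
    # Crude fixed-vs-random timing test (dudect-style idea, not the paper's tooling).
    # `func` stands in for a secret-dependent routine such as a KEM decapsulation.
    t_fixed, t_random = [], []
    for k in range(runs):
        start = time.perf_counter_ns()
        func(fixed_input)
        t_fixed.append(time.perf_counter_ns() - start)

        start = time.perf_counter_ns()
        func(random_inputs[k % len(random_inputs)])
        t_random.append(time.perf_counter_ns() - start)

    a, b = np.asarray(t_fixed, float), np.asarray(t_random, float)
    # Welch's t statistic; |t| far above ~4.5 hints at input-dependent timing.
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

# Toy demonstration with a deliberately input-dependent routine.
toy = lambda x: sum(range(x))
print(timing_leakage_check(toy, 1000, list(np.random.randint(1, 2000, size=256))))
```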
The transition of Control Plane (CP) block functions into software entities, as proposed by 3GPP, necessitates periodic downtime for maintenance activities such as software upgrades or failure recovery. This downtime requires disconnecting all User Equipment (UE) connections to the CP, triggering the UE reattach procedure and resulting in increased UE power consumption and spectrum wastage. To mitigate these challenges, optimal CP upgrade timings should align with periods of low traffic. In this paper, we propose an AI/ML-based procedure that autonomously determines the optimal time to upgrade CP block functions, eliminating the need for manual intervention by operators. Our approach analyzes traffic conditions using statistical data from several CP blocks managing base stations across various areas, including residential zones and non-residential zones such as subways, shopping complexes, and hospitals. Leveraging Seasonal Auto-Regressive Integrated Moving Average (SARIMA) forecasting, we predict bearer statistical data to calculate the optimal CP software upgrade time and validate the prediction using Z-Score analysis. In addition, to address suboptimal upgrade timings, we propose CP Outage Handling Procedure (COHP) v2, which preserves UE contexts during CP upgrades. Our results demonstrate SARIMA's high accuracy in predicting lean traffic conditions, with an R-Squared score of 0.99. Furthermore, upgrading CP software during predicted lean periods yields substantial UE power savings ranging from 80% to 97% compared to manual upgrades.
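As a minimal sketch of the forecasting step, the snippet below fits a SARIMA model to synthetic hourly bearer counts with a 24-hour seasonal pattern and picks the predicted lean-traffic hour as the upgrade window; the data, model orders, and the use of statsmodels are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic hourly bearer counts for two weeks with a daily (24 h) seasonal pattern.
rng = np.random.default_rng(0)
hours = pd.date_range("2024-01-01", periods=24 * 14, freq="h")
base = 1000 + 600 * np.sin(2 * np.pi * (hours.hour - 6) / 24)
bearers = pd.Series(base + rng.normal(0, 50, len(hours)), index=hours)

# Illustrative SARIMA orders; in practice these would be selected from the data.
model = SARIMAX(bearers, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
fit = model.fit(disp=False)

forecast = fit.forecast(steps=24)   # next day's predicted bearer counts
upgrade_hour = forecast.idxmin()    # predicted lean-traffic hour
print(f"Suggested CP upgrade window: {upgrade_hour} ({forecast.min():.0f} bearers)")
```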
Multiarmed Bandit (MAB) algorithms identify the best arm among multiple arms via an exploration-exploitation trade-off without prior knowledge of arm statistics. Their usefulness in wireless radio, Internet of Things (IoT), and robotics demands deployment on edge devices, and hence a mapping onto a system-on-chip (SoC) is desired. Theoretically, the Bayesian-approach-based Thompson sampling (TS) algorithm offers better performance than the frequentist-approach-based upper confidence bound (UCB) algorithm. However, TS is not synthesizable because of the Beta function. We address this problem by approximating the Beta function via a pseudorandom number generator (PRNG)-based architecture and efficiently realize the TS algorithm on a Zynq SoC. In practice, the type of arm distribution (e.g., Bernoulli, Gaussian) is unknown, and hence a single algorithm may not be optimal. We therefore propose a reconfigurable and intelligent MAB (RI-MAB) framework, in which intelligence enables the identification of the appropriate MAB algorithm in an unknown environment, and reconfigurability allows on-the-fly switching between algorithms on the SoC. This eliminates the need for parallel implementation of multiple algorithms, resulting in substantial savings in resources and power consumption. We analyze the functional correctness, area, power, and execution time of the proposed and existing architectures for various arm distributions, word lengths, and hardware-software codesign approaches, and we demonstrate the superiority of the RI-MAB algorithm and its architecture over the TS and UCB algorithms.
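For reference, a minimal software sketch of Bernoulli Thompson sampling is shown below; it draws Beta posterior samples with NumPy, which is exactly the step that is not directly synthesizable in hardware and that the paper replaces with a PRNG-based approximation. The arm probabilities and horizon are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
true_means = [0.2, 0.5, 0.7]          # unknown Bernoulli arm probabilities (hypothetical)
successes = np.ones(len(true_means))  # Beta(1, 1) priors
failures = np.ones(len(true_means))

total_reward = 0
for t in range(10_000):
    # Thompson sampling: draw one sample per arm from its Beta posterior,
    # then pull the arm with the largest sample.
    samples = rng.beta(successes, failures)
    arm = int(np.argmax(samples))
    reward = rng.random() < true_means[arm]
    successes[arm] += reward
    failures[arm] += 1 - reward
    total_reward += reward

print("Pulls per arm:", successes + failures - 2)
print("Total reward:", total_reward)
```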
Cloud computing is a trending topic that has attracted attention as a means of attacking complicated problems. Likewise, multi-criteria decision-making (MCDM) methods are of broad interest in decision-analysis situations, and scientific research has shown notable progress in multi-criteria decision-making techniques. Nevertheless, adequate software is still needed to handle the data produced by multi-criteria analysis. At the same time, in order to present a solution from a homogeneous point of view, MCDM problems require the use of multiple data sets. Accordingly, this article presents a study on creating software that incorporates dimensional analysis using the intuitionistic fuzzy method. The focus of this study is the construction of a software package that exposes the intuitionistic fuzzy dimensional analysis (IFDA) technique through an interface. The software's primary feature is that it simplifies manipulating data with a fuzzy-set approach for the assessment of multi-criteria decision-making problems. The outcomes show that our proposal, which makes use of intuitionistic fuzzy sets, is a powerful tool for solving MCDM problems.
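As a hedged illustration of intuitionistic fuzzy aggregation for MCDM (using Xu's IFWA operator rather than the paper's IFDA technique), the sketch below aggregates (membership, non-membership) ratings for two hypothetical alternatives and ranks them with the score function s = mu - nu.

```python
import numpy as np

def ifwa(values, weights):
    """Intuitionistic fuzzy weighted averaging (Xu's IFWA operator).
    values: list of (membership, non-membership) pairs; weights sum to 1."""
    mu = np.array([v[0] for v in values])
    nu = np.array([v[1] for v in values])
    w = np.asarray(weights)
    agg_mu = 1.0 - np.prod((1.0 - mu) ** w)
    agg_nu = np.prod(nu ** w)
    return agg_mu, agg_nu

# Hypothetical example: two cloud providers rated on three weighted criteria.
weights = [0.5, 0.3, 0.2]
alt_a = [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)]
alt_b = [(0.5, 0.4), (0.9, 0.05), (0.6, 0.3)]

for name, ratings in [("A", alt_a), ("B", alt_b)]:
    mu, nu = ifwa(ratings, weights)
    print(f"Alternative {name}: score = {mu - nu:.3f}")  # higher score ranks better
```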
Positioning accuracy can be compromised by software and hardware heterogeneity among intelligent mobile devices, because such heterogeneity leads to significant differences in the received signal strength indicator (RSSI) of the same Bluetooth access point (AP) captured at the same acquisition point. The aim of this study is to calibrate the RSSI received by the Bluetooth sensors of distinct intelligent mobile terminal devices so as to resolve these software and hardware heterogeneity issues. First, we propose a honey badger algorithm back-propagation neural network (HBA-BPNN) model for calibration. Second, this article uses an indoor fingerprint localization algorithm based on an improved generalized regression neural network (GRNN) model and combines it with the calibration algorithm to build a better localization model. Finally, we verify the effectiveness of the HBA-BPNN calibration model on different test intelligent mobile terminal devices and compare the proposed calibration algorithm with other calibration algorithms. The experimental comparative analysis shows that positioning accuracy reaches 0.84 m when the proposed calibration algorithm is combined with the positioning algorithm.
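A minimal calibration sketch in the same spirit is shown below: a plain BPNN regressor (scikit-learn's MLPRegressor, without the honey badger hyperparameter search used in the paper) learns to map a target device's RSSI readings back to a reference device's scale; the paired measurements are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical paired measurements: RSSI (dBm) of the same APs captured at the
# same points by a reference device and by a heterogeneous target device.
rssi_reference = rng.uniform(-90, -40, size=(500, 1))
rssi_target = 0.9 * rssi_reference - 6 + rng.normal(0, 1.5, size=(500, 1))

# Plain BPNN regressor mapping target-device readings back to the reference scale.
calibrator = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
calibrator.fit(rssi_target, rssi_reference.ravel())

new_reading = np.array([[-72.0]])
print("Calibrated RSSI:", calibrator.predict(new_reading)[0])
```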
The SZZ algorithm and its variants have been extensively utilized for identifying bug-inducing commits based on bug-fixing commits. However, these algorithms face challenges when there are no deletion lines in the bug-fixing commit. Previous studies have attempted to address this issue by tracing back all lines in the block that encapsulates the added lines. However, this method is too coarse-grained and suffers from low precision. To address this issue, we propose a novel method in this paper called Sem-SZZ, which is based on fine-grained semantic analysis. Initially, we observe that a significant number of bug-inducing commits can be identified by tracing back the unmodified lines near added lines, resulting in improved precision and F1-score. Building on this observation, we conduct a more fine-grained semantic analysis. We begin by performing program slicing to extract the program part near the added lines. Subsequently, we compare the program's states between the previous version and the current version, focusing on data flow and control flow differences based on the extracted program part. Finally, we extract statements contributing to the bug based on these differences and utilize them to locate bug-inducing commits. We also extend our approach to fit the scenario where the bug-fixing commits contain deleted lines. Experimental results demonstrate that Sem-SZZ outperforms the state-of-the-art methods in identifying bug-inducing commits, regardless of whether the bug-fixing commit contains deleted lines.
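A simplified sketch of the line-tracing idea, assuming a local Git repository and a list of added-line numbers from the bug-fixing commit, is shown below; it blames the unmodified context around each added line in the pre-fix revision to collect candidate bug-inducing commits, and it omits the program slicing and data/control-flow comparison that Sem-SZZ performs.

```python
import subprocess

def blame_context_lines(repo_path, fix_commit, file_path, added_lines, window=3):
    """Collect candidate bug-inducing commits by blaming unmodified lines near
    each added line (approximation: line numbers from the fixed version are
    reused against the parent revision)."""
    candidates = set()
    for line_no in added_lines:
        start = max(1, line_no - window)
        end = line_no + window
        out = subprocess.run(
            ["git", "-C", repo_path, "blame", "--porcelain",
             f"-L{start},{end}", f"{fix_commit}^", "--", file_path],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            # Porcelain hunk headers look like "<40-char sha> <orig_line> <final_line> ...".
            parts = line.split()
            if len(parts) >= 3 and len(parts[0]) == 40 and parts[1].isdigit():
                candidates.add(parts[0])
    return candidates

# Hypothetical usage: added lines 120 and 121 in src/parser.c of fix commit abc123.
# print(blame_context_lines(".", "abc123", "src/parser.c", [120, 121]))
```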
Employing Artificial Intelligence techniques to address challenges in requirements elicitation is gaining traction. Although nine systematic literature reviews have been published on AI-based solutions in the requirements elicitation domain, to our knowledge, these studies do not cover a broad spectrum of elicitation tasks, data sources used for training, the performance of these algorithms, nor do they pinpoint the strengths and limitations of the algorithms used. This study contributes to the field by presenting a systematic literature review that explores the use of machine learning and NLP techniques in the elicitation phase of requirements engineering. The following research questions are addressed: 1) What elicitation tasks are supported by AI and what AI algorithms were employed? 2) What data sources have been used to construct AI-based solutions? 3) What performance outcomes were achieved? 4) What are the strengths and limitations of the current AI methods? Initially, 665 papers were retrieved from six data sources, and ultimately, 122 articles were selected for the review. This literature review identifies fifteen elicitation tasks currently supported by artificial intelligence and presents twelve publicly available data sources used for training these approaches. Furthermore, the study uncovers common limitations in current studies and suggests potential research directions. Overall, this systematic literature review provides insights into future research prospects for applying AI techniques to problems in the requirements elicitation domain.
In cloud computing environments, virtual machines (VMs) running on cloud servers are vulnerable to shared cache attacks, such as Spectre and Foreshadow. By exploiting memory sharing among VMs, these attacks can compromise cryptographic keys in software modules. Program obfuscation serves as a promising countermeasure against key compromise by transforming a program into an unintelligible form while preserving its functionality. Unfortunately, for certain cryptographic algorithms, such as digital signature schemes, it is extremely difficult to construct provably secure obfuscators using traditional obfuscation approaches. To address this challenge, this study proposes a novel approach to constructing obfuscators for cryptographic algorithms, named space-hard obfuscation, which mitigates threats from adversaries capable of acquiring only a limited amount of memory through shared cache attacks. Considering the extensive use of the Elliptic Curve Digital Signature Algorithm (ECDSA) in cloud-based Blockchain-as-a-Service (BaaS) and its potential vulnerability to shared cache attacks, we construct an exemplary, provably secure scheme using space-hard obfuscation for ECDSA. Experimental results demonstrate the scheme's high efficiency on cloud servers, as well as its successful integration with Hyperledger Fabric and Ethereum, two widely used blockchain systems.
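For context, the snippet below shows plain ECDSA signing and verification with the Python cryptography package, i.e., the unprotected primitive that a space-hard obfuscator would protect; it does not implement the obfuscation scheme itself, and the curve and payload are arbitrary choices for illustration.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Curve choice is illustrative (Ethereum uses secp256k1; Fabric typically uses P-256).
private_key = ec.generate_private_key(ec.SECP256K1())
payload = b"example transaction payload"

# Sign with ECDSA over SHA-256; in a deployed BaaS node this private key is the
# secret that shared cache attacks target and that obfuscation aims to hide.
signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# Verification raises InvalidSignature on failure.
private_key.public_key().verify(signature, payload, ec.ECDSA(hashes.SHA256()))
print("Signature verified")
```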
Deep learning algorithms are used in a variety of advanced applications, including computer vision and large language models, owing to their increasing success over traditional approaches. Enabling these algorithms on embedded systems is essential to extend the success of deep learning methods to cutting-edge real-world systems and applications. The configurability of embedded systems and software applications makes them adaptable to different performance requirements such as latency, power, memory, and accuracy. However, the vast number of hardware-software configuration combinations makes multi-objective optimization over multiple performance metrics highly complex. Additionally, the lack of an analytical form for the problem makes it harder to identify Pareto-optimal configurations. To address these challenges, we propose a fast, accurate, and scalable search algorithm to explore these search spaces efficiently. We evaluate our algorithm, together with state-of-the-art methods, on different deep learning applications running on different hardware configurations, creating different search spaces, and show that our algorithm outperforms existing approaches by finding the Pareto frontier more accurately and up to 15 times faster.
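As a small, generic illustration of the optimization target (not the proposed search algorithm), the sketch below identifies the Pareto-optimal rows of a table of measured hardware/software configurations, where every objective is expressed so that smaller is better; the example measurements are hypothetical.

```python
import numpy as np

def pareto_frontier(points):
    """Return indices of Pareto-optimal rows, assuming every objective is to be
    minimized (e.g. latency, power, error = 1 - accuracy)."""
    pts = np.asarray(points, dtype=float)
    optimal = []
    for i in range(len(pts)):
        # Row i is dominated if some row is <= in all objectives and < in at least one.
        dominated = np.any(
            np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1)
        )
        if not dominated:
            optimal.append(i)
    return optimal

# Hypothetical measurements: columns = (latency ms, power W, 1 - accuracy).
configs = [
    [12.0, 3.1, 0.08],
    [ 9.5, 4.0, 0.08],
    [12.5, 3.2, 0.09],   # dominated by the first configuration
    [20.0, 2.0, 0.10],
]
print("Pareto-optimal configuration indices:", pareto_frontier(configs))
```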
Neural network (NN) models implemented in embedded devices have been shown to be susceptible to side-channel attacks (SCAs), allowing recovery of proprietary model parameters such as weights and biases. Countermeasures currently used for protecting cryptographic implementations can be tailored to protect embedded NN models. Shuffling, a hiding-based countermeasure that randomizes the order of computations, was shown to be vulnerable to SCA when the Fisher-Yates algorithm is used. In this article, we propose an SCA-secure design of the Fisher-Yates algorithm. By integrating a masking technique for modular reduction and Blakely's method for modular multiplication, we remove the vulnerability in the division operation that led to side-channel leakage in the original version of the algorithm. We experimentally show that the countermeasure is effective against SCA by mounting a correlation power analysis (CPA) attack on an embedded NN model implemented on an ARM Cortex-M4. Compared to the original proposal, the memory overhead is twice the size of the largest layer of the network, while the time overhead ranges from 4% for a layer with 100 neurons to 0.49% for a layer with 1000 neurons.
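For reference, a plain (unprotected) Fisher-Yates shuffle is sketched below; the modular reduction of a raw PRNG word to a bounded index is the division-based step whose leakage the paper removes with masking and Blakely's modular multiplication, and that protected variant is not reproduced here.

```python
import os

def fisher_yates_shuffle(order):
    """Standard (unprotected) Fisher-Yates shuffle of computation indices.
    The `% (i + 1)` reduction of a raw PRNG word is the leaky, division-based
    operation addressed by the SCA-secure design (not shown here)."""
    for i in range(len(order) - 1, 0, -1):
        rand_word = int.from_bytes(os.urandom(4), "little")
        j = rand_word % (i + 1)        # modular reduction: the leaky operation
        order[i], order[j] = order[j], order[i]
    return order

# Example: randomize the processing order of 8 neurons in a layer.
print(fisher_yates_shuffle(list(range(8))))
```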