By invitation, we revisit our 1990 paper - The Evolving Philosophers: Dynamic Change Management. The paper addressed the problem of safely updating large complex distributed systems while permitting the system to continue running to provide at least a partial service. We outline the context in which we addressed this problem, the key ideas behind the proposed solution and then focus on the development and influence of these ideas in both our own subsequent research and that of others.
This article discusses a formalization of aspects of Cyber-Sovereignty (CyS) for information and communication technology (ICT), linking them to technological trustworthiness and deriving an associated paradigm for hard- and software design. The upcoming 6G ICT standard is considered a keystone within modern society's increasing interconnectedness and automatization, as it provides the necessary technological infrastructure for applications such as the Metaverse or large-scale digital twinning. Since emerging technological systems increasingly affect sensitive human goods, hard- and software manufacturers must consider a new dimension of societal and judicial constraints in the context of technological trustworthiness. This article aims to establish a formalized theory of specific aspects of CyS, providing a paradigm for hard- and software engineering in ICT. This paradigm is directly applicable in formal technology assessment and ensures that the relevant facets of CyS - specifically, the principle of Algorithmic Transparency (AgT) - are satisfied. The framework follows an axiomatic approach. Particularly, the formal basis of our theory consists of four fundamental assumptions about the general nature of physical problems and algorithmic implementations. This formal basis allows for drawing general conclusions on the relation between CyS and technological trustworthiness and entails a formal meta-thesis on AgT in digital computing.
A systematic approach is essential to help understand the system behaviors that lead to critical cyberphysical system failures. We present a methodology that identifies and isolates crucial anomalous behaviors that can hamper the system and are often challenging to capture.
An efficient approach for modelling 3D millimeter-wave body scans is presented. The body is represented in the stereolithography (STL) format as many triangles. We pre-cast scattering points in each triangle, where the number of points is determined by the area of the triangle and the minimum wavelength. The signal acquired at a receiver is then calculated by summing the contributions of all scattering points. In addition, the frequency-dependent dielectric parameter of human skin is used to calculate the reflection coefficient. Signals generated by the simulation software were validated by reconstructing whole-body images using a fast Fourier transform algorithm. The simulation data were compared with those from HFSS SBR+ and with real measurements. The obtained images and subsequent data analysis demonstrate that the accuracy of the presented simulation technique is acceptable and that it can be used for rapid millimeter-wave body-scan modelling.
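The two building blocks of this abstract's pipeline, sizing the number of scattering points per triangle from its area and the minimum wavelength, and coherently summing all point contributions at the receiver, can be sketched as below. This is a minimal illustration under assumed simplifications (free-space phase only, a hypothetical sampling density of two points per wavelength, no skin reflection coefficient), not the authors' actual simulator.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def points_per_triangle(area, f_max, density=2.0):
    """Number of scattering points for one STL triangle, set by its area
    and the minimum wavelength (highest frequency). The `density` of
    points per wavelength per dimension is an illustrative choice."""
    lam_min = C / f_max
    n = int(np.ceil(area / (lam_min / density) ** 2))
    return max(n, 1)

def received_signal(scatterers, reflectivity, tx, rx, freq):
    """Coherent sum of contributions from all scattering points for one
    transmit/receive pair at a single frequency (free-space model: phase
    from the round-trip path length, 1/d amplitude decay)."""
    lam = C / freq
    k = 2.0 * np.pi / lam
    d = (np.linalg.norm(scatterers - tx, axis=1)
         + np.linalg.norm(scatterers - rx, axis=1))
    return np.sum(reflectivity * np.exp(-1j * k * d) / d)
```

In the full model described in the abstract, `reflectivity` would come from the frequency-dependent dielectric parameter of human skin rather than being a free constant.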
Neuromorphic computing is an emerging research field that aims to develop new intelligent systems by integrating theories and technologies from multiple disciplines, such as neuroscience, deep learning and microelectronics. Various software frameworks have been developed for related fields, but an efficient framework dedicated to spike-based computing models and algorithms is lacking. In this work, we present a Python-based spiking neural network (SNN) simulation and training framework, named SPAIC, that aims to support brain-inspired model and algorithm research integrated with features from both deep learning and neuroscience. To integrate different methodologies from multiple disciplines and balance flexibility and efficiency, SPAIC is designed with a neuroscience-style frontend and a deep learning-based backend. Various types of examples are provided to demonstrate the wide usability of the framework, including neural circuit simulation, deep SNN learning and neuromorphic applications. As a user-friendly, flexible, and high-performance software tool, SPAIC will help accelerate the growth and broaden the applicability of neuromorphic computing methodologies.
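As a minimal illustration of the kind of spike-based model such a framework simulates (this is a generic leaky integrate-and-fire neuron, not SPAIC's actual API), the core update loop can be sketched as:

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=10.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward `v_reset` with time constant `tau`, integrates the input
    current, and emits a spike (1) when it crosses `v_th`, after which
    it is reset. Returns the binary spike train."""
    v = v_reset
    spikes = []
    for i_t in input_current:
        v += dt / tau * (-(v - v_reset) + i_t)
        if v >= v_th:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)
```

With a weak constant input the potential settles below threshold and no spikes are emitted; a stronger input drives periodic firing. Frameworks like SPAIC wrap such neuron dynamics in reusable, trainable network components.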
Test case optimization (TCO) reduces the software testing cost while preserving its effectiveness. However, to solve TCO problems for large-scale and complex software systems, substantial computational resources are required. Quantum approximate optimization algorithms (QAOAs) are promising combinatorial optimization algorithms that rely on quantum computational resources, with the potential to offer increased efficiency compared to classical approaches. Several proof-of-concept applications of QAOAs for solving combinatorial problems, such as portfolio optimization, energy optimization in power systems, and job scheduling, have been proposed. Given the lack of investigation into QAOA's application to TCO problems, and motivated by the computational challenges of TCO and the potential of QAOAs, we present IGDec-QAOA to formulate a TCO problem as a QAOA problem and solve it on both ideal and noisy quantum computer simulators, as well as on a real quantum computer. To solve larger TCO problems that require more qubits than are currently available, we integrate a problem decomposition strategy with the QAOA. We performed an empirical evaluation with five TCO problems and four publicly available industrial datasets from ABB, Google, and Orona to compare various configurations of IGDec-QAOA, assess its decomposition strategy for handling large datasets, and compare its performance with classical algorithms (i.e., Genetic Algorithm (GA) and Random Search). Based on the evaluation results achieved on an ideal simulator, we recommend the best configuration of our approach for TCO problems. We also demonstrate that our approach matches the effectiveness of GA and outperforms GA on two of the five TCO problems studied. In addition, we observe that, on the noisy simulator, IGDec-QAOA achieves performance similar to that on the ideal simulator. Finally, we demonstrate the feasibility of IGDec-QAOA on a real quantum computer.
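To make the formulation concrete, a TCO objective of the kind QAOA encodes as a cost Hamiltonian can be sketched classically: each binary variable x_i marks whether test i is selected, and the objective trades execution cost against a penalty for uncovered requirements. The cost values, coverage sets, and penalty weights below are hypothetical, and the brute-force solver stands in for the quantum optimizer purely for illustration.

```python
import itertools

def tco_objective(x, costs, covers, alpha=1.0, beta=10.0):
    """Cost of a selection x (tuple of 0/1 per test case): total
    execution cost of the selected tests plus a penalty `beta` for
    every requirement left uncovered."""
    exec_cost = sum(c for xi, c in zip(x, costs) if xi)
    covered = set().union(*(covers[i] for i, xi in enumerate(x) if xi))
    all_reqs = set().union(*covers)
    return alpha * exec_cost + beta * len(all_reqs - covered)

def brute_force_select(costs, covers):
    """Exhaustive classical baseline over all 2^n selections. In a QAOA
    formulation, the same objective becomes a diagonal cost Hamiltonian
    over n qubits, and the circuit samples low-cost bitstrings."""
    n = len(costs)
    return min(itertools.product([0, 1], repeat=n),
               key=lambda x: tco_objective(x, costs, covers))
```

The exponential blow-up of this enumeration is exactly why decomposition strategies such as the one integrated into IGDec-QAOA are needed once the problem exceeds the available qubit count.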
Pore-based high-resolution fingerprint recognition has been researched for many years, with studies demonstrating that pores can improve the accuracy of fingerprint identification. However, existing methods that rely solely on pores to measure the similarity between two fingerprints face challenges, particularly when dealing with partial fingerprints with limited overlapping areas. This article presents a novel high-resolution fingerprint recognition method that integrates multiple features. The proposed approach consists of three main steps. First, pixelwise correspondences are established using a dense matching algorithm, providing one-to-one matched pixels for overall similarity estimation and image alignment. On the basis of the alignment, pores outside the overlapping regions are removed. Second, pores within the common areas are matched using a partial graph matching algorithm, which reduces the impact of outliers and noise on the matching results. Since most of the outliers are already eliminated during the alignment step, the accuracy of pore matching is further enhanced. Third, the overlapping areas of the two fingerprint images are used to calculate the overall similarity using a vision transformer (ViT) network. The final similarity between the two fingerprints is computed by integrating the pore matching results with the overall similarity of the overlapping areas. Finally, experimental evaluations are conducted to assess the performance of the proposed method.
As the global migration to post-quantum cryptography (PQC) progresses actively, in Korea the Post-Quantum Cryptography Research Center has been established to acquire PQC technology, leading the KpqC Competition. In February 2022, the KpqC Competition issued a call for proposals for PQC algorithms. By November 2022, 16 candidates had been selected for the first round (7 KEMs and 9 DSAs). Currently, Round 1 submissions are being evaluated with respect to security, efficiency, and scalability in various environments. At this stage, it is judged appropriate to analyze the software of the first-round submissions in order to improve its quality. In this paper, we present performance and implementation-security analysis results based on a dependency-free approach to external libraries. Namely, we configure extensive tests with no dependencies by replacing external libraries that can complicate the build process with hard-coded implementations. From the performance perspective, we provide profiling, execution-time, and memory-usage results for each of the KpqC candidates. From the implementation-security perspective, we examine bugs and errors in the actual implementations using the Valgrind tool, a metamorphic testing methodology that provides wide test coverage, and checks for constant-time implementation against timing attacks. Until the KpqC standard algorithm is announced, we argue that continuous integration of extensive tests will lead to high-quality, clean code for the KpqC candidates.
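To illustrate what the constant-time checks in such an analysis target, the classic contrast between an early-exit comparison and a data-independent one can be sketched as below. This is a generic textbook example, not code from any KpqC submission.

```python
def leaky_compare(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: the running time depends on the position
    of the first differing byte, which a timing attack can exploit to
    recover secret data byte by byte."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    """Data-independent comparison: all byte differences are accumulated
    with OR/XOR, so the running time does not depend on where (or
    whether) the inputs diverge for equal-length inputs."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

Python's standard library offers `hmac.compare_digest` for exactly this purpose; in C implementations, the analogous pattern replaces `memcmp` on secret data.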
The transition of Control Plane (CP) block functions into software entities, as proposed by 3GPP, necessitates periodic downtime for maintenance activities such as software upgrades or failure recovery. This downtime requires disconnecting all User Equipment (UE) connections to the CP, triggering the UE reattach procedure and resulting in increased UE power consumption and spectrum wastage. To mitigate these challenges, CP upgrades should be timed to coincide with periods of low traffic. In this paper, we propose an AI/ML-based procedure to autonomously determine the optimal time to upgrade CP block functions, eliminating the need for manual intervention by operators. Our approach analyzes traffic conditions using statistical data from several CP blocks managing base stations across various areas, including residential and non-residential zones such as subways, shopping complexes, and hospitals. Leveraging Seasonal Auto-Regressive Integrated Moving Average (SARIMA) forecasting, we predict bearer statistical data to calculate the optimal CP software upgrade time, validated using Z-score analysis. In addition, to address suboptimal upgrade timings, we propose CP Outage Handling Procedure (COHP) v2, which preserves UE contexts during CP upgrades. Our results demonstrate SARIMA's high accuracy in predicting lean traffic conditions, with an R-squared score of 0.99. Furthermore, upgrading CP software during predicted lean periods yields substantial UE power savings of 80% to 97% compared to manual upgrades.
Multiarmed Bandit (MAB) algorithms identify the best arm among multiple arms via an exploration-exploitation trade-off, without prior knowledge of arm statistics. Their usefulness in wireless radio, Internet of Things (IoT), and robotics demands deployment on edge devices, and hence a mapping onto a system-on-chip (SoC) is desired. Theoretically, the Bayesian Thompson sampling (TS) algorithm offers better performance than the frequentist upper confidence bound (UCB) algorithm. However, TS is not synthesizable due to the Beta function. We address this problem by approximating it via a pseudorandom number generator (PRNG)-based architecture and efficiently realize the TS algorithm on a Zynq SoC. In practice, the type of arm distribution (e.g., Bernoulli, Gaussian) is unknown, and hence a single algorithm may not be optimal. We propose a reconfigurable and intelligent MAB (RI-MAB) framework, in which intelligence enables the identification of the appropriate MAB algorithm in an unknown environment, and reconfigurability allows on-the-fly switching between algorithms on the SoC. This eliminates the need for parallel implementation of algorithms, resulting in substantial savings in resources and power consumption. We analyze the functional correctness, area, power, and execution time of the proposed and existing architectures for various arm distributions, word lengths, and hardware-software codesign approaches. We demonstrate the superiority of the RI-MAB framework and its architecture over standalone TS and UCB architectures.
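The Thompson sampling loop for Bernoulli arms that the abstract refers to can be sketched as follows. This software reference uses Python's exact `betavariate` sampler; the point of the paper's PRNG-based architecture is precisely that this Beta sampling is not directly synthesizable in hardware and must be approximated. The arm probabilities and horizon below are illustrative.

```python
import random

def thompson_sampling(probs, horizon, seed=0):
    """Bernoulli Thompson sampling: keep a Beta(successes + 1,
    failures + 1) posterior per arm, draw one sample from each posterior,
    pull the arm with the largest sample, and update its posterior with
    the observed 0/1 reward. Returns the per-arm pull counts."""
    rng = random.Random(seed)
    n = len(probs)
    succ = [0] * n
    fail = [0] * n
    for _ in range(horizon):
        samples = [rng.betavariate(succ[i] + 1, fail[i] + 1)
                   for i in range(n)]
        arm = max(range(n), key=lambda i: samples[i])
        if rng.random() < probs[arm]:  # simulated Bernoulli reward
            succ[arm] += 1
        else:
            fail[arm] += 1
    return [succ[i] + fail[i] for i in range(n)]
```

Over a modest horizon the posterior of the better arm concentrates and it receives the large majority of pulls, which is the behavior a hardware approximation of the Beta sampler must preserve.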