This paper presents a novel method to obtain non-power-of-two (NP2) fast Fourier transform (FFT) flow graphs based on a new prime factor algorithm (PFA). The FFT flow graph is crucial for designing FFT architectures, but previous works only provide systematic approaches to build flow graphs for power-of-two (P2) sizes. Thus, the derivation of NP2 flow graphs is an important step towards the design of efficient NP2 FFT architectures. The proposed approach consists of two independent parts. On the one hand, it obtains all the possible index mappings that lead to a flow graph with no rotations between butterflies. On the other hand, it determines the permutations between butterflies in the flow graph. By combining these two parts, the order of the inputs and outputs is derived. As a result, the entire flow graph is obtained systematically. Additionally, the proposed approach generates all the possible flow graphs for a given factorization of the FFT size. When the proposed flow graphs are implemented directly in hardware, the reduction in operations for NP2 FFTs leads to a significant reduction in area and power consumption with respect to P2 FFTs of similar sizes. In particular, the proposed 30-point and 60-point FFTs show a significant improvement over previous efficient P2 FFTs. This remarkable fact sets NP2 FFTs at the forefront of FFT research after decades in second place behind P2 FFTs.
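The "no rotations between butterflies" property comes from the coprime index mappings used by prime factor algorithms. As background only (this is the classic Good-Thomas PFA, not the paper's new derivation), the sketch below decomposes an N = N1·N2 point DFT with gcd(N1, N2) = 1 into two stages of small DFTs with no twiddle-factor rotations in between; all function names are illustrative.

```python
import cmath

def naive_dft(x):
    """Reference O(N^2) DFT, used here only for checking."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def pfa_dft(x, N1, N2):
    """Good-Thomas PFA for N = N1*N2 with gcd(N1, N2) = 1 (illustrative)."""
    N = N1 * N2
    inv_N2 = pow(N2, -1, N1)  # N2^{-1} mod N1 (needs Python 3.8+)
    inv_N1 = pow(N1, -1, N2)  # N1^{-1} mod N2
    # Input index mapping: n = (N2*n1 + N1*n2) mod N ("Ruritanian" map).
    y = [[x[(N2 * n1 + N1 * n2) % N] for n2 in range(N2)] for n1 in range(N1)]
    # Stage 1: N1 rows of N2-point DFTs -- no twiddle rotations follow.
    y = [naive_dft(row) for row in y]
    # Stage 2: N2 columns of N1-point DFTs.
    cols = [naive_dft([y[n1][k2] for n1 in range(N1)]) for k2 in range(N2)]
    # Output index mapping via the Chinese Remainder Theorem.
    X = [0j] * N
    for k1 in range(N1):
        for k2 in range(N2):
            k = (N2 * inv_N2 * k1 + N1 * inv_N1 * k2) % N
            X[k] = cols[k2][k1]
    return X
```

Comparing `pfa_dft(x, 3, 5)` against `naive_dft(x)` on a random 15-point input reproduces the DFT exactly, confirming that the coprime input/output mappings absorb all rotations between the stages.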
Originally, GenProg was created to repair buggy programs written in the C programming language, launching a new discipline in the generate-and-validate approach to Automated Program Repair (APR). Since then, a number of other tools have been published using a variety of repair approaches. Some of these still operate on programs written in C/C++, others on Java or even Python programs. In this work, a tool named GenProgJS is presented, which generates candidate patches for faulty JavaScript programs. The algorithm it uses is very similar to the genetic algorithm used in the original GenProg, hence the name. In addition to the traditional approach, solutions used in some more recent works were also incorporated, and JavaScript language-specific approaches were taken into account when the tool was designed. To the best of our knowledge, the tool presented here is the first to apply GenProg's general generate-and-validate approach to JavaScript programs. We evaluate the method on the BugsJS bug database, where it successfully fixed 31 bugs in 6 open source *** projects. These bugs belong to 14 different categories, showing the generic nature of the method. During the experiments, the code transformations applied to the original source code are all traced, and an in-depth analysis of mutation operators and fine-grained changes is also presented. We share our findings with the APR research community and describe the difficulties and differences we faced while designing this JavaScript repair tool. The source code of GenProgJS is publicly available on GitHub, with a pre-configured Docker environment where it can easily be launched.
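To make the generate-and-validate idea concrete (a toy illustration, not GenProgJS itself), the sketch below hill-climbs over statement-level mutations of a deliberately buggy Python function, using the test suite both as a fitness function and as the validation oracle. The bug, the mutation operators, and all names are hypothetical; GenProg proper maintains a population with crossover rather than mutating a single candidate.

```python
import random

# Hypothetical buggy program: sign(0) should return 0, not 1.
BUGGY = """def sign(x):
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 1
"""

# The test suite doubles as the fitness function and the validation oracle.
TESTS = [((5,), 1), ((-3,), -1), ((0,), 0)]
STATEMENTS = ["return 1", "return -1", "return 0"]

def fitness(src):
    """Compile a candidate patch and count how many test cases it passes."""
    env = {}
    try:
        exec(src, env)
    except SyntaxError:
        return 0
    passed = 0
    for args, expected in TESTS:
        try:
            if env["sign"](*args) == expected:
                passed += 1
        except Exception:
            pass
    return passed

def mutate(src, rng):
    """Statement-level mutation: swap one return statement for another."""
    lines = src.splitlines()
    sites = [i for i, l in enumerate(lines) if l.strip() in STATEMENTS]
    i = rng.choice(sites)
    indent = lines[i][: len(lines[i]) - len(lines[i].lstrip())]
    lines[i] = indent + rng.choice(STATEMENTS)
    return "\n".join(lines) + "\n"

def repair(src, rng, generations=200):
    """Keep a mutation only if it passes strictly more tests (hill climbing)."""
    best, best_fit = src, fitness(src)
    for _ in range(generations):
        cand = mutate(best, rng)
        fit = fitness(cand)
        if fit > best_fit:
            best, best_fit = cand, fit
        if best_fit == len(TESTS):
            break
    return best, best_fit
```

Running `repair(BUGGY, random.Random(1))` finds the patch that replaces the final `return 1` with `return 0`, after which all three tests pass.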
This paper details the outcome of a European scientific research and development project, funded to prototype a commercial solution for vehicle fleets. While current route optimization solutions draw on data sources such as real-time road traffic and weather conditions, our solution was developed to offer companies a proposed plan with adapted optimized routes, considering critical operational requirements, local conditions, and drivers' driving experience. The iRoute solution consists of a module for receiving vehicle positions from GPS devices, an integration layer for end-customer applications, a route optimization engine, and applications for web and mobile users. Unlike other approaches, this innovative solution leverages historical, well-known, and familiar route data to enhance, refine, and adapt the optimized operational routes. This way, drivers benefit from a more effective daily plan to follow based on business requirements such as locations, quantities, time, and distance. Taken together, these adjustments achieve up to a 20% improvement in the accuracy of executed routes, in terms of distance and time, compared to the planned route, outperforming standard optimization algorithms. This allows business owners to rely more confidently on the optimization results, providing a safer itinerary for their drivers. Also, due to lower incident rates on these known and preferred itineraries, the overall business benefits from increased vehicle fleet availability and fewer days in service, with lower maintenance costs. The paper evaluates the usability and perceived efficiency value of the proposed solution by statistically analyzing feedback from dispatchers and business owners.
Over the past decade, Remote Photoplethysmography (rPPG) has emerged as an unobtrusive alternative to wearable sensors for measuring physiological signals, such as heart rate. Despite advancements, its real-world scalability remains limited due to the absence of standardized benchmark methods for validation. This lack of standardization complicates proper comparisons between different approaches, creating inconsistencies in performance evaluation. To address this, we conducted a comprehensive review of recent rPPG methods, analyzing their pre- and post-processing algorithms, validation procedures, benchmark algorithms, datasets, evaluation metrics, data segmentation, and reported results. Our findings demonstrate significant variability in the reported Mean Absolute Error (MAE) of benchmark rPPG methods applied to the same public datasets, confirming the challenge of inconsistent evaluation. By examining the original implementations of established benchmark methods, we developed a flexible framework that optimally selects pre- and post-processing algorithms through an exhaustive search. Applying this framework to benchmark algorithms across three public datasets, we found that 80% of the refined methods ranked within the top 25th percentile in MAE, RMSE, and PCC, with 60% surpassing the highest reported accuracies. These refined methods provide a more rigorous foundation for evaluating novel rPPG techniques, addressing the standardization gap in the field. The codebase for this framework (frPPG) is available at [https://***/Building-Robotics-Lab/flexible_rPPG], offering a valuable tool for designing and benchmarking rPPG methods against the best-performing algorithms on a given dataset.
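To make the pre-/post-processing stages concrete (an illustrative sketch, not the frPPG framework), one post-processing step shared by many rPPG pipelines is to pick the dominant spectral peak of the recovered signal inside the plausible heart-rate band; the helper `estimate_hr` below is hypothetical.

```python
import cmath
import math

def estimate_hr(signal, fs, f_lo=0.7, f_hi=4.0):
    """Return the dominant frequency in the heart-rate band, in beats/minute.

    Band-limiting the spectral search (0.7-4 Hz, i.e. 42-240 bpm) rejects
    out-of-band noise -- a common rPPG post-processing choice.
    """
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]  # detrend: remove the DC component
    best_f, best_p = 0.0, -1.0
    k_lo = max(1, int(f_lo * n / fs))
    k_hi = int(f_hi * n / fs)
    for k in range(k_lo, k_hi + 1):
        coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power = abs(coeff) ** 2
        if power > best_p:
            best_f, best_p = k * fs / n, power
    return best_f * 60.0
```

On a synthetic 30 fps pulse at 1.2 Hz plus noise this recovers 72 bpm; a real pipeline would precede it with skin-region selection and a chrominance projection such as CHROM or POS.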
With the rapid expansion of Industry 4.0 technology, the proliferation of large-scale devices faces increasingly severe cyber threats, underscoring the critical importance of cryptographic technology for secure communication and authentication. However, cryptographic systems, as the bedrock of security, have faced a barrage of attacks in recent years, including potential threats from quantum computing and memory disclosure vulnerabilities. In this article, we focus on enhancing the security of two standard quantum-safe cryptographic algorithms, Dilithium and the eXtended Merkle Signature Scheme (XMSS), by leveraging hardware transactional memory (HTM) to create a secure operational environment. Unlike traditional cryptography such as Rivest-Shamir-Adleman (RSA) and elliptic curve cryptography (ECC), Dilithium and XMSS involve more and larger sensitive variables, rendering conventional solutions inadequate. By conducting a comprehensive sensitivity analysis of the variables within the abovementioned algorithms, we confine sensitive operations to transactional execution regions and employ transaction-splitting technology for efficiency. Our prototype, utilizing Intel Transactional Synchronization Extensions (TSX), demonstrates robust protection against memory disclosure attacks with acceptable performance overheads. Notably, our security-enhanced implementations of the NIST-recommended Dilithium and XMSS schemes achieve an average throughput factor of 0.75 compared to the (unprotected) reference implementations.
Nowadays, many services are offered via the cloud, i.e., they rely on interacting software components that can run on a set of connected Commercial Off-The-Shelf (COTS) servers sitting in data centers. As the demand for any particular service evolves over time, the computational resources associated with the service must be scaled accordingly while keeping the Key Performance Indicators (KPIs) associated with the service under control. Consequently, scaling always involves a delicate trade-off between using the cloud resources and complying with the KPIs. In this paper, we show that a (workload-dependent) Pareto front embodies this trade-off's limits. We identify this Pareto front for various workloads and assess the ability of several scaling algorithms to approach that Pareto front.
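The Pareto front embodying the resource/KPI trade-off can be illustrated with a minimal non-dominated filter over (resource usage, KPI violation) pairs, both minimized; the data points below are invented for illustration and are not the paper's workloads.

```python
def pareto_front(points):
    """Non-dominated subset of (resource usage, KPI violation) pairs.

    A point q dominates p if q is no worse in both objectives (both
    minimized) and differs from p; dominated points are filtered out.
    """
    front = [p for p in points
             if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
    return sorted(front)

# Invented scaling outcomes: (servers used, KPI violation score).
outcomes = [(1, 5), (2, 3), (3, 3), (4, 1), (5, 2)]
front = pareto_front(outcomes)  # [(1, 5), (2, 3), (4, 1)]
```

A scaling algorithm's quality can then be judged by how close its achieved (resource, KPI) operating points land to this front for a given workload.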
In the development and verification of safety-critical and safety-related Instrumentation and Control (I&C) systems, it is essential to ensure there is no deviation from the requirements of the assignment during development. Model checking is a method of formal verification which can be used to prove whether a formal model satisfies its formal requirement. Since algorithms of I&C systems are generally informal, they cannot be verified by model checking directly, but must be carefully translated. This article presents a new method based on functionally-equivalent formalisation and model checking. This method can be used for automatic verification of I&C algorithms by model checking while preserving proofs obtained on the formalised model in the original algorithm. There are several problems associated with the verification of PLCs by model checking: 1) state-space explosion, 2) model consistency, 3) specifying the properties to be checked, 4) representing the PLC execution cycle, and 5) representing TON timers. This article aims to address points 1), 2), 4) and 5). The article also presents conditions for implementing these algorithms in a target I&C system under which the obtained proofs can also be expected to hold in the physical I&C system. This article primarily focuses on the formalisation and model checking of Function Block Diagram (FBD) algorithms. However, the presented methods can also be extended to other programming languages.
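As a toy illustration of this kind of verification (not the article's method), the sketch below models a TON on-delay timer as a saturating scan-cycle counter, which keeps the state space finite, and checks a safety invariant by explicit-state reachability over all nondeterministic input sequences; the modelling choices are assumptions.

```python
from collections import deque

PT = 3  # preset time in scan cycles; the TON timer becomes a saturating counter

def step(state, inp):
    """One PLC scan cycle of an on-delay (TON) timer abstraction."""
    et, _ = state
    et = min(et + 1, PT) if inp else 0  # saturate ET to bound the state space
    return (et, et >= PT)

def check_invariant(invariant):
    """Explicit-state reachability over all nondeterministic input sequences."""
    init = (0, False)
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return False, s  # counterexample state
        for inp in (False, True):
            t = step(s, inp)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, None

# Property: the output Q may only be true once the preset time has elapsed.
ok, cex = check_invariant(lambda s: (not s[1]) or s[0] >= PT)
```

Real tools (e.g. symbolic model checkers) avoid enumerating states explicitly, which is where the state-space explosion problem mentioned above arises.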
Local differential privacy (LDP) techniques obviate the need for trust in the data collector, as they provide robust privacy guarantees against untrusted data managers while simultaneously preserving the accuracy of statistical information derived from the privatized data. As a result, these methods have garnered considerable interest and research efforts. In particular, (ε, δ)-LDP schemes have been utilized across a range of statistical tasks. Nonetheless, existing (ε, δ)-LDP mechanisms for mean estimation suffer from challenges such as elevated estimation errors and diminished data utility. To address this problem, we propose two novel (ε, δ)-LDP algorithms for mean estimation. Specifically, we design a one-dimensional piecewise mean estimation algorithm, which perturbs the input data into intervals, thereby reducing noise addition and enhancing both accuracy and efficiency. Building on this foundation, we extend our approach to multi-dimensional data, resulting in a multi-dimensional piecewise mean estimation algorithm. Furthermore, we conduct a theoretical analysis to derive both the variance and error bounds for the proposed algorithms. Extensive experiments conducted on real datasets demonstrate the high practicality of our algorithms for data statistical tasks, showing significant improvements in data utility.
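For background on piecewise mean estimation, the classic pure ε-LDP piecewise mechanism can be sketched as follows; note this is the well-known mechanism on which such work builds, not the paper's (ε, δ)-LDP variants. Each user reports a value drawn from a high-probability interval around their true input, and the reports are unbiased, so averaging them yields a consistent mean estimate.

```python
import math
import random

def piecewise_perturb(x, eps, rng):
    """Pure eps-LDP piecewise mechanism for x in [-1, 1] (unbiased output)."""
    s = math.exp(eps / 2)
    C = (s + 1) / (s - 1)             # the perturbed value lies in [-C, C]
    l = (C + 1) / 2 * x - (C - 1) / 2
    r = l + C - 1                     # [l, r]: high-probability band around x
    if rng.random() < s / (s + 1):
        return rng.uniform(l, r)
    # Otherwise sample uniformly from the two low-probability tails.
    left_len, right_len = l + C, C - r
    u = rng.uniform(0, left_len + right_len)
    return -C + u if u < left_len else r + (u - left_len)

def private_mean(values, eps, rng):
    """Average the perturbed reports; unbiasedness makes this consistent."""
    return sum(piecewise_perturb(v, eps, rng) for v in values) / len(values)
```

With, say, ε = 4 and 20,000 users the estimate lands within a few thousandths of the true mean, illustrating why interval-based perturbation adds less noise than naive range-wide randomization.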
The design of the layout of Vertical Lift Module (VLM) warehouses is a non-trivial process that involves selecting dimensions, internal configuration, and allocation of each tray to avoid space loss while satisfying logistic constraints. Our contribution in this context is a two-phase matheuristic (an algorithm that combines exact mathematical methods and heuristics) to simplify the design of VLM layouts. The proposed matheuristic relies on three Mixed-Integer Linear Programming models, addressing the internal configuration of trays and the allocation of trays into columns based on industrial logistic constraints. This approach takes as input parameters the item features, predetermined tray types with different dimensions, matheuristic settings, and a priority rule for tray allocation. The algorithm outputs to the logistics operator the types and quantities of trays needed, their internal partitioning, the item positions in each tray, and the tray positions in each column. Extensive testing demonstrates the effectiveness of our approach under realistic scenarios. Additionally, we introduce a comprehensive set of priority rules for allocating trays into columns, providing a comparison to assist logistics operators in selecting the most suitable rule for specific scenarios. Note to Practitioners: VLMs are closed structures composed of columns that house a variable number of sliding trays and a lift-mounted module that handles the trays. Their operation revolves around the so-called "goods to the man" principle, where goods are automatically brought to the operator using a dedicated access bay. This design ensures that stored items are easily accessible to operators, reducing time for order picking and improving workplace safety and ergonomics. Designing a VLM for logistics companies is a complex and time-consuming task, since it requires optimizing a target objective while satisfying a large set of constraints. The current manual approach lacks automated methods and relies on experience.
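The tray-to-column allocation phase can be illustrated with a simple priority-rule heuristic (first-fit with a tallest-first default rule). The paper's matheuristic instead solves MILP models exactly, so this sketch and its parameter names are only illustrative of how a priority rule drives allocation.

```python
def allocate_trays(tray_heights, column_height, priority=lambda h: -h):
    """First-fit tray-to-column allocation under a priority rule.

    The default rule places the tallest trays first; returns a tray->column
    mapping and the number of columns opened.
    """
    columns = []       # remaining vertical space per column
    placement = {}
    order = sorted(range(len(tray_heights)),
                   key=lambda i: priority(tray_heights[i]))
    for i in order:
        h = tray_heights[i]
        for c, remaining in enumerate(columns):
            if remaining >= h:                   # first column with room
                columns[c] -= h
                placement[i] = c
                break
        else:
            columns.append(column_height - h)    # open a new column
            placement[i] = len(columns) - 1
    return placement, len(columns)
```

Swapping the `priority` argument lets one compare rules (tallest-first, shortest-first, by demand frequency, and so on) on the same instance, mirroring the comparison of priority rules the abstract describes.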
The parameters of high-power microwave sources exhibit high sensitivity and discontinuity with respect to their power efficiency, operating frequency, and other key performance figures. Currently, particle-in-cell (PIC) simulation in conjunction with intelligent optimization algorithms is frequently utilized to optimize device parameters, but fast power ramp-up and frequency control still cannot be achieved at the same time. In pursuit of an effective resolution, this article compares three major classes of evolutionary algorithms from the domain of artificial intelligence during the design process of a C-band relativistic backward wave oscillator (RBWO). Through a new design method based on a segmented optimization strategy using the OMOPSO and RSEA algorithms, a novel RBWO structure has been attained. This structure attains an output power of 2.3 GW, operates at a target microwave frequency of 4.3 GHz, and achieves an energy conversion efficiency of 46% in simulation under a magnetic field of 0.28 T. This novel optimization strategy is characterized by rapid optimization speed and favorable optimization efficacy.
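OMOPSO builds on the canonical particle swarm velocity/position update, extending it with a Pareto archive and mutation for multi-objective problems. As background only (a single-objective toy, not the paper's segmented strategy), the sketch below minimizes a test function with standard PSO; all hyperparameters are illustrative.

```python
import random

def pso(objective, dim, n_particles=20, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Canonical single-objective PSO (minimization) on a box-bounded domain."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull (pbest) + social pull (gbest).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the PIC-driven setting the abstract describes, `objective` would wrap a full device simulation, which is why the number of evaluations (particles × iterations) dominates the optimization cost.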