The software industry is growing at an unprecedented pace, with a massive volume of software being released every day. Among the manifold challenges faced by software engineering researchers, one of the most significant is maintaining and enhancing software quality. Software metrics, designed to quantify various aspects of software, are essential in achieving this goal. They provide developers with a comprehensive snapshot of a codebase’s status throughout its evolution, thereby facilitating timely intervention and continual improvement. Tools like Rust-Code-Analysis (RCA), developed and maintained by Mozilla, serve as crucial aids in this endeavour. RCA is a static code analyser that examines source code without executing it and computes a series of source code metrics, which quantitatively assess code characteristics such as complexity, maintainability, and robustness. The present article seeks to contribute to this area by undertaking a threefold task. Firstly, we explore new Java source code metrics that can be integrated into RCA. We have chosen the Java language due to its enduring pervasiveness in industrial software and in the smartphone world. The metrics are selected based on their potential to provide valuable insights into codebase status and to facilitate optimisation. Once the new metrics have been identified, the second part of our task involves implementing them within RCA’s library and exposing them through its CLI. This entails coding and integrating the metrics in the modern Rust language, taking advantage of its distinctive features, such as memory safety without garbage collection and safe data concurrency. Finally, to ascertain the effectiveness and reliability of the metrics, we conduct an evaluation on diverse Java repositories, studying the values generated by these metrics across repositories of varying sizes and levels of activity. From the smallest libraries to large-scale applications, our analysis…
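To make the notion of a source code metric concrete, the following sketch computes a very rough cyclomatic-complexity-style count for a Java snippet. It is purely illustrative: it is not the Rust implementation integrated into RCA, a real analyser works on a parse tree rather than on regular expressions, and the function name `naive_cyclomatic_complexity` is a hypothetical placeholder.

```python
import re

# Illustrative only: count decision points in Java source as a rough proxy for
# McCabe's cyclomatic complexity (CC = decision points + 1).
DECISION_POINTS = re.compile(r"\b(if|for|while|case|catch)\b|\?|&&|\|\|")

def naive_cyclomatic_complexity(java_source: str) -> int:
    """Very rough CC estimate: one plus the number of branching constructs."""
    # Strip line comments so commented-out code does not inflate the count.
    stripped = re.sub(r"//.*", "", java_source)
    return 1 + len(DECISION_POINTS.findall(stripped))

example = """
public int clamp(int x, int lo, int hi) {
    if (x < lo) { return lo; }
    if (x > hi) { return hi; }
    return x;
}
"""
print(naive_cyclomatic_complexity(example))  # -> 3
```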
Embedded systems, such as automotive applications, are increasingly used in safety-critical systems. The correct and reliable implementation of such systems depends on many factors, including the design of the system hardware, software, and fault-tolerance mechanisms, the choice of programming language, and the test, verification, and validation techniques employed. Even well-designed systems are not exempt from defects that stem from their physical properties, and these imperfections can cause unforeseen and dangerous actions in safety-critical systems. This paper focuses on isolating or mitigating the effects of Random Hardware Failures (RHFs). Hardening strategies are employed to mitigate RHFs in embedded systems, either by adding specialized hardware or by using Software-Implemented Hardware Fault Tolerance (SIHFT) methods. SIHFT methods are applied to various applications to harden them against Control Flow Errors (CFEs). This paper presents a guideline for applying a subset of SIHFT methods, called Control Flow Checking (CFC) methods, to application code written in the C language. The motivation is that few guidelines can be found in the literature that provide insight into implementing CFC methods in high-level programming languages; most proposals implement CFC methods in low-level languages such as assembly. The rationale behind developing high-level language implementations lies in the pursuit of architecture independence, as well as in the lack of a certified compiler for the target platform that could conveniently incorporate certified, functionally correct CFC mechanisms into the compiled assembly/machine code.
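To convey the general idea behind signature-based CFC (in the spirit of techniques such as CFCSS), the following conceptual simulation assigns a static signature to each basic block and updates a runtime signature on every transition; an illegal jump leaves the runtime signature inconsistent and is flagged. This is only a sketch of the principle, not the C implementation the guideline targets, and the block graph is a hypothetical example.

```python
# Conceptual simulation of signature-based control flow checking (CFCSS-style).
# Each basic block has a static signature s; on a legal transition i -> j the
# runtime signature G is updated with d[j] = s[i] ^ s[j], so G == s[j] holds
# only if the jump came from the expected predecessor.

SIGNATURES = {"A": 0b0001, "B": 0b0010, "C": 0b0100, "D": 0b1000}
# d[j] is precomputed at "compile time" from the designated predecessor of j.
PREDECESSOR = {"B": "A", "C": "B", "D": "C"}
D = {blk: SIGNATURES[pred] ^ SIGNATURES[blk] for blk, pred in PREDECESSOR.items()}

def run(path):
    """Walk a sequence of blocks, updating and checking the runtime signature."""
    G = SIGNATURES[path[0]]          # start in the entry block
    for blk in path[1:]:
        G ^= D[blk]                  # update inserted at the top of each block
        if G != SIGNATURES[blk]:     # mismatch => control flow error detected
            return f"CFE detected on entry to {blk}"
    return "no control flow error"

print(run(["A", "B", "C", "D"]))     # legal path
print(run(["A", "C", "D"]))          # illegal jump A -> C is detected
```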
The growing number of exploits and hacks on the Ethereum blockchain has led to the development of powerful smart contract vulnerability detection tools and to the frequent patching of smart contract programming languages (such as Solidity). At the same time, an ever-increasing number of people are interested in blockchain and smart contract-related topics and are keen to build and deploy their own Decentralized Applications (dApps). However, learning a new programming language and its best practices, as well as how to actually deploy a smart contract on the blockchain, is a difficult task even for experienced developers. Recently, ChatGPT, a new user-friendly deep learning tool, has been released to improve the ability of non-expert users to write high-quality code and, in general, to boost the performance of developers in key tasks related to code writing (i.e., writing functions, explaining runtime errors, fixing bugs, etc.). This paper aims to measure the capabilities of ChatGPT in fixing vulnerable smart contracts and to assess the effectiveness of this tool, determining whether it can be a valuable aid for those who want to correct their own smart contracts or to reuse existing ones, by first checking their status and, if necessary, fixing their vulnerabilities. In particular, we asked ChatGPT to fix 143 smart contracts with well-known labeled vulnerabilities. We considered a vulnerability as "fixed" if the code corrected by ChatGPT no longer contained it (for this purpose, we exploited Slither, one of the state-of-the-art tools for smart contract vulnerability detection, to check the status of the original and the corrected smart contracts). As a result, we found that ChatGPT was able to fix vulnerable smart contracts on average 57.1% of the time, with an increase of 1.4% when a description of the bug was provided in addition to the smart contract’s source code.
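A minimal sketch of this evaluation loop is given below, assuming the Slither CLI is installed; the function `ask_llm_to_fix` is a hypothetical placeholder for the call to ChatGPT, and the JSON layout parsed here may differ across Slither versions.

```python
import json
import subprocess
import tempfile

def slither_checks(solidity_source: str) -> set:
    """Run Slither on a contract and return the set of triggered detector names."""
    with tempfile.NamedTemporaryFile(mode="w", suffix=".sol", delete=False) as f:
        f.write(solidity_source)
        path = f.name
    proc = subprocess.run(["slither", path, "--json", "-"],
                          capture_output=True, text=True)
    report = json.loads(proc.stdout)
    detectors = (report.get("results") or {}).get("detectors") or []
    return {d.get("check") for d in detectors}

def ask_llm_to_fix(source: str, bug_description: str = "") -> str:
    """Hypothetical placeholder: send the contract (and, optionally, a bug
    description) to the chat model and return the corrected source code."""
    raise NotImplementedError("plug in the LLM client here")

def is_fixed(original: str, vulnerability: str, bug_description: str = "") -> bool:
    """A vulnerability counts as fixed if Slither reports it on the original
    contract but no longer reports it on the corrected one."""
    corrected = ask_llm_to_fix(original, bug_description)
    return (vulnerability in slither_checks(original)
            and vulnerability not in slither_checks(corrected))
```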
Quantum Kernel Estimation (QKE) is a technique that leverages a quantum computer to estimate a kernel function that is classically difficult to calculate, which is then used by a classical computer for training a Support Vector Machine (SVM). Given the high number of 2-local operators necessary for realizing a feature mapping that is hard to simulate classically, high qubit connectivity is needed, which is not currently possible on superconducting devices. For this reason, neutral atom quantum computers can be used, since they allow the atoms to be arranged with more freedom. Examples of neutral-atom-based QKE can be found in the literature, but they are focused on graph learning and use the analogue approach. In this paper, a general method based on the gate model is presented. After deriving 1-qubit and 2-qubit gates from laser pulses, a parameterized sequence for feature mapping on 3 qubits is realized. This sequence is then used to empirically compute the kernel matrix from a dataset, which is finally used to train the SVM. It is also shown that this process can be generalized up to $\mathrm{N}$ qubits, taking advantage of the more flexible arrangement of atoms that this technology allows. The accuracy is shown to be high despite the small dataset and the low separation. This is the first paper that not only proposes an algorithm for explicitly deriving a universal set of gates but also presents a method for estimating quantum kernels on neutral atom devices for general problems using the gate model.
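The pipeline (feature map, kernel matrix, SVM with a precomputed kernel) can be illustrated with a classical state-vector simulation; the sketch below is not the neutral-atom, pulse-level realization discussed in the paper, and the 3-qubit feature map, dataset, and gate choices are simplified placeholders.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def cz(n, a, b):
    """Diagonal CZ gate between qubits a and b (qubit 0 = most significant bit)."""
    diag = np.ones(2 ** n)
    for idx in range(2 ** n):
        if (idx >> (n - 1 - a)) & 1 and (idx >> (n - 1 - b)) & 1:
            diag[idx] = -1.0
    return np.diag(diag)

def feature_state(x):
    """|phi(x)> for a 3-feature point: RY data encoding followed by CZ entanglers."""
    encode = np.kron(np.kron(ry(x[0]), ry(x[1])), ry(x[2]))
    circuit = cz(3, 1, 2) @ cz(3, 0, 1) @ encode
    zero = np.zeros(8)
    zero[0] = 1.0
    return circuit @ zero

def kernel_matrix(X, Y):
    """Fidelity-style kernel K(x, y) = |<phi(y)|phi(x)>|^2."""
    SX = np.array([feature_state(x) for x in X])
    SY = np.array([feature_state(y) for y in Y])
    return np.abs(SX @ SY.T) ** 2

# Toy dataset: two Gaussian clusters with 3 features, used directly as angles.
X = np.vstack([rng.normal(0.5, 0.3, (20, 3)), rng.normal(2.0, 0.3, (20, 3))])
y = np.array([0] * 20 + [1] * 20)

svm = SVC(kernel="precomputed").fit(kernel_matrix(X, X), y)
print("training accuracy:", svm.score(kernel_matrix(X, X), y))
```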
Given the widespread proliferation of smartphones, it is plausible to regard our personal devices, and by extension, ourselves, as integral components of the Internet of Audio Things. Furthermore, each smartphone is already equipped with an application that enables its integration into a network of audio things: the web browser. This paper introduces a framework that facilitates the development of audio-aware applications on any smart device equipped with a web browser, utilizing WebAssembly and audio fingerprinting techniques. Results demonstrate that a few seconds of ambient recording are adequate to successfully identify audio tracks, even amidst background noise. This outcome paves the way for a wide range of applications that can potentially immerse users into the Internet of Sound Things realm.
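As a rough illustration of how a short, noisy excerpt can be matched against a reference track, the toy sketch below uses the dominant FFT bin per frame as a fingerprint and scans the reference for the best-matching offset; it is a generic, simplified stand-in, not the WebAssembly implementation used by the framework, and the synthetic "track" is a hypothetical example.

```python
import numpy as np

def dominant_bins(signal, frame=1024, hop=512):
    """Very simple fingerprint: index of the strongest FFT bin in each frame."""
    bins = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))
        bins.append(int(np.argmax(spectrum)))
    return np.array(bins)

def match_score(query_fp, reference_fp):
    """Best fraction of matching bins over all time offsets of the query."""
    best = 0.0
    for offset in range(len(reference_fp) - len(query_fp) + 1):
        window = reference_fp[offset:offset + len(query_fp)]
        best = max(best, float(np.mean(window == query_fp)))
    return best

# Toy demo: a synthetic reference "track" and a short noisy excerpt from it.
rng = np.random.default_rng(1)
sr = 8000
t = np.arange(0, 20.0, 1 / sr)
reference = np.sin(2 * np.pi * (300 + 50 * np.sin(0.5 * t)) * t)
excerpt = reference[5 * sr:8 * sr] + 0.3 * rng.normal(size=3 * sr)

print("match score:", match_score(dominant_bins(excerpt), dominant_bins(reference)))
```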
Instance-level object re-identification is a fundamental computer vision task, with applications from image retrieval to intelligent monitoring and fraud detection. In this work, we propose the novel task of damaged o...
Table tennis has experienced a century since its birth. With the vigorous development of competitive sports events, rapid changes have taken place in technology, tactics, equipment, rules and training methods. Under t...
Thermal monitoring is a key requirement for cold chain management. In this context, the Internet of Things (IoT) offers new opportunities for dense and/or large-scale deployment of sensors, which need to collect data to effectively control the cooling system. Various technologies are used for data transmission. Although Bluetooth is widely exploited for transmitting data in IoT applications, its use in cold chain management is rare. In this paper, the architecture of an IoT temperature monitoring system is studied, and the technological choices of its components are analyzed and compared. In particular, the paper focuses on IoT node boards with Bluetooth, in order to highlight the opportunities of a currently undervalued technology. A theoretical analysis highlights its benefits for the application context and evaluates its suitability for cold room monitoring systems. The theoretical results are supported by an experimental analysis based on the implementation of different systems.
Two of the most impressive features of biological neural networks are their high energy efficiency and their ability to continuously adapt to varying inputs. In contrast, the amount of power required to train top-performing deep learning models rises as they become more complex. This is the main reason for the increasing research interest in spiking neural networks, which mimic the functioning of the human brain, achieving performance similar to artificial neural networks but with much lower energy costs. However, even this type of network lacks the ability to incrementally learn new tasks, with the main obstacle being catastrophic forgetting. This paper investigates memory replay as a strategy to mitigate catastrophic forgetting in spiking neural networks. Experiments are conducted on the MNIST-split dataset in both class-incremental learning and task-free continual learning scenarios.
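A minimal, framework-agnostic sketch of the replay mechanism is shown below: a fixed-size buffer filled by reservoir sampling stores examples from earlier tasks and is mixed into each new mini-batch. It is not the paper's SNN-specific implementation; names such as `ReservoirReplayBuffer` and `replay_batch` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReservoirReplayBuffer:
    """Fixed-size memory of past examples, filled with reservoir sampling so that
    every example seen so far has an equal chance of being stored."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.x, self.y = [], []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.x) < self.capacity:
            self.x.append(x)
            self.y.append(y)
        else:
            j = rng.integers(self.seen)
            if j < self.capacity:
                self.x[j], self.y[j] = x, y

    def sample(self, n):
        idx = rng.choice(len(self.x), size=min(n, len(self.x)), replace=False)
        return [self.x[i] for i in idx], [self.y[i] for i in idx]

def replay_batch(batch_x, batch_y, buffer, n_replay):
    """Mix the current task's mini-batch (given as lists) with replayed samples
    from earlier tasks, so the network keeps rehearsing old classes."""
    if buffer.seen == 0:
        return batch_x, batch_y
    old_x, old_y = buffer.sample(n_replay)
    return batch_x + old_x, batch_y + old_y

# Usage inside a training loop (one optimisation step per mixed batch):
#   mixed_x, mixed_y = replay_batch(batch_x, batch_y, buf, n_replay=32)
#   ... update the spiking network on (mixed_x, mixed_y) ...
#   for x, y in zip(batch_x, batch_y):
#       buf.add(x, y)
buf = ReservoirReplayBuffer(capacity=200)
```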