ISBN:
(Print) 9783031573262; 9783031573279
[Context and motivation] Explainability is a software quality aspect that is gaining relevance in the field of requirements engineering. The complexity of modern software systems is steadily growing. Thus, understanding how these systems function becomes increasingly difficult. At the same time, stakeholders rely on these systems in an expanding number of crucial areas, such as medicine and finance. [Question/problem] While a lot of research focuses on how to make AI algorithms explainable, there is a lack of fundamental research on explainability in requirements engineering. For instance, there has been little research on the elicitation and verification of explainability requirements. [Principal ideas/results] Quality models provide means and measures to specify and evaluate quality requirements. As a solid foundation for our quality model, we first conducted a literature review. Based on the results, we then designed a user-centered quality model for explainability. We identified ten different aspects of explainability and offer criteria and metrics to measure them. [Contribution] Our quality model provides metrics that enable software engineers to check whether specified explainability requirements have been met. By identifying different aspects of explainability, we offer a view from different angles that considers the different goals of explanations. Thus, we provide a foundation that will improve the management and verification of explainability requirements.
ISBN:
(Print) 9798350301137
Computer Vision (CV) is used in a broad range of Cyber-Physical Systems such as surgical and factory floor robots and autonomous vehicles, including small Unmanned Aerial Systems (sUAS). It enables machines to perceive the world by detecting and classifying objects of interest, reconstructing 3D scenes, estimating motion, and maneuvering around objects. CV algorithms are developed using diverse machine learning and deep learning frameworks, which are often deployed on resource-limited edge devices. As sUAS rely upon an accurate and timely perception of their environment to perform critical tasks, problems related to CV can create hazardous conditions leading to crashes or mission failure. In this paper, we perform a systematic literature review (SLR) of CV-related challenges spanning CV, hardware, and software engineering. We then group the reported challenges into five categories and fourteen sub-challenges and present existing solutions. As current literature focuses primarily on CV and hardware challenges, we close by discussing implications for software engineering, drawing examples from a CV-enhanced multi-sUAS system.
ISBN:
(Print) 9798350381566; 9798350381559
Log files generated by software systems can be utilized as a valuable resource in data-driven approaches to improve system health and stability. These files often contain valuable information about runtime execution, and their effective monitoring requires analyzing an increasingly large volume of data logs. In this paper, a graph mining technique for log parsing is presented that is agnostic to the log source: the technique can function regardless of where the logs originate, making it more scalable and reusable. Unlike existing approaches that rely heavily on domain knowledge and regular expression patterns, the proposed approach uses graph models and semantic analysis to detect data patterns with minimal user input. This makes it easy to implement in a variety of scenarios where application-based logs may differ significantly. The proposed parsing technique is evaluated over seven datasets. It achieves the best performance on the Thunderbird dataset, where the technique takes 3.87 seconds for 2000 logs while obtaining precision, recall, and F1 measure higher than 0.99.
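To make the log-parsing task concrete, the sketch below mines message templates from raw log lines by replacing positions whose tokens vary with a wildcard. This is a deliberately minimal illustration of template extraction, not the paper's graph-based, semantics-aware algorithm; the grouping-by-token-count heuristic and the `<*>` wildcard are assumptions chosen for brevity.

```python
from collections import defaultdict

def parse_templates(logs):
    """Toy log-template miner: group messages by token count, then
    replace positions whose tokens vary across the group with a <*>
    wildcard. Illustrative only."""
    groups = defaultdict(list)
    for line in logs:
        tokens = line.split()
        groups[len(tokens)].append(tokens)
    templates = []
    for token_lists in groups.values():
        template = []
        for position in zip(*token_lists):
            # A position with a single distinct token is constant text;
            # anything else is treated as a variable parameter.
            template.append(position[0] if len(set(position)) == 1 else "<*>")
        templates.append(" ".join(template))
    return templates

logs = [
    "Connection from 10.0.0.1 closed",
    "Connection from 10.0.0.7 closed",
    "Disk usage at 91 percent",
]
print(parse_templates(logs))
# → ['Connection from <*> closed', 'Disk usage at 91 percent']
```

A real parser must also separate messages that share a length but differ in structure, which is where the paper's graph model and semantic analysis come in.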
ISBN:
(Print) 9798350371635; 9798350371628
Software testing is essential for assuring the reliability and quality of software systems. Nevertheless, commonly used optimization techniques, such as Particle Swarm Optimization (PSO), sometimes get stuck in local optima during testing. This study suggests innovative improvements to the PSO algorithm to address and overcome this constraint. Initially, we propose a method in which every particle keeps track of a collection of superior particles and chooses a global best (gbest) at random. This approach helps to explore a wider range of solutions and reduces the likelihood of being stuck in local optima. Furthermore, we use an enhanced crowding method to specifically tackle the discrepancy between the exploration and exploitation stages. This approach prioritizes extensive exploration and exploitation during the early phases of the search, progressively shifting towards a strategy that focuses more on exploitation as the algorithm advances. We present a thorough explanation of these changes, specifically highlighting the modifications made to the pbest section and the use of a novel fitness function that enhances the search process in the given space. The method that we offer has the potential to improve software testing methods by optimizing PSO-based techniques, leading to better performance and efficiency. The experimental findings have shown that our method outperforms numerous existing evolutionary or meta-heuristic algorithms in terms of test data generation speed and achieves superior coverage with fewer evaluations. The algorithms being compared are the Adaptive Genetic Algorithm (AGA), Dandelion Optimizer (DO), Chaotic Flower-Fruit Fly Optimization Algorithm (CFFFOA), Imperialist Competitive Algorithm (ICA), Chaos Adaptive Particle Swarm Optimization Algorithm (CAPSO), Particle Swarm Optimization Algorithm with Empirical Balance Strategy (PSOEBS), and Teaching Learning-Based Optimization (TLBO).
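The core idea described above, replacing the single gbest with a random draw from an archive of superior particles, can be sketched as follows. This is a generic PSO variant built from the abstract's description under stated assumptions; the parameter names (`elite_k`, inertia and acceleration constants) and the sphere benchmark are hypothetical choices, not the authors' formulation, and the crowding method and novel fitness function are omitted.

```python
import random

def modified_pso(fitness, dim, bounds, swarm=20, iters=100, elite_k=5, seed=1):
    """PSO variant in which each particle draws its social attractor at
    random from an archive of the elite_k best positions found so far,
    rather than following a single global best (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    for _ in range(iters):
        # Elite archive: indices of the elite_k best personal bests so far.
        elite = sorted(range(swarm), key=lambda i: pbest_fit[i])[:elite_k]
        for i in range(swarm):
            g = pbest[rng.choice(elite)]  # random elite as social attractor
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_fit[i]:
                pbest_fit[i], pbest[i] = f, pos[i][:]
    best = min(range(swarm), key=lambda i: pbest_fit[i])
    return pbest[best], pbest_fit[best]

sphere = lambda x: sum(v * v for v in x)
_, best_fit = modified_pso(sphere, dim=3, bounds=(-5.0, 5.0))
print(best_fit)  # converges close to the optimum 0
```

Sampling the attractor from several elites keeps the swarm from collapsing onto one basin, which is the intuition behind the paper's randomized-gbest modification.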
In this work, computer vision algorithms are applied to process BGUSAT SWIR images and generate single-frame attitude quaternions. Two types of image processing software were implemented for image registration. ENVI ha...
The software development life cycle (SDLC) is a structured process for delivering a high-quality product within a specified timeframe. Developing a cost-effective and efficient software product life cycle has always b...
Participatory citizen platforms are innovative solutions to digitally better engage citizens in policy-making and deliberative democracy in general. Although these platforms have been used also in an engineering conte...
ISBN:
(Digital) 9783031641824
ISBN:
(Print) 9783031641817; 9783031641824
Self-Adaptive Systems (SAS) are able to cope with the changes occurring dynamically in their execution environment in an autonomous and automated manner. They use adaptive strategies to address such changes, with the main objective of ensuring the proper functionality and performance of a SAS. Hence, adaptive strategies play a key role in SAS. To ensure their use and re-use in various systems in different application domains, there is a need for common mechanisms that enable their analysis, evaluation, and comparison. In this direction, our Adaptive Strategies Metric Suite (ASMS) defines a set of software metrics for the measurement of various static and dynamic properties of adaptive strategies in SAS. The metrics concerning the static properties have been implemented in a plugin for the Understand tool.
ISBN:
(Print) 9798350395693; 9798350395686
Fuzz testing (or fuzzing) is a software testing technique aimed at identifying software vulnerabilities. Recently, the Go community added native support for fuzz testing into their standard library. Using that feature, developers can write unit tests to perform deterministic and fuzz testing of their software systems against unexpected inputs. Although the availability of support makes fuzz testing more accessible for the Go community at large, little is known about the degree to which Go developers adopt fuzz testing during software development. Therefore, in this paper, we set out to study the evolution of fuzz testing practices in open-source Go projects. More specifically, we strive to understand whether the introduction of support for fuzz testing in the Go standard library has led to the adoption of fuzz testing as part of the standard testing processes of Go projects. To achieve our goal, we study 1) to what extent fuzz tests are used in open-source Go projects, 2) who writes and maintains fuzz tests in Go projects, and finally, 3) how tightly coupled fuzz tests are with source code (as compared to non-fuzz tests). We find that fuzz testing only represents 3.15% of testing functions in open-source projects. Our results also suggest that fuzz testing development is not being conducted as part of standard testing activities. For developers contributing to fuzzing, we find that a median of only 12.50% of their testing-related commits contain fuzz tests. Finally, we perform a qualitative analysis and find that fuzz testing is mostly used by critical software systems, such as blockchain technologies or network infrastructure projects, to test the most critical features of their systems (e.g., data processing functions, database endpoints). Our results lead us to conclude that fuzz testing is best used in combination with deterministic testing (e.g., unit testing), where fuzzing is used to thoroughly test important features, and deterministic testing is used to test
Purpose: This paper presents a theoretical analysis of the DynaTrans algorithm, a novel approach for dynamic optimization of urban transportation networks. Design/methodology/approach: We introduce an Adaptive Closene...