Multi-objective search algorithms (MOSAs) have been applied to solve diverse Search-Based Software Engineering (SBSE) problems. In most cases, SBSE users select one or more commonly used MOSAs (for instance, the Nondominated Sorting Genetic Algorithm II (NSGA-II)) to solve their search problems without any justification, i.e., the selection is not supported by evidence. However, when working on a specific multi-objective SBSE problem, users typically know which qualities they are looking for in solutions. Such qualities are represented by one or more Quality Indicators (QIs), which are often employed to assess various MOSAs in order to select the best one. However, users usually have limited time budgets, which prevents them from executing multiple MOSAs and selecting the best MOSA afterwards. For such users, it is therefore highly preferable to select a single MOSA from the outset. To this end, in this paper, we aim to assist SBSE users in finding appropriate MOSAs for their experiments, given their choices of QIs or quality aspects (e.g., Convergence, Uniformity). To achieve this aim, we conduct an extensive empirical evaluation with 18 search problems from a set of real-world, industrial, and open-source case studies, to study preferences among commonly used QIs and MOSAs in SBSE. We observe that each QI has its own most-preferred MOSA and vice versa; NSGA-II and the Strength Pareto Evolutionary Algorithm 2 (SPEA2) are the MOSAs most preferred by QIs; no QI is the most preferred by all MOSAs; the preferences between QIs and MOSAs vary across search problems; and QIs covering the same quality aspect(s) do not necessarily have the same preference for MOSAs. Based on our results, we provide discussions and guidelines, grounded in experimental evidence, for SBSE users to select appropriate MOSAs.
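To illustrate what a Quality Indicator measures on a solution set, here is a minimal sketch of the 2-D hypervolume indicator, one of the commonly used QIs covering both convergence and spread. This is a generic textbook computation for minimization problems, not code from the paper; the fronts and reference point below are made-up illustrative values.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D minimization front w.r.t. a reference point.

    `front` is a list of (f1, f2) objective vectors; `ref` must be worse
    than every point on the front in both objectives.
    """
    # Sort by the first objective, then sweep and accumulate rectangles.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:  # point adds a new slab; dominated points add nothing
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# A better-converged front encloses more volume below the reference point.
front_a = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
front_b = [(2.0, 5.0), (3.0, 3.0), (5.0, 2.0)]
print(hypervolume_2d(front_a, (6.0, 6.0)))  # → 20.0
print(hypervolume_2d(front_b, (6.0, 6.0)))  # → 11.0
```

Comparing two MOSAs under this QI amounts to comparing such values over their final fronts, which is exactly the kind of per-QI preference the study examines.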
ISBN (Digital): 9783030597627
ISBN (Print): 9783030597610; 9783030597627
In Search-Based Software Engineering (SBSE), users typically select a set of multi-objective search algorithms (MOSAs) for their experiments without any justification, or they simply choose an MOSA because of its popularity (e.g., NSGA-II). On the other hand, users know certain characteristics of the solutions they are interested in. Such characteristics are typically measured with Quality Indicators (QIs), which are commonly used to evaluate the quality of solutions produced by an MOSA. Consequently, these QIs are often employed to empirically evaluate a set of MOSAs for a particular search problem in order to find the best MOSA. Thus, to guide SBSE users in choosing an MOSA that produces the kind of solutions measured by a specific QI they are interested in, we present an empirical evaluation with a set of SBSE problems to study the relationships among commonly used QIs and MOSAs in SBSE. Our aim, in studying such relationships, is to identify whether a QI has certain characteristics because of which it prefers a certain MOSA. Such preferences are then used to provide insights and suggestions to SBSE users for selecting an MOSA, given that they know which quality aspects of solutions they are looking for.
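All the MOSAs named above rest on Pareto dominance, and every QI is ultimately computed over a nondominated front. As a point of reference, here is a minimal sketch of dominance checking and front extraction for minimization problems; it is a standard construction, not an excerpt from any of the evaluated algorithms, and the population values are illustrative.

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]

pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(nondominated(pop))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

NSGA-II's nondominated sorting repeatedly peels off such fronts from the population, which is why different MOSAs can yield fronts that different QIs rank differently.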
Requirements review is an effective technique for ensuring the quality of requirements in practice, especially in safety-critical domains (e.g., avionics and automotive systems). In such contexts, a typical requirements review process often prioritizes requirements due to limited time and monetary budgets, for instance by reviewing requirements with higher implementation cost earlier in the process. However, such implementation costs are typically estimated by stakeholders who often lack knowledge about (future) implementation scenarios, which introduces uncertainty in the form of potential cost overruns. In this article, we explicitly consider such uncertainty (quantified as cost-overrun probability) when prioritizing requirements, based on the assumption that a requirement with higher importance, a higher number of dependencies on other requirements, and higher implementation cost should be reviewed with higher priority. Motivated by this, we formulate four objectives for uncertainty-wise requirements prioritization: maximizing requirement importance, requirement dependencies, implementation cost, and cost-overrun probability. These four objectives are integrated into our search-based, uncertainty-wise requirements prioritization approach with tool support, named URP. We evaluated six multi-objective search algorithms (MOSAs) (i.e., NSGA-II, NSGA-III, MOCell, SPEA2, IBEA, and PAES) together with Random Search (RS) using three real-world datasets (i.e., the RALIC, Word, and ReleasePlanner datasets) and 19 synthetic optimization problems. Results show that all the selected MOSAs solve the requirements prioritization problem with significantly better performance than RS. Among them, IBEA was over 40% better than RS in terms of permutation effectiveness for the first 10% of prioritized requirements in the prioritization sequence of all three datasets. In addition, IBEA achieved the best performance in terms of th
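One plausible way to encode the four objectives over a review order is a position-discounted sum per attribute, so that placing high-importance, high-dependency, high-cost, high-risk requirements earlier scores higher on all four maximized objectives. This is a hypothetical sketch under that assumption; the paper's exact objective definitions, and the requirement records below, are not taken from URP.

```python
# Hypothetical requirement records: (importance, num_dependencies,
# implementation_cost, overrun_probability). Values are illustrative only.
REQS = {
    "R1": (5, 2, 8.0, 0.7),
    "R2": (3, 0, 2.0, 0.1),
    "R3": (4, 3, 5.0, 0.4),
}

def objectives(order, reqs=REQS):
    """Score a review order on four URP-style objectives (all maximized).

    Each attribute is summed with a position discount of 1/(rank+1), so
    earlier positions contribute more.
    """
    totals = [0.0, 0.0, 0.0, 0.0]
    for rank, rid in enumerate(order):
        weight = 1.0 / (rank + 1)
        for i, value in enumerate(reqs[rid]):
            totals[i] += weight * value
    return tuple(totals)

# Fronting the important, risky, costly R1 scores higher on all objectives
# than deferring it to the end.
print(objectives(["R1", "R3", "R2"]))
print(objectives(["R2", "R3", "R1"]))
```

An MOSA such as IBEA would then search the space of permutations for Pareto-optimal trade-offs among these four totals.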
Search-Based Software Engineering (SBSE) researchers who apply multi-objective search algorithms (MOSAs) often assess the quality of the solutions produced by MOSAs with one or more quality indicators (QIs). However, SBSE lacks evidence providing insights on commonly used QIs, especially about agreements among them and their relations to SBSE problems and the applied MOSAs. Such evidence about QI agreements is essential to understand relationships among QIs, identify redundant QIs, and consequently devise guidelines for SBSE researchers to select appropriate QIs for their specific contexts. To this end, we conducted an extensive empirical evaluation to provide insights on commonly used QIs in the context of SBSE, by studying agreements among QIs both with and without considering differences across SBSE problems and MOSAs. In addition, by defining a systematic process based on three common ways of comparing MOSAs in SBSE, we present additional observations that were automatically derived from the results of our empirical evaluation. These observations can be used by SBSE researchers to gain a better understanding of commonly used QIs in SBSE, in particular regarding their agreements. Finally, based on the results, we also provide a set of guidelines for SBSE researchers to select appropriate QIs for their particular context.
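Agreement between two QIs can be quantified by correlating the rankings they induce over the same set of MOSAs; Kendall's tau is a standard choice for this. The sketch below is a generic tie-free tau computation, not the paper's analysis pipeline, and the per-QI rankings are invented for illustration.

```python
def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two tie-free rankings of the same items.

    Ranks are dicts mapping item -> position (1 = best); tau = 1 means
    the two quality indicators order the algorithms identically, -1
    means they fully disagree.
    """
    items = list(rank_a)
    concordant = discordant = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            x, y = items[i], items[j]
            s = (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical rankings of four MOSAs by two QIs; only one pair
# (SPEA2 vs IBEA) is ordered differently, so agreement is high.
hv_rank = {"NSGA-II": 1, "SPEA2": 2, "IBEA": 3, "MOCell": 4}
igd_rank = {"NSGA-II": 1, "SPEA2": 3, "IBEA": 2, "MOCell": 4}
print(kendall_tau(hv_rank, igd_rank))  # → 0.666...
```

Low tau between two QIs would flag them as measuring genuinely different quality aspects, while tau near 1 suggests one of them may be redundant for that context.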