ISBN (digital): 9798331529246
ISBN (print): 9798331529253
Software reliability plays a crucial role in assessing the overall quality of software systems. Developing highly reliable software presents challenges, including ensuring usability, maintaining consistent performance, offering reliable service, and facilitating smooth maintenance. To overcome these limitations, researchers have turned to intelligent prediction and optimization algorithms. This paper presents a comprehensive review of software reliability prediction, emphasizing the application of machine learning techniques. It also provides an analytical evaluation of existing methods to identify areas for improvement and future research opportunities in software reliability prediction.
ISBN (digital): 9798331511241
ISBN (print): 9798331511258
This paper presents a study on the design and development of a learning management system (LMS) integrated with large language models (LLMs) and image recognition/generation algorithms to automate software task validation and multimodal educational data analysis. The purpose of the development is to improve the quality of feedback for learners, deepen the understanding of errors made, and expand the capabilities of the system by combining LLMs with visual processing algorithms. The system is based on domestic LLMs such as GigaChat and YandexGPT, Retrieval-Augmented Generation (RAG) technology, and modern computer vision algorithms. The system architecture is described, including GitVerse for code storage, GitVerse Actions for automated testing of solutions in Docker containers, and image processing modules for analyzing and generating visual data in the educational process. The conducted experiment demonstrates the effectiveness of the integrative approach, which improves the quality of feedback and increases student performance. Prospects for further research include optimizing computational resources and developing techniques for integrated analysis of multimodal educational data.
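The retrieval-augmented feedback step described above can be sketched in a few lines. This is a minimal illustration, not the system's actual implementation: the token-overlap ranking stands in for a real vector store, and `build_feedback_prompt` is a hypothetical helper (the paper's GigaChat/YandexGPT client calls are not shown here).

```python
# Minimal RAG sketch: retrieve the most relevant course material for a failed
# test, then assemble a feedback prompt to be sent to an LLM.
from collections import Counter

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by simple token overlap with the query
    (a stand-in for a real embedding-based vector store)."""
    q = Counter(query.lower().split())
    def score(doc: str) -> int:
        return sum((Counter(doc.lower().split()) & q).values())
    return sorted(documents, key=score, reverse=True)[:top_k]

def build_feedback_prompt(error_log: str, documents: list[str]) -> str:
    """Augment the learner's error with retrieved context before the LLM call."""
    context = "\n".join(retrieve(error_log, documents))
    return (
        "You are a teaching assistant. Using the course material below,\n"
        "explain this test failure to the student.\n\n"
        f"Material:\n{context}\n\n"
        f"Failure:\n{error_log}"
    )

docs = [
    "Unit testing in Python: assert statements compare expected and actual values.",
    "Docker containers isolate the build environment for reproducible grading.",
]
prompt = build_feedback_prompt("AssertionError: expected 4, got 5", docs)
```

The assembled prompt would then be passed to the LLM client; the retrieval step is what grounds the generated feedback in course material rather than the model's general knowledge.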
Class imbalance presents considerable challenges for software defect prediction (SDP). However, software defect datasets exhibit additional complex characteristics, with class overlap being the most harmful to prediction performance. Currently, most techniques focus only on balancing data distributions. To this end, this study applies the previously proposed Newton's Cooling Law-Based Oversampling Technique (NCLWO) to software defect prediction. The method first uses density and distance factors to identify hard-to-learn defect instances in SDP, such as boundary samples in overlapping distributions and small disjuncts, and to measure the initial heat of each defect instance. Newton's Cooling Law is then applied to expand the sampling area into a hypersphere until thermal equilibrium is achieved, moving non-defective instances out of the region to clean the overlapping area. Finally, weighted oversampling is performed within the expanded sampling area to generate defect instances, addressing within-class imbalance. Comparative experiments were conducted between NCLWO and ten state-of-the-art sampling methods on 30 software defect datasets. The results show that the proposed method excels in performance and demonstrates significant robustness and versatility in handling the joint effects of class overlap and imbalance.
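The core mechanism can be sketched as follows. This is a deliberately simplified stand-in, not the authors' exact NCLWO: here the "heat" of a minority (defective) instance comes from the fraction of majority points among its nearest neighbours, the heat decays following Newton's cooling law T(t) = T_env + (T0 - T_env)·e^(-kt), and synthetic defect instances are drawn inside the resulting sampling radius.

```python
# Simplified cooling-law oversampling sketch: hotter (more overlapped)
# minority instances get a smaller safe sampling radius.
import numpy as np

def cooled_radius(heat0: float, base_radius: float, k: float = 1.0, t: float = 1.0) -> float:
    heat = heat0 * np.exp(-k * t)          # Newton's cooling toward T_env = 0
    return base_radius * (1.0 - heat)      # heat in [0, 1] keeps the radius positive

def oversample(X_min: np.ndarray, X_maj: np.ndarray, n_new: int, k: int = 5,
               rng=None) -> np.ndarray:
    if rng is None:
        rng = np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        x = X_min[rng.integers(len(X_min))]
        # Density/distance factors: how many of the k nearest majority points
        # intrude inside the k-nearest-minority neighbourhood.
        d_maj = np.sort(np.linalg.norm(X_maj - x, axis=1))[:k]
        d_min = np.sort(np.linalg.norm(X_min - x, axis=1))[1:k + 1]
        heat0 = len(d_maj[d_maj < d_min.max()]) / k
        r = cooled_radius(heat0, base_radius=d_maj.min())
        # Draw a synthetic defect instance uniformly in a random direction.
        direction = rng.normal(size=x.shape)
        direction /= np.linalg.norm(direction)
        synthetic.append(x + direction * rng.uniform(0, r))
    return np.asarray(synthetic)

rng = np.random.default_rng(1)
X_min = rng.normal(0.0, 0.3, (8, 2))    # defective class (minority)
X_maj = rng.normal(1.5, 0.3, (40, 2))   # non-defective class (majority)
new_pts = oversample(X_min, X_maj, n_new=10, rng=rng)
```

The full method additionally cleans the overlapping region by relocating non-defective instances and weights the sampling within the hypersphere; this sketch only illustrates the heat-then-cool radius idea.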
Software requirements prioritization is a crucial step in the requirements engineering process. It is applied to order specific requirements according to schedule and budget constraints. Many requirements prioritization approaches have been proposed in the literature, but not all have been empirically investigated in real-life settings; some cannot be used for all types of projects, and others do not provide a hierarchical model for the requirements. Therefore, this study proposes an analytic hierarchy process (AHP)-based technique to prioritize functional and quality requirements based on stakeholders' perspectives. The proposed technique was demonstrated on an organization's quality management system and eliminates inconsistencies between business and technical stakeholder valuations. The results indicated that the consistency ratio for the hierarchy was acceptable (0.089), thereby confirming the accuracy of the analysis. Notably, the effectiveness requirement (28%) ranked highest among the functional and nonfunctional categories in this hierarchy.
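The consistency ratio quoted above (0.089, below the usual 0.10 threshold) comes from the standard AHP check: CI = (λ_max − n)/(n − 1) and CR = CI/RI, where RI is Saaty's random index for an n×n pairwise-comparison matrix. A minimal sketch with an illustrative 3×3 matrix (not the paper's actual data):

```python
# AHP consistency check: a perfectly transitive comparison matrix gives CR = 0.
import numpy as np

# Saaty's random index RI, indexed by matrix size n.
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def consistency_ratio(A: np.ndarray) -> float:
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)   # principal eigenvalue
    ci = (lam_max - n) / (n - 1)               # consistency index
    return ci / RANDOM_INDEX[n]                # consistency ratio

# Hypothetical pairwise judgements: req1 vs req2 = 2, req1 vs req3 = 4,
# req2 vs req3 = 2 (reciprocals below the diagonal).
A = np.array([[1.0,  2.0, 4.0],
              [0.5,  1.0, 2.0],
              [0.25, 0.5, 1.0]])
cr = consistency_ratio(A)
```

Because this example matrix is perfectly transitive (2 × 2 = 4), λ_max equals n and CR is zero; real stakeholder judgements, as in the study, typically yield a small positive CR that is accepted when it stays below 0.10.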
With the launch of NISAR, available open-access Earth Observation data reaches a new milestone, with an estimated ~70 terabytes of NISAR science data produced daily. This additional data creates many opportunities within the science and user community for new and more in-depth analyses, yet at the same time, most organizations lack the budget, resources, and skills to analyze all this data on-premises. This article introduces concepts for coupling SAR processing software from commercial and open-source providers with highly scalable cloud processing techniques to digest SAR data at various input levels (from raw to value-added products). Included are examples of global-scale processing of Sentinel-1 data for InSAR coherence estimation, the input to global biomass modeling, and the routine operational processing of SAR time series data for national and international near-real-time flood mapping. The article also introduces SEPPO (cloud-scaling Software for Earth big data Processing, Prediction modeling, and Organization), which is used to meet these processing challenges on Amazon Web Services (AWS) for large-volume, operational SAR data processing in complex workflows.
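The InSAR coherence estimation mentioned above has a standard form: for two co-registered complex SAR images s1 and s2, the coherence over a local window is |Σ s1·conj(s2)| / sqrt(Σ|s1|² · Σ|s2|²). A minimal sketch of that estimator (not SEPPO's actual implementation):

```python
# Sample coherence estimator over a local window of complex SAR pixels.
import numpy as np

def coherence(s1: np.ndarray, s2: np.ndarray) -> float:
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return float(num / den)

rng = np.random.default_rng(0)
s1 = rng.normal(size=64) + 1j * rng.normal(size=64)      # simulated speckle
identical = coherence(s1, s1)                            # fully coherent -> 1.0
noise = rng.normal(size=64) + 1j * rng.normal(size=64)
decorrelated = coherence(s1, noise)                      # independent speckle -> near 0
```

At global scale this per-window computation is embarrassingly parallel, which is what makes it a natural fit for the cloud-scaling approach the article describes.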
The rapid development of large language models (LLMs) has significantly transformed the field of artificial intelligence, demonstrating remarkable capabilities in natural language processing and moving towards multi-modal functionality. These models are increasingly integrated into diverse applications, impacting both research and industry. However, their development and deployment present substantial challenges, including the need for extensive computational resources, high energy consumption, and complex software optimizations. Unlike traditional deep learning systems, LLMs require unique optimization strategies for training and inference, focusing on system-level efficiency. This paper surveys hardware and software co-design approaches specifically tailored to address the unique characteristics and constraints of large language models. This survey analyzes the challenges and impacts of LLMs on hardware and algorithm research, exploring algorithm optimization, hardware design, and system-level innovations. It aims to provide a comprehensive understanding of the trade-offs and considerations in LLM-centric computing systems, guiding future advancements in AI. Finally, we summarize the existing efforts in this space and outline future directions toward realizing production-grade co-design methodologies for the next generation of large language models and AI systems.
This research aims to design a hybrid swarm-optimized machine learning software defect prediction (HSoMLSDP) framework to predict software defects. We do this by designing a swarm-optimized machine learning defect prediction (SoMLDP) model within the HSoMLSDP framework. To enhance the defect prediction accuracy of the SoMLDP model, this paper designs two novel hybrid swarm-optimization algorithms (SOAs): the gravitational force grasshopper optimization algorithm-artificial bee colony (GFGOA-ABC) and the levy flight grasshopper optimization algorithm-artificial bee colony (LFGOA-ABC). These algorithms combine the enhanced exploration features of LFGOA and GFGOA with the robust exploitation capacity of the artificial bee colony (ABC). Before validating the HSoMLSDP framework, the efficacy of the LFGOA-ABC and GFGOA-ABC algorithms is first confirmed by experiments on 19 benchmark functions (BFs), assessing the mean and standard deviation (SD) of optimal values, convergence rate, and convergence rate improvements. Following BF verification, a second experiment tunes the hyperparameters of the ML models (artificial neural network, XGBoost) to improve the defect prediction accuracy of the SoMLDP model. The experiments show a faster convergence rate on the BFs and notable improvements of 0.01-0.28 in software defect prediction (SDP) accuracy on NASA defect datasets compared with state-of-the-art methods. This enhancement in accuracy confirms the correctness of the SoMLDP model, thereby validating the HSoMLSDP framework.
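The exploration/exploitation pattern behind such hybrids can be illustrated on a benchmark function. This is a simplified stand-in, not the authors' exact LFGOA-ABC: Levy-flight steps provide global exploration, and an ABC-style neighbour search with greedy selection provides local exploitation, shown here on the sphere function f(x) = Σ x_i².

```python
# Simplified hybrid Levy-flight + ABC-style optimization sketch.
import numpy as np
from math import gamma, sin, pi

def levy_step(rng, dim, beta=1.5):
    """Levy-distributed step lengths via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def hybrid_optimize(f, dim=5, pop=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best = X[fit.argmin()]
        for i in range(pop):
            # Exploration: Levy flight relative to the current best.
            cand_explore = X[i] + 0.01 * levy_step(rng, dim) * (X[i] - best)
            # Exploitation (ABC-style): perturb toward a random peer.
            j = rng.integers(pop)
            cand_exploit = X[i] + rng.uniform(-1, 1, dim) * (X[i] - X[j])
            for c in (cand_explore, cand_exploit):
                fc = f(c)
                if fc < fit[i]:          # greedy selection, as in ABC
                    X[i], fit[i] = c, fc
    return X[fit.argmin()], float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = hybrid_optimize(sphere)
```

In the paper's setting the same kind of optimizer searches the hyperparameter space of the ML models rather than a benchmark function; the greedy selection step is what preserves monotone improvement of each candidate solution.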
Existing approaches in software product lines usually neglect knowledge modeling and simulation of the interaction between features capable of bringing dynamism and automation. Consequently, these solutions miss opportunities to resolve associated and emerging problems, including defect detection and quality assurance, which can be solved by effectively extracting and utilizing knowledge from data based on the differences between variants. We seize these opportunities by introducing a fully automated and minimalistic approach to software product line evolution that strictly focuses on handling variability at low-level code fragments. It incorporates the autonomous modeling of emerging knowledge across preconfigured simulations. Specifically, fully automated knowledge-driven software product line evolution provides various views on an existing software product line, its variants, and their evolution through semantic and structural information accompanied by the time and order of the performed changes. We initially developed our approach for software product line evolution and then successfully applied it to the evolution of fractal scripts, which we present accordingly. Fractal products are much simpler because the application state does not extend beyond the recursive behavior, making it manageable within the drawing. Each change is propagated into repetitive phases, causing the visual appearance of the implemented feature to be infinitely detailed. Even minor changes in low-level code fragments tend to manifest as user-visible features owing to recursive behavior, allowing one to introduce new features en masse and/or configure existing features and manage variability observable as infinitely detailed shapes. Future work proposes automated observation of a more comprehensive in-code representation of feature trees.
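The recursive-propagation idea can be sketched with a tiny L-system: in a fractal product, a variant that changes one low-level rewrite rule shows up at every recursion depth, so a one-character variability point becomes a globally visible feature. The rules below are illustrative, not the paper's actual fractal scripts.

```python
# Tiny L-system: one flipped rewrite rule propagates to every recursion level.
def expand(axiom: str, rules: dict, depth: int) -> str:
    """Repeatedly rewrite each symbol by its rule (symbols without a rule stay)."""
    for _ in range(depth):
        axiom = "".join(rules.get(ch, ch) for ch in axiom)
    return axiom

base_rules    = {"F": "F+F-F"}   # base product
variant_rules = {"F": "F-F+F"}   # variant: a single low-level fragment flipped

base    = expand("F", base_rules, 3)
variant = expand("F", variant_rules, 3)
```

After three expansions both strings contain 27 drawing commands (53 characters in total), yet they differ throughout: the single-rule change surfaces at every level, which is exactly the "minor change, user-visible feature" effect the approach exploits.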
In this letter, a novel high-throughput software polarization-adjusted convolutional (PAC) decoder based on the four-node fast list (FFL) decoding algorithm is proposed. To improve the parallel processing capabilities of the proposed decoder, we map the FFL decoding algorithm onto processors using single-instruction-multiple-data (SIMD) instruction sets to decode multiple frames of data in parallel. Moreover, the proposed decoder is general enough to be implemented on X86 and ARM processors (NEON, SSE, and AVX256 instruction sets). Experiments show that the proposed software PAC decoder achieves a throughput of 26.256 Mb/s.
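The frame-parallel idea can be sketched in NumPy, where one vectorized operation processes many frames at once, analogous to packing frames into SIMD lanes. The example uses the min-sum f-function common to polar/PAC successive-cancellation decoding; it is a sketch of the parallelization pattern, not the proposed FFL decoder.

```python
# One vectorized check-node update covers every frame simultaneously,
# mirroring how SIMD lanes hold one LLR from each frame.
import numpy as np

def f_minsum(llr_a: np.ndarray, llr_b: np.ndarray) -> np.ndarray:
    """f(a, b) = sign(a) * sign(b) * min(|a|, |b|), applied across all frames."""
    return np.sign(llr_a) * np.sign(llr_b) * np.minimum(np.abs(llr_a), np.abs(llr_b))

# 8 frames "in parallel": rows are frames, columns are LLR positions.
llrs = np.array([[ 1.5, -0.3,  2.0, -4.0],
                 [-2.0,  0.5, -1.0,  3.0]] * 4)
left, right = llrs[:, :2], llrs[:, 2:]
out = f_minsum(left, right)   # one call updates all 8 frames
```

In the actual decoder the same pattern is realized with SSE/AVX256/NEON intrinsics, so each instruction advances the decoding of as many frames as fit into one SIMD register.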
Generating test data is a critical component of software testing, essential for ensuring the proper functioning of software systems. Automated test data generation, in particular, can greatly improve the efficiency and accuracy of the testing process. In 1990, the paper "Automated Software Test Data Generation," published in IEEE Transactions on Software Engineering (Korel, 1990), introduced a novel approach to automated test data generation, focusing on actual program execution, fitness function minimization methods, and dynamic data flow analysis. This paper discusses the impact of that 1990 IEEE TSE publication (Korel, 1990) on automated test data generation and software engineering.
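The fitness-minimization idea can be illustrated with a branch-distance function: for a target branch such as `if x == y:`, the fitness |x − y| is zero exactly when the branch is taken, and a simple local search adjusts the input until the fitness reaches zero. This is a minimal sketch of the principle, not Korel's full algorithm.

```python
# Branch-distance fitness minimization: hill-climb an input until the
# target branch `if x == y` becomes true.
def branch_distance(x: int, y: int) -> int:
    """Fitness for covering the true branch of `if x == y` (0 means covered)."""
    return abs(x - y)

def generate_test_input(y: int, x0: int = 0, max_steps: int = 1000) -> int:
    """Adjust x step by step, always keeping the neighbour with lower fitness."""
    x = x0
    for _ in range(max_steps):
        if branch_distance(x, y) == 0:
            return x
        x = min((x - 1, x + 1), key=lambda c: branch_distance(c, y))
    return x

found = generate_test_input(y=42)   # converges on an input that takes the branch
```

Korel's contribution was to evaluate such distances on the actual executing program, guided by dynamic data flow analysis, rather than on a static model of the code.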