In software project management, effort estimation is a useful method for controlling project progress. How to estimate effort in real time in Agile software development has always been a diffi...
Context: There is a growing belief in the literature that large language models (LLMs), such as ChatGPT, can mimic human behavior in surveys. Gap: While the literature has shown promising results in social sciences an...
This paper presents initial insights into the effectiveness of ChatGPT in detecting code smells in Java projects. We utilize a large dataset comprising four code smells - Blob, Data Class, Feature Envy, and Long Metho...
The maximum internal shear resistance of engineering geomaterials is defined as shear strength. The numerical modeling of numerous engineering problems (e.g. bearing capacity, slope stability, and lateral pressure) is...
This paper proposes an approach that combines Fault Tree Analysis with Problem Frames to present the latent reliability risks of Cyber-Physical Systems. By extracting causality within each, this approach associates th...
ISBN (digital): 9781665490429
ISBN (print): 9781665490429
Vision-based control offers significant potential for the end-point positioning of continuum robots under physical sensing limitations. Traditional visual servoing requires feature extraction and tracking followed by full or partial pose estimation, limiting the controller's efficiency. We hypothesize that employing deep learning models and implementing direct visual servoing can effectively resolve this issue by eliminating such intermediate steps, enabling control of a continuum robot without requiring an exact system model. This paper presents the control of a single-section tendon-driven continuum robot using a modified VGG-16 deep learning network and an eye-in-hand direct visual servoing approach. The proposed algorithm is first developed in Blender software using only one input image of the target and then implemented on a real robot. The convergence and accuracy of the results in normal, shadowed, and occluded scenes demonstrate the effectiveness and robustness of the proposed controller.
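As a rough illustration of the direct visual servoing idea described above, the following sketch adapts a VGG-16 backbone to regress actuator commands straight from a camera frame. It assumes a PyTorch implementation; the two-tendon output head, input resolution, and training setup are illustrative choices, not details taken from the paper.

```python
# Sketch of a VGG-16 backbone repurposed for direct visual servoing: the
# classifier head is replaced by a small regressor that maps the camera
# image straight to actuator commands (two tendon displacements assumed).
import torch
import torch.nn as nn
from torchvision import models


class DirectVisualServoNet(nn.Module):
    def __init__(self, n_actuators: int = 2):
        super().__init__()
        vgg = models.vgg16(weights=None)      # pretrained weights optional
        self.features = vgg.features          # convolutional backbone
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.regressor = nn.Sequential(       # replaces the VGG classifier
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, n_actuators),      # tendon commands
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.pool(self.features(image)))


# One forward pass maps the current frame to control commands, with no
# explicit feature extraction, tracking, or pose estimation in between.
model = DirectVisualServoNet()
frame = torch.randn(1, 3, 224, 224)           # placeholder camera image
print(model(frame).shape)                     # torch.Size([1, 2])
```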
ISBN (print): 9798350319439
SysML is a standard language for modeling systems of any type, such as planes, ships, and software-intensive systems. The Software Product Line approach to large-scale reuse has demonstrated its success, providing industry with benefits in terms of cost savings and faster time to market. The available literature indicates that there have been efforts to enhance SysML's capability to handle product families. However, these attempts are not yet fully systematic, and a significant amount of work remains to be done in this area. In this paper, we examine the SysML Block Diagram to investigate to what extent it permits variability representation and how it can evolve during system evolution or when agility is needed. We want to capitalize on the knowledge necessary for Block Diagram extension and evolution and to take advantage of knowledge from Product Line domain engineering and application engineering. We therefore decide to use an ontology, an artificial intelligence artifact. An ontology is a powerful means of representing knowledge and reasoning about it. Here, we use the ontology to support decision making for Block Diagram evolution as well.
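To make the ontology idea concrete, here is a minimal sketch of how block-diagram variability knowledge could be captured as an RDF graph and queried before evolving the diagram. The rdflib library and the ex: vocabulary (ex:Block, ex:hasOptionalPart, and so on) are assumptions for illustration; the paper's actual ontology is not reproduced here.

```python
# Sketch: encoding SysML block variability knowledge as RDF triples so it can
# be queried when the block diagram evolves. The ex: vocabulary is invented
# for illustration.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/spl#")
g = Graph()
g.bind("ex", EX)

# A product-line block with one mandatory and one optional part.
g.add((EX.Autopilot, RDF.type, EX.Block))
g.add((EX.Autopilot, EX.hasMandatoryPart, EX.NavigationUnit))
g.add((EX.Autopilot, EX.hasOptionalPart, EX.TerrainFollowing))
g.add((EX.TerrainFollowing, RDFS.comment,
       Literal("Variation point: present only in one product variant")))

# Query the variation points before deciding how the block diagram evolves.
for block, part in g.subject_objects(EX.hasOptionalPart):
    print(f"{block.split('#')[-1]} has optional part {part.split('#')[-1]}")
```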
ISBN (print): 9798400717017
In recent years, the significant increase in enterprise data availability and the progress in Artificial Intelligence (AI) have enabled organizations to address real-world issues through Machine Learning (ML). In this regard, machine learning operations (MLOps) have proven to be a beneficial strategy for evolving ML models from theoretical ideas to practical solutions for business problems. With the knowledge of MLOps being vast and scattered, this research work focuses on the application of MLOps methodologies in sophisticated operational technologies, prioritizing the enhancement of security measures. This research work also discusses the specific challenges of securing ML implementations in such settings and underscores the importance of robust MLOps strategies in ensuring effective security protocols. Moreover, it explains current practices and identifies improvement areas, highlighting the importance of MLOps in overcoming obstacles and maximizing the value of ML in operational technology contexts.
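As a concrete (and deliberately simple) example of the kind of security control such an MLOps strategy might include, the sketch below verifies a model artifact's integrity against a registered hash before promotion to an operational-technology environment. The artifact name and registry convention are hypothetical and not taken from the paper.

```python
# Sketch of a single MLOps security control: refuse to deploy a model
# artifact whose hash does not match the value recorded at training time.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file so large model artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_artifact(model_path: Path, expected_sha256: str) -> bool:
    """Gate deployment on an integrity check against the model registry."""
    return sha256_of(model_path) == expected_sha256


if __name__ == "__main__":
    artifact = Path("model.onnx")        # hypothetical artifact name
    registered_hash = "<hash recorded by the training pipeline>"
    if artifact.exists() and verify_model_artifact(artifact, registered_hash):
        print("integrity check passed: promote to the OT environment")
    else:
        print("integrity check failed or artifact missing: block deployment")
```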
ISBN (digital): 9798400712487
ISBN (print): 9798400712487
WebAssembly (abbreviated Wasm) has emerged as a cornerstone of web development, offering a compact binary format that allows high-performance applications to run at near-native speeds in web browsers. Despite its advantages, Wasm's binary nature presents significant challenges for developers and researchers, particularly regarding readability when debugging or analyzing web applications. Therefore, effective decompilation becomes crucial. Unfortunately, traditional decompilers often struggle to produce readable outputs. While some large language model (LLM)-based decompilers have shown good compatibility with general binary files, they still face specific challenges when dealing with Wasm. In this paper, we introduce a novel approach, WADEC, which is the first use of a fine-tuned LLM to interpret and decompile Wasm binary code into a higher-level, more comprehensible source code representation. The LLM was meticulously fine-tuned using a specialized dataset of wat-c code snippets, employing self-supervised learning techniques. This enables WADEC to effectively decompile not only complete wat functions but also finer-grained wat code snippets. Our experiments demonstrate that WADEC markedly outperforms current state-of-the-art tools, offering substantial improvements across several metrics. It achieves a code inflation rate of only 3.34%, a dramatic 97% reduction compared to the state of the art's 116.94%. Unlike the output of baselines, which cannot be directly compiled or executed, WADEC maintains a recompilability rate of 52.11%, a re-execution rate of 43.55%, and an output consistency of 27.15%. Additionally, it significantly exceeds state-of-the-art performance in AST edit distance similarity by 185%, cyclomatic complexity by 8%, and cosine similarity by 41%, achieving an average code similarity above 50%. In summary, WADEC enhances understanding of the code's structure and execution flow, facilitating automated code analysis, optimization, and security auditing.
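For readers unfamiliar with the code inflation metric mentioned above, the sketch below shows one plausible way to compute it as relative growth in non-empty source lines. The exact definition used for WADEC is not given here, so this formulation is an assumption for illustration only.

```python
# Sketch: a line-count-based "code inflation rate". A decompiler that turns
# 100 lines of source into 103 lines scores 3.0%, in the same ballpark as
# the 3.34% reported for WADEC (the paper's exact definition may differ).
def code_inflation_rate(original_src: str, decompiled_src: str) -> float:
    """Relative growth in non-empty source lines after decompilation (%)."""
    count = lambda s: sum(1 for line in s.splitlines() if line.strip())
    orig, dec = count(original_src), count(decompiled_src)
    if orig == 0:
        raise ValueError("original source has no non-empty lines")
    return 100.0 * (dec - orig) / orig


original = "\n".join(f"stmt_{i};" for i in range(100))
decompiled = "\n".join(f"stmt_{i};" for i in range(103))
print(f"{code_inflation_rate(original, decompiled):.2f}%")  # 3.00%
```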
ISBN (print): 9798400702174
Developers often use crash reports to understand the root cause of bugs. However, locating the buggy source code snippet from such information is a challenging task, mainly when the log database contains many crash reports. To mitigate this issue, recent research has proposed and evaluated approaches for grouping crash report data and using stack trace information to locate bugs. The effectiveness of such approaches has been evaluated mainly by comparing the candidate buggy code snippets with the actual changed code in bug-fix commits, which happens in the context of retrospective repository mining studies. Therefore, the existing literature still lacks discussion of the use of such approaches in the daily life of a software company, which could reveal developers' perceptions of these approaches. In this paper, we report our experience of using an approach for grouping crash reports and finding buggy code on a weekly basis for 18 months, within three development teams in a software company. We grouped over 750,000 crash reports, opened over 130 issues, and collected feedback from 18 developers and team leaders. Among other results, we observe that the number of system logs related to a crash report group is not the only criterion developers use to choose a candidate bug to be analyzed. Instead, other factors were considered, such as the need to deliver customer-prioritized features and the difficulty of solving complex crash reports (e.g., architectural debts), to name a few. The approach investigated in this study correctly suggested the buggy file most of the time; its precision was around 80%. In this study, the developers also shared their perspectives on the usefulness of the suspicious files and methods extracted from crash reports for fixing related bugs.
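To illustrate the kind of grouping the studied approach performs, the sketch below clusters crash reports by stack-trace similarity. The sequence-matching heuristic and the 0.7 threshold are illustrative assumptions, not the algorithm evaluated in the paper.

```python
# Sketch: greedy grouping of crash reports by stack-trace similarity. Each
# trace joins the first existing group whose representative it resembles.
from difflib import SequenceMatcher
from typing import List


def trace_similarity(a: List[str], b: List[str]) -> float:
    """Order-sensitive ratio of matching frames between two stack traces."""
    return SequenceMatcher(None, a, b).ratio()


def group_crash_reports(traces: List[List[str]],
                        threshold: float = 0.7) -> List[List[List[str]]]:
    groups: List[List[List[str]]] = []
    for trace in traces:
        for group in groups:
            if trace_similarity(trace, group[0]) >= threshold:
                group.append(trace)
                break
        else:
            groups.append([trace])
    return groups


reports = [
    ["app.main", "service.handle", "db.connect", "socket.open"],
    ["app.main", "service.handle", "db.connect", "socket.timeout"],
    ["ui.render", "widget.paint"],
]
# The first two traces share 3 of 4 frames (similarity 0.75) and are grouped;
# the UI trace forms its own group.
for i, group in enumerate(group_crash_reports(reports)):
    print(f"group {i}: {len(group)} report(s)")
```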