Rapid progress in artificial intelligence has opened up extensive applications in software development. One area receiving attention is the evaluation of code quality using machine learning techniques. In this study, we examine the application of machine learning to predict the likelihood of defects in source code, using archival NASA datasets as case studies. The models are based on neural network algorithms, and the dataset is partitioned into training and test sets for performance evaluation. The findings indicate that a neural network combined with resampling yields high accuracy in predicting software defects: the trained network identifies intricate patterns in the data and provides precise estimates of the extent and severity of defects. These results have significant implications for the software industry, enabling developers to identify potential vulnerabilities early and take preventive measures before product release.
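The abstract does not include the implementation, but a minimal sketch of such a pipeline, assuming a NASA MDP-style CSV with static code metrics and a binary defect label (the file name, column names, and hyperparameters here are illustrative assumptions, not the paper's actual setup), could look like this:

```python
# Sketch: neural-network defect prediction with resampling.
# Assumes a NASA MDP-style CSV with code metrics and a binary
# "defective" label; file name and columns are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample
from sklearn.metrics import accuracy_score

df = pd.read_csv("nasa_defects.csv")  # hypothetical file
X, y = df.drop(columns=["defective"]), df["defective"]

# Partition into training and test data for performance evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Resampling: oversample the minority (defective) class in the
# training split only, so the test set keeps its original balance.
train = pd.concat([X_train, y_train], axis=1)
majority = train[train["defective"] == 0]
minority = train[train["defective"] == 1]
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=42)
balanced = pd.concat([majority, minority_up])
X_bal, y_bal = balanced.drop(columns=["defective"]), balanced["defective"]

# Scale features, then fit a small feed-forward neural network.
scaler = StandardScaler().fit(X_bal)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=42)
clf.fit(scaler.transform(X_bal), y_bal)

print("test accuracy:",
      accuracy_score(y_test, clf.predict(scaler.transform(X_test))))
```

Oversampling only the training split is a deliberate choice: resampling before the split would leak duplicated defect records into the test set and inflate the reported accuracy.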
Applications built with a Microservices Architecture (MSA) offer the desirable trait of good maintainability. Achieving it requires services that are well scoped and adhere to established design rules, and multiple factors must be taken into account during service design. The objective of this study is to examine the factors that affect maintainability in service design, with the goal of producing applications that are highly maintainable. A Systematic Literature Review (SLR) is used to identify these factors and strategies for improving them by examining relevant publications on the subject. After reviewing 45 publications, the study identified 8 factors and 14 solutions that can improve them during the service design process. The results of this SLR are expected to give application developers valuable insights, enabling them to produce service designs with good maintainability.
Mg/Al-TiO2 and Mg/Al-ZnO were successfully prepared for dibenzothiophene catalytic oxidative desulfurization. XRD, FTIR, TEM, and BET analyses were utilized to characterize the catalysts. In composites, the distinctive...
The purpose of this study is to analyze what previous studies have found about the problems currently faced by Small and Medium Enterprises (SMEs) and how Industry 4.0 technology helps address them, so that SMEs can develop and maintain business continuity. Most studies show that SMEs face considerable barriers when trying to use the potential of Industry 4.0 to improve the competitiveness and long-term sustainability of their businesses. This systematic literature review uses the Kitchenham technique to identify SME problems and the role of Industry 4.0 technologies in SMEs across industrialized and developing countries. Several databases, including IEEE, ScienceDirect, Web of Science, Taylor & Francis, Emerald, and Springer, were searched for relevant research articles published between 2019 and 2023. The findings offer valuable perspectives on the strategies, obstacles, and prospects that SMEs in different countries face in adjusting to Industry 4.0. The academic contribution of this research is to provide solutions by designing more robust approaches to promote the business sustainability of SMEs and their role in driving economic progress in their countries.
This study discusses the development of smart precision farming systems using big data and cloud-based intelligent decision support systems. Big data plays an important role in collecting, storing, and analyzing large amounts of agricultural data from various sources, including weather stations, soil sensors, satellite imagery, crop yield records, and pest and disease reports. This study highlights the differences between smart farming and precision farming, and describes the key techniques and system architecture, including the data collection, processing, analysis, and decision support components. Utilizing a cloud platform enables scalability and optimized performance, lowering costs and making the system safer and easier to manage. The integration of big data and Alibaba Cloud computing in smart precision farming can improve productivity by providing timely information and recommendations to farmers for better decision-making. The resulting smart precision farming system provides cost-effective real-time monitoring and predictive analytics to increase agricultural production and sustainability.
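As an illustration of the decision-support component described above, the sketch below folds a few aggregated sensor readings into an irrigation recommendation. The field names, thresholds, and rule logic are hypothetical, standing in for whatever rules the actual system encodes:

```python
# Illustrative sketch of a rule-based decision-support step that a
# cloud pipeline might run over aggregated sensor data. All thresholds
# and field names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class FieldReading:
    soil_moisture_pct: float   # from soil sensors
    rain_forecast_mm: float    # from weather-station / forecast data
    crop_stage: str            # e.g. "vegetative", "flowering"

def irrigation_advice(r: FieldReading) -> str:
    """Return a recommendation string for the farmer."""
    # Assumption: flowering crops are more drought-sensitive.
    threshold = 35.0 if r.crop_stage == "flowering" else 25.0
    if r.soil_moisture_pct >= threshold:
        return "no irrigation needed"
    if r.rain_forecast_mm >= 10.0:
        return "hold irrigation: sufficient rain forecast"
    return "irrigate today"

print(irrigation_advice(FieldReading(18.0, 2.5, "flowering")))
# -> "irrigate today"
```

In a production system this rule step would sit behind the analytics layer, with the cloud platform handling ingestion, storage, and delivery of the recommendation to the farmer.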
This research shows that social learning can be used to increase an organization's cybersecurity maturity level, using a literature study and case study approach. The literature study is used to identify social lea...
In real life, many activities are performed sequentially and must be carried out in order, such as the assembly steps in a manufacturing production process. Steps in such a sequence cannot be removed or added if the main goal of the sequence is to be achieved. There are also naturally occurring time series, such as rainy and hot conditions in a given area. Classifying time-series activities is important for detecting possible anomalies. The rapid development of machine learning models in recent years has made time-series classification an increasingly active research area, and several previous studies have reported that deep learning models are more accurate at classifying time-series data. In this paper, we compare Convolutional Neural Network (CNN) and Transformer deep learning models on time-series classification. Experimental results on the same public datasets show that the CNN model is more accurate than the Transformer model: measured with a confusion matrix, the CNN achieves 92% accuracy and the Transformer 80%.
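The abstract does not specify the architectures used, but a minimal sketch of a 1D-CNN time-series classifier of the kind being compared, evaluated via a confusion matrix, might look like the following. The input shape, layer sizes, training settings, and the random stand-in data are illustrative assumptions:

```python
# Sketch: 1D-CNN classifier for time-series data, evaluated with a
# confusion matrix. Shapes, layer sizes, and epochs are illustrative.
import numpy as np
import tensorflow as tf
from sklearn.metrics import confusion_matrix

n_samples, seq_len, n_classes = 1000, 128, 2
# Random stand-in for a public time-series dataset.
X = np.random.randn(n_samples, seq_len, 1).astype("float32")
y = np.random.randint(0, n_classes, size=n_samples)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, 1)),
    tf.keras.layers.Conv1D(64, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X[:800], y[:800], epochs=5, verbose=0)

# Accuracy read off the confusion matrix: trace / total.
pred = model.predict(X[800:], verbose=0).argmax(axis=1)
cm = confusion_matrix(y[800:], pred)
print(cm, "accuracy:", cm.trace() / cm.sum())
```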
The point of Agile Methodology is continuous improvement: delivering small features quickly without sacrificing quality, with every sprint better than the previous one, whether through fewer bugs or faster development and testing. We present how we reduced production bugs by customizing our sprint iteration. Bugs are unavoidable; no software engineer can write software without them. However, we can reduce bugs in production if we find them in lower environments as early as possible. The case study in this paper was taken from a technology company in Indonesia, where the work was carried out by the Quality Engineer (QA) team. We show that shift-left testing can help reduce bugs in production. Testing is part of agile methodology, and the main idea of shift-left testing is to move testing earlier so that it can be done by any team member, not only QA. We incorporated shift-left testing into our agile methodology for one year in 2022 and compared the results with the previous year.
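Shift-left testing is a practice rather than an API, but a small hypothetical illustration of the idea is a developer-written unit test that runs on every commit in a lower environment, long before QA or production sees the code. The function and tests below are invented for illustration and are not code from the case study:

```python
# Hypothetical illustration of shift-left testing: the developer who
# writes the feature also writes the tests, and they run on every
# commit (e.g. in CI), catching defects long before staging or
# production.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# pytest-style unit tests, runnable with `pytest this_file.py`.
def test_regular_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_invalid_discount_is_caught_early():
    with pytest.raises(ValueError):
        apply_discount(200.0, 120)  # caught here, not in production
```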
This paper explores the development of a multilabel machine learning system for predicting both gender and age from human gait patterns. Gait analysis, a non-intrusive method of identifying subtle nuances in human mov...
Refactoring is a technique used in software development to improve the quality of code without changing its functionality. One metric often used to measure code quality is code coverage. This study examines refactoring techniques that can maximize the code coverage metric. Through the study, empirical evidence from various literature sources is identified, evaluated, and summarized. The results provide guidance on effective refactoring techniques for improving code coverage, along with other positive impacts on software development. Ten refactoring techniques are identified that can be used to improve code coverage metrics in software testing.
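The abstract does not enumerate the ten techniques, but as one hypothetical illustration of how refactoring can raise coverage, the Extract Method sketch below pulls branching logic out of an I/O-bound function so unit tests can reach every branch directly:

```python
# Hypothetical Extract Method example: validation logic is pulled out
# of an I/O-heavy function so unit tests can cover each branch
# directly, raising coverage without changing behavior.

# Before: validation is buried inside I/O, so tests rarely reach it.
def save_user_before(name: str, age: int, db) -> None:
    if not name or age < 0 or age > 150:
        raise ValueError("invalid user")
    db.insert({"name": name, "age": age})

# After: the extracted function is pure and trivially testable.
def validate_user(name: str, age: int) -> bool:
    return bool(name) and 0 <= age <= 150

def save_user(name: str, age: int, db) -> None:
    if not validate_user(name, age):
        raise ValueError("invalid user")
    db.insert({"name": name, "age": age})

# Each branch of validate_user is now covered by cheap unit tests,
# with no database needed.
assert validate_user("Ana", 30)
assert not validate_user("", 30)
assert not validate_user("Ana", -1)
```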