Several studies suggest that sleep quality is associated with physical activity. Moreover, deep sleep time can be used to gauge an individual's sleep quality. In this work, we aim to find the association between physical activity and deep sleep time by modeling time-series data, namely heart rate and step counts captured from a commercial wearable device. Our previous study demonstrated that deep learning-based time-series modeling is well suited to this problem, since the temporal patterns in the two physical parameters must be captured to obtain more accurate results. We first preprocess the series data to a time-step size of 10 minutes. To improve on our previous effort, we compare four variants of Long Short-Term Memory (LSTM)-based models, ranging from single-input to dual-input models. Our results show that the simple stacked LSTM performs best on our data, because the remaining models overfit due to their larger number of trainable parameters.
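The 10-minute preprocessing step described above can be illustrated with a minimal sketch (this is not the authors' actual pipeline; the function name and the averaging choice are assumptions): raw wearable samples are floored to their 10-minute window and averaged per window.

```python
from datetime import datetime

def resample_10min(samples):
    """Average (timestamp, value) samples into 10-minute bins.

    samples: list of (datetime, float) pairs.
    Returns a sorted list of (bin_start, mean_value) tuples.
    """
    bins = {}
    for ts, value in samples:
        # Floor the timestamp to the start of its 10-minute window.
        bin_start = ts.replace(minute=(ts.minute // 10) * 10,
                               second=0, microsecond=0)
        bins.setdefault(bin_start, []).append(value)
    return [(b, sum(v) / len(v)) for b, v in sorted(bins.items())]
```

The binned series for heart rate and step counts would then be aligned on the shared bin timestamps before being fed to the LSTM inputs.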
Applications designed using Microservices Architecture (MSA) offer the desirable trait of good maintainability. To achieve it, services must be suitable and adhere to prescribed design rules, and multiple aspects must be taken into account during service design. The objective of this study is to examine the factors that affect maintainability in service design, ultimately yielding applications with strong maintainability. A Systematic Literature Review (SLR) is used to identify these factors and strategies for improving them, by examining pertinent publications on the subject. After examining 45 publications, the study identified 8 factors and 14 solutions that can improve the highlighted parameters during the service design process. The outcomes of this SLR are expected to give application developers valuable insights, enabling them to produce service designs with commendable maintainability for the developed applications.
ISBN (electronic): 9798331517601
ISBN (print): 9798331517618
Research into Hepatitis C disease prediction is advancing rapidly, especially using machine learning models able to make predictions quickly and accurately. In this study, several classification methods were compared, with and without Neighborhood Component Analysis (NCA) for feature reduction. Specifically, we compared the performance of K-Nearest Neighbors (KNN) and Support Vector Machine (SVM) classifiers when the data was passed through NCA against their performance without it. The performance metrics used in the study were accuracy, sensitivity, specificity, Matthews Correlation Coefficient (MCC), and the Kappa value. The highest accuracy (99.36%), sensitivity (91.94%), specificity (99.67%), MCC (0.895), and best Kappa value (0.889) were obtained with the KNN-NCA prediction method.
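The metrics reported above are all derived from confusion-matrix counts. As an illustrative sketch (not the authors' evaluation code; function names are assumptions), for binary classification with true positives/negatives (tp, tn) and false positives/negatives (fp, fn):

```python
import math

def sensitivity(tp, fn):
    """Recall on the positive class: tp / (tp + fn)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Recall on the negative class: tn / (tn + fp)."""
    return tn / (tn + fp)

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient; ranges from -1 to +1,
    with +1 meaning perfect prediction."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom
```

MCC is often preferred over raw accuracy for imbalanced medical datasets because it only approaches 1 when the classifier does well on both classes.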
Typhoid fever is an endemic disease that burdens Indonesia and can cause a potentially fatal multisystem infection. The bacterium Salmonella typhi is responsible for typhoid fever. Poor sanitation, crowding, and slums a...
Rapid progress in artificial intelligence has created opportunities for extensive applications in software development. One area receiving attention is the evaluation of code quality using machine learning techniques. In this investigation, we examined the application of machine learning to predict the likelihood of defects in code, using NASA archival data as case studies. The machine learning models employ neural network algorithms. Our exploration involves partitioning the dataset into training and test data for performance evaluation. The findings indicate that the neural network technique with resampling yields high accuracy in predicting software defects. Our neural network is capable of identifying intricate patterns in the data and providing precise measurements of the size and intensity of defects. These results have significant implications for the software business, enabling developers to promptly identify possible vulnerabilities and take preventive measures before product release.
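Defect datasets such as the NASA archives are typically heavily imbalanced (far fewer defective modules than clean ones), which is why resampling matters here. A minimal sketch of one common approach, random oversampling of the minority class (the abstract does not state which resampling method was used; this function and its name are assumptions):

```python
import random

def oversample(features, labels, seed=0):
    """Random oversampling: duplicate minority-class rows at random
    until every class has as many rows as the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(features, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(rows) for rows in by_class.values())
    out_x, out_y = [], []
    for y, rows in by_class.items():
        resampled = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        for x in resampled:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y
```

Balancing is applied to the training split only; the test split keeps its natural class distribution so the evaluation stays honest.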
The point of Agile methodology is continuous improvement: delivering a small feature quickly without sacrificing its quality. Every sprint must be better than the previous one, where "better" can mean fewer bugs or faster development and testing. We present how we reduced production bugs by customizing our sprint iteration. Bugs are unavoidable; no software engineer can build software without them. However, we can reduce bugs in production if we find them in lower environments as early as possible. The case study in this paper was taken from a technology company in Indonesia, and the activity was carried out by the Quality Assurance (QA) team. We show that shift-left testing can help reduce bugs in production. Testing is part of Agile methodology, and the main idea of shift-left testing is to move testing earlier, so that it can be done by any team member, not only QA. We included shift-left testing in our Agile methodology for one year, in 2022, and compared the results with the previous year.
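One simple way to quantify a year-over-year shift-left comparison like the one described is the defect escape rate: the fraction of all discovered bugs that reached production. The metric name and the example counts below are illustrative assumptions, not the paper's data:

```python
def defect_escape_rate(pre_release_bugs, production_bugs):
    """Fraction of all found bugs that escaped to production.
    Lower is better; shift-left aims to push this down by
    catching more bugs in lower environments."""
    total = pre_release_bugs + production_bugs
    return production_bugs / total if total else 0.0
```

Comparing this rate for 2021 versus 2022 normalizes for the total bug volume, so a busy release year does not look artificially worse.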
The membership function (MF) is central to the fuzzy logic process: it depicts the core of the model. It can be adopted from expert judgment or derived from the configuration of the data's behavior. This study is experimental research that analyzes time-series data behavior to develop the MF, using the Python programming language. The resulting MF is called a floating MF (FMF). The FMF is used to realize the fuzzy logic concept in an evaluation model. The constructed model simply measures student performance. Using ten series of students' grade point average (GPA) and annual absence (APS) values, the model can simulate students' moving performance per semester. When compared to a conventional fuzzy logic-based model, the proposed model has an average similarity of 85%.
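The idea of an MF that "floats" with the data can be sketched minimally as a triangular MF whose corners are re-derived from the current data window (the abstract does not specify the MF shape; the triangular form and the min/mean/max parameterization are assumptions):

```python
def floating_triangular_mf(series):
    """Build a triangular membership function whose support is the
    observed [min, max] and whose peak sits at the window mean, so
    the MF 'floats' as new data windows arrive."""
    a, c = min(series), max(series)
    b = sum(series) / len(series)  # peak of the triangle

    def mf(x):
        if x <= a or x >= c:
            return 0.0
        if x == b:
            return 1.0
        if x < b:
            return (x - a) / (b - a)  # rising edge
        return (c - x) / (c - b)      # falling edge

    return mf
```

Recomputing the MF per semester window is what would let membership degrees track drift in GPA or APS values without manual retuning.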
This study discusses the development of smart precision farming systems using big data and cloud-based intelligent decision support systems. Big data plays an important role in collecting, storing, and analyzing large amounts of agricultural data from various sources, including weather stations, soil sensors, satellite imagery, crop yield records, and pest and disease reports. The study highlights the differences between smart farming and precision farming, and describes key techniques and the system architecture, including the data collection, processing, analysis, and decision support components. Utilizing a cloud platform enables scalability and optimized performance, which lowers costs and makes the system safer and easier to manage. The integration of big data and Alibaba Cloud computing in smart precision farming can improve farming productivity by providing timely information and recommendations to farmers for better decision-making. Finally, the system delivers smart precision farming with cost-effective real-time monitoring and predictive analytics to increase agricultural production and sustainability.
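At its simplest, the decision support component turns sensor readings into a recommendation. A toy rule-based sketch (the thresholds, inputs, and function name are hypothetical, not the system's actual logic):

```python
def irrigation_advice(soil_moisture_pct, rain_forecast_mm,
                      dry_threshold=30.0, rain_threshold=5.0):
    """Toy decision rule: recommend irrigating only when the soil
    is below the dryness threshold AND no meaningful rain is
    forecast, to avoid wasting water ahead of a shower."""
    if soil_moisture_pct < dry_threshold and rain_forecast_mm < rain_threshold:
        return "irrigate"
    return "hold"
```

A production system would replace these fixed thresholds with models trained on the historical sensor and yield data described above.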
In real life, many activities are performed sequentially and must be carried out in order, such as the assembly steps in a manufacturing production process. Activities cannot be removed from or added to such a series without compromising its main goal. There are also time-series events that occur naturally, such as rainy and hot conditions in a certain area. Classifying time-series activities is very important for detecting possible anomalies. The significant development of machine learning models in recent years has made time-series classification an increasingly active research topic, and several previous studies have reported that deep learning models are more accurate at classifying time-series data. In this paper, we compare Convolutional Neural Network (CNN) and Transformer deep learning models on time-series classification. Experimental results on the same public datasets show that the CNN model is more accurate than the Transformer model: measured using a confusion matrix, the CNN achieves 92% accuracy and the Transformer 80%.
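The accuracy figures above come from a confusion matrix, where accuracy is the trace (correctly classified counts on the diagonal) divided by the total count. A minimal sketch with made-up counts (not the paper's actual matrices):

```python
def accuracy_from_confusion(matrix):
    """Accuracy for an n-class confusion matrix laid out as
    rows = true class, columns = predicted class."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total
```

The same matrix also yields per-class precision and recall, which is useful when one class (e.g. the anomalous condition) matters more than overall accuracy.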
Refactoring is a technique used in software development to improve code quality without changing its functionality. One metric often used to measure code quality is code coverage. This study examines refactoring techniques that can maximize the code coverage metric. Through the study, empirical evidence from various literature sources is identified, evaluated, and summarized. The results provide guidance on effective refactoring techniques for improving code coverage, along with other positive impacts on software development. Ten refactoring techniques were found that can be used to improve code coverage metrics in software testing.
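One classic example of how refactoring raises coverage is Extract Method: splitting a long function into small, independently testable pieces makes each branch reachable from a unit test. The sketch below is illustrative only (the abstract does not list its ten techniques; these function names are hypothetical):

```python
# After "Extract Method", each extracted piece can be unit-tested
# (and covered) in isolation, instead of only through the long
# original function's happy path.

def is_valid_order(order):
    """Extracted validation step: trivially unit-testable."""
    return bool(order.get("items")) and order.get("total", 0) > 0

def order_discount(order, rate=0.1):
    """Extracted computation step: covered by its own tests."""
    return order["total"] * rate

def process_order(order):
    """Thin coordinator left behind after the extraction."""
    if not is_valid_order(order):
        raise ValueError("invalid order")
    return order["total"] - order_discount(order)
```

Because the validation and discount logic now sit behind their own seams, edge cases (empty carts, zero totals) can be covered directly without constructing full end-to-end orders.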