Fine-grained image classification supports smart retail by effectively recognizing products with high similarity. However, generic classification methods perform poorly in identifying...
Air pollution is a pressing issue in cities, and managing air quality poses a challenge for urban designers and decision-makers. This study proposes a Digital Twin (DT) Smart City integrated with Mixed Reality technol...
Computer vision has been used in many areas such as medicine, transportation, the military, and geography. The fast development of sensor devices inside cameras and satellites provides not only red-green-blue (RGB) images b...
The measurement of information security risk in a public sector organization is of utmost importance. This measurement serves the purpose of taking appropriate actions in response to potential risks or damaging incidents. The objective of this study is to develop a straightforward yet smart decision model capable of evaluating risks, particularly within the public sector organization. The decision support modelling (DSM) concept is employed as a framework to construct a computer model that supports decision-making in a specific case. The study comprises five stages, which are integral parts of the DSM process. These stages include analyzing the case, examining relevant documents, designing the model, constructing the model, and evaluating the finalized model. The object-oriented method serves as a fundamental approach to model design. Additionally, the fuzzy logic method, an intelligent computational technique, plays a central role in the development of this decision model. The proposed model demonstrates an average error value of 0.05 when compared to the actual risk measurement conducted. Furthermore, it reveals average risk values of 0.59 and 0.45 for the pre- and post-remediation scenarios, respectively.
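The fuzzy-logic core of such a decision model can be illustrated with a minimal sketch. The membership functions, rule base, and centroid values below are illustrative assumptions, not the paper's calibrated model; it only shows the general Mamdani-style pattern of mapping likelihood and impact to a crisp risk score in [0, 1].

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    return 1.0 if c == b else (c - x) / (c - b)

def fuzzy_risk(likelihood, impact):
    """Combine likelihood and impact (both in [0, 1]) into a crisp risk score.

    Assumed rule base: the output risk term is the 'higher' of the two input
    terms; defuzzification by weighted average of term centers.
    """
    terms = {"low": (0.0, 0.0, 0.5), "med": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.0)}
    centers = {"low": 0.2, "med": 0.5, "high": 0.8}
    order = ["low", "med", "high"]
    num = den = 0.0
    for lt in order:
        for it in order:
            # Rule firing strength: min of the two antecedent memberships.
            w = min(tri(likelihood, *terms[lt]), tri(impact, *terms[it]))
            out = order[max(order.index(lt), order.index(it))]
            num += w * centers[out]
            den += w
    return num / den if den else 0.0
```

A pre-remediation scenario with high likelihood and impact yields a high score, while the post-remediation scenario with reduced inputs yields a lower one, mirroring the 0.59 versus 0.45 averages reported.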
Lactose intolerance is a type of digestive problem that may threaten the population because milk and dairy products contain nutrients that are essential for the human body. Genetic tests possess a great potential to de...
Detecting fake news in the digital era is challenging due to the proliferation of misinformation. One of the crucial issues in this domain is the inherent class imbalance, where genuine news articles significantly outnumber fake ones. This imbalance severely hampers the performance of machine and deep learning models in accurately identifying fake news. Consequently, there is a compelling need to address this problem effectively. In this study, we delve into fake news detection and tackle the critical issue of imbalanced data. We investigate the application of Easy Data Augmentation (EDA) techniques, including back-translation, random insertion, random deletion, and random swap, to mitigate the adverse effects of imbalanced data. This study focuses on employing these techniques in conjunction with a deep learning framework, specifically a Bidirectional Long Short-Term Memory (BiLSTM) architecture. The results of the EDA techniques are systematically compared to assess their effectiveness and their impact on model performance. This study reveals that various EDA techniques, when coupled with a BiLSTM architecture, yield significant improvements in fake news detection. Among the experiments, Random Insertion emerges as the most promising technique, with an accuracy of 81.68%, a precision of 89.38%, and an F1-score of 87.77%. The study also highlights the potential of back-translation, which stands out with 87.16% recall.
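Three of the four augmentation operations named above can be sketched in a few lines of token-level Python. This is an illustrative sketch only: back-translation requires a machine-translation model and is omitted, and where canonical EDA inserts a *synonym* of a random word, this sketch reuses an existing token because no thesaurus is assumed.

```python
import random

def random_swap(tokens, n=1, rng=random):
    """Swap two randomly chosen token positions, n times."""
    tokens = tokens[:]
    for _ in range(n):
        i, j = rng.randrange(len(tokens)), rng.randrange(len(tokens))
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1, rng=random):
    """Drop each token independently with probability p."""
    kept = [t for t in tokens if rng.random() > p]
    return kept or [rng.choice(tokens)]  # never return an empty sentence

def random_insertion(tokens, n=1, rng=random):
    """Insert a copy of a random token at a random position, n times
    (assumption: EDA proper inserts a synonym instead)."""
    tokens = tokens[:]
    for _ in range(n):
        tokens.insert(rng.randrange(len(tokens) + 1), rng.choice(tokens))
    return tokens
```

Applying these operations only to minority-class (fake) articles is one common way to rebalance the training set before feeding it to the BiLSTM.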
Addressing multiple criteria and parameter issues in computer modelling presents a significant challenge. Several factors, including data types, parameter behaviours, and purposes, must be taken into account to enhance computer modelling capability, particularly in evaluation cases. Through a multi-criteria, multi-method approach, a decision model was effectively developed to assess the environmental sustainability level of a building. One method employed in the study is the curve method for shaping the membership functions used in the fuzzy logic component. This model demonstrates superior performance: it achieves an accuracy of 96%, surpassing the previous model, which employed a trapezoidal approach to describe fuzzy membership functions, by 1%.
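The contrast between the two membership-function forms can be sketched directly. The parameter values below are illustrative assumptions, not the paper's calibration; a Gaussian is used here as one common smooth "curve" form.

```python
import math

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises over a..b, flat over b..c, falls over c..d."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def gaussian(x, mean, sigma):
    """Smooth curve-form membership centered at `mean`."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))
```

The curve form changes membership gradually near the category boundaries, whereas the trapezoid changes linearly and has hard corners; this difference in boundary behaviour is one plausible source of the 1% accuracy gap reported above.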
Segmentation is manually performed by physicians, which takes considerable time and may be subject to inter-observer variability. Automating this task can increase efficiency and consistency. Existing studies on meningioma segmentation used data from limited study centers, indicating the need for research on multi-center data to assess generalizability. In this work, two semi-automated methods with bounding box priors, LiteMedSAM and BBU-Net, are evaluated on the brain tumor segmentation (BraTS) 2023 meningioma dataset collected from five study centers. Preprocessing included exclusion of small tumors, z-score normalization, and extraction of slices that contain tumors, generating 25,602 2D axial magnetic resonance imaging (MRI) scans. A fine-tuning strategy is adopted for LiteMedSAM, while BBU-Net is trained from scratch. The models are evaluated using five-fold cross-validation, with data split at the case level. Results show that while U-Net models can achieve performance close to LiteMedSAM, the foundation model has overall better performance, scoring above 90% on all evaluation metrics.
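Two of the preprocessing steps named above, z-score normalization and extraction of tumor-bearing axial slices, can be sketched as follows. Array shapes, the axial axis (axis 0), and the nonzero-voxel brain mask are illustrative assumptions, not details from the paper.

```python
import numpy as np

def zscore(volume):
    """Z-score normalize an MRI volume over its nonzero (brain) voxels."""
    mask = volume > 0
    mu, sd = volume[mask].mean(), volume[mask].std()
    out = volume.astype(np.float32)
    out[mask] = (out[mask] - mu) / (sd + 1e-8)  # epsilon guards a flat volume
    return out

def tumor_slices(volume, seg):
    """Keep only axial slices (axis 0 assumed) whose segmentation is nonempty."""
    keep = [z for z in range(seg.shape[0]) if seg[z].any()]
    return volume[keep], seg[keep]
```

Running these per case before the case-level split keeps all slices of one patient in the same fold, which is what the cross-validation setup above requires.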
Containerization has become a popular approach in application development and deployment, offering benefits such as improved scalability, portability, and resource efficiency. Container-based applications, utilizing technologies like Docker and Kubernetes, have transformed the packaging, deployment, and management of software from the desktop environment to the cloud platform. In this context, the software metrics approach plays an important role in evaluating the characteristics and performance of container-based applications, ensuring that developers and operators are on the same page. This article explores the importance of software metrics in optimizing the software lifecycle of container-based applications, addressing the unique challenges they present, and highlighting the potential benefits of leveraging metrics to improve performance and efficiency. Our findings show that performance metrics and availability metrics are the metrics most frequently measured by application owners. Drawing on relevant studies and industry practices, this study aims to provide insights and recommendations to effectively measure and optimize container-based software systems.
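The two metric families found to be measured most often can each be reduced to a one-line computation. The sketch below is illustrative only; the nearest-rank percentile method and the uptime-ratio definition of availability are assumptions, not definitions taken from the article.

```python
import math

def p95_latency(samples_ms):
    """95th-percentile request latency (nearest-rank method): a performance metric."""
    s = sorted(samples_ms)
    return s[math.ceil(0.95 * len(s)) - 1]

def availability(up_seconds, total_seconds):
    """Availability as the fraction of time the service was up,
    e.g. 0.999 corresponds to 'three nines'."""
    return up_seconds / total_seconds
```

In a containerized setting the raw inputs for both would typically come from the platform's monitoring stack rather than the application itself.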
A wide variety of disciplines contribute to bioinformatics research, including computer science, biology, chemistry, mathematics, and physics. This study determines the number of research articles published on arXiv classified as bioinformatics topics and the most frequently used bioinformatics terms using topic modeling, Latent Dirichlet Allocation (LDA). An algorithm based on LDA is used to discover topics hidden within large collections of documents through statistical analysis. Our research examined 226,453 articles on arXiv between January 2023 and January 2024. As a result, more than 10,521 articles were categorized into bioinformatics topics. Most commonly, 6,352 documents are in the "Mathematical Physics" category. The second most popular category is "Computer Science," with 2,950 documents. The terms 'RNA,' 'sequence,' 'tree,' and 'homology' are among the most commonly used terms in bioinformatics. The study of RNA plays a vital role in molecular biology; thus, the study of RNA is prevalent in bioinformatics. Sequence data refer to the order in which nucleotides or amino acids occur in a DNA molecule or a protein.