This study addresses the critical need for predictive models that balance high accuracy with computational efficiency, a requirement essential for timely and effective public health responses to COVID-19 outbreaks. We evaluate the performance of two advanced machine learning models, LightGBM and XGBoost, in predicting COVID-19 case trends across five Saudi cities. Using time-series data, we analyze key metrics such as RMSE, MAE, MAPE, R², and computation time to assess each model's suitability for real-time applications. The findings highlight XGBoost's superior performance in computational speed, being up to three times faster than LightGBM in certain cases, making it ideal for rapid decision-making scenarios. Meanwhile, LightGBM demonstrates competitive accuracy and exceptional scalability, positioning it as a reliable tool for managing large datasets. These insights underscore the importance of time complexity as a critical factor in predictive modeling, enabling public health organizations to allocate resources efficiently, implement containment strategies promptly, and develop agile responses to future pandemics.
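For context, the comparison this abstract describes can be reproduced in outline as follows. This is a minimal sketch, assuming the lightgbm, xgboost, and scikit-learn packages and synthetic stand-in data rather than the authors' Saudi case-count time series.

```python
# Minimal sketch of the accuracy-vs-speed comparison described above.
# Synthetic data stands in for the paper's COVID-19 time series.
import time
import numpy as np
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             mean_absolute_percentage_error, r2_score)

rng = np.random.default_rng(0)
X = rng.random((500, 7))                      # toy lagged-case features
y = X @ rng.random(7) + rng.normal(0, 0.1, 500)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

for name, model in [("LightGBM", LGBMRegressor()), ("XGBoost", XGBRegressor())]:
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    elapsed = time.perf_counter() - t0        # fit + predict wall time
    print(name,
          "RMSE=%.3f" % mean_squared_error(y_te, pred) ** 0.5,
          "MAE=%.3f" % mean_absolute_error(y_te, pred),
          "MAPE=%.3f" % mean_absolute_percentage_error(y_te, pred),
          "R2=%.3f" % r2_score(y_te, pred),
          "time=%.3fs" % elapsed)
```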
ISBN:
(Print) 9781479974863
The computational study of complex systems increasingly requires model integration. The drivers include a growing interest in leveraging accepted legacy models, an intensifying pressure to reduce development costs by reusing models, and expanding user requirements that are best met by combining different modeling methods. There have been many published successes including supporting theory, conceptual frameworks, software tools, and case studies. Nonetheless, on an empirical basis, the published work suggests that correctly specifying model integration strategies remains challenging. This naturally raises a question that has not yet been answered in the literature, namely 'what is the computational difficulty of model integration?' This paper's contribution is to address this question with a time and space complexity analysis that concludes that deep model integration with proven correctness is both NP-complete and PSPACE-complete and that reducing this complexity requires sacrificing correctness proofs in favor of guidance from both subject matter experts and modeling specialists.
ISBN:
(Digital) 9783031211751
ISBN:
(Print) 9783031211744; 9783031211751
Sorting Permutations by Transpositions is a famous problem in the Computational Biology field. This problem is NP-hard, and the best approximation algorithm, proposed by Elias and Hartman in 2006, has an approximation factor of 1.375. Since then, several researchers have proposed modifications to this algorithm to reduce its time complexity. More recently, researchers showed that the algorithm proposed by Elias and Hartman might need one more operation above the approximation ratio and presented a new 1.375-approximation algorithm, using an algebraic approach, that corrects this issue. This algorithm runs in O(n^6) time. In this paper, we present an efficient way to fix the Elias and Hartman algorithm that runs in O(n^5). By comparing the three approximation algorithms on all permutations of size n <= 12, we also show that our algorithm finds the exact distance in more instances than the previous two algorithms.
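For readers unfamiliar with the operation involved, the following sketch shows what a transposition is and computes exact distances by brute force for tiny permutations. It is illustrative only and is not the 1.375-approximation algorithm discussed above.

```python
# Illustrative only: a "transposition" here exchanges two adjacent blocks.
# Brute-force BFS gives exact distances for tiny n (the problem is NP-hard).
from collections import deque

def apply_transposition(p, i, j, k):
    # Exchange the adjacent blocks p[i:j] and p[j:k].
    return p[:i] + p[j:k] + p[i:j] + p[k:]

def transposition_distance(p):
    # Breadth-first search over permutations reachable from p.
    start, target = tuple(p), tuple(sorted(p))
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        cur, d = frontier.popleft()
        if cur == target:
            return d
        n = len(cur)
        for i in range(n):
            for j in range(i + 1, n):
                for k in range(j + 1, n + 1):
                    nxt = apply_transposition(cur, i, j, k)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, d + 1))

print(transposition_distance((3, 1, 2)))  # 1: swap blocks [3] and [1, 2]
```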
ISBN:
(Print) 9798350320886
In the package design problem, we are given a set of queries (referred to as a query log), each being a bit string indicating the favourite activities or items of a customer, and are required to design a package of activities (or items) that satisfies as many customers as possible. It is a typical data mining problem. In this paper, we address this issue and propose an efficient algorithm for solving the problem based on a new tree search strategy, the so-called priority-first search, in which the tree search is controlled by a priority queue rather than a stack or a FIFO queue. Extensive experiments have been conducted, which show that our method for this problem is promising.
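The abstract does not give the algorithm's details, but the priority-first strategy it names can be sketched as follows. The satisfaction test, the priority function, and the package-size budget are all illustrative assumptions, not the paper's definitions.

```python
# Priority-first search sketch: the search tree is explored via a priority
# queue rather than a stack (DFS) or FIFO queue (BFS).
import heapq

def satisfied(package, query):
    # Assumed interpretation: a query (bit string) is satisfied when the
    # package covers every activity the query asks for.
    return (query & ~package) == 0

def best_package(queries, n_items, max_size):
    # Nodes are partial packages built item by item; a node's priority is
    # how many queries its package already satisfies (a stand-in bound).
    def score(pkg):
        return sum(satisfied(pkg, q) for q in queries)
    heap = [(-score(0), 0, 0, 0)]  # (-priority, tiebreak, depth, package)
    tie, best, best_score = 1, 0, score(0)
    while heap:
        _, _, depth, pkg = heapq.heappop(heap)
        s = score(pkg)
        if s > best_score:
            best, best_score = pkg, s
        if depth == n_items:
            continue
        for bit in (0, 1):  # exclude or include item number `depth`
            child = pkg | (bit << depth)
            if bin(child).count("1") > max_size:
                continue    # respect the package size budget
            heapq.heappush(heap, (-score(child), tie, depth + 1, child))
            tie += 1
    return best, best_score

# Three customer queries over 4 items, with a budget of 2 items.
print(best_package([0b1010, 0b0010, 0b1000], n_items=4, max_size=2))
```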
ISBN:
(Print) 9781728185514
We have witnessed the rapid development of learned image compression (LIC). The latest LIC models have outperformed almost all traditional image compression standards in terms of rate-distortion (RD) performance. However, the time complexity of LIC models remains underexplored, limiting their practical application in industry. Even with GPU acceleration, LIC models still struggle with long coding times, especially on the decoder side. In this paper, we analyze and test a few prevailing and representative LIC models and compare their complexity with traditional codecs, including H.265/HEVC intra and H.266/VVC intra. We provide a comprehensive analysis of every module in the LIC models and investigate how bitrate changes affect coding time. We observe that the time complexity bottleneck mainly lies in entropy coding and context modelling. Although this paper pays more attention to experimental statistics, our analysis reveals some insights for further acceleration of LIC models, such as model modification for parallel computing, model pruning, and a more parallel context model.
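A per-module timing harness of the kind such an analysis relies on might look as follows. The stage names echo typical LIC modules (analysis transform, context model, entropy coding, synthesis transform); the callables are placeholders, not a real codec.

```python
# Sketch of a per-module timing harness; stage names are illustrative.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(stage):
    # Accumulate wall-clock time spent inside each named stage.
    t0 = time.perf_counter()
    yield
    timings[stage] = timings.get(stage, 0.0) + time.perf_counter() - t0

def profile_codec(image, encoder_modules, decoder_modules):
    x = image
    for name, fn in encoder_modules:
        with timed("enc/" + name):
            x = fn(x)
    for name, fn in decoder_modules:
        with timed("dec/" + name):
            x = fn(x)
    return x

# Toy stand-ins so the harness runs end to end.
identity = lambda x: x
profile_codec([0.0] * 64,
              [("analysis", identity), ("context_model", identity),
               ("entropy_encode", identity)],
              [("entropy_decode", identity), ("synthesis", identity)])
for stage, t in timings.items():
    print(f"{stage}: {t * 1e3:.3f} ms")
```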
Purpose: The purpose of this paper is to make a theoretical, comprehensive efficiency evaluation of a nonlinear analysis method based on the Woodbury formula, considering both the efficiency of the solution of linear equations in each incremental step and the selected iterative algorithms. Design/methodology/approach: First, this study employs time complexity theory to quantitatively compare the efficiency of the Woodbury formula and the LDL^T factorization method, which is commonly used to solve linear equations. Moreover, the performance of iterative algorithms also significantly affects the efficiency of the analysis. Thus, the three-point method, with a convergence order of eight, is employed to solve the equilibrium equations of the nonlinear analysis method based on the Woodbury formula, aiming to improve on the iterative performance of the Newton-Raphson (N-R) method. Findings: The results show that the asymptotic time complexity of the Woodbury formula is much lower than that of the LDL^T factorization method when the number of inelastic degrees of freedom (IDOFs) is much smaller than the number of DOFs, indicating that the Woodbury formula is more efficient for local nonlinear problems. Moreover, the time complexity comparison of the N-R method and the three-point method indicates that the three-point method is more efficient for local nonlinear problems with large-scale structures or a larger ratio of the IDOF count to the DOF count. Originality/value: This study theoretically evaluates the efficiency of a nonlinear analysis method based on the Woodbury formula and quantitatively shows the conditions under which the compared methods apply. The comparison results provide a theoretical basis for selecting algorithms for different nonlinear problems.
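The identity at the heart of the method can be demonstrated numerically. The following sketch (with arbitrary dimensions and random matrices, not the paper's structural model) shows how a rank-k update A + UCV can be solved by factorizing only a k-by-k system once a solve with A is available.

```python
# Woodbury identity: (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1
# For k << n this avoids refactorizing the full n-by-n updated system.
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 5                       # n DOFs, k "inelastic" DOFs (k << n)
A = np.diag(rng.random(n) + 1.0)    # well-conditioned base matrix
U, V = rng.random((n, k)), rng.random((k, n))
C = np.eye(k)
b = rng.random(n)

# Direct solve of the updated system (refactorizes the full matrix).
x_direct = np.linalg.solve(A + U @ C @ V, b)

# Woodbury solve: reuses solves with A; only a k-by-k system is new.
Ainv_b = np.linalg.solve(A, b)
Ainv_U = np.linalg.solve(A, U)
small = np.linalg.inv(C) + V @ Ainv_U          # k x k capacitance matrix
x_wood = Ainv_b - Ainv_U @ np.linalg.solve(small, V @ Ainv_b)

print(np.allclose(x_direct, x_wood))  # True
```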
ISBN:
(Print) 9781728138855
Sorting remains a quintessential problem in computer science; hence, considerable research has focused on optimizing runtime efficiency when sorting a collection of elements. Most sorting algorithms are comparison-based. Bucket Sort and Radix Sort are non-comparison-based sorting algorithms that can sort objects in linear time. In both cases, the corresponding array indices represent a hash for the object. Nevertheless, Radix Sort still requires an auxiliary array. In this paper, replacing Radix Sort's auxiliary array with a hash table is proposed. Using a hash table in Radix Sort should avoid the index calculations for the array and be better suited to handling objects. As with an array-based Radix Sort, this hash-based approach should maintain linearity, thereby sorting objects more efficiently. After implementing this novel sorting algorithm, we find that runtime grows linearly, not exponentially, as the number of elements increases. Moreover, as the number of digits increases, the sorting runtime still grows linearly. Thus, a hash-based Radix Sort is feasible and overcomes many issues associated with current sorting algorithms.
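One plausible reading of the proposal is an LSD radix sort whose counting array is replaced by a dictionary of buckets, as sketched below. This is an interpretation, not the authors' code.

```python
# Hash-based LSD radix sort sketch: per digit pass, elements are distributed
# into a dict of buckets keyed by digit value instead of a counting array.
def hash_radix_sort(items, key=lambda x: x):
    if not items:
        return []
    max_key = max(key(x) for x in items)
    exp, out = 1, list(items)
    while max_key // exp > 0:
        buckets = {}                          # digit -> stable list of items
        for x in out:
            d = (key(x) // exp) % 10
            buckets.setdefault(d, []).append(x)
        # Reassemble in digit order; missing digits simply have no bucket.
        out = [x for d in range(10) for x in buckets.get(d, [])]
        exp *= 10
    return out

print(hash_radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```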
ISBN:
(Print) 9783319942056; 9783319942049
We present a framework in Isabelle for verifying the asymptotic time complexity of imperative programs. We build upon an extension of Imperative HOL and its separation logic to include running time. Our framework is able to handle advanced techniques for time complexity analysis, such as the use of the Akra-Bazzi theorem and amortized analysis. Various automation is built and incorporated into the auto2 prover to reason about separation logic with time credits and to derive the asymptotic behaviour of functions. As case studies, we verify the asymptotic time complexity (in addition to functional correctness) of imperative algorithms and data structures such as median-of-medians selection, Karatsuba's algorithm, and splay trees.
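As a point of reference, here is a plain-Python rendering of one of the verified case studies, median-of-medians selection, whose O(n) worst-case bound is the kind of claim the framework proves. It is not the Isabelle formalization itself.

```python
# Median-of-medians selection: k-th smallest element in worst-case O(n).
def select(xs, k):
    if len(xs) <= 5:
        return sorted(xs)[k]
    # Median of each group of five, then their median as a good pivot:
    # it guarantees a constant-fraction split, hence the linear bound.
    medians = [sorted(xs[i:i + 5])[len(xs[i:i + 5]) // 2]
               for i in range(0, len(xs), 5)]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return select([x for x in xs if x > pivot], k - len(lo) - len(eq))

print(select([9, 1, 8, 2, 7, 3, 6, 4, 5, 0], 4))  # 4 (the 5th smallest)
```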
ISBN:
(Print) 9781509041527
Exemplar learning is a kind of inductive learning and an important component of machine learning research. However, the computational explosion it causes is a major obstacle in practice. Identifying the key features of a concept can address this problem well. Time complexity analysis shows that, by using the Key Feature Variable Based Learning Model for exemplar learning, the problem can be transformed into one that is independent of the number of instances, which reduces the size of the search space. This model is of practical significance for improving the efficiency of exemplar learning and the stability of the system, without increasing risk.
Two-dimensional curve offsets have a wide application area ranging from manufacturing to medical imaging. To that end, this paper concentrates on two novel techniques to produce planar curve offsets. Both methods, which are based on mathematical morphology, employ the concept that the boundaries formed by a circular structuring element whose center moves across the points on a base curve comprise the entire offsets of the progenitor. The first technique titled IMOBS was introduced in our former paper and was shown to have superior properties in terms of its high accuracy, low computational complexity, and its ability to handle complex curves if compared to the techniques available in the literature. Consequently, an all-purpose algorithm titled AMOBS is introduced to enhance further the performance of the former technique by making good use of gradient information to find globally the most suitable candidate points in the boundary data set via grid search techniques. Thus, the new paradigm is demonstrated to overcome some of the problems (like orphan curve offsets) encountered in extreme cases. Both algorithms, which have similar attributes in terms of run-time complexity and memory cost, are comparatively tested via two experimental cases where most CAD/CAM packages fail to yield acceptable results.
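The morphological idea common to both methods can be sketched as follows: sweep a circular structuring element of radius r along the base curve and collect its boundary points, whose envelope contains the two-sided offset. This is illustrative only; IMOBS and AMOBS add the boundary extraction and gradient-guided candidate selection described above.

```python
# Sweep a circular structuring element along a sampled base curve and
# collect the circle boundaries; their envelope contains the offsets.
import numpy as np

def circle_sweep(curve, r, samples=64):
    # curve: (n, 2) array of points on the progenitor curve.
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    ring = r * np.column_stack([np.cos(theta), np.sin(theta)])
    # Union of circle boundaries centered at every curve point.
    return (curve[:, None, :] + ring[None, :, :]).reshape(-1, 2)

# Example: sweep along a sine arc.
t = np.linspace(0.0, np.pi, 200)
base = np.column_stack([t, np.sin(t)])
cloud = circle_sweep(base, r=0.2)
print(cloud.shape)  # (12800, 2) candidate boundary points
```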