Dual structure-based block decomposition methods can generally produce reasonably good results, but current algorithms suffer from reliability and efficiency issues. To enable them to handle complex models effectively, this paper proposes a progressive block decomposition algorithm. The algorithm first simplifies the input model by suppressing features and decomposes the simplified model into a block structure. Then, to recover the suppressed features, it generates a local model covering each suppressed feature, decomposes the local model into a block structure under constraints, and replaces the corresponding block set in the global block structure with the local one. In this step, the block structures are refined to obtain a consistency-ensured block structure. Finally, to balance the total number of blocks against block quality, the algorithm optimizes the consistency-ensured block structure by simplifying it. Experimental results demonstrate the effectiveness of the proposed method.
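The four-stage pipeline described above can be sketched on a toy representation. The sketch below is purely schematic and assumes string "models" in which lowercase characters stand in for small (suppressible) features and a "block" is a maximal run of equal characters; none of this corresponds to the paper's actual CAD data structures.

```python
from itertools import groupby

def runs(s):
    """Decompose a string 'model' into blocks: maximal runs of equal chars."""
    return ["".join(g) for _, g in groupby(s)]

def progressive_block_decomposition(model):
    # Step 1: simplify by suppressing small features
    # (toy rule: lowercase characters are the "small features").
    suppressed = [c for c in model if c.islower()]
    simplified = "".join(c for c in model if not c.islower())
    # Step 2: decompose the simplified model into a block structure.
    blocks = runs(simplified)
    # Step 3: recover each suppressed feature type as a locally
    # decomposed block appended to the structure.
    for c in sorted(set(suppressed)):
        blocks.append("".join(f for f in suppressed if f == c))
    # Step 4: optimize by merging blocks of the same feature type
    # (toy rule: case-insensitive first character), trading block
    # count against per-block "quality".
    merged = {}
    for b in blocks:
        merged.setdefault(b[0].upper(), []).append(b)
    return ["".join(bs) for bs in merged.values()]

# Example: 'b' is a small feature suppressed in step 1 and recovered
# in step 3; step 4 merges it back with the 'B' blocks.
print(progressive_block_decomposition("AABbbA"))
```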
ISBN:
(Print) 9781713872344
Many applications involve humans in the loop, where continuous and accurate human motion monitoring provides valuable information for safe and intuitive human-machine interaction. Portable devices such as inertial measurement units (IMUs) are suitable for monitoring human motion, while in practice only limited computational power is available locally. Human motion in task-space coordinates requires not only the human joint motion but also a nonlinear coordinate transformation that depends on parameters such as human limb length. In most applications, measuring these kinematic parameters for each individual requires undesirably high effort. Therefore, it is desirable to estimate both the human motion and the kinematic parameters from IMUs. In this work, we propose a novel computational framework for dual estimation in real time that exploits in-network computational resources. We adopt the concept of field Kalman filtering, where the dual estimation problem is decomposed into a fast state estimation process and a computationally expensive parameter estimation process. To further accelerate convergence, the parameter estimation is progressively computed on multiple networked computational nodes. The superiority of our proposed method is demonstrated by a simulation of a human arm, where the estimation accuracy is shown to converge faster than with conventional approaches. (c) 2023 The Authors.
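The fast-state/slow-parameter split can be illustrated with a minimal scalar sketch. Everything here is an assumption for illustration: a toy model y = a·x with known input dynamics, a random-walk Kalman filter for the state, and a recursive least-squares update for the parameter a run only every few steps; the field Kalman filter, the arm kinematics, and the networked parameter nodes of the actual framework are all abstracted away.

```python
def dual_estimate(us, ys, a0=1.0, q=1e-6, r=0.1, param_every=5):
    """Toy dual estimation: fast scalar Kalman update for the state x
    every step, slow recursive-least-squares update for the unknown
    measurement parameter a every `param_every` steps.
    Assumed model: x_k = x_{k-1} + u_k (known input), y_k = a * x_k + noise."""
    x, p = 0.0, 1e-6      # state mean and variance
    a, s = a0, 1.0        # parameter estimate and accumulated precision
    for k, (u, y) in enumerate(zip(us, ys), 1):
        x, p = x + u, p + q                  # predict with known input
        kg = p * a / (a * a * p + r)         # fast Kalman measurement update
        x += kg * (y - a * x)
        p *= 1.0 - kg * a
        if k % param_every == 0:             # slow parameter update (RLS)
            s += x * x
            a += (x / s) * (y - a * x)
    return x, a

# Synthetic data: true parameter a = 2, constant input 0.1, no noise.
us = [0.1] * 100
ys = [0.2 * (k + 1) for k in range(100)]
x_est, a_est = dual_estimate(us, ys)
```

Running the state update every sample while deferring the parameter update mirrors the decomposition in the abstract: the cheap loop keeps up with the measurements, and the expensive loop converges in the background.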
The multiple longest common subsequence (MLCS) problem, which is related to the measurement of sequence similarity, is one of the fundamental problems in many fields. As an NP-hard problem, finding a good approximate solution within a reasonable time is important for solving large-size problems in practice. In this paper, we present a new progressive algorithm, Pro-MLCS, based on the dominant point approach. Pro-MLCS can find an approximate solution quickly and then progressively generate better solutions until obtaining the optimal one. Pro-MLCS employs three new techniques: 1) a new heuristic function for prioritizing candidate points; 2) a novel d-index-tree data structure for efficient computation of dominant points; and 3) a new pruning method using an upper bound function and approximate solutions. Experimental results show that Pro-MLCS can obtain the first approximate solution almost instantly and needs only a very small fraction, e.g., 3 percent, of the entire running time to get the optimal solution. Compared to existing state-of-the-art algorithms, Pro-MLCS can find better solutions in much shorter time, one to two orders of magnitude faster. In addition, two parallel versions of Pro-MLCS are developed: DPro-MLCS for distributed memory architecture and DSDPro-MLCS for hierarchical distributed shared memory architecture. Both parallel algorithms can efficiently utilize parallel computing resources and achieve nearly linear speedups. They also have a desirable progressiveness property: finding better solutions in less time when given more hardware resources.
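To make the "quick approximation plus upper-bound pruning" idea concrete, here is a small sketch assuming string inputs. The count-based bound and the greedy heuristic below are generic illustrations of the two ingredients, not Pro-MLCS's actual heuristic function, d-index-tree, or bound.

```python
from collections import Counter

def mlcs_upper_bound(seqs):
    """A common subsequence can use each symbol at most as many times
    as its minimum count across all sequences; summing gives a bound."""
    counts = [Counter(s) for s in seqs]
    alphabet = set().union(*seqs)
    return sum(min(c[ch] for c in counts) for ch in alphabet)

def greedy_mlcs(seqs):
    """Cheap first approximation: repeatedly pick the symbol whose next
    occurrence advances the furthest-behind sequence the least."""
    pos = [0] * len(seqs)
    out = []
    while True:
        best = None
        for ch in sorted(set(seqs[0][pos[0]:])):
            nxt = [s.find(ch, p) for s, p in zip(seqs, pos)]
            if all(i >= 0 for i in nxt) and (best is None or max(nxt) < best[1]):
                best = (ch, max(nxt), nxt)
        if best is None:
            return "".join(out)
        out.append(best[0])
        pos = [i + 1 for i in best[2]]
```

A progressive solver would report the greedy answer immediately, then refine it while discarding any branch whose bound cannot beat the best solution found so far.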
ISBN:
(Print) 9781467397629
Today, stock investment is an important part of a country's economy; therefore, forecasting changes in market behavior has become significantly important to shareholders. In past years, classic methods were often used to forecast changes in market behavior, but in recent years intelligent methods have increasingly been applied to forecast stock market behavior. With intelligent methods, the accuracy and precision of forecasts have increased to some extent, but further research can still be carried out to improve forecast precision. The aim of the present research was to develop an appropriate model for increasing the accuracy and precision of forecasting the behavior of the Tehran Stock Exchange using a Hidden Markov Model. In this model, normal market conditions were assumed, and changes in market behavior were assumed to be independent of factors such as political issues, war, and natural disasters including flood, earthquake, and fire. The model was first trained with the Baum-Welch algorithm; after implementing the progressive forecast algorithm for three specific industries in MATLAB, it was found that, in comparison with conventional methods, forecasting accuracy and precision improved by 2 percent on average. Moreover, by increasing the number of days included in the interval considered for a given industry, the precision of the training algorithm increased and, consequently, so did the accuracy and precision of the forecasts.
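As an illustration of how a trained HMM yields a one-step forecast, here is a minimal forward-algorithm sketch. The two-state, two-symbol model and its parameters are toy assumptions for illustration, not the paper's trained Tehran Stock Exchange model.

```python
def forward_predict(pi, A, B, obs):
    """Run the HMM forward algorithm on observed symbols, then return
    the predictive distribution over the next symbol.
    pi: initial state probs, A: state transitions, B: emission probs."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
    z = sum(alpha) or 1.0
    belief = [a / z for a in alpha]                      # filtered state dist
    pred_state = [sum(belief[j] * A[j][i] for j in range(n))
                  for i in range(n)]                     # one-step state dist
    return [sum(pred_state[i] * B[i][k] for i in range(n))
            for k in range(len(B[0]))]                   # next-symbol dist

# Toy example: state 0 tends to emit symbol 0 ("down"), state 1 symbol 1 ("up").
pi = [1.0, 0.0]
A = [[0.9, 0.1], [0.1, 0.9]]
B = [[0.8, 0.2], [0.2, 0.8]]
p_next = forward_predict(pi, A, B, [0, 0])
```

In a progressive setting, the same recursion is simply extended with each new trading day's symbol, so the forecast is refreshed in O(n²) per day without retraining.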
ISBN:
(Print) 9780769538747
To address the problem that data can be collected from only a single data source at fixed times, with keywords mined only at a superficial level, and to make full use of the information returned by search engines to construct a social relationship network based on the semantic links of the searched subject, we conduct this research using the ROST Content Mining System, which supports page monitoring, content analysis, and social network mining on the pages returned by four search engines (Google, Baidu, Sougou and Youdao). In the mining process, we adopt a cross-page adaptive framework algorithm, which handles the instability of HTML framework code, to extract information from the acquired web pages. We then extract the co-occurrence set of high-frequency characteristic words to create a three-dimensional social network graph, adopting a progressive search algorithm in the meta-search engine to extend the attribute set of the keywords. Finally, we conducted three typical case studies: a comparison of coverage between Google and the meta-search engine, the dynamic changes of the real-time network based on the meta-search engine, and the progressive mining of effective content in the meta-search engine. All three showed the advantages of the proposed meta-search engine method: more data sources, stronger real-time dynamic monitoring capacity, and deeper progressive searching ability. We therefore propose this meta-search engine method for social network studies, aiming to improve the quality of content-mining-based social networks, observe hidden relationships at deeper levels, and widen the research scope of content mining.
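The high-frequency co-occurrence step can be sketched generically. The function below assumes whitespace-tokenized page texts and a plain frequency cutoff; it is a minimal stand-in, not the ROST system's actual extraction logic.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(pages, top_k=4):
    """Build a weighted term co-occurrence graph from retrieved pages:
    keep the top_k highest-document-frequency words, then count how many
    pages each pair of kept words appears in together."""
    freq = Counter(w for page in pages for w in set(page.split()))
    vocab = {w for w, _ in freq.most_common(top_k)}
    edges = Counter()
    for page in pages:
        words = sorted(vocab & set(page.split()))
        for a, b in combinations(words, 2):
            edges[(a, b)] += 1      # edge weight = co-occurrence count
    return edges

# Toy pages: "alice" and "bob" co-occur twice, so they form the only edge.
pages = ["alice bob carol", "alice bob", "alice dave"]
net = cooccurrence_network(pages, top_k=2)
```

Re-running this on freshly crawled result pages and diffing the edge weights gives the kind of real-time dynamic network monitoring the study describes.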
Dynamic programming algorithms guarantee to find the optimal alignment between two sequences. For more than a few sequences, exact algorithms become computationally impractical, and progressive algorithms iterating pairwise alignments are widely used. These heuristic methods have a serious drawback because pairwise algorithms do not differentiate insertions from deletions and end up penalizing single insertion events multiple times. Such an unrealistically high penalty for insertions typically results in overmatching of sequences and an underestimation of the number of insertion events. We describe a modification of the traditional alignment algorithm that can distinguish insertion from deletion and avoid repeated penalization of insertions and illustrate this method with a pair hidden Markov model that uses an evolutionary scoring function. In comparison with a traditional progressive alignment method, our algorithm infers a greater number of insertion events and creates gaps that are phylogenetically consistent but spatially less concentrated. Our results suggest that some insertion/deletion "hot spots" may actually be artifacts of traditional alignment algorithms.
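The symmetric treatment of insertions and deletions that the text criticizes is visible in the classic dynamic-programming recurrence itself, sketched here with illustrative scores (match +1, mismatch -1, gap -2); this is the traditional baseline, not the modified evolutionary-scoring algorithm the abstract proposes.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Classic pairwise global alignment score. A gap in `a` (insertion)
    and a gap in `b` (deletion) use the same penalty, so the algorithm
    cannot tell the two event types apart."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # match/mismatch
                           dp[i - 1][j] + gap,     # gap in b ("deletion")
                           dp[i][j - 1] + gap)     # gap in a ("insertion")
    return dp[n][m]
```

Because the two gap moves share one penalty, swapping the sequences leaves the score unchanged; the paper's modification breaks exactly this symmetry so that a single insertion event is not re-penalized at every progressive step.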
A Chebyshev-Vandermonde matrix V = [p_j(z_k)], j,k = 0,...,n, in C^((n+1)x(n+1)) is obtained by replacing the monomial entries of a Vandermonde matrix by Chebyshev polynomials p_j for an ellipse. The ellipse is also allowed to be a disk or an interval. We present a progressive scheme for allocating distinct nodes z_k on the boundary of the ellipse such that the Chebyshev-Vandermonde matrices obtained are reasonably well-conditioned. Fast progressive algorithms for the solution of the Chebyshev-Vandermonde systems are described. These algorithms are closely related to methods recently presented by Higham. We show that the node allocation is such that the solution computed by the progressive algorithms is fairly insensitive to perturbations in the right-hand side vector. Computed examples illustrate the numerical behavior of the schemes. Our analysis can also be used to bound the condition number of the polynomial interpolation operator defined by Newton's interpolation formula. This extends earlier results of Fischer and the first author.
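The sense in which Newton-form interpolation is "progressive" can be sketched directly: with divided differences, adding a node appends one coefficient without recomputing the earlier ones. This is a generic monomial-basis illustration of that property, not the paper's Chebyshev-Vandermonde solver or its node allocation.

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients of the Newton interpolation form.
    Extending xs/ys by one node only adds one new coefficient."""
    coef = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Evaluate the Newton form by Horner's rule:
    p(x) = c0 + (x-x0)(c1 + (x-x1)(c2 + ...))."""
    acc = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        acc = coef[k] + (x - xs[k]) * acc
    return acc

# Interpolate f(x) = x^2 + x + 1 through (0,1), (1,3), (2,7).
coef = newton_coeffs([0.0, 1.0, 2.0], [1.0, 3.0, 7.0])
```

The conditioning question the abstract studies is about how the choice of the nodes xs (there, z_k on an ellipse) controls the sensitivity of exactly this kind of progressive construction.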