Face recognition has become a popular topic due to its applications in security, surveillance and other domains. Current local methods such as the local binary pattern (LBP) or the local derivative pattern (LDP) perform better than holistic methods since they are more robust to local changes such as misalignment, expression or occlusion, but their high computational complexity limits their applications. While LBP is a good feature method, the scale-invariant feature transform (SIFT) is widely accepted as one of the best features for capturing edge or local shape information. However, SIFT-based schemes are sensitive to illumination variation. Thus, the authors propose an LBP edge-mapped descriptor that uses maxima of gradient magnitude points. It accurately delineates facial contours and has low computational complexity. Under variable lighting, experimental results show that the authors' method achieves a 16.5% higher recognition rate and runs 9.06 times faster than SIFT on FERET fc. Moreover, on the Extended Yale Face Database B, the authors' method outperformed SIFT-based approaches while saving about 70.9% of execution time. Under uncontrolled conditions, it achieves a 0.82% higher recognition rate than LDP histogram sequences on the Unconstrained Facial Images database.
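For concreteness, the basic 3x3 LBP operator that this family of descriptors builds on can be sketched in a few lines of Python. This is the textbook LBP code, not the authors' edge-mapped descriptor, and the clockwise neighbour ordering used here is just one common convention:

```python
def lbp_code(patch):
    """Basic 8-neighbour LBP code for the centre pixel of a 3x3 patch.

    Each neighbour whose intensity is >= the centre contributes a 1-bit;
    bits are read clockwise from the top-left, giving a value in [0, 255].
    """
    centre = patch[1][1]
    # neighbour coordinates, clockwise starting at the top-left corner
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(offsets):
        if patch[r][c] >= centre:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241
```

A full LBP descriptor would slide this operator over the image and histogram the resulting codes per region.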
Read trimming is a fundamental first step of the analysis of next-generation sequencing (NGS) data. Traditionally, it is performed heuristically, and algorithmic work in this area has been neglected. Here, we address this topic and formulate three optimization problems for block-based trimming (truncating the same low-quality positions at both ends for all reads and removing low-quality truncated reads). We find that all three problems are NP-hard, so we investigate their approximability; two of them are NP-hard to approximate. However, the non-random distribution of quality scores in NGS data sets makes it tempting to speculate that quality constraints for read positions are typically satisfied by fulfilling quality constraints for reads. Thus, we propose three relaxed problems and develop efficient polynomial-time algorithms for them, including heuristic speed-up techniques and parallelizations. We apply these optimized block trimming algorithms to 12 data sets from three species, four sequencers, and read lengths ranging from 36 to 101 bp, and find that (i) the omitted constraints are indeed almost always satisfied, (ii) the optimized read trimming algorithms typically yield a higher number of untrimmed bases than traditional heuristics, and (iii) these results generalize to alternative objective functions beyond counting the number of untrimmed bases.
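The block-based trimming objective can be made concrete with a tiny brute-force sketch: pick one kept slice [l, r), the same for every read, drop any truncated read that still contains a low-quality base, and maximise the total number of surviving bases. This naive enumeration only illustrates the problem; the paper's algorithms are far more efficient, and the quality threshold and data below are invented for illustration:

```python
def best_block_trim(quals, min_q):
    """Brute-force block trimming.

    quals: list of equal-length per-base quality score lists, one per read.
    Returns (l, r, surviving_bases), where positions [l, r) are kept for
    all reads and reads with any remaining base below min_q are removed.
    """
    n = len(quals[0])
    best = (0, 0, 0)
    for l in range(n + 1):
        for r in range(l, n + 1):
            # count bases of reads that survive the quality constraint
            kept = sum(r - l for q in quals
                       if all(s >= min_q for s in q[l:r]))
            if kept > best[2]:
                best = (l, r, kept)
    return best

reads = [[30, 32, 35, 20, 12],
         [28, 31, 33, 30, 10],
         [15, 30, 31, 29, 11]]
print(best_block_trim(reads, min_q=20))  # (1, 4, 9)
```

Here trimming one base from each end keeps all three reads (9 bases), whereas keeping position 0 or 4 would either discard a read or violate the quality constraint.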
Fitting models to behavior is commonly used to infer the latent computational factors responsible for generating behavior. However, the complexity of many behaviors can handicap the interpretation of such models. Here we provide perspectives on problems that can arise when interpreting parameter fits from models that provide incomplete descriptions of behavior. We illustrate these problems by fitting commonly used and neurophysiologically motivated reinforcement-learning models to simulated behavioral data sets from learning tasks. These model fits can pass a host of standard goodness-of-fit tests and other model-selection diagnostics even when the models do not provide a complete description of the behavioral data. We show that such incomplete models can be misleading by yielding biased estimates of the parameters explicitly included in the models. This problem is particularly pernicious when the neglected factors are unknown and therefore not easily identified by model comparisons and similar methods. An obvious conclusion is that a parsimonious description of behavioral data does not necessarily imply an accurate description of the underlying computations. Moreover, general goodness-of-fit measures are not a strong basis for claims that a particular model provides a generalized understanding of the computations that govern behavior. To help overcome these challenges, we advocate the design of tasks that provide direct reports of the computational variables of interest. Such direct reports complement model-fitting approaches by providing a more complete, albeit possibly more task-specific, representation of the factors that drive behavior. Computational models then provide a means to connect such task-specific results to a more general algorithmic understanding of the brain.
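As a minimal illustration of the model-fitting setting described above, the sketch below simulates a softmax Q-learner on a two-armed bandit and recovers its learning rate by grid-search maximum likelihood. The task, parameter values, and estimator are illustrative assumptions, not the paper's actual simulations:

```python
import math
import random

def simulate(alpha, beta, p_reward, n_trials, rng):
    """Simulate a delta-rule Q-learner with softmax choice on a
    two-armed bandit; p_reward gives each arm's reward probability."""
    q = [0.0, 0.0]
    choices, rewards = [], []
    for _ in range(n_trials):
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        c = 1 if rng.random() < p1 else 0
        r = 1.0 if rng.random() < p_reward[c] else 0.0
        q[c] += alpha * (r - q[c])  # prediction-error update
        choices.append(c)
        rewards.append(r)
    return choices, rewards

def neg_log_lik(alpha, beta, choices, rewards):
    """Negative log-likelihood of the observed choices under the model."""
    q = [0.0, 0.0]
    nll = 0.0
    for c, r in zip(choices, rewards):
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        nll -= math.log(p1 if c == 1 else 1.0 - p1)
        q[c] += alpha * (r - q[c])
    return nll

rng = random.Random(0)
choices, rewards = simulate(alpha=0.3, beta=3.0, p_reward=[0.2, 0.8],
                            n_trials=500, rng=rng)
# grid-search maximum-likelihood estimate of the learning rate
grid = [i / 100 for i in range(1, 100)]
alpha_hat = min(grid, key=lambda a: neg_log_lik(a, 3.0, choices, rewards))
print(alpha_hat)
```

The paper's point is exactly that a fit like this can look excellent by standard diagnostics even when the generative model omits factors that bias alpha_hat.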
We introduce and investigate a natural extension of Dung's well-known model of argument systems in which attacks are associated with a weight, indicating the relative strength of the attack. A key concept in our framework is the notion of an inconsistency budget, which characterises how much inconsistency we are prepared to tolerate: given an inconsistency budget beta, we are willing to disregard attacks up to a total weight of beta. The key advantage of this approach is that it permits a much finer-grained analysis of argument systems than unweighted systems, and gives useful solutions when conventional (unweighted) argument systems have none. We begin by reviewing Dung's abstract argument systems and motivating weights on attacks (as opposed to the alternative possibility, which is to attach weights to arguments). We then present the framework of weighted argument systems. We investigate solutions for weighted argument systems and the complexity of computing such solutions, focussing in particular on weighted variations of grounded extensions. Finally, we relate our work to the most relevant examples of argumentation frameworks that incorporate strengths. (C) 2010 Elsevier B.V. All rights reserved.
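The inconsistency-budget idea can be sketched directly: enumerate every subset of attacks whose total weight fits within beta, and collect the grounded extension of each resulting sub-framework. This brute force is exponential in the number of attacks and is only meant to make the semantics concrete (the paper studies the complexity of computing such solutions properly):

```python
from itertools import combinations

def grounded(args, attacks):
    """Grounded extension of an (unweighted) Dung argument system.

    attacks: collection of (attacker, target) pairs.
    """
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            # accept a once every attacker of a has been defeated
            if a not in accepted and all(x in defeated
                                         for (x, t) in attacks if t == a):
                accepted.add(a)
                changed = True
            # an argument attacked by an accepted argument is defeated
            if a not in defeated and any(x in accepted
                                         for (x, t) in attacks if t == a):
                defeated.add(a)
                changed = True
    return accepted

def beta_grounded(args, weights, beta):
    """All grounded extensions reachable by disregarding attack subsets
    whose total weight stays within the inconsistency budget beta.

    weights: dict mapping (attacker, target) -> attack weight.
    """
    attacks = list(weights)
    extensions = set()
    for k in range(len(attacks) + 1):
        for dropped in combinations(attacks, k):
            if sum(weights[e] for e in dropped) <= beta:
                kept = set(attacks) - set(dropped)
                extensions.add(frozenset(grounded(args, kept)))
    return extensions

# Mutual attack: the unweighted grounded extension is empty, but a budget
# of 1 lets us disregard the weight-1 attack and accept argument b.
extensions = beta_grounded({'a', 'b'}, {('a', 'b'): 1, ('b', 'a'): 3}, beta=1)
print(sorted(sorted(e) for e in extensions))  # [[], ['b']]
```

This mirrors the claim in the abstract: the budget yields useful solutions precisely where the unweighted system has none.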
In IEEE 802.11ax Wireless Local Area Networks (WLANs), Orthogonal Frequency Division Multiple Access (OFDMA) has been applied to enable the high-throughput WLAN amendment. However, as the number of devices grows, it becomes difficult for the Access Point (AP) to schedule uplink transmissions, which calls for an efficient access mechanism in the OFDMA uplink system. Based on Multi-Agent Proximal Policy Optimization (MAPPO), we propose a Mean-Field Multi-Agent Proximal Policy Optimization (MFMAPPO) algorithm to improve throughput and guarantee fairness. Motivated by Mean-Field Games (MFGs) theory, a novel global state and action design is proposed to ensure the convergence of MFMAPPO in the massive-access scenario. The Multi-Critic Single-Policy (MCSP) architecture is deployed in the proposed MFMAPPO so that each agent can learn the optimal channel access strategy, improving throughput while satisfying the fairness requirement. Extensive simulation experiments show that the MFMAPPO algorithm 1) has low computational complexity that increases linearly with the number of stations, 2) achieves nearly optimal throughput and fairness performance in the massive-access scenario, and 3) adapts to diverse and dynamic traffic conditions without retraining, including traffic conditions different from those seen during training.
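The abstract does not name its fairness metric, but a common choice in WLAN scheduling work is Jain's fairness index; using it here is an assumption for illustration only:

```python
def jains_index(throughputs):
    """Jain's fairness index: 1.0 for a perfectly equal allocation,
    1/n when a single station receives all the throughput."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jains_index([5.0, 5.0, 5.0, 5.0]))   # 1.0  (perfectly fair)
print(jains_index([20.0, 0.0, 0.0, 0.0]))  # 0.25 (one station dominates)
```

A multi-critic design like MCSP would typically dedicate one critic to a throughput objective and another to a fairness objective of this kind.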
In-core fuel management (ICFM) involves decision-making regarding the specific arrangement of fuel assemblies in a nuclear reactor core. This arrangement, referred to as a reload configuration, influences the efficiency and effectiveness of fuel usage in a reactor. A decision support system (DSS) may assist nuclear reactor operators in improving the quality of their reload configuration designs. In this paper, a generic optimisation-based DSS framework is proposed for multi-objective ICFM, with the intention of serving as a high-level formalisation of a computerised tool that can assist reactor operators in their complex ICFM decisions.
When the duration of the cyclic prefix (CP) is shorter than that of the channel impulse response in single-carrier frequency division multiple access systems, inter-symbol interference and inter-carrier interference degrade system performance. One previously proposed solution to this problem, which also accounts for carrier frequency offsets (CFOs) and co-channel interference, is a blind receive beamforming scheme based on eigenanalysis operating in batch mode. Since that scheme's capability to suppress multipath signals whose delays exceed the CP length had not previously been analysed theoretically, this study provides such an analysis. The analysis is then used to design an adaptive processing scheme, in which the beamforming weight vector is updated on a per-symbol basis without reference signals. The proposed adaptive algorithm reduces computational complexity and shows competitive performance under insufficient CP, CFOs, co-channel interference and time-varying scenarios. Simulation results reveal that it outperforms the previously proposed algorithm.
We consider the problem of a searcher exploring an initially unknown weighted planar graph G. When the searcher visits a vertex v, it learns of each edge incident to v. The searcher's goal is to visit each vertex of G, incurring as little cost as possible. We present a constant competitive algorithm for this problem.
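A natural baseline for this online setting is greedy nearest-neighbour exploration: repeatedly walk to the closest known-but-unvisited vertex via shortest paths through already-explored territory. The sketch below implements that baseline, not the constant-competitive planar algorithm of the paper:

```python
import heapq

def explore_nearest_neighbour(adj, start):
    """Greedy online exploration: repeatedly move to the nearest known
    but unvisited vertex, routing only through already-visited vertices.

    adj maps a vertex to its (neighbour, weight) list; a vertex's incident
    edges become known only once it is visited.  Returns (visit order, cost).
    """
    visited = {start}
    known = {start} | {v for v, _ in adj[start]}
    order, cost, cur = [start], 0, start
    while known - visited:
        # Dijkstra restricted to edges incident to visited vertices
        dist, pq = {cur: 0}, [(0, cur)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float('inf')) or u not in visited:
                continue  # paths may only pass through explored vertices
            for v, w in adj[u]:
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        nxt = min(known - visited, key=lambda v: dist.get(v, float('inf')))
        cost += dist[nxt]
        cur = nxt
        visited.add(nxt)
        order.append(nxt)
        known.update(v for v, _ in adj[nxt])
    return order, cost

# Hypothetical 4-cycle with one heavy edge
adj = {'a': [('b', 1), ('d', 3)],
       'b': [('a', 1), ('c', 1)],
       'c': [('b', 1), ('d', 1)],
       'd': [('a', 3), ('c', 1)]}
print(explore_nearest_neighbour(adj, 'a'))  # (['a', 'b', 'c', 'd'], 3)
```

Greedy exploration is known to be far from constant-competitive in general, which is what makes a constant competitive ratio on planar graphs notable.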
The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One possible implementation of the DWT is the lifting scheme (LS). Because perfect reconstruction is guaranteed by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. It is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by using the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noises, which are then propagated through the LS structure to measure their impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input. It accurately predicts the results obtained using images compressed by the well-known EZW [1] algorithm. Experiments are also performed to measure the difference in terms of bitrate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
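The key property exploited here, perfect reconstruction despite nonlinear rounding inside the lifting steps, is easy to demonstrate with the integer Haar (S-) transform, the simplest IWT. This is a minimal sketch, not the specific filters analysed in the paper:

```python
def haar_iwt_forward(x):
    """One level of the integer Haar (S-) transform via lifting.  The
    floor rounding (>> 1) is nonlinear, yet the lifting structure still
    guarantees perfect reconstruction."""
    s, d = [], []
    for i in range(0, len(x), 2):
        detail = x[i + 1] - x[i]       # predict step
        smooth = x[i] + (detail >> 1)  # update step, floor-rounded
        d.append(detail)
        s.append(smooth)
    return s, d

def haar_iwt_inverse(s, d):
    """Invert the lifting steps in reverse order."""
    x = []
    for smooth, detail in zip(s, d):
        first = smooth - (detail >> 1)  # undo the update step
        x += [first, first + detail]    # undo the predict step
    return x

x = [5, 7, 3, 2, 10, 10, 0, 255]
s, d = haar_iwt_forward(x)
assert haar_iwt_inverse(s, d) == x  # lossless round trip
print(s, d)
```

It is the rounding inside the update step that the paper models as additive noise when the IWT is used for lossy rather than lossless compression.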
Exclusion algorithms have been used recently to find all solutions of a system of nonlinear equations or to find the global minimum of a function over a compact domain. These algorithms are based on a minimization condition that can be applied to each cell in the domain. In this paper, we consider Lipschitz functions of order alpha and give a new minimization condition for the exclusion algorithm. Furthermore, convergence and complexity results are presented for this algorithm. (c) 2006 Elsevier B.V. All rights reserved.
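A minimal exclusion algorithm for global minimisation in one dimension might look as follows: evaluate each cell at its centre, discard the cell when the Lipschitz lower bound exceeds the best value found so far, and subdivide otherwise. The minimisation condition used here is the classical order-alpha Lipschitz bound, not the paper's new condition, and the test function and constants are illustrative:

```python
def exclusion_minimize(f, a, b, lip, alpha, tol=1e-4):
    """Exclusion-style global minimisation of f on [a, b], assuming f is
    Lipschitz of order alpha: |f(x) - f(y)| <= lip * |x - y| ** alpha."""
    cells = [(a, b)]
    best_x, best_f = a, f(a)
    while cells:
        lo, hi = cells.pop()
        c, r = (lo + hi) / 2, (hi - lo) / 2
        fc = f(c)
        if fc < best_f:
            best_x, best_f = c, fc
        # exclusion (minimisation) condition: no point of this cell can
        # beat the incumbent, so the whole cell is discarded
        if fc - lip * r ** alpha > best_f:
            continue
        if r > tol:
            cells.append((lo, c))
            cells.append((c, hi))
    return best_x, best_f

# f has its global minimum at t = 0.7; lip = 3 is a valid order-1
# Lipschitz constant for f on [0, 2]
x, fx = exclusion_minimize(lambda t: (t - 0.7) ** 2, 0.0, 2.0,
                           lip=3.0, alpha=1.0)
print(x, fx)
```

Sharper minimisation conditions let the algorithm discard cells earlier, which is where the complexity gains studied in the paper come from.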