The presented paper aims to analyze the influence of the selection of transfer function and training algorithm on neural network flood runoff forecasts. Nine of the most significant flood events, caused by extreme rainfall, were selected from 10 years of measurements on a small headwater catchment in the Czech Republic, and flood runoff forecasting was investigated using an extensive set of multilayer perceptrons with one hidden layer of neurons. The analyzed artificial neural network models, with 11 different activation functions in the hidden layer, were trained using 7 local optimization algorithms. The results show that the Levenberg-Marquardt algorithm was superior to the remaining tested local optimization methods. Among the 11 nonlinear transfer functions used in the hidden-layer neurons, the RootSig function outperformed the rest of the analyzed activation functions.
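As an illustration of the kind of activation function being compared, here is a minimal sketch of a one-hidden-layer forward pass using one common definition of the RootSig function, f(x) = x / (1 + sqrt(1 + x^2)). The abstract does not state the exact form, so this definition, and all the weights below, are illustrative assumptions.

```python
import math

def rootsig(x):
    """RootSig activation: bounded in (-1, 1), sigmoid-like, cheap to evaluate.
    (One common definition; the paper's exact form is not given in the abstract.)"""
    return x / (1.0 + math.sqrt(1.0 + x * x))

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer perceptron with RootSig hidden units."""
    hidden = [rootsig(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out
```

Training such a network with Levenberg-Marquardt additionally requires the Jacobian of the output with respect to the weights, which is omitted here.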
A linear combination of Gaussian components, i.e. a Gaussian 'mixture', is used to represent the target probability density function (pdf) in Multiple Hypothesis Tracking (MHT) systems. The complexity of MHT is typically managed by 'reducing' the number of mixture components. Two complementary MHT mixture reduction algorithms are proposed and assessed using a simulation involving a cluttered infrared (IR) seeker scene. A simple means of incorporating intensity information is also derived and used by both methods to provide well-balanced peak-to-track association weights. The first algorithm (MHT-2) uses the Integral Squared Error (ISE) criterion, evaluated over the entire posterior MHT pdf, in a guided optimization procedure, to quickly fit at most two components. The second algorithm (MHT-PE) uses many more components and a simple strategy, involving Pruning and Elimination of replicas, to maximize hypothesis diversity while keeping computational complexity under control. (C) 2013 Elsevier Ltd. All rights reserved.
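The ISE criterion used by MHT-2 has a closed form for Gaussian mixtures, because the product of two Gaussian densities integrates to a Gaussian evaluated at the mean difference: ∫N(x; m1, v1) N(x; m2, v2) dx = N(m1 - m2; 0, v1 + v2). A minimal sketch for 1-D mixtures (the paper's guided optimization and intensity weighting are not reproduced):

```python
import math

def _gauss_prod_integral(m1, v1, m2, v2):
    # Closed form: integral of N(x; m1, v1) * N(x; m2, v2) over x.
    v = v1 + v2
    return math.exp(-(m1 - m2) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

def ise(mix_f, mix_g):
    """Integral Squared Error between two 1-D Gaussian mixtures.
    Each mixture is a list of (weight, mean, variance) tuples."""
    def cross(a, b):
        return sum(wa * wb * _gauss_prod_integral(ma, va, mb, vb)
                   for wa, ma, va in a for wb, mb, vb in b)
    # ISE(f, g) = ∫(f - g)^2 = ∫f^2 - 2∫fg + ∫g^2, each term a double sum.
    return cross(mix_f, mix_f) - 2.0 * cross(mix_f, mix_g) + cross(mix_g, mix_g)
```

A reduction algorithm can then search over candidate two-component mixtures to minimize this quantity.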
Detecting communities within networks is of great importance for understanding the structure and organization of real-world systems. To this end, one of the major challenges is to find the local community of a given node with limited knowledge of the global network. Most existing methods depend strongly on the starting node and require predefined parameters to control the agglomeration procedure, which may distort the results of local community detection. In this work, we propose a parameter-free local community detection algorithm that uses two self-adaptive phases, thus comprehensively considering the external and internal link similarity of neighborhood nodes in each clustering iteration. Based on boundary-node identification, our self-adaptive method can effectively control the scale and scope of the local community. Experimental results show that our algorithm is efficient and well-behaved on both computer-generated and real-world networks, greatly improving the performance of local community detection in terms of stability and accuracy.
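The general idea of parameter-free greedy local expansion can be sketched as follows. This sketch uses a simple local-modularity score (internal edges divided by boundary-crossing edges) as the self-terminating criterion; it is an illustration of the approach, not the authors' two-phase internal/external similarity procedure.

```python
def _local_modularity(adj, community):
    """Ratio of internal edges to boundary-crossing edges (higher is better)."""
    internal = sum(1 for n in community for m in adj[n] if m in community) / 2
    crossing = sum(1 for n in community for m in adj[n] if m not in community)
    return internal / crossing if crossing else float("inf")

def local_community(adj, seed):
    """Greedy local community growth from `seed`, stopping when no boundary
    node improves the score. `adj` maps each node to its set of neighbours."""
    community = {seed}
    while True:
        boundary = set().union(*(adj[n] for n in community)) - community
        if not boundary:
            break
        best = max(boundary, key=lambda c: _local_modularity(adj, community | {c}))
        if _local_modularity(adj, community | {best}) <= _local_modularity(adj, community):
            break  # parameter-free stop: no candidate improves the community
        community.add(best)
    return community
```

On a graph made of two 4-cliques joined by a single bridge edge, growth from a node in one clique stops exactly at that clique.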
Focusing on novel database application scenarios, where data sets increasingly arise in uncertain and imprecise formats, in this paper we propose a novel decomposition framework for efficiently computing and querying multidimensional OLAP data cubes over probabilistic data, which naturally capture this kind of data. Several models and algorithms supported by our proposed framework are formally presented and described in detail, based on well-understood theoretical statistical/probabilistic tools, which converge to the definition of so-called probabilistic OLAP data cubes, the most prominent result of our research. Finally, we complete our analytical contribution by introducing an innovative Probability Distribution Function (PDF)-based approach, which makes use of well-known probabilistic estimator theory, for efficiently querying probabilistic OLAP data cubes, along with a comprehensive experimental assessment and analysis over synthetic probabilistic databases.
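By linearity of expectation, an expected-value SUM cube over tuple-level probabilities can be computed in a single pass. A minimal sketch under an assumed tuple-independent model, with an illustrative row format of (dimension values, measure, probability); the paper's decomposition framework and PDF-based estimators are not reproduced here.

```python
from collections import defaultdict

def expected_sum_cube(rows, dims):
    """Expected SUM measure of a probabilistic group-by.
    `rows` is an iterable of (dim_vals_dict, value, prob); `dims` lists the
    grouping dimensions. E[SUM] = sum of prob * value per group, by linearity
    of expectation (correlations between tuples do not affect the mean)."""
    cube = defaultdict(float)
    for dim_vals, value, prob in rows:
        key = tuple(dim_vals[d] for d in dims)
        cube[key] += prob * value
    return dict(cube)
```

Higher moments (e.g. the variance of SUM, needed for confidence bounds on query answers) would additionally require the independence assumption to factorize.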
Face super-resolution refers to inferring a high-resolution face image from its low-resolution counterpart. In this paper, we propose a parts-based face hallucination framework consisting of global face reconstruction and residue compensation. In the first phase, the correlation-constrained non-negative matrix factorization (CCNMF) algorithm combines non-negative matrix factorization and canonical correlation analysis to hallucinate the global high-resolution face. In the second phase, the High-dimensional Coupled NMF (HCNMF) algorithm is used to compensate for the error residue in the hallucinated images. The proposed CCNMF algorithm can generate a global face more similar to the ground-truth face by learning a parts-based local representation of facial images, while HCNMF can learn the relation between the high-resolution and low-resolution residues to better preserve high-frequency details. The experimental results validate the effectiveness of our method. (C) 2014 Elsevier Ltd. All rights reserved.
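Both phases build on the basic NMF factorization V ≈ WH with non-negative factors. A minimal sketch of the standard Lee-Seung multiplicative updates, which underlie parts-based representations; the CCNMF correlation constraint and the HCNMF coupling between resolutions are not reproduced.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - WH||_F with W, H >= 0.
    V is a non-negative matrix (list of lists); k is the number of parts."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    n, m = len(V), len(V[0])
    W = [[random.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[random.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H
```

The multiplicative form keeps W and H non-negative at every step, which is what yields the parts-based (additive-only) representation.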
One of the most common defects in digital photography is motion blur caused by camera shake. Shift-invariant motion blur can be modeled as a convolution of the true latent image with a point spread function (PSF), plus additive noise. The goal of image deconvolution is to reconstruct the latent image from a degraded image. However, ringing artifacts inevitably arise in the deconvolution stage. To suppress these undesirable artifacts, regularization-based methods using natural image priors have been proposed to overcome the ill-posedness of the deconvolution problem. When the estimated PSF is somewhat erroneous or the PSF size is large, conventional regularization to reduce ringing leads to a loss of image details. This paper focuses on nonblind deconvolution with adaptive regularization, which preserves image details while suppressing ringing artifacts. The key idea is to control the regularization weight adaptively according to local image characteristics. We adopt elaborated reference maps that indicate edge strength, so that textured and smooth regions can be distinguished, and then impose an appropriate constraint on the optimization process. Experimental results on both synthesized and real images show that our method restores the latent image with much less ringing while favoring sharp edges.
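The adaptive-weighting idea can be sketched as: derive an edge-strength reference map, then map it to a per-pixel regularization weight that is small near edges (so details survive) and large in smooth regions (so ringing is damped). The gradient-magnitude map and the weight range below are illustrative assumptions, not the paper's elaborated reference maps.

```python
def edge_strength(img):
    """Forward-difference gradient magnitude as a simple edge reference map.
    `img` is a list of rows of grayscale values."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][x]
            gy = img[min(y + 1, h - 1)][x] - img[y][x]
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def adaptive_weights(img, lam_smooth=0.05, lam_edge=0.005):
    """Per-pixel regularization weight: lam_smooth where the map is flat,
    decreasing toward lam_edge at the strongest edges (values assumed)."""
    e = edge_strength(img)
    peak = max(max(row) for row in e) or 1.0
    return [[lam_edge + (lam_smooth - lam_edge) * (1.0 - v / peak) for v in row]
            for row in e]
```

In an iterative deconvolution, this weight map would multiply the regularization term pixel-wise inside each update step.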
To express temporal properties of dense-time real-valued signals, Signal Temporal Logic (STL) was defined by Maler et al., whose work presented a monitoring algorithm for deciding the satisfiability of STL formulae on finite discrete samples of continuous signals. However, the logic is not expressive enough to distinguish oscillatory properties important in biology. In this paper we introduce the extended logic STL*, in which STL is augmented with a signal-value freezing operator that allows expressing (and distinguishing) various dynamic aspects of oscillations. This operator may be nested to further increase expressiveness. The logic is supported by a monitoring algorithm, prototyped in Matlab, for the fragment that avoids nesting of the freezing operator. The monitoring procedure for STL* is evaluated on a sample oscillatory signal with varied parameters. Application of the extended logic is demonstrated on a case study of a biological oscillator. We also discuss the expressive power of STL with respect to STL*. (c) 2014 Elsevier Inc. All rights reserved.
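For intuition about what such a monitor computes, a discrete-sample checker for the basic STL "eventually" operator F[a, b] can be sketched as follows; the STL* signal-value freezing operator is beyond this sketch.

```python
def eventually(samples, predicate, t_lo, t_hi):
    """Evaluate F[t_lo, t_hi] predicate at each sample time, on finite
    discrete samples of a continuous signal (standard STL semantics
    restricted to the sampled points). `samples` is a time-sorted list
    of (time, value) pairs. Returns (time, satisfied) per sample."""
    out = []
    for t, _ in samples:
        ok = any(predicate(v) for (u, v) in samples if t + t_lo <= u <= t + t_hi)
        out.append((t, ok))
    return out
```

A full monitor handles all STL operators recursively over the formula structure; this single-operator sketch only shows the windowed check at the core of the temporal modalities.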
The vast volumes of seismic data being recorded by both permanent and temporary networks operating all over the world provide exciting opportunities for studying the Earth's interior and earthquake source characteristics. As a result, the development of efficient computer algorithms and procedures capable of automatically extracting and processing such long streams of data is one of the most challenging issues facing modern seismological research. Valoroso et al. (2013) obtained an extraordinary degree of detail in the anatomy of the normal-fault system of the L'Aquila earthquake after processing around 64,000 aftershocks (extracted, picked, and located) via an automated procedure. Spectral analysis of K-NET and KiK-net data in Japan was carried out by Oth et al. (2011) on the basis of more than 67,000 records analyzed via an automated procedure that included phase picking, earthquake location, and coda identification.
Community structure is one of the most fundamental and important topological characteristics of complex networks. Research on community structure has wide applications and is very important for analyzing the topological structure, understanding the functions, finding the hidden properties, and forecasting the time-varying behavior of networks. This paper analyzes some related algorithms and proposes a new algorithm, the CN agglomerative algorithm, based on graph theory and the local connectedness of the network, to find communities in a network. We show this algorithm is distributed and polynomial; meanwhile, the simulations show it is accurate and fine-grained. Furthermore, we modify this algorithm to obtain a modified CN algorithm and apply it to dynamic complex networks; the simulations also verify that the modified CN algorithm achieves high accuracy.
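A minimal sketch of common-neighbour (CN) agglomeration, as an illustration of the general idea rather than the paper's exact algorithm: merge the endpoints of every edge whose endpoints share at least one common neighbour, tracked with a union-find structure. Bridge edges between communities typically have no common neighbours and are therefore never merged.

```python
def common_neighbor_communities(adj, min_cn=1):
    """Agglomerate nodes joined by an edge with >= min_cn common neighbours.
    `adj` maps each node to its set of neighbours (undirected graph).
    The paper's full CN algorithm adds connectedness checks omitted here."""
    parent = {n: n for n in adj}

    def find(n):  # union-find with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    for u in adj:
        for v in adj[u]:
            if u < v and len(adj[u] & adj[v]) >= min_cn:
                parent[find(u)] = find(v)  # merge the two clusters

    groups = {}
    for n in adj:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())
```

Each merge test is local to one edge and its endpoints' neighbour sets, which is what makes this style of algorithm naturally distributable.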