ISBN (Print): 9781457702198
Place cells are neurons in the hippocampus that are sensitive to location within an environment. Simulations of a large-scale, computational model of the rat dentate gyrus using grid cell input have been performed resulting in granule cells that express multiple place fields. The typical method of detecting place fields using a global threshold on this data is unreliable as the characteristics of the place fields from a single neuron can be highly variable. A grid-based implementation of DENCLUE has been developed to calculate local thresholds to identify each place field. An adaptive binning algorithm used to smooth the rate maps was combined with the DENCLUE implementation to adaptively choose the size of the smoothing kernel and reduce the number of free parameters of the total algorithm. A sensitivity analysis was performed using the threshold parameter to demonstrate the robustness of using local thresholds as opposed to using a single global threshold in detecting the place fields resulting from the large-scale simulation. The analysis supports the use of applying local thresholds for place field detection and will be used to further investigate the role of granule cells in hippocampal function.
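The local-threshold idea can be illustrated on a toy 1-D rate map (the paper's DENCLUE implementation operates on 2-D grids with adaptive kernel smoothing; the function names, the 20% threshold fraction, and the synthetic map below are illustrative assumptions, not the paper's code). A global threshold tied to the strongest field's peak misses a weaker field, while a threshold local to each candidate peak recovers both:

```python
import numpy as np

def field_runs(above):
    """Contiguous runs of True values, as (start, end) index pairs."""
    runs, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(above)))
    return runs

def detect_global(rate, frac=0.2):
    """One threshold for the whole map: frac of the global peak rate."""
    return field_runs(rate > frac * rate.max())

def detect_local(rate, frac=0.2, min_peak=1.0):
    """Threshold each candidate field at frac of its own local peak."""
    peaks = [i for i in range(1, len(rate) - 1)
             if rate[i] >= rate[i - 1] and rate[i] >= rate[i + 1]
             and rate[i] > min_peak]
    fields = []
    for p in peaks:
        thr = frac * rate[p]
        lo, hi = p, p
        while lo > 0 and rate[lo - 1] > thr:
            lo -= 1
        while hi < len(rate) - 1 and rate[hi + 1] > thr:
            hi += 1
        fields.append([lo, hi])
    fields.sort()
    merged = []
    for lo, hi in fields:           # merge overlapping candidate fields
        if merged and lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return [tuple(f) for f in merged]

# Synthetic 1-D rate map: one strong (10 Hz) and one weak (1.5 Hz) place field.
x = np.arange(100)
rate = 10.0 * np.exp(-((x - 25) / 5.0) ** 2) + 1.5 * np.exp(-((x - 70) / 5.0) ** 2)
```

With these numbers the global threshold sits at 2 Hz, above the weak field's entire profile, so only the strong field is detected; the local thresholds find both.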
Metaheuristics are high-level strategies that explore the search space using different methods to solve global optimization problems. In this paper, the Football Game algorithm is proposed as a new metaheuristic based on simulating footballers' behavior during a game as they seek the best positions from which to score a goal under the supervision of the team coach. The most important distinction of the proposed algorithm from existing ones is that it simulates the intelligence of humans working together as a team toward a specific goal, rather than the swarm intelligence of animals in nature; this also introduces a new approach to balancing diversification and intensification. The Football Game algorithm is a nature-inspired, population-based algorithm capable of finding multiple global optima. We studied general football game tactics and idealized their characteristics to formulate the algorithm. We then compared the proposed algorithm with other metaheuristics, including standard and modified particle swarm optimization and the bat algorithm. The comparison studies show that the proposed algorithm outperforms the other algorithms and has more robust performance. Finally, we conclude by pointing out special attributes of the Football Game algorithm.
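The paper's actual update equations are not reproduced in this abstract, so the sketch below is only a generic population-based search in the same spirit: players make shrinking random moves (intensification over time) while drifting toward the best known position (the "coach's" guidance). All names, constants, and the sphere test function are invented for illustration:

```python
import random

def sphere(x):
    """Classic benchmark: minimum 0 at the origin."""
    return sum(v * v for v in x)

def team_search(f, dim=2, n_players=20, iters=200, step=0.5, seed=1):
    rng = random.Random(seed)
    team = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_players)]
    best = min(team, key=f)                      # best known "position" so far
    for t in range(iters):
        spread = step * (1 - t / iters)          # shrinking moves: intensification
        for i, p in enumerate(team):
            cand = [v + rng.uniform(-1, 1) * spread + 0.1 * (b - v)
                    for v, b in zip(p, best)]    # random move + drift toward best
            if f(cand) < f(p):                   # keep the move only if it improves
                team[i] = cand
        best = min(team + [best], key=f)
    return best, f(best)

best, val = team_search(sphere)
```

The early wide moves play the role of diversification, the decaying spread and the pull toward the best position that of intensification, which is the balance the abstract emphasizes.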
ISBN (Print): 9781509000463
What exactly is `Big Data', and for what purposes and applications is it really efficient? Between the commercial promises made by industrial actors and the Cassandra-like warnings of some whistle-blowers, we propose a singular Big Data field to investigate with inductive, data-driven algorithms: developing collections. Last but not least, we investigate the innovative possibility of curating `figural' collections, characterized by Jean Piaget as follows: “A figural collection composes a figure, through the spatial relationships between its elements, whereas non-figural collections and classes are free of any figure”. Thus, we incidentally disclose some important abstract truths of Big Data.
In this work we present a framework that profiles HEVC (High Efficiency Video Coding) encoder modules, focusing on cache memory performance and energy. The framework considers the HEVC reference software (HM) and analyzes the impact of several coding parameters on the cache hierarchy. HEVC was proposed in 2013, introducing new video coding techniques to meet the demand for higher resolutions. The tools included in HEVC significantly increase the computational effort and energy consumption required to encode video compared to its predecessor, H.264/AVC. For this analysis we used the proposed framework, MAP-HEVC (Memory Access Profiling for HEVC), considering seven different video coding configurations and four video resolutions. In the prediction module, the results showed that Full Search (FS) produces 75% more memory accesses than Test Zonal (TZ) Search. The results also suggest that a 16×16 search range is a viable option for reducing memory accesses while achieving better compression. For residual coding, Rate-Distortion Optimized Quantization (RDOQ) was evaluated, and our analysis showed that this tool adds 10% more memory accesses.
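A back-of-the-envelope model shows why FS touches so much more memory than a zonal search. The candidate counts below are a toy approximation (exhaustive window versus an 8-point, log-step diamond refinement loosely modeled on TZ Search), not MAP-HEVC's measured figures:

```python
BLOCK = 16 * 16   # pixels read per SAD evaluation of one 16x16 block

def full_search_accesses(search_range=16):
    """Full Search evaluates every integer position in the search window."""
    candidates = (2 * search_range + 1) ** 2
    return candidates * BLOCK

def tz_like_accesses(search_range=16):
    """Toy stand-in for TZ Search: 8 candidates per diamond step,
    with the step size halving from search_range down to 1."""
    candidates = 1            # the co-located start position
    step = search_range
    while step >= 1:
        candidates += 8
        step //= 2
    return candidates * BLOCK

fs = full_search_accesses()   # 33 * 33 candidates
tz = tz_like_accesses()       # 1 + 8 * 5 candidates
```

Even this crude count makes the ordering obvious; the paper's 75% figure is much smaller because a real encoder caches rows of the search window and prunes candidates, which is exactly the behavior MAP-HEVC is built to measure.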
ISBN (Print): 9781509009107
Indexes are very important when querying data from a database, especially at large scale. Rational use of index technology is essential to improving a database's query performance. SQL Server uses a B+ tree structure to store indexes. A clustered index determines the physical order of the data; both clustered and non-clustered indexes rely on the B+ index tree to locate data. A non-clustered index must follow either a data-row pointer or the clustered index key to retrieve the data. SQL Server uses two access methods to retrieve data: table scans and index seeks. A table scan should usually be avoided, except when the query involves a large portion of the data or covers most columns of the table; an index seek is more efficient in most cases. This paper gives some principles for using indexes properly. It also suggests that functions, calculations, and certain query conditions should be avoided, replaced, or used as sparingly as possible in query statements so that indexes remain effective.
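The scan-versus-seek behavior, and the point about functions defeating an index, can be demonstrated with SQLite (used here only because it ships with Python; SQL Server's B+ tree indexes behave analogously, and the table and index names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
con.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                [(f"c{i % 100}", float(i)) for i in range(1000)])

def plan(sql):
    """Concatenate the detail column of EXPLAIN QUERY PLAN output."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Without an index on `customer`, the query falls back to a full table scan.
before = plan("SELECT * FROM orders WHERE customer = 'c7'")

con.execute("CREATE INDEX idx_customer ON orders (customer)")
after = plan("SELECT * FROM orders WHERE customer = 'c7'")

# Wrapping the indexed column in a function defeats the index,
# which is exactly the practice the paper advises against.
wrapped = plan("SELECT * FROM orders WHERE lower(customer) = 'c7'")
```

After the index is created the plan switches from a scan of `orders` to a search using `idx_customer`, and the `lower(...)` predicate reverts it to a scan.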
ISBN (Print): 9781509044719
Given a graph G = (V, E), a node is called perfect (with respect to a set S ⊆ V) if its closed neighborhood contains exactly one node of S; a node is called nearly perfect if it is not perfect but is adjacent to a perfect node. S is called a perfect neighborhood set if every node is either perfect or nearly perfect. We present the first self-stabilizing algorithm for computing a perfect neighborhood set in an arbitrary graph. This anonymous, constant-space algorithm terminates in O(n²) steps under an unfair central daemon, where n is the number of nodes in the graph.
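The definitions can be checked directly on a small graph; the helper below is only a verification of the property on an adjacency-set representation, not the paper's self-stabilizing algorithm:

```python
def closed_nbhd(adj, v):
    """Closed neighborhood N[v] = N(v) ∪ {v}."""
    return adj[v] | {v}

def is_perfect_neighborhood_set(adj, S):
    """True iff every node is perfect or nearly perfect w.r.t. S."""
    perfect = {v for v in adj if len(closed_nbhd(adj, v) & S) == 1}
    nearly = {v for v in adj if v not in perfect and adj[v] & perfect}
    return perfect | nearly == set(adj)

# Path graph 0-1-2-3: with S = {1}, nodes 0, 1, 2 are perfect
# (each closed neighborhood contains exactly node 1) and 3 is
# nearly perfect (adjacent to the perfect node 2).
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

The empty set fails on the same graph, since then no node is perfect and hence none can be nearly perfect either.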
Significant wave height (SWH) is one of the major ocean altimetry products. This paper proposes a fast algorithm for retrieving the SWH from the altimeter waveform. The idea is to differentiate the echo waveform to acquire a Gaussian-shaped waveform. The standard deviation of this Gaussian function is then computed directly, and from it the standard deviation of the scatter probability density function (and hence the SWH) is retrieved. The performance of the algorithm is verified with Jason-2 altimeter data. The algorithm can be employed onboard or in near-real-time processing in future altimeter missions.
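A minimal round trip of the idea, assuming a noise-free erf-shaped leading edge and the usual altimetry convention σ_s = SWH/(2c); the function names, the 2 m test value, and the neglect of thermal noise and antenna effects are all simplifying assumptions:

```python
import math
import numpy as np

C = 299792458.0   # speed of light, m/s

def leading_edge(t, t0, sigma):
    """Idealized noise-free leading edge of a Brown-model echo (an erf ramp)."""
    return np.array([0.5 * (1.0 + math.erf((ti - t0) / (sigma * math.sqrt(2.0))))
                     for ti in t])

def swh_from_waveform(t, w, sigma_p=0.0):
    """Differentiate the echo, treat the result as a Gaussian pdf, read its
    standard deviation directly, then map time spread to SWH = 2 c sigma_s."""
    g = np.diff(w)
    g = g / g.sum()                        # normalize increments into a pmf
    tm = 0.5 * (t[:-1] + t[1:])            # midpoints of the differencing steps
    mean = float((tm * g).sum())
    sigma = math.sqrt(float(((tm - mean) ** 2 * g).sum()))
    sigma_s2 = max(sigma ** 2 - sigma_p ** 2, 0.0)   # remove point-target width
    return 2.0 * C * math.sqrt(sigma_s2)

# Round trip: a 2 m SWH implies sigma_s = SWH / (2c) in the time domain.
t = np.linspace(-20e-9, 20e-9, 2001)
w = leading_edge(t, 0.0, 2.0 / (2.0 * C))
est = swh_from_waveform(t, w)
```

No iterative fitting is involved, which is what makes this kind of retrieval attractive for onboard processing.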
Support vector machine (SVM) is a supervised method widely used in statistical classification and regression analysis. SVM training can be solved via the interior point method (IPM), which offers low storage requirements, fast convergence, and easy parallelization. However, it is still confronted with challenges in training speed and memory use. In this paper, we propose a parallel primal-dual IPM algorithm based on incomplete Cholesky factorization (ICF), named HPSVM, for efficiently training large-scale SVMs on a CPU-GPU cluster. Our approach is distinguished from earlier work in that it is specifically designed to take maximal advantage of CPU-GPU collaborative computation through a dual-buffer, three-stage pipeline mechanism, and it efficiently handles large-scale training datasets. In HPSVM, the heterogeneous hierarchical memory is fully exploited to alleviate the data-transfer bottleneck, and a programming paradigm is presented to build an efficient collaboration mechanism between CPU and GPU. Comprehensive experiments show that HPSVM is up to 11 times faster than the CPU version on real datasets.
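The ICF building block can be sketched as a greedy pivoted incomplete Cholesky: it produces a tall-thin factor H with K ≈ H·Hᵀ, which lets the IPM's linear algebra work with an n×r matrix instead of the full n×n kernel. This is a textbook serial version under an assumed RBF kernel, not HPSVM's CPU-GPU implementation:

```python
import numpy as np

def icf(K, rank, tol=1e-10):
    """Greedy pivoted incomplete Cholesky: returns H (n x r), K ≈ H @ H.T."""
    n = K.shape[0]
    H = np.zeros((n, rank))
    d = np.diag(K).astype(float).copy()    # residual diagonal
    for j in range(rank):
        p = int(np.argmax(d))              # pivot on the largest residual entry
        if d[p] <= tol:
            return H[:, :j]                # residual exhausted: stop early
        H[:, j] = (K[:, p] - H @ H[p]) / np.sqrt(d[p])
        d -= H[:, j] ** 2                  # update residual diagonal
    return H

# RBF kernel on a few random points (symmetric positive definite).
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
H_full = icf(K, rank=6)    # full rank: exact reconstruction
H_low = icf(K, rank=3)     # low rank: the approximation used for large n
```

At full rank the factorization reconstructs K exactly; in training one keeps r ≪ n, trading a small approximation error for the memory and speed gains the abstract describes.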
ISBN (Print): 9781509013036
The main focus of this work is the analysis of the Differential Doppler method, with a designed algorithm for computing and 2-D visualization of the iso-Doppler curves for three-position and two-position moving stations with a stationary target. The work also describes the effect of the target position and of the velocity vectors of the moving stations on the shape of the iso-Doppler curves. Based on the Differential Doppler method, a new algorithm is proposed for determining the target position from two or three moving stations and for verifying its ambiguity in a Matlab program.
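The measurable for one pair of moving stations can be written down directly: each station sees a Doppler shift proportional to its radial closing speed toward the stationary target, and an iso-Doppler curve is the locus of target positions where the difference of the two shifts is constant. The 2-D geometry, the carrier frequency, and the coordinates below are arbitrary assumptions for illustration:

```python
import math

C = 299792458.0   # speed of light, m/s
F0 = 1.0e9        # assumed carrier frequency, Hz

def doppler(station, velocity, target, f0=F0):
    """Doppler shift at a moving station from a stationary 2-D target."""
    dx, dy = target[0] - station[0], target[1] - station[1]
    r = math.hypot(dx, dy)
    closing = (velocity[0] * dx + velocity[1] * dy) / r   # radial closing speed
    return f0 * closing / C

def differential_doppler(s1, v1, s2, v2, target, f0=F0):
    """Difference of the shifts seen by two stations: constant along
    one iso-Doppler curve."""
    return doppler(s1, v1, target, f0) - doppler(s2, v2, target, f0)

# Symmetric geometry: two stations flying in parallel, target on the
# perpendicular bisector, so the differential Doppler vanishes.
on_axis = differential_doppler((-1000.0, 0.0), (0.0, 100.0),
                               (1000.0, 0.0), (0.0, 100.0), (0.0, 5000.0))
off_axis = differential_doppler((-1000.0, 0.0), (0.0, 100.0),
                                (1000.0, 0.0), (0.0, 100.0), (2000.0, 5000.0))
```

Sampling this function over a grid of target positions and contouring it is exactly how the 2-D iso-Doppler visualization is produced; intersecting curves from two or three station pairs is what localizes the target and exposes its ambiguities.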
ISBN (Print): 9781509033409
The difficulty of solving many-objective optimization problems (MaOPs) with well-established multi-objective evolutionary algorithms such as NSGA-II (Non-dominated Sorting Genetic Algorithm II) motivates this work to develop a new alternative for solving MaOPs. This paper therefore proposes a novel variant of Simulated Annealing (SA) for MaOPs, combining the proposed SA with clustering-based reduction techniques and tabu search. A comparative analysis between the proposed algorithm and the reference algorithm NSGA-II is presented using the well-known DTLZ test set. Experimental results under different performance metrics demonstrate the advantages of the proposed algorithm over a well-established state-of-the-art algorithm, NSGA-II.
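The paper's SA variant is not specified in this abstract, so the sketch below is only a generic Pareto-archive simulated annealing on a simple bi-objective function, without the clustering-reduction and tabu components; every name and constant is an illustrative assumption:

```python
import math
import random

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_sa(f, lo, hi, iters=2000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    archive = [(x, f(x))]                        # non-dominated solutions found
    for k in range(iters):
        T = t0 * (1.0 - k / iters) + 1e-9        # linear cooling schedule
        y = min(hi, max(lo, x + rng.gauss(0.0, 0.3)))
        fx, fy = f(x), f(y)
        # accept dominating moves always, worse moves with Boltzmann probability
        worse = sum(max(a - b, 0.0) for a, b in zip(fy, fx))
        if dominates(fy, fx) or rng.random() < math.exp(-worse / T):
            x = y
            if not any(dominates(fa, fy) for _, fa in archive):
                archive = [(xa, fa) for xa, fa in archive
                           if not dominates(fy, fa)]
                archive.append((y, fy))
    return archive

# Bi-objective toy problem: the Pareto-optimal set is x in [0, 2].
front = pareto_sa(lambda x: (x * x, (x - 2.0) ** 2), -2.0, 4.0)
```

A real many-objective variant would add exactly the components the paper names: clustering to keep the archive small and tabu memory to avoid revisiting regions, which matter far more as the number of objectives grows.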