Complex human-engineered systems involve an interconnection of multiple decision makers (or agents) whose collective behavior depends on a compilation of local decisions made with only partial information about one another and about the state of the environment [1]-[4]. Strategic interactions among agents in these systems can be modeled as a multiplayer simultaneous-move game [5]-[8]. Since the agents involved can have conflicting objectives, it is natural for each agent to make decisions by optimizing its individual payoff or cost.
This paper presents an adaptive step-size gradient adaptive filter. The step size of the adaptive filter is adjusted by a gradient descent algorithm designed to reduce the squared estimation error at each iteration. An approximate performance analysis is presented for the case in which the filter inputs are zero-mean, white, and Gaussian and the optimal coefficients vary in time according to a random-walk model. The algorithm exhibits fast convergence and low steady-state misadjustment. Furthermore, its tracking performance in nonstationary environments is relatively insensitive to the choice of filter parameters and is very close to the best achievable performance of the least mean square (LMS) algorithm over a large range of values of the step size of the step-size adaptation algorithm. Several simulation examples that demonstrate these properties and verify the analytical results are also presented.
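As a concrete sketch of this class of algorithms (not the paper's exact recursion), the following NumPy code adapts the LMS step size by a gradient descent on the squared error, driven by the correlation of successive instantaneous gradients; the filter order, rho, and the clipping bounds are illustrative choices.

```python
import numpy as np

def adaptive_stepsize_lms(x, d, order=4, mu0=0.01, rho=1e-4,
                          mu_min=1e-6, mu_max=0.1):
    """LMS filter whose step size mu follows its own gradient descent
    on the squared estimation error (illustrative parameter values)."""
    w = np.zeros(order)
    mu = mu0
    prev_grad = np.zeros(order)             # e(n-1) x(n-1)
    errors = np.zeros(len(x))
    for n in range(order, len(x)):
        xn = x[n - order + 1:n + 1][::-1]   # regressor, newest sample first
        e = d[n] - w @ xn                   # a priori estimation error
        grad = e * xn                       # instantaneous gradient estimate
        # step-size update: descend the squared error w.r.t. mu, then clip
        mu = float(np.clip(mu + rho * (grad @ prev_grad), mu_min, mu_max))
        w = w + mu * grad                   # LMS coefficient update
        prev_grad = grad
        errors[n] = e
    return w, errors

# usage: identify a known FIR response from noisy observations
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:5000] + 0.01 * rng.standard_normal(5000)
w_hat, err = adaptive_stepsize_lms(x, d, order=4)
```

The clipping of mu is a practical safeguard, not part of the analytical model in the abstract.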
A new algorithm is presented for detecting graph monomorphisms for a pair of graphs. The algorithm performs a tree search over projections of the product graph, called the net, of the two graphs. It uses the minimum number of neighbors in the projected graphs to prune infeasible subtrees. Compared with the algorithm of Deo and coworkers, it is more efficient in storage utilization and average execution time, and it avoids the ambiguity that arises in Deo et al.'s method when cyclic graphs are matched. Applications to attributed graph monomorphisms are included.
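A minimal backtracking sketch of the underlying search problem (not the authors' net-based algorithm; plain adjacency dicts and a simple degree-based pruning rule stand in for the projection machinery):

```python
def graph_monomorphism(g_adj, h_adj):
    """Find an injective map f: V(G) -> V(H) carrying every edge of G
    onto an edge of H, or return None.  Tree search with a simple
    neighbor-count pruning rule: a G-vertex may only map to an
    H-vertex of equal or larger degree."""
    g_nodes = sorted(g_adj, key=lambda u: -len(g_adj[u]))  # hardest first

    def extend(mapping, used):
        if len(mapping) == len(g_nodes):
            return dict(mapping)
        u = g_nodes[len(mapping)]
        for v in h_adj:
            if v in used or len(h_adj[v]) < len(g_adj[u]):
                continue  # injectivity / neighbor-count pruning
            if any(w in mapping and mapping[w] not in h_adj[v]
                   for w in g_adj[u]):
                continue  # a mapped G-neighbor of u is not H-adjacent to v
            mapping[u] = v
            used.add(v)
            found = extend(mapping, used)
            if found:
                return found
            del mapping[u]
            used.discard(v)
        return None

    return extend({}, set())

# usage: a triangle embeds in K4 but not in a 3-vertex path
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
k4 = {a: {b for b in range(4) if b != a} for a in range(4)}
path = {0: {1}, 1: {0, 2}, 2: {1}}
```

The degree check plays the role the abstract ascribes to the minimum-neighbor count: it discards infeasible subtrees before they are expanded.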
Steady-state analysis of soft-switching converters is discussed in this paper. Finding the steady-state solution of soft-switching converters by means of start-up transient simulation may involve onerous computations and convergence failure because of the mix of fast and slow natural frequencies, determined respectively by the soft-switching Lr-Cr cell elements and the bulky L-C filter elements. The numerical method proposed in this paper shows high reliability and fast convergence thanks to the adoption of an interval-analysis-based technique for the detection of commutations and a compensation-theorem-based technique for their analysis. Examples of steady-state analysis of two dc-dc converters, an inverter, and a power factor corrector are presented to highlight the good performance of the simulation algorithm.
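The contrast with start-up simulation can be illustrated on a linear toy example (this is not the paper's interval-analysis/compensation-theorem method): for a periodically driven linear circuit, the one-period Poincaré map is affine, so the steady state follows from one small linear solve instead of simulating many periods of a lightly damped transient. The component values below are arbitrary illustrative choices.

```python
import numpy as np

R, L, C, T = 0.05, 1e-4, 1e-6, 1e-4   # lightly damped -> slow transient
A = np.array([[-R / L, -1.0 / L],
              [1.0 / C, 0.0]])        # state x = [inductor current, cap voltage]
b = np.array([1.0 / L, 0.0])

def step(x, u, dt):
    """One backward-Euler step of x' = A x + b u."""
    return np.linalg.solve(np.eye(2) - dt * A, x + dt * b * u)

def simulate_period(x0, n=2000):
    """Integrate one period of a +/-1 V square-wave drive."""
    x = x0.copy()
    for k in range(n):
        u = 1.0 if k < n // 2 else -1.0
        x = step(x, u, T / n)
    return x

# The one-period map is affine, x(T) = M x(0) + c.  Recover M and c
# from three simulations, then solve x0 = M x0 + c directly for the
# periodic steady state -- no long start-up transient needed.
c = simulate_period(np.zeros(2))
M = np.column_stack([simulate_period(e) - c for e in np.eye(2)])
x_ss = np.linalg.solve(np.eye(2) - M, c)
```

Real soft-switching converters are only piecewise linear, which is exactly why the paper needs its commutation-detection and commutation-analysis machinery on top of an idea like this one.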
Generalized TLM formulations based on modified grids of 2-D shunt nodes or 3-D expanded nodes are proposed. The generalization permits flexible control of the numerical stability margin (and thus of the time step for a particular discretization) and introduces enhanced models for curved boundaries. Formal equivalence between the generalized TLM and FDTD algorithms based on the same grids is proved. Simple rules for transforming circuit models (from TLM to FDTD and vice versa) and for their equivalent excitation are given. It is demonstrated that applying the generalized algorithm reduces the computer resources required for the TLM analysis of a circular waveguide by an order of magnitude.
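For reference, a sketch of the plain 2-D shunt-node TLM update that the generalized formulation extends (the stub loading that tunes the stability margin and the curved-boundary models are omitted; the port layout, periodic boundaries, and mesh size are illustrative):

```python
import numpy as np

# Scattering matrix of the plain 2-D TLM shunt node.
S = 0.5 * (np.ones((4, 4)) - 2.0 * np.eye(4))

def tlm_step(v):
    """One scatter-connect cycle on an H x W x 4 mesh of shunt nodes.
    Ports: 0 = north, 1 = east, 2 = south, 3 = west; periodic edges."""
    r = v @ S.T                                      # scatter at each node
    out = np.empty_like(r)
    out[:, :, 2] = np.roll(r[:, :, 0], -1, axis=0)   # north pulse -> node above
    out[:, :, 0] = np.roll(r[:, :, 2], 1, axis=0)    # south pulse -> node below
    out[:, :, 3] = np.roll(r[:, :, 1], 1, axis=1)    # east pulse -> node right
    out[:, :, 1] = np.roll(r[:, :, 3], -1, axis=1)   # west pulse -> node left
    return out

# S is orthogonal and the connect step is a permutation, so the total
# pulse energy of the mesh is conserved exactly -- the passivity that
# any generalization must preserve.
rng = np.random.default_rng(0)
v = rng.standard_normal((8, 8, 4))
e0 = float(np.sum(v**2))
for _ in range(10):
    v = tlm_step(v)
```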
The methodology presented in this paper exploits fine- and coarse-grained parallelism for the automated design of digital architectures for multimedia applications. Specific focus is placed on iterative algorithms, as demonstrated through a case study of the Chambolle algorithm for optical flow estimation.
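As a hedged illustration of the iterative structure involved, here is Chambolle's dual projection iteration in its total-variation denoising form (the optical-flow estimation in the case study wraps essentially the same fixed-point iteration; parameter values are illustrative):

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[:, 0] = px[:, 0]
    d[:, 1:] = px[:, 1:] - px[:, :-1]
    d[0, :] += py[0, :]
    d[1:, :] += py[1:, :] - py[:-1, :]
    return d

def chambolle_tv(g, lam=0.2, tau=0.125, n_iter=100):
    """Chambolle's dual projection iteration for the ROF model
    min_u ||u - g||^2 / (2*lam) + TV(u); tau <= 1/8 guarantees
    convergence."""
    px = np.zeros_like(g)
    py = np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g / lam)
        scale = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / scale
        py = (py + tau * gy) / scale
    return g - lam * div(px, py)

# usage: denoise a noisy vertical step edge
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.2 * rng.standard_normal((32, 32))
denoised = chambolle_tv(noisy)
```

Each iteration is a stencil computation over the image, which is what makes the algorithm a natural candidate for the fine- and coarse-grained hardware parallelism the paper targets.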
This paper presents an algorithm for training vector quantizers with an improved version of the Neural Gas model, and its implementation in analog circuitry. Theoretical properties of the algorithm are proven that clarify the performance of the method in terms of quantization quality, and motivate design aspects of the hardware implementation. The architecture for vector quantization training includes two chips, one for Euclidean distance computation, the other for programmable sorting of codevectors. Experimental results obtained in a real application (image coding) support both the algorithm's effectiveness and the hardware performance, which can speed up the training process by up to two orders of magnitude.
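A sketch of the standard rank-based Neural Gas update (the paper's improved variant and its analog-hardware mapping are not reproduced here; all schedule parameters below are illustrative):

```python
import numpy as np

def neural_gas_train(data, n_codes=8, epochs=20, eps0=0.5, eps_f=0.01,
                     lam0=4.0, lam_f=0.5, seed=0):
    """Rank-based Neural Gas training: for each sample, every codevector
    moves toward it with a weight that decays exponentially in the
    codevector's distance rank; step size and neighborhood range are
    annealed over the run."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    t, t_max = 0, epochs * len(data)
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            frac = t / t_max
            eps = eps0 * (eps_f / eps0) ** frac    # annealed step size
            lam = lam0 * (lam_f / lam0) ** frac    # annealed neighborhood
            ranks = np.argsort(np.argsort(np.sum((w - x) ** 2, axis=1)))
            w += (eps * np.exp(-ranks / lam))[:, None] * (x - w)
            t += 1
    return w

# usage: quantize four well-separated Gaussian clusters
rng = np.random.default_rng(1)
centers = np.array([[2, 2], [2, -2], [-2, 2], [-2, -2]], dtype=float)
data = np.vstack([c + 0.1 * rng.standard_normal((100, 2)) for c in centers])
codes = neural_gas_train(data, n_codes=8)
```

The two inner kernels, distance computation and rank sorting, are exactly the operations the paper assigns to its two chips.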
Processing high-resolution digital elevation models (DEMs) can be tedious due to the large size of the data. In uncertainty-aware drainage basin delineation, we apply a Monte Carlo (MC) simulation that further increases the processing demand by two to three orders of magnitude. Utilizing graphics processing units (GPUs) can speed up the programs, but their on-chip random access memory (RAM) limits the size of the DEMs that can be processed efficiently on one GPU. Here, we present a parallel uncertainty-aware drainage basin delineation algorithm and a multinode GPU compute unified device architecture (CUDA) implementation along with scalability benchmarking. All of the computations are run on the GPUs, and the parallel processes communicate using a message-passing interface (MPI) via the host central processing units (CPUs). The implementation can utilize any number of nodes, with one or many GPUs per node. The performance and scalability of the program have been tested with a 10-m DEM covering 390,905 km², i.e., the entire area of Finland. Performing the drainage basin delineation for the DEM with different numbers of GPUs shows a nearly linear strong scalability.
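A toy serial sketch of the underlying computation (the paper's contribution is the multinode MPI/CUDA parallelization, which is not reproduced here; the noise model and the tiny tilted-plane DEM are illustrative):

```python
import numpy as np

def d8_downstream(dem):
    """Flat index of each cell's steepest-descent D8 neighbor (self if
    the cell is a pit).  For brevity the sqrt(2) diagonal-distance
    weighting of true D8 slopes is omitted."""
    h, w = dem.shape
    down = np.arange(h * w).reshape(h, w)
    for i in range(h):
        for j in range(w):
            best, arg = dem[i, j], (i, j)
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and dem[ni, nj] < best:
                        best, arg = dem[ni, nj], (ni, nj)
            down[i, j] = arg[0] * w + arg[1]
    return down

def basin_probability(dem, outlet, sigma=0.05, n_mc=100, seed=0):
    """Monte Carlo probability that each cell's flow path passes
    through `outlet` (a flat cell index), under iid Gaussian elevation
    noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(dem.size)
    for _ in range(n_mc):
        down = d8_downstream(dem + sigma * rng.standard_normal(dem.shape))
        for c in range(dem.size):
            idx = c
            while True:
                if idx == outlet:
                    counts[c] += 1
                    break
                nxt = down.flat[idx]
                if nxt == idx:     # reached a pit other than the outlet
                    break
                idx = nxt
    return (counts / n_mc).reshape(dem.shape)

# usage: a plane tilted toward the corner cell drains there with
# probability ~1 when the slope dominates the noise
ii, jj = np.mgrid[0:8, 0:8]
dem = (ii + jj).astype(float)
prob = basin_probability(dem, outlet=0)
```

Every MC realization is independent, which is what makes the workload embarrassingly parallel across GPUs and nodes.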
Simulation data are presented for eleven benchmark circuits to show how test pattern correlation in a scan-path design adversely affects delay fault coverage, and to demonstrate that most delay faults left undetected because of test pattern correlation lie close to latch outputs. Topology-based latch correlation measures are introduced and used by a companion latch arrangement algorithm to guide the placement of latches in a scan path, with the objective of minimizing the effect of correlation and maximizing delay fault coverage. Simulation results on the benchmark circuits indicate that the scan path found by the algorithm achieves clearly better delay fault coverage than a scan path with no deliberate arrangement, and that the algorithm is most effective in covering delay faults located nearest the latch outputs. The approach has an advantage over other arrangement schemes in that it is simple to implement and does not require significant computation time, even for large circuits.
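The arrangement idea can be sketched with a simple greedy heuristic (an illustrative stand-in, not the paper's companion algorithm; the correlation matrix is hypothetical):

```python
import numpy as np

def arrange_scan_path(corr):
    """Greedy latch arrangement: starting from latch 0, repeatedly
    append the unplaced latch least correlated with the tail of the
    chain, so consecutive scan cells carry weakly correlated patterns."""
    n = len(corr)
    order, remaining = [0], set(range(1, n))
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: corr[last][j])
        order.append(nxt)
        remaining.remove(nxt)
    return order

def chain_correlation(corr, order):
    """Total correlation across adjacent scan cells of a given order."""
    return sum(corr[a][b] for a, b in zip(order, order[1:]))

# hypothetical symmetric topology-based correlation matrix for 4 latches
corr = np.array([[0.0, 0.9, 0.1, 0.2],
                 [0.9, 0.0, 0.3, 0.1],
                 [0.1, 0.3, 0.0, 0.8],
                 [0.2, 0.1, 0.8, 0.0]])
order = arrange_scan_path(corr)
```

Separating the highly correlated pairs (0,1) and (2,3) in the chain is the effect the paper's arrangement algorithm is designed to achieve.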
For control system analysis and design and for high-speed real-time computation, we highlight the computational formulation, properties, and applications of the delta operator model of dynamic physical systems. Doubling algorithms for reliable computation of the transfer function matrix of the delta operator model are formulated and illustrated. In the fast-sampling limit, the delta operator model tends to the continuous-time dynamic system model. This intrinsic property of the delta operator model unifies continuous- and discrete-time control engineering and paves the way for the design of robust computational algorithms.
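The fast-sampling property is easy to check numerically (a generic illustration; the matrix A is an arbitrary stable example):

```python
import numpy as np

def expm_series(M, terms=40):
    """Matrix exponential by truncated Taylor series (adequate for the
    small, well-scaled matrices below)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# For x' = A x sampled with period Delta, the exact shift-form model is
# x[k+1] = expm(A*Delta) x[k], so with delta = (q - 1)/Delta the
# delta-operator system matrix is A_delta = (expm(A*Delta) - I)/Delta.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
errs = []
for Delta in (1e-1, 1e-3, 1e-5):
    A_delta = (expm_series(A * Delta) - np.eye(2)) / Delta
    errs.append(float(np.max(np.abs(A_delta - A))))
# errs shrinks roughly linearly with Delta: the delta model recovers
# the continuous-time model in the fast-sampling limit.
```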