Micro-fabricated devices are increasingly being used to interface with biological samples. Protein patterning is an essential component of this interface, allowing the device to interact specifically with target prote...
A method for detecting and segmenting periodic motion is presented. We exploit periodicity as a cue and detect periodic motion in complex scenes where common methods for motion segmentation are likely to fail. We note...
Dynamic Power Management (DPM) entails employing strategies that yield an acceptable trade-off between power/energy usage and the associated performance penalties. These include heuristic shutdown policies, prediction-based shutdown policies, multiple voltage scaling, and policy optimization based on stochastic modeling. Architectural techniques for power savings, on the other hand, include application-specific techniques for multimedia hardware systems and generic techniques such as clock gating and online profiling-based monitoring and control. Other architectural paradigms, such as networks-on-chip (NoCs), target power optimization as well. Protocol-level power optimization methods include generic techniques employed in wireless standard protocols as well as techniques specific to multimedia traffic. DPM strategies grow increasingly sophisticated as the power manageability of hardware components improves; a positive feedback loop is at work here: power management techniques demonstrate the potential for power savings, which pushes hardware developers to support more advanced (finer-grained and lower-overhead) power management modes. In this tutorial, we provide an overview of three main topics in three segments, namely architecture-level, system-level, and protocol-level techniques for power minimization and management, and how they influence each other. We do not, however, cover low-power VLSI techniques.
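The fixed-timeout shutdown policy is the simplest of the heuristic strategies the abstract mentions: the device sleeps only after remaining idle for a timeout, trading wake-up overhead against sleep-state savings. A minimal sketch, with entirely illustrative power and energy numbers (not taken from the tutorial):

```python
# Hypothetical fixed-timeout shutdown policy. All constants are
# illustrative, not measurements from any real device.

P_ACTIVE = 1.0    # watts while awake (busy or idle)
P_SLEEP = 0.1     # watts while shut down
E_WAKEUP = 0.5    # joules of overhead to wake the device back up

def energy(idle_periods, timeout):
    """Energy consumed over a list of idle-period lengths (seconds)
    when the device shuts down after `timeout` seconds of idleness."""
    total = 0.0
    for t in idle_periods:
        if t <= timeout:
            total += P_ACTIVE * t          # request arrives first; never sleeps
        else:
            total += (P_ACTIVE * timeout   # wait out the timeout awake
                      + P_SLEEP * (t - timeout)
                      + E_WAKEUP)          # pay the wake-up cost
    return total

periods = [0.2, 5.0, 0.1, 12.0, 3.0]
print(energy(periods, timeout=1.0))   # aggressive policy: 6.5 J
print(energy(periods, timeout=4.0))   # conservative policy: 13.2 J
```

The comparison shows why the timeout choice matters: a short timeout saves energy on long idle periods but pays the wake-up cost more often, which is exactly the trade-off that prediction-based and stochastic policies try to optimize.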
Consider the problem of joint parameter estimation and prediction in a Markov random field: i.e., the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working in the computation-limited setting, we analyze a joint method in which the same convex variational relaxation is used to construct an M-estimator for fitting parameters, and to perform approximate marginalization for the prediction step. The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator (i.e., an estimator that returns the "wrong" model even in the infinite data limit) is provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique. En route to this result, we analyze the asymptotic properties of M-estimators based on convex variational relaxations, and establish a Lipschitz stability property that holds for a broad class of variational methods. We show that joint estimation/prediction based on the reweighted sum-product algorithm substantially outperforms a commonly used heuristic based on ordinary sum-product.
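As a toy illustration of the marginalization step in the prediction pipeline, here is plain (not reweighted) sum-product on a three-node binary chain; the coupling strength is made up. On a tree, sum-product marginals are exact, so they match brute-force enumeration:

```python
from math import exp
import itertools

theta = 0.8  # hypothetical coupling strength, not from the paper

def edge(a, b):
    """Pairwise potential favoring agreement between neighbors."""
    return exp(theta if a == b else -theta)

# Brute-force marginal of node 0 for p(x) ∝ edge(x0,x1) * edge(x1,x2)
Z = 0.0
m0 = [0.0, 0.0]
for x in itertools.product([0, 1], repeat=3):
    w = edge(x[0], x[1]) * edge(x[1], x[2])
    Z += w
    m0[x[0]] += w
brute = [v / Z for v in m0]

# Sum-product: pass messages inward along the chain 2 -> 1 -> 0
msg21 = [sum(edge(x1, x2) for x2 in (0, 1)) for x1 in (0, 1)]
msg10 = [sum(edge(x0, x1) * msg21[x1] for x1 in (0, 1)) for x0 in (0, 1)]
s = sum(msg10)
bp = [v / s for v in msg10]

print(brute, bp)  # identical on a tree
```

The paper's point concerns graphs with cycles, where such message passing is only approximate; the reweighted variant it analyzes modifies the messages with edge weights to obtain the convexity and stability properties used in the joint estimation/prediction analysis.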
The computation required for Gaussian process regression with n training examples is O(n³) during training and O(n) for each prediction. This makes Gaussian process regression too slow for large datasets. In this paper, we present a fast approximation method, based on kd-trees, that significantly reduces both the prediction and the training times of Gaussian process regression.
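To make the costs concrete, here is naive (exact, not kd-tree-accelerated) GP regression on a made-up 1-D toy dataset: training solves an n×n linear system, done here by plain O(n³) Gaussian elimination, and each prediction is an O(n) dot product with the precomputed weights. The kernel, data, and noise level are all illustrative:

```python
from math import exp

def k(a, b, ell=1.0):
    """Squared-exponential kernel (a common default choice)."""
    return exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    """O(n^3) Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

X = [0.0, 1.0, 2.0, 3.0]          # toy inputs
y = [0.0, 0.8, 0.9, 0.1]          # toy targets
noise = 1e-6
K = [[k(a, b) + (noise if i == j else 0.0) for j, b in enumerate(X)]
     for i, a in enumerate(X)]
alpha = solve(K, y)               # O(n^3) "training"

def predict(x_star):              # O(n) per prediction
    return sum(alpha[i] * k(x_star, X[i]) for i in range(len(X)))

print(predict(1.0))   # close to the training target 0.8
```

At n = 4 this is instant, but the cubic solve is what makes exact GP regression infeasible at large n, which is the bottleneck the paper's kd-tree approximation attacks.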
This paper presents a novel and effective Bayesian belief network that integrates object segmentation and recognition. The network consists of three latent variables that represent the local features, the recognition ...
Existing ASIPs (application-specific instruction-set processors) and compiler-based co-processor synthesis approaches meet the increasing performance requirements of embedded applications while consuming less power than high-performance gigahertz microprocessors. However, existing approaches place restrictions on software languages and compilers. Binary-level co-processor generation has previously been proposed as a complementary approach that reduces the impact of tool restrictions, supporting all languages and compilers at the cost of some decrease in performance. In a binary-level approach, decompilation recovers much of the high-level information, such as loops and arrays, needed for effective synthesis, and in many cases yields hardware similar to that of a compiler-based approach. However, previous binary-level approaches have not considered the effects of software compiler optimizations on the resulting hardware. In this paper, we introduce two new decompilation techniques, strength promotion and loop rerolling, and show that they are necessary to synthesize an efficient custom hardware co-processor from a binary in the presence of software compiler optimizations. In addition, unlike previous approaches, we demonstrate the robustness of binary-level co-processor generation by achieving order-of-magnitude speedups for binaries generated for three different instruction sets (MIPS, ARM, and MicroBlaze) using two different levels of compiler optimization.
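A minimal sketch of the idea behind strength promotion: a software compiler strength-reduces a multiply such as `x * 8` into a shift `x << 3`, which is cheaper on a processor but obscures the multiplication from hardware synthesis; promotion rewrites the shift back into a multiply so the synthesis tool can decide the best hardware form itself. The tiny tuple-based IR below is hypothetical, not the paper's actual representation:

```python
# Hypothetical IR: (opcode, dest, src, constant) tuples.

def promote(instrs):
    """Rewrite shift-left-by-constant back into multiply-by-power-of-two."""
    out = []
    for op, dst, src, const in instrs:
        if op == "shl":                      # x << c  ==  x * 2**c
            out.append(("mul", dst, src, 2 ** const))
        else:
            out.append((op, dst, src, const))
    return out

binary_ir = [("shl", "r1", "r0", 3),         # compiler-optimized form
             ("add", "r2", "r1", 5)]
print(promote(binary_ir))
# [('mul', 'r1', 'r0', 8), ('add', 'r2', 'r1', 5)]
```

Loop rerolling is the analogous inverse for loop unrolling: recognizing that a run of near-identical instruction sequences is one loop body repeated, and collapsing it back so the synthesized datapath stays compact.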
Aliasing artifacts in images are visually very disturbing. Therefore, most imaging devices apply a low-pass filter before sampling. This removes all aliasing from the image, but it also creates a blurred image. Actually, all the image information above half the sampling frequency is removed. In this paper, we present a new method for the reconstruction of a high resolution image from a set of highly undersampled and thus aliased images. We use the information in the entire frequency spectrum, including the aliased part, to create a sharp, high resolution image. The unknown relative shifts between the images are computed using a subspace projection approach. We show that the projection can be decomposed into multiple projections onto smaller subspaces. This allows for a considerable reduction of the overall computational complexity of the algorithm. A high resolution image can then be reconstructed from the registered low resolution images. Simulation results show the validity of our algorithm.
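A drastically simplified 1-D illustration of the reconstruction step: two undersampled signals, offset by one high-resolution sample, interleave into a signal at twice the resolution. The paper's subspace-projection registration is replaced here by a known shift, and the signal values are made up:

```python
# Toy 1-D super-resolution: interleave two registered low-resolution
# frames. The shift is assumed known here; in the paper it is estimated
# by subspace projection.

hi = [3, 1, 4, 1, 5, 9, 2, 6]      # "scene" at high resolution
low_a = hi[0::2]                   # frame 1: samples 0, 2, 4, 6 (aliased)
low_b = hi[1::2]                   # frame 2: shifted by one HR pixel

recon = [0] * len(hi)
recon[0::2] = low_a                # interleave the registered frames
recon[1::2] = low_b
print(recon == hi)                 # True: the HR signal is recovered
```

The hard part in practice is exactly what this sketch assumes away: estimating the sub-pixel shifts from the aliased frames themselves, which is where the decomposed subspace projections reduce the computational cost.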
ISBN (print): 0262201526
We present a new method for calculating approximate marginals for probability distributions defined by graphs with cycles, based on a Gaussian entropy bound combined with a semidefinite outer bound on the marginal polytope. This combination leads to a log-determinant maximization problem that can be solved by efficient interior point methods [8]. As with the Bethe approximation and its generalizations [12], the optimizing arguments of this problem can be taken as approximations to the exact marginals. In contrast to Bethe/Kikuchi approaches, our variational problem is strictly convex and so has a unique global optimum. An additional desirable feature is that the value of the optimal solution is guaranteed to provide an upper bound on the log partition function. In experimental trials, the performance of the log-determinant relaxation is comparable to or better than the sum-product algorithm, and by a substantial margin for certain problem classes. Finally, the zero-temperature limit of our log-determinant relaxation recovers a class of well-known semidefinite relaxations for integer programming [e.g., 3].
We describe a system for pronoun interpretation that is self-trained from raw data, that is, using no annotated training data. The result outperforms a Hobbsian baseline algorithm and is only marginally inferior to an...