A high-level partitioning methodology has been reported which explores the space of equivalent discrete Fourier transform formulations, achieving significant improvements over previously published results. In this article, we discuss the extension of this formulation-exploration strategy to partitioning the discrete cosine transform (DCT) onto distributed hardware architectures, e.g. multi-FPGA platforms. We study several regular DCT formulations and consider their potential for distributed implementation. By analyzing the DCT computational structure, we derive a new Cooley-Tukey-like formulation which allows the DCT to be factorized into arbitrarily sized blocks while preserving structural regularity. Experiments measuring the partition quality of the previous and new DCT formulations evidence the need for formulation exploration as part of the partition optimization process.
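As a point of reference, the transform being factorized is the DCT-II. A naive Python sketch of its defining sum follows; this is only the textbook formula, not the paper's partitioned Cooley-Tukey-like formulation:

```python
import math

def dct2(x):
    """Naive, unnormalized DCT-II by direct evaluation of its defining sum.
    X[k] = sum_n x[n] * cos(pi * (2n + 1) * k / (2N)).
    The factorizations studied in the paper rewrite this O(N^2) sum
    into regular blocks; this reference version is useful for checking them.
    """
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
            for k in range(N)]
```

For a constant input, all energy lands in the DC coefficient, which makes this a convenient sanity check for any factorized variant.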
We propose a feature-based multi-target tracking algorithm which can track multiple targets in real time with a simple but efficient velocity-segmentation method. The optical-flow velocity distribution of the feature points detected on a moving target, or on the background when the camera has ego-motion, is assumed to follow a Gaussian-like profile. By examining the velocity distribution profile at each frame, we can separate the background feature points from those of moving targets without prior knowledge of the number or texture of the moving objects. The optical-flow velocity of a feature point in each image frame is computed by the iterative Lucas-Kanade algorithm. Feature points are first divided into groups with similar velocities, and then into sub-groups according to their proximity in the image frame. The feature-point group with the largest spatial span is identified as the background; when the background is static, its velocity gives the camera velocity. Multiple targets are identified based on their velocities and proximity within a frame. With a pyramidal sampling scheme that reduces the frame to one-sixteenth of its original size, and the iterative Lucas-Kanade algorithm computing optical flow in a multi-resolution manner, we are able to track multiple objects in real time with a moving hand-held camera.
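The grouping and background-selection steps can be sketched as a pure-Python toy. The `vel_tol` threshold and the speed-only clustering are simplifying assumptions for illustration; the paper examines the full velocity distribution profile rather than a scalar gap test:

```python
import math

def segment_by_velocity(points, velocities, vel_tol=1.0):
    """Group feature points whose optical-flow speeds are similar.
    Simplification: cluster on speed magnitude only, merging points whose
    sorted speeds differ by at most vel_tol. Returns lists of point indices.
    """
    speeds = [math.hypot(vx, vy) for vx, vy in velocities]
    order = sorted(range(len(points)), key=lambda i: speeds[i])
    groups, current = [], [order[0]]
    for i in order[1:]:
        if speeds[i] - speeds[current[-1]] <= vel_tol:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

def background_group(points, groups):
    """Pick the group with the largest spatial span as the background,
    mirroring the largest-span heuristic described in the abstract."""
    def span(g):
        xs = [points[i][0] for i in g]
        ys = [points[i][1] for i in g]
        return max(max(xs) - min(xs), max(ys) - min(ys))
    return max(groups, key=span)
```

A widely spread set of slow points (background under camera ego-motion) is then separated from a compact fast cluster (a moving target) purely by velocity and span.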
In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple, yet effective, network architecture which allows multiple cameras to be connected to the network and to synchronize their communication schedules. Images are compressed by more than 90% at each node on a local DSP coprocessor, reducing node energy to one-eighth of that required to stream uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second.
In this paper, we present hardware decompression accelerators that bridge the gap between high-speed FPGA configuration interfaces and slow configuration memories. We discuss different compression algorithms suitable for decompression on FPGAs as well as on CPLDs with respect to the achievable compression ratio, throughput, and hardware overhead. This leads to several decompressor implementations, one of which can decompress at data rates of up to 400 megabytes per second while requiring only slightly more than a hundred look-up tables. Furthermore, we present a sophisticated configuration bitstream benchmark.
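To illustrate the kind of lightweight scheme that maps well to FPGA/CPLD logic, here is a software model of a simple run-length decompressor. It is a generic sketch under the assumption of a (count, byte) pair encoding, not one of the paper's actual implementations:

```python
def rle_decompress(stream):
    """Software model of a minimal run-length decompressor: the input is a
    sequence of (count, byte) pairs, and each pair expands to `count`
    repetitions of `byte`. In hardware this is a counter plus a register,
    which is why such schemes cost so few look-up tables."""
    out = bytearray()
    it = iter(stream)
    for count in it:
        out.extend([next(it)] * count)  # emit the run
    return bytes(out)
```

Configuration bitstreams contain long zero runs, which is what makes even this trivial scheme pay off before more elaborate dictionary-based methods are considered.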
This work proposes an algorithm which combines an estimation of distribution algorithm (EDA) with a chromosome-compression scheme to solve large-scale problems. The search-space reduction resulting from chromosome compression enables the proposed algorithm to solve one-million-bit problems and a one-billion-bit problem. Arithmetic coding represents a compressed binary string with two real numbers. Using this representation, a model of highly fit individuals can be constructed and used to evolve the solution in the manner of an EDA. The proposed algorithm is applied to large-scale problems: one-million-bit OneMax, royal road, and trap functions. It is also applied to a one-billion-bit OneMax problem. The experimental results show that the proposed algorithm solves the million-bit OneMax problem in 4 seconds and the billion-bit OneMax problem in 92 minutes on a normal PC-class computer.
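The two-real-number representation mentioned above can be sketched with a fixed-probability arithmetic coder. The parameter `p1` (the assumed probability of a '1' bit) and both function names are illustrative, not taken from the paper:

```python
def ac_compress(bits, p1):
    """Arithmetic-code a binary string into an interval [low, high).
    The '0' symbol owns the fraction (1 - p1) of the current interval,
    the '1' symbol the remaining p1; the two returned reals represent
    the whole string, which is the compressed chromosome form."""
    low, high = 0.0, 1.0
    for b in bits:
        mid = low + (high - low) * (1.0 - p1)  # boundary between '0' and '1'
        if b == '0':
            high = mid
        else:
            low = mid
    return low, high

def ac_decompress(low, high, p1, n):
    """Recover n bits from the interval by replaying the subdivision."""
    x = (low + high) / 2.0
    lo, hi, out = 0.0, 1.0, []
    for _ in range(n):
        mid = lo + (hi - lo) * (1.0 - p1)
        if x < mid:
            out.append('0')
            hi = mid
        else:
            out.append('1')
            lo = mid
    return ''.join(out)
```

Because the model of highly fit individuals is built over these interval endpoints rather than over raw bits, a billion-bit chromosome never has to be materialized during search; floating-point precision limits restrict this toy version to short strings.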
Satisfying quality of service (QoS) is one of the main goals of fourth-generation (4G) wireless, which will tightly integrate a multitude of heterogeneous networks, including second- and third-generation cellular networks, wireless local area networks, Bluetooth, etc. In this paper, we develop an information-theoretic framework for optimal location updating and paging in wireless 4G networks, which tightly integrate many different access networks. We then adopt the LZW compression algorithm as the basis of our location-management schemes. Simulation results demonstrate that our proposed schemes decrease the signaling cost.
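A minimal sketch of LZW applied to a movement history follows, with cell IDs modeled as single characters (an assumption made purely for illustration; the paper's scheme operates on location-update sequences):

```python
def lzw_encode(seq):
    """LZW-encode a sequence of cell IDs (here one character per cell).
    The dictionary grows with each new phrase, so repeated movement
    patterns (commutes between the same cells) compress into ever-longer
    phrases, which is what reduces location-update signaling."""
    dictionary = {ch: i for i, ch in enumerate(sorted(set(seq)))}
    w, out = seq[0], []
    for ch in seq[1:]:
        wc = w + ch
        if wc in dictionary:
            w = wc                       # extend the current phrase
        else:
            out.append(dictionary[w])    # emit the longest known phrase
            dictionary[wc] = len(dictionary)
            w = ch
    out.append(dictionary[w])
    return out
```

On a periodic path such as 'ABABABA' the output shrinks below the input length, mirroring how a regular user trajectory needs fewer and fewer updates.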
In this paper we propose a new model of vector quantizers which combines two classical quantizer models. In particular, the proposed model, denoted hybrid, combines the compandor model and the general Lloyd-Max model of vector quantizers. Performance analyses of these models are carried out for Gaussian input signals. It is demonstrated that, for a fixed number of quantization cells, vector quantizers designed according to the hybrid model provide performance very close to the optimum corresponding to Lloyd-Max quantizers.
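For context, the Lloyd-Max component can be sketched in one dimension. This toy alternates nearest-level assignment with centroid updates on a sample set; it is a generic scalar illustration, not the paper's vector design procedure:

```python
def lloyd_max(samples, levels, iters=50):
    """One-dimensional Lloyd-Max quantizer design on empirical samples:
    repeatedly assign each sample to its nearest representation level,
    then move each level to the centroid of its cell. Converges to a
    local minimum of the mean squared quantization error."""
    samples = sorted(samples)
    n = len(samples)
    # spread the initial levels over the sample quantiles
    reps = [samples[(2 * i + 1) * n // (2 * levels)] for i in range(levels)]
    for _ in range(iters):
        cells = [[] for _ in range(levels)]
        for s in samples:
            j = min(range(levels), key=lambda k: abs(s - reps[k]))
            cells[j].append(s)
        reps = [sum(c) / len(c) if c else reps[j]
                for j, c in enumerate(cells)]
    return reps
```

The compandor half of the hybrid model replaces this iterative search with a fixed compressor/expander nonlinearity, trading a little optimality for a closed-form design.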
Motion-compensated temporal filtering (MCTF) is an essential ingredient of recently developed wavelet-based scalable video coding schemes. The lifting implementation of these decompositions is a versatile tool for spatio-temporal optimizations, and several improvements have already been proposed in this framework. Several coding parameters affect the performance of the scalable video coding scheme, such as the number of temporal levels and the interpolation filter used for sub-pixel accuracy. We show that the influence of these parameters depends on the video content. We therefore present an adaptive way of choosing the values of these parameters based on the video content. Experimental results show that the proposed method not only significantly improves performance but also reduces the complexity of the coding procedure.
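A minimal model of one temporal lifting level is the Haar predict/update pair without motion compensation; MCTF additionally motion-compensates both operators, so this sketch only shows the lifting skeleton:

```python
def haar_lift(frames):
    """One temporal Haar lifting level over a (toy, scalar-per-frame)
    sequence: the predict step forms high-pass differences between frame
    pairs, the update step forms low-pass averages. Applying this
    recursively to the low-pass output yields further temporal levels."""
    lows, highs = [], []
    for a, b in zip(frames[::2], frames[1::2]):
        h = b - a          # predict: high-pass = odd frame minus even frame
        l = a + h / 2.0    # update:  low-pass  = (a + b) / 2
        lows.append(l)
        highs.append(h)
    return lows, highs
```

The number of times this level is reapplied to `lows` is exactly the "number of temporal levels" parameter whose content-dependent choice the paper addresses.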
In this paper we explore H.264/AVC operating in intraframe mode to compress mixed images, i.e. images composed of text, graphics and pictures. Even though mixed-content (compound) documents usually require multiple compressors, we apply a single compressor to both text and pictures. To do so, distortion is weighted differently in text and picture regions. Our approach uses a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e. we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of this segmentation-driven quantizer adaptation applied to compressing documents. Our reconstructed images have better text sharpness than straight unadapted coding, with negligible visual loss in pictorial regions. Our results also highlight that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
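The macroblock-level adaptation can be sketched as follows; the QP offsets are illustrative assumptions, not values from the paper:

```python
def adapt_qp(mb_classes, base_qp, text_delta=-6, picture_delta=2):
    """Assign a quantization parameter per macroblock from a segmentation
    map: text blocks get a lower QP (finer quantization, sharper edges),
    pictorial blocks a slightly higher one, diverting bits from pictures
    to text. QPs are clamped to the H.264 range [0, 51]."""
    qps = []
    for mb_class in mb_classes:
        delta = text_delta if mb_class == 'text' else picture_delta
        qps.append(max(0, min(51, base_qp + delta)))
    return qps
```

Because the standard lets QP vary per macroblock, this adaptation needs no change to the bitstream syntax or the decoder.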
Program traces are commonly used for purposes such as profiling, processor simulation, and program slicing. Uncompressed, these traces are often too large to store on disk. Although existing trace compression algorithms achieve high compression rates, they sacrifice the accessibility of uncompressed traces: a typical compressed trace must be traversed linearly to reach a desired position in the stream. This paper describes seekable compressed traces that allow arbitrary positioning in the compressed data stream. Furthermore, we enhance existing value-prediction-based techniques to achieve higher compression rates, particularly for difficult-to-compress traces. Our base algorithm achieves a harmonic-mean compression rate on SPEC2000 memory-address traces that is 3.47 times better than that of existing methods. We introduce the concept of seekpoints, which enable fast seeking to positions evenly distributed throughout a compressed trace; adding seekpoints enables rapid sampling and backwards traversal of compressed traces. At a granularity of one seekpoint every 10 M instructions, seekpoints increase trace sizes by an average factor of only 2.65.
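The seekpoint idea can be modeled by chunked compression with an index of independently decodable blocks. This zlib-based sketch is a deliberate simplification: the paper's scheme checkpoints value-predictor state rather than restarting a general-purpose compressor per chunk:

```python
import zlib

class SeekableTrace:
    """Compress a trace of records in fixed-size chunks, keeping one
    compressed block per chunk. Each block boundary acts as a seekpoint:
    reading any record decompresses only its chunk, never the whole
    stream, which also makes backwards traversal and sampling cheap."""

    def __init__(self, records, chunk=4):
        self.chunk = chunk
        self.blocks = []
        for i in range(0, len(records), chunk):
            payload = '\n'.join(records[i:i + chunk]).encode()
            self.blocks.append(zlib.compress(payload))

    def read(self, index):
        """Random access: locate the block, decompress it, pick the record."""
        block, offset = divmod(index, self.chunk)
        lines = zlib.decompress(self.blocks[block]).decode().split('\n')
        return lines[offset]
```

The chunk size plays the role of the seekpoint granularity: smaller chunks mean faster seeks but a larger compressed trace, the same trade-off as the paper's every-10 M-instructions figure.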