Our present ability to work with video has been confined to a wired environment, requiring both the video encoder and decoder to be physically connected to a power supply and a wired communication link. This paper describes an integrated approach to the design of a portable video-on-demand system capable of delivering high-quality image and video data in a wireless communication environment. The discussion focuses on both the algorithm and circuit design techniques developed for implementing a low-power video compression/decompression system at power levels two orders of magnitude below existing solutions. This low-power video compression system not only provides compression efficiency similar to industry standards, but also maintains a high degree of error tolerance to guard against the transmission errors often encountered in wireless communication. The required power reduction is best attained by reformulating compression algorithms for energy conservation. We developed an intra-frame compression algorithm that requires minimal computation energy in its hardware implementation. Examples of algorithmic trade-offs include a vector quantization scheme that allows on-chip computation to eliminate off-chip memory accesses, the use of channel-optimized delta representations to avoid the error control hardware that would otherwise be necessary, and the coding of internal data representations to further reduce the energy consumed in data exchanges. The architectural and circuit design techniques used include the selection of a filter bank structure that minimizes the energy consumed in the datapath, a data shuffle strategy that reduces internal memory size, and the design of digital and analog circuits optimized for low supply voltages. Our hardware prototype is a video decoding chip set that decompresses full-motion video transmitted through a wireless link at less than 10 mW and is incorporated into a hand-held portable device.
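The vector quantization idea mentioned in this abstract, replacing raw pixel blocks with small codebook indices so that computation can stay on-chip, can be sketched as plain nearest-neighbor VQ. The codebook and block values below are illustrative, not the paper's channel-optimized scheme.

```python
# Toy vector quantization: map each 2x2 pixel block (flattened to 4 values)
# to the index of its nearest codebook vector, so only small indices rather
# than raw pixels are stored or transmitted. Codebook is hypothetical.

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def vq_encode(blocks, codebook):
    """Return the index of the nearest codeword for each input block."""
    return [min(range(len(codebook)),
                key=lambda i: squared_distance(block, codebook[i]))
            for block in blocks]

def vq_decode(indices, codebook):
    """Reconstruct approximate blocks by simple codebook lookup."""
    return [codebook[i] for i in indices]

codebook = [(0, 0, 0, 0), (128, 128, 128, 128), (255, 255, 255, 255)]
blocks = [(10, 5, 0, 3), (120, 130, 125, 131), (250, 255, 251, 248)]
indices = vq_encode(blocks, codebook)   # -> [0, 1, 2]
```

Because decoding is a table lookup, a decoder with the codebook in on-chip memory needs no off-chip accesses per pixel, which is the energy argument the abstract makes.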
The universal filtered multicarrier technique is a competitive candidate multicarrier modulation scheme for 5G communication systems. Conventional channel-estimation algorithms suffer from significant performance losses due to the large delay spread of multipath channels in high-speed scenarios. To address this problem, we here propose a low-complexity partial-prior-information sparsity adaptive matching pursuit (PPI-SAMP) algorithm. Unlike the conventional SAMP algorithm, the PPI-SAMP algorithm improves performance over fast-fading channels by fully exploiting the sparse characteristics and temporal correlation of wireless channels. First, the PPI-SAMP algorithm averages the channel impulse responses (CIRs) of consecutive symbols over the coherence time to achieve the accuracy required for coarse channel estimation. Second, the improved SAMP algorithm acquires the accurate CIRs with low complexity based on the coarse CIR. Moreover, the MSE performance and recovery probability for varying sizes of the IBI-free region indicate that the proposed PPI-SAMP algorithm accommodates a longer CIR under multipath interference and is more robust against larger multipath-channel delays than the conventional SAMP and CoSaMP algorithms. The proposed algorithm also estimates channels more accurately than the conventional SAMP and CoSaMP algorithms while reducing complexity by approximately 52% compared to the conventional SAMP algorithm.
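The coarse-estimation step this abstract describes, averaging per-symbol CIR estimates over the coherence time before sparse recovery, can be sketched as below. The tap values and the crude largest-magnitude support selection are illustrative stand-ins, not the paper's algorithm.

```python
# Sketch of the coarse step: averaging noisy per-symbol channel impulse
# response (CIR) estimates suppresses noise, after which a sparsity prior
# (here: the k largest-magnitude taps) localizes the true multipath taps.
# All numbers are illustrative, not from the paper.

def average_cir(cir_estimates):
    """Element-wise mean of per-symbol CIR estimates (equal-length lists)."""
    n = len(cir_estimates)
    taps = len(cir_estimates[0])
    return [sum(cir[t] for cir in cir_estimates) / n for t in range(taps)]

def dominant_taps(cir, k):
    """Support of the k largest-magnitude taps - a crude sparsity prior."""
    return sorted(range(len(cir)), key=lambda t: -abs(cir[t]))[:k]

noisy_cirs = [
    [0.9, 0.1, 0.0, 0.45, 0.05],
    [1.1, -0.1, 0.1, 0.55, -0.05],
    [1.0, 0.0, -0.1, 0.50, 0.0],
]
coarse = average_cir(noisy_cirs)    # noise averages out across symbols
support = dominant_taps(coarse, 2)  # -> taps 0 and 3 survive
```

A SAMP-style recovery stage would then refine amplitudes only on (and near) this coarse support, which is where the complexity saving comes from.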
The development of automotive-information and communication technology (ICT) convergence has resulted in various vehicle-based electrical/electronic (E/E) systems. An automotive E/E system consists of one or more electronic control units (ECUs), sensors, and actuators. With the commercialization of connected/autonomous cars, vehicle-based wireless communication systems have appeared and are expected to grow in popularity. This increases the number of attack surfaces that can potentially threaten the in-vehicle controller area network (CAN). To combat the vulnerabilities of the CAN protocol, the CAN data field should be encrypted and transmitted with authentication codes. Recently, a method of transmitting authentication codes by modifying the CAN protocol was proposed; however, changing the original CAN protocol can cause serious problems in CAN systems. In this paper, to enhance CAN security, a data compression algorithm is used to reduce the data frame length so that a message authentication code (MAC) can be contained inside the data field. The proposed algorithm guarantees that all CAN frames are authenticated by a MAC of at least four bytes without any change to the original CAN protocol. Simulations using CAN data from Kia Sorento, Kia Soul, and LS Mtron vehicles show that the proposed algorithm works successfully with only a slight increase in the peak load.
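The core idea above, compress the payload so a truncated MAC fits in the unchanged 8-byte CAN data field, can be sketched as follows. The delta-style "compression" and the shared key are illustrative, not the paper's algorithm.

```python
# Toy illustration: shrink a CAN payload so a >= 4-byte truncated MAC fits
# inside the same 8-byte data field, without touching the CAN protocol.
# The delta compression and key here are hypothetical, not the paper's.
import hmac, hashlib

KEY = b"demo-key"  # hypothetical pre-shared key between ECUs

def compress(payload, previous):
    """Keep only bytes that changed since the previous frame, as (index, value) pairs."""
    delta = bytearray()
    for i, (old, new) in enumerate(zip(previous, payload)):
        if old != new:
            delta += bytes([i, new])
    return bytes(delta)

def build_frame(payload, previous, mac_len=4):
    body = compress(payload, previous)
    assert len(body) + mac_len <= 8, "compressed data + MAC must fit 8 bytes"
    mac = hmac.new(KEY, body, hashlib.sha256).digest()[:mac_len]
    return body + mac

prev = bytes([0x10, 0x20, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80])
cur  = bytes([0x10, 0x21, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80])  # one byte changed
frame = build_frame(cur, prev)
# 6 bytes total: 2 bytes of delta (index 1, value 0x21) + 4-byte MAC
```

Since typical CAN signals change by small amounts between consecutive frames, the compressed body is usually short enough to leave room for the MAC, which matches the abstract's claim of only a slight peak-load increase.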
In this study, the authors present an energy-efficient data compression protocol for data collection in wireless sensor networks (WSNs). WSNs are essentially constrained by motes' limited battery power and network bandwidth. The authors focus on data compression algorithms and protocol development to effectively support data compression for data gathering in WSNs. Their design of the compressed data-stream protocol (CDP) is generic in the sense that other lossless or lossy compression algorithms can be easily 'plugged' into the proposed protocol system without any changes to the rest of the CDP. This design intends to support various WSN applications in which users may prefer a specific compression algorithm, tailored to the characteristics of the sensing data in question, over a general-purpose algorithm. CDP not only significantly reduces the energy consumption of data gathering in multi-hop WSNs, but also reduces sensor network traffic and thus helps avoid congestion. The proposed CDP is implemented on the TinyOS platform using the nesC programming language. To evaluate their work, the authors conduct simulations via TOSSIM and PowerTOSSIM-z with real-world sensor data. The results demonstrate the significance of CDP.
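The "pluggable" property described above can be sketched as a small codec interface: any compressor exposing compress/decompress drops in without changing the surrounding protocol. CDP itself is written in nesC on TinyOS; this Python sketch only illustrates the design idea, and all names are hypothetical.

```python
# Pluggable-codec sketch: the protocol layer (send_reading) never changes;
# only the codec object is swapped. Names are illustrative, not CDP's API.
import zlib

class RLECodec:
    """Toy lossless run-length codec as one pluggable algorithm."""
    def compress(self, data):
        out, i = bytearray(), 0
        while i < len(data):
            j = i
            while j < len(data) and j - i < 255 and data[j] == data[i]:
                j += 1
            out += bytes([j - i, data[i]])  # (run length, value) pairs
            i = j
        return bytes(out)
    def decompress(self, data):
        out = bytearray()
        for k in range(0, len(data), 2):
            out += bytes([data[k + 1]]) * data[k]
        return bytes(out)

class ZlibCodec:
    """Another pluggable algorithm, reusing a stock library."""
    def compress(self, data):
        return zlib.compress(data)
    def decompress(self, data):
        return zlib.decompress(data)

def send_reading(codec, reading):
    """Protocol layer stays unchanged whichever codec is plugged in."""
    packet = codec.compress(reading)     # sender side
    return codec.decompress(packet)      # receiver side round-trip

sample = bytes([7] * 100 + [9] * 50)     # slowly changing sensor readings
assert send_reading(RLECodec(), sample) == sample
assert send_reading(ZlibCodec(), sample) == sample
```

The design choice here mirrors the abstract: users pick a codec matched to their data's characteristics, and the rest of the stack is untouched.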
Most compression algorithms for motion television require large data storage, usually several television fields, and typically operate on blocks of data. A chip has been built to support both of these features. It generates, from a single clock source, all of the control and address signals required by standard off-the-shelf dynamic RAMs (DRAMs). This includes data packing and unpacking and automatic refresh when required. Counters are provided to address the data into and out of the memories in the form of blocks. The block sizes and field dimensions are programmable and are independent for read and write operations. Thus, one set of counters can be programmed for sequentially scanned data coming from a camera or going to a television monitor, and another set of counters can be programmed for the block size employed in the compression hardware. Blocks of data can be accessed either continuously or one at a time. When data are read from the memories, a single pel-width pulse marks the start of valid data. Signals marking both the end of a block and the end of a field have also been provided to ease system interfacing.
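The raster-to-block address generation the counters perform can be sketched in software. The field and block dimensions below are illustrative; in the chip these would be the programmable counter settings.

```python
# Sketch of block-mode address generation: pixels are written to memory in
# raster (line-scan) order, then read back one block at a time by walking
# the linear addresses that make up each block. Dimensions are illustrative.

def block_read_addresses(field_width, block_w, block_h, block_x, block_y):
    """Linear memory addresses of one block inside a raster-scanned field."""
    base = block_y * block_h * field_width + block_x * block_w
    return [base + row * field_width + col
            for row in range(block_h)
            for col in range(block_w)]

# 8-pixel-wide field, 2x2 blocks: second block of the top block row.
addrs = block_read_addresses(field_width=8, block_w=2, block_h=2,
                             block_x=1, block_y=0)
# -> [2, 3, 10, 11]: two pixels from one scan line, two from the next
```

Keeping read and write counters independent, as the abstract notes, is what lets one side run in raster order for the camera/monitor while the other runs in block order for the compressor.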
New emerging technologies in the field of multimedia, such as high dynamic range (HDR) and wide color gamut (WCG), challenge the performance of existing image and video data processing algorithms, especially in terms of visual data compression and its faithful representation. HDR and WCG offer the technological capability to capture and represent a scene with an extended range of detail in both luminance and chrominance. This article presents and evaluates possible solutions for coping with such an extensive amount of data while delivering a faithful representation of visual information. In particular, two approaches, a novel constant-intensity color-difference signal representation called ICtCp and a novel compression optimization algorithm called the reshaper, are described and subjectively evaluated. The results of the conducted experiments show competitive performance of the two proposed approaches, ICtCp and the reshaper, allowing a 10% bit-rate reduction while preserving perceived quality.
This paper is concerned with algorithms for the prediction of discrete sequences over a finite alphabet, using variable-order Markov models. The class of such algorithms is large and in principle includes any lossless compression algorithm. We focus on six prominent prediction algorithms, including Context Tree Weighting (CTW), Prediction by Partial Match (PPM), and Probabilistic Suffix Trees (PSTs). We discuss the properties of these algorithms and compare their performance using real-life sequences from three domains: proteins, English text, and music pieces. The comparison is made with respect to prediction quality as measured by the average log-loss. We also compare classification algorithms based on these predictors with respect to a number of large protein classification tasks. Our results indicate that "decomposed" CTW (a variant of the CTW algorithm) and PPM outperform all other algorithms on sequence prediction tasks. Somewhat surprisingly, a different algorithm, which is a modification of the Lempel-Ziv compression algorithm, significantly outperforms all algorithms on the protein classification problems.
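The average log-loss metric used for the comparison above can be made concrete with a minimal order-k Markov predictor. This toy model uses plain counts with add-one smoothing; it is a baseline in the same family, not CTW or PPM.

```python
# Minimal order-k Markov predictor plus the average log-loss metric:
# mean of -log2 P(x_i | previous k symbols), lower is better prediction.
# Counts use add-one (Laplace) smoothing; sequences here are illustrative.
import math
from collections import defaultdict

def train(sequence, k, alphabet):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(sequence)):
        counts[sequence[i - k:i]][sequence[i]] += 1
    def prob(context, symbol):
        c = counts[context]
        return (c[symbol] + 1) / (sum(c.values()) + len(alphabet))
    return prob

def average_log_loss(prob, sequence, k):
    """Mean -log2 P(x_i | context) over the test sequence."""
    losses = [-math.log2(prob(sequence[i - k:i], sequence[i]))
              for i in range(k, len(sequence))]
    return sum(losses) / len(losses)

prob = train("abababababab", k=1, alphabet="ab")
loss = average_log_loss(prob, "abababab", k=1)
# loss is small (well under 1 bit/symbol) because the text is periodic
```

Variable-order methods like CTW and PPM improve on this fixed-k baseline by mixing or escaping between context lengths, but they are evaluated with exactly this log-loss.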
The design of conventional sensors is based primarily on the Shannon-Nyquist sampling theorem, which states that a signal of bandwidth W Hz is fully determined by its discrete time samples provided the sampling rate exceeds 2 W samples per second. For discrete time signals, the Shannon-Nyquist theorem has a very simple interpretation: the number of data samples must be at least as large as the dimensionality of the signal being sampled and recovered. This important result enables signal processing in the discrete time domain without any loss of information. However, in an increasing number of applications, the Shannon-Nyquist sampling theorem dictates an unnecessary and often prohibitively high sampling rate (see “What Is the Nyquist Rate of a Video Signal?”). As a motivating example, the high resolution of the image sensor hardware in modern cameras reflects the large amount of data sensed to capture an image. A 10-megapixel camera, in effect, takes 10 million measurements of the scene. Yet, almost immediately after acquisition, redundancies in the image are exploited to compress the acquired data significantly, often at compression ratios of 100:1 for visualization and even higher for detection and classification tasks. This example suggests immense wastage in the overall design of conventional cameras.
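The Shannon-Nyquist condition stated above can be illustrated numerically: sampling a 7 Hz sine at 10 samples/s (below its 14 samples/s Nyquist rate) yields exactly the same samples as a -3 Hz sine, the alias, so the two signals cannot be distinguished from the samples alone.

```python
# Aliasing demo: a 7 Hz sine under-sampled at 10 samples/s is
# indistinguishable from its -3 Hz alias (7 - 10 = -3).
import math

def sample_sine(freq_hz, rate_hz, n):
    """n samples of sin(2*pi*f*t) taken at the given sampling rate."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

fast  = sample_sine(7, 10, 20)    # under-sampled signal
alias = sample_sine(-3, 10, 20)   # its alias within [-5, 5] Hz
assert all(abs(a - b) < 1e-9 for a, b in zip(fast, alias))
```

Compressed sensing, which this passage motivates, sidesteps this by exploiting signal structure (sparsity) rather than raw bandwidth to recover signals from fewer measurements.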
This article presents the design principles for implementing low-power wireless video systems through the use of two examples, a single-chip digital video camera and a wireless video-on-demand system. The discussion will focus on the architectural and circuit techniques developed specifically for silicon integration of high-performance low-power wireless video systems. The proposed single-chip digital camera incorporates a parallel architecture to perform MPEG-2 encoding in real time, while the video-on-demand system employs an error-resilient compression algorithm to guard against the transmission errors often encountered in wireless communication. Both wireless video systems, one for encoding and the other for decoding, dissipate only tens of milliwatts of power, achieving a power reduction two orders of magnitude below standard solutions.
This paper examines the benefits of edge mining: data mining that takes place on the wireless, battery-powered, smart sensing devices that sit at the edge points of the Internet of Things. Through local data reduction and transformation, edge mining can quantifiably reduce the number of packets that must be sent, reducing energy usage and remote storage requirements. In addition, edge mining has the potential to reduce risks to personal privacy by embedding information requirements at the sensing point, limiting inappropriate use. The benefits of edge mining are examined with respect to three specific algorithms: the linear Spanish inquisition protocol (L-SIP), ClassAct, and bare necessities (BN), all of which are instantiations of general SIP. In general, the benefits provided by edge mining are related to the predictability of data streams and the availability of precise information requirements; results show that L-SIP typically reduces packet transmission by around 95% (20-fold), BN reduces packet transmission by 99.98% (5000-fold), and ClassAct reduces packet transmission by 99.6% (250-fold). Although the energy reduction is not as radical because of other overheads, minimizing these overheads can lead to up to a 10-fold battery-life extension for L-SIP, for example. These results demonstrate the importance of edge mining to the feasibility of many IoT applications.
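The packet reductions reported above come from transmitting only when a model shared by sensor and sink mispredicts. A minimal sketch in the spirit of SIP is shown below using a constant (last-value) model; L-SIP proper uses a linear model, and the readings and threshold are illustrative.

```python
# SIP-style edge mining sketch: the node sends a sample only when it
# deviates from the receiver's prediction (here: last transmitted value)
# by more than a threshold. Values and threshold are illustrative.

def sip_transmit(readings, threshold):
    """Return the (index, value) samples a SIP-style node actually sends."""
    sent = [(0, readings[0])]          # first sample is always transmitted
    last_sent = readings[0]            # receiver predicts this value holds
    for i, value in enumerate(readings[1:], start=1):
        if abs(value - last_sent) > threshold:
            sent.append((i, value))    # prediction broke: update the sink
            last_sent = value
    return sent

# A temperature trace that is stable, then steps up and stabilizes again.
readings = [20.0, 20.1, 20.0, 20.2, 23.0, 23.1, 23.0, 22.9]
sent = sip_transmit(readings, threshold=0.5)
# only 2 of 8 samples transmitted: (0, 20.0) and (4, 23.0)
```

As the abstract notes, the achievable reduction depends on how predictable the stream is: a stable trace like this one transmits almost nothing, while a noisy one would approach one packet per sample.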