An important parameter in evaluating data hiding methods is hiding capacity (Cohen et al., 1999), i.e. the amount of data that a certain algorithm can "hide" before reaching allowable distortion limits. One fundamental difference between watermarking (Cox et al., 1997; 1999) and generic data hiding lies exactly in the main applicability and descriptions of the two domains. In the digital framework, watermarking algorithms that make use of information hiding techniques have been developed, and hiding capacity was naturally used as a metric in evaluating their power to hide information. Whereas the maximal amount of information that a certain algorithm can "hide" (while keeping the data within allowable distortion bounds) is certainly related to the ability to assert ownership in court, it does not directly measure its power of persuasion, in part because it does not directly consider the existence and power of watermarking attacks. We show why, due to its particularities, watermarking requires a different metric, more closely related to its ultimate purpose: claiming ownership in a court of law. We define one suitable metric (watermarking power) and show how it relates to derivatives of hiding capacity. We prove that there are cases where hiding capacity is sub-optimal as a metric for evaluating watermarking methods, whereas the metric of watermarking power delivers good results.
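As a rough, hypothetical illustration of hiding capacity as an "amount hidden under a distortion bound" quantity (the LSB embedding scheme, the 40 dB PSNR threshold and all names below are illustrative assumptions, not taken from the paper), the following sketch counts how many payload bits a naive least-significant-bit scheme can embed in a grayscale image before the distortion bound is violated.

```python
# Hypothetical sketch (not the paper's method): measure hiding capacity of naive
# LSB embedding as the largest payload that keeps PSNR above a chosen bound.
import numpy as np

def psnr(original, modified):
    """Peak signal-to-noise ratio for 8-bit images, in dB."""
    mse = np.mean((original.astype(float) - modified.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def lsb_hiding_capacity(image, psnr_min=40.0, seed=0):
    """Overwrite k least-significant bit planes with random payload bits and
    return the largest payload size (in bits) that keeps PSNR >= psnr_min."""
    rng = np.random.default_rng(seed)
    flat = image.flatten()
    capacity = 0
    for k in range(1, 8):
        payload = rng.integers(0, 2 ** k, size=flat.size, dtype=np.uint8)  # k payload bits per pixel
        stego = (flat & np.uint8((0xFF << k) & 0xFF)) | payload            # replace the k LSB planes
        if psnr(flat, stego) < psnr_min:
            break                                                          # distortion bound exceeded
        capacity = k * flat.size
    return capacity

cover = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
print("capacity under a 40 dB PSNR bound:", lsb_hiding_capacity(cover), "bits")
```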
ISBN (print): 0769510620
Conference proceedings front matter may contain various advertisements, welcome messages, committee or program information, and other miscellaneous conference information. This may in some cases also include the cover art, table of contents, copyright statements, title page or half-title pages, blank pages, venue maps or other general information relating to the conference that was part of the original conference proceedings.
ISBN (print): 0769510620
A speech content integrity verification scheme integrated with ITU G.723.1 speech coding to minimize the total computational cost is proposed in this research. Speech features relevant to the semantic meaning are extracted, encrypted and attached as the header information. This scheme is not only much faster than cryptographic bitstream integrity algorithms, but also more compatible with a variety of applications. The speech signal can go through recompression, amplification, transcoding, re-sampling, D/A and A/D conversion and minor white noise pollution without triggering the verification alarm.
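A minimal sketch of the general pattern described above (extract semantically relevant features, protect them cryptographically, attach them as a header, verify with some tolerance). The per-frame log-energy feature, HMAC-SHA256, the 0.5 tolerance and all names are stand-ins chosen for illustration, not the features or primitives used in the paper or in G.723.1.

```python
# Hypothetical sketch of feature-based speech integrity verification: extract
# coarse per-frame features, authenticate them, attach as a header, then verify
# against the (possibly re-encoded) signal with a tolerance.
import hmac, hashlib, json
import numpy as np

FRAME = 240  # 30 ms at 8 kHz, matching the G.723.1 frame length

def frame_features(signal):
    """Per-frame log-energy, standing in for semantically relevant features."""
    n = len(signal) // FRAME
    frames = signal[: n * FRAME].reshape(n, FRAME).astype(float)
    return np.log10(np.sum(frames ** 2, axis=1) + 1e-9)

def make_header(signal, key):
    feats = np.round(frame_features(signal), 1).tolist()   # coarse quantisation
    tag = hmac.new(key, json.dumps(feats).encode(), hashlib.sha256).hexdigest()
    return {"features": feats, "tag": tag}

def verify(signal, header, key, tol=0.5):
    payload = json.dumps(header["features"]).encode()
    if not hmac.compare_digest(hmac.new(key, payload, hashlib.sha256).hexdigest(), header["tag"]):
        return False                                        # the header itself was tampered with
    observed = frame_features(signal)
    ref = np.array(header["features"])
    n = min(len(observed), len(ref))
    return bool(np.all(np.abs(observed[:n] - ref[:n]) < tol))  # tolerate mild re-coding noise

key = b"shared-secret"
speech = (np.random.default_rng(1).standard_normal(8000) * 1000).astype(np.int16)
hdr = make_header(speech, key)
noisy = speech + np.random.default_rng(2).integers(-3, 4, size=speech.shape).astype(np.int16)
print(verify(noisy, hdr, key))   # True: minor noise does not trigger the alarm
```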
ISBN (print): 0769510620
The traditional approach to coding of noisy and distorted images assumes preliminary enhancement and restoration of the images and only then their actual coding. In this work a novel technique is proposed in which these procedures are combined in an optimal algebraic coding. At the heart of the proposed approach is the optimization of the coordinate basis of the image representation. It is shown that the required optimum properties are possessed by the eigenfunctions of the Fisher information operator known in the statistical theory of estimation. These functions, satisfying defined selection criteria, are called Principal Informative Components (PIC) in this paper. A PIC-projection technique for improving an arbitrary image decomposition is proposed. The technique allows, for any primary physical representation of an image, a decrease in the statistical error of its coding on noisy data.
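The abstract gives no formulas, so the following is only a loose numerical sketch of the stated idea: estimate a Fisher-information-like operator from noisy image patches, keep its leading eigenvectors as "principal informative components", and code the image by projecting onto that basis. The Gaussian-noise approximation (patch covariance scaled by 1/sigma^2), the component count and all names are assumptions, not the paper's algorithm.

```python
# Loose numerical sketch (not the paper's algorithm): eigendecompose a
# Fisher-information-like operator estimated from noisy patches and project
# the patches onto the leading eigenvectors before coding.
import numpy as np

def extract_patches(img, p=8):
    h, w = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, h - p + 1, p)
                     for j in range(0, w - p + 1, p)], dtype=float)

def pic_basis(noisy_patches, sigma2, n_components=16):
    centred = noisy_patches - noisy_patches.mean(axis=0)
    info = centred.T @ centred / (len(centred) * sigma2)    # assumed form of the information operator
    eigvals, eigvecs = np.linalg.eigh(info)
    order = np.argsort(eigvals)[::-1][:n_components]        # most informative directions first
    return eigvecs[:, order]

def project_and_reconstruct(noisy_patches, basis):
    mean = noisy_patches.mean(axis=0)
    coeffs = (noisy_patches - mean) @ basis                  # compact code: a few coefficients per patch
    return coeffs @ basis.T + mean

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 6, 64)), np.cos(np.linspace(0, 6, 64)))
noisy = clean + rng.normal(scale=0.2, size=clean.shape)
patches = extract_patches(noisy)
basis = pic_basis(patches, sigma2=0.2 ** 2)
restored = project_and_reconstruct(patches, basis)
print("per-pixel squared error before/after:",
      np.mean((patches - extract_patches(clean)) ** 2).round(4),
      np.mean((restored - extract_patches(clean)) ** 2).round(4))
```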
ISBN (print): 0769510620
The amount of structured information available in Internet sources is rapidly increasing. This information includes commercial databases on product information and information on e-services forming the so-called e-shops. However, the process of using this information has become more complicated, and can sometimes be tedious for users with different goals, interests, levels of expertise, abilities and preferences. This paper deals with the definition of an architectural framework for intelligent, adaptive and personalised navigation within large hypertext Electronic Commerce environments.
ISBN (print): 0769510620
Applications like multimedia databases or enterprise-wide information management systems have to meet the challenge of efficiently retrieving best-matching objects from vast collections of data. We present a new algorithm, Stream-Combine, for processing multi-feature queries on heterogeneous data sources. Stream-Combine is self-adapting to different data distributions and to the specific kind of combining function. Furthermore, we present a new retrieval strategy that will essentially speed up the output of relevant objects.
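The abstract does not spell out the algorithm, so the sketch below only illustrates the general setting: each feature delivers a sorted stream of (object, score) pairs, a monotone combining function aggregates the per-feature scores, and an object is reported once no seen or unseen object can still beat it. The naive round-robin access and NRA-style stopping rule are generic placeholders, not Stream-Combine's own self-adapting control strategy.

```python
# Simplified illustration of combining sorted per-feature score streams with a
# monotone aggregation function; a basic NRA-style placeholder, not Stream-Combine itself.
def combined(scores, weights):
    """Monotone combining function: weighted sum of the per-feature scores."""
    return sum(w * s for w, s in zip(weights, scores))

def top1_from_streams(streams, weights):
    """Each stream yields (object_id, score) pairs in descending score order."""
    seen = {}                                    # object -> per-feature scores (None = not yet seen)
    last = [1.0] * len(streams)                  # last score delivered by each stream
    iters = [iter(s) for s in streams]
    while True:
        for i, it in enumerate(iters):           # naive round-robin sorted access
            obj, score = next(it)                # (assumes streams are long enough to terminate)
            seen.setdefault(obj, [None] * len(streams))[i] = score
            last[i] = score
        def lower(partial):                      # unseen features contribute nothing
            return combined([s if s is not None else 0.0 for s in partial], weights)
        def upper(partial):                      # unseen features are at most the last score read
            return combined([s if s is not None else last[j] for j, s in enumerate(partial)], weights)
        best = max(seen, key=lambda o: lower(seen[o]))
        rivals = [upper(seen[o]) for o in seen if o != best] + [combined(last, weights)]
        if lower(seen[best]) >= max(rivals):     # nothing seen or unseen can still beat it
            return best, lower(seen[best])

colour  = [("a", 0.9), ("b", 0.8), ("c", 0.4), ("d", 0.1)]    # toy per-feature streams
texture = [("b", 0.95), ("a", 0.7), ("d", 0.6), ("c", 0.2)]
print(top1_from_streams([colour, texture], weights=[0.5, 0.5]))  # ('b', 0.875)
```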
ISBN (print): 0769510620
We address the problem of robust transmission of compressed visual information over unreliable networks. Our approach employs the principle of multiple descriptions, through pre- and post-processing of the image data, without modification to the source or channel codecs. We employ oversampling to add redundancy to the original image data, followed by a partitioning of the oversampled image into "equal" sub-images which can be coded and transmitted over separate channels. Simulations using two descriptors show that this approach yields exceptional performance when only one descriptor is received and outperforms other popular multiple description approaches.
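A toy version of the described pre/post-processing: oversample the image, split the oversampled image into two "equal" sub-images (the two descriptions), and interpolate when only one description is received. The linear-interpolation oversampler and the row-wise split below are deliberately trivial placeholders for whatever filters and codecs the paper actually uses.

```python
# Toy sketch of the multiple-description pre/post-processing idea; filters and
# codecs are trivial placeholders, not the paper's actual design.
import numpy as np

def make_descriptions(img):
    """Oversample vertically by 2 (linear interpolation), then take the even and
    odd rows of the oversampled image as two equally sized descriptions."""
    img = img.astype(float)
    between = (img + np.roll(img, -1, axis=0)) / 2       # interpolated in-between rows
    return img, between                                  # even rows, odd rows

def reconstruct(desc0=None, desc1=None):
    if desc0 is not None:                                # the even rows hold the original samples
        return desc0
    # only the interpolated rows arrived: estimate each row from its two neighbours
    return (desc1 + np.roll(desc1, 1, axis=0)) / 2

rng = np.random.default_rng(0)
image = np.cumsum(rng.normal(size=(64, 32)), axis=0)     # smooth synthetic test image
d0, d1 = make_descriptions(image)
print("both received, max error:    ", np.abs(reconstruct(d0, d1) - image).max())
print("only one received, mean error:", np.abs(reconstruct(None, d1) - image).mean())
```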
ISBN (print): 0769510620
We report on the performance of an optimized parallel channel coder for high-speed optical transmission systems. The coding properties are discussed by an evaluation of the signal statistics of the coded pulse train in the time and frequency domain. The discussion is mainly based on the results for the power spectral density (PSD) and the autocorrelation function (ACF). The theoretical investigations have been verified by measurements with a developed 2.488 Gbit/s optical transmission system. Reliability studies have shown a system bit error rate below 10⁻¹³.
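The coder itself is not described in enough detail to reproduce, but the kind of signal-statistics evaluation mentioned above (ACF and PSD of a coded pulse train) can be illustrated generically. The random NRZ bit stream, the 8 samples per bit and the periodogram estimate below are placeholder assumptions, not the parallel channel code studied in the paper.

```python
# Generic illustration of evaluating a coded pulse train via its autocorrelation
# function and power spectral density (periodogram); not the paper's coder.
import numpy as np

def nrz_pulse_train(bits, samples_per_bit=8):
    """Map 0/1 bits onto a +/-1 non-return-to-zero waveform."""
    return np.repeat(2.0 * np.asarray(bits) - 1.0, samples_per_bit)

def autocorrelation(x):
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]                                   # normalised, lags 0..N-1

def power_spectral_density(x, fs):
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))     # one-sided periodogram estimate
    return np.fft.rfftfreq(len(x), d=1.0 / fs), psd

bit_rate = 2.488e9                                        # line rate of the system in the paper
sps = 8
bits = np.random.default_rng(0).integers(0, 2, size=1024)
signal = nrz_pulse_train(bits, samples_per_bit=sps)
freqs, psd = power_spectral_density(signal, fs=bit_rate * sps)
acf = autocorrelation(signal)
print("ACF at half a bit period:", acf[sps // 2].round(2))   # ~0.5 for rectangular NRZ pulses
print("ACF at one bit period:   ", acf[sps].round(2))        # ~0 for independent bits
print("power below the bit rate:", round(float(psd[freqs <= bit_rate].sum() / psd.sum()), 2))
```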
ISBN (print): 0769510620
Fusion of various retrieval strategies has long been suggested as a means of improving retrieval effectiveness. To date, testing of fusion was done by combining result sets from widely disparate approaches that include an uncontrolled mixture of retrieval strategies and utilities. To isolate the effect of fusion on individual retrieval models, we have implemented probabilistic, vector space, and weighted Boolean models and tested the effect of fusion on these strategies in a systematic fashion. We also tested the effect of fusion on various query representations and have shown up to a twelve percent improvement in average precision.
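As a generic illustration of fusing result sets from different retrieval models (not the paper's exact experimental setup), the sketch below normalises each model's scores and fuses them with CombSUM/CombMNZ-style aggregation; the toy runs and all names are made up for illustration.

```python
# Generic illustration of fusing ranked result lists from different retrieval
# models (e.g. probabilistic, vector-space, weighted Boolean); toy data only.
from collections import defaultdict

def min_max_normalise(run):
    lo, hi = min(run.values()), max(run.values())
    return {d: (s - lo) / (hi - lo) if hi > lo else 0.0 for d, s in run.items()}

def fuse(runs, mnz=False):
    """CombSUM: sum of normalised scores; CombMNZ additionally multiplies by the
    number of runs that retrieved the document."""
    totals, hits = defaultdict(float), defaultdict(int)
    for run in runs:
        for doc, score in min_max_normalise(run).items():
            totals[doc] += score
            hits[doc] += 1
    fused = {d: s * (hits[d] if mnz else 1) for d, s in totals.items()}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

probabilistic = {"d1": 2.3, "d2": 1.9, "d5": 0.7}
vector_space  = {"d2": 0.81, "d1": 0.78, "d3": 0.40}
weighted_bool = {"d1": 5.0, "d3": 4.0, "d4": 1.0}
print(fuse([probabilistic, vector_space, weighted_bool], mnz=True))
```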