This dissertation is a study of errors in long-haul optical fiber systems and how to cope with them. First we characterize error events occurring during transmission, then we determine lower bounds on information capacity (achievable information rates), and finally we propose coding schemes for these systems. Existing approaches for obtaining probability density functions (PDFs) for pulse energy in long-haul optical fiber transmission systems rely on numerical simulations or analytical approximations. Numerical simulations make the far tails of the PDFs difficult to obtain, while existing analytic approximations are often inaccurate, as they neglect nonlinear interaction between pulses and noise. Our approach combines the instanton method from statistical mechanics, which models the far tails of the PDFs, with numerical simulations that refine the middle part of the PDFs. The two methods are joined by an orthogonal polynomial expansion constructed specifically for this problem. We demonstrate the approach on a specific submarine transmission system. Once the channel is characterized, achievable information rates are estimated by a modification of a method originally proposed by Arnold and Pfister. We give numerical results for the same optical transmission system (a submarine system operating at 40 Gb/s). The achievable information rate varies with, among other parameters, the noise level and the length of the bit patterns considered; we report rates for systems with different noise levels, propagation distances, and bit-pattern lengths. We also propose two iterative decoding schemes suitable for high-speed long-haul optical transmission. One scheme is a modification of a method, originally proposed in the context of magnetic recording media, which incorporates the BCJR algorithm (to overcome intersymbol interference) and Low-Density Parity-Check (LDPC) codes for additional error resilience. This is a "soft-decision" scheme, meaning that the decoder operates on bit reliabilities (probabilities) rather than on hard decisions.
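
To make the rate-estimation step concrete, the following is a minimal Python sketch of an Arnold-Pfister style Monte Carlo estimate of the achievable information rate of a finite-state intersymbol-interference channel with additive Gaussian noise. The two-tap channel, noise level, and sequence length are illustrative assumptions, not parameters of the submarine system studied here.

    import numpy as np

    rng = np.random.default_rng(0)
    h = np.array([1.0, 0.5])   # assumed 2-tap ISI channel
    sigma = 0.8                # assumed noise standard deviation
    n = 20_000                 # Monte Carlo sequence length

    # i.i.d. equiprobable bipolar inputs through the ISI channel plus AWGN.
    x = rng.choice([-1.0, 1.0], size=n)
    xp = np.concatenate(([1.0], x[:-1]))   # previous symbol (channel state)
    y = h[0] * x + h[1] * xp + sigma * rng.standard_normal(n)

    def lik(y_k, mean):
        # Gaussian likelihood without its normalization constant;
        # the constant cancels in the rate difference below.
        return np.exp(-(y_k - mean) ** 2 / (2 * sigma ** 2))

    # Forward (alpha) recursion over the 2-state trellis accumulates log p(y).
    states = [-1.0, 1.0]
    alpha = np.array([0.5, 0.5])
    log_py = 0.0
    for k in range(n):
        new_alpha = np.zeros(2)
        for s_new, x_k in enumerate(states):
            for s_old, x_prev in enumerate(states):
                new_alpha[s_new] += alpha[s_old] * 0.5 * lik(
                    y[k], h[0] * x_k + h[1] * x_prev)
        scale = new_alpha.sum()
        log_py += np.log(scale)
        alpha = new_alpha / scale

    # log p(y|x) follows the single transmitted path (same constant omitted).
    log_py_given_x = np.sum(-(y - (h[0] * x + h[1] * xp)) ** 2 / (2 * sigma ** 2))

    rate = (log_py_given_x - log_py) / (n * np.log(2))
    print(f"estimated achievable information rate: {rate:.3f} bits/symbol")

Since the information rate is defined as a per-symbol limit, longer simulated sequences tighten the Monte Carlo estimate.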
ISBN: (Print) 9783540724810
Error Correcting Output Coding (ECOC) is an established technique for facing a classification problem with many possible classes by decomposing it into a set of two-class subproblems. In this paper, we propose an ECOC system with a reject option that is performed by taking into account the confidence degree of the dichotomizers. The scheme makes use of a coding matrix based on Low-Density Parity-Check (LDPC) codes, which can also be usefully employed to implement an iterative recovery strategy for the binary rejects. The experimental results confirm the effectiveness of the proposed approach.
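
As an illustration of the reject mechanism, here is a minimal Python sketch of ECOC decoding with a confidence-based reject option. The small coding matrix, soft dichotomizer outputs, and threshold are invented for the example; the paper derives its matrix from an LDPC parity-check structure and adds an iterative recovery step, both omitted here.

    import numpy as np

    M = np.array([[+1, +1, -1, -1],     # class 0 codeword
                  [+1, -1, +1, -1],     # class 1 codeword
                  [-1, +1, +1, +1]])    # class 2 codeword

    def ecoc_decode(p, M, t=0.6):
        """p: dichotomizer outputs in [0,1], one per column of M."""
        # Map probabilities to soft bipolar values in [-1, +1].
        s = 2.0 * np.asarray(p) - 1.0
        # Correlation of the soft word with each class codeword,
        # normalized to [0, 1] as a confidence degree.
        conf = ((M @ s) / M.shape[1] + 1.0) / 2.0
        best = int(np.argmax(conf))
        # Reject when the winning confidence is too low: the binary
        # decisions were too uncertain to trust a single class.
        return best if conf[best] >= t else None   # None == reject

    print(ecoc_decode([0.9, 0.9, 0.1, 0.2], M))    # confident -> class 0
    print(ecoc_decode([0.5, 0.5, 0.5, 0.5], M))    # uncertain -> None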
Paper-based health records have proved not only costly but also beset by disadvantages: they are difficult to organize and hard to manage. This has led to the adoption of computerized electronic health records (EHR). Standard medical terminologies play a vital role in facilitating interoperability and consistency in the exchange of information between computerized clinical applications. These standard terminologies are based on controlled medical vocabularies such as SNOMED, ICD9, and LOINC. Coding clinical documents with controlled medical vocabularies benefits the health care system in a wide range of applications, such as laboratory reporting, clinical research, and computerized patient entry. This document deals with using the SNOMED medical vocabulary subset from UMLS for coding clinical documents. Coding of clinical documents is more effective when available natural language processing (NLP) techniques are exploited. This work utilizes the UIMA platform, which offers a collection of NLP tools, and provides a framework for how these open NLP tools can be plugged into the UIMA architecture and used in developing applications. This report discusses the use of a statistical n-gram natural language processing technique that proved effective in assigning SNOMED codes to composite medical terms. Exploiting the component-based architecture of UIMA, which allows components to be shared between projects, the thesis shows how two components developed in separate projects are combined to form a SNOMED annotator. Thus this document describes the process of developing a SNOMED annotator for coding clinical documents that is scalable, flexible, UIMA-framework compatible, and extendible.
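
As a sketch of the n-gram idea (not the thesis's UIMA annotator), the following Python snippet scores candidate SNOMED terms against an input phrase using character trigrams and a Dice coefficient; the three-entry vocabulary is a toy subset chosen for illustration.

    def ngrams(text, n=3):
        text = f"  {text.lower()}  "            # pad so edges form grams
        return {text[i:i + n] for i in range(len(text) - n + 1)}

    def dice(a, b):
        """Dice coefficient between two n-gram sets."""
        return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

    snomed = {                                  # tiny illustrative subset
        "22298006": "myocardial infarction",
        "38341003": "hypertensive disorder",
        "73211009": "diabetes mellitus",
    }

    def best_code(term, vocabulary, min_score=0.4):
        grams = ngrams(term)
        code, score = max(((c, dice(grams, ngrams(t)))
                           for c, t in vocabulary.items()),
                          key=lambda cs: cs[1])
        return (code, score) if score >= min_score else (None, score)

    print(best_code("acute myocardial infarct", snomed))

The minimum-score cutoff leaves terms uncoded rather than miscoded when no vocabulary entry is a plausible match.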
A location map is frequently employed in reversible watermarking to distinguish the watermarked pixels. It is usually compressed by arithmetic coding before being concealed, and the size of the compressed location map greatly influences the payload. Proposed is a novel principle for creating the location map, which increases the compression ratio of the map and achieves a higher payload at high peak signal-to-noise ratio. Experimental results show the proposal is effective.
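
A minimal sketch of why the map's compressibility matters: the bits needed to embed the compressed map are subtracted from the raw capacity, so a more compressible map leaves more room for payload. In this Python sketch, zlib stands in for the arithmetic coder, and the image size and map densities are assumed values.

    import zlib
    import numpy as np

    rng = np.random.default_rng(1)
    h, w = 512, 512
    raw_capacity = h * w                    # assume 1 bit per pixel

    for density in (0.5, 0.1, 0.01):        # fraction of marked pixels
        location_map = (rng.random((h, w)) < density).astype(np.uint8)
        compressed_bits = 8 * len(zlib.compress(np.packbits(location_map).tobytes()))
        net_payload = raw_capacity - compressed_bits
        print(f"density {density:4.2f}: map {compressed_bits:7d} bits, "
              f"net payload {net_payload:7d} bits")

A map of random marks at density 0.5 barely compresses at all, while sparse or structured maps shrink dramatically, which is exactly the effect a better map-creation principle exploits.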
Three different encoding mechanisms are combined to further improve steganographic embedding efficiency: secret-bit representation derived from parity check coding, exploitation of the modification direction of pixel values, and wet paper coding. The parity of the least-significant-bit weight in each pixel group is used to accommodate one secret bit. The weighted sums of all pixel groups and wet paper coding are used to select which pixels are modified, and to decide whether to add or subtract one in order to carry additional secret data. It is shown that, by taking full advantage of these mechanisms, the embedding efficiency of the proposed scheme is higher than that of previous steganographic embedding techniques.
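
The first mechanism can be sketched in a few lines of Python: a secret bit is carried by the parity of the least-significant-bit weight of a pixel group, and at most one pixel is changed by plus or minus one to set that parity. The group size and pixel values are illustrative; the modification-direction and wet paper coding stages of the full scheme are omitted.

    import random

    def embed_bit(group, bit):
        """group: list of 8-bit pixel values; bit: 0 or 1."""
        parity = sum(p & 1 for p in group) % 2   # LSB-weight parity
        if parity == bit:
            return group                         # already encodes the bit
        g = group[:]
        i = random.randrange(len(g))
        # Flip one LSB by adding or subtracting 1, staying within [0, 255].
        g[i] += 1 if g[i] < 255 else -1
        return g

    group = [100, 101, 37, 254]
    print(embed_bit(group, 1))   # LSB parity is 0 here, so one pixel moves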
A new computer software tool for coding and analyzing verbal report data is described. Combining and extending the capabilities of earlier verbal report coding software tools, CAPAS 2.0 enables researchers to code two different types of verbal report data: (1) verbal reports already transcribed and stored in text files and (2) verbal reports in their original digitally recorded audio format. For both types of data, individual verbal report segments are presented in random order and coded independently of other segments in accordance with a localized encoding principle. Once all reports are coded, CAPAS 2.0 converts the coded reports to a formatted file suitable for analysis by statistical packages such as SPSS.
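
A minimal sketch (not CAPAS 2.0 itself) of the workflow the tool automates: present segments in random order for independent coding, then write the codes to a CSV file that SPSS and similar packages can import. The segment texts are invented.

    import csv
    import random

    segments = ["I reread the sentence", "I guessed from context",
                "I skipped the word"]

    order = list(range(len(segments)))
    random.shuffle(order)            # random order: no carry-over bias

    codes = {}
    for idx in order:
        codes[idx] = input(f"Code for: {segments[idx]!r} > ")

    with open("coded_reports.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["segment_id", "segment", "code"])
        for idx in sorted(codes):
            writer.writerow([idx, segments[idx], codes[idx]])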
The article discusses the direct role of weather in the planning and successful execution of Japan's attack on Pearl Harbor, Hawaii, on December 7, 1941. It recalls that the weather was favorable for a bombing attack by Japanese planes because visibility was good. The weather also served as a convenient code for covert messages Japan sent to its overseas officials through weather reports aired before the day of the attack.
Mukerjee and Wu (2001) employed projective geometry theory to find the wordlength pattern of a regular mixed-level factorial design in terms of its complementary set, but only for the numbers of words of length 3 or 4. In this paper, by introducing the concept of a consulting design and drawing on the connection between factorial design theory and coding theory, we obtain combinatorial identities that relate the wordlength pattern of a regular mixed-level (2^r)2^n factorial design to that of its consulting design. Consequently, a general rule for identifying minimum aberration (2^r)2^n factorial designs through their consulting designs is established, improving and generalizing the related result in Mukerjee and Wu (2001).
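
The coding-theory connection can be illustrated in the pure two-level case: the defining contrast subgroup of a regular fraction is a binary linear code, and the wordlength pattern is its weight distribution. The Python sketch below computes the pattern of a textbook 2^(6-2) design from its generators; the paper's mixed-level (2^r)2^n setting is more involved and is not reproduced here.

    from itertools import product

    n = 6                                  # factors A..F
    generators = [                         # defining words as 0/1 vectors
        [1, 1, 1, 0, 1, 0],                # ABCE  (E = ABC)
        [0, 1, 1, 1, 0, 1],                # BCDF  (F = BCD)
    ]

    # Enumerate the defining contrast subgroup: all GF(2) combinations.
    words = set()
    for coeffs in product([0, 1], repeat=len(generators)):
        w = [0] * n
        for c, g in zip(coeffs, generators):
            if c:
                w = [(a + b) % 2 for a, b in zip(w, g)]
        if any(w):
            words.add(tuple(w))

    # Wordlength pattern A_i = number of defining words of length i.
    pattern = [0] * (n + 1)
    for w in words:
        pattern[sum(w)] += 1
    print("wordlength pattern A_3..A_6:", pattern[3:])   # -> [0, 3, 0, 0]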
A correction to the article 'The Receptors and Coding Logic for Bitter Taste,' by K.L. Mueller, M.A. Hoon, I. Erlenbach, J. Chandrashekar, C.S. Zuker, and N.J.P. Ryba, published in an earlier issue of the journal Nature, is presented.
The error probability of maximum-likelihood (ML) soft-decision decoded binary block codes rarely admits an exact closed form. In addition, for long codes ML decoding becomes prohibitively complex. Nevertheless, bounds on the performance of ML decoded systems provide insight into the effect of system parameters on overall performance, as well as a measure of the efficiency of the sub-optimal decoding methods used in practice. In this article, a comprehensive study of a number of lower and upper bounds on the error probability of ML decoding of binary codes over the AWGN channel is provided. The bounds considered here are based on the so-called Bonferroni-type inequalities, or are developed primarily in light of the geometrical structure of the underlying signal constellations. The interrelationships among the bounds are explored, and the currently tightest bounds at different noise levels are pointed out.
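
The simplest Bonferroni-type upper bound of this kind is the union bound, P_e <= sum over d of A_d Q(sqrt(2 d R Eb/N0)), for a code with weight enumerator A_d and rate R under BPSK signaling over the AWGN channel. The Python sketch below evaluates it for the (7,4) Hamming code, an illustrative choice.

    import math

    def Q(x):
        """Gaussian tail function."""
        return 0.5 * math.erfc(x / math.sqrt(2))

    # Weight enumerator A_d of the (7,4) Hamming code.
    A = {3: 7, 4: 7, 7: 1}
    n, k = 7, 4
    R = k / n

    for ebno_db in (2, 4, 6, 8):
        ebno = 10 ** (ebno_db / 10)
        p_union = sum(A_d * Q(math.sqrt(2 * d * R * ebno))
                      for d, A_d in A.items())
        print(f"Eb/N0 = {ebno_db} dB: union bound P_e <= {p_union:.3e}")

The union bound is tight at high signal-to-noise ratio but loosens badly near capacity, which is precisely the regime that motivates the tighter geometric bounds surveyed in the article.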