Fault-tolerant quantum computation with quantum error-correcting codes has been considerably developed over the past decade. However, difficult issues remain, particularly regarding resource requirements. To further improve fault-tolerant quantum computation, here we propose a soft-decision decoder for quantum error correction and detection by teleportation. This decoder can achieve almost optimal performance for the depolarizing channel. Applying this decoder to Knill's C-4/C-6 scheme for fault-tolerant quantum computation, which is one of the best schemes so far and relies heavily on error correction and detection by teleportation, we dramatically improve its performance. This leads to a substantial reduction of resources.
The sensory cortex has been interpreted as coding information rather than stimulus properties since Sokolov in 1960 showed increased response to an unexpected stimulus decrement. The motor cortex is also organized around expectation, coding the goal of an act rather than a set of muscle movements. Expectation drives not only immediate responses but also the very structure of the cortex, as demonstrated by development of receptive fields that mirror the structure of the visual world.
ISBN (digital): 9780819478528
ISBN (print): 9780819406484
In order to utilize digital images effectively, specific techniques are needed to reduce the number of bits required for their representation. This Tutorial Text provides the groundwork for understanding these image compression techniques and presents a number of different schemes that have proven useful. The algorithms discussed in this book are concerned mainly with the compression of still-frame, continuous-tone, monochrome and color images, but some of the techniques, such as arithmetic coding, have found widespread use in the compression of bilevel images. Both lossless (bit-preserving) and lossy techniques are considered. A detailed description of the compression algorithm proposed as the world standard (the JPEG baseline algorithm) is provided. The book contains approximately 30 pages of reconstructed and error images illustrating the effect of each compression technique on a consistent image set, thus allowing for a direct comparison of bit rates and reconstructed image quality. For each algorithm, issues such as quality vs. bit rate, implementation complexity, and susceptibility to channel errors are addressed. Information, in its many forms, is a valuable commodity in today's society, and the amount of information is increasing at a phenomenal rate. As a result, the ability to store, access, and transmit information in an efficient manner has become crucial. This is particularly true in the case of digital images. A large number of bits is typically required to represent even a single digital image, and with the rapid advances in sensor technology and digital electronics, this number grows larger with each new generation of products. Furthermore, the number of digital images created each day increases as more applications are found.
The branch of digital image processing that deals with this problem is called image compression.
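As a rough illustration of the bit-budget argument above, a minimal sketch (the data and function name here are illustrative, not from the book): the zeroth-order entropy of an image's gray-level histogram gives a lower bound, in bits per pixel, on what any memoryless lossless code can achieve.

```python
import math
from collections import Counter

def entropy_bits_per_symbol(pixels):
    """Zeroth-order entropy: a lower bound (bits/pixel) for memoryless lossless coding."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Tiny synthetic "image": a skewed gray-level distribution compresses well.
pixels = [0] * 12 + [128] * 3 + [255] * 1
h = entropy_bits_per_symbol(pixels)
print(f"{h:.3f} bits/pixel vs 8 bits/pixel uncompressed")
```

Real codecs exploit inter-pixel correlation as well, so practical bit rates for natural images can fall well below this memoryless bound.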
The dynamics of coded information over Bloch channels is investigated. We show that coded information is transmitted with high accuracy over the Bloch channel by increasing the absolute equilibrium value of the information carrier or by decreasing the relaxation-time ratio. The robustness of coded information in maximally and partially entangled states is discussed. It is shown that the maximally entangled states are more robust than the partially entangled states over these types of channels. The dynamics of the local and non-local information is investigated for different values of the channel parameters and the initial state settings. It is found that by increasing the absolute equilibrium values for both qubits, the local information of both qubits decreases faster and, consequently, the information gained by an eavesdropper increases.
Unimpaired readers process words incredibly fast, and hence it was assumed that top-down processing, such as predicting upcoming words, would be too slow to play an appreciable role in reading. This runs counter to the major postulate of the predictive coding framework, namely that our brain continually predicts probable upcoming sensory events. This means it may generate predictions about the probable upcoming word during reading (dubbed forward inferences). To assess these contradictory assumptions, we evaluated the effect of the predictability of words in sentences on eye movement control during silent reading. Participants were a group of fluent (i.e., fast) and a group of speed-impaired (i.e., slow) readers. The findings indicate that fast readers generate forward inferences, whereas speed-impaired readers do so to a reduced extent, indicating a significant role of predictive coding in fluent reading.
Following on from two previous papers discussing the formation of primordial black holes in the early universe, we present here results from an in-depth investigation of the extent to which primordial black hole formation in the radiative era can be considered as an example of the critical collapse phenomenon. We focus on initial supra-horizon-scale perturbations of a type which could have come from inflation, with only a growing component and no decaying component. In order to study perturbations with amplitudes extremely close to the supposed critical limit, we have modified our previous computer code with the introduction of an adaptive mesh refinement scheme. This has allowed us to follow black hole formation from perturbations whose amplitudes are up to eight orders of magnitude closer to the threshold than we could before. We find that scaling-law behaviour continues down to the smallest black hole masses that we are able to follow, and we see no evidence of the shock production that has been reported in some previous studies, where it led to a breaking of the scaling-law behaviour at small black hole masses. We attribute this difference to the different initial conditions used. In addition to the scaling law, we also present other features of the results which are characteristic of critical collapse in this context.
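The scaling law referred to here is the critical-collapse relation M ≈ K(δ − δ_c)^γ, where δ is the perturbation amplitude, δ_c the formation threshold, and γ ≈ 0.36 for a radiation fluid. As an illustrative sketch (the synthetic data and parameter values below are ours, not the paper's), the exponent can be recovered from a least-squares fit of log M against log(δ − δ_c):

```python
import math

# Synthetic (illustrative) data obeying M = K * (delta - delta_c)**gamma.
K, delta_c, gamma_true = 1.0, 0.45, 0.36
deltas = [delta_c + 10**(-k) for k in range(1, 9)]  # amplitudes down to 1e-8 above threshold
masses = [K * (d - delta_c)**gamma_true for d in deltas]

# Slope of log M vs log (delta - delta_c) recovers the critical exponent gamma.
xs = [math.log(d - delta_c) for d in deltas]
ys = [math.log(m) for m in masses]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
gamma_fit = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar)**2 for x in xs)
print(f"fitted gamma = {gamma_fit:.3f}")
```

In practice each mass sample comes from a full numerical collapse simulation, which is why resolving amplitudes eight orders of magnitude from threshold requires adaptive mesh refinement.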
Differential-I/O configurations can substantially reduce power and ground noise internal to an IC (references 1 and 2). The resulting circuits require fewer power and ground pins than might otherwise be necessary. Unfortunately, differential signaling doubles the number of I/O drivers, BGA balls, and pc-board interconnects. Specialized data codes can reduce power and ground noise without doubling the number of interconnects. A data code judiciously assigns to each M-bit binary word a unique binary pattern (a code word) that it selects from a larger field of N-bit possibilities, in which N is greater than M. Each individual code word has a weight, w, which equals the number of ones in that word. For an N-bit word, the possible weights range from zero to N.
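A minimal sketch of the counting involved (identifiers below are ours, not the article's): restricting code words to a single fixed weight w yields C(N, w) candidates, which must number at least 2^M to cover every M-bit data word.

```python
from itertools import combinations
from math import comb

def constant_weight_words(n, w):
    """All n-bit words containing exactly w ones (weight-w code words)."""
    words = []
    for ones in combinations(range(n), w):
        word = 0
        for i in ones:
            word |= 1 << i  # set each chosen bit position
        words.append(word)
    return words

# Example: N = 6, w = 3 gives C(6, 3) = 20 code words -- enough to encode
# any M = 4-bit data word (2**4 = 16 <= 20), with every code word balanced.
words = constant_weight_words(6, 3)
assert len(words) == comb(6, 3) == 20
assert all(bin(word).count("1") == 3 for word in words)
```

Keeping every code word at the same weight keeps the total switching current drawn through the supply pins nearly constant, which is the noise-reduction mechanism the article describes.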
By closely connecting research and development Google is able to conduct experiments on an unprecedented scale, often resulting in new capabilities for the company.