The rate-distortion optimisation of redundant picture-based multiple description coding is studied, and the end-to-end distortion is minimised by fully exploring the redundancy allocation across multiple redundant pictures. The number of redundant pictures and their encoding parameters are decided adaptively for each frame, which differs from existing description generation methods that mainly focus on determining the amount of inserted redundancy with a fixed single redundant picture. The relationship between the encoding parameters of the redundant pictures of the same frame is derived from the classification and formulation of the relevant distortion and rate terms. To accelerate the encoding process, the statistical relationships between mismatch distortion, bit rate and encoding parameter are exploited for a fast decision on the number of redundant pictures. Experimental results show that the proposed method improves the overall end-to-end rate-distortion performance compared with the state-of-the-art multiple description coding method utilising redundant coding.
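The decision described above amounts to a per-frame Lagrangian search over the number of redundant pictures and their encoding parameter. A minimal sketch follows, assuming hypothetical rate and distortion estimators (`redundant_rate`, `end_to_end_distortion`) that stand in for the paper's models and are not taken from it.

```python
# Hedged sketch of per-frame redundant-picture selection by Lagrangian
# rate-distortion cost; the helper callables are assumptions, not the
# paper's published method.

def select_redundant_coding(frame, counts, qps, lam,
                            redundant_rate, end_to_end_distortion):
    """Return the (n, qp) pair minimising J = D_end_to_end + lam * R_redundant."""
    best_cost, best_n, best_qp = float("inf"), None, None
    for n in counts:            # candidate numbers of redundant pictures
        for qp in qps:          # candidate encoding parameter for those pictures
            r = redundant_rate(frame, n, qp)         # bits spent on redundancy
            d = end_to_end_distortion(frame, n, qp)  # includes mismatch distortion
            cost = d + lam * r
            if cost < best_cost:
                best_cost, best_n, best_qp = cost, n, qp
    return best_n, best_qp
```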
The covering radius problem is a question in coding theory concerned with finding the minimum radius r such that, given a code that is a subset of an underlying metric space, balls of radius r around its codewords cover the entire metric space. Klapper (IEEE Trans. Inform. Theory 43:1372-1377, 1997) introduced a code parameter, called the multicovering radius, which is a generalization of the covering radius. In this paper, we introduce an analogue of the multicovering radius for permutation codes (Des. Codes Cryptogr. 41:79-86, cf. 2006) and for codes of perfect matchings (cf. 2012). We apply probabilistic tools to give some lower bounds on the multicovering radii of these codes. In the process of obtaining these results, we also correct an error in the proof of the lower bound of the covering radius that appeared in (Des. Codes Cryptogr. 41:79-86, cf. 2006). We conclude with a discussion of the multicovering radius problem in an even more general context, which offers room for further research.
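For intuition, the ordinary (1-)covering radius of a permutation code under Hamming distance can be computed by brute force on tiny examples; the sketch below is purely illustrative and feasible only for very small n. Klapper's m-covering radius generalises this to the smallest r such that, for every set of m words, some single codeword lies within distance r of all of them.

```python
# Brute-force covering radius of a toy permutation code under Hamming
# distance; illustrative only, exponential in n.
from itertools import permutations

def hamming(p, q):
    """Number of positions where two permutations disagree."""
    return sum(a != b for a, b in zip(p, q))

def covering_radius(code, n):
    """Smallest r such that every permutation of {0, ..., n-1} lies within
    Hamming distance r of some codeword."""
    return max(min(hamming(x, c) for c in code)
               for x in permutations(range(n)))

code = [(0, 1, 2, 3), (1, 0, 3, 2)]   # a toy code in the symmetric group S_4
print(covering_radius(code, 4))       # -> 4 for this particular pair
```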
The article discusses the computer company Apple Inc. and its reported discovery of a Secure Sockets Layer (SSL) vulnerability involving the firm's iOS 6.0 operating system in 2014, focusing on man-in-the-middle attacks and various software bugs. An apparent short circuit in an SSL/TLS (Transport Layer Security) handshake algorithm is mentioned, along with the U.S. National Vulnerability Database and the C programming language. Computer coding standards and the duplication of 'goto' statements are examined.
A developmental model of algorithmic concepts is proposed here for program comprehension. Unlike traditional approaches, which cannot go beyond their predesigned representations, this model develops its internal representation autonomously, from chaos into algorithmic concepts, by mimicking concept formation in the brain within an uncontrolled environment consisting of program source code from the Internet. The developed concepts can be employed to identify which algorithm a program implements. The accuracy of this identification reached 97.15% in a given experiment.
This paper presents a new near-lossless compression algorithm for hyperspectral images based on distributed source coding. The algorithm operates on blocks that have the same location and size in each band. Because block importance varies along the spectral direction, an adaptive rate allocation algorithm that weights the energy of each block under the target rate constraint is introduced. A simple linear prediction model is employed to construct the side information of each block for Slepian-Wolf coding. The relationship between the quantization step size and the allocated rate of each block is determined under the condition of correct reconstruction with the side information at the Slepian-Wolf decoder. Slepian-Wolf coding is then performed on the quantized version of each block. Experimental results show that the performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms, making it appropriate for on-board compression. (C) 2014 Elsevier Ltd. All rights reserved.
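As an illustration of energy-weighted allocation, the sketch below splits a bit budget across blocks in proportion to the logarithm of block energy; the log-proportional rule and the helper names are assumptions for illustration, not the allocation formula of the paper.

```python
# Hedged sketch of energy-weighted rate allocation across blocks; the
# log-proportional weighting is an assumed stand-in, not the paper's rule.
import numpy as np

def allocate_rates(blocks, total_bits):
    """Give each block a share of total_bits that grows with its energy,
    so more important (higher-energy) blocks receive more bits."""
    energies = np.array([np.sum(b.astype(np.float64) ** 2) for b in blocks])
    weights = np.log1p(energies)
    weights /= weights.sum()
    return np.floor(weights * total_bits).astype(int)

rng = np.random.default_rng(0)
blocks = [rng.integers(0, 4096, size=(16, 16)) for _ in range(8)]  # toy blocks
print(allocate_rates(blocks, total_bits=20_000))
```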
Following the processing and validation of JEFF-3.1 performed in 2006 and presented at ND2007, and as a consequence of the latest update of this library (JEFF-3.1.2) in February 2012, a new processing and validation of the JEFF-3.1.2 cross-section library is presented in this paper. The processed library in ACE format at ten different temperatures was generated with the NJOY-99.364 nuclear data processing system. In addition, NJOY-99 inputs are provided to generate the PENDF, GENDF, MATXSR and BOXER formats. The library has undergone strict QA procedures, being compared with other available libraries (e.g. ENDF/B-VII.1) and with other processing codes such as the PREPRO-2000 codes. A set of 119 criticality benchmark experiments taken from ICSBEP-2010 has been used for validation purposes.
Prompt fission neutron spectra (PFNS) have proven to have a significant effect on the criticality of selected benchmarks, in some cases as important as that of cross-sections. Therefore, a precise determination of the uncertainties in PFNS is desired. Existing PFNS evaluations in nuclear data libraries have so far relied almost exclusively on the Los Alamos model. However, deviations of the evaluated data from available experiments have been noticed at both low and high neutron emission energies. New experimental measurements of PFNS have recently been published, thus demanding new evaluations. The present work describes the effort of integrating the Kalman and EMPIRE codes in such a way as to allow for parameter fitting of PFNS models. The first results are shown for the major actinides for two different PFNS models (Kornilov and Los Alamos). This represents the first step towards a reevaluation of both cross-section and fission spectrum data considering both microscopic and integral experimental data for the major actinides.
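The parameter fitting mentioned above is, at its core, a Bayesian/generalized least-squares update of model parameters using sensitivities of the calculated spectra. The sketch below shows only that generic update; it is a schematic stand-in and not the actual KALMAN/EMPIRE implementation.

```python
# Generic Kalman/GLS parameter update of the kind used to fit nuclear model
# parameters to experimental data; schematic only, not the KALMAN code itself.
import numpy as np

def kalman_update(p, P, S, y_exp, y_calc, V):
    """p: prior parameters, P: prior parameter covariance,
    S: sensitivity matrix d(observable)/d(parameter),
    y_exp, y_calc: measured and calculated observables (e.g. PFNS points),
    V: experimental covariance. Returns the posterior (p', P')."""
    K = P @ S.T @ np.linalg.inv(S @ P @ S.T + V)   # gain matrix
    p_post = p + K @ (y_exp - y_calc)              # updated parameters
    P_post = P - K @ S @ P                         # reduced parameter covariance
    return p_post, P_post
```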
An improved combining method is proposed for constructing quasi-cyclic low-density parity-check (QC-LDPC) codes with large girth and few short cycles. QC-LDPC codes with large girth can be obtained by carefully designing one component code with large girth. By properly designing the other component codes, many QC-LDPC codes with very few short cycles can be constructed. Simulations show that, compared with Chinese remainder theorem based QC-LDPC codes and progressive edge-growth based QC-LDPC codes, the proposed QC-LDPC codes have far fewer short cycles and better performance.
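Counting short cycles is how such constructions are judged. As a generic illustration (not the combining construction of this letter), the sketch below expands an exponent matrix into a parity-check matrix of circulant permutation blocks and counts the length-4 cycles of its Tanner graph.

```python
# Hedged sketch: expand a QC-LDPC exponent matrix into H and count 4-cycles;
# a generic utility, not the proposed combining method.
import numpy as np
from itertools import combinations

def expand(E, z):
    """E[i, j] >= 0 is the cyclic shift of a z x z identity block;
    E[i, j] == -1 marks an all-zero block."""
    m, n = E.shape
    H = np.zeros((m * z, n * z), dtype=int)
    I = np.eye(z, dtype=int)
    for i in range(m):
        for j in range(n):
            if E[i, j] >= 0:
                H[i*z:(i+1)*z, j*z:(j+1)*z] = np.roll(I, int(E[i, j]), axis=1)
    return H

def count_4_cycles(H):
    """Every pair of check rows sharing t variable nodes adds C(t, 2)
    length-4 cycles to the Tanner graph."""
    total = 0
    for r1, r2 in combinations(range(H.shape[0]), 2):
        t = int(np.dot(H[r1], H[r2]))   # columns where both rows have a 1
        total += t * (t - 1) // 2
    return total

E = np.array([[0, 1, 2], [0, 2, 4]])   # toy exponent matrix
print(count_4_cycles(expand(E, z=5)))  # prints 0: girth > 4 for this toy case
```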
Pin power peaking in a VVER-1000 fuel assembly, together with its sensitivity and uncertainty, was analyzed with the TSUNAMI-2D code. Several types of fuel assemblies were considered; they differ in the number and position of gadolinium fuel pins. The calculations were repeated for several fuel compositions obtained by fuel depletion calculations. The results are quantified sensitivity data, which can be used for enrichment profiling.
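Sensitivity/uncertainty tools of this kind propagate nuclear-data covariances to a response through the familiar "sandwich rule"; the snippet below shows only that generic formula with toy numbers and is not the TSUNAMI-2D implementation.

```python
# Generic sandwich rule, relative variance = S C S^T, for a response such as
# pin power peaking; toy sensitivities and covariances, not TSUNAMI-2D output.
import numpy as np

def response_rel_variance(S, C):
    """S: relative sensitivity coefficients of the response to the data,
    C: relative covariance matrix of the data. Returns S C S^T."""
    S = np.asarray(S, dtype=float)
    return float(S @ C @ S)

S = np.array([0.8, -0.2, 0.05])                  # toy sensitivity coefficients
C = np.diag(np.array([0.02, 0.01, 0.03]) ** 2)   # toy (diagonal) relative covariance
print(np.sqrt(response_rel_variance(S, C)))      # relative standard deviation
```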
The need for cross sections far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of nuclear reaction models. Traditionally, cross-section predictions rely on more or less phenomenological approaches, depending on parameters adjusted to generally scarce experimental data or deduced from systematic relations. While such predictions are expected to be reliable for nuclei not too far from the experimentally accessible regions, they are clearly questionable when dealing with exotic nuclei. To improve the predictive power of nuclear model codes, one should use more fundamental approaches, relying on sound physical bases, when determining the nuclear inputs (ingredients) required by the reaction model. Thanks to the high computing power available today, all of these major ingredients have been determined microscopically or semi-microscopically, starting from the information provided by a Skyrme effective (and efficient) nucleon-nucleon interaction. These microscopic inputs have shown their ability to compete with the traditional classical methods, both in their own right and when used in actual nuclear cross-section calculations. We discuss the current efforts to improve the predictive power of such microscopic inputs by using a more coherent nucleon-nucleon interaction, namely the finite-range Gogny force.