The capability of three-dimensional two-fluid codes to simulate local boiling flow processes has been assessed. Boiling flow experiments of Roy et al. [Roy, R.P., Kang, S., Zarate, J.A., Laporta, A., 2002. Turbulent subcooled boiling flow-experiments and simulations, J. Heat Transfer 124, 73-93] and Lee et al. [Lee, T.H., Park, G.C., Lee, D.J., 2002. Local flow characteristics of subcooled boiling flow of water in a vertical concentric annulus. Int. J. Multiphase Flow 28, 1351-1368], both performed in vertical annular channels, were used as the experimental benchmark data set. The boiling flow is strongly affected by local mechanisms in the boundary layer near the heated wall. In this paper, the influence of near-wall modelling on the distribution of flow parameters in flow boiling has been analyzed. A generic wall function model for 3D two-fluid codes, based on a surface roughness analogy, has been proposed in place of the commonly used single-phase log-law model. The new model has been implemented in the code CFX-4.4. In general, better agreement of phase velocities with experimental data was obtained with the new model. The presented results show that the influence of nucleating bubbles on the near-wall velocity profile should be taken into account. The second goal of this paper is to compare NEPTUNE-CFD simulations against CFX-4.4 results and experimental data. (C) 2008 Elsevier B.V. All rights reserved.
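For context, a minimal sketch of the single-phase log-law wall function that the abstract identifies as the commonly used baseline (the proposed roughness-analogy model itself is not reproduced here; the von Karman constant and log-law coefficient below are typical default values, not values taken from the paper):

```python
import math

def u_plus_log_law(y_plus, kappa=0.41, E=9.8):
    """Dimensionless near-wall velocity from the standard single-phase
    log-law wall function: u+ = (1/kappa) * ln(E * y+).
    kappa (von Karman constant) and E are common default coefficients."""
    return math.log(E * y_plus) / kappa

# Example: dimensionless velocity at y+ = 100 in wall units
print(u_plus_log_law(100.0))  # ~16.8
```

A roughness-analogy wall function, by contrast, shifts this profile downward near the heated wall to account for nucleating bubbles acting like surface roughness; the paper's specific correlation is not reproduced here.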
In a series of experiments, color coding, intensity coding, and decluttering were compared to assess their potential benefits for accessing information from electronic map displays. Participants viewed electronic battlefield maps containing 5 classes of information discriminable by color or intensity or, in the decluttering condition, displayed or removed entirely by a key press. Participants were asked questions requiring them to focus on objects within a class (objects presented at the same color or intensity) or to integrate data between objects in different classes (objects presented at different colors and intensities). The results suggested that the benefits of color and intensity coding lie in segregating the visual field rather than in calling attention to the objects presented at a certain color or intensity. Interactivity proved to be a disadvantage; the time cost of information retrieval outweighed the time benefits of presenting less information on the display or even allowing map users to customize their displays. Potential applications of this research include a cost-benefit analysis for the use of 3 attentional filtering techniques and an attempt to quantitatively measure map complexity.
Ever since the birth of coding theory almost 60 years ago, researchers have been pursuing the elusive goal of constructing the "best codes," whose encoding introduces the minimum possible redundancy for the level of noise they can correct. In this article, we survey recent progress in list decoding that has led to efficient error-correction schemes with an optimal amount of redundancy, even against worst-case errors caused by a potentially malicious channel. To correct a proportion rho (say 20%) of worst-case errors, these codes need only close to a proportion rho of redundant symbols; information-theoretically, the redundancy cannot possibly be any lower. This new method holds the promise of correcting a factor of two more errors compared to the conventional algorithms currently in use in diverse everyday applications.
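The factor-of-two claim can be checked with a line of arithmetic: unique decoding of a fraction rho of worst-case errors requires the minimum distance to exceed 2*rho, hence roughly 2*rho redundancy, while capacity-achieving list-decodable codes need only about rho. A sketch with illustrative numbers:

```python
def redundancy_needed(rho, list_decoding=True):
    """Approximate fraction of redundant symbols needed to correct a
    fraction `rho` of worst-case errors.
    - list decoding (capacity-achieving codes): redundancy -> rho
    - conventional unique decoding: minimum distance must exceed 2*rho,
      so redundancy -> 2 * rho (achieved, e.g., by Reed-Solomon codes)."""
    return rho if list_decoding else 2 * rho

rho = 0.20  # 20% worst-case errors, as in the abstract's example
print(redundancy_needed(rho, list_decoding=True))   # 0.2
print(redundancy_needed(rho, list_decoding=False))  # 0.4
```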
Let C be a binary linear block code of length n, dimension k and minimum Hamming distance d over GF(2)^n. Let d^⊥ denote the minimum Hamming distance of the dual code of C. Let ε : GF(2)^n → {-1, 1}^n be the component-wise mapping ε(v_i) := (-1)^{v_i}, for v = (v_1, v_2, ..., v_n) ∈ GF(2)^n. Finally, for p < n, let Φ(C) be a p × n random matrix whose rows are obtained by mapping a uniformly drawn set of p codewords of C under ε. It is shown that, for d^⊥ large enough and fixed y := p/n ∈ (0, 1), as n → ∞ the empirical spectral distribution of the Gram matrix of (1/√n)Φ(C) resembles that of a random i.i.d. Rademacher matrix (i.e., the Marchenko-Pastur distribution). Moreover, an explicit asymptotic uniform bound on the distance between the empirical spectral distribution of the Gram matrix of (1/√n)Φ(C) and the Marchenko-Pastur distribution, as a function of y and d^⊥, is presented.
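A numerical sketch of the statement, using a random generator matrix as a stand-in for a code with large dual distance (an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, p = 1024, 512, 256                     # length, dimension, sampled codewords
G = rng.integers(0, 2, size=(k, n))          # random generator matrix over GF(2)
msgs = rng.integers(0, 2, size=(p, k))       # p random messages
codewords = (msgs @ G) % 2                   # p codewords of the random code
Phi = 1.0 - 2.0 * codewords                  # component-wise map 0 -> +1, 1 -> -1

gram = (Phi @ Phi.T) / n                     # Gram matrix of (1/sqrt(n)) * Phi
eigs = np.linalg.eigvalsh(gram)

y = p / n
mp_support = ((1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2)
print("eigenvalue range:", eigs.min(), eigs.max())
print("Marchenko-Pastur support for y = %.2f:" % y, mp_support)
```

For these sizes the empirical eigenvalues should already concentrate close to the Marchenko-Pastur support [(1-√y)^2, (1+√y)^2], mimicking an i.i.d. Rademacher matrix.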
Consecutive images carry extra information that is not stored in any single one of them: together they form a gradually changing data set in which the dynamic changes are preserved in full. Such a gradually changing data set occupies a large amount of memory unless its identical parts are overlaid. Based on linear quadtree structures and overlapping concepts, a new representation is proposed for storing a sequence of similar binary images. Experiments have been performed to compare this representation with other overlapping structures, and they show our representation to be better than the other representations. (C) 1999 John Wiley & Sons, Inc.
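The overlapping idea can be illustrated with a small pointer-based sketch in which identical subtrees of consecutive binary frames are interned and shared; this only shows subtree sharing, not the linear-quadtree encoding actually used in the paper:

```python
def build_shared_quadtree(img, x, y, size, pool):
    """Build a quadtree node for the binary sub-image of side `size` at
    (x, y), interning identical subtrees in `pool` so that consecutive
    similar images share storage."""
    vals = {img[y + dy][x + dx] for dy in range(size) for dx in range(size)}
    if len(vals) == 1:                       # uniform block -> leaf
        node = ('leaf', vals.pop())
    else:
        h = size // 2
        node = ('node',
                build_shared_quadtree(img, x,     y,     h, pool),
                build_shared_quadtree(img, x + h, y,     h, pool),
                build_shared_quadtree(img, x,     y + h, h, pool),
                build_shared_quadtree(img, x + h, y + h, h, pool))
    return pool.setdefault(node, node)       # reuse an identical existing subtree

# Two consecutive 4x4 frames differing in a single pixel
frame1 = [[0, 0, 1, 1],
          [0, 0, 1, 1],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
frame2 = [row[:] for row in frame1]
frame2[3][3] = 1

pool = {}
t1 = build_shared_quadtree(frame1, 0, 0, 4, pool)
t2 = build_shared_quadtree(frame2, 0, 0, 4, pool)
print(len(pool))          # far fewer stored nodes than two independent trees
print(t1[1] is t2[1])     # the unchanged NW quadrant is stored only once
```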
Agreement statistics play an important role in the evaluation of coding schemes for discourse and dialogue. Unfortunately there is a lack of understanding regarding appropriate agreement measures and how their results should be interpreted. In this article we describe the role of agreement measures and argue that only chance-corrected measures that assume a common distribution of labels for all coders are suitable for measuring agreement in reliability studies. We then provide recommendations for how reliability should be inferred from the results of agreement statistics.
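A standard example of a chance-corrected measure that assumes a common label distribution for all coders is Scott's pi (generalized to many coders by Fleiss' kappa). A minimal two-coder sketch, with hypothetical dialogue-act labels:

```python
from collections import Counter

def scotts_pi(coder_a, coder_b):
    """Chance-corrected agreement assuming one common label distribution
    for both coders (Scott's pi): pi = (Ao - Ae) / (1 - Ae), where Ao is
    observed agreement and Ae is expected agreement under the pooled
    label distribution."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    ao = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    pooled = Counter(coder_a) + Counter(coder_b)          # common distribution
    ae = sum((c / (2 * n)) ** 2 for c in pooled.values())
    return (ao - ae) / (1 - ae)

# Hypothetical labels from two coders annotating five utterances
a = ['statement', 'question', 'statement', 'backchannel', 'statement']
b = ['statement', 'question', 'backchannel', 'backchannel', 'statement']
print(round(scotts_pi(a, b), 3))  # ~0.677
```

By contrast, Cohen's kappa estimates a separate label distribution per coder, which is one of the modelling choices the article argues against for reliability studies.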
Code-based cryptography is one of the few mathematical techniques that enable the construction of public-key cryptosystems that are secure against an adversary equipped with a quantum computer. The McEliece public-key encryption scheme and its variants are candidates for a post-quantum public-key encryption standard.
In this paper we introduce Ouroboros, a new family of key exchange protocols based on coding theory. The protocols offer a middle ground between cryptosystems based on QC-MDPC codes, which feature small parameter sizes but whose security reduces to two problems (the syndrome decoding problem and the indistinguishability of the code), and the HQC protocol, which features larger parameters but whose security reduces to the syndrome decoding problem only. Ouroboros features a reduction to the syndrome decoding problem with only a small overhead compared to the QC-MDPC-based cryptosystems. The approach is based on an ideal structure and also works for the rank metric. This yields a simple, secure and efficient approach to key exchange, the Ouroboros family of protocols. For the Hamming metric we obtain the same type of parameters (and almost the same simple decoding) as for MDPC-based cryptosystems, but with a security reduction to decoding random quasi-cyclic codes in the Random Oracle Model. This represents a reduction of up to 38% in the public key size compared to HQC, for the most secure parameters. For the rank metric, we obtain better parameters than for RQC, saving up to 31% on the public key for the most secure parameter set by using non-homogeneous errors in Ouroboros. In this full version, the protocol and decoding algorithm have been slightly improved, additional details are given in the security proof, and the protocol is fully described for the rank metric.
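For readers unfamiliar with the underlying hardness assumption, the following is a toy, brute-force illustration of the syndrome decoding problem itself, with tiny insecure parameters; it is unrelated to the actual protocol and its quasi-cyclic structure:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Syndrome decoding problem: given a random parity-check matrix H and a
# syndrome s = H e for an unknown error e of weight w, find a weight-w e
# with that syndrome. Real code-based schemes use sizes that make this
# search infeasible.
n, k, w = 16, 8, 2
H = rng.integers(0, 2, size=(n - k, n))
e_true = np.zeros(n, dtype=int)
e_true[rng.choice(n, size=w, replace=False)] = 1
s = (H @ e_true) % 2

def solve_sdp(H, s, w):
    """Exhaustive search over all weight-w error patterns (exponential cost)."""
    n = H.shape[1]
    for support in itertools.combinations(range(n), w):
        e = np.zeros(n, dtype=int)
        e[list(support)] = 1
        if np.array_equal((H @ e) % 2, s):
            return e
    return None

e_rec = solve_sdp(H, s, w)
print(np.array_equal((H @ e_rec) % 2, s))  # True: a weight-w solution was found
```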
Distributed storage systems for large-scale applications typically use replication for reliability. Recently, erasure codes have been used to reduce the large storage overhead while increasing data reliability. A main limitation of off-the-shelf erasure codes is their high repair cost during single-node failure events. A major open problem in this area has been the design of codes that 1) are repair-efficient and 2) achieve arbitrarily high data rates. In this paper, we explore the repair metric of locality, which corresponds to the number of disk accesses required during a single node repair. Under this metric, we characterize an information-theoretic tradeoff that binds together the locality, the code distance, and the storage capacity of each node. We show the existence of optimal locally repairable codes (LRCs) that achieve this tradeoff. The achievability proof uses a locality-aware flow-graph gadget, which leads to a randomized code construction. Finally, we present an optimal and explicit LRC that achieves arbitrarily high data rates. Our locality-optimal construction is based on simple combinations of Reed-Solomon blocks.
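The locality metric can be illustrated with a toy layout using XOR local parities (the paper's explicit construction combines Reed-Solomon blocks; this sketch only shows why repairing one node touches r blocks instead of k):

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Toy layout: k = 4 data blocks split into two local groups of size 2, each
# protected by one local XOR parity (global parities omitted for brevity).
data = [b'AAAA', b'BBBB', b'CCCC', b'DDDD']
groups = [[0, 1], [2, 3]]                       # indices of each local group
local_parity = [xor_blocks([data[i] for i in g]) for g in groups]

def repair(lost, data, groups, local_parity):
    """Repair one lost data block by reading only its local group
    (2 disk accesses here) instead of all k = 4 data blocks."""
    g = next(gi for gi, idx in enumerate(groups) if lost in idx)
    survivors = [data[i] for i in groups[g] if i != lost]
    return xor_blocks(survivors + [local_parity[g]])

print(repair(2, data, groups, local_parity) == data[2])   # True
```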
In applications of dynamic programming, a certain number of alternatives (paths) are explored and, as more and more information is gathered, some paths may merge. In Viterbi decoding, the number of new paths created is equal to the number of paths discarded. In this paper we present an architecture that can be used to store and update the paths dynamically. It consists of a mapping of the Viterbi decoding trellis onto a 2D array of simple cells. We show how the path storage can be efficiently implemented in VLSI.
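A software sketch of the idea, keeping a register-exchange style path memory with one survivor row per trellis state, for a toy rate-1/2, constraint-length-3 convolutional code (hypothetical parameters, not the paper's VLSI architecture):

```python
# Toy rate-1/2, constraint-length-3 convolutional code, generators (7, 5) octal.
# Encoder state = (previous input bit, input bit before that).

def encode(bits):
    s = (0, 0)
    out = []
    for u in bits:
        out += [u ^ s[0] ^ s[1], u ^ s[1]]      # generators 111 and 101
        s = (u, s[0])
    return out

def viterbi(received):
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: (0 if s == (0, 0) else float('inf')) for s in states}
    # 2D path storage: one survivor path (row) per trellis state.
    paths = {s: [] for s in states}
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric, new_paths = {}, {}
        for s in states:                         # next state s = (u, previous bit)
            u = s[0]
            best = None
            for prev in ((s[1], 0), (s[1], 1)):  # possible predecessor states
                expect = [u ^ prev[0] ^ prev[1], u ^ prev[1]]
                m = metric[prev] + sum(a != b for a, b in zip(expect, r))
                if best is None or m < best[0]:
                    best = (m, prev)
            new_metric[s] = best[0]
            # Register exchange: copy the winning survivor row and extend it;
            # the losing path into this state is discarded.
            new_paths[s] = paths[best[1]] + [u]
        metric, paths = new_metric, new_paths
    return paths[min(states, key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0]                        # message including two tail zeros
coded = encode(msg)
coded[3] ^= 1                                   # flip one channel bit
print(viterbi(coded) == msg)                    # True: the single error is corrected
```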