Neutron flux non-uniformity and gradients of the neutron current, with the corresponding changes in the power (fission-rate) distribution, can be root causes of fuel failure. Such a situation can be expected in the vicinity of certain core heterogeneities and construction materials. Since the needed data cannot be obtained from a nuclear power plant (NPP), results of benchmark-type experiments performed on the light-water, zero-power research reactor LR-0 were used to investigate this phenomenon. Attention was focused on determining spatial power distribution changes in fuel assemblies (FAs): (i) containing fuel rods (FRs) with Gd burnable absorber in WWER-440 and WWER-1000 type cores; (ii) neighboring the core blanket and dummy steel assembly simulators on the periphery of the WWER-440 standard and low-leakage type cores, respectively; (iii) neighboring the baffle in WWER-1000 type cores; and (iv) neighboring a control rod (CR) in WWER-440 type cores, namely (a) the power peak in the axial power distribution in peripheral FRs of adjacent FAs near the area between the CR fuel part and its butt joint to the CR absorbing part, and (b) the decrease in the radial power distribution in FRs near the CR absorbing part. An overview of relevant experimental results from reactor LR-0 and some information concerning leaking FAs at NPP Temelin are presented. The obtained data can be used for code validation and, subsequently, for investigation of fuel failure occurrence. (C) 2013 Elsevier B.V. All rights reserved.
The condition-based approach identifies sets of input vectors, called conditions, for which it is possible to design an asynchronous protocol solving a distributed problem despite process crashes. This paper establishes a direct correlation between distributed agreement problems and error-correcting codes. In particular, crash failures in distributed agreement problems correspond to erasure failures in error-correcting codes, and Byzantine and value domain faults correspond to corruption errors. This correlation is exemplified by concentrating on two well-known agreement problems, namely consensus and interactive consistency, in the context of the condition-based approach. Specifically, the paper presents the following results. First, it shows that the conditions that allow interactive consistency to be solved despite f_c crashes and f_e value domain faults correspond exactly to the set of error-correcting codes capable of recovering from f_c erasures and f_e corruptions. Second, the paper proves that consensus can be solved despite f_c crash failures iff the condition corresponds to a code whose Hamming distance is f_c + 1, and Byzantine consensus can be solved despite f_b Byzantine faults iff the Hamming distance of the code is 2f_b + 1. Finally, the paper uses the above relations to establish several results in distributed agreement that are derived from known results in error-correcting codes, and vice versa.
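The stated thresholds are easy to check on a toy condition. Below is a minimal Python sketch (the helper names and the example condition are illustrative, not from the paper) that treats a condition as a code, computes its minimum Hamming distance d, and reads off the tolerated fault counts f_c = d - 1 and f_b = (d - 1)/2 per the iff characterizations above.

```python
from itertools import combinations

def hamming_distance(u, v):
    """Number of positions in which two input vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def min_distance(condition):
    """Minimum pairwise Hamming distance of a condition (set of input vectors)."""
    return min(hamming_distance(u, v) for u, v in combinations(condition, 2))

# A repetition-style condition over 4 processes: all-same input vectors.
condition = [(0, 0, 0, 0), (1, 1, 1, 1)]
d = min_distance(condition)

# Thresholds stated in the abstract:
#   consensus tolerates f_c crashes           iff d >= f_c + 1
#   Byzantine consensus tolerates f_b faults  iff d >= 2*f_b + 1
print("min distance:", d)                          # 4
print("max crash faults f_c:", d - 1)              # 3
print("max Byzantine faults f_b:", (d - 1) // 2)   # 1
```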
In 2012, Lyubashevsky introduced a new framework for building lattice-based signature schemes without resorting to any trapdoor [such as Gentry C, Peikert C, Vaikuntanathan V, in: Ladner and Dwork (eds) 40th ACM STOC, ACM Press, Victoria, pp. 197-206, 2008 or Hoffstein J, Pipher J, Silverman JH in: Pfitzmann (ed) EUROCRYPT 2001. LNCS, vol. 2045, pp 211-228, Springer, Heidelberg, 2001]. The idea is to sample a set of short lattice elements and construct the public key as a Short Integer Solution (SIS for short) instance. Signatures are obtained using a small subset sum of the secret key, hidden by a (large) Gaussian mask. (Information leakage is dealt with using rejection sampling.) Recently, Persichetti proposed an efficient adaptation of this framework to coding theory (Persichetti E in Cryptography 2(4):30, 2018). In this paper, we show that this adaptation cannot be secure, even for one-time signatures (OTS), due to an inherent difference between bounds in Hamming and Euclidean metrics. The attack consists in rewriting a signature as a noisy syndrome decoding problem, which can be handled efficiently using the extended bit flipping decoding algorithm. We illustrate our results by breaking Persichetti's OTS scheme built upon this approach (Persichetti 2018): using a single signature, we recover the secret (signing) key in about the same amount of time as required for a couple of signature verifications.
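For intuition, here is a minimal sketch of plain (Gallager-style) bit-flipping syndrome decoding in Python; the attack itself uses an extended variant, and the matrix and signature-encoding details of Persichetti's scheme are omitted, so treat this as illustrative only.

```python
import numpy as np

def bit_flip_decode(H, s, max_iters=50):
    """Plain bit-flipping decoder over GF(2): find a sparse e with H @ e = s (mod 2).

    H: (r, n) parity-check matrix (0/1 ints), s: syndrome of length r.
    Returns the error estimate, or None if it fails to converge.
    """
    e = np.zeros(H.shape[1], dtype=int)
    for _ in range(max_iters):
        residual = (H @ e + s) % 2            # checks still unsatisfied
        if not residual.any():
            return e
        # For each bit, count how many unsatisfied checks it participates in.
        counts = H.T @ residual
        # Flip every bit involved in a maximal number of unsatisfied checks.
        e[counts == counts.max()] ^= 1
    return None
```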
Replication is a standard technique for fault tolerance in distributed systems modeled as deterministic finite state machines (DFSMs or machines). To correct f crash or ⌊f/2⌋ Byzantine faults among n different machines, replication requires nf backup machines. We present a solution called fusion that requires just f backup machines. First, we build a framework for fault tolerance in DFSMs based on the notion of Hamming distances. We introduce the concept of an (f, m)-fusion, which is a set of m backup machines that can correct f crash faults or ⌊f/2⌋ Byzantine faults among a given set of machines. Second, we present an algorithm to generate an (f, m)-fusion for a given set of machines. We ensure that our backups are efficient in terms of the size of their state and event sets. Third, we use locality sensitive hashing for the detection and correction of faults that incurs almost the same overhead as that for replication. We detect Byzantine faults with time complexity O(nρ) on average, while we correct crash and Byzantine faults with time complexity O(nρf) with high probability, where ρ is the average state reduction achieved by fusion. Finally, our evaluation of fusion on the widely used MCNC'91 benchmarks for DFSMs shows that the average state space savings in fusion (over replication) is 38 % (range 0-99 %). To demonstrate the practical use of fusion, we describe its potential application to two areas: sensor networks and the MapReduce framework. In the case of sensor networks a fusion-based solution can lead to significantly fewer sensor-nodes than a replication-based solution. For the MapReduce framework, fusion can reduce the number of map-tasks compared to replication. Hence, fusion results in considerable savings in state space and other resources such as the power needed to run the backups.
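A rough way to see the coding-theoretic flavor of this framework: view a global state as the tuple of the current states of the primaries plus backups, and require pairwise Hamming distance greater than f, just as a code with distance f + 1 corrects f erasures. The Python sketch below (hypothetical names; a simplification of the paper's (f, m)-fusion definition) checks that property.

```python
from itertools import combinations

def hamming(u, v):
    """Positions in which two global states (tuples of machine states) differ."""
    return sum(a != b for a, b in zip(u, v))

def tolerates_f_crashes(reachable_global_states, f):
    """Simplified distance check: pairwise distance > f means any f crashed
    (erased) machine states can be recovered from the surviving ones, and
    floor(f/2) Byzantine (corrupted) states can be corrected."""
    return all(hamming(u, v) > f
               for u, v in combinations(reachable_global_states, 2))

# Two primaries mirrored by two backups tracking the same pair of states:
# every pair of reachable global states differs in at least 2 coordinates.
states = [(0, 0, 0, 0), (1, 0, 1, 0), (0, 1, 0, 1), (1, 1, 1, 1)]
print(tolerates_f_crashes(states, 1))   # True: distance 2 handles 1 crash
```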
We prove a general structural theorem for a wide family of local algorithms, which includes property testers, local decoders, and probabilistically checkable proofs of proximity. Namely, we show that every algorithm that makes q adaptive queries and satisfies a natural robustness condition admits a sample-based algorithm with n^(1-1/O(q^2 log^2 q)) sample complexity, following the definition of Goldreich and Ron [ACM Trans. Comput. Theory, 8 (2016), 7]. We prove that this transformation is nearly optimal. Our theorem also admits a scheme for constructing privacy-preserving local algorithms. Using the unified view that our structural theorem provides, we obtain results regarding various types of local algorithms, including the following. We strengthen the state-of-the-art lower bound for relaxed locally decodable codes, obtaining an exponential improvement on the dependency in query complexity; this resolves an open problem raised by Gur and Lachish [SIAM J. Comput., 50 (2021), pp. 788-813]. We show that any (constant-query) testable property admits a sample-based tester with sublinear sample complexity; this resolves a problem left open in a work of Fischer, Lachish, and Vasudev [Proceedings of the 56th Annual Symposium on Foundations of Computer Science, IEEE, 2015, pp. 1163-1182], bypassing an exponential blowup caused by previous techniques in the case of adaptive testers. We prove that the known separation between proofs of proximity and testers is essentially maximal; this resolves a problem left open by Gur and Rothblum [Proceedings of the 8th Innovations in Theoretical Computer Science Conference, 2017, pp. 39:1-39:43; Comput. Complexity, 27 (2018), pp. 99-207] regarding sublinear-time delegation of computation. Our techniques strongly rely on relaxed sunflower lemmas and the Hajnal-Szemerédi theorem.
This study presents a spread-spectrum chaos-based communication system with polarisation diversity in a multipath channel. The propagation model takes into account the random direction angle of arrival and the polarisation orientation assigned to each version of the transmitted signal. To improve the performance of the proposed system, the receiver integrates monopoles with different orientations and no space diversity. Once the number of antennas is defined, many antenna positions are simulated and the optimal position is deduced to improve the performance of the system. To demodulate the received signal, a RAKE receiver is used for multi-antenna processing. An analysis is carried out leading to an analytical expression for the system bit error rate (BER). Simulation results show, first, that the system performance is improved with the use of this new receiver; the close match observed between simulated and analytical BER confirms the correctness of our computation approach. Finally, the performance of the studied system is compared with that of a conventional spread-spectrum system using Gold codes as spreading sequences.
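As a rough illustration of the RAKE step (generic maximal-ratio combining over resolvable paths; the spreading, channel, and antenna details below are placeholders, not the paper's model):

```python
import numpy as np

def rake_mrc(received, code, delays, gains):
    """Generic RAKE sketch: one finger per resolvable path. Each finger
    aligns to its path delay (in chips), despreads against the spreading
    code, and is weighted by the conjugate channel gain (maximal-ratio
    combining); the combined statistic yields a hard BPSK decision."""
    n = len(code)
    stat = 0j
    for d, g in zip(delays, gains):
        finger = received[d:d + n]            # path-aligned chip window
        stat += np.conj(g) * (finger @ code)  # despread + MRC weight
    return 1 if stat.real >= 0 else -1
```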
In this work we investigate the problem of simultaneous privacy and integrity protection in cryptographic circuits. We consider a white-box scenario with a powerful, yet limited, attacker. A concise metric for the level of probing and fault security is introduced, which is directly related to the capabilities of a realistic attacker. To investigate the interrelation of probing and fault security, we introduce a common mathematical framework based on the formalism of information and coding theory. The framework unifies the known linear masking schemes. We prove a central theorem about the properties of linear codes which leads to optimal secret sharing schemes. These schemes provide the lower bound on the number of masks needed to counteract an attacker of a given strength. The new formalism reveals an intriguing duality principle between the problems of probing and fault security, and provides a unified view of privacy and integrity protection using error-detecting codes. Finally, we introduce a new class of linear tamper-resistant codes, which can preserve security against an attacker mounting simultaneous probing and fault attacks.
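The simplest linear masking scheme in this unified view is additive Boolean masking, where a secret is split into XOR shares and any subset of all-but-one shares is uniformly random. A minimal Python sketch (illustrative, not the paper's construction):

```python
import secrets

def xor_all(values):
    acc = 0
    for v in values:
        acc ^= v
    return acc

def mask(secret_byte, n_shares):
    """Additive Boolean masking: n_shares - 1 uniformly random shares plus
    one correction share. Any n_shares - 1 shares are jointly uniform, so
    probing that many wires reveals nothing about the secret."""
    shares = [secrets.randbits(8) for _ in range(n_shares - 1)]
    shares.append(secret_byte ^ xor_all(shares))
    return shares

shares = mask(0x2A, 3)
assert xor_all(shares) == 0x2A   # XOR of all shares reconstructs the secret
```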
This paper presents the performance of the Weight-Balanced Testing (WBT) algorithm with multiple testers. The WBT algorithm aims to minimize the expected number of (rounds of) tests and has been proposed for coding, memory storage, search, and testing applications. It often provides reasonable results if used with a single tester. Yet the performance of the WBT algorithm with multiple testers, and particularly its upper bound, has not been previously analyzed, despite the large body of literature that exists on the WBT algorithm and the recent papers that suggest its use in various testing applications. Here we demonstrate that the WBT algorithm with multiple testers is far from being the optimal search procedure. The main result of this paper is the generalization of the upper bound on the expected number of tests previously obtained for the single-tester WBT algorithm. For this purpose, we first draw an analogy between the WBT algorithm and alphabetic codes, both being represented by the same Q-ary search tree. The upper bound is then obtained on the expected path length of a Q-ary tree constructed by the WBT algorithm. Applications to the field of testing and some numerical examples are presented for illustrative purposes.
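For the single-tester (Q = 2) case the weight-balancing rule is easy to state concretely: at each step, cut the probability-ordered item set at the point that best balances the mass on each side, with each cut counting as one test. A small Python sketch (hypothetical helper, binary case only) that builds the tree implicitly and returns the expected number of tests:

```python
def wbt_expected_tests(probs):
    """Weight-Balanced Testing sketch for a single tester (Q = 2):
    recursively split the items at the cut that best balances the total
    probability on each side. Returns the expected number of tests,
    i.e. the probability-weighted leaf depth of the search tree."""
    def build(items, depth):
        if len(items) == 1:
            return items[0] * depth
        total = sum(items)
        acc, best_i, best_gap = 0.0, 1, float("inf")
        for i in range(1, len(items)):
            acc += items[i - 1]
            gap = abs(2 * acc - total)      # |left mass - right mass|
            if gap < best_gap:
                best_gap, best_i = gap, i
        return (build(items[:best_i], depth + 1) +
                build(items[best_i:], depth + 1))
    return build(list(probs), 0)

print(wbt_expected_tests([0.4, 0.3, 0.2, 0.1]))   # 1.9 expected tests
```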
Typical statistical methods of data analysis only handle determinate uncertainty, the type of uncertainty that can be modeled under the Bayesian or confidence theories of inference. An example of indeterminate uncertainty is uncertainty about whether the Bayesian theory or the frequentist theory is better suited to the problem at hand. Another example is uncertainty about how to modify a Bayesian model upon learning that its prior is inadequate. Both problems of indeterminate uncertainty have solutions under the proposed framework. The framework is based on an information-theoretic definition of an incoherence function to be minimized. It generalizes the principle of choosing an estimate that minimizes the reverse relative entropy between it and a previous posterior distribution such as a confidence distribution. The simplest form of the incoherence function, called the incoherence distribution, is a min-plus probability distribution, which is equivalent to a possibility distribution rather than a measure-theoretic probability distribution. A simple case of minimizing the incoherence leads to a generalization of minimizing relative entropy and thus of maximizing entropy. The framework of minimum incoherence is applied to problems of Bayesian-confidence uncertainty and to parallel problems of indeterminate uncertainty about model revision. (c) 2022 Elsevier B.V. All rights reserved.
A novel and effective detection method is proposed for electronic intelligence (ELINT) systems detecting polyphase-coded radar signals in the low signal-to-noise ratio (SNR) scenario. The core idea of the proposed method is first to calculate the time-frequency distribution of the polyphase-coded radar signal via the Wigner-Ville distribution (WVD); the modified Hough transform (HT) is then employed to accumulate the energy along the WVD's ridges to achieve signal detection. Compared with the generalised Wigner-Hough transform (GWHT) method, the proposed method has superior performance at low SNR and is not sensitive to the code type. Simulation results verify the validity of the proposed method.
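To make the pipeline concrete: a brute-force discrete WVD followed by a crude Hough-style line accumulation is sketched below in Python (illustrative only; the paper's modified HT and the exact polyphase-code ridge model are not reproduced). Since common polyphase codes approximate linear FM, their WVD energy concentrates along near-linear ridges that such an accumulator can pick up.

```python
import numpy as np

def wigner_ville(x):
    """Brute-force discrete Wigner-Ville distribution of an analytic signal."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        lag = min(n, N - 1 - n)              # largest symmetric lag at time n
        kernel = np.zeros(N, dtype=complex)
        for m in range(-lag, lag + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real       # frequency slice at time n
    return W

def hough_line_peak(W, n_slopes=64, n_offsets=64):
    """Accumulate TF energy along candidate lines f = a*t + b and return the
    largest accumulator value; a strong peak indicates an LFM-like ridge."""
    T, F = W.shape
    t = np.arange(T)
    best = -np.inf
    for a in np.linspace(-1.0, 1.0, n_slopes):
        for b in np.linspace(0, F - 1, n_offsets):
            f = np.clip((a * t + b).astype(int), 0, F - 1)
            best = max(best, W[t, f].sum())
    return best   # compare against a noise-calibrated threshold to detect
```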