ISBN:
(print) 9783662439333; 9783662439326
This paper aims to improve the understanding of the complexities of Matsui's Algorithm 2, one of the most well-studied and powerful cryptanalytic techniques available for block ciphers today. We start with the observation that the standard interpretation of the wrong key randomisation hypothesis needs adjustment. We show that it systematically neglects the varying bias for wrong keys. Based on that, we propose an adjusted statistical model and derive more accurate estimates for the success probability and data complexity of linear attacks, which are demonstrated to deviate from all known estimates. Our study suggests that the efficiency of Matsui's Algorithm 2 has previously been somewhat overestimated in the cases where the adversary attempts to use a linear approximation with a low bias, to attain a high computational advantage over brute force, or both. These cases are typical, since cryptanalysts always try to break as many rounds of the cipher as possible by pushing the attack to its limit. Surprisingly, our approach also reveals that the success probability is not a monotonically increasing function of the data complexity, and can decrease if more data is used. Using less data can therefore result in a more powerful attack. A second assumption usually made in linear cryptanalysis is the key equivalence hypothesis, even though, due to the linear hull effect, the bias can heavily depend on the key. As a further contribution of this paper, we propose a practical technique that aims to take this into account. All theoretical observations and techniques are accompanied by experiments with small-scale ciphers.
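Under the standard (unadjusted) wrong-key model that this abstract revisits, the success probability of Matsui's Algorithm 2 is commonly estimated with a normal-approximation formula in the style of Selçuk. The sketch below shows that textbook baseline only (parameter names and the toy numbers are illustrative, not the adjusted model the paper derives):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF via bisection (phi is monotone)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def success_probability(N, bias, advantage):
    """Baseline estimate: P_S = Phi(2*sqrt(N)*|bias| - Phi^{-1}(1 - 2^{-a-1})),
    with N known plaintexts, linear bias, and an a-bit advantage."""
    threshold = phi_inv(1.0 - 2.0 ** (-advantage - 1))
    return phi(2.0 * math.sqrt(N) * abs(bias) - threshold)

# Toy parameters: bias 2^-10, 8-bit advantage, N = 2^22 known plaintexts
p = success_probability(2 ** 22, 2 ** -10, 8)
```

Note that under this baseline the estimate grows monotonically with N; the adjusted model proposed in the paper shows that this monotonicity can fail, which is exactly the surprising effect described above.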
ISBN:
(digital) 9781665490627
ISBN:
(print) 9781665490627
Deep neural networks use multiple layers of functions to map an object represented by an input vector progressively to different representations, and with sufficient training, eventually to a single score for each class that is the output of the final decision function. Ideally, in this output space, the objects of different classes achieve maximum separation. Motivated by the need to better understand the inner workings of a deep neural network, we analyze the effectiveness of the learned representations in separating the classes from a data complexity perspective. Using a simple complexity measure, a popular benchmarking task, and a well-known architecture design, we show how the data complexity evolves through the network, how it changes during training, and how it is impacted by the network design and the availability of training samples. We discuss the implications of the observations and the potential for further study.
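The abstract does not name the specific complexity measure used; Fisher's discriminant ratio is one simple class-separability measure of the kind described. A minimal sketch on synthetic data standing in for one layer's representations (all names and numbers here are illustrative):

```python
import numpy as np

def fisher_ratio(X, y):
    """Fisher's discriminant ratio: maximum over features of the squared
    between-class mean difference divided by the summed within-class
    variances. Higher values mean the two classes separate more easily."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0)
    return float(np.max(num / (den + 1e-12)))

rng = np.random.default_rng(0)
# Two Gaussian classes stand in for a layer's output representations.
A = rng.normal(0.0, 1.0, size=(200, 8))
B = rng.normal(2.0, 1.0, size=(200, 8))
X = np.vstack([A, B])
y = np.array([0] * 200 + [1] * 200)
f1 = fisher_ratio(X, y)  # larger when the representations separate the classes better
```

Tracking such a score layer by layer, and epoch by epoch, is the kind of evolution analysis the abstract describes.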
ISBN:
(print) 9798400701269
The global population is aging, and older adults face changes in health that result in increasing frailty. Digital biomarkers created by passive, continuous sensors offer an early indicator of impending frailty that can be used to delay or reverse it. Building on the notion of a human as a complex system, we introduce and compare three methods to model and estimate the complexity of indoor human behavior. Each method offers potential benefits for estimating human frailty. We introduce a formalization of the approaches, extend their use for arbitrary-size sensor suites, and demonstrate how they can be used to visualize and calculate a person's behavioral complexity based on smart home data collected continuously for an older adult subject.
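The three methods themselves are not specified in the abstract; as one illustration of the general idea, a Shannon-entropy score over sensor activations is a simple behavioral-complexity measure that extends to a sensor suite of arbitrary size (the sensor names below are hypothetical):

```python
import math
from collections import Counter

def behavior_entropy(events):
    """Shannon entropy (bits) of a sequence of sensor activations.
    Works for any sensor suite size: each event is just a sensor id.
    Higher entropy means activations spread over more sensors, i.e.
    richer movement through the home; collapse onto one sensor lowers it."""
    counts = Counter(events)
    n = len(events)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical day of motion-sensor firings in a smart home
day = ["kitchen", "kitchen", "hall", "bedroom", "bath", "kitchen", "hall"]
h = behavior_entropy(day)
```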
ISBN:
(digital) 9783031342042
ISBN:
(print) 9783031342035; 9783031342042
It is well known that Bloom Filters have a performance essentially independent of the data used to query the filters themselves, but this is no longer true when considering Learned Bloom Filters. In this work we analyze how the performance of such learned data structures is impacted by the classifier chosen to build the filter and by the complexity of the dataset used in the training phase. Such an analysis, which has not been proposed so far in the literature, involves the key performance indicators of space efficiency, false positive rate, and reject time. By screening various implementations of Learned Bloom Filters, our experimental study highlights that only one of these implementations exhibits higher robustness to classifier performance and to noisy data, and that only two families of classifiers have desirable properties in relation to the previous performance indicators.
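As a sketch of the structure under study (the toy classifier, threshold, and filter sizes below are illustrative, not those of any implementation screened in the paper), a Learned Bloom Filter places a classifier in front of a backup Bloom filter that stores the classifier's false negatives, preserving the one-sided error guarantee:

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1
    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

class LearnedBloomFilter:
    """Classifier in front of a backup Bloom filter: keys the classifier
    would reject (false negatives) are stored in the backup, so no true
    key is ever rejected -- only false positives remain possible."""
    def __init__(self, score, threshold, keys, m=1024, k=3):
        self.score, self.threshold = score, threshold
        self.backup = BloomFilter(m, k)
        for key in keys:
            if score(key) < threshold:   # classifier misses a real key
                self.backup.add(key)
    def __contains__(self, item):
        return self.score(item) >= self.threshold or item in self.backup

# Toy "classifier": pretend keys with even length score high
keys = ["aa", "bbbb", "ccc"]
lbf = LearnedBloomFilter(lambda s: 1.0 if len(s) % 2 == 0 else 0.0, 0.5, keys)
assert all(key in lbf for key in keys)   # no false negatives by construction
```

The classifier's quality governs how many keys fall through to the backup filter, which is where the space-efficiency and reject-time trade-offs discussed above arise.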
ISBN:
(print) 9783642014390
CLEFIA is a new block cipher recently proposed by the SONY Corporation. The fundamental structure of CLEFIA is a generalized Feistel structure consisting of 4 data lines. For convenience of cryptanalysis, we rewrite the cipher as a traditional Feistel structure consisting of 2 data lines. We propose a new 9-round impossible differential, and using it we present an approach to the analysis of 14-round CLEFIA-128 without whitening layers.
ISBN:
(print) 9783642024771
In this work we analyse the behaviour of two classic Artificial Neural Network models with respect to data complexity measures. In particular, we consider a Radial Basis Function Network and a MultiLayer Perceptron. We examine the data complexity metrics known as Measures of Separability of Classes over a wide range of data sets built from real data, and try to extract behaviour patterns from the results. We obtain rules that describe both good and bad behaviour of the Artificial Neural Networks mentioned. With the obtained rules, we try to predict the behaviour of the methods from the data set complexity metrics prior to their application, and thereby establish their domains of competence.
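The specific separability metrics are not listed in the abstract; one simple measure in the same spirit (close to the nearest-neighbour measures of the Ho and Basu family) is the fraction of points whose nearest neighbour belongs to another class, sketched here on synthetic data:

```python
import numpy as np

def boundary_fraction(X, y):
    """Fraction of points whose nearest neighbour has a different label.
    Low values suggest well-separated classes (an easy data set for an
    RBFN or MLP); values near 0.5 indicate heavy class overlap."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)     # ignore self-distances
    nn = d.argmin(axis=1)           # index of each point's nearest neighbour
    return float(np.mean(y[nn] != y))

rng = np.random.default_rng(1)
easy = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
hard = np.vstack([rng.normal(0, 3.0, (50, 2)), rng.normal(0, 3.0, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
score_easy = boundary_fraction(easy, labels)   # near 0: separable
score_hard = boundary_fraction(hard, labels)   # near 0.5: heavy overlap
```

Rules of the kind the abstract describes would map such scores, computed before training, to expected good or bad behaviour of each network.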
ISBN:
(print) 3540240764
We consider the problem of computing the best swap edges of a shortest-path tree T, rooted in r. That is, given a single link failure: if the path is not affected by the failed link, then the message will be delivered through that path; otherwise, we want to guarantee that, when the message reaches the edge (u, v) where the failure has occurred, the message will then be re-routed using the computed swap edge. There exist highly efficient serial solutions for the problem, but unfortunately, because of the structures they use, there is no known (nor foreseeable) efficient distributed implementation for them. A distributed protocol exists only for finding swap edges, not necessarily optimal ones. In [6], distributed solutions to compute the swap edge that minimizes the distance from u to r have been presented. In contrast, in this paper we focus on selecting, efficiently and distributively, the best swap edge according to an objective function suggested in [13]: we choose the swap edge that minimizes the distance from u to v.
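The protocol itself is distributed; as a point of reference only, the objective it optimizes can be stated as a small sequential brute force (the graph, weights, and node names below are a toy example, not the paper's construction):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src in an undirected weighted graph."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        for w, wt in adj[v]:
            if d + wt < dist[w]:
                dist[w] = d + wt
                heapq.heappush(pq, (dist[w], w))
    return dist

def best_swap_edge(nodes, edges, tree_edges, failed):
    """Among all non-tree edges, pick the swap edge minimizing the
    u-to-v distance in (tree minus failed edge plus swap edge),
    i.e. the objective function described in the abstract."""
    u, v = failed
    in_tree = {frozenset(e[:2]) for e in tree_edges}
    best, best_d = None, float("inf")
    for (a, b, wt) in edges:
        if frozenset((a, b)) in in_tree:
            continue                      # tree edges are not swap candidates
        adj = {x: [] for x in nodes}      # tree minus failed edge ...
        for (p, q, w) in tree_edges:
            if frozenset((p, q)) != frozenset((u, v)):
                adj[p].append((q, w)); adj[q].append((p, w))
        adj[a].append((b, wt)); adj[b].append((a, wt))   # ... plus candidate
        d = dijkstra(adj, u)[v]
        if d < best_d:
            best, best_d = (a, b, wt), d
    return best, best_d

nodes = ["r", "u", "v", "x"]
edges = [("r", "u", 1), ("u", "v", 1), ("v", "x", 1), ("r", "x", 4), ("u", "x", 2)]
tree = [("r", "u", 1), ("u", "v", 1), ("v", "x", 1)]   # a shortest-path tree rooted at r
best, best_d = best_swap_edge(nodes, edges, tree, ("u", "v"))
```

Here the failure of (u, v) is repaired by the swap edge (u, x, 2), giving a u-to-v distance of 3; the paper's contribution is computing such optima distributively rather than by this centralized enumeration.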
Software size estimation calculations often do not consider data complexity. However, in practice, data is considered an integral part of the system, so it needs to be taken into account in software size estimation. T...
In this paper, an empirical analysis of linear state space models and long short-term memory neural networks is performed to compare the statistical performance of these models in predicting the spread of COVID-19 infections. Data on daily pandemic infections from the Arabian Gulf countries from 2020/03/24 to 2021/05/20 are fitted to each model and a statistical analysis is conducted to assess their short-term prediction accuracy. The results show that state space model predictions are more accurate, with notably smaller root mean square errors than the deep learning forecasting method. The results also indicate that the poorer forecast performance of long short-term memory neural networks occurs in particular when health surveillance data are characterized by high fluctuations of the daily infection records and frequent occurrences of abrupt changes. One important result of this study is the possible relationship between data complexity and forecast accuracy with different models, as suggested in the entropy analysis. It is concluded that state space models perform better than long short-term memory networks with highly irregular and more complex surveillance data.
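A local-level (random walk plus noise) model is among the simplest members of the linear state space family compared in such studies; a minimal Kalman-filter sketch of its one-step-ahead forecasts follows (the noise variances q and r and the toy series are illustrative, not fitted values from the study):

```python
def local_level_forecast(y, q=1.0, r=1.0):
    """One-step-ahead forecasts from a local-level state space model
    (random-walk level plus observation noise) via the Kalman filter.
    q: level noise variance, r: observation noise variance.
    preds[i] is the forecast of y[i+1] made from y[0..i]."""
    level, p = y[0], 1.0              # initialise level at the first observation
    preds = []
    for obs in y[1:]:
        preds.append(level)           # forecast made before seeing obs
        p = p + q                     # predict step: uncertainty grows
        k = p / (p + r)               # Kalman gain
        level = level + k * (obs - level)   # update with the new observation
        p = (1 - k) * p
    return preds

# Toy daily-count series standing in for surveillance data
series = [10.0, 12.0, 13.0, 15.0, 14.0, 16.0]
preds = local_level_forecast(series)
```

Comparing the root mean square error of such forecasts against an LSTM's, per the entropy (complexity) of each series, is the shape of the comparison the abstract reports.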
The rapid advances in life science, including the sequencing of the human genome and numerous other techniques, have given us an extraordinary ability to acquire data on biological systems and human disease. Even so, drug development costs are higher than ever, while the rate of newly approved treatments is historically low. A potential explanation for this discrepancy might be the difficulty of understanding the biology underlying the acquired data; the difficulty of refining the data into useful knowledge through interpretation. In this thesis the refinement of the complex data from mass spectrometry proteomics is studied. A number of new algorithms and programs are presented and demonstrated to provide increased analytical ability over previously suggested alternatives. With the higher goal of increasing the scientific output of the mass spectrometry laboratory, pragmatic studies were also performed: creating a new set of compression algorithms to reduce the storage requirements of mass spectrometry data, and characterizing instrument stability. The final components of this thesis are a discussion of the technical and instrumental weaknesses associated with the currently employed mass spectrometry proteomics methodology, and a discussion of the current lack of quality in academic software and the reasons thereof. As a whole, the primary algorithms, the enabling technology, and the weakness discussions all aim to improve the current capability to perform mass spectrometry proteomics. As this technology is crucial to understanding the main functional components of biology, proteins, this quest should allow better and higher-quality life science data, and ultimately increase the chances of developing new treatments or diagnostics.