To improve the communication quality of extremely-low-frequency (ELF) communication effectively, an interference cancellation algorithm based on the generative models widely used in artificial intelligence is proposed. Magnetic antennas with higher sensitivity and analogue circuits with a lower noise floor are designed to suppress various out-of-band interferences. For the first time, a generative model is introduced into interference cancellation for ELF communication. Building on the speech enhancement generative adversarial network widely used in speech signal enhancement, an improved generative model applicable to interference cancellation is proposed to provide more relevant reference information. By combining the improved generative model with an improved generalised sidelobe cancellation (GSC) algorithm, the estimation accuracy of noise and interference is improved effectively. To verify the effectiveness of the proposed algorithm, an experimental platform is built in a laboratory environment and multiple sets of controlled experiments are performed. The experimental results show that the improved generative model has better performance, better robustness, and relatively lower computational complexity. In addition, compared with the original GSC algorithm and its traditional improved variants, the proposed algorithm further increases the signal-to-interference-plus-noise ratio gain within the signal bandwidth.
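The cancellation principle behind GSC-style algorithms can be illustrated with a minimal adaptive noise canceller: an LMS filter estimates the interference in the primary channel from a correlated reference channel and subtracts it. This is a generic textbook sketch, not the paper's algorithm; the function name, parameters, and toy signals are all illustrative assumptions.

```python
import numpy as np

def lms_cancel(primary, reference, taps=8, mu=0.01):
    """Adaptive interference canceller (LMS): estimate the interference
    component of `primary` from the correlated `reference` channel and
    subtract it. Illustrative sketch of the cancellation idea only."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]  # tap-delay line, newest first
        y = w @ x                                # interference estimate
        e = primary[n] - y                       # cleaned output (error signal)
        w += 2 * mu * e * x                      # LMS weight update
        out[n] = e
    return out

# toy demo: a low-frequency tone buried in broadband interference
rng = np.random.default_rng(0)
t = np.arange(4000)
signal = np.sin(2 * np.pi * 0.01 * t)
interf = rng.standard_normal(4000)
primary = signal + interf
cleaned = lms_cancel(primary, interf)
# compare residual error power after the filter has converged
err_before = np.mean((primary[2000:] - signal[2000:]) ** 2)
err_after = np.mean((cleaned[2000:] - signal[2000:]) ** 2)
```

After convergence, `err_after` is a small fraction of `err_before`; the generative model in the paper plays the role of supplying a better interference reference than a raw auxiliary channel.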
Guess-and-determine attack is a cryptanalysis method that has been applied to various stream ciphers. In this study, the authors study guess-and-determine attacks on two ISO-standardised, Panama-like stream ciphers: MUGI and Enocoro. Utilising the word-oriented structure of the two ciphers, they are able to launch heuristic guess-and-determine attacks in a more efficient manner. Their first target, MUGI, is both an ISO standard and a Japanese-government-selected CRYPTREC standard. By splitting its basic 64-bit words into 16-bit quarter-words, they are able to conduct a guess-and-determine attack with complexity 2^388, much lower than its 1216-bit internal state size. Enocoro is a lightweight stream cipher family with two versions, named according to key length as Enocoro-80 and Enocoro-128v2. They provide the specific guessing paths and launch guess-and-determine attacks on Enocoro-80 and Enocoro-128v2 with complexities 2^88 and 2^144, respectively. In addition to these specific attack results, they also identify some generic rules that may help improve the efficiency of guess-and-determine attacks in the future.
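The quarter-word idea is simple to show in code: guessing a 16-bit quarter-word costs 2^16 trials rather than 2^64 for a full word, so a guessing path that determines the rest of a word from a few quarter-word guesses shrinks the overall complexity. The sketch below only demonstrates the splitting and reassembly; the function names are illustrative, and the actual MUGI guessing path is not reproduced here.

```python
def split_quarters(word64):
    """Split a 64-bit word into four 16-bit quarter-words, MSB first."""
    return [(word64 >> s) & 0xFFFF for s in (48, 32, 16, 0)]

def join_quarters(qs):
    """Reassemble four 16-bit quarter-words into one 64-bit word."""
    w = 0
    for q in qs:
        w = (w << 16) | (q & 0xFFFF)
    return w

w = 0x0123456789ABCDEF
qs = split_quarters(w)          # four independent 16-bit guessing units
assert qs == [0x0123, 0x4567, 0x89AB, 0xCDEF]
assert join_quarters(qs) == w   # lossless round trip
```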
Deep learning techniques have been successfully used to solve a wide range of computer vision problems. Due to their high computational complexity, specialized hardware accelerators are being proposed to achieve high performance and efficiency for deep learning-based algorithms. However, soft errors, i.e., bit-flipping errors in the layer output, are often caused by process variation and high-energy particles in these hardware systems, and can significantly reduce model accuracy. To remedy this problem, we propose new algorithms that effectively reduce the impact of errors while keeping accuracy high. We first propose to incorporate an Error Correction Layer (ECL) into neural networks, in which convolution is performed multiple times in each layer and a majority vote over the outputs is conducted at the bit level. We found that the ECL can eliminate most errors, but it misses a bit error when the bits at the same position are corrupted multiple times under the simulated conditions. To address this limitation, we analyze the impact of errors by bit position and observe that errors in most significant bit (MSB) positions tend to corrupt the output of the network far more severely than errors in least significant bit (LSB) positions. Based on this observation, we propose a new specialized activation function, called Piece-wise Rectified Linear Unit (PwReLU), which selectively suppresses errors depending on bit position, making the model more resistant to errors. Compared with existing activation functions, the proposed PwReLU outperforms them by accuracy margins of up to 20% even at very high bit error rates (BERs). Our extensive experiments show that the proposed ECL and PwReLU work in a complementary manner, achieving accuracy comparable to error-free networks even at a severe BER of 0.1% on CIFAR10, CIFAR100, and ImageNet.
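Bit-level majority voting over three copies of a layer output has a compact bitwise form: a bit flipped in only one copy is outvoted, while the same bit flipped in two or more copies survives, which is exactly the ECL failure mode described above. The snippet is a minimal sketch of this voting rule under assumed 8-bit outputs, not the paper's implementation.

```python
import numpy as np

def bitwise_majority(a, b, c):
    """Bit-level majority vote over three copies of the same output.
    For each bit position, the bit value held by at least two of the
    three copies wins: maj(a,b,c) = (a&b) | (a&c) | (b&c)."""
    return (a & b) | (a & c) | (b & c)

x = np.uint8(0b10110100)
flipped = x ^ np.uint8(0b00001000)   # single soft error in one copy
# a flip in one copy is corrected ...
assert bitwise_majority(x, x, flipped) == x
# ... but the same bit flipped in two copies wins the vote (ECL's limit)
assert bitwise_majority(x, flipped, flipped) == flipped
```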
This study proposes a framework for the evaluation and validation of software complexity measures. The framework is designed to analyse whether or not a software metric qualifies as a measure from different perspectives. Unlike existing frameworks, it takes into account the practical usefulness of the measure and includes all the factors that are important for theoretical and empirical validation, including measurement theory. The applicability of the framework is tested using the cognitive functional size measure, and the testing process shows that the proposed framework can be applied in the same manner to any software measure. A comparative study with other frameworks has also been performed. The results show that the present framework better represents most of the parameters required to evaluate and validate a new complexity measure.
From a computational complexity point of view, some syntactical ingredients play different roles depending on the kind of combination considered. Inspired by the fact that the passing of a chemical substance through a biological membrane is often done by an interaction with the membrane itself, systems with active membranes were considered. Several combinations of different ingredients have been used in order to know which kinds of problems they could solve efficiently. In this paper, minimal cooperation with a minimal expression (the left-hand side of every object evolution rule has at most two objects and its right-hand side contains only one object) in object evolution rules is considered, and a polynomial-time uniform solution to the SAT problem is presented. Consequently, a new way to tackle the P versus NP problem is provided. (C) 2017 Elsevier B.V. All rights reserved.
Management of large water distribution systems can be improved by dividing their networks into so-called district metered areas (DMAs). However, such divisions must be based on appropriate technical criteria. Considering the importance of deeply understanding the relationship between DMA creation and these criteria, this work proposes a performance analysis of DMA generation that takes into account such indicators as resilience index, demand similarity, pressure uniformity, water age (and thus water quality), solution implantation costs, and electrical consumption. To cope with the complexity of the problem, suitable mathematical techniques are proposed in this paper. We use a social community detection technique to define the sectors, and then a multilevel particle swarm optimization approach is applied to find the optimal placement and operating point of the necessary devices. The results obtained by implementing the methodology in a real water supply network show its validity, and the meaningful influence on the final result of, especially, elevation and pipe length.
Cyclic Redundancy Check (CRC), or Cyclic Redundancy Code, is a cyclic error detection code used to preserve the integrity of data in storage and transmission applications. The CRC of a stream of message bits is usually calculated block-wise in parallel with the help of a Look-Up Table (LUT) in software, or a state-space transformation matrix in hardware. Presented here is a novel method and architecture for parallel computation of cyclic redundancy codes without any look-up table, for an arbitrary generating polynomial that is programmable at runtime. The method reduces computational complexity and storage requirements by implicitly factorizing the transformation matrix needed to compute the remainder into two simpler Toeplitz matrices. The resulting hardware architecture is suitable for embedded applications.
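For reference, the remainder that any such architecture must produce can be computed with the classic bit-serial algorithm, which itself needs no look-up table and accepts an arbitrary generator polynomial. This is a standard textbook implementation used here only to pin down the computed value; the paper's Toeplitz factorization and parallel datapath are not reproduced.

```python
def crc_bitwise(data: bytes, poly: int, width: int, init: int = 0) -> int:
    """Bit-serial CRC for an arbitrary generator polynomial (no LUT).
    `poly` is the generator with its leading x^width term dropped,
    MSB-first; no input/output reflection, zero XOR-out. Assumes width >= 8."""
    top = 1 << (width - 1)
    mask = (1 << width) - 1
    reg = init
    for byte in data:
        reg ^= (byte << (width - 8)) & mask
        for _ in range(8):
            # shift left; on overflow of the top bit, reduce by the polynomial
            reg = ((reg << 1) ^ poly) & mask if reg & top else (reg << 1) & mask
    return reg

# CRC-16/XMODEM: generator x^16 + x^12 + x^5 + 1 -> poly 0x1021, init 0;
# the standard check value over the ASCII string "123456789" is 0x31C3
assert crc_bitwise(b"123456789", 0x1021, 16) == 0x31C3
```

A parallel (block-wise) implementation must agree with this function bit-for-bit for every message and polynomial, which makes it a convenient test oracle.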
Causal discovery based on observational data is important for deciphering the causal mechanism behind complex systems. However, the effectiveness of existing causal discovery methods is limited due to inferior prior knowledge, domain inconsistencies, and the challenges of high-dimensional datasets with small sample sizes. To address this gap, we propose a novel weakly supervised fuzzy knowledge and data co-driven causal discovery method named KEEL. KEEL introduces a fuzzy causal knowledge schema to encapsulate diverse types of fuzzy knowledge and forms the corresponding weakened constraints. This schema not only lessens the dependency on expertise but also allows various types of limited and error-prone fuzzy knowledge to guide causal discovery. It can enhance the generalization and robustness of causal discovery, especially in high-dimensional and small-sample scenarios. In addition, we integrate the extended linear causal model into KEEL to deal with multi-distribution and incomplete data. Extensive experiments with different datasets demonstrate the superiority of KEEL over several state-of-the-art methods in accuracy, robustness, and efficiency. The effectiveness of KEEL is also verified on limited real protein signal transduction data, showing better performance than benchmark methods. In summary, KEEL effectively tackles causal discovery tasks with higher accuracy while alleviating the requirement for extensive domain expertise.
A comparison is made of two techniques for recognizing numeric handprint characters using a variety of features including two-dimensional (2-D) fast Fourier transform (FFT) coefficients, geometrical moments, and topological features. A backpropagation network and a nearest neighbor classifier are evaluated in terms of recognition performance and computational requirements. The results indicate that for complex problems, the neural network performs comparably to the nearest-neighbor classifier while being significantly more cost effective.
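The nearest-neighbour baseline in such comparisons is a one-liner: classify a query with the label of the closest training feature vector under Euclidean distance. The sketch below uses made-up two-dimensional feature vectors standing in for the FFT/moment/topological features; names and data are illustrative only.

```python
import numpy as np

def nearest_neighbor(train_x, train_y, query):
    """1-nearest-neighbour classifier: return the label of the training
    feature vector closest to `query` under Euclidean distance."""
    d = np.linalg.norm(train_x - query, axis=1)
    return train_y[int(np.argmin(d))]

# toy feature vectors standing in for extracted character features
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 1, 1])
assert nearest_neighbor(X, y, np.array([0.95, 1.0])) == 1
assert nearest_neighbor(X, y, np.array([0.1, -0.1])) == 0
```

The cost-effectiveness gap the abstract reports follows from the structure: a trained network answers a query in one fixed-size forward pass, while nearest neighbour must measure distance to every stored training vector.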
In this paper, we study a stochastic Newton method for nonlinear equations whose exact function information is difficult to obtain, while only stochastic approximations are available. At each iteration of the proposed algorithm, an inexact Newton step is first computed based on stochastic zeroth- and first-order oracles. To encourage the possible reduction of the optimality error, we then take the unit step size if it is acceptable by an inexact Armijo line search condition. Otherwise, a small step size will be taken to help induce desired good properties. Then we investigate convergence properties of the proposed algorithm and obtain almost sure global convergence under certain conditions. We also explore the computational complexity of finding an approximate solution, in terms of calls to the stochastic zeroth- and first-order oracles, when the proposed algorithm returns a randomly chosen iterate. Furthermore, we analyze the local convergence properties of the algorithm and establish the local convergence rate in high probability. At last, we present preliminary numerical tests, and the results demonstrate the promising performance of the proposed algorithm.
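The step-acceptance logic (try the unit Newton step, fall back to a shorter step when an Armijo-type condition fails) can be sketched in the deterministic setting, where the merit function is 0.5*||F(x)||^2. This is a generic Newton-with-backtracking sketch on a toy system; the paper's method replaces F and its Jacobian with stochastic zeroth-/first-order oracle estimates, which is not modelled here.

```python
import numpy as np

def newton_armijo(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x)=0 with backtracking on the merit
    function phi(x) = 0.5*||F(x)||^2: take the unit step if it gives
    sufficient decrease, otherwise halve the step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        step = np.linalg.solve(J(x), -f)   # (exact here; inexact in the paper)
        t, merit = 1.0, 0.5 * f @ f
        # Armijo-type test: accept when the merit function decreases enough
        while 0.5 * F(x + t * step) @ F(x + t * step) > (1 - 1e-4 * t) * merit:
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * step
    return x

# toy system: x0^2 + x1^2 = 1 and x0 = x1  ->  root (1/sqrt(2), 1/sqrt(2))
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = newton_armijo(F, J, [2.0, 0.5])
```

Near the root the unit step is always accepted and the iteration converges quadratically; far from it, the backtracking loop supplies the globalization that the paper's line search provides in the stochastic setting.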