A WSN (wireless sensor network) is well suited to temperature monitoring in a coal goaf. Based on the three-zone theory of the goaf, the GtmWSN model is proposed and its dynamic features are analyzed. Accordingly, a data transmission scheme named DTDGD is worked out. Firstly, sink nodes perform dynamic grid division on the GtmWSN according to virtual semicircles. Secondly, each node determines which grid it belongs to based on its grid number. Finally, data are delivered to sink nodes with greedy forwarding and hole avoidance. Simulation results and field data show that the GtmWSN and DTDGD satisfy the lifetime requirements of goaf temperature monitoring.
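As a rough illustration of the final step, here is a minimal Python sketch of greedy geographic forwarding with hole detection; the coordinates, the `greedy_forward` helper, and the fallback behaviour are illustrative assumptions, not the DTDGD specification.

```python
import math

def greedy_forward(current, neighbors, sink):
    """Pick the neighbor closest to the sink (greedy geographic forwarding).

    Returns None when no neighbor improves on the current node (a routing
    hole), in which case DTDGD would fall back to its hole-avoidance rule.
    current, sink: (x, y) tuples; neighbors: list of (x, y) tuples.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    best = min(neighbors, key=lambda n: dist(n, sink), default=None)
    if best is None or dist(best, sink) >= dist(current, sink):
        return None  # hole detected: greedy progress is impossible
    return best

# Example: node at (5, 5) forwarding toward a sink at (0, 0)
print(greedy_forward((5, 5), [(4, 6), (3, 3), (6, 2)], (0, 0)))  # -> (3, 3)
```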
Recently, there has been an emerging trend of addressing the "energy efficiency" aspect of wireless communications, and coordinated multipoint (CoMP) communication is a promising method of improving energy efficiency. However, since downlink performance is also important for users, energy efficiency should be improved while maintaining good downlink performance. This paper presents a control-theoretic approach to studying the energy efficiency and downlink performance issues in cooperative wireless cellular networks with CoMP communications. Specifically, to make optimal base station grouping decisions for energy-efficient CoMP transmissions, we develop a Reinforcement Learning (RL) algorithm. We apply Q-learning to obtain the optimal policy for base station grouping, introducing variations at the beginning of the Q-learning process to prevent Q from settling into local maxima. Simulation results are provided to show the process and effectiveness of the proposed scheme.
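A minimal sketch of the grouping step, assuming tabular Q-learning over a small set of candidate groupings; the `reward` callback, the decaying epsilon schedule, and the bandit-style update are illustrative simplifications, not the paper's exact scheme.

```python
import random

def q_learning_grouping(groupings, reward, episodes=5000,
                        alpha=0.1, gamma=0.9, eps0=0.5):
    """Tabular Q-learning over candidate base-station groupings.

    `groupings` is a list of hypothetical grouping actions; `reward(g)`
    returns the observed energy-efficiency reward of choosing grouping g.
    Exploration (epsilon) starts high and decays, injecting early variation
    so Q does not settle on a local maximum.
    """
    Q = {g: 0.0 for g in groupings}
    for t in range(episodes):
        eps = eps0 / (1 + t / 100)        # decaying exploration rate
        if random.random() < eps:
            g = random.choice(groupings)  # explore
        else:
            g = max(Q, key=Q.get)         # exploit current best grouping
        r = reward(g)
        # Single-state (bandit-style) Q-update; a full MDP would use Q(s')
        Q[g] += alpha * (r + gamma * max(Q.values()) - Q[g])
    return max(Q, key=Q.get)

# Toy usage: three hypothetical groupings with noisy rewards
best = q_learning_grouping(
    ['g1', 'g2', 'g3'],
    lambda g: {'g1': 1.0, 'g2': 1.5, 'g3': 0.8}[g] + random.gauss(0, 0.1))
print(best)  # most often 'g2'
```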
The performance of arithmetic operations in number fields is actively researched by many scientists, as evidenced by significant publications in this field. In this work, we offer techniques to increase the performance of software implementations of a finite field multiplication algorithm on both 32-bit and 64-bit platforms. The developed technique, called the "delayed carry mechanism," removes the need to handle a significant-bit carry at each iteration of the sum accumulation loop. This mechanism reduces the total number of additions and makes effective use of modern parallelization technologies.
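To make the idea concrete, here is a sketch of delayed carries in multi-limb multiplication. Python's unbounded integers stand in for 64-bit accumulators, so this only illustrates the structure; in C, the number of products accumulated per limb must be bounded so a 64-bit accumulator cannot overflow, which is exactly what the paper's mechanism manages.

```python
# Schoolbook multiplication with 32-bit limbs, where partial products are
# accumulated in wide variables and carries are propagated once at the end,
# rather than inside the inner loop.

MASK32 = 0xFFFFFFFF

def mul_delayed_carry(a, b):
    """a, b: little-endian lists of 32-bit limbs; returns limbs of a*b."""
    acc = [0] * (len(a) + len(b))    # wide accumulators, one per result limb
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            acc[i + j] += ai * bj    # delayed carry: no per-step carry handling
    out, carry = [], 0
    for v in acc:                    # single carry-propagation pass at the end
        v += carry
        out.append(v & MASK32)
        carry = v >> 32
    return out

# 0xFFFFFFFF * 0xFFFFFFFF = 0xFFFFFFFE00000001
print([hex(l) for l in mul_delayed_carry([0xFFFFFFFF], [0xFFFFFFFF])])
```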
The conventional per-survivor-processing (PSP) scheme suffers from the error propagation problem because it does not fully use the state information provided by a hidden Markov process. This study proposes vertical cooperation among states to enhance the reliability of estimates in the PSP scheme. The key idea of the proposed algorithm is to estimate the system uncertainty at each time stage in a maximum likelihood (ML) manner, choosing the parameter estimate of the state with the minimum cumulative branch metric as the survival estimate of that time stage. Computer simulations show that, with the improved phase estimates, the proposed algorithm significantly outperforms the conventional PSP scheme in data decoding.
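A toy sketch of the vertical-cooperation step, assuming each surviving path carries a (cumulative metric, parameter estimate) pair; the function name and data layout are hypothetical, not the paper's notation.

```python
def select_stage_estimate(states):
    """Vertical cooperation across trellis states at one time stage.

    `states` is a list of (cumulative_metric, param_estimate) pairs, one per
    surviving path. The estimate attached to the path with the minimum
    cumulative branch metric becomes the stage's ML-style estimate, shared
    with every state instead of each state trusting only its own.
    """
    metric, estimate = min(states, key=lambda s: s[0])
    return estimate

# Example: three survivors with phase estimates; the best-metric path wins.
stage = [(12.4, 0.31), (9.7, 0.28), (15.1, 0.40)]
print(select_stage_estimate(stage))  # -> 0.28
```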
This study addresses the problem of designing robust stabilisation control for a large class of uncertain single-machine infinite-bus electrical power systems with a static var compensator (SVC). This class of systems may be perturbed by plant uncertainties, unmodelled perturbations and external disturbances. An adaptive neural-network-based dynamic feedback controller is developed such that all states and signals of the closed-loop system are bounded and the stabilisation error can be made as small as desired. When the small perturbations in the input weighting gains are neglected, an H-infinity control performance can be guaranteed. The adaptive neural network approximators are designed to learn the behaviour of the unknown functions, and in turn a modified procedure is proposed such that the number of neural network basis functions can be significantly reduced. Consequently, the intelligent robust control scheme developed here is computationally simple and easy to implement from the viewpoint of practical applications. The developed robust control scheme not only handles a large class of uncertain SVC-driven power systems, but also achieves the aim of enhancing stability performance. Finally, simulations are provided to demonstrate the effectiveness and performance of the proposed control algorithm.
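For flavour, a minimal sketch of the kind of radial-basis approximator and gradient-style adaptive law such controllers typically use; the Gaussian basis, the gain `gamma`, and the Euler-step update are generic illustrations, not the paper's modified procedure.

```python
import numpy as np

def rbf_basis(x, centers, width=1.0):
    """Gaussian basis functions phi_i(x) = exp(-||x - c_i||^2 / width^2)."""
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / width ** 2)

def adapt_weights(W, phi, error, gamma=0.05):
    """Gradient-style adaptive law, one Euler step of W_dot = gamma*error*phi.

    W approximates an unknown plant function as f_hat(x) = W . phi(x); the
    stabilisation error drives the update, mirroring the usual adaptive
    neural control structure (gains and law here are illustrative only).
    """
    return W + gamma * error * phi

# Toy usage: 5 basis centers on a 2-D state space
centers = np.random.default_rng(0).uniform(-1, 1, size=(5, 2))
W = np.zeros(5)
x, e = np.array([0.2, -0.1]), 0.3
W = adapt_weights(W, rbf_basis(x, centers), e)
print(W)
```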
Shrew DDoS attacks are stealthy, low-rate, TCP-targeted DDoS attacks that conceal their malicious activity within normal traffic. Although this pretense lets them elude detection in the time domain, their energy exposes them in the frequency domain. Online Power Spectral Density (PSD) analysis makes real-time PSD data conversion a must. In this paper, an optimized FPGA-based real-time PSD converter is proposed, built on our component-reusable Auto-Correlation (AC) algorithm and an adapted 2N-point real-valued Discrete Fourier Transform (DFT) algorithm. Further optimization is achieved by exploiting the algorithms' characteristics and hardware parallelism. Evaluation results from both simulation and synthesis are provided. The conversion of a 512-point data sequence can be finished within the time interval of sampling one data point, and the overall design fits easily in a Xilinx Virtex2 Pro FPGA.
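A software sketch of the converter's two stages, an autocorrelation front end followed by a real-valued DFT, with numpy standing in for the FPGA datapath; the biased autocorrelation estimator is an assumption.

```python
import numpy as np

def psd_via_autocorrelation(x):
    """PSD estimate of a real sequence as the DFT of its autocorrelation."""
    n = len(x)
    # Stage 1: biased autocorrelation r[k] = (1/n) * sum_i x[i] * x[i+k]
    r = np.correlate(x, x, mode="full")[n - 1:] / n
    # Stage 2: real-input DFT of the autocorrelation yields the PSD estimate
    return np.abs(np.fft.rfft(r))

# 512-point test signal: a sinusoid at normalized frequency 0.1 plus noise
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 0.1 * np.arange(512)) + 0.1 * rng.standard_normal(512)
print(psd_via_autocorrelation(x).argmax())  # peak near bin 0.1 * 512 = 51
```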
This paper is a case study, conducted at an external audit company, exploring the usefulness of machine learning algorithms for improving the quality of audit work. Annual data of 777 firms from 14 different sectors are collected. The Particle Swarm Optimization (PSO) algorithm is used as a feature selection method. Ten state-of-the-art classification models are compared in terms of accuracy, error rate, sensitivity, specificity, F-measure, Matthews Correlation Coefficient (MCC), Type-I error, Type-II error, and Area Under the Curve (AUC) using Multi-Criteria Decision-Making methods such as Simple Additive Weighting (SAW) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The results of Bayes Net and J48 demonstrate an accuracy of 93% for suspicious-firm classification. With the tremendous growth of financial fraud cases, machine learning will play a major part in improving the quality of audit field work in the future.
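As an illustration of the ranking step, a compact TOPSIS implementation; the weights, the benefit/cost split, and the toy scores are invented for the example, not taken from the study.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives with TOPSIS.

    scores:  (n_alternatives, n_criteria) matrix (e.g. classifiers x metrics)
    weights: criterion weights summing to 1
    benefit: True for larger-is-better criteria (accuracy, AUC),
             False for smaller-is-better ones (error rate, Type-I/II error).
    """
    m = scores / np.linalg.norm(scores, axis=0)   # vector-normalize columns
    v = m * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - worst, axis=1)
    return d_neg / (d_pos + d_neg)                # closeness: higher is better

# Toy example: three classifiers scored on accuracy (benefit) and error (cost)
scores = np.array([[0.93, 0.07], [0.90, 0.10], [0.88, 0.12]])
print(topsis(scores, np.array([0.6, 0.4]), np.array([True, False])))
```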
This paper proposes a recurrent learning algorithm for designing the controllers of continuous dynamical systems in optimal control problems. The designed controllers take the form of unfolded recurrent neural networks embedded with physical laws drawn from classical control techniques. The proposed learning algorithm is characterized by its double-forward-recurrent-loops structure, which solves both the temporal-recurrence and structural-recurrence problems. The first problem results from the nature of general optimal control problems, where the objective functions are often related to (evaluated at) some specific (instead of all) time steps or system states only, causing missing learning signals at some time steps or system states. The second problem is due to the high-order discretization of the continuous systems by the Runge-Kutta method, which we perform to increase control accuracy. This discretization transforms the system into several identical subnetworks interconnected together, like a recurrent neural network expanded along the time axis. Two recurrent learning algorithms with different convergence properties are derived: the first-order and second-order learning algorithms. The computations of both algorithms are local and are performed efficiently as network signal propagation. We also propose two new nonlinear controller structures for two specific control problems: 1) the two-dimensional (2-D) guidance problem and 2) the optimal PI control problem. Under the training of the proposed recurrent learning algorithms, these two controllers can be easily tuned to be suboptimal for given objective functions. Extensive computer simulations have shown the optimization and generalization abilities of controllers designed by the proposed learning scheme.
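A minimal sketch of the unfolding idea: each Runge-Kutta step acts as one identical subnetwork, and chaining steps expands the plant into a recurrent network along the time axis. The toy plant and proportional controller below are illustrative assumptions.

```python
def rk4_step(f, x, u, dt):
    """One fourth-order Runge-Kutta step of x_dot = f(x, u).

    In the paper's view, each such step is an identical subnetwork; chaining
    T of them unrolls the continuous plant into a recurrent network through
    which learning signals are back-propagated.
    """
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout(f, controller, x0, steps, dt):
    """Unfold plant + controller over `steps` stages (the expanded network)."""
    x, traj = x0, [x0]
    for _ in range(steps):
        x = rk4_step(f, x, controller(x), dt)
        traj.append(x)
    return traj

# Toy plant x_dot = -x + u with a proportional controller u = -0.5 * x
print(rollout(lambda x, u: -x + u, lambda x: -0.5 * x, 1.0, 5, 0.1)[-1])
```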
In advanced technologies, an increasing proportion of defects manifest themselves as small delay faults. Most of today's advanced delay-fault algorithms can propagate only those delay faults that create logic or glitch faults. An algorithm is proposed for circuit fault diagnosis in deep sub-micron technology that propagates the actual timing faults, as well as those delay faults that eventually create logic faults, to the primary outputs. Unlike backtrack algorithms that predict the fault site by tracing the syndrome at a faulty output back into the circuit, this approach propagates the fault forward from the fault site by mapping a nine-valued voltage model on top of a five-valued voltage model. In such a forward approach, accuracy is greatly increased since all composite syndromes at all faulty outputs are considered simultaneously. As a result, the proposed approach is applicable even when the delay size is relatively small. Experimental results show that the number of fault candidates produced by this approach is considerable.
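For background, a sketch of the classic five-valued D-calculus that such voltage models build on (the paper layers a nine-valued model on top of it); the table-driven AND evaluation here is the textbook version, not the paper's exact model.

```python
# Five-valued D-calculus used in fault propagation: '0', '1', 'X' (unknown),
# 'D' (1 in the good circuit / 0 in the faulty one), 'B' (D-bar: 0 good / 1 faulty).
GOOD  = {'0': 0, '1': 1, 'D': 1, 'B': 0}
FAULT = {'0': 0, '1': 1, 'D': 0, 'B': 1}

def and5(a, b):
    """AND of two five-valued signals, evaluating good and faulty circuits separately."""
    if a == 'X' or b == 'X':
        # X dominates unless the other input forces a 0 in both circuits
        return '0' if '0' in (a, b) else 'X'
    g = GOOD[a] & GOOD[b]
    f = FAULT[a] & FAULT[b]
    return {(0, 0): '0', (1, 1): '1', (1, 0): 'D', (0, 1): 'B'}[(g, f)]

# A D on one input propagates through an AND gate only when the other input is 1
print(and5('D', '1'))  # -> 'D'
print(and5('D', '0'))  # -> '0' (fault effect blocked)
```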
Software visualization is the systematic and imaginative use of interactive computer graphics technology and the disciplines of graphic design, typography, color, cinematography, animation, and sound design to enhance the comprehension of algorithms and computer programs. This article demonstrates that graphical and auditory representations of programs are useful in debugging and can enliven and enrich programming as a cognitively accessible multimedia experience. To illustrate these ideas, the authors present three visualization approaches: algorithm animation, typographic source code presentation, and interactive auralization for debugging. All three are useful for debugging: carefully crafted algorithm animations can show how programs work; enhanced typographic representations can improve the readability and comprehensibility of source code; and an interactive environment can let programmers specify software visualizations, including audio portraits of running programs.