ISBN (Print): 9783642159787
Cellular automata were applied to model pedestrian flow, where the local neighborhood and transition rules applied to each person in the crowd were determined automatically by the pedestrians' experience. The experience was based on two parameters: the number of continuous vacant cells in front of the cell to proceed to, and the number of pedestrians in the cell to proceed to. The experience was evaluated numerically, and each pedestrian selected the cell to proceed to according to the evaluated index. The flow formations of pedestrians moving in opposite directions on a straight pathway and around a corner were simulated, and the number of rows formed was discussed in relation to the pedestrian density in the simulation space.
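A minimal sketch of the two-parameter evaluation described above, in Python. The candidate-cell set, the scoring weights `w_free` and `w_occ`, and the function names are illustrative assumptions, not the rules from the paper:

```python
import numpy as np

# Hypothetical sketch of the two-parameter "experience" index described above.
# Grid cells hold pedestrian counts; direction is +1 (rightward) or -1 (leftward).

def free_run_ahead(grid, row, col, direction):
    """Count continuous vacant cells in front of (row, col) along the walking direction."""
    count = 0
    c = col + direction
    while 0 <= c < grid.shape[1] and grid[row, c] == 0:
        count += 1
        c += direction
    return count

def choose_next_cell(grid, row, col, direction, w_free=1.0, w_occ=2.0):
    """Score candidate cells (straight and the two diagonals) and return the best one."""
    best, best_score = (row, col), -np.inf
    for dr in (-1, 0, 1):
        r, c = row + dr, col + direction
        if not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]):
            continue
        # experience index: vacant run in front of the candidate minus its occupancy
        score = w_free * free_run_ahead(grid, r, c, direction) - w_occ * grid[r, c]
        if score > best_score:
            best, best_score = (r, c), score
    return best

# Example: a 3x8 corridor with one oncoming pedestrian blocking the straight path.
grid = np.zeros((3, 8), dtype=int)
grid[1, 4] = 1                           # oncoming pedestrian
print(choose_next_cell(grid, 1, 2, +1))  # expected to sidestep to row 0 or 2
```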
This paper is devoted to the implementation of a learning algorithm, based on an Artificial Neural Network (ANN), to predict the angular position of the passive joint of a 3R under-actuated serial robot. An under-actuated system has fewer actuators than degrees of freedom; therefore, estimating or modelling its behaviour is difficult and involves many uncertainties. To overcome the disadvantages of several methods reported in the literature, a specific ANN model has been designed and trained to learn a desired set of angular positions of the passive joint from a given set of input torques and angular positions of the active joints over a certain period of time. The ANN benefits from being a model-free technique. Accordingly, data from sensors mounted on each joint were collected experimentally and provided to the developed ANN model. The learning algorithm can directly determine the position of the passive joint and can therefore completely eliminate the need for any system modelling. Hence, this method could be generalized to predicting the behaviour of under-actuated systems. Results show a successful implementation of the learning algorithm in predicting the behaviour of the 3R under-actuated robot.
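A hedged sketch of this setup: a small feed-forward network mapping active-joint torques and angles to the passive-joint angle. The synthetic data, the `MLPRegressor` size, and the input layout are assumptions standing in for the experimental sensor data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical sketch: learn the passive-joint angle from active-joint torques and angles.
# Real data would come from joint sensors; here a synthetic nonlinear map stands in.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 4))        # [tau1, tau2, q1, q2] for the two active joints
q_passive = np.sin(X[:, 0] + X[:, 2]) * np.cos(X[:, 1] - X[:, 3])  # stand-in dynamics

X_train, X_test, y_train, y_test = train_test_split(X, q_passive, random_state=0)

# Network size and training settings are assumptions, not the paper's configuration.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))
```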
ISBN (Print): 0780312333
Training a neural network is an NP-complete problem [1][3]. A learning algorithm is proposed in this paper. Analysis and simulation results show that this kind of algorithm, used to train large BP neural networks, can speed up the learning rate and obtain accurate results.
Neural networks are usually trained using local gradient-based procedures. Such methods frequently yield sub-optimal solutions by becoming trapped in local minima. In some application problems, the input/output data sets used to train a neural network are not perfectly precise but are only known to lie within certain ranges. It is difficult for a traditional neural network to solve such problems. A learning algorithm based on interval optimization is presented in this paper, and it overcomes the above disadvantages of the traditional learning algorithm.
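One way to picture interval-valued targets is a loss that penalises a prediction only when it leaves its target interval. The sketch below is a simplified stand-in, not the paper's interval-optimization algorithm; the network size, learning rate, and interval width are assumptions:

```python
import numpy as np

# Sketch of interval-targeted training: each sample has a target interval [lo, hi]
# and the loss penalises predictions only when they fall outside it.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (300, 1))
center = np.sin(np.pi * X[:, 0:1])
lo, hi = center - 0.1, center + 0.1        # imprecise targets given as intervals

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(3000):
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    # gradient of 0.5*max(0, out-hi)^2 + 0.5*max(0, lo-out)^2 with respect to out
    g = np.maximum(out - hi, 0.0) - np.maximum(lo - out, 0.0)
    dW2 = h.T @ g / len(X); db2 = g.mean(axis=0)
    dh = (g @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

inside = np.mean((out >= lo) & (out <= hi))
print("fraction of predictions inside their target interval:", float(inside))
```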
In this paper, we propose a novel algorithm to learn a Büchi automaton from a teacher who knows an omega-regular language. The learned automaton can be a nondeterministic Büchi automaton or a limit-deterministic Büchi automaton. The learning algorithm is based on learning a formalism called family of DFAs (FDFAs), recently proposed by Angluin and Fisman. The key difference is that we use a classification tree structure instead of the standard observation table structure. The worst-case storage space required by our algorithm is quadratically better than that required by the table-based algorithm proposed by Angluin and Fisman. We implement the proposed learning algorithms in the learning library ROLL (Regular Omega Language Learning), which also contains the other complete omega-regular learning algorithms available in the literature. Experimental results show that our tree-based learning algorithms achieve the best performance regarding the number of solved learning tasks. (C) 2020 Elsevier Inc. All rights reserved.
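The classification-tree idea can be illustrated on plain DFA states: internal nodes carry distinguishing experiments, leaves carry access strings, and a word is "sifted" to a leaf via membership queries. The sketch below covers only this DFA fragment with a toy oracle; the FDFA and Büchi constructions of the paper are not reproduced:

```python
# Minimal sketch of a classification tree used in place of an observation table.
class Node:
    def __init__(self, experiment=None, access=None):
        self.experiment = experiment      # internal node: distinguishing suffix
        self.access = access              # leaf: access string for a state
        self.children = {}                # membership outcome (True/False) -> Node

def sift(root, word, member):
    """Walk the tree, branching on membership of word + experiment, until a leaf."""
    node = root
    while node.experiment is not None:
        outcome = member(word + node.experiment)
        if outcome not in node.children:          # unknown outcome: a new state is discovered
            node.children[outcome] = Node(access=word)
        node = node.children[outcome]
    return node

# Toy membership oracle for the language "even number of a's" over {a, b}.
member = lambda w: w.count("a") % 2 == 0

root = Node(experiment="")                         # the empty suffix splits accept/reject
root.children[True] = Node(access="")              # even-a state
root.children[False] = Node(access="a")            # odd-a state

print(sift(root, "abba", member).access)   # -> "" (even number of a's)
print(sift(root, "ab", member).access)     # -> "a" (odd number of a's)
```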
Edge computing and the IIoT (Industrial Internet of Things) are two representative application scenarios in 5G (5th Generation) mobile communication networks. This paper therefore studies resource allocation methods for these two typical scenarios and proposes an energy-efficient resource allocation method based on a DRL algorithm, which enhances network performance and reduces network operation cost. Firstly, according to the characteristics of edge computing, a resource allocation model with user quality-of-service (QoS) guarantees is designed, taking the association between base stations and users and the transmission power allocated by base stations to users as decision variables, maximizing overall energy efficiency as the goal, and taking the needs of mobile users as constraints. Secondly, simulation experiments verify that the proposed method has faster training and convergence speed, guarantees the quality-of-service requirements of mobile users, and realizes intelligent, energy-saving resource allocation. The approximate numbers of steps to convergence in the four cases are 500, 350, 250, and 220, and the corresponding reward values are 21.76, 21.09, 20.38, and 20.25. Thirdly, for the IIoT scenario, this paper proposes a resource allocation method for 5G wide-connection low-delay services based on the asynchronous advantage actor-critic (A3C) algorithm. According to the characteristics of the IIoT environment, a resource allocation model for 5G wide-connection low-delay services is designed, taking the association between base stations and users and the transmission power allocated by base stations to users as decision variables, and maximizing energy efficiency while satisfying the needs of each user as the optimization goal. On this basis, an energy-efficient resource allocation method based on A3C is proposed. The hierarchical agglomerative clustering algorithm...
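A rough sketch of the decision variables and reward signal such a model implies: a base-station/user association matrix, per-user transmit powers, and a reward built from energy efficiency with a QoS-shortfall penalty. The Shannon-rate channel model, the penalty weight, and all numbers are assumptions for illustration:

```python
import numpy as np

# Decision variables: assoc[b, u] in {0, 1} (user u served by BS b) and power[b, u] in watts.
rng = np.random.default_rng(0)
n_bs, n_users = 3, 6
gain = rng.uniform(0.1, 1.0, (n_bs, n_users))     # channel gains (assumed)
noise = 1e-3
qos_rate = 1.0                                     # minimum per-user rate (bit/s/Hz, assumed)

def reward(assoc, power, penalty=10.0):
    """Energy efficiency (sum rate / total power) minus a penalty for unmet QoS."""
    snr = assoc * power * gain / noise
    rate = np.log2(1.0 + snr).sum(axis=0)          # per-user rate
    total_power = (assoc * power).sum() + 1e-9
    energy_eff = rate.sum() / total_power
    shortfall = np.maximum(qos_rate - rate, 0.0).sum()
    return energy_eff - penalty * shortfall        # what a DRL agent would maximise

assoc = np.zeros((n_bs, n_users))
assoc[rng.integers(0, n_bs, n_users), np.arange(n_users)] = 1   # random association
power = rng.uniform(0.1, 1.0, (n_bs, n_users))
print("reward for a random allocation:", round(reward(assoc, power), 3))
```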
The Cerebellar Model Arithmetic Computer (CMAC) proposed by Albus is a neural network that imitates the human cerebellum; it is basically a table-lookup technique for representing nonlinear functions. The CMAC can approximate a wide variety of nonlinear functions by learning. However, it requires a large number of learning iterations to achieve adequate approximation accuracy, because the CMAC learning algorithm is based on the Least Mean Square (LMS) method. In this paper, a CMAC learning algorithm based on the Kalman filter is proposed. Using this algorithm, the number of learning iterations is reduced while achieving approximation accuracy comparable to a system using the conventional learning algorithm. In general, a learning algorithm based on the Kalman filter requires much more computation than algorithms based on the Least Mean Square method. Because the CMAC system equations contain a sparse matrix, however, the computational cost of the proposed learning algorithm can be reduced. Furthermore, since two CMAC weights located far from each other can be considered to have little correlation, the number of weights that must be updated in a learning iteration can be reduced, which further lowers the computational cost of the learning algorithm. Computer simulation results for modeling problems are presented.
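For orientation, the sketch below shows a plain 1-D CMAC: overlapping tilings index a weight table, and the conventional LMS update spreads the error over the active cells. The Kalman-filter update and the sparsity-based savings proposed in the paper are not reproduced; tiling counts and the target function are assumptions:

```python
import numpy as np

n_tilings, n_cells = 8, 32
weights = np.zeros((n_tilings, n_cells))
lo, hi = 0.0, 1.0

def active_cells(x):
    """Return the index of the active cell in each (shifted, overlapping) tiling."""
    idx = []
    for t in range(n_tilings):
        offset = t / (n_tilings * n_cells)
        idx.append(int((x - lo + offset) / (hi - lo) * n_cells) % n_cells)
    return idx

def predict(x):
    return sum(weights[t, c] for t, c in enumerate(active_cells(x)))

def lms_update(x, target, lr=0.2):
    err = target - predict(x)
    for t, c in enumerate(active_cells(x)):
        weights[t, c] += lr * err / n_tilings      # spread the correction over the tilings

rng = np.random.default_rng(0)
for _ in range(5000):                              # learn y = sin(2*pi*x)
    x = rng.uniform(lo, hi)
    lms_update(x, np.sin(2 * np.pi * x))

print("prediction at x=0.25:", round(predict(0.25), 3))   # roughly 1.0
```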
1H-Benzo[b]pyrrole samples were irradiated in air with a gamma source at 0.969 kGy per hour at room temperature for 24, 48 and 72 h. After irradiation, electron spin resonance (ESR), thermogravimetric analysis (TGA) and differential thermal analysis (DTA) measurements were immediately carried out on the irradiated and unirradiated samples. The ESR measurements were performed between 320 and 400 K, and ESR spectra were recorded from the samples irradiated for 48 and 72 h. The obtained spectra were observed to depend on temperature. Two radical-type centres were detected in the sample; the detected radiation-induced radicals were attributed to R-+•NH and R=•CC2H2. The g-values and hyperfine constants were calculated from the experimental spectra. It was also determined from the TGA spectrum that both the unirradiated and irradiated samples decomposed in a single step with rising temperature. Moreover, a theoretical study was presented and the success of machine learning methods was tested. It was found that bagging techniques, which are widely used in the machine learning literature, could noticeably improve prediction accuracy.
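As a generic illustration of the bagging technique mentioned at the end, the sketch below fits a bootstrap-aggregated ensemble of regression trees with scikit-learn on synthetic data; the study's actual descriptors and targets are not available here, so the inputs and target are assumptions:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (300, 3))                       # stand-in descriptors
y = X[:, 0] * 2 + np.sin(X[:, 1] * np.pi) + rng.normal(0, 0.1, 300)

# Bagging: train many trees on bootstrap resamples and average their predictions.
model = BaggingRegressor(n_estimators=50, random_state=0)   # default tree base learner
model.fit(X[:200], y[:200])
print("held-out R^2:", round(model.score(X[200:], y[200:]), 3))
```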
In this study, a force and torque estimation method based on an adaptive neuro-fuzzy inference system (ANFIS) has been developed to eliminate the multiple integral calculations of air gap coefficients that cause time delay in magnetic levitation control applications. In magnetic levitation applications containing a 4-pole hybrid electromagnet, multiple integral calculations must be carried out to obtain the air gap permeance parameters, and these parameters are needed to calculate the force and torque produced by the poles of the hybrid electromagnet. If a time delay occurs in the calculation of the permeance parameters, the actual force and torque values produced by the hybrid electromagnet's poles cannot be known exactly; thus, the advantage of having an exact model of the system is lost and, as a result, the controller's performance degrades. As a solution, an ANFIS using a hybrid learning algorithm consisting of backpropagation and least-squares learning methods is proposed to estimate the force and torque parameters, using training data previously obtained by multiple integral calculations.
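The least-squares half of the ANFIS hybrid rule can be shown compactly: with the Gaussian premise parameters held fixed, the linear consequent parameters of a first-order Sugeno model have a closed-form solution. The sketch below uses synthetic data; the membership-function centres and widths are assumptions, and the backpropagation pass over the premise parameters (as well as the real force/torque training data) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 400)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=400)   # stand-in target

centers = np.linspace(0, 1, 5)                # fixed premise parameters (assumed)
sigma = 0.15
mu = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)
w = mu / mu.sum(axis=1, keepdims=True)        # normalised rule firing strengths

# Each rule i contributes w_i * (p_i * x + r_i); stack regressors and solve by least squares.
A = np.hstack([w * x[:, None], w])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ theta
print("training RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 4))
```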
In recent years, technology has been developing rapidly, and how to learn and obtain desired knowledge efficiently has become an important but complicated problem. We hope there are methods that can give us suggestions about how to learn knowledge efficiently. In this paper, we describe some learning behaviours of people and then use our Effective Learning Curve Model to imitate this learning phenomenon. Using our learning function model, we can imitate people's learning behaviour through pretesting. Everyone has a different learning-behaviour function for each course, and different learning sequences of courses lead to different learning efficiency. From this viewpoint, we propose the Max Learning Efficiency Slope First Algorithm (MLESFA), which differentiates the learning functions to suggest a course learning sequence for obtaining the desired knowledge efficiently. The algorithm can also help us understand how much time to spend on each course in order to achieve better learning efficiency under a time limit. Finally, we construct some learning examples and compare simulation results with other course-learning algorithms. The simulation results show that our MLESFA algorithm achieves better learning efficiency than the others.
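A greedy "steepest slope first" allocation captures the flavour of MLESFA: at each small time step, study the course whose learning curve currently rises fastest. The exponential curve form, the rates, and the time budget below are assumptions, not the paper's model:

```python
import numpy as np

rates = {"course_A": 0.9, "course_B": 0.5, "course_C": 0.3}   # per-hour learning rates (assumed)
learned = {c: 0.0 for c in rates}                             # knowledge level in [0, 1)
time_spent = {c: 0.0 for c in rates}

def slope(course):
    # d/dt of 1 - exp(-rate * t), evaluated at the time already spent on the course
    return rates[course] * np.exp(-rates[course] * time_spent[course])

dt, budget = 0.1, 6.0                                         # hours
for _ in range(int(budget / dt)):
    best = max(rates, key=slope)                              # steepest learning curve right now
    time_spent[best] += dt
    learned[best] = 1.0 - np.exp(-rates[best] * time_spent[best])

print({c: round(t, 1) for c, t in time_spent.items()})
print("total knowledge:", round(sum(learned.values()), 3))
```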