Backpropagation is a supervised learning algorithm for training multi-layer networks: the error computed after the forward pass is used to adjust the weight values during the backward pass. In the proposed method for predicting cognitive abilities with backpropagation, the first step is to set the number of input neurons, hidden neurons, and output neurons. The network uses 6 input neurons corresponding to the cognitive criteria, 6 hidden neurons, and 2 output neurons. The highest accuracy was reached at the 64th epoch for both the binary sigmoid and bipolar sigmoid activation functions, at 82.93% +/- 37.63% and 85.37% +/- 35.34% respectively. The smallest root mean squared error for the binary sigmoid, 0.266 with a tolerance of +/- 0.258, occurred at the 100th epoch with an accuracy of 80.49%, while for the bipolar sigmoid activation function the smallest root mean squared error, 0.282 with a tolerance of +/- 0.353, was obtained at the 500th epoch.
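The forward/backward weight-update cycle described above can be sketched as follows. This is a minimal illustration with the binary sigmoid and a toy XOR-style dataset, not the paper's cognitive-criteria data; the layer sizes, learning rate, and epoch counts here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    """Binary sigmoid activation, range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, Y, hidden=6, epochs=2000, lr=0.5, seed=0):
    """Train a one-hidden-layer net with MSE loss and gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, Y.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)              # forward pass: hidden layer
        O = sigmoid(H @ W2)              # forward pass: output layer
        dO = (O - Y) * O * (1 - O)       # backward pass: output error signal
        dH = (dO @ W2.T) * H * (1 - H)   # error propagated back to hidden layer
        W2 -= lr * H.T @ dO              # weight updates driven by the error
        W1 -= lr * X.T @ dH
    return W1, W2

def rmse(X, Y, W1, W2):
    """Root mean squared error of the trained network on (X, Y)."""
    O = sigmoid(sigmoid(X @ W1) @ W2)
    return float(np.sqrt(np.mean((O - Y) ** 2)))

# XOR-style toy data: 2 inputs, 2 output neurons (one-hot classes)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[1, 0], [0, 1], [0, 1], [1, 0]], dtype=float)
W1a, W2a = train_backprop(X, Y, epochs=0)      # untrained baseline
W1b, W2b = train_backprop(X, Y, epochs=2000)   # RMSE drops with training
```

Tracking `rmse` per epoch, as done here before and after training, is how per-epoch error curves like the paper's 100th- and 500th-epoch figures are obtained.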
Second-order graph Laplacian regularization has the limitation that the solution remains biased toward a constant, which restricts its extrapolation capability; this lack of extrapolation results in poor generalization. An additional penalty on the function is needed to avoid over-fitting on the seen unlabeled training instances. The proposed third-order derivative based technique identifies sharp variations in the function and penalizes them accurately to avoid over-fitting. The resulting function leads to a more accurate and more general model that exploits the twist and curvature variations of the manifold. Extensive experiments on synthetic and real-world data sets clearly show that the additional regularization increases both the accuracy and the generality of the model.
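The regularization hierarchy the abstract refers to can be sketched in a standard manifold-regularization form (the notation below is assumed for illustration, not taken from the paper). With graph Laplacian $L$, labeled pairs $(x_i, y_i)$, and $\mathbf{f}$ the vector of function values on all (labeled and unlabeled) instances, the objective is

$$\min_{f}\ \sum_{i=1}^{l}\big(f(x_i)-y_i\big)^2 \;+\; \gamma_A\,\lVert f\rVert^2 \;+\; \gamma_I\,\mathbf{f}^{\top} L^{m}\,\mathbf{f},$$

where $m = 2$ gives the second-order Laplacian penalty whose null space produces the constant bias described above, and $m = 3$ is the higher-order variant that additionally penalizes sharp third-order variations of $f$ on the graph.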
For the Affinity Propagation (AP) method, the algorithm is modified by first applying Principal Component Analysis (PCA). PCA is used to reduce the attributes that have little influence on the data, so that only the most influential attributes remain before the clustering process with Affinity Propagation. The comparison shows that the PCA + AP clustering model performs better than the conventional AP clustering model. On the Air Quality dataset, the number of iterations and clusters produced by the PCA + AP model does not change and converges at an optimal 8 clusters, while the conventional clustering model produces 14 clusters with a significantly larger number of iterations; the PCA + AP model is therefore well suited to this dataset, producing an optimal number of clusters and iterations at 8 clusters. On the Water Quality Status dataset, the PCA + AP model likewise does not change and converges at an optimal 5 clusters, while the conventional clustering model produces a sub-optimal 10 clusters with a significantly larger number of iterations, so the PCA + AP model is also the better fit there, producing an optimal 5 clusters.
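The PCA-then-AP pipeline can be sketched compactly. This is not the paper's implementation: the toy data, the negative-squared-distance similarity, the median preference, and the damping value are all illustrative assumptions; the AP updates follow the standard responsibility/availability message-passing scheme.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def affinity_propagation(S, damping=0.9, iters=200):
    """Return exemplar indices for similarity matrix S (diagonal = preference)."""
    n = S.shape[0]
    R = np.zeros((n, n))       # responsibilities
    A = np.zeros((n, n))       # availabilities
    idx = np.arange(n)
    for _ in range(iters):
        # responsibility update: how well k suits i versus the runner-up
        AS = A + S
        best = AS.argmax(axis=1)
        first = AS[idx, best]
        AS[idx, best] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[idx, best] = S[idx, best] - second
        R = damping * R + (1 - damping) * Rnew
        # availability update: accumulated evidence that k is an exemplar
        Rp = np.maximum(R, 0)
        Rp[idx, idx] = R[idx, idx]
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = Anew[idx, idx].copy()
        Anew = np.minimum(Anew, 0)
        Anew[idx, idx] = dA
        A = damping * A + (1 - damping) * Anew
    return np.flatnonzero(np.diag(A + R) > 0)

# two well-separated toy blobs; negative squared distance as similarity
X = np.array([[0, 0], [0, .1], [.1, 0], [5, 5], [5, 5.1], [5.1, 5]], float)
Z = pca(X, 2)
D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
S = -D
S[np.arange(len(Z)), np.arange(len(Z))] = np.median(S[S < 0])  # preference
exemplars = affinity_propagation(S)
```

Unlike k-means, AP does not take a cluster count: the number of exemplars emerges from the data and the preference value, which is why the abstract reports the cluster counts (8 and 5) as outputs of the method rather than inputs.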
Today, big data has become a central topic of discussion for organizations. The major task associated with a big data stream is coping with its various challenges and performing the appropriate testing for optimal analysis of the data, which can benefit the processing of many activities, especially from a business perspective. The term big data refers to massive volumes of data (on the order of petabytes or exabytes) that exceed the processing and analytical capacity of conventional systems, raising the need to analyze and test big data before applications can be put into use. Testing such huge data, coming from numerous sources such as the internet, smartphones, audio, video, and other media, is a challenge in itself. The most practical way to test big data is an automated, programmatic approach. This paper outlines the characteristics of big data and the challenges associated with it, followed by an approach, a strategy, and a proposed framework for testing big data applications.
The RSA public-key cryptosystem was among the first practical realizations of the public-key concept introduced by Diffie and Hellman. At the core of RSA's security is the difficulty of factoring its modulus, a very large integer, into its prime factors. In this study, we present a step-by-step tutorial on factoring the RSA modulus using Euler's factorization algorithm, which belongs to the class of exact algorithms. Euler's factorization algorithm is implemented in the Python programming language. In the experiment, we also record the relation between the length of the RSA modulus and its factorization time. This study shows that Euler's factorization algorithm can be used to factor small RSA moduli, that the factoring time is directly proportional to the size of the RSA modulus, and that a better selection of some of Euler's parameters may lead to lower factoring time.
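Since the abstract names the algorithm and the language, a minimal sketch may help: Euler's method factors an odd n that can be written as a sum of two squares in two distinct ways, combining the two representations via gcds. The brute-force search for representations below is an illustrative simplification suitable only for small moduli.

```python
from math import gcd, isqrt

def two_square_reps(n):
    """All ways to write odd n as a*a + b*b with a odd, b even."""
    reps = []
    for a in range(1, isqrt(n) + 1, 2):
        b2 = n - a * a
        b = isqrt(b2)
        if b * b == b2 and b % 2 == 0:
            reps.append((a, b))
    return reps

def euler_factor(n):
    """Euler's method: combine two sum-of-two-squares representations of n."""
    reps = two_square_reps(n)
    if len(reps) < 2:
        return None                      # method does not apply to this n
    (a, b), (c, d) = reps[:2]
    k = gcd(abs(a - c), abs(d - b))      # both differences are even
    r = gcd(a + c, d + b)                # both sums are even
    f = (k // 2) ** 2 + (r // 2) ** 2    # one nontrivial factor of n
    return f, n // f
```

For example, `euler_factor(221)` recovers the factors 17 and 13 from the representations 5² + 14² and 11² + 10². The method only applies when both prime factors are ≡ 1 (mod 4), one reason it is limited to a tutorial role rather than a general attack on RSA.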
In senior high schools, first-year students in particular must be placed in a department (major) that accords with the grades they have produced. The proposed application predicts student majors from student grades using an artificial neural network (ANN) algorithm in RapidMiner, in order to produce more precise and faster accuracy results. The analysis carried out yielded an accuracy of 71.86%. An ANN may have a single-layer network architecture (single layer net); networks with more than one layer are called multilayer nets, alongside competitive layer nets. A multilayer net has one or more hidden layers between the input layer and the output layer, with weights between each pair of adjacent layers. The ANN architecture used here has 3 layers: 7 input neurons, 6 hidden neurons, and 2 output neurons. In the RapidMiner model, 20 neurons were used as outputs of the artificial neural networks.
Emotion recognition has gained huge popularity nowadays. Physiological signals provide an appropriate way to detect human emotion with the help of IoT. In this paper, a novel system is proposed that can determine emotional status from physiological parameters; the design specification and software implementation of the system are also presented. The system may find wide use in medicine (especially for emotionally challenged people), smart homes, etc. The physiological parameters to be measured include heart rate (HR), galvanic skin response (GSR), and skin temperature. To construct the proposed system, the measured physiological parameters were fed to neural networks, which classify the data into various emotional states, mainly anger, happiness, sadness, and joy. This work establishes the correlation between human emotions and the corresponding changes in physiological parameters.
K-Nearest Neighbor (K-NN) is a lazy learning method belonging to the family of instance-based learning. K-NN classifies new or testing data by finding the group of objects in the training data that are closest to it. Support Vector Machine (SVM) is a machine learning method that works by finding the best hyperplane separating two classes in the input space. School achievement is attained through serious and disciplined study. The category of outstanding students, those with a good average score and no record of absences (A) or lateness, can be classified to obtain information on the accuracy of the data being tested. In the testing process both methods obtained good accuracy: K-NN achieved 88.52% while SVM achieved 91.07%.
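The "closest group of objects" idea behind K-NN can be sketched in a few lines. The feature encoding below (average score, absence count) and the labels are hypothetical stand-ins for the paper's student data, not its actual attributes.

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among the k training points nearest
    to it, using squared Euclidean distance."""
    ranked = sorted(
        (sum((p - q) ** 2 for p, q in zip(row, x)), y)
        for row, y in zip(train, labels)
    )
    votes = Counter(y for _, y in ranked[:k])
    return votes.most_common(1)[0][0]

# hypothetical student features: (average score, number of absences)
train = [(90, 0), (88, 1), (92, 0), (60, 5), (55, 7), (58, 6)]
labels = ["outstanding", "outstanding", "outstanding",
          "regular", "regular", "regular"]
```

Being a lazy learner, K-NN does no work at training time; all computation happens at query time, which is the practical trade-off against SVM's up-front optimization of the separating hyperplane.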
IT governance is one of the needs in managing enterprise-level IT. This study addresses part of the IT governance decision domain, namely IT investment and prioritization. The purpose of this study is to determine the feasibility of implementing the project management module of the open-source ERP system Odoo as a replacement for the current system, judged on operational, economic, and technical aspects. The methods used are Fit/Gap analysis and a cost-benefit model using the economic impact worksheet. The analysis compares the requirements with the features of the current system and of the Odoo ERP system. The research finds that the Odoo ERP system is feasible in all three aspects: operational feasibility, with Odoo reaching a 72% fit; economic feasibility, with an ROI of 84%; and technical feasibility, since the technical requirements can be fulfilled by leasing an AWS server. The conclusion is that the Odoo project management module is feasible to adopt in terms of the three aspects studied: operational, economic, and technical.
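The ROI figure in a cost-benefit worksheet follows the standard net-benefit-over-cost formula. The sketch below uses hypothetical benefit and cost totals chosen so the result matches the 84% reported above; they are not the study's actual figures.

```python
def roi_percent(total_benefit, total_cost):
    """ROI = (net benefit / cost) * 100, the usual cost-benefit ratio."""
    return (total_benefit - total_cost) / total_cost * 100

# hypothetical totals: 184 units of benefit against 100 units of cost
print(roi_percent(184, 100))  # -> 84.0, i.e. an ROI of 84%
```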
Climate anomalies are considered an important factor closely related to many disasters causing heavy human losses, such as airline crashes, wildfires, drought, and flooding in many areas. Many researchers have projecte...