In the contemporary era of big data, data visualization is one of the most challenging parts of the discovery process. For a classifier, the primary goal is to identify the hidden patterns in big data. The performance of a classifier depends on the feature space, the number of classes, and the size of the data. New algorithms are required to improve the reliability, efficiency, and accuracy of classifiers. This paper presents classification and visualization of disease information using a deep-learning-based convolutional neural network (CNN) classifier. Feature selection and the handling of massive multivariate data are performed using particle swarm optimization (PSO) and principal component analysis (PCA). Real-world datasets are used to demonstrate the proposed learning algorithm. According to the comparative study, deep learning classifiers offer better performance than the other classifiers considered.
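The abstract gives no implementation details; as a rough illustration of the PCA-then-classifier stage it describes, the sketch below reduces a synthetic multivariate dataset with scikit-learn's PCA and trains a small 1-D convolutional network in PyTorch. The data, layer sizes, and training settings are placeholders rather than the paper's configuration, and the PSO feature-selection step is omitted.

```python
# Illustrative sketch: PCA dimensionality reduction feeding a small 1-D CNN
# (a stand-in for the paper's deep learning classifier). Dataset, layer
# sizes, and hyperparameters are placeholders, not the paper's settings.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200)).astype(np.float32)  # 500 samples, 200 raw features
y = rng.integers(0, 2, size=500)                    # synthetic binary disease labels

# Reduce the multivariate feature space before classification.
X_red = PCA(n_components=32).fit_transform(X).astype(np.float32)

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
            nn.Linear(8 * 8, n_classes),
        )
    def forward(self, x):                # x: (batch, n_features)
        return self.net(x.unsqueeze(1))  # add channel dim -> (batch, 1, n_features)

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
xb, yb = torch.from_numpy(X_red), torch.from_numpy(y).long()
for _ in range(20):                      # tiny training loop, for illustration only
    opt.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```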
In this paper, a prototype model of human rationality is developed. To answer queries, the internal knowledge representation mechanism is taken to be a set of inter-related clusters stored permanently in long-term memory (LTM). In response to a query involving a single predicate (the simplest case), only one cluster, along with its various attributes and values, may be retrieved. In contrast, a complex query involving multiple predicates requires the retrieval of multiple related chunks, forming a network of knowledge often referred to as a concept map, from which the response is generated. Internally, the predicate of the question is matched against the facts in the knowledge base, and the chunk(s) with the corresponding attribute-value pairs are retrieved through a process akin to unification, a substitution procedure. A query that cannot be answered because the relevant knowledge is absent generates a negative response.
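A minimal sketch of the retrieval mechanism described above, assuming chunks are attribute-value dictionaries linked by is_a relations; the knowledge base, function name, and traversal rule are illustrative and not taken from the paper.

```python
# Illustrative sketch of chunk retrieval by predicate matching.
# The knowledge base, chunk layout, and query format are assumptions,
# not the paper's actual representation.
LTM = {  # long-term memory: concept -> chunk of attribute-value pairs
    "canary": {"is_a": "bird", "color": "yellow", "can": "sing"},
    "bird":   {"is_a": "animal", "has": "wings", "can": "fly"},
}

def retrieve(concept, attribute):
    """Match the query predicate against stored chunks, following
    is_a links so that related chunks form a small concept map."""
    while concept in LTM:
        chunk = LTM[concept]
        if attribute in chunk:          # unification-like substitution:
            return chunk[attribute]     # bind the query variable to the value
        concept = chunk.get("is_a")     # climb to the related chunk
    return None                         # absent knowledge -> negative response

print(retrieve("canary", "color"))   # 'yellow'  (a single chunk suffices)
print(retrieve("canary", "has"))     # 'wings'   (multiple chunks traversed)
print(retrieve("canary", "weight"))  # None      (negative response)
```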
Authors:
R. Krithiga, Assistant Professor, Department of Computer Applications, Perunthalaivar Kamaraj Arts College, Puducherry
E. Ilavarasan, Professor, Department of Computer Science & Engineering, Pondicherry Engineering College, Puducherry
Social networks have grown into a popular way for internet users to interact with friends and family members, read news, and discuss events. Users spend more and more time on popular social platforms (e.g., Facebook, Twitter), storing and sharing their personal information. This fact, together with the possibility of reaching thousands of users, attracts the attention of malicious users. They exploit the implicit trust relationships between users to accomplish their malicious objectives, for instance, embedding malicious links in posts/tweets, spreading fake news, and sending unsolicited messages to genuine users. In this paper, we review various existing techniques for spam profile detection in online social networks.
Authors:
Sangayya Gulledmath, Research Scholar, Department of Computer Science and Applications, Reva University, Yelahanka, Bangalore-560064, India
V Arulkumar, Assistant Professor, Department of Computer Science and Applications, Reva University, Yelahanka, Bangalore-560064, India
Data mining and knowledge discovery have been buzzwords for the last two decades, and data mining is now a trending field in data analytics and knowledge representation. This research article presents a novel approach to applying data mining in the agricultural domain, particularly in the areas of soil classification and soil fertility. As data analysis becomes part of everyday life, farmers too can be empowered with knowledge-oriented techniques to yield better crops. Soil sampling and analysis were carried out in Bidar District, Karnataka. Data pre-processing is a challenging and critical step in the data mining process and directly affects its success. Many pre-processing techniques exist, but selecting the one best suited to agricultural datasets is a vital computational challenge that must be addressed so that the knowledge in soil sampling and analysis is well represented.
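The article names no specific pre-processing tools; the sketch below shows one typical pipeline for tabular soil data, assuming scikit-learn and pandas. The column names, units, and toy values are invented for illustration.

```python
# Illustrative pre-processing pipeline for tabular soil data.
# Column names and the dataset are hypothetical; the paper does not
# prescribe these specific steps or tools.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({                     # toy soil samples (hypothetical)
    "pH":       [6.8, 7.2, None, 5.9],
    "nitrogen": [240, 310, 280, None],  # kg/ha
    "texture":  ["loam", "clay", "loam", "sandy"],
})

numeric = ["pH", "nitrogen"]
categorical = ["texture"]

pre = ColumnTransformer([
    # fill gaps with the median, then bring features to a common scale
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    # encode soil texture classes as indicator columns
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

X = pre.fit_transform(df)
print(X.shape)   # (4, numeric columns + one-hot columns)
```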
Software is essential for the bulk of research today. It appears in the research cycle as infrastructure (both inputs and outputs: software obtained from others before the research is performed and software provided to others after the research is complete), as well as being part of the research itself (e.g., new software development). To measure and give credit for software contributions, the simplest path appears to be to overload the current paper citation system so that it can also support citations of software. A multidisciplinary working group built a set of principles for software citation in late 2016. Now, in ACAT 2017 and its proceedings, we want to experimentally encourage those principles to be followed, both to provide credit to the software developers and maintainers in the ACAT community and to try out the process, potentially finding flaws and places where it needs to be improved.
Authors:
P. Ramya, Ph.D. Research Scholar, Department of Computer Applications, Bharathiar University, Coimbatore 641046, India
R. Rajeswari, Assistant Professor, Department of Computer Applications, Bharathiar University, Coimbatore 641046, India
Background subtraction is one of the most important steps in video surveillance and is used in a number of real-life applications such as surveillance, human-machine interaction, optical motion capture, and intelligent visual observation of animals and insects. It is a preliminary stage that differentiates foreground objects from the relatively stationary background. Normally, a pixel is considered foreground if its value differs sufficiently from its value in the reference image, so every pixel has to be compared to separate foreground from background pixels. This paper presents a technique that improves the frame difference method by first classifying blocks in the frame as background or otherwise using the correlation coefficient. Further refinement is performed through pixel-level classification of the blocks not classified as background. Experiments conducted on standard datasets show that the performance measures yield good results under some critical conditions.
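A simplified sketch of the two-stage idea described above: blocks whose correlation with the reference background is high are labeled background outright, and only the remaining blocks are refined pixel by pixel. The block size, thresholds, and synthetic frames are assumptions, not the paper's parameters.

```python
# Illustrative sketch of block-level background classification followed by
# pixel-level refinement. Block size, thresholds, and array shapes are
# assumptions, not the paper's actual values.
import numpy as np

def classify_blocks(frame, background, block=16, corr_thresh=0.95, diff_thresh=25):
    """Return a boolean foreground mask for a grayscale frame."""
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            fb = frame[y:y+block, x:x+block].astype(float).ravel()
            bb = background[y:y+block, x:x+block].astype(float).ravel()
            # highly correlated blocks are taken to be background
            if fb.std() > 0 and bb.std() > 0:
                r = np.corrcoef(fb, bb)[0, 1]
            else:
                r = 1.0 if np.allclose(fb, bb) else 0.0
            if r < corr_thresh:
                # refine only the undecided blocks, pixel by pixel
                diff = np.abs(fb - bb).reshape(block, block)
                mask[y:y+block, x:x+block] = diff > diff_thresh
    return mask

rng = np.random.default_rng(0)
bg = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
fr = bg.copy()
fr[20:36, 20:36] = 255                  # synthetic moving object
print(classify_blocks(fr, bg).sum())    # count of foreground pixels
```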
Epilepsy is a critical brain disorder that can be detected through signals captured from the brain. Electroencephalography (EEG) is an efficient method for capturing such signals. K-nearest neighbor (kNN) is one of the simplest methods for classifying epilepsy patterns. Distinguishing epileptic signals from normal patterns is based primarily on features extracted from the brain signals. This paper discusses statistical linear feature extraction methods such as root mean square, variance, and linear prediction coefficients. It also examines the influence of decision rules, such as the consensus and majority rules, on the classification of an epilepsy dataset. Results show that classification improves as the value of k increases.
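A minimal sketch of the described pipeline, assuming RMS and variance as the extracted features and interpreting the consensus rule as requiring all k neighbors to agree (returning no decision otherwise); the synthetic EEG-like signals and parameter values are placeholders, not the paper's data.

```python
# Illustrative sketch: RMS and variance features from signal windows,
# classified with kNN under majority and consensus decision rules.
# The synthetic signals and parameters are placeholders.
import numpy as np

def features(window):
    return [np.sqrt(np.mean(window**2)), np.var(window)]  # RMS, variance

def knn_predict(train_X, train_y, x, k=5, rule="majority"):
    dists = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(dists)[:k]]
    if rule == "consensus":                 # all k neighbors must agree
        return int(votes[0]) if np.all(votes == votes[0]) else None
    counts = np.bincount(votes)             # majority rule
    return int(np.argmax(counts))

rng = np.random.default_rng(1)
normal = [rng.normal(0, 1.0, 256) for _ in range(40)]   # low-amplitude signals
seizure = [rng.normal(0, 3.0, 256) for _ in range(40)]  # high-amplitude signals
X = np.array([features(w) for w in normal + seizure])
y = np.array([0] * 40 + [1] * 40)

test = features(rng.normal(0, 3.0, 256))
print(knn_predict(X, y, test, k=5, rule="majority"))    # expected: 1
print(knn_predict(X, y, test, k=5, rule="consensus"))   # 1, or None if split
```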
Authors:
S. Suriya, Assistant Professor, Department of Computer Science and Engineering, Velammal College of Engineering and Technology, Madurai, Tamilnadu, India
S. P. Shantharajah, Professor, Department of Master of Computer Applications, Sona College of Technology, Salem, Tamilnadu, India
ISBN (print): 9781467349215
Data mining involves the analysis, extraction, refinement, and representation of required data from large databases. The kDCI (k Direct Count and Intersect) algorithm is one of the most scalable algorithms for identifying frequent items in huge data repositories. It uses a special compressed data structure that simplifies mining of the datasets. The Apriori algorithm, a realization of frequent pattern mining, is widely adopted for reliable mining and is based on two parameters, support and confidence. In this work, the kDCI algorithm is hybridized with the Apriori algorithm to obtain better performance than either achieves individually. The results demonstrate improved scalability and mining speed.
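For reference, a minimal Apriori sketch showing how the support and confidence parameters drive frequent-itemset and rule generation; the toy transactions and thresholds are invented, and kDCI's compressed counting structure is not reproduced here.

```python
# Minimal Apriori sketch illustrating the support and confidence
# parameters named in the abstract. Transactions are toy data.
from itertools import combinations

transactions = [{"milk", "bread"}, {"milk", "bread", "eggs"},
                {"bread", "eggs"}, {"milk", "eggs"}]
min_support, min_confidence = 0.5, 0.6
n = len(transactions)

def support(itemset):
    return sum(itemset <= t for t in transactions) / n

# Level-wise search: extend surviving k-itemsets to (k+1)-itemsets.
items = sorted({i for t in transactions for i in t})
frequent, k_sets = {}, [frozenset([i]) for i in items]
while k_sets:
    survivors = [s for s in k_sets if support(s) >= min_support]
    frequent.update({s: support(s) for s in survivors})
    atoms = sorted({i for s in survivors for i in s})
    size = len(next(iter(k_sets)))
    k_sets = [frozenset(c) for c in combinations(atoms, size + 1)]

# Rules A -> B with confidence = support(A u B) / support(A).
for s in (f for f in frequent if len(f) > 1):
    for r in range(1, len(s)):
        for a in map(frozenset, combinations(s, r)):
            conf = frequent[s] / frequent[a]
            if conf >= min_confidence:
                print(set(a), "->", set(s - a), f"conf={conf:.2f}")
```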
Authors:
R. Eswari, Assistant Professor, Department of Computer Applications, National Institute of Technology, Trichy, Tamilnadu, India
S. Nickolas, Associate Professor, Department of Computer Applications, National Institute of Technology, Trichy, Tamilnadu, India
Effective scheduling of a distributed application is one of the major issues in distributed computing systems, since scheduling algorithms play an important role in achieving good performance. In this paper we propose a static Expected Completion Time based Scheduling (ECTS) algorithm to schedule application tasks effectively onto heterogeneous processors. The algorithm focuses on minimizing the application execution time and consists of two phases: first, the order of execution of the tasks is computed in the task prioritization phase; second, the ordered tasks are assigned to the available processors in the processor selection phase. Compared with the existing HEFT and PETS algorithms, the proposed ECTS algorithm yields a shorter schedule length.
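A simplified two-phase list-scheduling sketch in the spirit of ECTS: tasks are ordered by a priority value, then each is placed on the processor giving the earliest completion time. The priority rule (mean expected cost), the cost matrix, and the task DAG are invented for illustration, and communication costs are ignored.

```python
# Illustrative two-phase list scheduling: (1) prioritize tasks, (2) assign
# each to the processor with the earliest completion time. The priority
# function, cost matrix, and DAG are assumptions, not the paper's definitions.
import numpy as np

cost = np.array([[14, 16, 9],   # cost[task][processor], 3 heterogeneous CPUs
                 [13, 19, 18],
                 [11, 13, 19],
                 [ 7, 15, 11]])
deps = {0: [], 1: [0], 2: [0], 3: [1, 2]}   # task DAG: 3 depends on 1 and 2

# Phase 1: task prioritization (here: by mean expected cost, honoring deps).
order, done = [], set()
while len(order) < len(cost):
    ready = [t for t in range(len(cost))
             if t not in done and all(d in done for d in deps[t])]
    t = max(ready, key=lambda t: cost[t].mean())  # assumed priority rule
    order.append(t); done.add(t)

# Phase 2: processor selection by earliest completion time.
proc_free = np.zeros(cost.shape[1])   # when each processor becomes free
finish = {}                           # task -> completion time
for t in order:
    ready_at = max((finish[d] for d in deps[t]), default=0.0)
    completion = np.maximum(proc_free, ready_at) + cost[t]
    p = int(np.argmin(completion))    # pick the processor finishing earliest
    finish[t] = completion[p]
    proc_free[p] = finish[t]

print("schedule length:", max(finish.values()))
```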