Authors:
T. Ramraj, Assistant Professor, Department of Computer Science and Engineering, Coimbatore Institute of Technology, Coimbatore - 641014, India
R. Prabhakar, Emeritus Professor, Department of Computer Science and Engineering, Coimbatore Institute of Technology, Coimbatore - 641014, India
Graphs are common data structures used to represent and model real-world systems. Graph mining is a branch of data mining in which voluminous, complex data are represented as graphs and mined to infer knowledge from them. Frequent subgraph mining is a subdomain of graph mining that is widely used for graph classification, index construction, and graph clustering. Frequent subgraph mining has been addressed from various perspectives and in different directions, depending on the expectations of the domain. In this paper, a survey is presented of approaches for targeting frequent subgraphs and of scalable techniques for finding them.
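As a minimal illustration of the support counting that frequent subgraph miners build on, the sketch below checks each candidate pattern against a small graph database by subgraph isomorphism. The toy graphs, the candidate set and the use of networkx are assumptions for illustration; practical miners such as gSpan grow patterns canonically rather than testing candidates by brute force.

```python
# A minimal sketch of frequent-subgraph support counting over a toy graph
# database. Not the survey's own algorithm; for illustration only.
import networkx as nx
from networkx.algorithms import isomorphism

def support(pattern, graph_db):
    """Number of database graphs that contain `pattern` as a subgraph."""
    count = 0
    for g in graph_db:
        matcher = isomorphism.GraphMatcher(g, pattern)
        if matcher.subgraph_is_isomorphic():
            count += 1
    return count

def frequent_subgraphs(candidates, graph_db, min_support):
    """Keep the candidate patterns whose support meets the threshold."""
    return [p for p in candidates if support(p, graph_db) >= min_support]

# Toy usage: two triangles and one path; the triangle pattern is frequent.
g1, g2, g3 = nx.cycle_graph(3), nx.cycle_graph(3), nx.path_graph(4)
triangle = nx.cycle_graph(3)
print(len(frequent_subgraphs([triangle], [g1, g2, g3], min_support=2)))  # -> 1
```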
Cloud architecture is used to maintain personal health records and to provide symptom-based treatment to patients. A patient's details need to be stored in a secure manner. The idea is to create a cloud storage server for long-term storage over the Internet; the storage server acts as a database server. Uploaded data are stored in the cloud server through a proxy re-encryption method. A secure threshold proxy re-encryption scheme is integrated with a decentralized erasure code to build a secure distributed storage system. A proxy re-encryption key is generated for one-time data access, and a proxy server is created virtually for that access. Symptom-based treatment with a secure personal health record in cloud storage can be achieved by applying the proposed encryption algorithm.
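To make the proxy re-encryption flow concrete, the following toy sketch implements an ElGamal-style re-encryption in the spirit of schemes such as BBS98: the cloud proxy transforms the patient's ciphertext for a one-time reader without ever learning the plaintext. The tiny group parameters, key names and single-proxy setting are illustrative assumptions; the paper's threshold, erasure-coded construction is not reproduced here.

```python
# Toy ElGamal-style proxy re-encryption; deliberately tiny and insecure,
# used only to show how a proxy re-keys a ciphertext without decrypting it.
import secrets

P, Q, G = 23, 11, 4   # toy group: G generates the order-11 subgroup of Z_23*

def keygen():
    sk = secrets.randbelow(Q - 1) + 1           # secret a in [1, Q-1]
    return sk, pow(G, sk, P)                    # public key g^a

def encrypt(pk, m):
    r = secrets.randbelow(Q - 1) + 1
    return (m * pow(G, r, P) % P,               # c1 = m * g^r
            pow(pk, r, P))                      # c2 = (g^a)^r

def rekey(sk_from, sk_to):
    return sk_to * pow(sk_from, -1, Q) % Q      # rk = b / a  (mod Q)

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, P))                 # c2^rk = g^{b*r}

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, Q), P)            # (g^{sk*r})^{1/sk} = g^r
    return c1 * pow(g_r, -1, P) % P             # m = c1 / g^r

# Patient (key a) stores an encrypted record; the proxy re-encrypts it once
# for the doctor (key b) using only the re-encryption key, never the plaintext.
a, pk_a = keygen()
b, pk_b = keygen()
record = pow(G, 7, P)                           # group element standing in for the data
ct = encrypt(pk_a, record)
ct_for_doctor = reencrypt(rekey(a, b), ct)
assert decrypt(b, ct_for_doctor) == record
```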
Surveillance video is characterized by large amount of data and redundancy, which makes the suspicious face detection to be a problem. To solve the problem above we proposed suspicious face detection based on key fram...
Authors:
M. Surya, PG Student, Computer Science and Engineering, Coimbatore Institute of Technology, Coimbatore - 641014, India
N. Anithadevi, Assistant Professor, Computer Science and Engineering, Coimbatore Institute of Technology, Coimbatore - 641014, India
Single Sign-On (SSO) is an authentication mechanism that enables a legitimate user with a single credential to be authenticated by multiple service providers in a distributed computer network. SSO obtains credentials from trusted authorities, i.e., the Smart Card Producing Center (SCPC) and Trusted Credential Privacy (TCP), which are used for mutual authentication and authorization of legitimate users. Chang and Lee proposed a new SSO scheme that uses the SCPC for mutual authentication and session key establishment, whereas the Schnorr signature uses the TCP to generate and verify the signature for user authentication. The RSA algorithm and Attribute-Based Encryption (ABE) are used for encryption and decryption of messages, with ABE tending to be more efficient than the RSA-based algorithm. ABE is a public-key, one-to-many encryption scheme that allows users to decrypt a message based on a set of attributes and access policies. Decryption is an expensive process, and this ABE system eliminates the decryption overhead through outsourced decryption. Data integrity is maintained by verification of the ciphertext, which guarantees that the encrypted and decrypted files are the same; the original message is recovered using a hash technique in case of any modification to the file.
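The Schnorr signature step mentioned above can be illustrated with a minimal sign/verify sketch over a toy group. The parameters (p, q, g), the hash construction and the message are illustrative assumptions, not the Chang-Lee scheme itself.

```python
# Minimal Schnorr signing/verification over a toy subgroup; a real deployment
# would use standardised group parameters and constant-time code.
import hashlib, secrets

P, Q, G = 23, 11, 4          # toy group: G generates the order-11 subgroup of Z_23*

def H(r, msg):
    h = hashlib.sha256(f"{r}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)                    # (private x, public y = g^x)

def sign(x, msg):
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)                          # commitment g^k
    e = H(r, msg)
    s = (k + x * e) % Q
    return e, s

def verify(y, msg, sig):
    e, s = sig
    r_v = pow(G, s, P) * pow(y, -e, P) % P    # g^s * y^{-e} = g^k
    return H(r_v, msg) == e

x, y = keygen()
assert verify(y, "login-request", sign(x, "login-request"))
```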
ISBN:
(Digital) 9781608458585
Online social networks have already become a bridge connecting our physical daily life with the (web-based) information space. This connection produces a huge volume of data, not only about the information itself, but also about user behavior. The ubiquity of the social Web and the wealth of social data offer us unprecedented opportunities for studying the interaction patterns among users so as to understand the dynamic mechanisms underlying different networks, something that was previously difficult to explore due to the lack of available data. In this book, we present the architecture of the research for social network mining, from a microscopic point of view. We focus on investigating several key issues in social networks. Specifically, we begin with analytics of social interactions between users. The first kinds of questions we try to answer are: What are the fundamental factors that form the different categories of social ties? How have reciprocal relationships been developed from parasocial relationships? How do connected users further form groups? Another theme addressed in this book is the study of social influence. Social influence occurs when one's opinions, emotions, or behaviors are affected by others, intentionally or unintentionally. Considerable research has been conducted to verify the existence of social influence in various networks. However, few studies in the literature address how to quantify the strength of influence between users from different aspects. In Chapter 4 and in [138], we have studied how to model and predict user behaviors. One fundamental problem is distinguishing the effects of different social factors such as social influence, homophily, and an individual's characteristics. We introduce a probabilistic model to address this problem. Finally, we use an academic social network, ArnetMiner, as an example to demonstrate how we apply the introduced technologies for mining real social networks. In this system, we try to mine knowledge from both
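As a small, self-contained illustration of the first analytic question above, the snippet below separates reciprocal ties from one-way (parasocial) ties in a directed follower network; the edge list is invented and the snippet is not taken from the book.

```python
# Classify directed follow edges into reciprocal pairs and one-way ties.
follows = {("alice", "bob"), ("bob", "alice"), ("carol", "alice"), ("alice", "dave")}

reciprocal = {frozenset(e) for e in follows if (e[1], e[0]) in follows}
parasocial = {e for e in follows if (e[1], e[0]) not in follows}

print(f"{len(reciprocal)} reciprocal pair(s), {len(parasocial)} one-way tie(s)")
# -> 1 reciprocal pair(s), 2 one-way tie(s)
```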
Decentralized planning for large teams of robots is challenging, requiring control and coordination of the multi-robot team in settings with noisy sensors and stochastic transition dynamics. Although the Decentralized...
Plagiarism is intellectual theft that consists of passing off someone else's work as your own. Plagiarism has become widespread in many settings, such as institutions and companies. This paper proposes a new technique that uses Semantic Role Labelling (SRL) and sentence ranking for plagiarism detection. Sentence ranking yields suspicious and original sentence pairs by vectorising the documents. The proposed method then analyses and compares the ranked suspicious and original documents based on the semantic role assigned to each term in a sentence using SRL. It was found that applying sentence ranking in the plagiarism detection method decreases checking time.
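A hedged sketch of the sentence-ranking stage is given below: sentences are vectorised with TF-IDF and candidate suspicious/original pairs are ranked by cosine similarity. The sample sentences are invented, and the SRL comparison stage is not reproduced.

```python
# Rank suspicious/original sentence pairs by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

suspicious = ["The quick brown fox jumps over the lazy dog.",
              "Graph mining infers knowledge from graph data."]
original   = ["A fast brown fox leaps over a lazy dog.",
              "Stock prices rose sharply on Monday."]

vec = TfidfVectorizer().fit(suspicious + original)
sims = cosine_similarity(vec.transform(suspicious), vec.transform(original))

# Rank candidate pairs from most to least similar.
pairs = sorted(((sims[i, j], i, j) for i in range(len(suspicious))
                                   for j in range(len(original))), reverse=True)
for score, i, j in pairs[:2]:
    print(f"{score:.2f}  suspicious[{i}] <-> original[{j}]")
```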
ISBN:
(Print) 9781467379113
It is well known that developers and professionals do not agree on the common factors that determine software quality. It is therefore appropriate to apply group decision making to rank software quality factors. The present research considers three decision makers, viz., users, developers and professionals, to decide the ranking of quality factors. In this process, each decision maker provides intuitionistic fuzzy preference values, rather than exact values, for each factor. These preference values are aggregated using an averaging operator and then further aggregated using a weighted arithmetic averaging operator. Finally, the eight considered factors are ranked.
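Assuming the standard intuitionistic fuzzy weighted averaging (IFWA) operator and the usual score function s = mu - nu for ranking, the aggregation step might look like the sketch below; the preference values and weights are invented for illustration, not the paper's data.

```python
# Aggregate intuitionistic fuzzy values (mu, nu) with the IFWA operator.
from math import prod

def ifwa(values, weights):
    """IFWA aggregation of (mu, nu) pairs with weights summing to 1."""
    mu = 1 - prod((1 - m) ** w for (m, _), w in zip(values, weights))
    nu = prod(n ** w for (_, n), w in zip(values, weights))
    return mu, nu

def score(ifv):
    mu, nu = ifv
    return mu - nu

# One quality factor judged by three decision makers (users, developers,
# professionals) with equal weights.
judgements = [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)]
weights = [1 / 3] * 3
agg = ifwa(judgements, weights)
print(agg, score(agg))   # factors are then ranked by this score
```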
Authors:
D. Jagan, PG Scholar, Department of Computer Science and Engineering, Coimbatore Institute of Technology, Coimbatore - 641014, India
A.N. Senthilvel, Assistant Professor (SG), Department of Computer Science and Engineering, Coimbatore Institute of Technology, Coimbatore - 641014, India
R. Prabhakar, Emeritus Professor, Department of Computer Science and Engineering, Coimbatore Institute of Technology, Coimbatore - 641014, India
S. Uma Maheswari, Associate Professor, Department of Electronics and Communication Engineering, Coimbatore Institute of Technology, Coimbatore - 641014, India
In the real world, job scheduling in industry is usually planned without any idle time, which is very tedious. In practice it becomes difficult when a spare part starts to malfunction and has to be replaced in the machine; some idle time is then needed to carry out the change. In this proposed work, some amount of idle time is allotted when scheduling jobs on a single machine. The approach comprises three stages, namely the scheduling strategy, inserting idle time, and optimizing the net penalty value over all jobs.
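One way to picture the third stage is the evaluator below, which takes a job sequence plus the idle time inserted before each job and computes a net penalty as weighted earliness plus tardiness. The job data, penalty weights and earliest-due-date ordering are illustrative assumptions rather than the paper's procedure.

```python
# Evaluate the net penalty of a single-machine schedule with inserted idle time.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    processing: int   # processing time on the single machine
    due: int          # due date
    w_early: int = 1  # earliness penalty per time unit
    w_tardy: int = 2  # tardiness penalty per time unit

def net_penalty(sequence, idle_before):
    t, penalty = 0, 0
    for job, idle in zip(sequence, idle_before):
        t += idle + job.processing             # inserted idle time, then run the job
        penalty += (job.w_early * max(0, job.due - t)
                    + job.w_tardy * max(0, t - job.due))
    return penalty

jobs = [Job("J1", 3, 6), Job("J2", 2, 5), Job("J3", 4, 14)]
edd = sorted(jobs, key=lambda j: j.due)        # earliest-due-date order
print(net_penalty(edd, [0, 0, 0]))             # 9: every job finishes early
print(net_penalty(edd, [0, 1, 4]))             # 3: inserted idle time cuts earliness penalties
```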
The Self-Organizing Map (SOM) is an automatic tool for data analysis in data mining. It is used to explore multi-dimensional data, simplifying its complexity and producing meaningful relations among the data by mapping high-dimensional data into a low-dimensional space. SOM's powerful learning method yields excellent performance. The SOM algorithm involves various steps, from the initial stage to the final neuron, including weight updating and modification; these procedures introduce considerable complexity depending on the parameters, as observed in experiments. This paper compares and discusses various parameters and their results, as well as factors that can improve and refine the image through the various processes of SOM.
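The weight-update step of the SOM can be sketched as follows, assuming a small 2-D grid, toy random inputs and the standard Gaussian-neighbourhood update w <- w + lr * h * (x - w); the grid size, learning rate and sigma are illustrative choices, not the paper's settings.

```python
# One SOM training pass: find the best-matching unit, then pull neighbouring
# neurons' weights toward the input with a Gaussian neighbourhood function.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 3
weights = rng.random((grid_h, grid_w, dim))          # one weight vector per neuron
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def train_step(x, weights, lr=0.5, sigma=1.5):
    # Best-matching unit: neuron whose weight vector is closest to the input.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood around the BMU on the grid.
    grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    # Move every neuron's weights toward the input, scaled by h and lr.
    return weights + lr * h * (x - weights)

for x in rng.random((200, dim)):                     # toy 3-D inputs
    weights = train_step(x, weights)
```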