ISBN: (print) 9780769550886
In online social networks, the social influence of a user reflects his or her reputation or importance in the whole network or with respect to a personalized user. Social influence analysis can be used in many real applications, such as link prediction, friend recommendation, and personalized search. Personalized PageRank, which ranks nodes according to the probabilities that a random walk starting from a personalized node stops at each node, is one of the most popular metrics for influence analysis. In this paper, we study the problem of inverse influence in online social networks. In contrast to Personalized PageRank, the inverse influence for a personalized node ranks all nodes according to the probabilities that walks started from them stop at the personalized node within a limited number of steps. We propose two computation models for inverse influence: a random-walk-based model and a path-based model. Both models have high computational complexity and cannot be applied to large graphs, so we propose a Monte Carlo based approximation algorithm. Experiments on synthetic and real-world datasets show that our algorithm achieves accuracy equivalent to or better than related work on link prediction, and can therefore be used for friend recommendation in online social networks.
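The abstract does not reproduce the paper's exact Monte Carlo procedure, so the following is only a minimal sketch of a bounded random-walk estimator of this kind; the adjacency-list graph format, the per-step stop probability alpha, the walk budget, and all function names are illustrative assumptions rather than the authors' implementation.

```python
import random

def walk_reaches_target(graph, start, target, max_steps, alpha=0.15):
    """One bounded random walk on an adjacency-list graph.

    Returns True if the walk arrives at `target` within `max_steps`
    moves; `alpha` is a per-step stop probability in the style of
    PageRank walks (both parameters are assumptions, not the paper's).
    """
    current = start
    for _ in range(max_steps):
        if current == target:
            return True
        neighbours = graph.get(current, [])
        if not neighbours or random.random() < alpha:
            return False  # dangling node or the walk stops early
        current = random.choice(neighbours)
    return current == target

def estimate_inverse_influence(graph, target, num_walks=1000, max_steps=10):
    """Monte Carlo estimate, for every node v, of the probability that a
    walk started at v stops at `target` within `max_steps` steps."""
    return {
        v: sum(walk_reaches_target(graph, v, target, max_steps)
               for _ in range(num_walks)) / num_walks
        for v in graph
    }

# Toy directed graph: node -> list of out-neighbours.
g = {1: [2, 3], 2: [3], 3: [1], 4: [1, 2]}
print(estimate_inverse_influence(g, target=3, num_walks=2000))
```

With enough walks per starting node, the empirical hit frequency converges to the probability that a walk of at most max_steps steps ends at the personalized node, which is the quantity an inverse-influence ranking of this kind would sort by.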
ISBN: (print) 9780769550886
Data mining (DM) techniques have developed in tandem with the telecommunications market. They are designed to analyze communication behaviors to enable personalized services and reduce customer churn. The major DM process uses data exploration technology to extract data, create predictive models using decision trees, and test and verify the stability and effectiveness of the models. The K-means method segments customers into clusters based on billing, loyalty, and payment behaviors to create decision-tree-based models. Determining the number of clusters k in a data set with limited prior knowledge of the appropriate value is a common problem that is distinct from solving data clustering issues. Several categories of methods exist for deciding the value of k, but the optimal choice maximally compresses the data inside each cluster while accurately assigning every observation to its own cluster. This paper presents a parallel approach for accelerating the determination of k over n observations. We introduce two methods for selecting the initial centroids that save computation iterations in K-means clustering: 1) carrying centroids forward; 2) minimum impact. Both approaches are designed to expedite K-means computation and the identification of k.
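The abstract names the two seeding heuristics but does not spell them out; the sketch below shows one plausible reading of "carrying centroids forward", assuming it means reusing the converged centroids of the k-cluster run as the initialization of the (k+1)-cluster run so that each successive run starts close to a solution. The NumPy/scikit-learn code, the farthest-point rule for the extra seed, and all names are assumptions for illustration, not the paper's algorithm (which is also parallelized, unlike this sequential sketch).

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_sweep_carry_forward(X, k_max, random_state=0):
    """Fit K-means for k = 1..k_max, seeding each run with the previous
    run's converged centroids plus one extra seed (the sample farthest
    from its nearest existing centroid).  Returns {k: fitted KMeans}.
    """
    models = {}
    centroids = X.mean(axis=0, keepdims=True)   # single centroid for k = 1
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, init=centroids, n_init=1,
                    random_state=random_state).fit(X)
        models[k] = km
        # Carry the converged centroids forward and add one new seed:
        # the point with the largest distance to its nearest centroid.
        dists = np.linalg.norm(
            X[:, None, :] - km.cluster_centers_[None, :, :], axis=2).min(axis=1)
        new_seed = X[np.argmax(dists)][None, :]
        centroids = np.vstack([km.cluster_centers_, new_seed])
    return models

# Example: inspect the inertia curve and pick k at its elbow.
X = np.random.RandomState(0).rand(500, 4)
models = kmeans_sweep_carry_forward(X, k_max=8)
for k, km in models.items():
    print(k, round(km.inertia_, 3))
```

A sweep like this yields one inertia value per candidate k, from which k can be chosen by the usual elbow or compression criterion the abstract alludes to, while each run after the first converges in fewer iterations because its starting centroids are already near-optimal.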
Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judging the performance and costs of individual protocols in large-scale networks. In order to support this work, we enh...
ISBN: (digital) 9783642544200
ISBN: (print) 9783642544194
This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 19th International Conference on Parallel Computing, Euro-Par 2013, held in Aachen, Germany, in August 2013. The 99 papers presented were carefully reviewed and selected from 145 submissions. The papers come from seven workshops that have been co-located with Euro-Par in previous years:
- BigDataCloud (Second Workshop on Big Data Management in Clouds)
- HeteroPar (11th Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms)
- HiBB (Fourth Workshop on High Performance Bioinformatics and Biomedicine)
- OMHI (Second Workshop on On-chip Memory Hierarchies and Interconnects)
- PROPER (Sixth Workshop on Productivity and Performance)
- Resilience (Sixth Workshop on Resiliency in High Performance Computing with Clusters, Clouds, and Grids)
- UCHPC (Sixth Workshop on UnConventional High Performance Computing)
as well as six newcomers:
- DIHC (First Workshop on Dependability and Interoperability in Heterogeneous Clouds)
- FedICI (First Workshop on Federative and Interoperable Cloud Infrastructures)
- LSDVE (First Workshop on Large Scale Distributed Virtual Environments on Clouds and P2P)
- MHPC (Workshop on Middleware for HPC and Big Data Systems)
- PADABS (First Workshop on Parallel and Distributed Agent Based Simulations)
- ROME (First Workshop on Runtime and Operating Systems for the Many-core Era)
All these workshops focus on the promotion and advancement of all aspects of parallel and distributed computing.