Nano-CT (computerized tomography) has emerged as a non-destructive, high-resolution cross-sectional imaging technique for effectively studying the sub-μm pore structure of shale, which is of fundamental importance to the evaluation and development of shale oil and gas. Nano-CT poses unique challenges for the inverse problem of reconstructing the 3D structure due to the lower signal-to-noise ratio (compared with micro-CT) at the nano-scale, increased sensitivity to geometric misalignment caused by movement of the object manipulator, limited sample size, and the larger volume of data at higher resolution. We propose a scalable double regularization (SDR) method that utilizes the entire dataset for simultaneous 3D structural reconstruction across slices, through total variation regularization within slices and L1 regularization between adjacent slices. SDR allows information borrowing both within and between slices, in contrast with traditional methods that usually build on slice-by-slice reconstruction. We develop a scalable and memory-efficient algorithm by exploiting the systematic sparsity and consistent geometry induced by such Nano-CT data. We illustrate the proposed method using synthetic data and two Nano-CT imaging datasets of Jiulaodong (JLD) shale and Longmaxi (LMX) shale acquired in the Sichuan Basin. These numerical experiments show that the proposed method substantially outperforms selected alternatives both visually and quantitatively.
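Based on the abstract's description, the SDR objective plausibly takes the following form; the notation and penalty weights below are assumptions for illustration, not taken from the paper. With $x_s$ the image of slice $s$, $A$ the (consistent-geometry) projection operator, and $b_s$ the measured projections of slice $s$:

```latex
\min_{x_1,\dots,x_S}\;\sum_{s=1}^{S}\|A x_s - b_s\|_2^2
  \;+\; \lambda_{\mathrm{TV}}\sum_{s=1}^{S}\mathrm{TV}(x_s)
  \;+\; \lambda_{1}\sum_{s=1}^{S-1}\|x_{s+1}-x_s\|_1
```

The TV term enforces piecewise smoothness within each slice, while the L1 term couples adjacent slices so that information is borrowed between them.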
Community detection has arisen as one of the most relevant topics in the field of graph data mining due to its applications in many fields, such as biology, social networks, and network traffic analysis. Although the existing metrics used to quantify the quality of a community work well in general, under some circumstances they fail to capture that notion correctly. The main reason is that these metrics consider the internal community edges as a set but ignore how those edges actually connect the vertices of the community. We propose Weighted Community Clustering (WCC), a new community metric that takes the triangle, instead of the edge, as the minimal structural motif indicating the presence of a strong relation in a graph. We theoretically analyse WCC in depth and formally prove, by means of a set of properties, that maximizing WCC guarantees communities with both cohesion and structure. In addition, we propose Scalable Community Detection (SCD), a community detection algorithm based on WCC that is designed to be fast and scalable on SMP machines, and we show experimentally on real datasets that WCC correctly captures the concept of community in social networks. Finally, using ground-truth data, we show that SCD provides better quality than the best state-of-the-art disjoint community detection algorithms while running faster.
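To make the triangle-as-motif idea concrete, here is a toy Python sketch of a triangle-based cohesion score in the spirit of WCC. The ratio used below (a vertex's triangles closed inside its community versus all of its triangles) is a simplification of ours for illustration, not the paper's exact WCC definition.

```python
from itertools import combinations

def triangles_within(v, vertices, adj):
    """Count triangles through v whose other two endpoints lie in `vertices`."""
    nbrs = adj[v] & vertices
    return sum(1 for u, w in combinations(nbrs, 2) if w in adj[u])

def cohesion_score(community, adj):
    """Toy triangle-based cohesion: average fraction of each member's
    triangles that are closed entirely inside the community.
    (Illustrative only -- not the paper's exact WCC definition.)"""
    community = set(community)
    scores = []
    for v in community:
        total = triangles_within(v, set(adj), adj)    # triangles through v in the whole graph
        inside = triangles_within(v, community, adj)  # triangles through v inside the community
        scores.append(inside / total if total else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

# Tiny example: a 4-clique community plus one pendant vertex outside it.
adj = {
    1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4, 5},
    4: {1, 2, 3}, 5: {3},
}
print(cohesion_score({1, 2, 3, 4}, adj))  # 1.0: every member's triangles are internal
```

An edge-counting metric would reward the community for any dense bundle of edges; the triangle-based score only rewards edges that actually close strong, transitive relations.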
How can we find patterns in an enormous graph with billions of vertices and edges? Subgraph enumeration, which finds patterns in a graph, is an important task for graph data analysis with many applications, including analyzing social network evolution, measuring the significance of motifs in biological networks, observing the dynamics of the Internet, and so on. In particular, triangle enumeration, the special case of subgraph enumeration where the pattern is a triangle, has many applications such as identifying suspicious users in social networks, detecting web spam, and finding communities. However, recent networks are so large that most previous algorithms fail to process them. Several MapReduce algorithms have been proposed to address such large networks; however, they suffer from massive shuffled data, resulting in very long processing times. In this article, we propose scalable methods for enumerating trillions of subgraphs on distributed systems. We first propose PTE (Pre-partitioned Triangle Enumeration), a new distributed algorithm for enumerating triangles in enormous graphs that resolves the structural inefficiency of previous MapReduce algorithms. PTE enumerates trillions of triangles in a billion-scale graph by decreasing three factors: the amount of shuffled data, total work, and network read. We also propose PSE (Pre-partitioned Subgraph Enumeration), a generalized version of PTE for enumerating subgraphs that match an arbitrary query graph. Experimental results show that PTE is 79 times faster than recent distributed algorithms on real-world graphs, and succeeds in enumerating more than 3 trillion triangles on the ClueWeb12 graph with 6.3 billion vertices and 72 billion edges. Furthermore, PSE successfully enumerates 265 trillion clique subgraphs with 4 vertices from a subdomain hyperlink network, performing 47 times faster than the state-of-the-art distributed subgraph enumeration algorithms.
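For context on the enumeration kernel itself, a standard in-memory technique is to orient each edge from its lower-degree endpoint to its higher-degree endpoint and intersect out-neighbor sets, which lists every triangle exactly once. The Python sketch below shows only that generic kernel; PTE's distributed pre-partitioning, which is the paper's actual contribution, is not reproduced here.

```python
from collections import defaultdict

def enumerate_triangles(edges):
    """Yield each triangle exactly once: orient every edge from its
    lower-degree endpoint to its higher-degree endpoint (ties broken by
    vertex id), then intersect out-neighbor sets. Roughly O(m^1.5) work."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    rank = lambda v: (deg[v], v)          # total order on vertices
    out = defaultdict(set)
    for u, v in edges:
        lo, hi = (u, v) if rank(u) < rank(v) else (v, u)
        out[lo].add(hi)
    for u in list(out):
        for v in out[u]:
            for w in out[u] & out.get(v, set()):
                yield (u, v, w)

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
print(list(enumerate_triangles(edges)))   # two triangles: {0,1,2} and {1,2,3}
```

The degree-based orientation keeps out-neighbor sets small even on skewed degree distributions, which is why variants of this kernel appear inside each partition of distributed enumerators.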
Modelling interactions on complex networks requires efficient algorithms for describing processes at a detailed level within the network structure. Such modelling enables more realistic applications of spreading processes, network metrics, and community analyses. However, different real-world processes may impose different requirements on implementations and their efficiency. We discuss different transmission and spreading processes and their interrelations. Two pseudo-algorithms are presented: one for the complex contagion spreading mechanism, modelled using non-self-avoiding paths, and one for simple contagion processes, modelled using self-avoiding paths. The first algorithm is an efficient implementation that can be used to describe social interaction in a social network structure. The second algorithm is a less efficient implementation for describing specific forms of information transmission and epidemic spreading.
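A minimal Python illustration of the two mechanisms the abstract contrasts; the update rules and parameters below are generic textbook choices (an SI-style cascade and a neighbor-count threshold model), not the paper's pseudo-algorithms.

```python
import random
from collections import deque

def simple_contagion(adj, seed, p=0.3, rng=None):
    """SI-style spread: each infected node tries each neighbor once, so a
    single exposure can transmit and transmission follows self-avoiding
    paths (no node is infected twice)."""
    rng = rng or random.Random(0)
    infected, queue = {seed}, deque([seed])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in infected and rng.random() < p:
                infected.add(v)
                queue.append(v)
    return infected

def complex_contagion(adj, seeds, threshold=2):
    """Threshold model: a node activates only once it has at least
    `threshold` active neighbors, so repeated exposures (non-self-avoiding
    reinforcement) matter."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and sum(u in active for u in adj[v]) >= threshold:
                active.add(v)
                changed = True
    return active

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
print(simple_contagion(adj, seed=0))      # probabilistic cascade from node 0
print(complex_contagion(adj, seeds={0, 1}))  # {0, 1, 2, 3}: node 4 never reaches the threshold
```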
ISBN (print): 9781581135671
Covariance and correlation estimates have important applications in data mining. In the presence of outliers, classical estimates of covariance and correlation matrices are not reliable. A small fraction of outliers, in some cases even a single outlier, can distort the classical covariance and correlation estimates, making them virtually useless. That is, correlations for the vast majority of the data can be reported very erroneously; principal components transformations can be misleading; and multidimensional outlier detection via Mahalanobis distances can fail to detect outliers. There is a substantial statistical literature on robust covariance and correlation matrix estimates, with an emphasis on affine-equivariant estimators that possess high breakdown points and small worst-case biases. All such estimators have unacceptable exponential complexity in the number of variables and quadratic complexity in the number of observations. In this paper we focus on several variants of robust covariance and correlation matrix estimates with quadratic complexity in the number of variables and linear complexity in the number of observations. These estimators are based on several forms of pairwise robust covariance and correlation estimates. The estimators studied include two fast estimators based on coordinate-wise robust transformations embedded in an overall procedure recently proposed by [14]. We show that the estimators have attractive robustness properties, and we give an example that uses one of the estimators in the new Insightful Miner data mining product.
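To illustrate the coordinate-wise-transformation idea at this complexity, here is a generic Python sketch: robustly standardize each variable with median/MAD, huberize the scores to bound outlier influence, and take the classical covariance of the transformed data. This is a simplified stand-in of ours, not the exact procedure of [14].

```python
import numpy as np

def robust_pairwise_cov(X, c=2.0):
    """Pairwise robust covariance via a coordinate-wise robust transform:
    median/MAD standardization, huberization (clipping at +/- c), then the
    classical covariance of the transformed data. Quadratic in the number
    of variables, linear in the number of observations.
    Generic illustration -- not the exact estimator of [14];
    assumes no variable has zero MAD."""
    X = np.asarray(X, dtype=float)
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) * 1.4826  # consistent at the normal
    Z = np.clip((X - med) / mad, -c, c)                # huberized scores
    return np.cov(Z, rowvar=False)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X[:10] += 50.0  # plant a few gross outliers
print(np.round(robust_pairwise_cov(X), 2))  # stays near the identity despite the outliers
```

The classical sample covariance of the same data would be dominated by the ten planted outliers, which is exactly the failure mode the abstract describes.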