The Scholarly Database (SDB) at Indiana University aims to serve researchers and practitioners interested in the analysis, modeling, and visualization of large-scale scholarly datasets. This database focuses on supporting large studies of changes in science over time and communicating findings via knowledge-domain visualizations. The database currently provides access to around 18 million publications, patents, and grants, ten percent of which contain full-text abstracts. Except for some datasets with restricted access conditions, the data can be retrieved in raw or pre-processed format using either a web-based or a relational database client. This paper motivates the need for the database from bibliometric and scientometric perspectives (Cronin & Atkins, 2000; White & McCain, 1989). It explains the database design, setup, and interfaces as well as the temporal, geographical, and topic coverage of the datasets currently served. Planned work and the potential for this database to become a global test bed for information science research are discussed.
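A minimal sketch of what retrieval through a relational database client could look like; the connection parameters, table, and column names (publications, pub_year, abstract) are hypothetical assumptions for illustration, not the SDB's actual schema.

```python
# Hypothetical example of pulling records via a relational client;
# host, credentials, table, and columns are assumptions, not SDB's real schema.
import psycopg2

conn = psycopg2.connect(host="example-sdb-host", dbname="sdb",
                        user="guest", password="guest")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT title, pub_year
        FROM publications
        WHERE abstract IS NOT NULL AND pub_year BETWEEN %s AND %s
        LIMIT 10
        """,
        (1990, 2000),
    )
    for title, year in cur.fetchall():
        print(year, title)
conn.close()
```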
ISBN:
(Print) 9783642249570; 9783642249587
Attempts to visualize high-dimensional datasets typically encounter overplotting and a decline in visual comprehension that make knowledge discovery and feature-subset analysis difficult. Hence, reshaping the datasets with a dimensionality reduction technique that removes superfluous attributes is paramount for improving visual analytics. In this work, we applied rough set theory as a dimensionality reduction and feature selection method for visualization to facilitate knowledge discovery in multi-dimensional datasets. We provided a case study using real datasets and a comparison against other methods to demonstrate the effectiveness of our approach.
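A minimal sketch of rough-set feature selection in the greedy QuickReduct style, one common way to compute a reduct; the toy data, attribute names, and decision labels are illustrative assumptions, not the paper's datasets or exact algorithm.

```python
# Greedy QuickReduct sketch: pick attributes that maximize the rough-set
# dependency degree gamma_B(d) until it matches that of the full attribute set.
from collections import defaultdict

def dependency(rows, decisions, attrs):
    """gamma_B(d): fraction of objects whose B-equivalence class is decision-consistent."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].append(i)
    consistent = 0
    for members in classes.values():
        if len({decisions[i] for i in members}) == 1:
            consistent += len(members)
    return consistent / len(rows)

def quick_reduct(rows, decisions, all_attrs):
    """Greedily add the attribute that most increases dependency."""
    target = dependency(rows, decisions, all_attrs)
    reduct, gamma = [], 0.0
    while gamma < target:
        best_attr, best_gamma = None, gamma
        for a in all_attrs:
            if a in reduct:
                continue
            g = dependency(rows, decisions, reduct + [a])
            if g > best_gamma:
                best_attr, best_gamma = a, g
        if best_attr is None:   # no remaining attribute improves dependency
            break
        reduct.append(best_attr)
        gamma = best_gamma
    return reduct

# Illustrative toy data: each row is a dict of categorical attributes.
rows = [{"outlook": "sunny", "windy": "yes"}, {"outlook": "rain", "windy": "no"},
        {"outlook": "sunny", "windy": "no"}, {"outlook": "rain", "windy": "yes"}]
decisions = ["play", "stay", "play", "stay"]
print(quick_reduct(rows, decisions, ["outlook", "windy"]))  # -> ['outlook']
```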
ISBN:
(Print) 9781424473021
The study of urban spatial structure has long been of interest to those concerned with urban studies and spatial data analysis. Urban population spatial structure is an important component of urban spatial structure. This paper explores the change of urban population structure based on methods of exploratory spatial data analysis (ESDA), population density functions, and GIS, and aims to discover the features and change of the urban population structure of Nanjing City since the 1980s. The paper is organized in three sections. In the first section, we reviewed methods for analyzing urban population spatial structure. The sub-center of a city, an important indicator of urban spatial structure, is often identified using threshold values, ESDA, and population density functions based on GIS technology. In the second section, we analyzed the features and change of urban population spatial structure using the methods reviewed above. Firstly, we pictured the features of urban population spatial structure using threshold values and GIS. Secondly, we carried out an ESDA on population density in order to identify possible patterns of spatial heterogeneity and spatial autocorrelation in the study area. Next, we validated the result of the ESDA using population density functions. The third section summarizes the key findings of this paper. The application of this procedure showed that the pattern of the population structure of Nanjing City was a single center in 1982; a sub-center distribution appeared in 1990; and the sub-center pattern strengthened in 2000. A multi-centric distribution of urban population was discovered through the ESDA analysis in 2007. In addition, through the modeling of urban population density, the polycentric population density function was found to perform better than the mono-centric one, which demonstrates that a polycentric pattern of population density distribution emerged in Nanjing City during the period from 1982 to 2007.
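To make the mono-centric versus polycentric comparison concrete, here is a minimal sketch that fits Clark's exponential density model and a two-center extension to synthetic tract data and compares their goodness of fit; the coordinates, center locations, and parameters are illustrative assumptions, not Nanjing data.

```python
# Compare a mono-centric (Clark) fit with an additive two-center fit on
# synthetic population-density data; all values are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
xy = rng.uniform(0, 30, size=(400, 2))           # tract centroids (km)
centers = np.array([[15.0, 15.0], [24.0, 8.0]])  # hypothetical main center and sub-center

def dist(points, c):
    return np.hypot(points[:, 0] - c[0], points[:, 1] - c[1])

# Synthetic "observed" density: two exponential peaks plus noise.
density = (8.0 * np.exp(-0.30 * dist(xy, centers[0]))
           + 4.0 * np.exp(-0.45 * dist(xy, centers[1]))
           + rng.normal(0, 0.2, len(xy)))

r_main, r_sub = dist(xy, centers[0]), dist(xy, centers[1])

def mono(r, d0, b):                   # Clark's model: D(r) = D0 * exp(-b r)
    return d0 * np.exp(-b * r)

def poly(X, d1, b1, d2, b2):          # additive two-center extension
    r1, r2 = X
    return d1 * np.exp(-b1 * r1) + d2 * np.exp(-b2 * r2)

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

p_mono, _ = curve_fit(mono, r_main, density, p0=[5, 0.1])
p_poly, _ = curve_fit(poly, (r_main, r_sub), density, p0=[5, 0.1, 2, 0.1])

print("mono-centric R^2:", round(r2(density, mono(r_main, *p_mono)), 3))
print("polycentric  R^2:", round(r2(density, poly((r_main, r_sub), *p_poly)), 3))
```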
We describe an interview-based data-collection procedure for social network analysis designed to aid gathering information about the people known by a respondent and to reduce problems with data integrity and respondent burden. This procedure, a participant-aided network diagram (sociogram), is an extension of traditional name generators. Although such a diagram can be produced through computer-assisted interviewing programs (CAPIs) and low technology (i.e., paper), we demonstrate both practical and methodological reasons for keeping high technology in the lab and low technology in the field. We provide some general heuristics that can reduce the time needed to complete a name generator. We present findings from our Connected Lives field study to illustrate this procedure and compare it to an alternative method for gathering network data.
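As a rough illustration of the kind of data such a name generator yields, the sketch below assembles a small ego network (sociogram) with networkx; the respondent, alter names, and ties are invented for illustration and do not come from the Connected Lives study.

```python
# Assemble a participant-aided sociogram from hypothetical name-generator responses.
import networkx as nx

ego = "Respondent"
alters = ["Ana", "Ben", "Carla", "Dev"]           # names elicited by the generator
alter_ties = [("Ana", "Ben"), ("Carla", "Dev")]   # ties the respondent reports among alters

g = nx.Graph()
g.add_edges_from((ego, a) for a in alters)        # ego knows every elicited alter
g.add_edges_from(alter_ties)

# Simple size/structure indicators an interviewer might inspect on the spot.
print("network size:", g.number_of_nodes() - 1)
print("density among alters:", nx.density(g.subgraph(alters)))
```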
ISBN:
(Print) 9780387737416
Super-resolution algorithms typically improve the resolution of a video frame by mapping and performing signal processing operations on data from frames immediately preceding and immediately following the frame of interest. However, these algorithms ignore forensic considerations. In particular, the high-resolution video evidence they produce could be challenged on the grounds that it incorporates data or artifacts that were not present in the original recording. This paper presents a super-resolution algorithm that differs from its counterparts in two important respects. First, it is explicitly parameterized, enabling forensic video analysts to tune it to yield higher quality in regions of interest at the cost of degraded quality in other regions. Second, the higher resolution output is only constructed in the final visualization step. This allows the intermediate refinement step to be repeatedly composed without tainting the original data.
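For orientation, the sketch below shows a generic shift-and-add multi-frame super-resolution step, not the paper's parameterized algorithm; the sub-pixel frame shifts are assumed known and the data are synthetic.

```python
# Generic shift-and-add super-resolution: accumulate low-res frames onto an
# upsampled grid using known sub-pixel shifts, then normalize by hit counts.
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Map each low-res sample to a scale-x grid, offset by its (dy, dx) shift."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1)
    # Average contributions; cells never hit stay zero.
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

# Illustrative use: four noisy copies of a frame with small sub-pixel offsets.
rng = np.random.default_rng(1)
base = rng.random((32, 32))
shifts = [(0.0, 0.0), (0.25, 0.0), (0.0, 0.25), (0.25, 0.25)]
frames = [base + rng.normal(0, 0.01, base.shape) for _ in shifts]
print(shift_and_add(frames, shifts).shape)  # (64, 64)
```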
ISBN:
(Print) 9780819471246
We discuss here an improved multidimensional scaling (MDS) algorithm allowing for fast and accurate visualization of multidimensional clusters. Unlike traditional approaches, we use a natural heuristic - an N-body solver - for extracting the global minimum of the multidimensional, multimodal, and nonlinear "stress function". As was shown earlier, the method is very reliable and avoids trapping the solver in local minima. We focus on decreasing the time complexity of the algorithm from Ω(N²) to O(N²) by eliminating from the computations most of the distances, which are irrelevant to reproducing the real cluster structure in low-dimensional spaces. This way we can speed up the MDS algorithm significantly (even by an order of magnitude for large datasets), allowing for interactive immersion into the data by immediate on-screen manipulation of different data representations.
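The sketch below illustrates the general idea of stress-driven MDS that updates points with spring-like (N-body-style) forces computed only over each point's nearest high-dimensional neighbors rather than all N² pairs; it is a simplified stand-in for the authors' solver, and the neighbor count, step size, and data are illustrative assumptions.

```python
# Sparse-neighbor MDS sketch: gradient steps on Kruskal-style stress restricted
# to each point's k nearest high-dimensional neighbors.
import numpy as np

def sparse_mds(X, dim=2, k=10, iters=300, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    # High-dimensional distances; keep only the k nearest neighbors per point.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nbrs = np.argsort(D, axis=1)[:, 1:k + 1]
    Y = rng.normal(scale=1e-2, size=(n, dim))
    for _ in range(iters):
        grad = np.zeros_like(Y)
        for i in range(n):
            diff = Y[i] - Y[nbrs[i]]                  # (k, dim)
            d_low = np.linalg.norm(diff, axis=1) + 1e-9
            d_high = D[i, nbrs[i]]
            # Spring-like force pulling embedded distances toward target ones.
            grad[i] = ((1 - d_high / d_low)[:, None] * diff).sum(axis=0)
        Y -= lr * grad / k
    return Y

# Illustrative use on random 10-D data.
X = np.random.default_rng(1).random((200, 10))
print(sparse_mds(X).shape)  # (200, 2)
```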
ISBN:
(Print) 9780819465443
A recently developed, freely available application specifically designed for the visualization of multimodal data sets is presented. The application allows multiple 3D data sets such as CT (x-ray computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), and SPECT (single photon emission computed tomography) of the same subject to be viewed simultaneously. This is done by maintaining synchronization of the spatial location viewed within all modalities, and by providing fused views of the data where multiple data sets are displayed as a single volume. Different options for the fused views are provided by plug-ins. Plug-ins typically used include color overlays and interlacing, but more complex plug-ins such as those based on different color spaces and component analysis techniques are also supported. Corrections are made for resolution differences and for user preferences in contrast and brightness. Pre-defined and custom color tables can be used to enhance the viewing experience. In addition to these essential capabilities, multiple options are provided for mapping 16-bit data sets onto an 8-bit display, including windowing, automatically and dynamically defined tone transfer functions, and histogram-based techniques. The 3D data sets can be viewed not only as a stack of images, but also as the preferred three orthogonal cross-sections through the volume. More advanced volumetric displays of both individual data sets and fused views are also provided. This includes the common MIP (maximum intensity projection), both with and without depth correction, for both individual data sets and multimodal data sets created using a fusion plug-in.
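As an example of one of the 16-bit-to-8-bit mapping options mentioned, here is a minimal window/level sketch; the window settings and synthetic slice are assumptions for illustration, not the application's defaults.

```python
# Linear window/level mapping of 16-bit data onto an 8-bit display range.
import numpy as np

def window_to_8bit(volume16, center, width):
    """Values in [center - width/2, center + width/2] map linearly to 0..255."""
    lo = center - width / 2.0
    scaled = (volume16.astype(np.float32) - lo) / width
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

# Illustrative use: a fake 16-bit CT-like slice with a soft-tissue-style window.
slice16 = np.random.default_rng(0).integers(0, 4096, size=(512, 512), dtype=np.uint16)
display = window_to_8bit(slice16, center=1400, width=800)
print(display.dtype, display.min(), display.max())
```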
ISBN:
(Print) 9781920682514
This proceedings contains 26 papers. Data mining and analytics today have advanced rapidly from the early days of pattern finding in commercial databases. They are now a core part of business intelligence and inform decision-making in many areas of human endeavor including science, business, health care, and security. Mining of unstructured text, semi-structured web information, and multimedia data has continued to receive attention, as have professional challenges to using data mining in industry. Accepted submissions have been grouped into seven sessions reflecting these application areas. Papers published in this conference are categorized under topics such as Industry Data Mining, Text Mining, Unsupervised Learning, Association Rule and Frequent Pattern Mining, Financial and Policing/Security Data Mining, Algorithms, and Data Mining Education. The key terms of this proceedings include audits, business controls, fraud and abuse, customer analytics, industry analytics, data mining effectiveness, analytics project management, analytics industry case studies, corporate insolvency prediction, logistic regression, random forests, medical and health data mining, blogs, weblogs, Blog06, TREC, opinion detection, opinion identification, self-organizing maps, cluster analysis, neural networks, imbalanced data, drill-down, visualization, data mining, soil profiles, agriculture time series, clustering, subsequence time series clustering, data linkage, data matching, deduplication, entity resolution, support vector machines, and quality measures.
Computational modeling and simulation of knee joint can help expand our understanding of the knee biomechanics, and thus improve orthopedic practice. Although many computational knee models have been developed, very f...
ISBN:
(Print) 9780792319948
Progress in fluid mechanics depends heavily on the availability of good experimental data, which can inspire new ideas and concepts but which are also necessary to check and validate theories and numerical calculations. With the advent of new recording and image analysis techniques, new and promising experimental methods for fluid flows have presented themselves, among them the relatively newly developed techniques of particle tracking velocimetry (PTV), particle image velocimetry (PIV), and laser-induced fluorescence (LIF). This volume presents state-of-the-art research on these techniques and their application to fluid flow. Selected papers from the EUROMECH conference on image analysis are published in this volume.