NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions in space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprising massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), the effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing the HEC challenges that must be overcome to solve space-related science problems in the future.
High-throughput experiments, such as gene expression microarrays in the life sciences, result in very large data sets. In response, a wide variety of visualization tools have been created to facilitate data analysis. A primary purpose of these tools is to provide biologically relevant insight into the data. Typically, visualizations are evaluated in controlled studies that measure user performance on predetermined tasks, or through heuristics and expert reviews. To evaluate and rank bioinformatics visualizations based on real-world data analysis scenarios, we developed a more relevant evaluation method that focuses on data insight. This paper presents several characteristics of insight that enabled us to recognize and quantify it in open-ended user tests. Using these characteristics, we evaluated five microarray visualization tools on the amount and types of insight they provide and the time it takes to acquire it. The results of the study guide biologists in selecting a visualization tool based on the type of their microarray data, visualization designers on the key role of user interaction techniques, and evaluators on a new approach for evaluating the effectiveness of visualizations for providing insight. Though we used the method to analyze bioinformatics visualizations, it can be applied to other domains.
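To make the insight-based method concrete, the following is a minimal sketch (in Python) of one way coded insight observations from an open-ended user test could be tallied; the field names, insight categories, and scoring below are illustrative assumptions, not the paper's actual coding scheme.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Insight:
        tool: str        # visualization tool under test
        minute: float    # time into the session when the insight was voiced
        category: str    # hypothetical insight type, e.g. "gene group"
        value: int       # expert-rated biological relevance, 1-5

    def summarize(observations):
        # Per tool: insight count, summed domain value, and time to first
        # insight -- the quantities an insight-based comparison needs.
        by_tool = defaultdict(list)
        for obs in observations:
            by_tool[obs.tool].append(obs)
        return {tool: {"count": len(group),
                       "total_value": sum(o.value for o in group),
                       "first_insight_min": min(o.minute for o in group)}
                for tool, group in by_tool.items()}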
The design and evaluation of most current information visualization systems descend from an emphasis on a user's ability to "unpack" the representations of data of interest and operate on them independently. Too often, successful decision-making and analysis are more a matter of serendipity and user experience than of intentional design and specific support for such tasks; although humans have considerable abilities in analyzing relationships from data, the utility of visualizations remains relatively variable across users, data sets, and domains. In this paper, we discuss the notion of analytic gaps, which represent obstacles faced by visualizations in facilitating higher-level analytic tasks, such as decision-making and learning. We discuss support for bridging these gaps, propose a framework for the design and evaluation of information visualization systems, and demonstrate its use.
ISBN: 0262195348 (print)
In this paper we propose a novel method for learning a Mahalanobis distance measure to be used in the KNN classification algorithm. The algorithm directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and fast classification. Unlike other methods, our classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them. The performance of the method is demonstrated on several data sets, both for metric learning and linear dimensionality reduction.
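As a rough illustration of the objective described above, the sketch below implements the stochastic ("soft") leave-one-out KNN score for a linear map A and maximizes it with a general-purpose optimizer. It is a minimal reading of the method, not the authors' code; the optimizer choice, initialization, and function names are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def neg_nca_score(A_flat, X, y, k):
        # Point i picks neighbor j with probability proportional to
        # exp(-||A x_i - A x_j||^2); the score is the expected number of
        # points whose picked neighbor shares their label.
        n, d = X.shape
        A = A_flat.reshape(k, d)
        Z = X @ A.T                                   # embed into k dims
        sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(sq, np.inf)                  # i never picks itself
        P = np.exp(-sq)
        P /= P.sum(axis=1, keepdims=True)
        same = y[:, None] == y[None, :]               # same-class mask
        return -(P * same).sum()                      # negate to minimize

    def fit_nca(X, y, k=2, seed=0):
        # Learn a k-dimensional linear embedding, e.g. k=2 for visualization.
        rng = np.random.default_rng(seed)
        A0 = 0.1 * rng.standard_normal((k, X.shape[1]))
        res = minimize(neg_nca_score, A0.ravel(), args=(X, y, k),
                       method="L-BFGS-B")
        return res.x.reshape(k, X.shape[1])

The learned Mahalanobis metric is then Q = A^T A. The L-BFGS-B call here falls back on numerical gradients for brevity; a full implementation would supply the analytic gradient of the objective.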
In this paper, we address some of the issues associated with infrared (IR) imaging, with reference to our work on brain tissue from the TgCRND8 mouse, a transgenic model of Alzheimer's disease (AD). AD is the most common cause of dementia in the aging population. One of the characteristic hallmarks of this chronic neurodegenerative disorder is the accumulation of plaques in the brain, usually visualized with histochemistry and immunostaining. Although these methods are extremely useful, they illustrate only certain aspects of the sample, require a great amount of tissue processing, and are highly dependent on experimental protocols and reagent quality. IR imaging provides information on multiple components with a minimal amount of sample processing. However, to interpret the data successfully, the issues of spectral acquisition parameters, pre-processing, and spectral artifacts need to be considered. We discuss the methods commonly used to process the data, including univariate and bivariate spectral analysis and multivariate methods such as hierarchical cluster analysis, as well as some issues concerning the use of second derivatives of IR spectra. (c) 2005 Elsevier B.V. All rights reserved.
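As one concrete reading of that processing pipeline, the sketch below applies a Savitzky-Golay second derivative followed by Ward-linkage hierarchical clustering to a stack of pixel spectra; the window size, polynomial order, and cluster count are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.signal import savgol_filter
    from scipy.cluster.hierarchy import linkage, fcluster

    def second_derivative(spectra, window=9, polyorder=3):
        # Savitzky-Golay second derivative along the wavenumber axis:
        # sharpens overlapping IR bands and suppresses broad baselines.
        return savgol_filter(spectra, window, polyorder, deriv=2, axis=1)

    def cluster_pixels(spectra, n_clusters=5):
        # Hierarchical cluster analysis (Ward linkage) on the derivative
        # spectra; `spectra` has shape (n_pixels, n_wavenumbers), and the
        # returned labels can be reshaped to the image grid to give a
        # pseudo-color cluster map of the tissue section.
        Z = linkage(second_derivative(spectra), method="ward")
        return fcluster(Z, t=n_clusters, criterion="maxclust")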
ISBN: 0780395190 (print)
This paper discusses the ongoing efforts on the development of a Decentralized Data Fusion (DDF) simulator for the analysis and design of a distributed fusion-based tracking system. We have identified the requirements for a DDF simulator and have developed a fully interactive, GUI-based scenario generation tool called SceneGen (Srimath-veeravalli, Subramanian and Kesavadas 2004) for creating battlefield scenarios, and a simulation tool called VizSim for running various DDF algorithms on scenarios created in SceneGen and displaying the simulation results in an easy-to-understand fashion. SceneGen and VizSim have been designed with a full complement of user utilities, including an efficient terrain database generation module, a sensor report generation module, and database connectivity to store and retrieve scenarios and simulation results. The innovative visualization techniques used in the simulator help display the data in a fashion that conveys maximum information to the user. Both SceneGen and VizSim have been tested successfully on scenarios consisting of a large number of entities.
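The abstract does not specify which DDF algorithms VizSim runs; purely as an illustration, one standard building block in decentralized tracking is covariance intersection, sketched below, which fuses two nodes' track estimates without knowing their cross-correlation.

    import numpy as np

    def covariance_intersection(x1, P1, x2, P2, omega=0.5):
        # Fuse estimates (x1, P1) and (x2, P2) from two fusion nodes.
        # omega in [0, 1] weights each node's information and is often
        # chosen to minimize the trace of the fused covariance P.
        P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
        P = np.linalg.inv(omega * P1i + (1.0 - omega) * P2i)
        x = P @ (omega * P1i @ x1 + (1.0 - omega) * P2i @ x2)
        return x, P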
Author: Mika, Peter
De Boelelaan 1081, 1081 HV Amsterdam, Netherlands
We present the Flink system for the extraction, aggregation and visualization of online social networks. Flink employs semantic technology for reasoning with personal information extracted from a number of electronic ...
The analysis of the distribution of pharmaceutical materials in tablet formulations, such as drugs and matrix elements, is critical to product performance and is used in areas such as quality control, impurity testing, and process monitoring. Recently, imaging techniques such as Raman, near-IR, and fluorescence imaging have become popular for "visualization" of pharmaceutical formulations, allowing spatial and chemical composition information to be obtained simultaneously. These methods have primarily focused on molecular imaging, or spatial analysis of the molecular characteristics of the tablet formulation. However, elemental species are also an important part of pharmaceuticals. Micro X-ray fluorescence (MXRF) elemental imaging offers information complementary to molecular imaging techniques. In this study, MXRF was used for the elemental imaging of various commercial pharmaceutical drug and vitamin supplements. Specifically, elemental composition and heterogeneity were monitored for each tablet. (c) 2005 International Centre for Diffraction Data.
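The abstract does not state how heterogeneity was quantified; one simple, hypothetical metric is the coefficient of variation of an element's fluorescence intensity across the mapped pixels, as sketched below.

    import numpy as np

    def heterogeneity(intensity_map):
        # intensity_map: 2-D array of X-ray fluorescence counts for one
        # element across the tablet surface. A larger coefficient of
        # variation indicates a less uniform elemental distribution.
        pixels = np.asarray(intensity_map, dtype=float).ravel()
        return pixels.std() / pixels.mean()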