ISBN (print): 0819454737
X-ray microtomography is rapidly becoming the tool of choice for three-dimensional (3D) imaging of thick structures at the 1-10 µm scale. The fast microtomography system developed at beamline 2-BM of the Advanced Photon Source (APS) is a new class of instrument offering near video-rate acquisition of tomographic data combined with pipelined processing, reconstruction, and visualization. This system can acquire and reconstruct 720 projections (1024 x 1024 pixels) at 0.25° angular increments in under 5 min using a dedicated 32-node computer cluster. At this throughput, hundreds of specimens can be imaged in a 24 h experiment. Alternatively, time-dependent 3D sample evolution can be studied on practical time scales. In this work, we present the current instrument status and the most recent application.
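As a quick sanity check on the figures quoted in this abstract, the angular coverage and daily throughput follow from simple arithmetic. This is only an illustrative sketch; the function names are not part of any beamline software.

```python
# Back-of-envelope check of the scan parameters quoted in the abstract.

def scan_coverage_deg(n_projections: int, step_deg: float) -> float:
    """Total angular range swept by the scan."""
    return n_projections * step_deg

def specimens_per_day(minutes_per_scan: float) -> int:
    """How many back-to-back scans fit into a 24 h experiment."""
    return int(24 * 60 // minutes_per_scan)

coverage = scan_coverage_deg(720, 0.25)  # 180.0 deg: a half-turn, as in parallel-beam CT
throughput = specimens_per_day(5)        # 288 scans, consistent with "hundreds ... in 24 h"
print(coverage, throughput)
```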
ISBN (print): 1932415343
A model of healthcare admission and utilization processes is constructed and analyzed. The performance of the system is investigated using the Visual SLAM and AweSim software packages. The process of data collection for a healthcare admission process is explained. The utilization of each service type is calculated and compared with the real-life setting. The healthcare admission and utilization process network model is built, and a summary of the simulation results is given. The objectives, improving service quality and reducing service time, are achieved through recommendations for improving the services.
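The kind of utilization figure this abstract refers to can be illustrated with a minimal discrete-event sketch of a single admission service. The abstract does not state the arrival or service distributions, so exponential times are assumed here purely for illustration; Visual SLAM/AweSim network models are far richer than this.

```python
# Minimal single-server admission queue: utilization = busy time / total time.
# Poisson arrivals and exponential service are assumptions, not from the paper.
import random

def simulate_utilization(arrival_rate, service_rate, n_patients, seed=0):
    rng = random.Random(seed)
    t = 0.0               # arrival clock
    server_free_at = 0.0  # when the server finishes its current patient
    busy = 0.0            # accumulated service (busy) time
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)   # next patient arrives
        start = max(t, server_free_at)       # wait if the server is busy
        service = rng.expovariate(service_rate)
        server_free_at = start + service
        busy += service
    return busy / server_free_at

u = simulate_utilization(arrival_rate=0.8, service_rate=1.0, n_patients=50_000)
print(round(u, 2))  # close to the analytic utilization rho = 0.8 for M/M/1
```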
ISBN (print): 3540220569
Geographical databases containing detailed, georeferenced data on population, commercial activities, business, transport, and services at the urban level are now available. Such data allow urban phenomena to be examined at a very detailed scale, but they also require new methods for the analysis, comprehension, and visualization of spatial phenomena. In this paper, a density-based method for extracting spatial information from large geographical databases is examined, and first results of its application at the urban scale are presented. Kernel Density Estimation is used as a density-based technique to detect clusters in spatial data distributions. GIS and spatial analytical methods are examined to detect areas of high service supply in an urban environment. The analysis aims at identifying clusters of services in the urban environment and at verifying the correspondence between urban centres and high levels of service.
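The core of the technique named in this abstract, Kernel Density Estimation over point locations, can be sketched in a few lines: evaluate a Gaussian kernel sum on a grid and take the densest cell as a cluster centre. The sample points, grid, and bandwidth below are all made up for illustration; a real study would use GIS layers and a bandwidth tuned to the urban scale.

```python
# 2D Gaussian KDE evaluated on a grid; the peak marks the densest service cluster.
import math

def kde(points, x, y, bandwidth):
    """Gaussian kernel density estimate at (x, y)."""
    h2 = bandwidth * bandwidth
    total = sum(math.exp(-((px - x) ** 2 + (py - y) ** 2) / (2 * h2))
                for px, py in points)
    return total / (len(points) * 2 * math.pi * h2)

# Two synthetic "service clusters": three points near (0, 0), two near (5, 5).
pts = [(0, 0), (0.2, -0.1), (-0.1, 0.3), (5, 5), (5.1, 4.9)]
grid = [(i * 0.5, j * 0.5) for i in range(12) for j in range(12)]
peak = max(grid, key=lambda p: kde(pts, p[0], p[1], bandwidth=0.5))
print(peak)  # (0.0, 0.0): the grid cell on the three-point cluster
```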
Geologists routinely perform three-dimensional analyses to understand and describe spatial relationships. The end product of any data analysis obtained from structural or stratigraphic geological studies is usually a set of complex surfaces. The graphic presentation of data is important because it influences the geological interpretation of natural phenomena. The purpose of this paper is to present geometric and visual modelling tools and methods that allow geologists to incorporate geological interpretation during the data input process and automatically obtain a realistic three-dimensional simulation of the true shape of a complex geologic structure. The procedure also provides a graph for any (cross or oblique) section of the structure. Such displays are particularly useful for people who are not familiar with contour maps, helping them "see" unusual or special features and improving their ability to interpret a map. Geometric modelling of these complex surfaces is based on an interpolation method for discontinuous parametric surfaces using finite elements. Visual modelling is based on photorealistic image synthesis using the ray-tracing method. Our techniques are generic (not tied to a specific program) and are discussed in terms of general capabilities (not specific program parameters). (C) 2004 Elsevier Ltd. All rights reserved.
ISBN (print): 0889864152
We present an automatic video processing system for tracking analysis in Morris water-escape test videos. The system automatically extracts from such videos information (metadata) describing the spatio-temporal trajectories of animals in the maze and the timings of behavioral events such as stopping or crossing a target area. The specific semantic metadata produced by the system are saved in an XML file that describes the mouse behaviour extracted from the video content. This description is also stored in a database using a data model that allows one to perform subsequent queries to obtain important factual and analytical information and to retrieve and visualize selected video sequences matching specific query criteria. In all cases, we have observed a significant increase in both accuracy and efficiency when undertaking such analysis with the procedures described in this paper, in comparison with current methods.
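The behavioral-event detection described here, flagging when a tracked animal crosses a target area, reduces to a geometric test on a sampled (t, x, y) trajectory. The zone centre, radius, and track below are invented for illustration; the paper's own pipeline works on video-derived trajectories.

```python
# Detect the first time a sampled trajectory enters a circular target zone.

def first_crossing(track, cx, cy, r):
    """Return the first timestamp at which (x, y) lies inside the target
    circle of radius r centred at (cx, cy), or None if it never does."""
    r2 = r * r
    for t, x, y in track:
        if (x - cx) ** 2 + (y - cy) ** 2 <= r2:
            return t
    return None

# Illustrative track: the animal spirals in toward the origin.
track = [(0.0, 10, 10), (0.5, 6, 7), (1.0, 3, 4), (1.5, 1, 1)]
print(first_crossing(track, cx=0, cy=0, r=2.5))  # 1.5
```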
Planar laser induced fluorescence (PLIF) was applied to horizontal air/water two-phase annular flow in order to clearly image the liquid film and interfacial wave behavior at the top, side and bottom of the tube. The ...
ISBN (print): 0769522432
Duplication of code is a common phenomenon in the development and maintenance of large software systems. The detection and removal of duplicated code has become a standard activity during the refactoring phases of a software life-cycle. However, code duplication identification tends to produce large amounts of data, making it difficult to understand the duplication situation as a whole. Reengineers can easily lose sight of the forest for the trees. There is a need to support a qualitative analysis of the duplicated code. In this paper, we propose a number of visualizations of duplicated source elements that support reengineers in answering questions such as which parts of the system are connected by copied code, or which parts of the system are copied the most.
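The raw data such visualizations consume comes from a clone detector. A deliberately simple sketch of one: normalize each line and hash sliding windows of k consecutive lines, reporting windows shared by more than one file. Real tools compare at the token or AST level; this is only an illustration of the idea.

```python
# Toy clone detector: shared k-line windows across files.

def duplicated_windows(files, k=3):
    """Map each k-line window to the set of files containing it,
    keeping only windows that appear in more than one file."""
    seen = {}
    for name, text in files.items():
        lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
        for i in range(len(lines) - k + 1):
            window = tuple(lines[i:i + k])
            seen.setdefault(window, set()).add(name)
    return {w: names for w, names in seen.items() if len(names) > 1}

a = "x = 1\ny = 2\nz = x + y\nprint(z)\n"
b = "# other file\nx = 1\ny = 2\nz = x + y\n"
dups = duplicated_windows({"a.py": a, "b.py": b})
print(len(dups))  # 1: the shared window ('x = 1', 'y = 2', 'z = x + y')
```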
ISBN (print): 0780387880
In this article, a new multiple-resolution volume rendering method for Finite Element Analysis (FEA) data is presented. Our method is composed of three stages. At the first stage, the Gauss points of the FEA cells are calculated. The function values, gradients, diffusions, and influence scopes of the Gauss points are computed. By representing the Gauss points as graph vertices and connecting adjacent Gauss points with edges, an adjacency graph is created. The adjacency graph is used to represent the FEA data in the subsequent computation. At the second stage, a hierarchical structure is established upon the adjacency graph. Any two neighboring vertices with similar function values are merged into a new vertex. The similarity is measured by using a user-defined threshold. Consequently, a new adjacency graph is constructed. Then the threshold is increased, and the graph reduction is triggered again to generate another adjacency graph. By repeating this process, multiple adjacency graphs are computed, and a Level of Detail (LoD) representation of the FEA data is established. At the third stage, the LoD structure is rendered by using a splatting method. First, a level of the adjacency graph is selected by the user. The graph vertices are sorted based on their visibility orders and projected onto the image plane in back-to-front order. Billboards are used to render the vertices in the projection. The function values, gradients, and influence scopes of the vertices are utilized to decide the colors, opacities, orientations, and shapes of the billboards. The billboards are then modulated with texture maps to generate the footprints of the vertices. Finally, these footprints are composited to produce the volume rendering image.
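The second stage, merging neighboring vertices whose function values fall within a threshold to produce one coarser adjacency graph per level, can be sketched as below. The merge rule (averaging the two values into the surviving vertex) is an assumption for illustration; the paper does not specify it here.

```python
# One graph-reduction pass: merge adjacent vertices with similar values.

def coarsen(values, edges, threshold):
    """Repeatedly merge the first adjacent pair whose values differ by
    less than `threshold`, until no such pair remains."""
    values = dict(values)
    edges = set(edges)
    merged = True
    while merged:
        merged = False
        for a, b in sorted(edges):
            if abs(values[a] - values[b]) < threshold:
                values[a] = (values[a] + values[b]) / 2  # merged vertex keeps id a
                del values[b]
                # re-point b's edges at a, dropping the merged edge and self-loops
                edges = {tuple(sorted((a if v == b else v, a if w == b else w)))
                         for v, w in edges if {v, w} != {a, b}}
                edges = {e for e in edges if e[0] != e[1]}
                merged = True
                break
    return values, edges

vals = {1: 0.10, 2: 0.12, 3: 0.50, 4: 0.55}
eds = {(1, 2), (2, 3), (3, 4)}
v, e = coarsen(vals, eds, threshold=0.1)
print(sorted(v))  # [1, 3]: vertices 1-2 and 3-4 merged, two remain
```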
dynaLYZE is a Matlab-based multifunctional analysis and visualization package accompanying the DYNOT image system, and was developed by NIRx Medical Technologies. It allows users to easily reorganize, image and interp...
ISBN (print): 0819453501
This discussion/tutorial consists of a few short discussions on the (theoretical) trade-offs of various choices in constructing benchmarks. Most of the results discussed here are "common sense" at a high level. However, all of this "common sense" is (to some extent) quantifiable common sense, and occasionally that quantification is useful. These short discussions cover: 1) prediction domains and loss functions, 2) prediction settings, and 3) assumption failures.
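One standard, quantifiable piece of the loss-function trade-off mentioned here: the constant prediction that minimizes squared error is the mean, while for absolute error it is the median, so the two losses react very differently to a single outlier. The data below are made up to make the contrast visible.

```python
# Mean vs. median as optimal constant predictors under squared vs. absolute loss.
import statistics

data = [1, 1, 1, 1, 10]                      # one gross outlier
best_for_squared = statistics.mean(data)     # dragged toward the outlier
best_for_absolute = statistics.median(data)  # robust to it
print(best_for_squared, best_for_absolute)   # 2.8 1
```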