ISBN (print): 9783540239420
The proceedings contain 58 papers. The special focus of this conference is on Combinatorial Image Analysis. The topics include: binary matrices under the microscope; on the reconstruction of crystals through discrete tomography; binary tomography by iterating linear programs from noisy projections; hexagonal pattern languages; a combinatorial transparent surface modeling from polarization images; integral trees: subtree depth and diameter; supercover of non-square and non-cubic grids; calculating distance with neighborhood sequences in the hexagonal grid; z-tilings of polyominoes and standard basis; curve tracking by hypothesis propagation and voting-based verification; 3D topological thinning by identifying non-simple voxels; convex hulls in a 3-dimensional space; on recognizable infinite array languages; on the number of digitizations of a disc depending on its position; on the language of standard discrete planes and surfaces; characterization of bijective discretized rotations; magnification in digital topology; simple points and generic axiomatized digital surface-structures; Jordan surfaces in discrete antimatroid topologies; algorithms in digital geometry based on cellular topology; an efficient Euclidean distance transform; two-dimensional discrete morphing; a comparison of property estimators in stereology and digital geometry; thinning by curvature flow; convex functions on discrete sets; discrete surface segmentation into discrete planes; corner detection and curve partitioning using arc-chord distance; shape-preserving sampling and reconstruction of grayscale images; junction and corner detection through the extraction and analysis of line segments; tree species recognition with fuzzy texture parameters; and joint non-rigid motion estimation and segmentation.
The development of a highly configurable triple digital particle image velocimetry (DPIV) system is described, capable of acquiring both continuous, statistically independent measurements at up to 14 Hz and time-resolved PIV data at MHz rates. The system was used at QinetiQ's Noise Test Facility (NTF) as part of the EU-funded CoJeN programme to obtain measurements from high-subsonic (Mach <= 0.9), hot (~500 °C), large (1/10th) scale coaxial jet flows at a standoff distance of ~1 m. High-resolution time-averaged velocity and turbulence data were obtained for complete coaxial engine exhaust plumes down to 4 m (20 jet diameters) from the nozzle exit in less than 1 h. In addition, the system allowed volumetric data to be obtained, enabling fast assessment of the spatial alignment of nozzle configurations. Furthermore, novel six-frame time-series data capture is demonstrated at up to 330 kHz and used to calculate time-space correlations within the exhaust, allowing study of the spatio-temporal developments in the jet associated with jet-noise production. The highly automated system provides synchronization triggers for simultaneous acquisition from different measurement systems (e.g. LDA) and is shown to be versatile, rugged, reliable and portable, operating remotely in a hostile environment. Data are presented for three operating conditions and two nozzle geometries, providing a database to be used to validate CFD models of coaxial jet flow.
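The time-space correlation mentioned above can be illustrated with a toy calculation (not the paper's data or code): a downstream probe signal is modeled as a delayed, attenuated copy of an upstream one, and the normalized cross-correlation peaks at the convection delay. All signal parameters here are invented.

```python
import math

# Hypothetical synthetic record: velocity fluctuation at two probe locations,
# the downstream one a delayed, attenuated copy of the upstream one.
n_t = 2000
delay = 7          # convection delay in samples (invented)
u_up = [math.sin(0.05 * t) + 0.3 * math.sin(0.7 * t) for t in range(n_t)]
u_dn = [0.8 * u_up[t - delay] if t >= delay else 0.0 for t in range(n_t)]

def xcorr(a, b, lag):
    """Normalized cross-correlation of a(t) with b(t + lag)."""
    n = len(a) - lag
    ma = sum(a[:n]) / n
    mb = sum(b[lag:]) / n
    num = sum((a[t] - ma) * (b[t + lag] - mb) for t in range(n))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a[:n]))
    sb = math.sqrt(sum((x - mb) ** 2 for x in b[lag:]))
    return num / (sa * sb)

corrs = [xcorr(u_up, u_dn, lag) for lag in range(15)]
peak_lag = max(range(15), key=lambda l: corrs[l])   # recovers the delay
```

The lag of the correlation peak divided by the probe separation gives a convection velocity estimate, which is the quantity such time-space correlations are typically used for.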
ISBN (print): 9781581134360
Semistructured data presents many challenges, mainly due to its lack of a strict schema. These challenges are further magnified when large amounts of data are gathered from heterogeneous sources. We address this by investigating and developing methods to automatically infer structural information from example data. Using XML as a reference format, we approach the schema generation problem through the application of inductive inference theory. In doing so, we review and extend results relating to the search spaces of grammatical inference. We then adapt a method from computational linguistics for evaluating the result of an inference process. Further, we combine several inference algorithms, including both new techniques introduced by us and those from previous work. Comprehensive experimentation reveals our new hybrid method, based upon recently developed optimisation techniques, to be the most effective.
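As a minimal illustration of inferring structure from example XML (far simpler than the grammatical-inference methods the abstract describes), one can collect the observed child-element sequences per element; a real inference step would then generalize these sequences into a grammar. The documents and element names below are invented.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Invented example documents standing in for gathered semistructured data.
examples = [
    "<book><title>A</title><author>X</author></book>",
    "<book><title>B</title><author>Y</author><author>Z</author></book>",
]

def infer_content_model(docs):
    """Collect, per element name, the set of observed child-tag sequences."""
    seen = defaultdict(set)
    def walk(el):
        seen[el.tag].add(tuple(c.tag for c in el))
        for c in el:
            walk(c)
    for d in docs:
        walk(ET.fromstring(d))
    return {tag: sorted(seqs) for tag, seqs in seen.items()}

model = infer_content_model(examples)
# model["book"] now holds both observed sequences; a generalization step
# (e.g. merging states in an inferred automaton) would fold them into
# something like: title, author+.
```

The returned table is the raw evidence an inductive-inference algorithm would start from when searching the space of candidate schemas.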
Author: Pekala, Barbara (Univ Rzeszow, Fac Math & Nat Sci, Interdisciplinary Ctr Computat Modelling, Pigonia 1A, PL-35310 Rzeszow, Poland)
ISBN (print): 9783319668277; 9783319668260
Interval-valued fuzzy relations can be interpreted as a tool that may help to better model imperfect information, especially under imperfectly defined facts and imprecise knowledge. Preference structures are of great interest nowadays because of their applications. From a weak preference relation, the following relations can be derived: strict preference, indifference and incomparability; in this paper they are constructed from aggregations and negations and then examined. Moreover, we propose an algorithm for decision making that uses the new preference structure.
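A sketch of one standard way to derive the three relations from a weak preference relation, assuming the minimum t-norm and the standard negation n(x) = 1 - x applied componentwise to intervals; the paper studies more general aggregations and negations, and the interval values below are invented.

```python
# Intervals are (lower, upper) pairs in [0, 1].

def neg(iv):
    """Interval extension of the standard negation n(x) = 1 - x."""
    lo, hi = iv
    return (1 - hi, 1 - lo)

def meet(a, b):
    """Componentwise minimum: interval extension of the minimum t-norm."""
    return (min(a[0], b[0]), min(a[1], b[1]))

def preference_structure(r_ab, r_ba):
    """Derive (strict preference, indifference, incomparability)
    from the weak preference degrees R(a,b) and R(b,a)."""
    strict = meet(r_ab, neg(r_ba))        # a strictly preferred to b
    indiff = meet(r_ab, r_ba)             # a and b indifferent
    incomp = meet(neg(r_ab), neg(r_ba))   # a and b incomparable
    return strict, indiff, incomp

p, i, j = preference_structure((0.7, 0.9), (0.2, 0.4))
```

With these invented inputs the construction yields a strict preference near (0.6, 0.8), illustrating how the interval bounds propagate through negation and meet.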
ISBN (print): 9781424416936
The efficient use of the cache hierarchy of an execution platform often has a major impact on the performance of an application. It is often difficult for the application programmer or the compiler to determine a suitable memory layout for the application data, since the interactions between the memory accesses cannot be fully anticipated before the program execution. This paper introduces an approach to improve cache efficiency by dynamically padding memory allocations with a post-compilation tool. Data structures such as histograms are used to evaluate cache simulations of memory traces of the application under consideration and to compute optimized pad sets. These can then be used for later runs of the application with different data sets. The accumulated representation of the references' memory accesses additionally offers a visualization interface to the algorithm-specific memory access pattern of each captured reference. As implied above, an advantage of the method is that it also allows improving the cache usage of binary-only applications for which no source code is available. Post-optimization cache behavior analyses as well as run-time measurements show that the cache hit rates of the runtime-modified applications are considerably increased by applying the generated pad set.
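The padding idea can be illustrated with a toy direct-mapped cache simulation (not the paper's tool): two arrays whose base addresses are exactly one cache size apart conflict in every set, and inserting a one-line pad between them removes the conflicts. All sizes and addresses are invented.

```python
LINE, SETS = 64, 64                      # 64 B lines, 64 sets -> 4 KiB cache

def hit_rate(trace):
    """Simulate a direct-mapped cache over a list of byte addresses."""
    tags = {}                            # set index -> currently cached tag
    hits = 0
    for addr in trace:
        s = (addr // LINE) % SETS
        tag = addr // (LINE * SETS)
        if tags.get(s) == tag:
            hits += 1
        else:
            tags[s] = tag                # miss: fill the set with this line
    return hits / len(trace)

def trace(pad):
    """Alternating accesses to two 4 KiB arrays separated by `pad` bytes."""
    a, b = 0, 4096 + pad                 # pad == 0: b aliases a in every set
    t = []
    for _ in range(100):
        for i in range(0, 4096, 8):
            t += [a + i, b + i]
    return t

unpadded, padded = hit_rate(trace(0)), hit_rate(trace(64))
# Every access thrashes without the pad; one extra cache line fixes it.
```

This is the effect the paper's histogram-driven pad-set computation exploits, at the scale of whole applications rather than two arrays.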
India graduates 1.5 million engineering students every year, the majority of them from its Tier 2 and Tier 3 colleges, where there is a severe shortage of qualified faculty and lab infrastructure, and where English is second...
The paper describes the basic principles of complex threat modeling, and the task of complex threat detection is formalized. The proposed modeling principles are based on the idea of identifying the links between el...
ISBN (print): 9781614991618; 9781614991601
In this paper the author describes an original approach to the visual analysis of data represented as general graphs, based on a modification of the magnetic-spring model and color-coded cognitive manipulation of graph elements. The theoretical background of magnetic fields as applied to graph drawing is presented, along with a discussion of appropriate visualization techniques for improved information analysis and comprehension. The use of other existing graph layout strategies (e.g. hierarchical, circular) in conjunction with the magnetic-spring approach is also considered for improved data representation capabilities. A concept of an integrated virtual workshop for graph visualization is introduced, which relies on the aforementioned model and can be used in GVS (Graph Visualization Systems). A case study applying the proposed approach is presented, along with conclusions on its usability and potential future work in this field.
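A minimal sketch of a magnetic-spring layout iteration, assuming a parallel field and invented force constants (the paper's model is richer): springs restore an ideal edge length while a magnetic term rotates each edge toward the global field direction, so the layout settles with edges aligned to the field.

```python
import math, random

random.seed(1)
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("b", "d")]    # invented toy graph
pos = {v: [random.random(), random.random()] for v in nodes}
L, K_SPRING, K_MAG = 1.0, 0.1, 0.05             # invented constants
FIELD = (0.0, 1.0)                              # parallel field pointing "up"

for _ in range(800):
    force = {v: [0.0, 0.0] for v in nodes}
    for u, v in edges:
        dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
        d = math.hypot(dx, dy) or 1e-9
        ex, ey = dx / d, dy / d                 # unit vector along the edge
        f = K_SPRING * (d - L)                  # spring: restore length L
        mx = K_MAG * (FIELD[0] - ex)            # magnetic: rotate edge
        my = K_MAG * (FIELD[1] - ey)            #   toward FIELD
        force[u][0] += f * ex - mx
        force[u][1] += f * ey - my
        force[v][0] += -f * ex + mx
        force[v][1] += -f * ey + my
    for v in nodes:
        pos[v][0] += force[v][0]
        pos[v][1] += force[v][1]
# Afterwards every edge points roughly along FIELD with length near L,
# giving the "upward flow" reading magnetic-spring layouts are used for.
```

Real magnetic-spring systems add node-node repulsion and support radial or concentric fields; this sketch keeps only the two forces needed to show the principle.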
This paper briefly introduces a series of workflows for automated digital geological mapping based on 3D photogrammetry. The 3D photogrammetry technique can produce a detailed 3D digital model of the area of interest ...
A foundation is investigated for the application of loosely structured data on the Web. This area is often referred to as Linked Data, due to the use of URIs in data to establish links. This work focuses on emerging W3C standards which specify query languages for Linked Data. The approach is to provide an abstract syntax to capture Linked Data structures and queries, which are then internalised in a process calculus. An operational semantics for the calculus specifies how queries, data and processes interact. A labelled transition system is shown to be sound with respect to the operational semantics. Bisimulation over the labelled transition system is used to verify an algebra over queries. The derived algebra is a contribution to the application domain. For instance, the algebra may be used to rewrite a query to optimise its distribution across a cluster of servers. The framework used to provide the operational semantics is powerful enough to model related calculi for the Web.
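One algebraic law of the kind such a query algebra can verify is the distribution of join over union, which is exactly what enables splitting a query across servers. The toy evaluator below (invented, SPARQL-like, over a three-triple store) checks the law empirically on one instance rather than proving it.

```python
from itertools import product

# Invented three-triple RDF-style store.
data = {("alice", "knows", "bob"), ("bob", "knows", "carol"),
        ("alice", "likes", "carol")}

def match(pattern, triple):
    """Bind pattern variables (strings starting with '?') against a triple."""
    env = {}
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if env.get(p, t) != t:
                return None
            env[p] = t
        elif p != t:
            return None
    return env

def eval_q(q):
    """Evaluate a query tree of 'bgp', 'union' and 'join' nodes to bindings."""
    op = q[0]
    if op == "bgp":
        return [e for t in data if (e := match(q[1], t)) is not None]
    if op == "union":
        return eval_q(q[1]) + eval_q(q[2])
    if op == "join":        # keep pairs that agree on shared variables
        return [{**a, **b} for a, b in product(eval_q(q[1]), eval_q(q[2]))
                if all(a[k] == b[k] for k in a.keys() & b.keys())]

# Law: Join(A, Union(B, C)) == Union(Join(A, B), Join(A, C)).
A = ("bgp", ("?x", "knows", "?y"))
B = ("bgp", ("?y", "knows", "?z"))
C = ("bgp", ("?x", "likes", "?z"))
lhs = ("join", A, ("union", B, C))
rhs = ("union", ("join", A, B), ("join", A, C))
canon = lambda q: sorted(tuple(sorted(d.items())) for d in eval_q(q))
# canon(lhs) == canon(rhs): the rewrite preserves the result multiset.
```

The right-hand side is the form a query planner would ship to two servers, one evaluating each join, which is the optimisation the abstract mentions.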