Summary form only given. In this paper we generalize the shortest path algorithm to compute the shortest cycle in each homotopy class on a surface of arbitrary topology, utilizing the universal covering space (UCS) from algebraic topology. In order to store and handle the UCS, we propose a two-level data structure that is efficient in storage and easy to process. We also point out several practical applications of our shortest cycle algorithms and the UCS data structure.
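The reduction behind this abstract can be illustrated on a graph: once a finite portion of the universal cover is built, the shortest cycle in a homotopy class through a base vertex corresponds to a shortest path in the cover from one lift of that vertex to its translate under the matching deck transformation. A minimal sketch, assuming a toy cover fragment given as a weighted adjacency dict; the vertex names `b0`/`b1` (two lifts of the same base vertex) are illustrative and not from the paper:

```python
import heapq

def dijkstra(adj, src):
    """Standard Dijkstra shortest-path distances on a weighted digraph.
    adj maps each vertex to a list of (neighbor, edge_weight) pairs."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy fragment of a universal cover: b0 and b1 are two lifts of the
# same base vertex, so dist[b1] is the length of the shortest cycle
# in the homotopy class of the corresponding deck transformation.
cover = {"b0": [("x", 1.0), ("y", 2.0)],
         "x": [("b1", 1.0)],
         "y": [("b1", 1.0)],
         "b1": []}
print(dijkstra(cover, "b0")["b1"])
```

In the real algorithm the cover fragment would be unfolded lazily from the surface mesh; the two-level data structure the paper proposes presumably serves exactly that purpose.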
The rise of cloud computing and its elastic, on-demand resource provisioning introduces the need for a flexible and scalable multi-tenant architecture. In a multi-tenant application every tenant (client) makes use of shared application instances, but each tenant typically has its own user data. The shared application instance behaves like a private instance by guaranteeing both data separation and performance separation for every tenant. As the number of tenants increases, the amount of data grows. A scalable storage solution is needed, allowing tenant data to be divided over multiple database instances while taking into account performance isolation and custom data assurance policies. In this paper we introduce an abstraction layer for achieving high scalability for the storage of tenant data. This layer uses data allocation algorithms to determine an acceptable allocation of tenant data to different databases. We describe a mathematical model for the allocation of tenant data which can be optimized using existing linear programming techniques, and introduce the BDAA-n and FDAA, two algorithms that find an optimal allocation of data by iterating over the possible permutations. The proposed solutions are evaluated based on their flexibility, complexity, and efficiency. The flexibility of the BDAA and FDAA makes them easy to customize and extend to fit most scenarios, but the algorithms achieve their best results for tenants with a limited number of subtenants. Linear programming is an alternative for tenants with a higher number of subtenants, but the customizability of the algorithm for specific use cases is limited by the need for linear functions.
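The abstract does not spell out how BDAA-n/FDAA enumerate allocations, so the following is only a hedged sketch of the general idea: exhaustively try every assignment of tenant data to databases and keep the one with the smallest maximum database load, a crude stand-in for the performance-isolation objective. The function and parameter names are illustrative, and the factorial blow-up mirrors the abstract's caveat that enumeration only suits a limited number of (sub)tenants:

```python
from itertools import product

def best_allocation(tenant_sizes, n_dbs):
    """Exhaustively assign each tenant's data to one of n_dbs databases,
    minimizing the maximum database load.  tenant_sizes maps tenant
    name -> data size; returns (assignment dict, achieved max load)."""
    tenants = list(tenant_sizes)
    best, best_load = None, float("inf")
    for assign in product(range(n_dbs), repeat=len(tenants)):
        loads = [0] * n_dbs
        for t, db in zip(tenants, assign):
            loads[db] += tenant_sizes[t]
        if max(loads) < best_load:
            best_load = max(loads)
            best = dict(zip(tenants, assign))
    return best, best_load

# Three tenants, two databases: the balanced split puts "a" alone.
print(best_allocation({"a": 5, "b": 3, "c": 2}, 2))
```

A linear programming formulation, as the paper notes, scales better but would require the objective and any custom assurance policies to be expressed as linear constraints.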
ISBN: (print) 0769520847
Polygon mesh is among the most common data structures used for representing objects in computer graphics. Unfortunately, a polygon mesh does not capture high-level structures, unlike a hierarchical model. In general, high-level abstractions are useful for managing data in applications. In this paper, we present a method for decomposing an object represented as polygon meshes into components by means of critical points. The method consists of steps to define the root vertex of the object, define a function on the polygon meshes, compute the geodesic tree and critical points, decide the decomposition order, and extract components using backwards flooding. We have implemented the method. The preliminary results show that it works effectively and efficiently. The decomposition results can be useful for applications such as 3D model retrieval and morphing.
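A rough sketch of the first two steps (the function on the mesh and its critical points), assuming the mesh connectivity is given as a weighted edge-adjacency dict. Approximating geodesic distance by shortest edge paths is a simplification of whatever the paper actually computes, and treating local maxima of that function as critical points is likewise an assumption:

```python
import heapq

def geodesic_function(adj, root):
    """Distance-from-root function on mesh vertices, approximated by
    shortest paths along mesh edges (Dijkstra)."""
    dist = {root: 0.0}
    pq = [(0.0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def local_maxima(adj, dist):
    """Vertices whose function value exceeds all their neighbors':
    candidate critical points at the tips of protruding components."""
    return [u for u in adj if all(dist[u] > dist[v] for v, _ in adj[u])]

# A degenerate "mesh": a root with one branch ending at a tip.
mesh = {"r": [("a", 1.0)], "a": [("r", 1.0), ("tip", 1.0)], "tip": [("a", 1.0)]}
d = geodesic_function(mesh, "r")
print(local_maxima(mesh, d))
```

In the full method these critical points would then drive the decomposition order and the backwards flooding that extracts each component.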
Specifically designed to exchange configuration information from a management platform to network components, the XML-based NETCONF protocol has become widely used. In combination with NETCONF, YANG is the corresponding data modeling language that defines the associated data structures, supporting virtually all network configuration protocols. YANG itself is a semantically rich language which - in order to facilitate familiarization with the relevant subject - is often visualized using UML to involve other experts or developers and to support them in their daily work (writing applications which make use of YANG/NETCONF). To support this process, this paper presents a novel approach to optimize and simplify YANG data models, as current solutions tend to produce very complex UML diagrams. Therefore, we have (i) defined a bidirectional mapping of YANG to UML, (ii) developed a strategy to reduce the number of objects, and (iii) created a tool that renders the created UML diagrams, closing the gap between technically improved data models and their human readability.
A kind of tree structure called the extended binary tree (EBT) is presented to represent the line adjacency graph (LAG) in order to reduce the computational complexity of LAG-based algorithms in binary image processing. The traversal and the storage of the EBT are discussed. Applications of the structure to engineering drawing entry are shown.
The main practical problem in model checking is the combinatorial explosion of system states, commonly known as the state explosion problem. Abstraction methods attempt to reduce the size of the state space by employing knowledge about the system and the specification in order to model only relevant features in the Kripke structure. Counterexample-guided abstraction refinement is an automatic abstraction method where, starting with a relatively small skeletal representation of the system to be verified, increasingly precise abstract representations of the system are computed. The key step is to extract information from false negatives ("spurious counterexamples") due to over-approximation.
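The refinement loop can be made concrete on a toy reachability problem. In this self-contained sketch (an illustration of the general CEGAR scheme, not any specific published algorithm), blocks of concrete states form the abstract structure, abstract transitions over-approximate concrete ones, and a spurious abstract counterexample splits the block where concrete replay gets stuck:

```python
from collections import deque

def cegar(states, trans, init, bad):
    """Toy CEGAR for reachability of `bad` from `init`.
    trans maps a state to its set of successor states."""
    # Initial coarse partition: init block, bad block, everything else.
    blocks = [b for b in ({init}, set(states) - {init, bad}, {bad}) if b]
    while True:
        block_of = lambda s: next(i for i, b in enumerate(blocks) if s in b)
        # Abstract edge A -> B iff some s in A steps to some t in B.
        aedges = {i: set() for i in range(len(blocks))}
        for s, ts in trans.items():
            for t in ts:
                aedges[block_of(s)].add(block_of(t))
        # BFS for an abstract counterexample (path to the bad block).
        src, dst = block_of(init), block_of(bad)
        prev, q = {src: None}, deque([src])
        while q:
            u = q.popleft()
            for v in aedges[u]:
                if v not in prev:
                    prev[v] = u
                    q.append(v)
        if dst not in prev:
            return "safe"            # over-approximation already bad-free
        path, u = [], dst
        while u is not None:
            path.append(u)
            u = prev[u]
        path.reverse()
        # Replay the abstract path on the concrete system.
        reach = {init}
        for i in range(1, len(path)):
            nxt = {t for s in reach for t in trans.get(s, ())
                   if t in blocks[path[i]]}
            if not nxt:              # spurious: split the stuck block
                full = blocks[path[i - 1]]
                blocks[path[i - 1]] = set(reach)
                blocks.append(full - reach)
                break
            reach = nxt
        else:
            return "unsafe"          # counterexample is real
```

Here `cegar({0,1,2,3}, {0:{1}, 2:{3}}, 0, 3)` first finds a spurious path through the block {1,2}, splits it, and then proves safety on the refined abstraction.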
X-ray powder diffraction data of voglibose are reported, and its crystal and molecular structures were determined by simulated annealing and rigid-body Rietveld refinement methods. Voglibose was found to crystallize with triclinic symmetry in space group P-1. The lattice parameters were determined to be a=6.1974(6) angstrom, b=6.9918(5) angstrom, c=7.3955(9) angstrom, alpha=70.8628(3) degrees, beta=103.5312(4) degrees, gamma=94.3867(5) degrees, V=294.2(2) angstrom(3), and rho(cal)=1.495 g/cm(3). The crystal structure contains isolated C10H21NO7 molecules. (C) 2010 International Centre for Diffraction Data. [DOI: 10.1154/1.3478418]
A novel method for representing image orientation structure is used to measure the orientations of line segments in a series of increasingly blurred images. An algorithm for mapping filtered image data into an orientation feature space is defined. The algorithm is applied using four sets of filters. The results show that the algorithm effectively exploits redundancy in the feature values to yield robust inferences across a broad range of scales and through large amounts of blurring.
The authors are researching human and humanoid movement. They discuss the transformation system among the movement score, the movement description in the computer, and the body movement data which are used to generate computer graphics animation of humanoid movement. The proposed humanoid movement description method can describe not only the movement information described by Labanotation but also more detailed movement information. This movement description is called "movement MIDI based on Labanotation" or "continuous Labanotation", and it sits between Labanotation (which the choreographer can read but the computer cannot) and the humanoid movement data (which the computer can read but the choreographer cannot). It can be read by both the choreographer and the computer, and it is useful for modifying the humanoid movement data in order to modify the CG animation, since the information in Labanotation is not sufficient to generate a natural display of CG animation. We report the results of transforming motion-captured humanoid data into the proposed movement description.
Parallelizing data clustering algorithms has attracted the interest of many researchers over the past few years. Many efficient parallel algorithms have been proposed to build partitionings over huge volumes of data. The effectiveness of these algorithms is attributed to the distribution of data among a cluster of nodes and to the parallel computation models. Although parallel models deal effectively with increasing volumes of data, little work has been done on the validation of big clusters. To deal with this issue, we propose a parallel and scalable model, referred to as S-DI (Scalable Dunn Index), to compute the Dunn Index measure for internal validation of clustering results. Rather than computing the Dunn Index on a single machine in the clustering validation process, the proposed measure is computed by distributing the partitioning among a cluster of nodes using a customized parallel model under the Apache Spark framework. The proposed S-DI is also enhanced by a Sketch and Validate sampling technique which aims to approximate the Dunn Index value using a small representative data sample. Different experiments on simulated and real datasets showed good scalability of our proposed measure and reliable validation compared to other existing measures when handling large-scale data.
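For reference, the measure being parallelized: the Dunn Index is the smallest between-cluster distance divided by the largest within-cluster diameter, so higher values indicate compact, well-separated clusters. A sequential sketch in plain Python for clarity; the paper's S-DI distributes these same pairwise computations across Spark partitions, which this version does not attempt:

```python
from itertools import combinations
from math import dist

def dunn_index(clusters):
    """Dunn Index of a partitioning, given as a list of clusters,
    each a list of points (coordinate tuples)."""
    def diameter(c):
        # Largest pairwise distance inside one cluster.
        return max((dist(p, q) for p, q in combinations(c, 2)), default=0.0)
    def separation(a, b):
        # Smallest pairwise distance between two clusters.
        return min(dist(p, q) for p in a for q in b)
    max_diam = max(diameter(c) for c in clusters)
    min_sep = min(separation(a, b) for a, b in combinations(clusters, 2))
    return min_sep / max_diam

# Two tight, well-separated clusters: separation 5, diameter 1.
print(dunn_index([[(0, 0), (0, 1)], [(5, 0), (5, 1)]]))
```

The quadratic pairwise cost inside `diameter` and `separation` is exactly what motivates both the Spark distribution and the sampling approximation described in the abstract.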