The aim of this study is to identify the correlations between the use case model, the data model, and the desired user interface (UI). Since use cases describe the interaction between users and the system, carried out through the user interface with the aim of changing the system's state, the correlation between these three components should be taken into account in the software requirements phase. In this study, the authors introduce a meta-model of software requirements built on the identified correlations. Based on this meta-model, a model of concrete software requirements can be created, which enables not only the design and implementation of the user interface but also the automation of this process. To demonstrate the sustainability of this approach, they developed a software tool that transforms the model into executable source code. They considered different ways in which users interact with the system and, consequently, recommend a set of the most common user interface templates. Flexibility of the user interface is thus achieved, as the UI for the same use case can be displayed in several different ways while maintaining the desired functionality.
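The template-based generation idea can be sketched in a few lines. This is a minimal illustration only: the `UseCase` class, the `pick_template` function, and the template set are invented assumptions, not the authors' actual meta-model.

```python
# Minimal sketch of mapping requirements-model elements to UI templates.
# All names here (UseCase, pick_template, TEMPLATES) are illustrative
# assumptions, not the paper's meta-model.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    interaction: str               # e.g. "create", "edit", "list", "search"
    entity_fields: list = field(default_factory=list)

# A small set of common UI templates, keyed by interaction style.
TEMPLATES = {
    "create": "form",
    "edit": "form",
    "list": "table",
    "search": "filtered-table",
}

def pick_template(uc: UseCase) -> dict:
    """Derive a UI description from a use case and its data entities."""
    return {
        "template": TEMPLATES[uc.interaction],
        "widgets": [f"input:{f}" for f in uc.entity_fields],
    }

ui = pick_template(UseCase("RegisterCustomer", "create", ["name", "email"]))
```

Because the same use case can be paired with a different template, the UI can vary while the underlying functionality stays fixed.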
Geographic Hypermedia (GH) is a rich and interactive map document with geo-tagged graphics, sound and video elements. A Geographic Hypermedia System (GHS) is designed to manage, query, display and explore GH data. *** emerging geo-tagged videos and measurable images as valuable geographic data resources, this paper aims to design a web-based GHS using web mapping, geoprocessing, video streaming and XMLHTTP services. The concept, data model, system design and implementation of this GHS are discussed in detail. Geo-tagged videos are modeled as temporal, spatial and metadata entities such as video clip, video path and frame-based description. ***, geo-tagged stereo video and derived data are modeled as interrelated entities: original video, rectified video, stereo video, video path, frame-based description and measurable image (rectified and disparity image with baseline, interior and exterior parameters). The entity data are organized into video files, GIS layers with linear referencing and XML documents for web ***. The data can be integrated in HTML pages or used as Rich Internet Applications (RIA) using standard web technologies such as AJAX, *** and RIA ***. The SOA-based GHS is designed using four types of web services: ArcGIS Server 9.3 web mapping and geoprocessing services, Flash FMS 3.0 video streaming services and GeoRSS XMLHTTP services. Applications in road facility management and campus hypermapping indicate that the GH data models and technical solutions introduced in this paper are useful and flexible enough for wider deployment as a GHS.
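The interrelated entities named above (video clip, video path with linear referencing, frame-based description) can be sketched as plain data records. The field names and the toy linkage function are assumptions for illustration, not the paper's exact schema.

```python
# Sketch of the geo-tagged video entities described above. Field names
# are invented for illustration, not the paper's actual data model.

from dataclasses import dataclass

@dataclass
class VideoClip:
    clip_id: str
    uri: str
    start_time: float
    end_time: float

@dataclass
class VideoPathPoint:
    clip_id: str
    measure: float                 # linear-referencing measure along the path
    lon: float
    lat: float

def points_near(points, measure, tolerance=0.5):
    """Find path points whose measure is close to a given frame time,
    a toy version of linking a video frame to its map position."""
    return [p for p in points if abs(p.measure - measure) <= tolerance]

path = [VideoPathPoint("c1", m, 116.0 + m * 1e-4, 39.9) for m in range(5)]
hits = points_near(path, 2.2)
```

In the actual system this linkage is carried by GIS layers with linear referencing rather than an in-memory list.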
This paper describes how the OpenACC data model is implemented in current OpenACC compilers, ranging from research compilers (OpenUH and OpenARC) to a commercial compiler (the PGI OpenACC compiler). First, we summarize various memory architectures in today's accelerator systems. We then describe details and issues in implementing the OpenACC data model in three different OpenACC compilers. This includes managing page tables, asynchronous data transfers, asynchronous memory allocation and free, the host data construct, aliasing on a data directive, reusing device memory, partially present data, and adjacent data. We also discuss ongoing work to manage large, complex dynamic data structures. We measured the present table lookups, device memory allocation, pinned memory allocation, and managed memory in the three OpenACC compilers using eight OpenACC applications (seven from the SPEC ACCEL benchmark suite and a shock-hydrodynamics mini-application called LULESH). (C) 2018 Elsevier B.V. All rights reserved.
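The "present table" and "partially present data" concepts can be illustrated with a toy structure that maps host address ranges to device copies with reference counts. This is a simplified sketch of the general idea, not any of the three compilers' actual runtimes.

```python
# Toy sketch of an OpenACC-style present table: host address ranges
# mapped to device copies, with reference counts, reuse of an existing
# device allocation, and detection of partially present data.
# Purely illustrative; not a real compiler runtime.

class PresentTable:
    def __init__(self):
        self.entries = []          # list of (host_start, length, refcount)

    def lookup(self, start, length):
        """Return 'present', 'partial', or 'absent' for a host range."""
        end = start + length
        for (s, n, _) in self.entries:
            e = s + n
            if start >= s and end <= e:
                return "present"
            if start < e and end > s:
                return "partial"   # overlaps an entry but is not contained
        return "absent"

    def enter_data(self, start, length):
        """Map a host range; bump the refcount to reuse device memory."""
        for i, (s, n, rc) in enumerate(self.entries):
            if s == start and n == length:
                self.entries[i] = (s, n, rc + 1)
                return
        self.entries.append((start, length, 1))

pt = PresentTable()
pt.enter_data(1000, 100)
```

The "partial" case is exactly the partially-present-data issue the paper discusses: a requested range overlaps a mapped region without being contained in it.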
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., the Ogata-Banks solution) is found to be most representative of the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations against a reference estimation by the Ogata-Banks solution, in which part of the earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted fall within the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
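The Ogata-Banks solution adopted as the physically-based data model is the one-dimensional advection-dispersion solution for steady flow with a constant-concentration inlet: C(x,t) = (C0/2)[erfc((x − vt)/(2√(Dt))) + exp(vx/D) erfc((x + vt)/(2√(Dt)))]. A direct implementation (with illustrative parameter values, not the EIT site's) looks like this:

```python
# The Ogata-Banks solution: 1-D advection-dispersion under steady flow
# with a constant inlet concentration c0. Parameter values below are
# illustrative, not calibrated to the EIT site.

import math

def ogata_banks(x, t, v, D, c0=1.0):
    """C(x, t) for pore velocity v and dispersion coefficient D."""
    if t <= 0:
        return 0.0
    s = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / s)
    term2 = math.exp(v * x / D) * math.erfc((x + v * t) / s)
    return 0.5 * c0 * (term1 + term2)

# Well behind the advancing front (v*t >> x) the concentration
# approaches c0; far ahead of it, the concentration is near zero.
c_behind = ogata_banks(x=1.0, t=100.0, v=0.05, D=0.01)
c_ahead = ogata_banks(x=20.0, t=100.0, v=0.05, D=0.01)
```

The second exponential term is the boundary correction; for large Peclet numbers it is often dropped, but keeping it matches the full closed-form solution.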
Organizations may be related in terms of similar operational procedures, management, and the supervisory agencies coordinating their operations. Supervisory agencies may be governmental or non-governmental, but in all cases they perform oversight functions over the activities of the organizations under their control. Multiple organizations related through the oversight of a common supervisory agency may nevertheless differ significantly in geographical location, aims, and objectives. To harmonize these differences so that comparative analysis is meaningful, data about the operations of multiple organizations under one control or management can be cultivated in a uniform format. In this format, data is easily harvested, and the ease with which it can be used for cross-population analysis, referred to as data comparability, is enhanced. The current practice, whereby organizations under one control maintain their data in independent databases specific to an enterprise application, greatly reduces data comparability and makes cross-population analysis laborious. In this paper, the collocation data model is formulated as consisting of big data technologies beyond data mining techniques, and it is used to reduce the heterogeneity inherent in databases maintained independently across multiple organizations. The collocation data model is thus presented as capable of enhancing data comparability across multiple organizations. The model was used to cultivate the assessment scores of students in a set of schools over a period and to rank the schools. The model permits data comparability across several geographical scales, among them national, regional, and global, where harvested data form the basis for generating analytics that yield insight, hindsight, and foresight about organizational problems and strategies.
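The school-ranking case study can be sketched as harvesting records in one uniform format and aggregating across organizations. The record layout and school names below are invented for illustration, not the paper's dataset.

```python
# Sketch of cross-population analysis over uniformly formatted records:
# assessment scores collected from several schools, then ranked.
# Record layout and values are invented for illustration.

records = [
    {"school": "A", "student": 1, "score": 78},
    {"school": "A", "student": 2, "score": 82},
    {"school": "B", "student": 1, "score": 91},
    {"school": "B", "student": 2, "score": 85},
    {"school": "C", "student": 1, "score": 70},
]

def rank_schools(rows):
    """Average the scores per school, then rank schools best-first."""
    totals = {}
    for r in rows:
        totals.setdefault(r["school"], []).append(r["score"])
    means = {s: sum(v) / len(v) for s, v in totals.items()}
    return sorted(means, key=means.get, reverse=True)

ranking = rank_schools(records)
```

The point of the uniform format is that this aggregation works unchanged whether the rows come from one school or from organizations spread across national, regional, or global scales.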
Considering the diversity among search engines, integrating them efficiently is an important but difficult job. It is essential to provide a data model that can give a detailed description of the query capabilities of heterogeneous search engines. By means of this model, the meta-searcher can map users' queries onto specific sources more accurately, achieving good precision and recall; it also benefits the selection of target sources and the computation of their priority. Because new search engines emerge frequently and old ones are updated when their function and content change, the data model needs good adaptivity and scalability to keep in step with the rapidly developing World Wide Web. This paper gives a formal description of the query capabilities of heterogeneous search engines and an algorithm for mapping a query from a general mediator format into the specific wrapper format of a particular search engine. Compared with related work, the special features of our work are a stronger focus on the constraints on and between terms, attribute order, and the impact of logical operator restrictions. The contribution of our work is a data model that is both expressive enough to meticulously describe the query capabilities of current World Wide Web search engines and flexible enough to integrate them efficiently. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
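The mediator-to-wrapper mapping constrained by declared query capabilities can be sketched as follows. The capability fields (`attributes`, `operators`, `max_terms`) and the wrapper syntax are assumptions for illustration, not the paper's formal model.

```python
# Toy sketch of capability-constrained query mapping: a mediator query
# is translated into one engine's wrapper format only if that engine's
# declared capabilities can support it. Capability fields and wrapper
# syntax are invented for illustration.

CAPABILITIES = {
    "engineX": {
        "attributes": {"title", "body"},
        "operators": {"AND"},          # this engine cannot express OR
        "max_terms": 2,
    }
}

def map_query(engine, terms, operator="AND"):
    """Translate (attribute, term) pairs into the wrapper format,
    or return None when the query exceeds the engine's capabilities."""
    cap = CAPABILITIES[engine]
    if operator not in cap["operators"] or len(terms) > cap["max_terms"]:
        return None
    if any(attr not in cap["attributes"] for attr, _ in terms):
        return None
    return " ".join(f"{attr}:{t}" for attr, t in terms)

q = map_query("engineX", [("title", "xml"), ("body", "mapping")])
```

A meta-searcher would run this check per source and route the query only to engines where the mapping succeeds, which is how source selection falls out of the capability description.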
This article describes a data model for encoding the American College of Radiology Appropriateness Criteria (ACRAC) for the selection of diagnostic imaging procedures. These guidelines are widely recognized as an authoritative repository of "best evidence" concerning appropriate radiology tests for a large number of clinical conditions. In its current text-document format, the ACRAC is of limited utility for electronic use. The data model the authors propose completely encodes all attributes and domains of the published guidelines and is suitable for translation into any industry-standard relational database system. Additionally, the authors have added mappings onto commonly used procedure (CPT) and clinical problem (ICD) coding systems. When populated with the current ACRAC content, such a database could serve as the "master" repository of the guidelines, with changes and additions made via an interface built with standard database application development tools. The database could also be made available for incorporation into existing information systems used for order entry, decision support, compliance tracking, and health services research at regional and national levels.
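A relational encoding in the spirit described, conditions, procedures, and appropriateness ratings with CPT and ICD mappings, can be sketched with an in-memory SQLite database. The table and column names, and the sample rows, are assumptions for illustration, not the actual ACRAC schema or content.

```python
# Minimal relational sketch: clinical conditions, imaging procedures,
# and appropriateness ratings, with CPT and ICD code mappings.
# Schema and rows are invented for illustration, not the ACRAC schema.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE condition (id INTEGER PRIMARY KEY, name TEXT, icd TEXT);
    CREATE TABLE procedure_ (id INTEGER PRIMARY KEY, name TEXT, cpt TEXT);
    CREATE TABLE rating (
        condition_id INTEGER REFERENCES condition(id),
        procedure_id INTEGER REFERENCES procedure_(id),
        score INTEGER              -- appropriateness score
    );
""")
conn.execute("INSERT INTO condition VALUES (1, 'Acute head trauma', 'S06')")
conn.execute(
    "INSERT INTO procedure_ VALUES (1, 'CT head without contrast', '70450')")
conn.execute("INSERT INTO rating VALUES (1, 1, 9)")

# Decision-support style lookup: best-rated procedures for a condition.
row = conn.execute("""
    SELECT p.name, r.score FROM rating r
    JOIN procedure_ p ON p.id = r.procedure_id
    WHERE r.condition_id = 1 ORDER BY r.score DESC
""").fetchone()
```

An order-entry or decision-support system would issue exactly this kind of join against the master repository instead of parsing the text document.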
The behavior variation of concrete dams is investigated using a new method for analyzing the data model of a concrete dam in service, motivated by the limitations of the wavelet transform in solving the concrete dam service-process model. The study takes into account both the time and the position of behavior change during dam service. The method does not depend on the effect quantity, overcoming a shortcoming of traditional identification methods. A panel data model is first proposed for analyzing the behavior change of a composite concrete dam. Change-point theory is used to identify whether the behavior of the dam changes during service. The phase space reconstruction technique is used to reconstruct the phase plane of the trend effect component, and the time dimension method is used to construct the multi-transformation model of the composite panel data. An existing 76.3-m-high dam is used to investigate key issues in the behavior change, with emphasis on the conversion time and location for three time periods. The results are consistent with the practical analysis report, validating the method presented in this paper for analyzing the behavior variation of concrete dams.
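The change-point idea at the core of the method can be illustrated on a single monitoring series: find the split that minimizes the summed squared deviation from the two segment means. This is a toy least-squares version, not the paper's panel-data formulation.

```python
# Toy change-point detection on a single series: choose the split index
# minimizing within-segment sum of squared errors. Illustrative only;
# the paper works with panel data and change-point theory proper.

def change_point(series):
    """Return the index k that best splits series into two segments."""
    def sse(seg):
        if not seg:
            return 0.0
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    best_k, best_cost = 1, float("inf")
    for k in range(1, len(series)):
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Simulated monitoring data whose behavior shifts at index 10.
data = [1.0] * 10 + [4.0] * 10
k = change_point(data)
```

In the dam setting the same test is run per measuring point, which is how the method locates not only the time but also the position of a behavior change.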
ISBN (print): 9781538634837
This research aims to obtain a data model from a Business Process model. This is not an easy task, since the notation is weakly oriented toward data; however, methodologies have been created to carry out the analysis of the data and to derive a data model from a BPMN diagram. The chosen methodology comprises a series of transformation steps from the BPMN model to the data model. These steps are explained in detail throughout the document, from which new conclusions emerge; each of the assumptions made by the author of the methodology is analyzed, and suggestions are made to bring the resulting diagram closer to the reality of the Remoras Chile company.
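One typical early transformation step, collecting the data objects handled by process tasks into candidate entities, can be sketched as below. The process content is invented for illustration and is not the methodology's exact procedure or the company's diagram.

```python
# Sketch of one BPMN-to-data-model step: merge every data object
# mentioned by the process tasks into a candidate entity holding the
# union of its attributes. Process content invented for illustration.

bpmn_tasks = [
    {"task": "Register order",
     "data_objects": {"Order": ["id", "date"]}},
    {"task": "Assign client",
     "data_objects": {"Client": ["id", "name"],
                      "Order": ["client_id"]}},
]

def extract_entities(tasks):
    """Collect candidate entities and their attribute sets."""
    entities = {}
    for t in tasks:
        for name, attrs in t["data_objects"].items():
            entities.setdefault(name, set()).update(attrs)
    return entities

model = extract_entities(bpmn_tasks)
```

Later steps of such a methodology would then derive relationships (here, Order.client_id pointing at Client) and refine the entities against the real domain.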
XML (eXtensible Markup Language) has been the de facto standard for information representation and exchange over the Web. Mapping from the XML model to conceptual databases benefits XML data integration at a high level of data abstraction. Fuzziness is inherent in many real-world applications, and the fuzzy XML model has been developed for managing fuzzy data on the Web. This paper concentrates on a formal mapping from the fuzzy XML model to the Extended Entity-Relationship (EER) model. For this purpose, the paper introduces different levels of fuzziness into the EER model and presents the corresponding graphical representations, so that the EER model can represent fuzzy information. The formal mapping from the fuzzy XML model to the fuzzy EER model is then investigated. With the fuzzy XML and EER models, fuzzy information can be modeled in two kinds of data models; more importantly, the fuzzy XML model can be translated into the fuzzy EER model automatically. The formal mapping methods proposed in the paper are demonstrated with examples.
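The automatic-translation idea can be sketched for one element: an XML element whose children carry membership degrees becomes an entity with fuzzy attributes. The element names and the `Poss` attribute convention are assumptions for illustration, not the paper's formal mapping rules.

```python
# Sketch of fuzzy-XML-to-fuzzy-EER translation for a single element:
# each child becomes an attribute tagged with its membership degree.
# Element names and the "Poss" convention are illustrative assumptions.

import xml.etree.ElementTree as ET

FUZZY_XML = """
<employee>
  <age Poss="0.8">young</age>
  <dept>R&amp;D</dept>
</employee>
"""

def to_fuzzy_eer(xml_text):
    """Map an XML element to an entity: (attribute, value, membership)."""
    root = ET.fromstring(xml_text)
    entity = {"name": root.tag, "attributes": []}
    for child in root:
        mu = float(child.get("Poss", "1.0"))   # default: crisp value
        entity["attributes"].append((child.tag, child.text, mu))
    return entity

eer = to_fuzzy_eer(FUZZY_XML)
```

A full translator would also handle nested elements (sub-entities and relationships) and the other fuzziness levels the paper introduces; this shows only the attribute-level case.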