An essential step toward the effective processing of medical language is the development of representational models that formalize the language semantics. These models, also known as semantic data models, help to unlock the meaning of descriptive expressions, making them accessible to computer systems. The present study tries to determine the quality of a semantic data model created to encode chest radiology findings. The evaluation methodology relied on the ability of physicians to extract information from textual and encoded representations of chest X-ray reports while answering questions associated with each report. The evaluation demonstrated that the encoded reports appeared to have the same information content as the original textual reports. The methodology generated useful data regarding the quality of the data model, demonstrating that certain segments were creating ambiguous representations and that some details were not being represented.
Dimensions and tolerances are critical engineering design information for defining the shape requirements of manufactured parts. As technologies for new design analysis and manufacturing planning are developed, they must be seamlessly integrated into a computer-aided product development environment. A data model is an effective technique for defining the shareable semantics that are essential to the success of data communication in an integrated environment. This paper introduces a dimension and tolerance data model developed as the foundation of ISO 10303 Part 47. This model is a component of an overall product data model. The model has three major components: a dimension schema, a tolerance schema, and a datum and shape aspect schema. These schemas specify data resources and structures for describing the dimension and tolerance characteristics of products. Based on this model, descriptions of dimensions and tolerances of products can be communicated between tolerance-related software application systems.
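To make the flavor of such a schema concrete, here is a minimal Python sketch of a plus/minus tolerance attached to a nominal dimension. The class and field names are illustrative assumptions, not the EXPRESS definitions of ISO 10303 Part 47.

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    # Nominal size in millimetres; field name is an assumption for illustration.
    nominal_mm: float

@dataclass
class PlusMinusTolerance:
    # A simple plus/minus tolerance on one dimension.
    dimension: Dimension
    upper_mm: float
    lower_mm: float

    def limits(self):
        """Return the (lower, upper) limits of size in millimetres."""
        n = self.dimension.nominal_mm
        return (n + self.lower_mm, n + self.upper_mm)

# A 20 mm shaft toleranced +0.021 / +0.002 mm.
shaft = PlusMinusTolerance(Dimension(20.0), upper_mm=0.021, lower_mm=0.002)
print(shaft.limits())
```

In a shared product data model, entities like these would be exchanged between tolerance-related applications rather than re-derived from drawings.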
Various software tools implementing multiple criteria decision analysis (MCDA) methods have appeared over the last decades. Although MCDA methods share common features, most of the implementing software has been developed independently from scratch. The majority of the tools have a proprietary storage format, and exchanging data among software is cumbersome. A common data exchange standard would be useful for an analyst wanting to apply different methods to the same problem. The Decision Deck project has proposed to build components implementing MCDA methods in a reusable and interchangeable manner. A key element in this scheme is the XMCDA standard, a proposal that aims to standardize an XML encoding of common structures appearing in MCDA models, such as criteria and performance evaluations. Although XMCDA can represent most data structures for MCDA models, it almost completely lacks data integrity checks. In this paper we present a new comprehensive data model for MCDA problems, implemented as an XML schema. The data model includes types that are sufficient to represent multi-attribute value/utility models, ELECTRE III/TRI models and their stochastic extensions, and AHP. We also discuss use of the data model in algorithmic MCDA.
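As a rough illustration of the kind of XML structures involved, the following Python snippet builds an XMCDA-flavored fragment with two criteria and one performance value. The element names loosely follow the XMCDA vocabulary but are assumptions here and are not guaranteed to validate against the actual schema.

```python
import xml.etree.ElementTree as ET

# Root and a criteria list; tag and attribute names are illustrative.
root = ET.Element("XMCDA")
criteria = ET.SubElement(root, "criteria")
for cid in ("cost", "quality"):
    ET.SubElement(criteria, "criterion", {"id": cid})

# One alternative's performance on the "cost" criterion.
perf = ET.SubElement(root, "performanceTable")
alt = ET.SubElement(perf, "alternativePerformances")
ET.SubElement(alt, "alternativeID").text = "a1"
value = ET.SubElement(alt, "performance", {"criterionID": "cost"})
ET.SubElement(value, "real").text = "12.5"

xml_str = ET.tostring(root, encoding="unicode")
print(xml_str)
```

An XML schema with integrity checks, as the paper proposes, could then reject fragments whose `criterionID` does not refer to a declared criterion, which a plain encoding like this cannot do.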
Background: Most healthcare data sources store information within their own unique schemas, making reliable and reproducible research challenging. Consequently, researchers have adopted various data models to improve the efficiency of research. Transforming and loading data into these models is a labor-intensive process that can alter the semantics of the original data. Therefore, we created a data model with a hierarchical structure that simplifies the transformation process and minimizes data loss. Methods: There were two design goals in constructing the tables and table relationships for the Generalized Data Model (GDM). The first was to focus on clinical codes in their original vocabularies to retain the original semantic representation of the data. The second was to retain hierarchical information present in the original data while retaining provenance. The model was tested by transforming synthetic Medicare data; Surveillance, Epidemiology, and End Results data linked to Medicare claims; and electronic health records from the Clinical Practice Research Datalink. We also tested a subsequent transformation from the GDM into the Sentinel data model. Results: The resulting data model contains 19 tables, with the Clinical Codes, Contexts, and Collections tables serving as the core of the model and containing most of the clinical, provenance, and hierarchical information. In addition, a Mapping table allows users to apply an arbitrarily complex set of relationships among vocabulary elements to facilitate automated transformation. Conclusions: The GDM offers researchers a simpler process for transforming data, clear data provenance, and a path for users to transform their data into other data models. The GDM is designed to retain hierarchical relationships among data elements as well as the original semantic representation of the data, ensuring consistency in protocol implementation as part of a complete data pipeline for researchers.
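A minimal sketch of how the core GDM tables might fit together, using an in-memory SQLite database. The column names are illustrative assumptions, not the published schema; the query shows how a Mapping row could drive an automated vocabulary translation while the clinical code stays in its original vocabulary.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE collections (id INTEGER PRIMARY KEY, person_id INTEGER,
    start_date TEXT, end_date TEXT);
CREATE TABLE contexts (id INTEGER PRIMARY KEY,
    collection_id INTEGER REFERENCES collections(id),
    care_setting TEXT);
CREATE TABLE clinical_codes (id INTEGER PRIMARY KEY,
    context_id INTEGER REFERENCES contexts(id),
    vocabulary TEXT, code TEXT);  -- codes kept in their original vocabulary
CREATE TABLE mappings (source_vocabulary TEXT, source_code TEXT,
    target_vocabulary TEXT, target_code TEXT, relationship TEXT);
""")

# One inpatient stay whose diagnosis is recorded as an ICD-10-CM code,
# plus a mapping row an automated step could apply later.
conn.execute("INSERT INTO collections VALUES (1, 42, '2020-01-01', '2020-01-03')")
conn.execute("INSERT INTO contexts VALUES (1, 1, 'inpatient')")
conn.execute("INSERT INTO clinical_codes VALUES (1, 1, 'ICD-10-CM', 'E11.9')")
conn.execute("INSERT INTO mappings VALUES "
             "('ICD-10-CM', 'E11.9', 'SNOMED', '44054006', 'maps to')")

row = conn.execute("""
    SELECT m.target_code FROM clinical_codes c
    JOIN mappings m ON m.source_vocabulary = c.vocabulary
                   AND m.source_code = c.code
""").fetchone()
print(row[0])
```

Keeping the original code and applying mappings at query time, rather than rewriting codes during load, is one way to preserve the source semantics the abstract emphasizes.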
This paper proposes a tetrahedral data model for unstructured data management. The model defines the four components of unstructured data on its four facets: basic attributes, semantic characteristics, low-level features, and raw data, together with the relations between these components. The internal implementation structure of the model and its data query language are designed and briefly introduced. The model provides a unified, integrated, and associated description for different kinds of unstructured data, and supports intelligent data services such as associated retrieval and data mining. An example demonstrates how to use the model to describe and manipulate data from a sample video base.
Previous work on database integration focused mainly on the creation of transformation interfaces between incompatible databases built up by different departments of the Taiwan Environmental Protection Administration, ignoring the demand for systematically and flexibly integrated information for advanced pollution control of point sources. To provide a systematic framework for flexible integration of distributed data, this paper presents a general model, the systematic object event data model, based on systems thinking, to improve the integration capability of databases. A conceptual database framework for integrated pollution control was proposed as a result of applying the systematic object event data model. The fundamental part of the model, creation of the object registry database, has been in practice since 2008. The object database of pollution sources became available to factories in 2009; it helps factories create, and the Taiwan Environmental Protection Administration maintain, consistent object data through electronic permit application processes. Constructing various event databases that systematically connect to the object database is the next step toward providing systematic information more efficiently through systematic data integration.
Recently published clinical trial data have produced compelling evidence for increased survival when Herceptin is administered to patients whose tumors are HER2 amplified. Therefore, the accuracy of HER2 status is essential to determine which patients should or should not receive Herceptin. Although HER2 results obtained by FISH and IHC are often in agreement, there is a persistent group of cases in which results are discordant, particularly among tumors with intermediate results. A multivariable analysis was undertaken to determine relative significance of various clinical and pathologic findings for patients diagnosed with infiltrating ductal carcinoma, and a data model was produced that predicts which patients are most likely to have HER2 amplified tumors. Correlates of HER2 amplification were higher Scarff-Bloom-Richardson grade, younger age at diagnosis, and a comedo ductal carcinoma in situ component. (c) 2006 Elsevier Ltd. All rights reserved.
To achieve Digital Transformation, companies are required to create new value and deploy solutions by using multi-field data, not just data from one domain. In recent years, data sharing platforms such as FIWARE and G...
Background: Data models are crucial for clinical research as they enable researchers to fully use the vast amount of clinical data stored in medical systems. Standardized data and well-defined relationships between data points are necessary to guarantee semantic interoperability. Using the Fast Healthcare Interoperability Resources (FHIR) standard for clinical data representation would be a practical methodology to enhance and accelerate interoperability and data availability for research. Objective: This research aims to provide a comprehensive overview of the state of the art and current landscape in FHIR-based data models and structures. In addition, we intend to identify and discuss the tools, resources, limitations, and other critical aspects mentioned in the selected research papers. Methods: To ensure the extraction of reliable results, we followed the instructions of the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist. We analyzed the indexed articles in PubMed, Scopus, Web of Science, IEEE Xplore, the ACM Digital Library, and Google Scholar. After identifying, extracting, and assessing the quality and relevance of the articles, we synthesized the extracted data to identify common patterns, themes, and variations in the use of FHIR-based data models and structures across different studies. Results: On the basis of the reviewed articles, we could identify 2 main themes: dynamic (pipeline-based) and static data models. The articles were also categorized into health care use cases, including chronic diseases, COVID-19 and infectious diseases, cancer research, acute or intensive care, random and general medical notes, and other conditions. Furthermore, we summarized the important or common tools and approaches of the selected papers. These items included FHIR-based tools and frameworks, machine learning approaches, and data storage and security. The most common resource was "Observation" fol
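For readers unfamiliar with FHIR, a minimal "Observation" resource looks roughly like the following plain-Python dict. The LOINC code and values here are illustrative; a real pipeline would validate resources against the official FHIR StructureDefinition.

```python
# A minimal FHIR R4 Observation for a heart-rate measurement.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",          # LOINC: Heart rate
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example"},
    "valueQuantity": {"value": 72, "unit": "beats/minute"},
}
print(observation["resourceType"], observation["valueQuantity"]["value"])
```

The well-defined `code`/`valueQuantity` pairing is what gives FHIR-based models the semantic interoperability the abstract highlights.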
The Universal Hydrographic Data Model is a new-generation marine geographic information data model, referred to as the S-100 standard. In addition to supporting Electronic Navigational Charts (ENC) production, S-100 can be used as a Geographic Information System standard to model all maritime objects. Therefore, it is necessary to carry out a comprehensive study on S-100. In this review, we begin with a succinct overview of the history and development of S-100. We then discuss the problems of S-100 and the corresponding solutions. Finally, some research directions for S-100 are proposed to provide a reference for future research and application.