Marker gene amplicon sequencing is often preferred over whole genome sequencing for microbial community characterization, due to its lower cost while still enabling assessment of uncultivable organisms. This technique involves many experimental steps, each of which can be a source of errors and bias. We present an up-to-date overview of the whole experimental pipeline, from sampling to sequencing reads, and provide information that supports informed choices at each step of both the planning and the execution of a microbial community assessment study. Where applicable, we also suggest ways of avoiding inherent pitfalls in amplicon sequencing.
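To make one of these planning choices concrete, the following is a minimal Python sketch of an in-silico primer check: it tests how well a degenerate 16S rRNA primer (here the commonly used 515F sequence, given only as an example) matches candidate reference sequences while tolerating a limited number of mismatches. The reference sequences and the mismatch threshold are placeholders, not recommendations from the review.

```python
# Illustrative sketch: in-silico check of how well a degenerate primer matches
# a set of candidate reference sequences, one of the planning choices discussed
# in the review. Primer and sequences here are placeholders, not recommendations.

# IUPAC degenerate base codes mapped to the bases they allow
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def primer_matches(primer: str, target: str, max_mismatches: int = 1) -> bool:
    """Return True if the primer anneals anywhere on the target with at most
    `max_mismatches` mismatches, honouring degenerate bases in the primer."""
    p, t = primer.upper(), target.upper()
    for start in range(len(t) - len(p) + 1):
        mismatches = sum(
            t[start + i] not in IUPAC.get(base, "")
            for i, base in enumerate(p)
        )
        if mismatches <= max_mismatches:
            return True
    return False

if __name__ == "__main__":
    primer_515f = "GTGYCAGCMGCCGCGGTAA"  # widely used 16S V4 forward primer
    references = {                        # hypothetical reference fragments
        "ref_1": "AAGTGTGCCAGCAGCCGCGGTAATACGTAGG",
        "ref_2": "AAGTGTGACAGCAGCCTCGGTAATACGTAGG",
    }
    for name, seq in references.items():
        hit = primer_matches(primer_515f, seq, max_mismatches=1)
        print(f"{name}: {'match' if hit else 'no match'}")
```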
Distributed hybrid simulation is an approach to large-scale testing in which the system under test is split into several sub-structures that are tested or simulated in different locations. Data are passed between the sub-structures at each timestep to ensure that the distributed experiment realistically simulates the full system under test. This approach optimises the use of resources at different locations to achieve a more representative experiment. While different software packages for conducting distributed simulations exist, there are no standards or specifications for organising and planning the experiments, and as a result the different systems lack interoperability. To address these issues, we have developed a high-level specification called Celestina, which provides a framework for conducting a distributed experiment. Celestina specifies the services to be implemented, under the three main headings of networking, definition, and execution, and supports the data exchange during a simulation. It does not force any particular implementation or method of data exchange. This paper summarises the Celestina specification and describes one implementation. Lastly, a validation experiment is presented, involving distributed numerical simulations of an earlier local hybrid experiment, in which Celestina controlled the experiment planning and data exchange effectively and with minimal computational overhead.
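As an illustration of the per-timestep data exchange that such a specification has to support, here is a minimal Python sketch of a coupling loop between a numerical sub-structure and a simulated remote sub-structure over a local TCP socket. It is not the Celestina specification or its implementation; the message fields, the spring model, and the integration parameters are all hypothetical.

```python
# Hypothetical sketch of per-timestep data exchange between two sub-structures,
# in the spirit of distributed hybrid simulation. This is NOT Celestina itself,
# only an illustration of the coupling loop such a specification standardises.
# One thread plays a remote "experimental" sub-structure (here a simulated
# linear spring); the main thread runs the numerical sub-structure and drives
# the exchange over a local TCP socket.

import json
import socket
import threading

HOST, PORT = "127.0.0.1", 50007
SPRING_STIFFNESS = 2.0e3   # N/m, stands in for the remotely tested sub-structure
ready = threading.Event()  # signals that the server socket is listening


def remote_substructure():
    """Serve force responses: receive a displacement, reply with a force."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:
                msg = json.loads(line)
                if msg.get("cmd") == "stop":
                    break
                force = -SPRING_STIFFNESS * msg["displacement"]
                stream.write(json.dumps({"force": force}) + "\n")
                stream.flush()


def run_numerical_substructure(n_steps=5, dt=0.01, mass=10.0):
    """Central-difference integration of a 1-DOF mass coupled to the remote
    spring; one request/response round trip per timestep."""
    ready.wait()
    with socket.create_connection((HOST, PORT)) as sock:
        with sock.makefile("rw") as stream:
            u_prev, u = 0.0, 0.001      # initial displacement perturbation (m)
            for step in range(n_steps):
                stream.write(json.dumps({"cmd": "step", "displacement": u}) + "\n")
                stream.flush()
                force = json.loads(stream.readline())["force"]
                u_next = 2 * u - u_prev + (dt ** 2 / mass) * force
                print(f"step {step}: u = {u:.6f} m, force = {force:.1f} N")
                u_prev, u = u, u_next
            stream.write(json.dumps({"cmd": "stop"}) + "\n")
            stream.flush()


if __name__ == "__main__":
    threading.Thread(target=remote_substructure, daemon=True).start()
    run_numerical_substructure()
```

In a genuinely distributed setting, the same request/response pattern would run between machines at different sites, with the transport and message format fixed by a shared specification rather than hard-coded as in this sketch.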
Authors: Schneikart, Gerald; Mayrhofer, Walter
Affiliations: Tech Univ Wien, Inst Management Sci, Res Ctr Human Centered Cyber Phys Prod & Assembly, Theresianumgasse 27, A-1040 Vienna, Austria; FHWien WKW, Inst Digital Transformat & Strategy, Dept Digital Econ, Währinger Gürtel 97, A-1180 Vienna, Austria
Biomedical research is a prominent case of knowledge work, often driven by data and information. A major limiting factor in biomedical research is access to information when and where it is needed, namely on the job. Biomedical research is highly data-driven, and even the amount of data a single researcher generates can be overwhelming. Consequently, researchers are prone to developing information overload, a psychological state that hampers creative thinking. To facilitate optimal innovation strategies, research organizations are advised to implement assistance systems that provide digital data management in experimental laboratories directly at the work bench. Assistance systems have the potential to improve efficiency, quality, and reliability at the same time, while supporting researchers with the "dull side" of keeping records and entering data. This article provides a detailed technical overview of recent innovative solutions to the specific problems of experimental work in biomedical research listed below: 1) automation of standardized, repetitive methodological routines; 2) establishment of ubiquitous computing environments to facilitate access to and storage of digital information at various locations in wet labs; 3) replacement of paper-bound notebooks with electronic laboratory notebooks, which are enterprise software applications; 4) integration of office and lab work space into single lab benches with tabletop systems; 5) electronic guidance through complex pipetting experiments, which are automatically recorded; 6) helping researchers remain focused on hands-on activities with augmented reality provided by smart glasses; and 7) voice assistance as a tool to keep hands free, in order to improve processes and increase efficiency. Since none of the reviewed innovations have yet become mainstream in research organizations, they are identified as disruptive technologies. This article will give a broad overview of those technologies.
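As a concrete, if simplified, illustration of points 3) and 5), the following Python sketch shows how an assistance system might capture a single pipetting step as a structured electronic-lab-notebook record rather than a handwritten note. The record schema and field names are hypothetical and not tied to any product discussed in the article.

```python
# Illustrative sketch only: one way an assistance system might capture a
# pipetting step as a structured, machine-readable electronic-lab-notebook
# record instead of a handwritten note. The schema below is hypothetical.

import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class PipettingStep:
    experiment_id: str
    reagent: str
    volume_ul: float
    source_well: str
    target_well: str
    operator: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


if __name__ == "__main__":
    step = PipettingStep(
        experiment_id="EXP-0042",
        reagent="Taq polymerase mix",
        volume_ul=12.5,
        source_well="A1",
        target_well="B3",
        operator="researcher_01",
    )
    # In practice this record would be sent to an ELN via its API or queued
    # by a voice/AR assistant; here we simply serialise it for inspection.
    print(json.dumps(asdict(step), indent=2))
```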
Phenomics has emerged as the technology of choice for understanding quantitative genetic variation in plant physiology and plant breeding. Phenomics has allowed for unmatched precision in exploring plant life cycles and physiological patterns. As new technologies are developed, it remains vital to follow best practices in experimental design and planning in order to fully exploit the results. Here we describe the basic – but sometimes overlooked – considerations of a phenomics experiment to help you maximize the value of the data collected: choosing population and location, accounting for sources of variation, establishing a timeline, and leveraging ground-truth measurements.
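As one concrete example of accounting for sources of variation, the sketch below generates a randomised complete block design (RCBD) layout in Python, a standard way to spread genotypes across known spatial gradients before phenotyping begins. The genotype names and block count are placeholders, not values suggested by the article.

```python
# Minimal sketch, assuming a field/greenhouse phenomics trial: a randomised
# complete block design (RCBD) is one standard way to account for known
# sources of spatial variation before imaging begins.

import random

def rcbd_layout(genotypes, n_blocks, seed=42):
    """Return a dict mapping each block to an independently randomised
    ordering of all genotypes (each genotype appears once per block)."""
    rng = random.Random(seed)          # fixed seed so the layout is reproducible
    layout = {}
    for block in range(1, n_blocks + 1):
        order = genotypes[:]           # copy, then shuffle within the block
        rng.shuffle(order)
        layout[f"block_{block}"] = order
    return layout


if __name__ == "__main__":
    genotypes = ["G01", "G02", "G03", "G04", "G05", "G06"]  # placeholder names
    for block, order in rcbd_layout(genotypes, n_blocks=3).items():
        print(block, "->", " | ".join(order))
```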
Thanks to innovative sample-preparation and sequencing technologies, gene expression can now be measured in thousands of individual cells in a single experiment. Since its introduction, single-cell RNA sequencing (scRNA-seq) has revolutionized the genomics field by creating unprecedented opportunities for resolving cell heterogeneity through gene expression profiles at single-cell resolution. However, the rapidly evolving field of scRNA-seq has driven the emergence of a variety of analytical approaches aimed at maximizing the full potential of this strategy. Unlike population-based RNA sequencing, scRNA-seq requires comprehensive computational tools to address the high complexity of the data and keep up with emerging single-cell-specific challenges. Despite the vast number of analytical methods, universal standardization is lacking. While this reflects the field's immaturity, it can also make it difficult for newcomers to enter the field. In this review, we aim to bridge this hurdle and propose four ready-to-use pipelines for scRNA-seq analysis that are easily accessible to newcomers and fit various types of biological data. We provide an overview of currently available single-cell technologies for cell isolation and library preparation, together with a step-by-step guide covering the entire canonical analytic workflow for scRNA-seq data, including read mapping, quality control, gene expression quantification, normalization, feature selection, dimensionality reduction, and cell clustering useful for trajectory inference and differential expression. Such workflow guidelines will guide novices as well as expert users through the analysis of complex scRNA-seq datasets, thus further expanding the research potential of single-cell approaches in basic science and supporting their future adoption as best practice in the field.
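As an illustration of this canonical workflow (not one of the four pipelines proposed by the authors), the following Python sketch uses Scanpy, one common toolkit, to run the downstream steps on a 10x-style count matrix. Read mapping and quantification are assumed to have been done upstream, and the input path and all thresholds are placeholders.

```python
# Minimal sketch of the canonical scRNA-seq workflow described above, using
# Scanpy as one common Python option. Input path and thresholds are placeholders.

import scanpy as sc

# Load a 10x-style filtered count matrix (hypothetical path)
adata = sc.read_10x_mtx("data/filtered_feature_bc_matrix/")

# Quality control: drop low-complexity cells and rarely detected genes
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Normalisation and log transform
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Feature selection: keep highly variable genes
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()

# Dimensionality reduction, neighbourhood graph, and clustering
# (sc.tl.leiden additionally requires the leidenalg package)
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=50)
sc.tl.leiden(adata, resolution=1.0)
sc.tl.umap(adata)

# Marker genes per cluster as a starting point for differential expression
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")
sc.pl.umap(adata, color="leiden")
```

Each threshold here (minimum genes per cell, number of variable genes, clustering resolution) typically needs dataset-specific tuning, which is exactly the kind of choice that ready-to-use pipelines aim to standardise.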