A breakthrough in building models for image processing came with the discovery that a convolutional neural network (CNN) can progressively extract higher-level representations of image content. Having high-resolution images on which to train CNN models is key to optimizing the performance of image segmentation models. This paper presents a new dataset, called the Flood Image (FloodIMG) database system, developed for flood-related image processing and segmentation. We developed various Internet of Things application programming interfaces (IoT APIs) to gather flood-related images from Twitter and from the web servers of US federal agencies, such as the US Geological Survey (USGS) and the Department of Transportation (DOT). Overall, more than 9,200 images of flooding events were collected, preprocessed, and formatted to make the dataset suitable for CNN training. Bounding boxes and polygon primitives were also labeled on each image to localize and classify objects in the image. Two use cases of FloodIMG are presented in this paper, in which the Fast Region-based CNN (R-CNN) algorithm was used to estimate flood severity and depth during recent flooding events in the US. Of the more than 9,200 images, 7,400 were assigned to the training set, while more than 1,800 images were used for R-CNN testing. Users can access the FloodIMG database freely through the Kaggle platform to create more accessible, accurate, and optimized image segmentation models. The FloodIMG workflow concludes with a visualization of colors and labels per image that can serve as a benchmark for flood image processing and segmentation.

(c) 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license ( http://***/licenses/by/4.0/ )
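The abstract describes per-image bounding-box annotations and a roughly 7,400/1,800 train/test partition of the >9,200 collected images. The following is a minimal sketch of how such a dataset might be partitioned for R-CNN experiments; the file naming scheme, annotation fields, and 80/20 split ratio are illustrative assumptions, not the actual FloodIMG schema published on Kaggle.

```python
# Minimal sketch of preparing a FloodIMG-style dataset for R-CNN training.
# Image IDs, annotation fields, and the 80/20 split ratio are assumptions
# for illustration; the real FloodIMG schema on Kaggle may differ.
import random


def split_dataset(image_ids, train_fraction=0.8, seed=42):
    """Shuffle image IDs deterministically and split into train/test lists."""
    rng = random.Random(seed)
    ids = list(image_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * train_fraction)
    return ids[:cut], ids[cut:]


# One annotation of the kind the abstract describes: bounding boxes (and,
# in the real dataset, polygon primitives) localize and classify objects.
example_annotation = {
    "image_id": "flood_000123",                       # hypothetical ID format
    "boxes": [
        {"label": "flooded_road", "bbox": [34, 120, 410, 388]},  # x1, y1, x2, y2
    ],
}

if __name__ == "__main__":
    all_ids = [f"flood_{i:06d}" for i in range(9200)]
    train_ids, test_ids = split_dataset(all_ids)
    print(len(train_ids), len(test_ids))
```

An 80% split of 9,200 images yields 7,360 training and 1,840 test images, close to the 7,400/1,800 partition reported in the abstract; seeding the shuffle keeps the partition reproducible across runs.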
Authors:
Kazuki Nakamae, Hidemasa Bono
Laboratory of BioDX, PtBio Co-Creation Research Center, Genome Editing Innovation Center, Hiroshima University, 3-10-23 Kagamiyama, Higashi-Hiroshima 739-0046, Japan
Laboratory of Genome Informatics, Graduate School of Integrated Sciences for Life, Hiroshima University, 3-10-23 Kagamiyama, Higashi-Hiroshima 739-0046, Japan
Bioinformatics has become an indispensable technology in molecular biology for genome editing. In this review, we outline the bioinformatic techniques necessary for genome editing research. We first review state-of-the-art computational tools developed for genome editing studies. We then introduce a bio-digital transformation (BioDX) approach, which fully utilizes existing databases for biological innovation, and uses publicly available bibliographic full-text data and transcriptome data to survey genome editing target genes in model organism species, where substantial genomic information and annotation are readily available. We also discuss genome editing attempts in species with almost no genomic information. The transcriptome data, sequenced genomes, and functional annotations for these species are described, with a primary focus on the bioinformatic tools used for these analyses. Finally, we conclude by noting the need to maintain a database of genome editing resources for the future development of genome editing research. Our review shows that the integration and maintenance of useful resources remains a challenge for bioinformatics research in genome editing, and that it is crucial for the research community to work together to create and maintain such databases in the future.