We introduce an open-source, web-based application programming interface (API), developed within a representational state transfer (REST) architectural framework, that provides access to operational streamflow forecasts from the U.S. National Water Model (NWM). We built this API on Google Cloud infrastructure, taking advantage of Google's API Gateway, BigQuery, and Cloud Run services. We transformed the data from netCDF to a tabular format and then ingested it into BigQuery using Apache Beam parallel-processing technology. The API invokes functions deployed to Cloud Run that execute SQL queries against the BigQuery data warehouse. The API gives users granular control to specify queries based on forecast type, reference datetime, stream segment, and forecast ensemble member. This API greatly simplifies access to and use of current and historical NWM forecasts, an otherwise arduous task due to the large volume of data and unwieldy storage methods. Retrieved forecast data can be used in many critical water management applications, providing actionable intelligence for, e.g., flood management, safety, recreation, agriculture, and related uses.
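A client for a REST API of this kind could be sketched as follows. The base URL and the parameter names (forecast_type, reference_time, feature_id, ensemble_member) are assumptions for illustration, not the service's documented interface:

```python
import urllib.parse

# Hypothetical endpoint -- the actual service URL is not given in the abstract.
BASE_URL = "https://example-nwm-api.example.com/forecasts"

def build_forecast_query(forecast_type, reference_time, feature_id,
                         ensemble_member=None):
    """Assemble a query URL for a hypothetical NWM forecast REST endpoint.

    All parameter names here are illustrative assumptions, not the API's
    documented fields.
    """
    params = {
        "forecast_type": forecast_type,    # e.g. "short_range", "medium_range"
        "reference_time": reference_time,  # ISO-8601 reference datetime
        "feature_id": feature_id,          # stream segment identifier
    }
    if ensemble_member is not None:
        params["ensemble_member"] = ensemble_member
    return BASE_URL + "?" + urllib.parse.urlencode(params)

url = build_forecast_query("medium_range", "2023-06-01T00:00:00Z", 101,
                           ensemble_member=3)
print(url)
```

The query-string design mirrors the granular controls the abstract describes: each filter (forecast type, reference datetime, segment, ensemble member) maps to one URL parameter, with the ensemble member optional for deterministic forecast products.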
Keeping computing standards abreast of the pace of change in software and hardware development requires attention to the methods by which such innovations are created, adopted, updated, and socialized within their target user communities. Significant changes are taking place in practice, led by the work of individual contributors as well as formal and informal organizations. This column discusses some recent developments in application programming interfaces (APIs) and the factors that affect them, and explores new trends emerging from both standards organizations and software companies.
This column completes a two-part exploration into features of application programming interfaces (APIs) that are useful in clouds. The discussion contrasts APIs with other types of interfaces and describes variations on protocols and calling methods, giving examples from physical hardware control to illustrate important features of cloud API design.
Background: Meaningful exchange of microarray data is currently difficult because published data rarely provide sufficient information depth or even share a common format from one publication to another. Only when data can be easily exchanged will the entire biological community be able to derive the full benefit from such microarray studies. Results: To this end we have developed three key ingredients towards standardizing the storage and exchange of microarray data. First, we have created a conceptualization of microarray experiments, compliant with the Minimum Information About a Microarray Experiment (MIAME) guidelines and modeled using the Unified Modeling Language (UML), named MAGE-OM (MicroArray Gene Expression Object Model). Second, we have translated MAGE-OM into an XML-based data format, MAGE-ML, to facilitate the exchange of data. Third, some of us are now using MAGE (or its progenitors) in data-production settings. Finally, we have developed a freely available software toolkit (MAGE-STK) that eases the integration of MAGE-ML into end users' systems. Conclusions: MAGE will help microarray data producers and users exchange information by providing a common platform for data exchange, and MAGE-STK will make the adoption of MAGE easier.
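Because MAGE-ML is XML, consuming it amounts to ordinary XML parsing. The sketch below reads a simplified, MAGE-ML-flavoured fragment with Python's standard library; the element and attribute names used here are illustrative assumptions, not the actual MAGE-ML schema:

```python
import xml.etree.ElementTree as ET

# A simplified fragment in the spirit of MAGE-ML. Element and attribute
# names (Experiment, BioAssays, BioAssay, identifier, name) are
# illustrative assumptions, not the real MAGE-ML document structure.
fragment = """
<Experiment identifier="E-0001" name="Heat-shock time course">
  <BioAssays>
    <BioAssay identifier="BA-1" name="t=0min"/>
    <BioAssay identifier="BA-2" name="t=30min"/>
  </BioAssays>
</Experiment>
"""

root = ET.fromstring(fragment)
# Collect (identifier, name) pairs for every assay in the experiment.
assays = [(ba.get("identifier"), ba.get("name"))
          for ba in root.iter("BioAssay")]
print(assays)  # [('BA-1', 't=0min'), ('BA-2', 't=30min')]
```

A toolkit such as MAGE-STK wraps this kind of traversal in an object model so that end users work with experiment and assay objects rather than raw XML nodes.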
Background: The high-density oligonucleotide microarray (GeneChip) is an important tool for molecular biological research aimed at large-scale detection of single-nucleotide polymorphisms in DNA and genome-wide analysis of mRNA concentrations. Local array-data management solutions are instrumental for efficient processing of results and for subsequent uploading of data and annotations to a certified global data repository at the EBI (ArrayExpress) or the NCBI (Gene Expression Omnibus). Description: To facilitate and accelerate the annotation of high-throughput expression-profiling experiments, the Microarray Information Management and Annotation System (MIMAS) was developed. The system is fully compliant with the Minimal Information About a Microarray Experiment (MIAME) convention. MIMAS provides life scientists with a highly flexible and focused GeneChip data storage and annotation platform, essential for subsequent analysis and interpretation of experimental results with clustering and mining tools. The system software can be downloaded for academic use upon request. Conclusion: MIMAS implements a novel concept for nation-wide GeneChip data management in which a network of facilities is centered on one data node directly connected to the European certified public microarray data repository located at the EBI. The proposed solution may serve as a prototype approach to array data management among research institutes organized in a consortium.
Background: Large-scale volumetric biomedical image data of three or more dimensions pose a significant challenge for distributed browsing and visualisation. Many images now exceed 10 GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse very large volume data within a standard web browser. The system provides interactive visualisation for grey-level and colour 3D images, including multiple image layers and spatial-data overlays. Results: The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol along with a series of Ajax/JavaScript clients that run in an Internet browser. We have tested the server software on a low-cost Linux-based server with image volumes up to 135 GB and 64 simultaneous users. Section views are delivered with response times independent of scale and orientation. The exemplar client provides multi-layer image views with user-controlled colour filtering and overlays. Conclusions: Interactive browsing of arbitrary sections through large biomedical image volumes is made possible by use of an extended Internet protocol and efficient server-based image tiling. These tools open the possibility of fast access to large image archives without requiring whole-image downloads or client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135 GB for a single image volume.
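An IIP-style tile request is a plain HTTP GET whose query string names the image, the tile, and (under IIP3D) the section plane. The sketch below assembles such a URL; FIF and JTL follow standard IIP conventions, while the server location and the section-plane parameters (PIT, YAW, DST) are assumptions for illustration rather than the exact IIP3D syntax:

```python
import urllib.parse

# Hypothetical server location; a real deployment would expose its own
# FastCGI endpoint.
SERVER = "https://example.org/fcgi-bin/iipsrv.fcgi"

def tile_request(image, tile_index, resolution=0,
                 pitch=0.0, yaw=0.0, distance=0):
    """Build an IIP-style tile request URL.

    FIF (image path) and JTL (resolution level, tile index) are standard
    IIP parameters. The section-plane parameters PIT/YAW/DST follow the
    spirit of the IIP3D extension but are illustrative assumptions.
    """
    params = [
        ("FIF", image),               # image volume on the server
        ("PIT", pitch),               # section pitch angle (assumed name)
        ("YAW", yaw),                 # section yaw angle (assumed name)
        ("DST", distance),            # distance along the normal (assumed name)
        ("JTL", f"{resolution},{tile_index}"),  # JPEG tile at this level
    ]
    return SERVER + "?" + urllib.parse.urlencode(params)

print(tile_request("embryo.wlz", 5, resolution=2, pitch=30.0))
```

Because each tile is an independent, cacheable GET, a browser client can fetch only the tiles visible in the current viewport, which is what keeps response times independent of overall volume size.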
The Envoy Framework addresses the need for computer-based assistants, or agents, that operate in conjunction with users' existing applications, helping them perform tedious, repetitive, or time-consuming tasks more easily and efficiently. Envoys carry out missions for users by invoking envoy-aware applications called operatives, and inform users of mission results via envoy-aware applications called informers. The distributed, open architecture developed for Envoys is derived from an analysis of the best characteristics of existing agent systems. The architecture has been designed as a model for how agent technology can be seamlessly integrated into the electronic desktop. It defines a set of application programmer's interfaces so that developers may convert their software into envoy-aware applications. A subset of the architecture described in this paper has been implemented in an Envoy Framework prototype.
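The operative/informer split can be sketched as a pair of interfaces with a dispatcher between them. All class and method names below are invented for illustration; the actual Envoy Framework APIs are not reproduced here:

```python
class Operative:
    """An envoy-aware application that can carry out a mission (assumed interface)."""
    def execute(self, mission):
        raise NotImplementedError

class Informer:
    """An envoy-aware application that reports mission results (assumed interface)."""
    def report(self, result):
        raise NotImplementedError

class Envoy:
    """Dispatches a mission to an operative and routes the result to informers."""
    def __init__(self, operative, informers):
        self.operative = operative
        self.informers = informers

    def run(self, mission):
        result = self.operative.execute(mission)
        for informer in self.informers:
            informer.report(result)
        return result

# Toy implementations to demonstrate the flow.
class SearchOperative(Operative):
    def execute(self, mission):
        return f"completed: {mission}"

class ConsoleInformer(Informer):
    def report(self, result):
        print(result)

envoy = Envoy(SearchOperative(), [ConsoleInformer()])
envoy.run("find new postings")  # prints "completed: find new postings"
```

The point of the split is that any application implementing the operative interface can be driven by an envoy, and any application implementing the informer interface can present results, without either side knowing about the other.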