High administrative costs and associated operational inefficiencies present multiple challenges for pension industries across different countries. Although blockchain is considered a potential solution for these concerns, the pension industry has not fully leveraged this disruptive technology. This paper provides a blueprint for a blockchain-based end-to-end digital transformation of the pension industry. First, we identified the prerequisites for blockchain adoption in pensions. Then, we developed an architectural design of blockchain-based redesigned pension business processes. Subsequently, we elaborated on how smart contracts can make pension transactions in such redesigned processes optimized, automated, and error-free. Finally, we presented how blockchain can be integrated with existing pension IT systems, using an API layer, to enable seamless onboarding of all pension participants onto a single digital platform - a critical requirement for any blockchain implementation. We concluded with an elucidation of the potential of such a blockchain-based digital transformation of the pension industry to reduce turnaround time, lower operating expenses, and facilitate the achievement of other pension reform agendas. This architecture of a blockchain-enabled pension network represents a flexible and scalable knowledge construct that can act as a foundation for further investigation by pension regulators or pension industry participants interested in achieving technology-driven pension operations reforms.
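As a rough sketch of the API-layer integration the abstract describes, the following Python snippet shows how a legacy pension system could submit a transaction to a smart contract through web3.py. It is illustrative only: the node URL, contract address, ABI file, and the recordContribution() function are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: an API-layer function a legacy pension system could call
# to write a contribution onto a blockchain; all names below are assumptions.
import json
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))      # assumed node endpoint

with open("pension_contract_abi.json") as f:               # ABI saved at deploy time
    abi = json.load(f)

contract = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000001"),
    abi=abi,                                               # placeholder address
)

def record_contribution(member_id: str, amount_cents: int, employer: str) -> str:
    """Submit a pension contribution; recordContribution() is a hypothetical
    smart-contract function standing in for the redesigned business process."""
    tx_hash = contract.functions.recordContribution(member_id, amount_cents).transact(
        {"from": employer}
    )
    receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
    return receipt.transactionHash.hex()
```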
Biomedical information mining is increasingly recognized as a promising technique to accelerate drug discovery and development. In particular, integrative approaches that mine data from several (open) data sources have become more attractive with the increasing possibilities to programmatically access data through application programming interfaces (APIs). The use of open data in conjunction with free, platform-independent analytic tools provides the additional advantages of flexibility, re-usability, and transparency. Here, we present a strategy for performing ligand-based in silico drug repurposing with the analytics platform KNIME. We demonstrate the usefulness of the developed workflow on the basis of two different use cases: a rare disease (here: Glucose Transporter Type 1 (GLUT-1) deficiency) and a new disease (here: COVID-19). The workflow includes a targeted download of data through web services, data curation, detection of enriched structural patterns, as well as substructure searches in DrugBank and a recently deposited data set of antiviral drugs provided by Chemical Abstracts Service. The developed workflows, tutorials with detailed step-by-step instructions, and the information gained by the analysis of data for GLUT-1 deficiency syndrome and COVID-19 are made freely available to the scientific community. The provided framework can be reused by researchers for other in silico drug repurposing projects, and it should serve as a valuable teaching resource for conveying integrative data mining strategies.
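The paper implements its workflows in KNIME; purely to illustrate the same integrative pattern (programmatic download, curation, substructure search) in code, the Python sketch below queries the open ChEMBL web services and applies an RDKit SMARTS filter. The query parameters and the sulfonamide pattern are illustrative choices, not taken from the paper.

```python
# Illustrative Python version of the pattern: API download -> curation ->
# substructure search. The filter values and SMARTS pattern are examples only.
import requests
from rdkit import Chem

# Targeted download through a web service (approved drugs from ChEMBL).
url = "https://www.ebi.ac.uk/chembl/api/data/molecule.json"
resp = requests.get(url, params={"max_phase": 4, "limit": 20}, timeout=30)
records = resp.json()["molecules"]

# Curation: keep only records with a parseable canonical SMILES.
mols = []
for rec in records:
    smiles = (rec.get("molecule_structures") or {}).get("canonical_smiles")
    mol = Chem.MolFromSmiles(smiles) if smiles else None
    if mol is not None:
        mols.append((rec["molecule_chembl_id"], mol))

# Substructure search with an example SMARTS pattern (sulfonamide group).
pattern = Chem.MolFromSmarts("S(=O)(=O)N")
print([cid for cid, mol in mols if mol.HasSubstructMatch(pattern)])
```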
Background: Equally important and challenging as genome annotation is the subsequent classification of predicted genes into their respective pathways. The Kyoto Encyclopedia of Genes and Genomes (KEGG) represents a database consisting of known genes and their respective biochemical functionalities. Although accessible online, analyses of multiple genes are time-consuming and are not suitable for analyzing data sets that are proprietary. Results: Presented here is a new software solution that utilizes the KEGG online database for pathway mapping of partial and whole prokaryotic genomes. PathwayVoyager retrieves user-defined subsets of the KEGG database and stores the data as local, BLAST-formatted databases. Previously selected datasets can be re-used, reducing run-time significantly. Whole or partial genomes can be automatically analyzed using NCBI's BlastP algorithm, and ORFs with similarities below the user-defined threshold will be marked on pathway maps. Multiple gene hits are sorted by similarity. Since no sequence information is transmitted over the Internet, PathwayVoyager is an ideal solution for pathway mapping and reconstruction of confidential DNA sequence data. Conclusion: PathwayVoyager represents an alternative approach to many existing, more complex pathway reconstruction software solutions. It does not require any dedicated hardware or software, is flexible and straightforward to use, and is ideally suited for environments where analyses on variable datasets are desired.
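As a rough sketch of the idea behind PathwayVoyager (fetch a KEGG subset once, build a local BLAST-formatted database, then run every similarity search offline so no sequence data leaves the machine), something like the following could be assembled from the KEGG REST API and NCBI BLAST+. The organism code, file names, and E-value threshold are placeholders.

```python
# Sketch of the local-BLAST pattern; requires NCBI BLAST+ (makeblastdb, blastp)
# on PATH. Organism "eco" and all file names/thresholds are example values.
import subprocess
import requests

# One-time download of reference data from the KEGG REST API, e.g. one gene's
# amino-acid sequence (a real run would loop over the selected KEGG subset).
fasta = requests.get("https://rest.kegg.jp/get/eco:b0002/aaseq", timeout=60).text
with open("kegg_eco.faa", "a") as f:
    f.write(fasta)

# Store the subset as a local, BLAST-formatted protein database.
subprocess.run(["makeblastdb", "-in", "kegg_eco.faa", "-dbtype", "prot"], check=True)

# Query the confidential ORFs against the local database only - nothing is
# transmitted over the Internet at search time.
subprocess.run(
    ["blastp", "-query", "my_orfs.faa", "-db", "kegg_eco.faa",
     "-evalue", "1e-5", "-outfmt", "6", "-out", "hits.tsv"],
    check=True,
)
# hits.tsv can then be parsed to mark qualifying ORFs on KEGG pathway maps.
```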
Graph analysis is important and pervasive in the DoD community. The GraphBLAS Forum, a world-wide consortium of researchers from government/FFRDCs, academia, and industry, has the goal of defining an application programming interface (API) specification for graph analysis: GraphBLAS, the Graph Basic Linear Algebra Subprograms.
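The core GraphBLAS idea is that graph traversal can be expressed as linear algebra over the graph's adjacency matrix. The sketch below emulates breadth-first search as repeated vector-matrix products using SciPy sparse matrices; it illustrates the concept only and does not use the GraphBLAS API itself.

```python
# BFS as linear algebra: each BFS level is one vector-matrix product over a
# boolean-like semiring. Concept demo with SciPy, not the GraphBLAS API.
import numpy as np
from scipy.sparse import csr_matrix

# Small directed graph on 4 vertices: edges 0->1, 0->2, 1->3, 2->3.
rows = np.array([0, 0, 1, 2])
cols = np.array([1, 2, 3, 3])
A = csr_matrix((np.ones(4, dtype=np.int64), (rows, cols)), shape=(4, 4))

frontier = np.array([1, 0, 0, 0], dtype=np.int64)    # BFS starts at vertex 0
visited = frontier.astype(bool)
level = 0
while frontier.any():
    level += 1
    reached = np.asarray(frontier @ A).ravel() > 0   # one level = one product
    frontier = (reached & ~visited).astype(np.int64)
    visited |= reached
    if frontier.any():
        print(f"level {level}:", np.flatnonzero(frontier))
# Prints: level 1: [1 2], then level 2: [3]
```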
Low coupling between modules and high cohesion inside each module are key features of good software architecture. Systems written in modern programming languages generally start with some reasonably well-designed module structure; however, with continuous feature additions, modifications, and bug fixes, software modularity gradually deteriorates. There is therefore a need for incremental improvements to modularity to avoid the situation in which the structure of the system becomes too complex to maintain. We demonstrate how Wrangler, a general-purpose refactoring tool for Erlang, can be used to maintain and improve the modularity of programs written in Erlang without dramatically changing the existing module structure. We identify a set of "modularity smells" and show how they can be detected by Wrangler and removed by way of a variety of refactorings implemented in Wrangler. Validation of the approach and the usefulness of the tool are demonstrated by case studies.
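Wrangler operates on Erlang source via its syntax tree; purely to illustrate what one "modularity smell" check looks for, the Python sketch below flags pairs of modules that depend on each other. The module names and dependency map are hypothetical.

```python
# Toy detector for one modularity smell: cyclic dependencies between modules.
# The dependency map is hand-built and hypothetical; a real tool (like
# Wrangler for Erlang) would derive it from the code itself.
deps = {
    "billing":  {"accounts", "util"},
    "accounts": {"util", "billing"},   # billing <-> accounts form a cycle
    "util":     set(),
}

def mutual_dependencies(deps):
    """Return module pairs that depend on each other - a classic coupling smell."""
    smells = []
    for mod, targets in deps.items():
        for target in targets:
            if mod in deps.get(target, set()) and mod < target:
                smells.append((mod, target))
    return smells

print(mutual_dependencies(deps))   # [('accounts', 'billing')]
```

A refactoring such as moving the shared function into one module (or into a common utility module) would then remove the cycle.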
ISBN (print): 9781595933751
Today, refactorings are supported in some integrated development environments (IDEs). The refactoring operations can only work correctly if all source code that needs to be changed is available to the IDE. However, this precondition holds neither for application programming interface (API) evolution nor for team development. The research presented in this paper aims to support refactoring in API evolution and team development by extending the IDE and version control to allow refactoring-aware merging and migration.
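As a rough sketch of what refactoring-aware merging involves, the snippet below records a rename refactoring as an operation and replays it on another branch's edit before textual merging. The operation format and names are hypothetical, and a real tool would rename via the syntax tree rather than with a regular expression.

```python
# Sketch: record refactorings as operations, replay them on the other branch's
# edits before merging. Format and example names are hypothetical.
import re

def apply_rename(source: str, old: str, new: str) -> str:
    """Replay a recorded 'rename identifier' operation (word-boundary match;
    a real refactoring tool would operate on the syntax tree instead)."""
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

# Branch A recorded this refactoring: rename getBalance -> fetchBalance.
recorded_ops = [("rename", "getBalance", "fetchBalance")]

# Branch B's edit still uses the old name; migrate it before the merge.
branch_b_edit = "total = account.getBalance() + pending"
for op, old, new in recorded_ops:
    if op == "rename":
        branch_b_edit = apply_rename(branch_b_edit, old, new)

print(branch_b_edit)   # total = account.fetchBalance() + pending
```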
Internet of Things (IoT) edge devices have small amounts of memory and limited computational power. These resource-constrained devices contain sensors that generate large amounts of data, making IoT edge devices attractive targets for machine learning models. Taking advantage of machine learning models normally requires transporting the data to a remote device with enough computational power to process them. Transporting data to a remote node delays the response and depends on data transport availability. Beyond the performance penalties of running machine learning models at the IoT edge, training models on IoT edge devices is nearly impossible. With the introduction of the Coral Tensor Processing Unit (TPU), real-time data processing through machine learning models on IoT edge devices is achievable. This research explores splitting a convolutional neural network (CNN) to expose an intermediate layer for fine-tune training. This study found that it is possible to extract an intermediate-layer output from a CNN running on the TPU for fine-tune training on a Raspberry Pi 4, where the fine-tuning is done only on the upper layers of the model. This makes it possible to fine-tune larger models on a resource-restricted device. The model's performance improved by 6.7 percentage points, from 53.9 percent to 60.6 percent.
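As a sketch of the split-CNN idea, the Keras snippet below freezes a pretrained backbone as the feature extractor (the part that could run on the Edge TPU) and fine-tunes only a small head on the extracted embeddings. The backbone choice, split point, and head size are assumptions, not the paper's exact setup.

```python
# Split a CNN so only the upper layers are trained on the resource-limited
# device. Backbone, split point, and class count are illustrative choices.
import tensorflow as tf

# Lower layers: pretrained, frozen feature extractor (the TPU-side portion).
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights="imagenet")
backbone.trainable = False

# Upper layers: a small trainable head, cheap enough for a Raspberry Pi.
head = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1280,)),             # MobileNetV2 embedding size
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # class count is illustrative
])
head.compile(optimizer="adam",
             loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Fine-tuning then touches only the head:
#   embeddings = backbone.predict(images)    # intermediate-layer output
#   head.fit(embeddings, labels, epochs=5)   # upper-layer training on the Pi
```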