ISBN: 9798400705502 (print)
In the scope of the 7th Lifelog Search Challenge (LSC'24), we present the 4th iteration of LifeGraph, a multimodal knowledge-graph approach with data augmentation using Vision-Language Models (VLMs). We extend the LifeGraph model presented in previous LSC challenges with event-based clustering that uses temporal and spatial relations as well as information extracted from VLM-generated captions of lifelog images.
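The abstract above leaves the clustering mechanics implicit; the following is a minimal sketch of event-based clustering over lifelog entries using temporal and spatial thresholds, assuming each entry carries a timestamp, GPS coordinates, and a VLM-generated caption. The data class, threshold values, and function names are illustrative and not taken from the LifeGraph implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt
from typing import List

@dataclass
class LifelogEntry:
    image_id: str
    timestamp: datetime
    lat: float
    lon: float
    caption: str  # VLM-generated description of the image

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two GPS points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def cluster_events(entries: List[LifelogEntry],
                   max_gap_minutes: float = 30.0,
                   max_dist_km: float = 0.5) -> List[List[LifelogEntry]]:
    """Greedy event segmentation: start a new event whenever the temporal
    gap or the spatial jump to the previous entry exceeds a threshold."""
    events: List[List[LifelogEntry]] = []
    for entry in sorted(entries, key=lambda e: e.timestamp):
        if events:
            prev = events[-1][-1]
            gap = (entry.timestamp - prev.timestamp).total_seconds() / 60.0
            dist = haversine_km(prev.lat, prev.lon, entry.lat, entry.lon)
            if gap <= max_gap_minutes and dist <= max_dist_km:
                events[-1].append(entry)
                continue
        events.append([entry])
    return events
```

Each resulting event could then become a node in the knowledge graph, linked to its constituent images and to concepts mined from their captions.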
Foundation Models (FMs) hold transformative potential to accelerate scientific discovery, yet reaching their full capacity in complex, highly multimodal domains such as genomics, drug discovery, and materials science requires deeper consideration of the contextual nature of scientific knowledge. We revisit the synergy between FMs and multimodal Knowledge Graph (MKG) representation and learning, exploring their potential to enhance predictive and generative tasks in biomedical contexts such as drug discovery. We seek to exploit MKGs to improve generative AI models' ability to capture intricate domain-specific relations and to facilitate multimodal fusion. This integration promises to accelerate discovery workflows by providing more meaningful multimodal knowledge-enhanced representations and contextual evidence. Despite this potential, challenges and opportunities remain, including fusing multiple sequential, structural, and knowledge modalities and models that leverage the strengths of each; developing scalable architectures for multi-task, multi-dataset learning; creating end-to-end workflows to enhance the trustworthiness of biomedical FMs using knowledge from heterogeneous datasets and the scientific literature; the domain-data bottleneck and the lack of a unified representation between natural language and chemical representations; and benchmarking, specifically transfer learning to tasks with limited data (e.g., unseen molecules and proteins, rare diseases). Finally, fostering openness and collaboration is key to accelerating scientific breakthroughs.
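As one concrete reading of the knowledge-enhanced representations mentioned above, the sketch below fuses a molecule embedding from a sequence or structure foundation model with the embedding of the corresponding knowledge-graph entity through a small late-fusion head. Module names, dimensions, and the fusion strategy are assumptions for illustration, not a method proposed in the abstract.

```python
import torch
import torch.nn as nn

class KnowledgeEnhancedHead(nn.Module):
    """Late-fusion head: concatenate a molecule embedding produced by a
    foundation model with the embedding of the matching knowledge-graph
    entity, then predict a scalar property from the joint representation."""
    def __init__(self, fm_dim: int, kg_dim: int, hidden: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(fm_dim + kg_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, fm_emb: torch.Tensor, kg_emb: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([fm_emb, kg_emb], dim=-1))

# Usage with placeholder embeddings (dimensions are arbitrary):
head = KnowledgeEnhancedHead(fm_dim=768, kg_dim=128)
score = head(torch.randn(4, 768), torch.randn(4, 128))  # shape (4, 1)
```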
Knowledge-based Visual Question Answering (KVQA) requires external knowledge beyond the visible content to answer questions about an image. This ability is challenging but indispensable for achieving general VQA. One limitation of existing KVQA solutions is that they jointly embed all kinds of information without fine-grained selection, which introduces unexpected noise when reasoning toward the correct answer. How to capture question-oriented and information-complementary evidence remains a key challenge. Inspired by human cognition theory, in this paper we depict an image by multiple knowledge graphs from the visual, semantic, and factual views. Among them, the visual graph and the semantic graph are regarded as image-conditioned instantiations of the factual graph. On top of these new representations, we re-formulate Knowledge-based Visual Question Answering as a recurrent reasoning process for obtaining complementary evidence from multimodal information. To this end, we decompose the model into a series of memory-based reasoning steps, each performed by a Graph-based Read, Update, and Control (GRUC) module that conducts parallel reasoning over both visual and semantic information. By stacking the modules multiple times, our model performs transitive reasoning and obtains question-oriented concept representations under the constraints of different modalities. Finally, we apply graph neural networks to infer the globally optimal answer by jointly considering all the concepts. We achieve new state-of-the-art performance on three popular benchmark datasets, including FVQA, Visual7W-KB, and OK-VQA, and demonstrate the effectiveness and interpretability of our model with extensive experiments. The source code is available at: https://***/astro-zihao/gruc (C) 2020 Elsevier Ltd. All rights reserved.
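The recurrent read-and-update loop described in the abstract can be pictured with a short sketch: a question-initialised memory attends over the node features of a visual graph and a semantic graph in parallel, and a GRU cell updates the memory at each step. Class names, dimensions, and the merge step are assumptions; the authors' released code at the repository linked above is the authoritative reference.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphReadUpdate(nn.Module):
    """One memory-based step: attend over the node features of a graph,
    conditioned on the current memory, then update the memory with a GRU."""
    def __init__(self, node_dim: int, mem_dim: int):
        super().__init__()
        self.query = nn.Linear(mem_dim, node_dim)
        self.update = nn.GRUCell(node_dim, mem_dim)

    def forward(self, nodes: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # nodes: (B, N, node_dim); memory: (B, mem_dim)
        q = self.query(memory).unsqueeze(-1)           # (B, node_dim, 1)
        attn = F.softmax(torch.bmm(nodes, q), dim=1)   # attention over N nodes
        read = (attn * nodes).sum(dim=1)               # (B, node_dim)
        return self.update(read, memory)               # (B, mem_dim)

class RecurrentReasoner(nn.Module):
    """Run the read-update step over visual and semantic graphs in parallel,
    sharing a question-initialised memory, for a fixed number of steps."""
    def __init__(self, node_dim: int, mem_dim: int, steps: int = 3):
        super().__init__()
        self.visual = GraphReadUpdate(node_dim, mem_dim)
        self.semantic = GraphReadUpdate(node_dim, mem_dim)
        self.merge = nn.Linear(2 * mem_dim, mem_dim)
        self.steps = steps

    def forward(self, visual_nodes, semantic_nodes, question_emb):
        memory = question_emb  # (B, mem_dim), initialised from the question
        for _ in range(self.steps):
            m_v = self.visual(visual_nodes, memory)
            m_s = self.semantic(semantic_nodes, memory)
            memory = torch.tanh(self.merge(torch.cat([m_v, m_s], dim=-1)))
        return memory

# Usage with random features: 2 questions, 36 visual and 20 semantic nodes.
reasoner = RecurrentReasoner(node_dim=512, mem_dim=512)
out = reasoner(torch.randn(2, 36, 512), torch.randn(2, 20, 512), torch.randn(2, 512))
```

The final memory would then condition the answer-inference stage over the candidate concepts, which the paper implements with graph neural networks.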