Author affiliations: Graduate Interdisciplinary Program in Applied Mathematics, University of Arizona, Tucson, AZ 85721, USA; Department of Materials Science and Engineering, University of Arizona, Tucson, AZ 85721, USA
Publication: Acta Materialia (Acta Mater)
Year/Volume: 2025, Vol. 296
Funding: SEW acknowledges support from the National Science Foundation (United States) Graduate Research Fellowship Program under Grant No. DGE-2137419. MIL acknowledges support from the National Science Foundation (United States) under Award No. 2441813. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the National Science Foundation. The code for this paper is available on GitHub: https://github.com/materials-informatics-az/MicroPropViT
Abstract: Machine learning of microstructure–property relationships from data is an emerging approach in computational materials science. Most existing machine learning efforts focus on the development of task-specific models for each microstructure–property relationship. We propose utilizing pre-trained foundational vision transformers for the extraction of task-agnostic microstructure features and subsequent light-weight machine learning of a microstructure-dependent property. We demonstrate our approach with pre-trained state-of-the-art vision transformers (CLIP, DINOv2, SAM) in two case studies on machine learning: (i) the elastic modulus of two-phase microstructures based on simulation data; and (ii) the Vickers hardness of Ni-base and Co-base superalloys based on experimental data published in the literature. Our results show the potential of foundational vision transformers for robust microstructure representation and efficient machine learning of microstructure–property relationships without the need for expensive task-specific training or fine-tuning of bespoke deep learning models.
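The two-stage pipeline described in the abstract (frozen foundational ViT features, then a light-weight regressor) can be sketched as follows. This is a minimal illustration, not the authors' implementation: random low-dimensional vectors stand in for the embeddings that CLIP, DINOv2, or SAM would produce for each microstructure image, and a synthetic linear "property" stands in for elastic modulus or Vickers hardness. Only the second, light-weight stage is actually trained, which is the point of the approach.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stage 1 (placeholder): in the paper's pipeline, each microstructure image
# is passed through a frozen pre-trained vision transformer (CLIP, DINOv2,
# or SAM) to obtain a task-agnostic feature vector. Here random vectors
# stand in for those embeddings so the sketch runs self-contained.
n_samples, embed_dim = 400, 64
features = rng.normal(size=(n_samples, embed_dim))

# Synthetic scalar "property" (e.g. elastic modulus), linearly related to
# the features plus noise, purely for illustration.
true_w = rng.normal(size=embed_dim)
prop = features @ true_w + 0.1 * rng.normal(size=n_samples)

# Stage 2: light-weight machine learning on the frozen features. No ViT
# weights are updated; only this small regression model is fit.
X_tr, X_te, y_tr, y_te = train_test_split(features, prop, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
score = model.score(X_te, y_te)
print(f"held-out R^2: {score:.3f}")
```

Because the feature extractor is frozen, swapping in a different property (case study (i) vs. (ii)) only requires refitting the cheap second-stage model, which is what makes the approach task-agnostic.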