
GINN-KAN: Interpretability pipelining with applications in Physics Informed Neural Networks

Authors: Ranasinghe, Nisal; Xia, Yu; Seneviratne, Sachith; Halgamuge, Saman

Affiliation: AI Optimization and Pattern Recognition Research Group, Dept. of Mechanical Eng., University of Melbourne, Australia

Publication: arXiv

Year: 2024


Subject: Regression analysis

Abstract: Neural networks are powerful function approximators, yet their black-box nature often renders them opaque and difficult to interpret. While many post-hoc explanation methods exist, they typically fail to capture the underlying reasoning processes of the networks. A truly interpretable neural network would be trained similarly to conventional models using techniques such as backpropagation, but additionally provide insights into the learned input-output relationships. In this work, we introduce the concept of interpretability pipelining, which combines multiple interpretability techniques to outperform each individual technique. To this end, we first evaluate several architectures that promise such interpretability, with a particular focus on two recent models selected for their potential to incorporate interpretability into standard neural network architectures while still leveraging backpropagation: the Growing Interpretable Neural Network (GINN) and Kolmogorov-Arnold Networks (KAN). We analyze the limitations and strengths of each and introduce a novel interpretable neural network, GINN-KAN, that synthesizes the advantages of both models. When tested on the Feynman symbolic regression benchmark datasets, GINN-KAN outperforms both GINN and KAN. To highlight the capabilities and the generalizability of this approach, we position GINN-KAN as an alternative to conventional black-box networks in Physics-Informed Neural Networks (PINNs), which we propose as a challenging testbed for backpropagation-friendly interpretable neural networks. By adding interpretability to PINNs, we allow far more transparent and trustworthy data-driven solutions to differential equations. We expect this to have far-reaching implications for the application of deep learning pipelines in the natural sciences. Our experiments with this interpretable PINN on 15 different partial differential equations demonstrate that GINN-KAN augmented PINNs outperform PINNs with black-box networks in solving differential equations.
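As background for the PINN testbed the abstract describes, here is a minimal sketch (not the paper's GINN-KAN implementation) of the physics-residual loss that PINNs minimize. A cubic polynomial stands in for the neural network, and the toy ODE u' = -u with u(0) = 1 stands in for a PDE; these choices and all names are illustrative assumptions:

```python
# Illustrative sketch only -- NOT the paper's code. It shows the core PINN
# idea: pick a parametric trial solution and minimise the squared residual
# of the governing equation at collocation points. The toy problem is
# du/dx = -u, u(0) = 1, whose exact solution is exp(-x).
import numpy as np

xs = np.linspace(0.0, 1.0, 101)  # collocation points
degree = 4                        # number of polynomial coefficients

# Trial form u(x) = 1 + x * p(x) satisfies u(0) = 1 by construction.
# The ODE residual u'(x) + u(x) = 1 + sum_i c_i * ((i+1) x^i + x^(i+1))
# is linear in the coefficients c, so least squares finds the minimiser.
A = np.column_stack([(i + 1) * xs**i + xs**(i + 1) for i in range(degree)])
coeffs, *_ = np.linalg.lstsq(A, -np.ones_like(xs), rcond=None)

def u(x):
    """Approximate solution u(x) = 1 + x * p(x)."""
    return 1.0 + x * sum(c * x**i for i, c in enumerate(coeffs))

physics_loss = np.mean((A @ coeffs + 1.0) ** 2)  # mean squared ODE residual
```

A real PINN replaces the polynomial with a neural network and computes the residual derivatives with automatic differentiation; the paper's contribution is to make that network interpretable by swapping in GINN-KAN at exactly this point.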
