Explainable Artificial Intelligence: Causal Discovery and Reasoning for Geotechnical Risk Analysis
Wenli Liu, Fenghua Liu, Weili Fang, Peter E. D. Love
Abstract
Artificial intelligence (AI), such as machine learning (ML) models, is profoundly impacting an organization's ability to assess safety risks during the construction of tunnels. Yet, ML models are black boxes and suffer from interpretability and transparency issues – they are unexplainable. Hence, the motivation of this paper is to address the following research question: How can we explain the outputs of data-driven ML models used to assess geotechnical risks in tunnel construction? We draw on the concept of ‘eXplainable AI’ (XAI) and utilize causal discovery and reasoning to help analyze and interpret the manifestation of geotechnical risks in tunnel construction by developing: (1) a sparse nonparametric and nonlinear directed acyclic graph (DAG) used to determine the causal structure of risks between sub-systems; (2) a multiple linear regression model, which we use to estimate the effect of the causal relationships between sub-systems; and (3) a probability-based reasoning model to quantify and reason about risk. We use the San-yang Road tunnel project in Wuhan (China) to validate the feasibility and effectiveness of our proposed approach. The results indicate that our approach can accurately explain what risks arise and how they are derived from a data-driven, probability-based ML model for ground settlement in tunnel construction.
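The abstract outlines a three-step approach. As a minimal, purely illustrative sketch of steps (2) and (3), the fragment below assumes a hypothetical causal structure in which two sub-system indicators (`ground_loss`, `water_level` — invented names, not taken from the paper) are DAG parents of a `settlement` risk variable; it then recovers the linear causal effects by regression and estimates a simple exceedance probability. This is not the authors' implementation, only a sketch of the general idea on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic, standardized sub-system indicators (illustrative only).
ground_loss = rng.normal(size=n)
water_level = rng.normal(size=n)
# Assumed structural equation: settlement depends on both parents.
settlement = 0.8 * ground_loss + 0.3 * water_level + rng.normal(scale=0.5, size=n)

# Step (2): regress the child on its DAG parents; under the assumed
# structure the coefficients estimate the causal effects.
X = np.column_stack([ground_loss, water_level, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, settlement, rcond=None)
effect_ground, effect_water, intercept = coef

# Step (3): probability-based reasoning — empirical probability that
# settlement exceeds a (hypothetical) risk threshold.
threshold = 1.5
p_exceed = np.mean(settlement > threshold)

print(f"effect of ground loss ~ {effect_ground:.2f}")
print(f"effect of water level ~ {effect_water:.2f}")
print(f"P(settlement > {threshold}) ~ {p_exceed:.3f}")
```

In a real application the DAG from step (1) would be learned from monitoring data rather than assumed, and the reasoning step would condition on observed parent values instead of using the marginal distribution.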
Keywords: Causal discovery; probability-based reasoning; risk; tunnel construction; XAI
https://www.sciencedirect.com/science/article/pii/S0951832023005732