XAI benefits to hydrological modeling obscured by hype

Hydrological modelers are increasingly using explainable AI (XAI) to provide additional insight into complex hydrological problems, but a new University of Adelaide study suggests XAI's insights may not be as revolutionary as proponents claim.

XAI is a field of research and a set of methods that help people understand how AI algorithms work and trust the results they produce.

In traditional hydrological modeling, a researcher uses information on rainfall and evaporation to address issues such as water supply security and flooding.

When such models are developed using AI approaches, XAI is tasked with explaining how the AI model arrived at the relationships it describes between factors such as rainfall and water supply.

But according to the study, published in the Journal of Hydrology X and led by Professor Holger Maier of the University of Adelaide's School of Architecture and Civil Engineering, the use of XAI in hydrological modeling has not yet delivered the advances the technology might eventually make possible.

"Many XAI approaches are similar to more traditional methods of interrogating existing models, such as sensitivity or break-even analysis," says Professor Maier.

"In fact, the approach of developing data-driven models to obtain a better understanding of hydrological processes to inform the development of more physics-based models is as old as hydrology itself.

"Therefore, it remains to be established whether XAI methods can provide insights beyond those obtained through more traditional methods."

For hydrological modeling to realize XAI's full potential, Professor Maier says the current tech-centric approach should be reconsidered.

"With XAI, there is often a focus on maximizing the predictive ability of AI models at all costs, which tends to result in large models that might have thousands or even millions of ill-defined parameters," he says.

"There is little value in explaining AI-derived relationships if these do not reflect underlying hydrological processes.

"We also need to stop thinking about XAI as a purely technical approach, and instead employ a socio-technical approach that views XAI as a process that can assist with solving problems that are situated within broader social and political contexts."

In a previous study, Professor Maier and colleagues highlighted the fallibility of AI in hydrological modeling.

"Despite a model being built on a large dataset, and the predictive ability of the model being very good, we saw it model a negative contribution to the streamflow of a creek from rainfall, which does not make physical sense," says Professor Maier.

Because of these issues, Professor Maier argues that the implementation of XAI, which would otherwise try to explain the rationale behind rainfall leading to less water in a creek, should be slowed while the technology is rigorously tested against known models to ensure accuracy.
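
The sketch below, assuming toy data rather than the study's actual model, shows one way this failure mode arises: when rainfall is nearly collinear with another input (here, a hypothetical upstream gauge), a regression can fit the data very well yet still assign rainfall a negative, physically implausible contribution.

```python
# Minimal sketch (assumed toy data, not the study's model): near-collinear
# inputs let a well-fitting regression assign rainfall a negative,
# physically implausible contribution in many refits.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_once(seed):
    rng = np.random.default_rng(seed)
    n = 300
    rainfall = rng.gamma(2.0, 10.0, n)                 # mm/day, synthetic
    upstream = 2.0 * rainfall + rng.normal(0, 0.1, n)  # near-duplicate of rainfall
    streamflow = 0.5 * rainfall + 0.2 * upstream + rng.normal(0, 1, n)
    X = np.column_stack([rainfall, upstream])
    model = LinearRegression().fit(X, streamflow)
    return model.score(X, streamflow), model.coef_[0]

r2, _ = fit_once(0)
flips = sum(fit_once(seed)[1] < 0 for seed in range(100))
print(f"typical R^2: {r2:.3f}")                                # fit is excellent...
print(f"negative rainfall coefficient in {flips}/100 refits")  # ...yet the sign is unstable
```

Any XAI method applied to such a model would dutifully "explain" a relationship that contradicts basic hydrology, which is exactly the scenario Professor Maier cautions against.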

"There is no point in applying XAI methods to AI models that are unable to represent underlying processes in a consistent and reliable fashion," Professor Maier said.

More information: Holger Robert Maier et al, How much X is in XAI: Responsible use of "Explainable" artificial intelligence in hydrology and water resources, Journal of Hydrology X (2024). DOI: 10.1016/j.hydroa.2024.100185

Provided by University of Adelaide