Friday, December 26, 2025

Science - Advancing Understanding with Interpretable and Symbolic Machine Learning Using Kolmogorov-Arnold Networks

Amazing stuff!

"[machine learning models] ... But despite such successes, these data-driven approaches suffer a major drawback in that they are generally “black boxes” that offer no human-accessible understanding of how they make their predictions. This shortcoming also extends to the models’ inputs: It is often desirable to build known domain knowledge into these models, but the data-driven approach excludes that option. [Researchers] have now made a notable step toward addressing these challenges by developing a machine-learning method designed to discover simple, interpretable laws from data ... This method could potentially enable the automated discovery of the physical laws governing a wide range of systems. ...

KANs have since been used to parameterize multivariate functions via such a composition of sums and univariate functions and learn the univariate functions in this representation from a set of training data. Since the learned part of a KAN is a collection of univariate functions, one can potentially gain insight into the function represented by the KAN―and thus understand what the KAN has learned―by inspecting these univariate functions after training. This arguably makes KANs more interpretable than standard neural networks. ...
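For context, the composition referred to here is the Kolmogorov-Arnold representation theorem: any continuous function of n variables on a bounded domain can be written as sums of univariate functions,

f(x_1, \dots, x_n) = \sum_{q=1}^{2n+1} \Phi_q\Big( \sum_{p=1}^{n} \phi_{q,p}(x_p) \Big)

A KAN makes the univariate functions \phi_{q,p} and \Phi_q learnable (pykan parameterizes them as splines) and stacks such layers, so everything the network learns lives in these one-dimensional functions on its edges.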

By necessity, these initial tests are toy experiments where the correct answer is already known. It will be exciting to see how KANs perform on problems of real scientific interest, where the correct physical laws are not yet known. Applied to such problems, this approach has the potential to significantly accelerate the scientific process. ..."

From the abstract:
"A major challenge of AI plus science lies in its inherent incompatibility: Today’s AI is primarily based on connectionism, while science depends on symbolism.
To bridge the two worlds, we propose a framework to seamlessly synergize Kolmogorov-Arnold networks (KANs) and science. The framework highlights KANs’ usage for three aspects of scientific discovery:
identifying relevant features,
revealing modular structures, and
discovering symbolic formulas.
The synergy is bidirectional: science to KAN (incorporating scientific knowledge into KANs), and KAN to science (extracting scientific insights from KANs).
We highlight major new functionalities in pykan:
(1) MultKAN: KANs with multiplication nodes;
(2) kanpiler: a KAN compiler that compiles symbolic formulas into KANs;
(3) tree converter: converts KANs (or any neural networks) into tree graphs.
Based on these tools, we demonstrate KANs’ capability to discover various types of physical laws, including conserved quantities, Lagrangians, symmetries, and constitutive laws."
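To make "a collection of univariate functions" concrete, here is a minimal, self-contained PyTorch sketch of the idea. It is not the paper's pykan implementation: the class names (KANLayer, TinyKAN), the Gaussian radial-basis parameterization of the univariate functions, and the toy target are illustrative assumptions chosen to keep the sketch short; pykan itself parameterizes edges with B-splines and provides the MultKAN/kanpiler/tree-converter tooling listed above.

```python
import math

import torch
import torch.nn as nn


class KANLayer(nn.Module):
    """One KAN-style layer: y_j = sum_i phi_{j,i}(x_i), where each phi_{j,i} is a
    learnable univariate function (here a weighted sum of Gaussian bumps on a fixed
    grid; pykan itself uses B-splines plus a base activation)."""

    def __init__(self, in_dim, out_dim, num_basis=16, grid_range=(-2.0, 2.0)):
        super().__init__()
        grid = torch.linspace(grid_range[0], grid_range[1], num_basis)
        self.register_buffer("grid", grid)                          # bump centers, shape (K,)
        self.h = (grid_range[1] - grid_range[0]) / (num_basis - 1)  # bump width
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_basis))

    def forward(self, x):                                           # x: (batch, in_dim)
        # basis[b, i, k] = k-th bump evaluated at x[b, i]
        basis = torch.exp(-((x.unsqueeze(-1) - self.grid) / self.h) ** 2)
        # y[b, o] = sum_i sum_k coef[o, i, k] * basis[b, i, k] = sum_i phi_{o,i}(x[b, i])
        return torch.einsum("bik,oik->bo", basis, self.coef)


class TinyKAN(nn.Module):
    """Stack of KAN-style layers; dims=(2, 5, 1) mimics a small [2, 5, 1] KAN."""

    def __init__(self, dims=(2, 5, 1)):
        super().__init__()
        self.layers = nn.ModuleList(KANLayer(a, b) for a, b in zip(dims[:-1], dims[1:]))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x


# Toy experiment in the spirit of the quoted tests: fit f(x1, x2) = sin(pi*x1) + x2^2,
# a target whose additive structure should be readable off the learned univariate pieces.
torch.manual_seed(0)
x = torch.rand(1024, 2) * 2 - 1                                     # inputs in [-1, 1]^2
y = torch.sin(math.pi * x[:, :1]) + x[:, 1:2] ** 2
model = TinyKAN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.5f}")
```

To inspect what was learned, evaluate any single edge's Gaussian basis over a 1-D grid and take its dot product with coef[j, i]; plotting those curves is the kind of per-edge inspection the quoted Physics piece describes.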

Physics - Advancing Physical Understanding with Interpretable Machine Learning: "A new artificial neural-network architecture opens a window into the workings of a tool previously regarded as a black box."


KAN: Kolmogorov-Arnold Networks (I believe this is the original work on KANs, by the same first author, Ziming Liu, published in April 2024)

Notice that MultKANs have additional multiplication nodes.
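One way to see why the multiplication nodes matter: a pure sum-of-univariate-functions KAN can only express a product indirectly, e.g. via the identity xy = ((x + y)^2 - (x - y)^2)/4 spread over two layers, whereas MultKAN's explicit multiplication nodes let product structure show up directly in the learned graph, which is easier to read off as a symbolic law.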

