Saturday, February 11, 2023

Liquid networks: Researchers Discover a More Flexible Approach to Machine Learning

Recommendable! Biologically inspired versus machine brains!

"... As well as being speedier ... their newest networks are also unusually stable, meaning the system can handle enormous inputs without going haywire. “The main contribution here is that stability and other nice properties are baked into these systems by their sheer structure,” ... Liquid networks seem to operate in ... “the sweet spot: They are complex enough to allow interesting things to happen, but not so complex as to lead to chaotic behavior.” ..."

From the abstract:
"Continuous-time neural networks are a class of machine learning systems that can tackle representation learning on spatiotemporal decision-making tasks. These models are typically represented by continuous differential equations. However, their expressive power when they are deployed on computers is bottlenecked by numerical differential equation solvers. This limitation has notably slowed down the scaling and understanding of numerous natural physical phenomena such as the dynamics of nervous systems. Ideally, we would circumvent this bottleneck by solving the given dynamical system in closed form. This is known to be intractable in general. Here, we show that it is possible to closely approximate the interaction between neurons and synapses—the building blocks of natural and artificial neural networks—constructed by liquid time-constant networks efficiently in closed form. To this end, we compute a tightly bounded approximation of the solution of an integral appearing in liquid time-constant dynamics that has had no known closed-form solution so far. This closed-form solution impacts the design of continuous-time and continuous-depth neural models. For instance, since time appears explicitly in closed form, the formulation relaxes the need for complex numerical solvers. Consequently, we obtain models that are between one and five orders of magnitude faster in training and inference compared with differential equation-based counterparts. More importantly, in contrast to ordinary differential equation-based continuous networks, closed-form networks can scale remarkably well compared with other deep learning instances. Lastly, as these models are derived from liquid networks, they show good performance in time-series modelling compared with advanced recurrent neural network models."

Researchers Discover a More Flexible Approach to Machine Learning | Quanta Magazine
“Liquid” neural nets, based on a worm’s nervous system, can transform their underlying algorithms on the fly, giving them unprecedented speed and adaptability.



Extended Data Fig. 2: Closed-form Continuous-depth neural architecture
A backbone neural network layer delivers the input signals into three head networks g, f and h. f acts as a liquid time-constant for the sigmoidal time-gates of the network. g and h construct the nonlinearities of the overall CfC network.
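
For concreteness, here is a minimal sketch of how such a CfC cell could be wired up from a backbone and the three heads g, f and h described in the caption. This is my own illustrative simplification in PyTorch; the layer sizes, activation choices and names are assumptions, not the authors' reference implementation.

import torch
import torch.nn as nn

class CfCCellSketch(nn.Module):
    """Minimal sketch of a closed-form continuous-depth (CfC) cell.

    A shared backbone feeds three heads: f (the liquid time constant for
    the sigmoidal time-gate) and g, h (the nonlinearities blended by that
    gate). Illustrative only, not the authors' reference code.
    """

    def __init__(self, input_size: int, hidden_size: int, backbone_size: int = 64):
        super().__init__()
        # Backbone: maps [input, previous hidden state] to shared features.
        self.backbone = nn.Sequential(
            nn.Linear(input_size + hidden_size, backbone_size),
            nn.Tanh(),
        )
        # Three heads on top of the backbone features.
        self.f = nn.Linear(backbone_size, hidden_size)  # liquid time constant
        self.g = nn.Linear(backbone_size, hidden_size)  # nonlinearity 1
        self.h = nn.Linear(backbone_size, hidden_size)  # nonlinearity 2

    def forward(self, x, hidden, t):
        # x: (batch, input_size), hidden: (batch, hidden_size),
        # t: (batch, 1) elapsed time since the last observation.
        features = self.backbone(torch.cat([x, hidden], dim=-1))
        # Sigmoidal time-gate: time t appears explicitly, so no ODE solver is needed.
        gate = torch.sigmoid(-self.f(features) * t)
        # Blend the two nonlinear heads with the gate.
        return gate * torch.tanh(self.g(features)) + (1.0 - gate) * torch.tanh(self.h(features))

# Example: one step with a batch of 8 inputs and irregular time gaps.
cell = CfCCellSketch(input_size=4, hidden_size=16)
x = torch.randn(8, 4)
hidden = torch.zeros(8, 16)
t = torch.rand(8, 1)
hidden = cell(x, hidden, t)

The point of the sketch is the last line of forward: the hidden state is a gate-weighted blend of two learned nonlinearities, with the gate controlled by a learned time constant and the elapsed time, mirroring the closed-form expression above.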

