Wednesday, May 10, 2023

Machine Learning Model Reveals How Our Brains Understand Communication Sounds of Animals

Amazing stuff!

"Researchers developed a machine learning model that mimics how the brains of social animals distinguish between sound categories, like mating, food or danger, and react accordingly. ..."

From the abstract:
"For robust vocalization perception, the auditory system must generalize over variability in vocalization production as well as variability arising from the listening environment (e.g., noise and reverberation). We previously demonstrated using guinea pig and marmoset vocalizations that a hierarchical model generalized over production variability by detecting sparse intermediate-complexity features that are maximally informative about vocalization category from a dense spectrotemporal input representation. Here, we explore three biologically feasible model extensions to generalize over environmental variability: (1) training in degraded conditions, (2) adaptation to sound statistics in the spectrotemporal stage and (3) sensitivity adjustment at the feature detection stage. All mechanisms improved vocalization categorization performance, but improvement trends varied across degradation type and vocalization type. One or more adaptive mechanisms were required for model performance to approach the behavioral performance of guinea pigs on a vocalization categorization task. These results highlight the contributions of adaptive mechanisms at multiple auditory processing stages to achieve robust auditory categorization."

Press release: Cracking the Code of Sound Recognition: Machine Learning Model Reveals How Our Brains Understand Communication Sounds

Paper: Adaptive mechanisms facilitate robust performance in noise and in reverberation in an auditory categorization model (open access)

Credits: Last Week in AI

Fig. 1 (from the paper): Hierarchical structure of the computational model.


