Saturday, December 31, 2022

Dendrocentric AI Could Run on Watts (on your smartphone), Not Megawatts

Good news, though this still seems to be early-stage research! One of the biggest obstacles to broader application of AI & machine learning is its voracious energy consumption during training!

New, less energy-hungry computer hardware is direly needed! There are also some serious computing bottlenecks!

If this researcher is not overpromising, then this could well become a game changer!

"... For instance, to train its state-of-the-art neural network GPT-3, OpenAI spent US $4.6 million to run 9,200 GPUs for two weeks. ...
AI currently advances by performing twice as many computations every two months. However, the electronics industry doubles the devices required to perform these operations only once every two years. This has meant that AI is typically limited to the cloud, which can provide the many thousands of processors needed for it. ...
Based on these findings, ... developed a computational model of a dendrite that responded only if it received signals from neurons in a precise sequence. This means that each dendrite could encode data in more than just base two—one or zero, on or off—as is the case with today’s electronic components. It will use much higher base systems, depending on the number of connections it has and the length of the sequences of signals it receives. ...
that a string of ferroelectric capacitors could emulate a stretch of dendrite and replace the gate stack of a field-effect transistor to form a ferroelectric FET (FeFET). A 1.5-micrometer-long FeFET with five gates could emulate a 15-µm-long stretch of dendrite with five synapses ..."
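
To get a concrete feel for the idea of responding only to a precise sequence, here is a minimal toy sketch in Python (my own illustration, not the paper's biophysical dendrite model): a five-synapse "dendrite" that fires only for one ordering of its inputs, together with the counting argument for why ordering carries more information than simple on/off synapses.

from math import factorial, log2

class SequenceDendrite:
    """Toy dendrite: fires only if its synapses spike in one fixed order."""

    def __init__(self, target_order):
        self.target_order = tuple(target_order)   # e.g. (0, 1, 2, 3, 4)

    def respond(self, spike_order):
        # Responds only to the exact target sequence of synapse indices.
        return tuple(spike_order) == self.target_order

# Five synapses, matching the five-gate FeFET example in the article.
dendrite = SequenceDendrite(target_order=range(5))
print(dendrite.respond([0, 1, 2, 3, 4]))   # True: correct order
print(dendrite.respond([1, 0, 2, 3, 4]))   # False: wrong order

# Capacity comparison: five on/off synapses distinguish 2**5 = 32 patterns
# (5 bits), while five ordered inputs distinguish 5! = 120 sequences (~6.9 bits).
n = 5
print(2 ** n, factorial(n), round(log2(factorial(n)), 1))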

From the abstract:
"Artificial intelligence now advances by performing twice as many floating-point multiplications every two months, but the semiconductor industry tiles twice as many multipliers on a chip every two years. Moreover, the returns from tiling these multipliers ever more densely now diminish because signals must travel relatively farther and farther. Although travel can be shortened by stacking tiled multipliers in a three-dimensional chip, such a solution acutely reduces the available surface area for dissipating heat. Here I propose to transcend this three-dimensional thermal constraint by moving away from learning with synapses to learning with dendrites. Synaptic inputs are not weighted precisely but rather ordered meticulously along a short stretch of dendrite, termed dendrocentric learning. With the help of a computational model of a dendrite and a conceptual model of a ferroelectric device that emulates it, I illustrate how dendrocentric learning artificial intelligence—or synthetic intelligence for short—could run not with megawatts in the cloud but rather with watts on a smartphone."

Dendrocentric AI Could Run on Watts, Not Megawatts - IEEE Spectrum: Artificial intelligence that mimics dendrites could enable powerful AIs to run on smartphones instead of the cloud

Dendrocentric learning for synthetic intelligence (no public access)

In this concept drawing of a dendrite-like nanoscale device, voltage pulses applied consecutively to all five gates from left to right flip all electric dipoles in the ferroelectric insulating layer from down to up.
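
A heavily simplified toy of what that caption describes (my own simplification, not the actual device physics): assume a dipole under a gate can only flip up once every dipole to its left is already up, so only a left-to-right pulse sequence flips all five.

def apply_pulses(pulse_order, n_gates=5):
    # False = dipole "down", True = dipole "up"
    dipoles = [False] * n_gates
    for gate in pulse_order:
        # Assumed coupling rule: a dipole flips only if all left neighbours are up.
        if all(dipoles[:gate]):
            dipoles[gate] = True
    return dipoles

print(apply_pulses([0, 1, 2, 3, 4]))   # all five flip: [True, True, True, True, True]
print(apply_pulses([4, 3, 2, 1, 0]))   # only gate 0 flips; the order was wrong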

