tl;dr: They're optimizing critical functions by custom-designing computers to perform those specific tasks. It's basically about efficiency.
Right now, equilibrium propagation is only working in simulation.
—"Startup and Academics Find Path to Powerful Analog AI" (2020-07-30)
If they can simulate a computer, then they can already use it: they can simulate its behavior and just observe the results. So, if all they wanted were the results, they'd already be done.
But they don't just want the behavior. Instead:
Analog circuits could save power in neural networks in part because they can efficiently perform a key calculation, called multiply and accumulate.
—"Startup and Academics Find Path to Powerful Analog AI" (2020-07-30)
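For concreteness, here's a minimal NumPy sketch of the multiply-and-accumulate at the heart of a dense neural-network layer (the function and variable names here are illustrative, not from the article):

```python
import numpy as np

# Multiply-and-accumulate (MAC) is the core of a dense layer:
# each output unit is a sum of input * weight products.
def dense_layer(x, W, b):
    # For output j: sum over i of x[i] * W[i, j], plus a bias.
    # np.dot performs exactly this multiply-and-accumulate.
    return np.dot(x, W) + b

x = np.array([1.0, 2.0, 3.0])      # inputs
W = np.array([[0.5, -1.0],         # weights: 3 inputs -> 2 outputs
              [0.25, 0.0],
              [-0.5, 2.0]])
b = np.array([0.1, 0.2])           # biases

print(dense_layer(x, W, b))        # -> [-0.4  5.2]
```

A network spends most of its arithmetic in exactly these sums of products, which is why a circuit that does MAC cheaply matters so much.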
That is, they want implementation-efficiency.
Discussion: Analog-computing is lower-level than digital-logic.
Sometimes, folks say that machine code and Assembly are the lowest-level languages. Then stuff like C, then C++, C#, Mathematica, and so forth would be higher-level languages.
But analog-computing is lower-level. That is, we can look at a CPU as a VM for digital-logic, virtualizing it on a lower-level language, i.e. physics.
So, ya know how folks sometimes recommend hand-optimizing Assembly for critical functions? Hand-designing a circuit for critical functions is a yet more extreme variation.
That appears to be what they're doing: hardcore optimization of functions that they expect to be of high value, stripping away digital-structuring for the sake of efficiency.
Discussion: Imperfection isn't a virtue.
Their argument is that (some) analog computers were not exact: even if you ran the same calculation more than once, they would give different results, unlike digital computers, which take exact inputs and give exact results (0's and 1's). And thus, since brains are also imperfect, it is "logical" to use imperfect machines to simulate such a thing.
Presumably they're arguing that analog-computers can do the job, so digital-logic would be unnecessarily expensive overhead.
That is, there's not really a motivation to grab some sort of new, magical power available only to analog-computers so much as there's a perceived lack of motivation to suffer the overhead of digital-structuring.
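As a toy illustration of why the imprecision may be tolerable (this is an assumed Gaussian-noise model, not how any real analog circuit behaves), here's a simulated noisy multiply-and-accumulate. Each run gives a slightly different answer, but it stays close to the exact result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical analog MAC: each product picks up a little noise,
# so repeated runs give slightly different results.
def noisy_mac(x, w, noise_std=0.01):
    products = x * w
    products += rng.normal(0.0, noise_std, size=products.shape)
    return products.sum()

x = rng.normal(size=1000)
w = rng.normal(size=1000)

exact = np.dot(x, w)
noisy = noisy_mac(x, w)
# Inexact but close: the accumulated noise is small relative to
# the magnitude of the sum.
print(exact, noisy, abs(noisy - exact))
```

If the workload tolerates that much error, as neural networks often do, the extra machinery digital-logic uses to guarantee exactness is pure overhead.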
Simply put, "if that is the case, why has no one tried it before?".
Folks have made analog-computers before. Sounds like they're just making one that implements a different algorithm.
For example, major machine-learning packages might come with hand-optimized Assembly for critical calculations as a trick to speed things up. They're basically doing the same thing, except hand-optimizing all the way down to analog-computing.
Such hand-optimizations can make sense if you believe that a critical-function is important enough to be worth the effort.
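As a rough illustration of that trick (using NumPy as a stand-in for those packages): a naive pure-Python MAC and NumPy's `np.dot` compute the same sum, but the latter dispatches to an optimized BLAS kernel, which in common implementations includes hand-tuned Assembly for hot paths:

```python
import numpy as np

# Naive pure-Python multiply-and-accumulate: one multiply and one
# add per element, interpreted step by step.
def mac_python(x, w):
    total = 0.0
    for xi, wi in zip(x, w):
        total += xi * wi
    return total

x = [1.0, 2.0, 3.0, 4.0]
w = [0.5, 0.5, 0.5, 0.5]

# np.dot computes the same sum via an optimized BLAS kernel.
print(mac_python(x, w), np.dot(x, w))   # both 5.0
```

Same function, different implementation level; the analog-circuit approach just pushes that same trade one more level down.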