
I was reading about simulation of artificial intelligence, machine learning, and similar subjects, and saw news that some startups around the world are using analog computers to run artificial intelligence / machine learning workloads.

Their argument is that (some) analog computers are not exact: even if you run the same calculation more than once on them, they will give different results, unlike digital computers, which receive exact numbers and give exact results (0's and 1's). And thus, since brains are also imperfect, it is "logical" to use imperfect machines to simulate such a thing. The other part of the argument is that "Moore's law is reaching its limit."

Well, I'm not a software engineer, and initially I thought their argument made sense, but the more I think about it, the more unlikely it seems. Simply put: if that is the case, why has no one tried it before?

Here is an article I read about the subject; it may help answer the question. It uses a lot of terms that I couldn't quite grasp, so I don't know whether it is really factual or just more hype.

  • The article you linked appears to answer all of your questions. You're not using your imagination; I don't fully understand the terms they're using there, but I get the gist. Commented Dec 22, 2021 at 1:58
  • If you're hung up on the idea that the same calculation can give you different results: 1. this is how our brains work, and 2. analog circuits correct themselves by using feedback mechanisms. Commented Dec 22, 2021 at 2:00
  • Some element of analog processing in electronic devices was actually fairly common in the recent past, so the premise that it hasn't been tried is false. The reason for shifting almost everything to digital is that digital processors are general-purpose and run software, whereas any kind of analog device is hand-crafted hardware.
    – Steve
    Commented Dec 22, 2021 at 7:38
  • Errors consist of two components: systematic errors and random errors. Re-running an experiment lets you average out the random error component, but cannot address the systematic error, which introduces a consistent bias. In an electrical circuit, systematic errors might be introduced e.g. through varying resistances or capacitances of components. For simple circuits, you could calibrate them to correct such errors. Unfortunately, neural networks exhibit nonlinear behaviour, so small errors can produce big changes in the output. Whether this matters depends on the use case. (See the sketch just after these comments.)
    – amon
    Commented Dec 22, 2021 at 8:34
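
A minimal simulation of that last point, with made-up numbers (both constants are purely illustrative): re-running and averaging cancels the random error, but the systematic bias survives.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 1.0
systematic_bias = 0.05   # consistent offset, e.g. a miscalibrated resistor (made-up figure)
noise_std = 0.10         # run-to-run random variation (made-up figure)

# Re-run the same "analog" calculation many times.
runs = true_value + systematic_bias + rng.normal(0.0, noise_std, size=10_000)

print(np.mean(runs))               # ~1.05: the random noise averages away...
print(np.mean(runs) - true_value)  # ...but the ~0.05 systematic bias remains
```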

1 Answer


tl;dr They're optimizing critical functions by custom-designing computers to perform those specific tasks. It's basically about optimizing efficiency.


Right now, equilibrium propagation is only working in simulation.

"Startup and Academics Find Path to Powerful Analog AI" (2020-07-30)

If they can simulate a computer, then they can already use it. For example, they can simulate its behavior and just observe its results; so, if all they want are the results, then they're already done.

But they don't just want the behavior. Instead:

Analog circuits could save power in neural networks in part because they can efficiently perform a key calculation, called multiply and accumulate.

"Startup and Academics Find Path to Powerful Analog AI" (2020-07-30)

That is, they want implementation efficiency.


Discussion: Analog computing is lower-level than digital logic.

Sometimes, folks say that machine code and Assembly are the lowest-level languages, while stuff like C, then C++, C#, Mathematica, and so forth are higher-level languages.

But analog computing is lower-level still. That is, we can look at a CPU as a VM for digital logic, virtualizing it on top of an even lower-level language: physics.

So, you know how folks sometimes recommend hand-optimizing Assembly for critical functions? Hand-designing a circuit for critical functions is a yet more extreme variation of the same idea.

That appears to be what they're doing: hardcore optimization of functions that they expect to be of high value, stripping away digital structuring for the sake of efficiency.


Discussion: Imperfection isn't a virtue.

Their argument is that (some) analog computers are not exact: even if you run the same calculation more than once on them, they will give different results, unlike digital computers, which receive exact numbers and give exact results (0's and 1's). And thus, since brains are also imperfect, it is "logical" to use imperfect machines to simulate such a thing.

Presumably they're arguing that analog computers can do the job, so digital logic would be unnecessarily expensive overhead.

That is, the motivation isn't to grab some sort of new, magical power available only to analog computers so much as a perceived lack of motivation to suffer the overhead of digital structuring.


Simply put: if that is the case, why has no one tried it before?

Folks have made analog computers before. It sounds like they're just making one that implements a different algorithm.

For example, major machine-learning packages might come with hand-optimized Assembly for critical calculations as a trick to speed things up. They're basically doing the same thing, except hand-optimizing all the way down to analog computing.
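
To make that concrete, here is a hedged sketch with NumPy standing in as the example package (the package and any timings are illustrative, not from the answer): the same dot product run as an interpreted Python loop and via np.dot, which dispatches to hand-optimized native kernels (typically BLAS).

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Same algorithm (a dot product), two implementations.
t0 = time.perf_counter()
slow = sum(x * y for x, y in zip(a, b))  # interpreted Python loop
t1 = time.perf_counter()
fast = np.dot(a, b)                      # hand-optimized native kernel (BLAS)
t2 = time.perf_counter()

print(f"loop:   {t1 - t0:.4f}s  result={slow:.4f}")
print(f"np.dot: {t2 - t1:.4f}s  result={fast:.4f}")
# Both agree up to float rounding; only the implementation differs.
```

Analog hardware is the same move taken further: keep the algorithm, change the substrate it runs on.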

Such hand-optimizations can make sense if you believe that a critical function is important enough to be worth the effort.

  • The lowest-level language is expressed with a soldering iron. : ) Commented Dec 22, 2021 at 16:07
