14

I've been researching the relationship between brain neurons and nodes in neural networks. Repeatedly it is claimed neurons can do complex information processing that vastly exceeds that of a simple activation function in a neural network.

The resources I've read so far suggest nothing fancy is happening with a neuron. The neuron sums the incoming signals from synapses, and then fires when the sum passes a threshold. This is identical to the simple perceptron, the precursor to today's fancy neural networks. If there is more to a neuron's operation than this, I am missing it due to lack of familiarity with the neuroscience terminology. I've also perused this stack exchange and haven't found anything.

If someone could point to a detailed resource that explains the different complex ways a neuron processes the incoming information, in particular what makes a neuron a more sophisticated information processor than a perceptron, I would be grateful.

10
  • 2
    There is some information in: What is the difference between biological and artificial neural networks?
    – Arnon Weinberg
    Commented Jul 21, 2022 at 4:18
  • 1
    There was a paper last year where they tried to simulate a single cortical neuron with a deep neural network. It took around 5-8 layers and 1,000 nodes to get 99% accuracy. (Here's a more accessible article about the paper). So yes, a ballpark estimate is that it's at least a thousand times more complex.
    – towr
    Commented Jul 22, 2022 at 12:29
  • 1
    It sounds like you wanted to ask a different question (something like "Are there BNN computations that cannot be emulated using perceptron-based ANNs?"). If so, then please post that as a separate question rather than adding myriad comments unrelated to the actual question asked - this is not a discussion forum.
    – Arnon Weinberg
    Commented Jul 22, 2022 at 19:24
  • 3
    I don't want to add yet another answer, but there is a whole field called computational neuroscience dedicated to developing ANNs that emulate the way BNNs actually work, using non-perceptron artificial neurons, such as spiking neurons, so that's a good place to look if you were actually interested in the differences between them.
    – Arnon Weinberg
    Commented Jul 22, 2022 at 19:28
  • 1
    It seems you hold the computational instructionists' point-to-point wiring model, which was strongly criticized by selectionists such as Edelman, who argued that it is not the defining feature of biology. If such a mechanical perceptron really were what our biological reality is at bottom, you would see much less variation and diversity in the biological world. Edelman emphasized degeneracy: multiple realizable topologies and areas within the whole neuron population for the same type of input, with the actual cell types arising through natural selection, meaning the perceptron is not the biological bottom.
    – cinch
    Commented Nov 3, 2022 at 3:51

5 Answers

14

Even if you fully accept the analogy, a cortical pyramidal neuron's function is best described as that of a multi-layer perceptron, with each dendrite, or part of it, independently integrating information (Poirazi 2003). In addition, and partly because of that fact, the location of a connection on the cell plays a crucial role in determining its importance. It just so happens that cortical neural circuits are constructed in a way that exploits this principle (Larkum 2013). Simply put, a real neuron integrates information in time and space, in a way shaped by biological evolution; it does not just lump all inputs together.

However, the true marvel, and most of the differences from artificial neural networks, appears when you look beyond the level of the single neuron: how the neural system specializes and actively interacts with the rest of the body and the rest of the world, including with other neural systems. Just as food for thought: energy management is of primary importance for neural function, and the nervous system is organized around this factor.
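
To make the contrast concrete, here is a minimal Python sketch of the idea; it is not the Poirazi model itself, and the sigmoidal subunit nonlinearity, the grouping of inputs into dendrites, and all numbers are invented purely for illustration:

```python
import numpy as np

def perceptron(x, w, b):
    """Classic perceptron: one global weighted sum, then a hard threshold."""
    return float(np.dot(w, x) + b > 0)

def dendritic_neuron(groups, w_dend, w_soma, b):
    """Sketch of a 'neuron as multi-layer perceptron': each dendritic subunit
    integrates its own inputs through a saturating nonlinearity before the
    soma sums the subunit outputs. Sigmoid and grouping are illustrative."""
    subunits = [1.0 / (1.0 + np.exp(-np.dot(w_d, x_d)))
                for w_d, x_d in zip(w_dend, groups)]
    return float(np.dot(w_soma, subunits) + b > 0)

w_dend = [np.array([4.0, 4.0]), np.array([4.0, 4.0])]
w_soma = np.array([1.0, 1.0])

# Same four synaptic inputs, different spatial arrangement on the dendrites:
clustered = [np.array([1.0, 1.0]), np.array([0.0, 0.0])]   # both on one branch
scattered = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]   # one per branch
print(dendritic_neuron(clustered, w_dend, w_soma, b=-1.7))  # 0.0
print(dendritic_neuron(scattered, w_dend, w_soma, b=-1.7))  # 1.0

# A flat perceptron sees only the total drive, which is identical in both cases:
w_flat = np.array([4.0, 4.0, 4.0, 4.0])
print(perceptron(np.array([1, 1, 0, 0]), w_flat, b=-1.7),
      perceptron(np.array([1, 0, 1, 0]), w_flat, b=-1.7))   # 1.0 1.0
```

Moving the same inputs between branches changes the dendritic neuron's output but not the flat perceptron's, which is the location dependence referred to above.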


Response to other answers:

I see many examples in the other answers where people rightly point to phenomena such as neuromodulation and physiological state changes, the topology of the networks, and different learning rates for different neurons. These are, to me, all valid differences between networks of real neurons and networks of perceptrons. However, I consider it relatively straightforward to adjust the connectivity architecture, or to make the learning rate or bias differ between perceptrons, to emulate real neural networks; the small adjustments required do not alter a single perceptron's principle of operation. These are differences that appear when looking at networks of many neurons, and they are beyond the question's scope as I understand it.

The asynchronous updating of weights that @luke-griffiths contributed still concerns the level of many neurons but the adjustment required for perceptrons to incorporate it is a radical move.

I have hinted that there are even more fundamental differences when you look at a level of description beyond the single neuron such as the coupling between neurons and the vasculature, the digestive system, the neuroglia, and the muscles that cannot be copied by networks of perceptrons, simply because they are not attached to a body.

@bonaca listed the existence of electrical synapses, which can be considered a characteristic of a single neuron. However, electrical synapses have very specific functionality and are not the main mode of information processing in the brain, certainly not in the cortex, which is what I have studied.

It now occurs to me, though, that I and everyone else so far failed to mention another difference between real neurons and perceptrons. Real neurons exhibit metaplasticity due to memory carried by resource allocation (e.g. receptors, structural molecules such as actin, mRNA) or heterosynaptic plasticity (Abraham 2008). To adjust a perceptron to emulate metaplasticity you would have to either make the bias or the learning rate a function of time and/or make each input's weight a function of other inputs' weights. I consider such changes as a major departure from the operating principle of a perceptron, even more so than the spatial sampling I initially mentioned.
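
To make that concrete, here is a hypothetical Python sketch of such a modified perceptron; the decay constants and the coupling rule are invented for illustration and are not a model of any real plasticity mechanism:

```python
import numpy as np

class MetaplasticPerceptron:
    """A perceptron whose learning rate and bias depend on its own history,
    a crude stand-in for metaplasticity. The decay constants and the
    coupling between weights are arbitrary illustrative choices."""

    def __init__(self, n_inputs, lr=0.1):
        self.w = np.zeros(n_inputs)
        self.b = 0.0
        self.lr = lr
        self.activity = 0.0  # running record of recent output

    def forward(self, x):
        y = float(np.dot(self.w, x) + self.b > 0)
        self.activity = 0.9 * self.activity + 0.1 * y  # memory of past activity
        return y

    def update(self, x, target):
        y = self.forward(x)
        # The rule for changing weights is itself state-dependent:
        # the learning rate shrinks when the unit has been very active recently.
        effective_lr = self.lr / (1.0 + self.activity)
        error = target - y
        self.w += effective_lr * error * np.array(x)
        # Heterosynaptic-style coupling: every weight is nudged toward the
        # mean of the others, so one input's weight depends on the rest.
        self.w += 0.01 * (self.w.mean() - self.w)
        self.b += effective_lr * error

p = MetaplasticPerceptron(2)
for _ in range(20):
    p.update([1.0, 0.0], target=1)
print(p.w, p.b, round(p.activity, 2))
```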

4
    Great, I like your updates. I have the same response to the other answers, none are pointing out any mechanisms that are fundamentally at odds with the perceptron model. Though I must confess I'm also not clearly seeing anything in your answer along those lines either.
    – yters
    Commented Jul 22, 2022 at 13:20
  • 2
    Why don't you think that metaplasticity is a major departure from the perceptron model? Altering the way the learning rate, the bias, and the weights are calculated, and including additional parameters (as is necessary to emulate real neurons' capability for metaplasticity), produces a whole new model for neuronal integration.
    – vkehayas
    Commented Jul 22, 2022 at 14:13
  • 1
    If everything, or at least most of the algorithm, is changed, I would say that there is definitely an inadequate match between perceptrons and real neurons. To deny that is equivalent to saying that neurons just do simple linear regression: all the rest are modifications on top of OLS.
    – vkehayas
    Commented Jul 22, 2022 at 14:14
    Yes, there is obviously a bad match between perceptrons and neurons, in a strict technical sense. I'm trying to get at the more nebulously defined "information processing capability", which I've failed to define well enough to get an answer I'm satisfied with. What I have in mind is more like a finite automaton or Turing machine, and it appears from these answers that neurons don't do anything like that.
    – yters
    Commented Jul 22, 2022 at 15:03
10

Just a couple examples:

Spatial integration

This is the basis of @vkehayas's answer. More generally, although integration at the soma is fairly linear up to a point, dendrites are highly non-linear, because active conductances (i.e., voltage-gated channels) contribute in addition to passive ones (i.e., neurotransmitter receptors or experimental current injections). This makes the spatial arrangement of input very important, not just the global sum over all inputs as in a perceptron. See for example:

London, M., & Häusser, M. (2005). Dendritic computation. Annual Review of Neuroscience, 28(1), 503-532.

These spatial arrangements are also critical for distinct forms of inhibitory control of neurotransmission, particularly when people talk about "shunting inhibition" occurring with inhibition out on distal dendrites.
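
As a loose caricature of why the kind of inhibitory conductance matters (a hedged sketch; the subtraction-versus-division contrast is a simplification, not a biophysical model):

```python
# Subtractive vs. shunting (divisive) inhibition, in caricature.
# Shunting inhibition opens channels near the resting potential, which
# mostly increases the membrane conductance and divides the excitatory
# drive rather than subtracting a fixed amount from it.

def subtractive(excitation, inhibition):
    return max(excitation - inhibition, 0.0)

def shunting(excitation, inhibitory_conductance, leak=1.0):
    # Effective depolarization scales with total conductance (illustrative).
    return excitation / (leak + inhibitory_conductance)

for g_inh in (0.0, 1.0, 4.0):
    print(g_inh, subtractive(10.0, g_inh), round(shunting(10.0, g_inh), 2))
```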

Short-term plasticity

Synapses are not memoryless; short-term plasticity occurs primarily presynaptically through calcium concentrations. "Weak" synapses (=those with low release probability) tend to show facilitation, as single action potentials do not allow sufficient calcium entry into the presynaptic bouton for release, but over multiple summed action potentials in quick succession the calcium concentration rises and release probability increases.

"Strong" synapses (=those with high release probability) tend to show the opposite, short-term depression, because initial release of vesicles depletes those available to release.

See for example:

Abbott, L. F., & Regehr, W. G. (2004). Synaptic computation. Nature, 431(7010), 796-803.

(I'd also recommend Abbott's book with Peter Dayan, "Theoretical Neuroscience" as a good intermediate-level textbook on neural computation; it should be especially accessible if you are familiar with artificial networks)

Neuromodulation and diverse neurotransmitter actions

While one might take glutamatergic and GABAergic transmission in the CNS and simplify them to a + and a - sign, this greatly oversimplifies things even for just those two neurotransmitters. GABA, for example, will diffuse out of highly active synapses and activate different classes of highly sensitive GABA receptors located far from synapses. Glutamate doesn't just open excitatory channels, but also triggers G-protein-coupled receptors on both pre- and post-synaptic cells and can modulate synaptic strength over intermediate time scales.

There's enough complexity with just those two "typical" neurotransmitters before we even start to think of other neuromodulators. You could fill library shelves with all the diverse ways that neuromodulators can change the behavior of a circuit. Some of the most remarkable examples are found in what are misleadingly thought of as "simple" nervous systems, such as in invertebrates, but similar mechanisms also occur in mammalian nervous systems. Subtle neuromodulation at the level of a single cell can have massive consequences when viewed at the scope of whole networks: the difference between wake and sleep, for example.
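
As a cartoon of just one of the simpler effects, a neuromodulator can be thought of as reshaping a unit's transfer function (its gain or excitability) without being an 'input' in the perceptron sense. This is a hypothetical sketch; real neuromodulators act through many mechanisms and over many timescales:

```python
import numpy as np

def modulated_unit(x, w, b, gain=1.0, threshold_shift=0.0):
    """A unit whose transfer function is reshaped by a slow 'modulator'
    signal rather than by its fast synaptic inputs. The multiplicative
    gain and additive threshold shift are illustrative stand-ins."""
    drive = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-(gain * drive - threshold_shift)))

x = np.array([0.5, 1.0]); w = np.array([1.0, -0.5]); b = 0.1
for gain in (0.5, 1.0, 2.0):   # same synaptic input, different "brain state"
    print(gain, round(modulated_unit(x, w, b, gain=gain), 3))
```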

A couple reviews to start with:

Fellous, J. M., & Linster, C. (1998). Computational models of neuromodulation. Neural computation, 10(4), 771-805.

Marder, E., & Thirumalai, V. (2002). Cellular, synaptic and network effects of neuromodulation. Neural Networks, 15(4-6), 479-493.

10
  • 1
    The first two sound like they'd be covered by the perceptron's weights and bias. The neurotransmitters are definitely not part of the perceptron model, but they also are not immediate, and take 0.5 seconds to minutes to have an effect. Based on your description, the neurotransmitters' effect would be equivalent to updating the perceptron weights, such as during training. The neurotransmitters don't directly cause the neuron to fire.
    – yters
    Commented Jul 21, 2022 at 17:51
  • 2
    @yters No, they are not covered by weights and bias, those are linear. Neuromodulators can be involved in training, but not only that, and you don't get any sort of rapid task-switching ability with training. They can also be more or less immediate, at least on the scale of perception, I don't know why you think they take minutes. For training you also need something to train on. If you want to keep adding new tricks to your perceptron model to have it do things it can't do, you don't have a perceptron any more.
    – Bryan Krause
    Commented Jul 21, 2022 at 18:28
    Sorry, I misunderstood neurotransmitters. Now I realize those are the main mechanisms of exciting neurons to generate a signal.
    – yters
    Commented Jul 21, 2022 at 23:38
  • 2
    $\begingroup$ @yters "all the inputs appear to be either binary or analog signals" - there are no other signals, anything is either binary or analog, so yeah, that much is true... "So, a neuron's information processing is not significantly more complex than a perceptron" - You can keep believing that, but there are several answers here explaining how that's not true. I don't know what else to say. I don't know what you mean by 'no equivalent of codes'... Neurons definitely have memory (though, so do perceptrons) and definitely output differently with different input - that's a form of 'code'. $\endgroup$
    – Bryan Krause
    Commented Jul 22, 2022 at 0:51
  • 1
    @yters I think your question was pretty clear about your understanding that "The neuron sums the incoming signals from synapses, and then fires when the sum passes a threshold" - the answers explain how there's more to it than that.
    – Bryan Krause
    Commented Jul 22, 2022 at 14:48
5

I don't know enough about neural networks to properly define the difference between a perceptron and a neuron, nor do I know how far machine-learning definitions can be expanded or generalized to more fully mimic a biological neuron. But I want to clarify why your assertion that "the neuron sums incoming signals from synapses, and then fires when the sum passes the threshold" is not entirely correct, and certainly not the full picture.

Neurons integrate information differently

See vkehayas' excellent answer.

Neurons receive different types of 'input'

  1. A single sensory neuron can sometimes sense multiple types of external stimuli (e.g. light and mechanical stimuli) and integrate the information (Revilla-i-Domingo et al., 2021)
  2. Chemical synapses with classical neurotransmitters (e.g. serotonin, dopamine, adrenaline...) - this is what people most commonly think of as 'input', and it matches the description I quoted. Even these neurotransmitters can have different functions, such as excitation or inhibition.
  3. Electrical synapses, or gap junctions, where cells are directly electrically coupled.

Neurons change their behavior depending on physiological state and presence of various molecules

These could arguably be considered another type of input. They can change firing patterns and activation thresholds of neurons.

  1. Neuromodulators act more slowly, often have a long-lasting effect, and change how neurons respond to synaptic signals. They are often not released at synapses and often travel long distances. Neuropeptides are a very big and interesting category of neuromodulators (e.g. Nussbaum and Blitz). There are at least 65 neuropeptide families (Wang et al., 2015).
  2. Physiological state modulates neuronal activity (Shetty et al. 2012)

Activation threshold is variable in biology

Each neuron can have a different threshold potential, and it can change quickly, depending on cell type, physiological state, presence or absence of various neurotransmitters and other molecules.
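
A toy sketch of that point, with the dependence on cell type and modulator level invented purely for illustration:

```python
def fires(summed_input, cell_type_offset=0.0, modulator_level=0.0):
    """Toy illustration: the firing threshold is not a fixed constant but a
    function of cell type and physiological state (hypothetical dependence)."""
    threshold = 1.0 + cell_type_offset - 0.5 * modulator_level
    return summed_input > threshold

print(fires(1.2))                          # baseline threshold
print(fires(1.2, modulator_level=1.0))     # same input, lowered threshold
print(fires(1.2, cell_type_offset=0.5))    # different cell type, higher threshold
```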

Neurons can output different types of information

They may secrete different classical neurotransmitters, neuromodulators, etc. Each type of neuron has the potential to only secrete specific types of neurotransmitters and/or neuromodulators and/or other molecules. If a neuron is activated, it might secrete all types of molecules it has stored, or only one type. It can even release different molecules at different locations (Ghijsen and Leenders, 2005)

Information transmission between neurons is not straightforward

As explained above, the amount, location and type of 'signal' a pre-synaptic neuron 'outputs' can vary. Conversely, the type, location and amount of 'signal' a post-synaptic neuron receives can also vary. The amount of neurotransmitter present, the distance it has to travel, and how many receptors for that molecule the post-synaptic neuron has will all affect the amount of synaptic 'input' a neuron receives.

Topology of the network

  1. Every neuron can also send information in the opposite direction, i.e. the postsynaptic neuron sends signals to the presynaptic one with retrograde neurotransmitters like nitric oxide.
  2. Each neuron is unique. They receive different types of inputs, they have different axon length, topology and myelination (which changes the amount of time to transmit information), has different types and amounts of receptors for neurotransmitters, external stimuli, etc., they secrete different types and amounts of signalling and other molecules...
  3. Neurons only connect to specific other neurons, and create an incredibly complex network. To my limited knowledge, in artificial neural networks all nodes in one layer are connected to all nodes in the next layer. There are no such cases in biology.
    3.1. Individual neurons can synapse to any number of other neurons (but keep in mind that synaptic transmission is not all there is!). The number of synaptic partners strongly depends on the type of neuron. This is equally true for broad neuron categories like inter- or motoneuron, and for narrower ones like subtypes of Kenyon cells.
    3.2. The animal nervous system is incredibly complex, and can't be subdivided into fully separate units. Even the loose arbitrarily defined signalling "subnetworks" (e.g. taste and pain perception pathways) interact with each other in many different ways and many different levels.

Central pattern generators are another interesting feature of biological systems.

Finally, as I was searching for some of the references, I stumbled upon this article by Lui et al., which I haven't read fully, but seems to explain how features of biological systems can help improve artificial neural networks and at least partially answer your question.

3

Yes, a neuron's information processing is more complex than a perceptron's information processing. Here I describe one way it's more complex; I am not claiming this is the only way it is more complex. (One way is sufficient to answer the question).

A perceptron's "axons" are connected to every one of the perceptrons in the next network layer.

A neuron's axons are connected to many other cells, and those cells do not exist in separate "layers" at all (in some cases the physiology happens to arrange into layers; that's not universal).

Therefore a neuron's information processing is more complex than a perceptron's information processing, because the mixing of its output with that of other neurons' output happens at different "phases".

To see what I mean by phases, consider a perceptron in a neural network. As it processes input, a single input vector is processed all the way through the entire network in a single coherent "wave" of activation (one matrix multiplication performed after another in series). At no point is one perceptron doing its thing while its sibling perceptrons are inactive.

By the terminology I'm using, all of a perceptron's sibling perceptrons' outputs are sent "in phase" to the next layer. So a perceptron's output is always processed in the same "activation environment"; it's always a part of a single type of whole: the set of outputs of its whole layer.

But if two neurons are siblings, i.e. if they both send axons to the same downstream neuron(s), they may or may not send their signals at the same time. Therefore the overall "activation environment" in which its output is interpreted can vary.
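
A toy way to see the contrast (hypothetical numbers; real spike timing is continuous and far richer than this):

```python
import numpy as np

w = np.array([1.0, 1.0, 1.0])   # downstream weights for three sibling inputs

# Layered ANN: siblings always arrive together, so the 'environment' in which
# input A is interpreted never changes.
print(np.dot(w, [1.0, 1.0, 1.0]))   # A always arrives with both siblings active

# Biological caricature: the same presynaptic signal A may arrive alone,
# with one sibling, or with both, so the downstream sum it contributes to
# (and hence its effect on firing) differs from moment to moment.
for siblings in ([0.0, 0.0], [1.0, 0.0], [1.0, 1.0]):
    print(np.dot(w, [1.0, *siblings]))
```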

An analogy would be a football player who sometimes plays on a small team of 4 players, other times plays on a team of 100 players, and other times plays football as a one-man team. The game that football player is playing is more complex than the game being played by someone who always plays in an 11v11 game.

Insofar as "a neuron's information processing" is the game that neuron is playing, a neuron's game is more complex than a perceptron's game, in the same way that the variable-team football player's game is more complex than the constant-team-sized player's game.

2

Biological neurons operate with response latency, refraction, and inhibition. It is my understanding that perceptrons do not employ these characteristics.

Response latency

In biological neurons, there is a delay between the arrival of a supra-threshold stimulus and the generation of a voltage spike (action potential). The length of the response latency depends on the intensity and frequency of the stimulus. The intensity of the stimulus can be modulated by synaptic connections. Thus, it is the length of this delay that is adjusted in biological learning.

Refraction time

After a biological neuron fires, it enters its absolute refractory period. During this time, it is unable to fire. Biological neurons also have a relative refractory period, during which their firing characteristics are highly modified.

Inhibition

The generation of action potentials in biological neurons can be suppressed through inhibition. When inhibited, biological neurons require a greater stimulus in order to generate an action potential.

These three characteristics, among others like bursting and variable spiking modes, distinguish biological neurons from most artificial neurons.
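
For intuition, here is a minimal leaky integrate-and-fire style sketch that includes a response latency (via membrane integration), an absolute refractory period, and inhibitory input; all constants are arbitrary illustrative choices, not fitted values:

```python
def simulate(excitation, inhibition, steps=200, dt=1.0,
             tau=20.0, threshold=1.0, refractory_steps=5):
    """Toy leaky integrate-and-fire neuron (illustrative parameters).
    excitation/inhibition are functions of the time step returning drive."""
    v, refractory, spikes = 0.0, 0, []
    for t in range(steps):
        if refractory > 0:            # absolute refractory period: cannot fire
            refractory -= 1
            v = 0.0
            continue
        drive = excitation(t) - inhibition(t)   # inhibition raises the bar
        v += dt / tau * (-v + drive)            # leaky integration => latency
        if v > threshold:
            spikes.append(t)
            v = 0.0
            refractory = refractory_steps
    return spikes

# Constant excitatory drive; inhibition switched on halfway through,
# after which the neuron stops spiking.
print(simulate(lambda t: 1.5, lambda t: 0.0 if t < 100 else 1.0))
```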

6
  • 1
    These are more distinctions in a practical sense, because nothing prevents artificial neural networks from using these mechanisms. I think these are not used because they are not conducive to learning with the current algorithms we have.
    – yters
    Commented Jul 22, 2022 at 13:15
  • 1
    @yters If an artificial network uses any of these mechanisms it is not a perceptron.
    – Bryan Krause
    Commented Jul 22, 2022 at 14:47
    @BryanKrause in a strict technical sense, yes, but it is also pretty trivial to modify a perceptron to involve these mechanisms. E.g. adding a delay to a program is as simple as 'sleep(1)', and changing the perceptron bias affects the firing threshold. It's hard to see how these mechanisms add significant sophistication to the information processing capability of a neuron.
    – yters
    Commented Jul 22, 2022 at 14:53
  • 1
    @yters The perceptron model is a simple weighted vector sum. This is like saying that a raft and aircraft carrier are the same thing because it would be trivial to add planes to the raft.
    – Bryan Krause
    Commented Jul 22, 2022 at 14:56
  • 2
    @yters You are right, there are non-perceptron ANNs that can use these mechanisms. They are not perceptrons. Wikipedia does a good job of providing a specific definition of perceptrons. Commented Jul 22, 2022 at 17:57
