
When someone throws a ball, it is possible to predict where the ball will be based on the person's stance, their motion during the throw, the ball's speed, its trajectory, etc. If you see a window in the path of that trajectory, you can sense a risk of danger.

When you walk through the park on a warm, sunny day and see melting ice cream you can guess that some child or distracted adult dropped it. You can also guess how long ago it was dropped depending on how hot it is outside and how melted the ice cream is.

When you meet someone you know you can guess how deep your conversation will go, its quality, the topics of discussion, and many other attributes based on your knowledge of the person.

The brain seems able to model reality to some extent and play that model forward or backward in the mind. The scenario I'm imagining is that humans are sort of like robots with five sensor channels and a brain running a fancy built-in AI process that tries to model reality from the sensor data being streamed in during experience. The purpose of the model is to aid survival by avoiding danger and acquiring resources/rewards. The model of reality does not need to be accurate as long as it serves that main purpose.

What I'm trying to figure out is the details of how this process creates, updates, and uses the model. So far my guess is that past experience is a stream of sensor data from which the brain builds and maintains a model. In any moment, the current physical and mental context activates the data and the parts of the model relevant to that context, and the brain then uses them to aid decisions and actions. This is vague, and it's all I have. I have looked for actual theories that explain the process in detail but have not found any.

Do any of you know the current best model of how a person models reality? Or at least where a catalog of such models might be found? More specifically: what model best describes how to take a five-channel sensor stream and build and maintain a model of reality from it? I would also like to know how the model is then used/activated in any given moment of experience. The theory does not have to match exactly how the brain does it; I'm looking for a non-vague logical model, even if it doesn't get the structure exactly right.

UPDATE: I will attempt to make this question less broad.

I'm looking for a high level explanation of two things:

  1. What is the process or sequence of operations that takes sensory data in any moment and updates the model of reality the person has built up so far?

  2. In any given moment what determines the parts of the model of reality that will be used out of the entire model? Given some current context most of the model is probably not relevant. What rules or processes determine what is relevant to use for thinking/decisions/etc?

Basically, I'm imagining two main things happening in any moment. Sensory data comes in that can cause an update to the model. At the same time that data coming in activates parts of the model to be used for thinking, actions, etc.
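As a toy illustration (not a claim about how the brain works), the two things I'm imagining, prediction-error-driven updating and context-driven activation, could be sketched like this. All names, numbers, and the learning rule here are invented:

```python
# A toy of the two-step loop: (1) sensory data updates the model in
# proportion to prediction error; (2) the current context activates
# only the relevant part of the model. Everything here is invented.

class WorldModel:
    def __init__(self):
        # each context maps to a learned prediction of its sensory signal
        self.predictions = {}

    def update(self, context, observation, learning_rate=0.1):
        """Step 1: nudge the model toward the observed signal."""
        predicted = self.predictions.get(context, 0.0)
        error = observation - predicted          # prediction error
        self.predictions[context] = predicted + learning_rate * error
        return error

    def activate(self, context):
        """Step 2: retrieve only the part of the model relevant now."""
        return self.predictions.get(context, 0.0)

model = WorldModel()
for observation in [1.0, 1.0, 1.0, 1.0]:
    model.update("ball-thrown", observation)

# the activated prediction has moved toward the repeated signal
print(round(model.activate("ball-thrown"), 3))  # prints 0.344
```

The point of the sketch is only that the same incoming data drives both operations: it produces the error that updates the model, and it supplies the context that selects which part of the model is active.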

In terms of granularity, I am looking for something high-level: at most 1-3 paragraphs per stage. I want to be able to create a diagram of the process, with a paragraph summary of what is happening for each arrow or node in the diagram.

Essentially, I would like an explanation of the process for those two questions at a high level (no deep details necessary).

UPDATE: Everything mentioned has been useful. Predictive coding, mentioned by Johannes, appears to provide the best explanation, and there are experiments backing up the idea. The video I linked and the Anil Seth video linked by CriglCragl both seem to be about predictive coding, although they don't use the phrase specifically. If I find or develop something more useful for this post, I'll add it.

  • See Mental models.
    – Conifold
    Commented Jan 7, 2022 at 3:52
  • For abstract understanding, schemas appear plausible. For linguistic reasoning, construction grammar appears similar. For lower-level, pre-linguistic and at least some spatial reasoning, image schemas may offer a clue.
    – Michael
    Commented Jan 7, 2022 at 6:41
  • This is asking for the entirety of the 'easy' problem of consciousness en.wikipedia.org/wiki/… That is too broad
    – CriglCragl
    Commented Jan 7, 2022 at 12:40
  • See also predictive coding.
    – Johannes
    Commented Jan 9, 2022 at 3:40
  • I'm still looking through everything posted here. This video I found is a small step in the right direction link
    – vergilvsyn
    Commented Jan 14, 2022 at 22:53

3 Answers


I think your question is too broad, so I'm just going to signpost towards resources.

Anil Seth has a great TED talk about his neuroscience work on these issues. We have many 'subroutines', with fully 50% of the human brain processing visual information: things like edge detection, predicting movement, interpreting solid shapes, etc.

But bear in mind that a distinctive quality of our experience is that it is not of fragmented subroutines; instead we have a sense of the unity of our conscious experiences. We can look at consciousness as our Global Workspace, which the outputs of subroutines enter if they cross some threshold of relevance.
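That threshold idea can be sketched minimally; the subroutine names and salience values below are invented for illustration, not part of any actual model:

```python
# Invented illustration of the Global Workspace threshold: many parallel
# subroutines produce outputs, but only those whose salience crosses a
# threshold are "broadcast" into the shared workspace.

THRESHOLD = 0.5

subroutine_salience = {
    "edge-detection": 0.2,
    "motion-prediction": 0.8,    # fast-approaching object: highly salient
    "shape-interpretation": 0.4,
    "threat-appraisal": 0.9,
}

workspace = {name for name, salience in subroutine_salience.items()
             if salience >= THRESHOLD}

print(sorted(workspace))  # ['motion-prediction', 'threat-appraisal']
```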

Split-brain experiments helped us understand that, prior to or separate from the workspace, our brain hemispheres have subtly different emphases: one toward modelling the self, one toward modelling the world.

Abstracting experience into concepts in a salience landscape: According to the major theories of concepts, where do meanings come from?

Convolutional neural networks: How is knowledge possible? and the connectome: Why isn't there just one quale? (qualia)

Intersubjectivity and Constructor Theory: How can brain know about consciousness if it can't be observed outside ourselves?


One of the more consistent themes repeatedly demonstrated in AI speculation about modeling mental processes is that AI enthusiasts underestimate, by many, many orders of magnitude, the complexity of both cognitive tasks and the algorithms needed to execute them. Note that the predicted date for the development of strong AI has averaged approximately 50 years in the future for roughly the last 60 years. See page 13: https://www.fhi.ox.ac.uk/wp-content/uploads/FAIC.pdf Read the rest of the paper to get more insight into the sorts of issues you will need to deal with in your predictive modeling.

This tendency to drastically underestimate the challenge, and the sorts of failures that result, has continued into this century, as neurologists joined the AI modelers in trying to understand consciousness. The sources of the failures of basically every effort to solve the hard problem, and most of the easy problems, of consciousness are documented in Peter Hankins's "The Shadow of Consciousness" https://www.amazon.com/Shadow-Consciousness-Little-Less-Wrong/dp/1507869177 This book is essential reading if you want to succeed in your project.

Also of great use would be studying evolutionary psychology, as well as child development. The process of developing a world-model, which happens in late babyhood, is explicitly what you are talking about.

Note that ALL of our wiring, and very likely most of our processing, is via neural nets, which simply cannot DO the representational model-building you are talking about.


This is more or less what Kant proposes, which I personally found the most precise description of this modeling process, and it seems to address your questions precisely.

I know this is a raw and crude simplification of perhaps the most profound treatise in the history of all philosophy (Kant's Critique of Pure Reason), but let's go anyway:

The model you refer to is known as knowledge, and it helps us at least improve our daily lives (which raises another deep philosophical issue: what is the goal of knowledge?).

It is crucial to understand that the model is not the reality (as is commonly said, the map is not the territory). Knowledge is our map, while Kant calls the terrain the noumenon: something we don't have access to due to our natural limitations (we only experience an infinitesimally small part of what happens outside of us, and not only because of distance, but also because of time, of scale, of what our senses can detect, of what we don't understand, etc.). So knowledge is essentially our model of the noumenon; it is evidently biased by our nature, and it can be flawed, but it is the only one we have access to.

Knowledge would be built as follows:

Phase 1: what Kant calls the transcendental aesthetic. The noumenon appears to us as phenomena, the form the world takes as appearance, which is what we grasp by means of our senses. Here, Kant proposes that space and time DO NOT really exist out there, but are built in our minds. Such constructs help us organize the external impressions obtained through the senses.

At this point, there is no imagination (what you are searching for), just impressions of the world that do not yet have the form of an object. There are only masses of unorganized information coming from all the senses.

Phase 2: the transcendental analytic. Here, the manifold of impressions is synthesized to build the concept, which corresponds, more or less, to the thing-as-it-appears (the thing-in-itself is a concept that could be associated with the noumenon; they are not precisely the same, but for this purpose they can be considered so). The object is the organized product of the manifold of impressions coming from the previous phase, and it is classified by means of a set of categories (existence/non-existence, unity/plurality, etc.), an idea Kant took from Aristotle.

So, at this point, there is still no imagination: just a set of isolated concepts of objects and their elementary spatio-temporal relationships.

Phase 3: the transcendental dialectic. Here, we apply certain rules to organize the concepts into higher-level ideas. Kant notices that this is where imagination and illusion take place. But these rules become flawed, usually because they are applied outside the domain they belong to (e.g., you imagine that every effect has a cause, which leads you to search for the first cause, and you can conclude that such an initial cause is God; but that is nothing more than a profound illusion, based on the rules of illusion).

Here's where imagination starts to determine how the world behaves, by means of the rules that are created (at this point, we have the objects, organized in space and time, and their relationships by means of learned rules, such as causality). A child who has never had a bouncing ball will never imagine that a ball will bounce the first time he drops it to the ground. That's an example of how imagination uses this organon of inner rules. So, the child will change his knowledge model (David Hume called it "custom and habit") to adapt to the new information. Notice that the concepts might change (now there are not only balls but also bouncing balls, and there are also new temporal objects, like a bouncing-event), and the rules will change (bouncing ball interacting with the floor -> bouncing-event).
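That revision step (new concepts and a new rule appearing when an observation surprises the model) can be put in a toy sketch; the data structures and names below are invented for illustration:

```python
# Invented toy of the revision step: a surprising observation adds new
# concepts and a new rule to the child's model ("custom and habit").

concepts = {"ball", "floor"}
rules = {}  # (object, interaction) -> expected event

def observe(obj, interaction, event):
    """Revise the model when an observation has no matching rule."""
    if rules.get((obj, interaction)) != event:
        concepts.add("bouncing " + obj)    # a new kind of object
        concepts.add(event)                # a new temporal object
        rules[(obj, interaction)] = event  # a new rule

observe("ball", "hits floor", "bouncing-event")
print("bouncing ball" in concepts)  # True
```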

