
As today has been a day for analogies:

dog : proximal environment :: human : Reality

A dog's ontology is presumably quite limited. A human's ontology is apparently maximal. While our average intelligence is quite a bit greater than a dog's, it is not "godly" by comparison (maybe "demi-godly," but not godly, since for instance we cannot create dogs from raw materials right now). But our ontology is maximal. If the analogy should carry over, as it seems it should, as in

(dog, proximal environment) : (human, Reality) :: (human, Reality) : (superintelligence, ?)

something would presumably have to replace the question mark, against all logical possibility. And even if something did replace it against all logical possibility, humans right now are at most merely demigods compared to dogs.

Might this imply a substantial hitch, some theoretical maximum of intelligence?

Update 1, 2014-10-17: While no one has pointed this out yet, during my discussions so far I realized that the frame of mind I was in when generating this curiosity could readily concede that "the analogy breaks down if superintelligence (up to and including hyperintelligence) must necessarily originate from at least as low as human-level intelligence." ... As a corollary, this does not bode well for traditional conceptions of God.

Update 2, 2014-10-17: Responder @ChrisSunami points out that a hunter-gatherer tribe ten thousand years ago would not have had a maximal ontology [in the relevant sense]. Since the main analogy broke down from Update 1, and in light of this hunter-gatherer point, a new analogy presents itself.

(early human, has no maximal ontology) : (modern human, has a maximal ontology) ::
(modern human, standard intelligence) : (derivable descendant "human", superintelligence)

This analogy has the bonus side effect that, since most arguments so far have been based on the magnitudes of ontological contents rather than the magnitudes of ontological expanses, it accommodates that concern by building in the outright irrelevance of the magnitudes of ontological contents.

So, to answer the original question, there is nothing to worry about. Comparing modern humans to dogs, or to early humans, via such tight extremophilic analogies ultimately implies no great filter to superintelligence or beyond.

  • When asked if he believed in God, Charles Darwin said "A dog might as well contemplate the workings of the mind of [Isaac] Newton. Let each man believe what he can." Commented Oct 16, 2014 at 10:39
  • "A human's ontology is apparently maximal." Evidence or proof please?
    – user4894
    Commented Oct 17, 2014 at 0:16
  • I answered this below when responding to the answer of @jobermark.
    – Dise
    Commented Oct 17, 2014 at 0:21

2 Answers


What makes you assume that human ontology is maximal? Kant surely did not. He assumed we are bound by the forms of intuition to almost always merely approximate reality, whereas a divine being would be able to know a deeper reality more directly. Presumably a semi-divine being would see some intermediate approximation much better than what we might attain.

I would claim that we are even more restricted by our biology than he imagined. Our physics is starting to show us a reality into which we have a very hard time going, and these are basically the same places where Kant imagined we would find walls.

  • We cannot avoid thinking sequentially, even though our own thought processes are not sequential but parallel, so we are failing to even leverage what we have been given.
  • We cannot dispel the illusion of space, despite experimental evidence that it ultimately fails us. Arguments about space at the Planck length are bizarre, because they have to use continuous curves, even though the whole idea is that space is discontinuous there.
  • We don't seem to be able to dispel the notion of quantity and retain abstraction, so we have a hard time truly visualizing things we know are real, like fractional particle spin.
  • We have a very hard time evading the limitations of quantification or negation, even though we can see that together they lead to logical contradictions (like Russell's paradox; see the rendering just below this list) -- heading down that path is just too alien, and we construct a fake playground where it is safe to talk normally instead.
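
For concreteness, here is a standard rendering of Russell's paradox (my gloss, not part of the answer itself): unrestricted comprehension together with negation lets us define the set of all sets that are not members of themselves, and asking whether that set contains itself is contradictory either way. In LaTeX notation:

    R = \{\, x \mid x \notin x \,\} \;\Longrightarrow\; ( R \in R \iff R \notin R )

That is, R is a member of itself exactly when it is not, which is the contradiction the last bullet alludes to.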

Surely we can imagine creatures that would be less hobbled by their nature. So our limitations should not be assumed to place a global limit on intelligence: we know there are limits we are unlikely to transcend without considerable evolutionary modification of our own capacity for visualization, while at the same time we have good clues as to how much further specific improvements might take us.

A more meaningful analogy for many of us would be:

dog : (proximal) experience :: human : X :: ideal intellect : Reality

where X is hard to identify, but definitely important to establish.

  • Conversation between jobermark & NLB about this answer ("What makes you assume that human ontology is maximal?"): Click here.
    – stoicfury
    Commented Oct 18, 2014 at 14:25

This analogy seems to me rather to suggest that our ontology likely has limits that we cannot conceive of. After all, if the dog were able to formulate the thought, he would probably also describe his ontology as "maximal."

As support for this: would you describe the ontology of a human being in a hunter-gatherer tribe ten thousand years ago as maximal? It would rather seem that the human ontology has expanded significantly several times since then. How can you be sure that process has reached a final terminus?

  • AFAICT nobody here but the originator accepts the premise of the OP. I think all of us would say "dog : experience :: human : X :: ideal intellect : Reality", and all of us find identifying X to be both a hard problem and an interesting one.
    – user9166
    Commented Oct 17, 2014 at 16:43
  • Well, you and I definitely agree about disagreeing with him (although it might be a stretch to say "nobody here" agrees with the original poster, unless the domain of "here" is restricted to just those who have already posted in this thread). I do also know that my answer is largely covered by the comments to your answer, but I still think it deserves to stand on its own as a response (I also upvoted yours). Commented Oct 17, 2014 at 16:49
  • @ChrisSunami Borrowing from a familiar metaphor, (re-)consider that our (fallibilistic) knowledge, wherever it's not altogether one and the same with what it is about, is distinct from the territory. So our knowledge is "the map." The map however consists of a further distinction, which is between the extent of the canvas and the marks on this canvas. [cont.]
    – Dise
    Commented Oct 17, 2014 at 17:28
  • So far arguments have been focused on the granularity and cardinality of the marks. My semantics of "maximal ontology" is not about that; it's about the extent of the canvas. Given a canvas, Reality, to say that a mark could be placed outside of it is to say that a logical impossibility could be instantiated. This is what would need explanation, not an explanation that "a new mark can be placed among our other marks and the contents of that mark would be completely unexpected." [cont.]
    – Dise
    Commented Oct 17, 2014 at 17:28
  • Dogs simply don't have a canvas of this extent up to isomorphism, because among other things dogs don't have a concept of "logical possibility." Your second point is a good one and the update to my post earlier today is partly congruent with it (it would be altogether congruent with it except that, yes, Reality, the canvas, is that terminal extent).
    – Dise
    Commented Oct 17, 2014 at 17:29
