The Object Of Objective Reality: Some Notes On Donald Hoffman


A few months ago, The Atlantic published an interview with cognitive scientist Donald Hoffman arguing – albeit in a limited sense – against the notion of objective reality. It’s an interesting read, but only partly due to the science. More importantly, it illustrates a concept overlooked in pretty much all inquiry: that language, if ill-used or poorly defined, will ultimately poison good ideas and generate new objects of study that simply don’t exist in the real world. In short, people are confused by language, allowing words to invent or hide problems, and scientists, being a subset of people, are little different here. For this reason, Donald Hoffman falls into the kind of errors that even laypeople are familiar with, since they have in fact made the same mistakes themselves. The laity is inevitably corrected, however, since the real world is pretty unforgiving compared to academia. Yet seeing just how Hoffman errs in such a low-stakes environment might shed some light on future questions: actual questions, I mean, and not the needless complications that scientists and philosophers can be quite good at.

Hoffman’s basic thesis is irrefutable: that organisms have evolved in ways that maximize fitness, first, at the expense of things they might have better valued under different circumstances. This, to me, is a re-phrasing of Leda Cosmides and John Tooby’s classic observation that ‘we are not fitness maximizers, but adaptation executors’. Yeah, the wording seems at odds with Hoffman’s thesis, but they’re in fact arguing the same thing: that we’ll do whatever it is that we’ll do, even as physical circumstances change, or (as with human beings) culture shifts and replaces older values. So, for example, whereas human beings value truth in the abstract, they are ill-prepared for what objective reality – in the totalizing sense – really is. They see sunlight, react to it emotionally, physiologically, etc., but cannot detect, say, radio waves or ionizing radiation, because these have played such a minor role in most of human history, and have therefore had no utility, no way of capturing our biological attention. And this is true despite the fact that visible light accounts for a tiny slice of electromagnetic radiation, meaning that in just this one regard we are cut off from a huge chunk of reality. Then, when one tallies up the innumerable other evolutionary biases, it is clear that, for all of our curiosity, we weren’t built for just this kind of inquiry, and are using some fairly limited and imprecise tools for the purpose, all the while organizing the universe along a survivalist bias that we ourselves have imbued it with.

Ok, so far, so good. Yet the issues come fast when these observations get mixed up with some wacky conclusions that do not at all follow from the above premises. Let us start, then, with this study on veridical perception, by Hoffman and others, and define the term in question. The word refers to the perception of stimuli as they actually exist, meaning there is at least some correspondence between perception and reality. The study tested (by way of an evolutionary game) whether an organism survives better if it is attuned to a more comprehensive reality, or whether it benefits from shaving away portions of that deeper reality for a more focused one: attending to the smell and color of a fruit, say (my own example), as these correspond to the organism’s wellness, as opposed to other, less relevant facts, such as the fruit’s internal temperature, the fruit’s own ‘desire’ for survival, or the color of the surrounding trees. Phrased in such a way, it should be obvious that an organism narrowly focused on survival, rather than on things extraneous to survival, will in fact fare better. It is like a champion sprinter who trains for sprinting only, and is therefore mediocre at a hundred other physical activities. This is why people, for example, still live off of instincts even when those instincts are completely irrational, that is, when they are a poor reflection of objective reality. In short, these instincts are a good response to perceived threats because, even if those threats turn out to be nothing at all, it is less costly to treat them as threats and be wrong, time and time again, than to treat a real one as innocuous just once, and die for it. Sure, there are limits on risk/reward here (such as with, say, extreme psychosis), but the trend is obvious. The paper concludes that the organisms which survive best will sometimes see the least, or see most wrongly, in every element but that which is attuned to survival. In the authors’ words: “truth can fare poorly if information is not free; costs for time and energy required to gather information can impair the fitness of truth.” This is not always true, of course, and the authors, to their credit, do not treat it as such.
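
To make the shape of that result concrete, here is a minimal sketch in Python – my own toy payoff function and numbers, not the model from the paper – in which a cheap, fitness-tuned ‘interface’ perception out-competes a costly, fully veridical one once information carries a price:

```python
# A toy "fitness vs. truth" game (my own numbers, not the paper's model).
# Two foragers choose between two patches of a resource. The "veridical"
# forager perceives exact quantities and always picks the best patch, but
# pays a cost for gathering that detail; the "interface" forager perceives
# only a coarse, fitness-tuned category ("good enough" or not) for free.
# Fitness is non-monotonic in quantity, peaking in the middle.
import random

def fitness(q):
    return q * (10 - q)                  # peaks at q = 5: "more" is not "better"

def veridical(patches, info_cost):
    best = max(patches, key=fitness)     # perfect knowledge, perfectly used...
    return fitness(best) - info_cost     # ...but the knowledge is not free

def interface(patches, threshold=21.0):
    good = [q for q in patches if fitness(q) >= threshold]    # coarse category
    return fitness(random.choice(good if good else list(patches)))

random.seed(0)
trials = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200_000)]
for cost in (0.0, 4.0):
    v = sum(veridical(t, cost) for t in trials) / len(trials)
    i = sum(interface(t) for t in trials) / len(trials)
    print(f"info cost {cost}: veridical ~ {v:.2f}, interface ~ {i:.2f}")
# With free information the veridical forager wins; with a modest information
# cost it loses -- truth "fares poorly" precisely when it is not free.
```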

Yet even as the data is incontrovertible, the definitions come with their own quagmire. First, objective reality is defined, more or less, as everything that in fact is, with every smaller subset of reality treated as somehow ‘less real’, sort of like the blind men and the elephant. This in and of itself is not really an issue, as it’s a useful way to discuss reality in lay terms. Yet while reality is in fact ‘that which is the case’, in the macro sense, since reality is the macro, it is an error to treat its subsets as less real. They are, after all, made up of the same uniform ‘stuff’ that objectifies the world from the micro to the macro, and are merely incomplete, not unreal. One can imagine, then, different creatures, machines, etc., able to consider, respond to, or articulate ever-higher levels of reality, even as they’d still be reacting to the very same initial nugget that the rest of us have access to, are parallaxed against, and are therefore objectified by. This is an important point to understand, even if, at first glance, it seems little more than a linguistic issue. It is not.

The second problem is a subtler one, for while it is technically correct, it is often used, incorrectly, to justify a position of extreme relativism that simply falls outside the range of human experience. Often, part of the definition above is the assumption that ideas, values, etc., in being generated by people, can never be objective. Yet in assuming that a slice of reality is less real than the totality, this view pretty much ignores the fact that ALL concepts (law, religion, art, purpose) are the result of a stated objective writ large, around which logic could rally, if given the chance, and often does. For example, although law starts with a logically indefensible value judgment – that there are things people ought to do and not do – the mere acceptance of that first assumption allows everything from the Code of Hammurabi to a system of law spanning tens of thousands of pages to be written, all carefully reasoned, all trying to ensure that Point B follows securely from Point A. This is because another word for ‘objective’ is object-oriented: i.e., the purpose (or object) is made explicit, first, and the logical steps needed to ensure the object is reached are enacted on an ad hoc basis. And this is why something like art can be discussed rationally between two people – each a stand-in for human culture in aggregate – who agree on the value of technical skill, originality, a lack of clichés, and so on, even when the first assumption (that art should have skill, depth, originality, etc.) has no way of being logically justified. In other words, it is the object which orients reason, and even makes it possible.

Thus armed, we can now tackle some of Donald Hoffman’s more recent claims. Referencing the evolutionary findings above, he argues against objective reality by way of an example:

Snakes and trains, like the particles of physics, have no objective, observer-independent features. The snake I see is a description created by my sensory system to inform me of the fitness consequences of my actions. Evolution shapes acceptable solutions, not optimal ones. A snake is an acceptable solution to the problem of telling me how to act in a situation. My snakes and trains are my mental representations; your snakes and trains are your mental representations.

Yet this is where less laxity with one’s words would get at a more precise meaning, a meaning that is really quite banal. In short, snakes and trains certainly have “objective, observer-independent features” – in the same way all things do, for even if we reduce all objects, like ideas, to that ‘first assumption’ around which the senses, like logic, could rally, there is still that first assumption, whatever it may be, which in the case of pure matter requires NO justification but itself. To be sure, then, the extent to which we have evolved NOT to see the truth is the gap between the object-as-perceived and the object-as-is. Yet the mere existence of a gap implies that there is something to be bridged, whether or not it is in fact possible to do so. In fact, even if the gap were astronomically vast – say, we sense only to one-trillionth accuracy when handling an object before us – that numerator still needs to be accounted for. All the gap shows is a difference, and a difference implies a minuend: the very thing Hoffman is denying.

Further:

When I’m having an experience, based on that experience I may want to change what I’m doing. So I need to have a collection of possible actions I can take and a decision strategy that, given my experiences, allows me to change how I’m acting. That’s the basic idea of the whole thing. I have a space X of experiences, a space G of actions, and an algorithm D that lets me choose a new action given my experiences. Then I posited a W for a world, which is also a probability space. Somehow the world affects my perceptions, so there’s a perception map P from the world to my experiences, and when I act, I change the world, so there’s a map A from the space of actions to the world. That’s the entire structure. Six elements. The claim is: This is the structure of consciousness. I put that out there so people have something to shoot at.

Notice how, even in this mathematical model of consciousness, there is little more than object-oriented thinking. The very system implies that, for all the posits, there is nonetheless something to revolve around: a first assumption which requires something to rub against. To Hoffman, this may be little more than the percipient, with perception as the only possible outcome. Yet even that harks back to dated, circular Cartesian arguments that have no sufficient answer, except: the burden of proof is on the claim which lacks a logical default. All evidence (including Hoffman’s above model) points to a shared and uniform ‘probability space’ from which the world’s fabric is woven. That, then, is the logical default, and it has never been debunked, nor even teased with a good alternative.
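
For convenience, here are the quoted six elements gathered in one place – my notation, simply transcribing the interview’s own description, with X and W as probability spaces and the three maps running between them:

$$
\text{agent} = (X, G, W, P, D, A), \qquad
W \xrightarrow{\;P\;} X \xrightarrow{\;D\;} G \xrightarrow{\;A\;} W,
$$

where P carries world-states to experiences, D carries experiences to actions, and A carries actions back onto the world. Nothing in the loop runs without a W (or a surrogate for it) sitting at the tail of P and the head of A – which is just the point the next exchange circles around.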

Gefter: But if there’s a W, are you saying there is an external world?

Hoffman: Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.

Gefter: The world is just other conscious agents?

Hoffman: I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent. This mathematics is telling me something. I can take two minds, and they can generate a new, unified single mind. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses. Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.

The interviewer’s first question is a logical one, yet Hoffman equivocates unknowingly. Yes, one can, at least theoretically, remove the ‘world’ from this and similar models, yet this simply goes back to object orientation: once ANY probability space is filled with two or more interacting subjects, those subjects are objectified in proportion to their new surrounds. It is not so much that the ‘world’ has been removed, really, but that it now has a logical surrogate. This is less a comment on the world than an illustration of the fact that niches are always ready to be filled. A similar process occurs with any logical progression of concepts. Nor does it take a mathematical model to see this.

Finally, there is the invocation of quantum mechanics, as per so many recent arguments against objective reality:

The idea that what we’re doing is measuring publicly accessible objects, the idea that objectivity results from the fact that you and I can measure the same object in the exact same situation and get the same results — it’s very clear from quantum mechanics that that idea has to go. Physics tells us that there are no public physical objects. … I’m emphasizing the larger lesson of quantum mechanics: Neurons, brains, space … these are just symbols we use, they’re not real. It’s not that there’s a classical brain that does some quantum magic. It’s that there’s no brain! Quantum mechanics says that classical objects—including brains—don’t exist. So this is a far more radical claim about the nature of reality and does not involve the brain pulling off some tricky quantum computation.

There is little to say here, except that quantum mechanics involves objects on the micro scale. This already takes us away from the original discussion of ‘macro’ reality, then confuses it further by importing a wholly separate set of behaviors that don’t apply at that level in the first place. In brief, if quantum mechanics is an accurate reflection of how things work under the hood, then reality, as we experience it, is simply the sum average of those states. The sum, of course, is as different from any single quantum state as a slice of objective reality differs from the whole: something Hoffman readily admits when dealing with his own data, but cannot seem to extrapolate into wider, far more relevant trends.
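
If it helps, here is a crude sketch in Python – pure statistics, nothing quantum about it, and with toy numbers of my own – of why a ‘sum average’ over vast numbers of indeterminate micro events behaves nothing like those events themselves:

```python
# A crude statistical illustration (not a quantum simulation) of the point
# above: even if each micro-level outcome is irreducibly random, the
# macro-level quantity we actually experience -- the average over an enormous
# number of such outcomes -- is effectively fixed. The indeterminacy does not
# survive the summation.
import random
import statistics

def micro_outcome():
    return random.choice((-1.0, 1.0))    # a hypothetical two-valued micro event

random.seed(1)
for n in (100, 10_000, 1_000_000):
    # Take five independent "macro" averages over n micro outcomes each.
    averages = [statistics.fmean(micro_outcome() for _ in range(n))
                for _ in range(5)]
    print(f"n = {n:>9,}: macro averages differ by {max(averages) - min(averages):.5f}")
# The spread shrinks roughly as 1/sqrt(n): the 'sum average' is as unlike any
# single micro outcome as the whole of reality is unlike one slice of it.
```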

Interestingly, this is similar to what occurs in objective discussions of art, for human culture is the sum average of ALL such discussions, and responds, no matter the seeming diversity of ‘opinions’ (e.g., quantum states, to continue Hoffman’s metaphor), with steady, predictable states that always seem to regress to the mean when given enough time. Unlike what we normally think of as an ‘average’, however, the result seems to contradict the quantum-level reality beneath it, yet that contradiction is little more than a mathematical feature of reality itself. The sum total – i.e., the only objective reality – remains untouched. It is, to borrow Hoffman’s use of multiple subjects, like removing a small-‘w’ world and replacing it with percipients who are nonetheless able to re-populate the world with objects, or at the very least to have logic rally around those objects and give them life. This can be seen with simulations, sure. Yet it can also be seen in those who have, in fact, purposed and re-purposed life effectively, in their own way, and consistently, until a system has emerged. Great artists, for example. In the meantime, scientists and philosophers will continue to play catch-up to things we’ve known to be implicit in what had always seemed less rational pursuits.

13 Comments

  1. Dan Schneider

    This guy is sloppy lingually- most scientists are. In the aborted Lisa randall interview I was trying to get her to prove the existence of dark matter, as I said it’s all a catch all term for what we don’t know, but infer from effects we see that differ from gravitational predictions.

    She stated this proved dark matter existed. i stated it showed that there was either an error in calculation, an error in scalar predictions of gravity, but that this did not prove dark mater existed. I aske dwhat particles of brown dwarves were proven to make up the gap?

    She had no answer then cut off.

    She assumed that a discrepancy from prediction was proof, which it was not. It could be pink elephants, but dark matter is not proved. It cd be the limits of human math or understanding of gravity.

    The problem w science is its practitioners are bad communicators.

    Ala this guy.

    1. Keith Jackewicz

      I basically agree with this, but the problem is there have been no dark-matter-free mathematical models that have stood up to scrutiny, that I’m aware of, whereas there have been observations that accorded with what one would expect if dark matter were real, like the behavior of gases and stars in colliding galaxies, galaxies with .1% as many stars as our own that still behave the same way, gravitationally, etc. I think there’s a better theory to come, but even if dark matter is real, it’s frustrating that it’s treated as a given, much in the same way it’s frustrating when neuroscientists treat determinism as a given when we still know so very little about the nature of the universe that contains the physical forces comprising the brain.

    2. Bob

      Curious, I am, that you apply a term like “lingually” to one sloppy with his linguistics or, better, language.

      To Lisa Randall, you note that you “aske dwhat” about brown dwarves.

      So, if you’re compelled to rake “most scientists” for their inferior grasp of the language, then you’re setting up for your own raking.

    3. Alex Sheremet

      Bob, you really felt compelled to respond to a 4-year-old comment by pointing out a typo?

      And here’s the definition of “lingual” since you couldn’t be bothered to look it up:

      lingual: adjective

      1. of or relating to the tongue or some tonguelike part.

      2. pertaining to languages.

      3. Phonetics. articulated with the aid of the tongue, especially the tip of the tongue, as d, n, s, or r.

      noun
      4. Phonetics. a lingual sound.

  2. Dan Schneider

    Modified Newtonian Dynamics (MOND) has for 20 years provided better results for galactic movement than Dark Matter, at the galactic and subgalactic levels.

    On the cosmic level it’s a little worse, but since it’s the motions of galaxies that first led to the posit of hidden mass, MOND actually does more and does better.

    The difference is that it’s not sexy; it does not entail funding to look for ‘particles’ that might explain the discrepancy. You cannot get funding if you tweak Newton’s Laws, but you can if you claim you need a super-collider, and 40 years later you still have bupkis.

    1. Peter Clease

      While I agree MOND is a better assessor of the known universe, it would require a reevaluation if the cosmos contains other universes subject to their own independent laws.

  3. Dan Schneider

    I’m doing a scientism show and Dark Matter will be on the menu.

    It’s ludicrous to think that over 6x as much mass is hidden in particles that have yet to be discovered.

    Think of it this way. Let’s say our sun has the average number of planets, comets, moons, asteroids, etc.

    In toto, they all make up less than 1% of the solar system’s mass. If all visible stars and black holes had similar mass about them, that would mean a 1-2% increase due to unseeable dark planetary bodies and nebulae.

    Dark Matter would mean there would have to be 600x the amount of THAT unseeable stuff.

  4. Alex Sheremet

    Keith, what do you think is the flaw in determinism? I can think of a number of interesting arguments against it, and like you said, given how little is *actually* known of cause/effect, especially earlier on in the universe, the willingness to accept it is blind.

    It’s a completely different thing to posit deterministic elements to habits, behavior, etc., vs. pure determinism itself. They are not categories of each other.

  5. Peter Clease

    While he can build a labyrinth, he cannot navigate it. Yes, the mind can contain the cosmos, but so can the cosmos contain the mind. The difference is that the former exists before and after the latter.
    The scientific method has been dualized over the last decades, which produces these ill-wrought theories. His theory fails within and without itself; for by aggregating agency, the Nietzschean Supermen are obscured, which minimizes fitness.
    Religion constricts the brain, and the same could be said for modern science. Scientists are too demotic; and, when not, are mythmakers of absurd theories in search of their own god.

  6. Pingback: Against Hillary: Notes On The Future Of 2016 | IDEAS ON IDEAS

  7. Andrew Molitor

    Hoffman’s Conscious Agents, especially when he starts plugging them together, are nothing but models for systems — any systems with certain very very broad properties.

    Essentially he has smuggled the word “conscious” into a very, very generic “system” description, and is happily buffaloing his less mathematically sophisticated audience with coincidences. “Look,” he says, “the equation for quantum foofarol is similar to the equation for Conscious Agent whosits” (of course they are, both equations are descriptive of probabilistic systems), therefore we’re just constructing the particles with our non-existent brains!

    Say what?

    1. Alex Sheremet

      In general, scientists are poor communicators. This is part of the reason why we’ve been getting the whole “race does not exist” thing when it very clearly does, despite being utterly irrelevant.

  8. Pingback: A Primer on Dan Schneider V. 2 | This is an Anime Blog, Among Other Things
