

Consciousness and the Paranormal — Part 10

Steve, this was linked at the end of the Conscious Entities discussion you linked (re the Graziano piece):

Noam Chomsky on the unsolved mysteries of language and the brain

Extract:

". . . Instead of seeking to show that the world is intelligible to us, the goals of science were implicitly lowered to construct theories that are intelligible to us.'

As an example of a theory explaining a non-intelligible world, Chomsky cites the recent confirmation of the existence of gravitational waves predicted by Einstein.

'That theory,' he says, 'is intelligible to us—but the conception on which it is based, of curved space-time, of quantum principles involved, for Galileo through Hume and Locke and so on, that would have been outside the framework of their science.

'They're intelligible, but the world isn't. It isn't a machine.'

[Image: After a life in words, Noam Chomsky reflects (Chris Felver/Getty)]


Humans are not machines either, and there are limits to what we can understand about ourselves. . . . ."
 
This sounds a bit more like @Soupie's approach


Panpsychism (Stanford Encyclopedia of Philosophy)

Monism: The Priority of the Whole - Jonathan Schaffer

https://pdfs.semanticscholar.org/ff0f/4e110da053d4ca1a2bacff43b42bb14ebdd3.pdf

"I will defend the monistic view. In particular I will argue that there are physical and modal considerations that favor the priority of the whole. Physically, there is good evidence that the cosmos forms an entangled system and good reason to treat entangled systems as irreducible wholes.Modally, mereology allows for the possibility of atomless gunk, with no ultimate parts for the pluralist to invoke as the ground of being."
 
The thing about this, and we've covered it numerous times in the past, is that the emergence of new and unexpected phenomena from combinations of materials and energy, e.g. magnetism (sorry Steve), requires that specific types of things be organized and energized in specific kinds of ways. So just as the 3D computer model of a magnet on your PC screen has no magnetic properties itself (no matter what resolution of detail it is), there's no assurance that a brain modeled by a computer will possess any consciousness (no matter how finely detailed the model or how large and powerful the computer). This analogy speaks directly to the issues in Chalmers' paper (cited above), and it reveals the fatal flaw in panpsychism. Those who embrace panpsychism are blind to the demarcation points between form, function, and meaning. They're engaging in a logical fallacy, the association fallacy:

Premise: A is a B (if brains are made of materials ...)
Premise: A is also a C (... and also possess consciousness)
Conclusion: Therefore, all Bs are Cs (then all materials also possess consciousness)
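In predicate-logic notation (a rendering added here, not part of the original post), the inference has this invalid form:

```latex
% Association fallacy: one object having both properties
% does not license generalizing over all objects.
\begin{align*}
\text{Premises:}\quad   & B(a) \land C(a) \\
\text{Conclusion:}\quad & \forall x \, \big( B(x) \rightarrow C(x) \big) \qquad \text{(does not follow)}
\end{align*}
```

The inference fails because a single B that is not a C would falsify the conclusion while leaving both premises true.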

The above should make it crystal clear to anyone viewing this thread that virtually any view of panpsychism cannot be logically supported and therefore cannot be relied on. Therefore we should be on guard and learn to recognize it when we see it extrapolated out into lengthy papers or videos on AI or pop-philosophy.
ufology, the thing is that you're not wrong that there is a problem with panpsychism. But it has nothing to do with the gibberish above.

Yes, you can periodically jump into this discussion and say "you haven't solved anything." Again you're not wrong.

But don't give yourself too much credit. Because we recognize on an even more profound level how "wrong" we are.

So your problem is not that you're wrong about us being wrong. Or, more precisely, you're not wrong that we don't have the answers. We know we don't have the answers.

Your problem is that you seem to think you do have a handle on consciousness. I'm actually not sure where you stand on consciousness at the moment. In the past you seem to have believed that consciousness was something that oozes from the brain like bile from the liver. Which is a fine belief so long as you own up to the plethora of problems with such a view.

I just watched a TED video by Dennett in which he patronizingly tried to explain consciousness as an illusion. It's obvious that he is explaining subjective experience (SE) and not consciousness (feeling) at all.

This is a hard problem. Consciousness is a complex phenomenon with many moving parts. See the "5 marks of consciousness" above.

Or consume the work of Anil Seth, a neuroscientist committed to materialism who nonetheless recognizes that even if SE is completely explained via brain mechanisms, there may still be a "metaphysical residue" remaining.

Anil Seth on the Real Problem of Consciousness

Many people striving to explain consciousness are actually striving to explain Subjective Experience, which is fine but not the same thing.

Phenomenal consciousness, and any explanation of it, will have metaphysical implications for how we understand reality.

So you can continue to chastise us for "not getting anywhere" or spinning our wheels. Sure. Fine. Yes we are. But you diminish yourself when you do so in a (comically) condescending and authoritative manner because observers can see that your grasp of the problem is primitive.

(Yes, I know he has me "blocked.")
 

So well expressed, @Soupie. I especially like your statement here:

"Many people striving to explain consciousness are actually striving to explain Subjective Experience, which is fine but not the same thing."

I think we should begin a more pointed discussion of this distinction between subjective experience and the complex nature of the human consciousness that develops out of it. I used to want to just blink away, essentially ignore, the Mind-Body problem, but you are right in pursuing clarification of it. I am learning a lot from the discussion of Monism and Panpsychism you have brought about here and hope to understand more about it.

Your post as a whole expresses succinctly for Randall the multivalent and variously approachable/thinkable condition of our own temporally situated being within a cosmos that exceeds us on every side, and the resulting constraints on what we can think and imagine.
 
David Chalmers on reddit.com

I'm David Chalmers, philosopher interested in consciousness, technology, and many other things. AMA. • r/philosophy

What do you think about Michael Graziano's theory of consciousness?

I think i mentioned him briefly somewhere else on this page. i'm very interested in his general strategy of explaining our intuitions about consciousness as the result of an illusory self-model. that said i think he needs to do much more to spell out the details of the model. i haven't seen nearly enough specifics to explain the things that need to be explained. he also has interesting things to say about attention but i think those are somewhat independent of his views about explaining consciousness.

And here is that other mention:

regarding the paradox of phenomenal judgment: i agree the key is finding a functional explanation of why we make judgments such as "i am conscious", "consciousness is mysterious", "there's a hard problem of consciousness over and above the easy problems", and so on. i tried to give the beginnings of such an explanation at a couple of points in "the conscious mind", but it wasn't well-developed and i guess it didn't do much for you. illusionists like dennett, humphrey, graziano, drescher, and others have also tried giving elements of such a story, but usually also in a very sketchy way that doesn't seem fully adequate to the behavior that needs to be explained. still i think there is a real research program here that philosophers and scientists of all stripes ought to be able to buy into. even most dualists and panpsychists ought to allow that there's some sort of broadly functional story here, though they will draw different conclusions (e.g. interactionist dualists will deny that this functional story is grounded in a physical story). it's an under-researched area at the moment and i hope it gets a lot more attention in the coming years. i'm hoping to return soon to this area myself.

Many thanks for linking this reddit discussion, Steve. I'm reading it now and will look for other threads in the philosophy subreddit in which Chalmers has participated.

While reading the thread you linked, I followed a reddit popup directing me to another thread (equally interesting) on string theory and philosophy, responding to an essay by Massimo Pigliucci entitled "Must Science Be Testable?" and actually concerning the inescapable relationship between philosophy and science. Here is an extract, followed by one of the best comments at reddit:

Extract:

"... This surprisingly blunt – and very public – talk from prestigious academics is what happens when scientists help themselves to, or conversely categorically reject, philosophical notions that they plainly have not given sufficient thought to. In this case, it was Popper’s philosophy of science and its application to the demarcation problem. What makes this particularly ironic for someone like me, who started his academic career as a scientist (evolutionary biology) and eventually moved to philosophy after a constructive midlife crisis, is that a good number of scientists nowadays – and especially physicists – don’t seem to hold philosophy in particularly high regard. Just in the last few years Stephen Hawking has declared philosophy dead, Lawrence Krauss has quipped that philosophy reminds him of that old Woody Allen joke, ‘those that can’t do, teach, and those that can’t teach, teach gym,’ and science popularisers Neil deGrasse Tyson and Bill Nye have both wondered loudly why any young man would decide to ‘waste’ his time studying philosophy in college.

This is a rather novel, and by no means universal, attitude among physicists. Compare the above contemptuousness with what Einstein himself wrote to his friend Robert Thornton in 1944 on the same subject: ‘I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today – and even professional scientists – seem to me like somebody who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is – in my opinion – the mark of distinction between a mere artisan or specialist and a real seeker after truth.’ By Einstein’s standard then, there are a lot of artisans but comparatively few seekers of truth among contemporary physicists!"

Comment:

microMe1_2:

"Your response of "I understand everything" leads to exactly the kind of thinking that misses the point here. You're thinking too narrowly because you've already decided that the scientific method (or, in this case, pure math) leads to universal truth. These are the best tools humanity has developed, but you should not take the humanism out of the equation. The questions we have evolved, biologically and culturally, to ask, the tools we have managed to develop, the past results we build on, our imaginations and their limits, our languages, even what culture we grow up in, all frame how we think about the universe and the models we construct to understand things. It's so much more complicated and interesting and, ultimately human (what else could it be?) to realize that our understanding is somewhat subjective (the truth, as we come to see it, is a combination of what's "out there" but also what's "in us"). Science alone doesn't give us the truth, in this grander sense, it gives us experimental data that we then fit to models. Why humans develop these models and not others, how they are refined over time, what questions we chose to probe and why, are all in the realms of philosophy and these all contribute to our understanding of Nature."

Link:
The string theory wars show us how science needs philosophy – Massimo Pigliucci | Aeon Essays • r/philosophy
 

Many people striving to explain consciousness are actually striving to explain Subjective Experience, which is fine but not the same thing.

Could you define and/or differentiate these words as you are using them here?

consciousness: ________

Subjective Experience: ________
 
Bill Nye says I convinced him that philosophy is not just a load of self-indulgent crap

Bill Nye, the science guy, is now a budding philosopher.

It’s a big turnaround for Nye, who last year dismissed philosophy as useless and misguided. That prompted a strong backlash, including a piece that I wrote: “Why are so many smart people such idiots about philosophy?”

“Thank you for writing that critical article,” Nye told me on the phone this week. “It really led to something.”

Olivia Goldhill's article is so good that I'm going to c&p it here [in a next post] and dedicate that post to Randall / @Usual Suspect.
 
Why are so many smart people such idiots about philosophy?
Olivia Goldhill
March 05, 2016

"There’s no doubt that Bill Nye “the Science Guy” is extremely intelligent. But it seems that, when it comes to philosophy, he’s completely in the dark. The beloved American science educator and TV personality posted a video last week where he responded to a question from a philosophy undergrad about whether philosophy is a “meaningless topic.”

The video, which made the entire US philosophy community collectively choke on its morning espresso, is hard to watch, because most of Nye’s statements are wrong. Not just kinda wrong, but deeply, ludicrously wrong. He merges together questions of consciousness and reality as though they’re one and the same topic, and completely misconstrues Descartes’ argument “I think, therefore I am”—to mention just two of many examples.

And Nye—arguably America’s favorite “edutainer”—is not the only popular scientist saying “meh” to the entire centuries-old discipline. Astrophysicist Neil DeGrasse Tyson has claimed philosophy is not “a productive contributor to our understanding of the natural world”; while theoretical physicist Stephen Hawking declared that “philosophy is dead.”

It’s shocking that such brilliant scientists could be quite so ignorant, but unfortunately their views on philosophy are not uncommon. Unlike many other academic subjects (mathematics and history, for example), where non-experts have some vague sense of the field’s practices, there seems to be widespread confusion about what philosophy entails.

In Nye’s case, his misconceptions are too large and many to show why each and every one is flawed. But several of his comments in the video speak to broader confusions about philosophy. So let’s clear up some of those:

“It often gets back to this question: What is the nature of consciousness?”

Here is Nye’s full quote, on what he sees as philosophy’s main preoccupations:

“It often gets back to this question: What is the nature of consciousness? Can we know that we know? Are we aware that we’re aware? Are we not aware that we’re aware? Is reality real? Or is reality not real and we’re all living on a ping pong ball that’s part of a giant interplanetary ping pong game that we cannot sense? These are interesting questions.”

Nye’s remarks, which conflate ideas from completely different areas of philosophy, are a caricature of the common misconception that philosophy is about asking pointlessly “deep” questions, plucking an answer out of thin air, and then drinking some pinot noir and writing a florid essay.

But ping pong aside, these actually are interesting questions—and far from idle musing, the methods of analyzing such topics are incredibly, mind-achingly rigorous. Each of the questions Nye asks is the subject of extensive study, and philosophy, at its core, involves highly critical thinking.

Ned Hall, a professor and philosophy department chair at Harvard University, tells Quartz that a colleague describes philosophy as, “Thinking in slow motion.” It’s certainly thinking that cannot be dismissed with a raised eyebrow, à la Nye.

“The idea that reality is not real, that what you sense and feel is not authentic, is something I’m very skeptical of.”

Nye’s skepticism is an empty response to the question of whether we can trust our senses. “If you drop a hammer on your foot, is it real?” he asks. “Or is it just your imagination?” Then he goes on to suggest that the young philosophy student explore the question by dropping a hammer on his own foot. But such a painful experiment would not actually address the underlying question, and this approach—simply mocking the argument rather than addressing it—is so infamous that, as CUNY philosophy professor Massimo Pigliucci points out on his blog, it has its own name: argumentum ad lapidem—”appeal to a stone.”

Nye’s confidence that what we sense and feel is “authentic” is particularly strange coming from a scientist, given that several advanced scientific discoveries do in fact contradict information we receive from our senses. Einstein discovered that there’s no such thing as absolute simultaneity, for example, while quantum physics shows that an object can be in two places at the same time. Several philosophers have long argued that our senses are not a reliable means of evaluating reality, and such scientific discoveries support the idea that we should treat sensory information with a little skepticism.

“Philosophy is important for a while…. But you can start arguing in a circle.”

Philosophy is important for more than just a while, and has serious, practical uses for all of society. There are countless examples of philosophy of mind theories’ relevance to neuroscientists, or cases where political philosophers have shaped politicians.

Historically, physics and mathematics have often overlapped with philosophy, and many great scientists engaged with philosophers to advance their own thinking. (Einstein’s work can be studied alongside that of Kant, for example.) The physicist behind the theory of relativity was also a philosopher of science and, as Hall points out, Einstein reconfigured our concepts of space and time—itself a philosophical undertaking.

Philosophers also address the assumptions that underlie science. “There’s a huge element in science of relying on our capacity to reason,” says Hall. “The way that capacity gets deployed in scientific inquiry often involves unstated but fairly substantial assumptions about the simplicity and elegance of the natural world. Philosophers bring to the table an awareness of how rich the set of assumptions are.”

So, for example, in the video Nye mockingly expresses his confidence that the sun will come up tomorrow. Philosophers are confident of this too, but few feel certain that they can explain exactly what causes this daily phenomenon—or any event. The 18th century philosopher David Hume’s argument that we don’t have a reasonable understanding of causation at all, but only presume cause and effect when two things have been observed as conjoined in the past, is notoriously difficult to refute. The problem underlies much of physics and is hardly insignificant.

And then there’s the development of formal logic, which was devised by philosophers a little over 100 years ago and is the foundation of coding and computer science—in other words, the grounding for all modern technology.

“It doesn’t always give an answer that’s surprising.”

Anyone who believes this clearly hasn’t spent much time studying philosophy. Any far-out, mind-bending, LSD-induced epiphany that’s ever been had has already been ripped apart and taken even further in sober-looking philosophy books. This is a field where prominent figures have argued that God is constantly creating the entire world in every moment, and that failing to donate any superfluous wealth is morally equivalent to walking past a drowning child.

“Keep in mind, humans made up philosophy too.”

Here, Nye suggests philosophy is irrelevant because we’re incapable, as fallible beings, of uncovering the absolute truth. “You’re a human seeking the truth,” he says, “so there are going to be limits.”

Far from a rebuttal of philosophy, this is a component of the field. Many great thinkers recognize this limit on our search for meaning and have written a range of complex papers on the subject, its implications, and the sort of truth that can be uncovered within the constraints of humans’ tiny minds. Ludwig Wittgenstein, for example, might interest those who share Nye’s skepticism.

Philosophy is not for everyone, and many are perfectly happy to live their lives without trying to figure out what, exactly, Heidegger is saying. But for Nye to talk so condescendingly about the “cool questions” in philosophy suggests that he doesn’t know enough to dismiss it. Because philosophy is in fact incredibly useful for anyone interested in language, knowledge, morality—and science. And yeah, it is pretty cool."


Note: click back to Steve's original post (above on this page) to connect to the link to the Nye video that led the US community of philosophers to choke on their morning espresso.
 
Please let me preface my comment by saying that I have honestly read absolutely nothing of parts 1, 2, 3, 4, 5, 6, 7, 8, 9, or even 10. I am posting this comment solely based on the title: Consciousness and the Paranormal. So, I'm likely just a know-nothing troll. That said:

Everything, everything, but only everything we have ever experienced is via conscious-awareness of it. Consciousness is the one common denominator in everything (including what we may refer to as a paranormal happening).

Perhaps the important question is: what sees/experiences what is seen/experienced (rather than: what the hell am I experiencing?).

What sees and knows all thoughts, emotions, feelings and phenomenal occurrences throughout our "life"?

Who am I, really? Really??? Am "I" an objective sense, feeling, thought or other object that is known? Or am I that indescribable intimacy which silently knows and witnesses it all?

Careful. Here be dragons!
 

Hi @Swamp Gas and welcome to this thread. I was hoping you'd turn up here. I hope you will find the subjects discussed here to be interesting and that you'll add your perspectives to them.

{Note to present and accustomed company: 'Swamp Gas' has recently become a Paracast member whose rational posts I first read with relief in the thread concerning the ancient 'mummies' recently discovered in a cave near Nazca.}
 
Shtetl-Optimized

"So then what should we mean by “information is physical”? In the rest of this post, I’d like to propose an answer to that question."

Splendid. At last, a coherent response to the question I've been asking since we first discussed Tononi's Integrated Information Theory (more than a year ago, I think). I copied out some extracts into a Word doc but can't at the moment get my Word program to cooperate. So in the meantime, I'll c&p the following piece from farther down that page Steve linked for us:

"This past Thursday, Natalie Wolchover—a math/science writer whose work has typically been outstanding—published a piece in Quanta magazine entitled “A Theory of Reality as More Than the Sum of Its Parts.” The piece deals with recent work by Erik Hoel and his collaborators, including Giulio Tononi (Hoel’s adviser, and the founder of integrated information theory, previously critiqued on this blog). Commenter Jim Cross asked me to expand on my thoughts about causal emergence in a blog post, so: your post, monsieur.

In their new work, Hoel and others claim to make the amazing discovery that scientific reductionism is false—or, more precisely, that there can exist “causal information” in macroscopic systems, information relevant for predicting the systems’ future behavior, that’s not reducible to causal information about the systems’ microscopic building blocks. For more about what we’ll be discussing, see Hoel’s FQXi essay “Agent Above, Atom Below,” or better yet, his paper in Entropy, When the Map Is Better Than the Territory. Here’s the abstract of the Entropy paper:

The causal structure of any system can be analyzed at a multitude of spatial and temporal scales. It has long been thought that while higher scale (macro) descriptions may be useful to observers, they are at best a compressed description and at worst leave out critical information and causal relationships. However, recent research applying information theory to causal analysis has shown that the causal structure of some systems can actually come into focus and be more informative at a macroscale. That is, a macroscale description of a system (a map) can be more informative than a fully detailed microscale description of the system (the territory). This has been called “causal emergence.” While causal emergence may at first seem counterintuitive, this paper grounds the phenomenon in a classic concept from information theory: Shannon’s discovery of the channel capacity. I argue that systems have a particular causal capacity, and that different descriptions of those systems take advantage of that capacity to various degrees. For some systems, only macroscale descriptions use the full causal capacity. These macroscales can either be coarse-grains, or may leave variables and states out of the model (exogenous, or “black boxed”) in various ways, which can improve the efficacy and informativeness via the same mathematical principles of how error-correcting codes take advantage of an information channel’s capacity. The causal capacity of a system can approach the channel capacity as more and different kinds of macroscales are considered. Ultimately, this provides a general framework for understanding how the causal structure of some systems cannot be fully captured by even the most detailed microscale description.
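[For reference, since the abstract leans on it: Shannon's channel capacity is the standard information-theoretic quantity below, the maximum mutual information between a channel's input X and output Y over all input distributions. This is textbook material, not taken from Hoel's paper.]

```latex
% Capacity of a discrete memoryless channel p(y|x):
% the maximum mutual information over input distributions p(x).
C \;=\; \max_{p(x)} I(X;Y)
  \;=\; \max_{p(x)} \sum_{x,y} p(x)\, p(y \mid x)\,
        \log_2 \frac{p(y \mid x)}{\sum_{x'} p(x')\, p(y \mid x')}
```

Hoel's analogy, per the abstract, is that a system has a "causal capacity" playing the role of C, and that macroscale descriptions can exploit it the way error-correcting codes exploit a channel's capacity.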

Anyway, Wolchover’s popular article quoted various researchers praising the theory of causal emergence, as well as a single inexplicably curmudgeonly skeptic—some guy who sounded like he was so off his game (or maybe just bored with debates about ‘reductionism’ versus ’emergence’?), that he couldn’t even be bothered to engage the details of what he was supposed to be commenting on.

Hoel’s ideas do not impress Scott Aaronson, a theoretical computer scientist at the University of Texas, Austin. He says causal emergence isn’t radical in its basic premise. After reading Hoel’s recent essay for the Foundational Questions Institute, “Agent Above, Atom Below” (the one that featured Romeo and Juliet), Aaronson said, “It was hard for me to find anything in the essay that the world’s most orthodox reductionist would disagree with. Yes, of course you want to pass to higher abstraction layers in order to make predictions, and to tell causal stories that are predictively useful — and the essay explains some of the reasons why.”

After the Quanta piece came out, Sean Carroll tweeted approvingly about the above paragraph, calling me a “voice of reason [yes, Sean; have I ever not been?], slapping down the idea that emergent higher levels have spooky causal powers.” Then Sean, in turn, was criticized for that remark by Hoel and others.

Hoel in particular raised a reasonable-sounding question. Namely, in my “curmudgeon paragraph” from Wolchover’s article, I claimed that the notion of “causal emergence,” or causality at the macro-scale, says nothing fundamentally new. Instead it simply reiterates the usual worldview of science, according to which

  1. the universe is ultimately made of quantum fields evolving by some Hamiltonian, but
  2. if someone asks (say) “why has air travel in the US gotten so terrible?”, a useful answer is going to talk about politics or psychology or economics or history rather than the movements of quarks and leptons.
But then, Hoel asks, if there’s nothing here for the world’s most orthodox reductionist to disagree with, then how do we find Carroll and other reductionists … err, disagreeing?

I think this dilemma is actually not hard to resolve. Faced with a claim about “causation at higher levels,” what reductionists disagree with is not the object-level claim that such causation exists (I scratched my nose because it itched, not because of the Standard Model of elementary particles). Rather, they disagree with the meta-level claim that there’s anything shocking about such causation, anything that poses a special difficulty for the reductionist worldview that physics has held for centuries. I.e., they consider it true both that

  1. my nose is made of subatomic particles, and its behavior is in principle fully determined (at least probabilistically) by the quantum state of those particles together with the laws governing them, and
  2. my nose itched.
At least if we leave the hard problem of consciousness out of it—that’s a separate debate—there seems to be no reason to imagine a contradiction between 1 and 2 that needs to be resolved, but “only” a vast network of intervening mechanisms to be elucidated. So, this is how it is that reductionists can find anti-reductionist claims to be both wrong and vacuously correct at the same time.

(Incidentally, yes, quantum entanglement provides an obvious sense in which “the whole is more than the sum of its parts,” but even in quantum mechanics, the whole isn’t more than the density matrix, which is still a huge array of numbers evolving by an equation, just different numbers than one would’ve thought a priori. For that reason, it’s not obvious what relevance, if any, QM has to reductionism versus anti-reductionism. In any case, QM is not what Hoel invokes in his causal emergence theory.)

From reading the philosophical parts of Hoel’s papers, it was clear to me that some remarks like the above might help ward off the forehead-banging confusions that these discussions inevitably provoke. So standard-issue crustiness is what I offered Natalie Wolchover when she asked me, not having time on short notice to go through the technical arguments.

But of course this still leaves the question: what is in the mathematical part of Hoel’s Entropy paper? What exactly is it that the advocates of causal emergence claim provides a new argument against reductionism?

To answer that question, yesterday I (finally) read the Entropy paper all the way through.

Much like Tononi’s integrated information theory was built around a numerical measure called Φ, causal emergence is built around a different numerical quantity, this one supposed to measure the amount of “causal information” at a particular scale. The measure is called effective information or EI, and it’s basically the mutual information between a system’s initial state s_I and its final state s_F, assuming a uniform distribution over s_I. Much like with Φ in IIT, computations of this EI are then used as the basis for wide-ranging philosophical claims—even though EI, like Φ, has aspects that could be criticized as arbitrary, and as not obviously connected with what we’re trying to understand.

Once again like with Φ, one of those assumptions is that of a uniform distribution over one of the variables, s_I, whose relatedness we’re trying to measure. In my IIT post, I remarked on that assumption, but I didn’t harp on it, since I didn’t see that it did serious harm, and in any case my central objection to Φ would hold regardless of which distribution we chose. With causal emergence, by contrast, this uniformity assumption turns out to be the key to everything.

For here is the argument from the Entropy paper, for the existence of macroscopic causality that’s not reducible to causality in the underlying components. Suppose I have a system with 8 possible states (called “microstates”), which I label 1 through 8. And suppose the system evolves as follows: if it starts out in states 1 through 7, then it goes to state 1. If, on the other hand, it starts in state 8, then it stays in state 8. In such a case, it seems reasonable to “coarse-grain” the system, by lumping together initial states 1 through 7 into a single “macrostate,” call it A, and letting the initial state 8 comprise a second macrostate, call it B.

We now ask: how much information does knowing the system’s initial state tell you about its final state? If we’re talking about microstates, and we let the system start out in a uniform distribution over microstates 1 through 8, then 7/8 of the time the system goes to state 1. So there’s just not much information about the final state to be predicted—specifically, only 7/8×log2(8/7) + 1/8×log2(8) ≈ 0.54 bits of entropy—which, in this case, is also the mutual information between the initial and final microstates. If, on the other hand, we’re talking about macrostates, and we let the system start in a uniform distribution over macrostates A and B, then A goes to A and B goes to B. So knowing the initial macrostate gives us 1 full bit of information about the final state, which is more than the ~0.54 bits that looking at the microstate gave us! Ergo reductionism is false.
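The arithmetic above can be reproduced directly. Here is a minimal Python sketch of the toy system (illustrative only; the helper names are invented here, and nothing below comes from Hoel's paper). Since the dynamics are deterministic, H(S_F | S_I) = 0, so the mutual information between initial and final states is just the entropy of the final-state distribution:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def effective_information(transition, p_initial):
    """I(S_I; S_F) for a deterministic transition map under p_initial.
    Deterministic dynamics mean H(S_F | S_I) = 0, so the mutual
    information equals the entropy of the final-state distribution."""
    p_final = {}
    for state, p in p_initial.items():
        nxt = transition[state]
        p_final[nxt] = p_final.get(nxt, 0.0) + p
    return entropy(p_final.values())

# Microscale: states 1..7 all map to state 1; state 8 maps to itself.
micro = {s: 1 for s in range(1, 8)}
micro[8] = 8
ei_micro = effective_information(micro, {s: 1 / 8 for s in range(1, 9)})

# Macroscale: A -> A, B -> B, with a uniform distribution over {A, B}.
macro = {"A": "A", "B": "B"}
ei_macro = effective_information(macro, {"A": 0.5, "B": 0.5})

print(round(ei_micro, 2), ei_macro)  # 0.54 1.0
```

The macroscale's extra ~0.46 bits come entirely from assuming the uniform distribution separately at each scale, which is the point at issue in what follows.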

Once the argument is spelled out, it’s clear that the entire thing boils down to, how shall I put this, a normalization issue. That is: we insist on the uniform distribution over microstates when calculating microscopic EI, and we also insist on the uniform distribution over macrostates when calculating macroscopic EI, and we ignore the fact that the uniform distribution over microstates gives rise to a non-uniform distribution over macrostates, because some macrostates can be formed in more ways than others. If we fixed this, demanding that the two distributions be compatible with each other, we’d immediately find that, surprise, knowing the complete initial microstate of a system always gives you at least as much power to predict the system’s future as knowing a macroscopic approximation to that state. (How could it not? For given the microstate, we could in principle compute the macroscopic approximation for ourselves, but not vice versa.)
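That fix is easy to check numerically, under the assumption (mine, for illustration) that "compatible" means the macro distribution is obtained by coarse-graining the uniform micro one. The induced distribution is P(A) = 7/8, P(B) = 1/8, and with it the macroscale gives exactly the same ≈0.54 bits as the microscale, so the apparent emergence vanishes:

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def ei(transition, p_initial):
    # Deterministic map, so I(S_I; S_F) = H(S_F).
    p_final = {}
    for state, p in p_initial.items():
        p_final[transition[state]] = p_final.get(transition[state], 0.0) + p
    return entropy(p_final.values())

micro = {s: 1 for s in range(1, 8)}
micro[8] = 8
macro = {"A": "A", "B": "B"}

# Coarse-grain the uniform microstate distribution: A = {1..7}, B = {8}.
coarse = {s: "A" for s in range(1, 8)}
coarse[8] = "B"
p_macro = {"A": 0.0, "B": 0.0}
for s in range(1, 9):
    p_macro[coarse[s]] += 1 / 8  # yields P(A) = 7/8, P(B) = 1/8

ei_micro = ei(micro, {s: 1 / 8 for s in range(1, 9)})
ei_macro_induced = ei(macro, p_macro)

print(round(ei_micro, 3), round(ei_macro_induced, 3))  # 0.544 0.544
```

In this toy example the two values coincide exactly; in general, coarse-graining can only discard information, so the macro value never exceeds the micro one once the distributions are made compatible.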

The closest the paper comes to acknowledging the problem—i.e., that it’s all just a normalization trick—seems to be the following paragraph in the discussion section:

"Another possible objection to causal emergence is that it is not natural but rather enforced upon a system via an experimenter’s application of an intervention distribution, that is, from using macro-interventions. For formalization purposes, it is the experimenter who is the source of the intervention distribution, which reveals a causal structure that already exists. Additionally, nature itself may intervene upon a system with statistical regularities, just like an intervention distribution. Some of these naturally occurring input distributions may have a viable interpretation as a macroscale causal model (such as being equal to Hmax [the maximum entropy] at some particular macroscale). In this sense, some systems may function over their inputs and outputs at a microscale or macroscale, depending on their own causal capacity and the probability distribution of some natural source of driving input.'

As far as I understand it, this paragraph is saying that, for all we know, something could give rise to a uniform distribution over macrostates, so therefore that’s a valid thing to look at, even if it’s not what we get by taking a uniform distribution over microstates and then coarse-graining it. Well, OK, but unknown interventions could give rise to many other distributions over macrostates as well. In any case, if we’re directly comparing causal information at the microscale against causal information at the macroscale, it still seems reasonable to me to demand that in the comparison, the macro-distribution arise by coarse-graining the micro one. But in that case, the entire argument collapses.

Despite everything I said above, the real purpose of this post is to announce that I’ve changed my mind. I now believe that, while Hoel’s argument might be unsatisfactory, the conclusion is fundamentally correct: scientific reductionism is false. There is higher-level causation in our universe, and it’s 100% genuine, not just a verbal sleight-of-hand. In particular, there are causal forces that can only be understood in terms of human desires and goals, and not in terms of subatomic particles blindly bouncing around.

So what caused such a dramatic conversion?


By 2015, after decades of research and diplomacy and activism and struggle, 196 nations had finally agreed to limit their carbon dioxide emissions—every nation on earth besides Syria and Nicaragua, and Nicaragua only because it thought the agreement didn’t go far enough. The human race had thereby started to carve out some sort of future for itself, one in which the oceans might rise slowly enough that we could adapt, and maybe buy enough time until new technologies were invented that changed the outlook. Of course the Paris agreement fell far short of what was needed, but it was a start, something to build on in the coming decades. Even in the US, long the hotbed of intransigence and denial on this issue, 69% of the public supported joining the Paris agreement, compared to a mere 13% who opposed. Clean energy was getting cheaper by the year. Most of the US’s largest corporations, including Google, Microsoft, Apple, Intel, Mars, PG&E, and ExxonMobil—ExxonMobil, for godsakes—vocally supported staying in the agreement and working to cut their own carbon footprints. All in all, there was reason to be cautiously optimistic that children born today wouldn’t live to curse their parents for having brought them into a world so close to collapse.

In order to unravel all this, in order to steer the heavy ship of destiny off the path toward averting the crisis and toward the path of existential despair, a huge number of unlikely events would need to happen in succession, as if propelled by some evil supernatural force.

Like what? I dunno, maybe a fascist demagogue would take over the United States on a campaign based on willful cruelty, on digging up and burning dirty fuels just because and even if it made zero economic sense, just for the fun of sticking it to liberals, or because of the urgent need to save the US coal industry, which employs fewer people than Arby’s. Such a demagogue would have no chance of getting elected, you say?

So let’s suppose he’s up against a historically unpopular opponent. Let’s suppose that even then, he still loses the popular vote, but somehow ekes out an Electoral College win. Maybe he gets crucial help in winning the election from a hostile foreign power—and for some reason, pro-American nationalists are totally OK with that, even cheer it. Even then, we’d still probably need a string of additional absurd coincidences. Like, I dunno, maybe the fascist’s opponent has an aide who used to be married to a guy who likes sending lewd photos to minors, and investigating that guy leads the FBI to some emails that ultimately turn out to mean nothing whatsoever, but that the media hyperventilate about precisely in time to cause just enough people to vote to bring the fascist to power, thereby bringing about the end of the world. Something like that.

It’s kind of like, you know that thing where the small population in Europe that produced Einstein and von Neumann and Erdös and Ulam and Tarski and von Karman and Polya was systematically exterminated (along with millions of other innocents) soon after it started producing such people, and the world still hasn’t fully recovered? How many things needed to go wrong for that to happen? Obviously you needed Hitler to be born, and to survive the trenches and assassination plots; and Hindenburg to make the fateful decision to give Hitler power. But beyond that, the world had to sleep as Germany rebuilt its military; every last country had to turn away refugees; the UK had to shut down Jewish immigration to Palestine at exactly the right time; newspapers had to bury the story; government record-keeping had to have advanced just to the point that rounding up millions for mass murder was (barely) logistically possible; and finally, the war had to continue long enough for nearly every European country to have just enough time to ship its Jews to their deaths, before the Allies showed up to liberate mostly the ashes.

In my view, these simply aren’t the sort of outcomes that you expect from atoms blindly interacting according to the laws of physics. These are, instead, the signatures of higher-level causation—and specifically, of a teleological force that operates in our universe to make it distinctively cruel and horrible.

Admittedly, I don’t claim to know the exact mechanism of the higher-level causation. Maybe, as the physicist Yakir Aharonov has advocated, our universe has not only a special, low-entropy initial state at the Big Bang, but also a “postselected final state,” toward which the outcomes of quantum measurements get mysteriously “pulled”—an effect that might show up in experiments as ever-so-slight deviations from the Born rule. And because of the postselected final state, even if the human race naïvely had only (say) a one-in-a-thousand chance of killing itself off, even if the paths to its destruction all involved some improbable absurdity, like an orange clown showing up from nowhere—nevertheless, the orange clown would show up. Alternatively, maybe the higher-level causation unfolds through subtle correlations in the universe’s initial state, along the lines I sketched in my 2013 essay The Ghost in the Quantum Turing Machine. Or maybe Erik Hoel is right after all, and it all comes down to normalization: if we looked at the uniform distribution over macrostates rather than over microstates, we’d discover that orange clowns destroying the world predominated. Whatever the details, though, I think it can no longer be doubted that we live, not in the coldly impersonal universe that physics posited for centuries, but instead in a tragicomically evil one.

I call my theory reverse Hollywoodism, because it holds that the real world has the inverse of the typical Hollywood movie’s narrative arc. Again and again, what we observe is that the forces of good have every possible advantage, from money to knowledge to overwhelming numerical superiority. Yet somehow good still fumbles. Somehow a string of improbable coincidences, or a black swan or an orange Hitler, show up at the last moment to let horribleness eke out a last-minute victory, as if the world itself had been rooting for horribleness all along. That’s our universe.

I’m fine if you don’t believe this theory: maybe you’re congenitally more optimistic than I am (in which case, more power to you); maybe the full weight of our universe’s freakish awfulness doesn’t bear down on you as it does on me. But I hope you’ll concede that, if nothing else, this theory is a genuinely non-reductionist one."


{A blind third alternative (given my general ignorance of mathematics and quantum mechanics), but is it possible that in our little part of the cosmos, where life has evolved to a certain degree of intelligence, humanly generated ideas in the humanly lived macrosphere also exist in fragile temporary superpositions that [for some reason or reasons] fall apart completely -- decohere? What better concept could we arrive at than decoherence to describe the human world we've constructed upon the prolific ground of earth? If so, our species might well expect to revert again to an earlier stage of life on earth, another effort by nature to evolve intelligent beings, perhaps producing a species capable of learning how to live sensibly and cooperatively, even justly, within the constraints of our natural ecology. So I can't put it down, as the author, Scott Aaronson, does, to "our universe’s freakish awfulness." Why has our species failed to assume its proper role as 'shepherds of being', in Heidegger's later philosophy? That's something we could try to figure out.}
 
Many people striving to explain consciousness are actually striving to explain Subjective Experience, which is fine but not the same thing.

Could you define these words as you are using them here?

consciousness: ________

Subjective Experience: ________

Great question. Since you directed this question to @Soupie, in a response to one of his posts, I'll wait to chime in, and also link to some of Zahavi's papers.

In the meantime I want to call attention to this book, which one of us linked several months back. I discovered at Amazon that a paperback edition will be available in mid-October.

Peter Godfrey-Smith, Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness

Description at amazon: "Although mammals and birds are widely regarded as the smartest creatures on earth, it has lately become clear that a very distant branch of the tree of life has also sprouted higher intelligence: the cephalopods, consisting of the squid, the cuttlefish, and above all the octopus. In captivity, octopuses have been known to identify individual human keepers, raid neighboring tanks for food, turn off lightbulbs by spouting jets of water, plug drains, and make daring escapes. How is it that a creature with such gifts evolved through an evolutionary lineage so radically distant from our own? What does it mean that evolution built minds not once but at least twice? The octopus is the closest we will come to meeting an intelligent alien. What can we learn from the encounter?

In Other Minds, Peter Godfrey-Smith, a distinguished philosopher of science and a skilled scuba diver, tells a bold new story of how subjective experience crept into being―how nature became aware of itself. As Godfrey-Smith stresses, it is a story that largely occurs in the ocean, where animals first appeared. Tracking the mind’s fitful development, Godfrey-Smith shows how unruly clumps of seaborne cells began living together and became capable of sensing, acting, and signaling. As these primitive organisms became more entangled with others, they grew more complicated. The first nervous systems evolved, probably in ancient relatives of jellyfish; later on, the cephalopods, which began as inconspicuous mollusks, abandoned their shells and rose above the ocean floor, searching for prey and acquiring the greater intelligence needed to do so. Taking an independent route, mammals and birds later began their own evolutionary journeys.

But what kind of intelligence do cephalopods possess? Drawing on the latest scientific research and his own scuba-diving adventures, Godfrey-Smith probes the many mysteries that surround the lineage. How did the octopus, a solitary creature with little social life, become so smart? What is it like to have eight tentacles that are so packed with neurons that they virtually “think for themselves”? What happens when some octopuses abandon their hermit-like ways and congregate, as they do in a unique location off the coast of Australia?

By tracing the question of inner life back to its roots and comparing human beings with our most remarkable animal relatives, Godfrey-Smith casts crucial new light on the octopus mind―and on our own."
 
Splendid. At last, a coherent response to the question I've been asking since we first discussed Tononi's Integrated Information Theory (more than a year ago, I think). I copied out some extracts into a Word doc but can't at the moment get my Word program to cooperate. So in the meantime, I'll c&p the following piece from farther down that page Steve linked for us:

"This past Thursday, Natalie Wolchover—a math/science writer whose work has typically been outstanding—published a piece in Quanta magazine entitled “A Theory of Reality as More Than the Sum of Its Parts.” The piece deals with recent work by Erik Hoel and his collaborators, including Giulio Tononi (Hoel’s adviser, and the founder of integrated information theory, previously critiqued on this blog). Commenter Jim Cross asked me to expand on my thoughts about causal emergence in a blog post, so: your post, monsieur.

In their new work, Hoel and others claim to make the amazing discovery that scientific reductionism is false—or, more precisely, that there can exist “causal information” in macroscopic systems, information relevant for predicting the systems’ future behavior, that’s not reducible to causal information about the systems’ microscopic building blocks. For more about what we’ll be discussing, see Hoel’s FQXi essay “Agent Above, Atom Below,” or better yet, his paper in Entropy, When the Map Is Better Than the Territory. Here’s the abstract of the Entropy paper:

The causal structure of any system can be analyzed at a multitude of spatial and temporal scales. It has long been thought that while higher scale (macro) descriptions may be useful to observers, they are at best a compressed description and at worse leave out critical information and causal relationships. However, recent research applying information theory to causal analysis has shown that the causal structure of some systems can actually come into focus and be more informative at a macroscale. That is, a macroscale description of a system (a map) can be more informative than a fully detailed microscale description of the system (the territory). This has been called “causal emergence.” While causal emergence may at first seem counterintuitive, this paper grounds the phenomenon in a classic concept from information theory: Shannon’s discovery of the channel capacity. I argue that systems have a particular causal capacity, and that different descriptions of those systems take advantage of that capacity to various degrees. For some systems, only macroscale descriptions use the full causal capacity. These macroscales can either be coarse-grains, or may leave variables and states out of the model (exogenous, or “black boxed”) in various ways, which can improve the efficacy and informativeness via the same mathematical principles of how error-correcting codes take advantage of an information channel’s capacity. The causal capacity of a system can approach the channel capacity as more and different kinds of macroscales are considered. Ultimately, this provides a general framework for understanding how the causal structure of some systems cannot be fully captured by even the most detailed microscale description.

Anyway, Wolchover’s popular article quoted various researchers praising the theory of causal emergence, as well as a single inexplicably curmudgeonly skeptic—some guy who sounded like he was so off his game (or maybe just bored with debates about ‘reductionism’ versus ’emergence’?), that he couldn’t even be bothered to engage the details of what he was supposed to be commenting on.

Hoel’s ideas do not impress Scott Aaronson, a theoretical computer scientist at the University of Texas, Austin. He says causal emergence isn’t radical in its basic premise. After reading Hoel’s recent essay for the Foundational Questions Institute, “Agent Above, Atom Below” (the one that featured Romeo and Juliet), Aaronson said, “It was hard for me to find anything in the essay that the world’s most orthodox reductionist would disagree with. Yes, of course you want to pass to higher abstraction layers in order to make predictions, and to tell causal stories that are predictively useful — and the essay explains some of the reasons why.”

After the Quanta piece came out, Sean Carroll tweeted approvingly about the above paragraph, calling me a “voice of reason [yes, Sean; have I ever not been?], slapping down the idea that emergent higher levels have spooky causal powers.” Then Sean, in turn, was criticized for that remark by Hoel and others.

Hoel in particular raised a reasonable-sounding question. Namely, in my “curmudgeon paragraph” from Wolchover’s article, I claimed that the notion of “causal emergence,” or causality at the macro-scale, says nothing fundamentally new. Instead it simply reiterates the usual worldview of science, according to which

  1. the universe is ultimately made of quantum fields evolving by some Hamiltonian, but
  2. if someone asks (say) “why has air travel in the US gotten so terrible?”, a useful answer is going to talk about politics or psychology or economics or history rather than the movements of quarks and leptons.
But then, Hoel asks, if there’s nothing here for the world’s most orthodox reductionist to disagree with, then how do we find Carroll and other reductionists … err, disagreeing?

I think this dilemma is actually not hard to resolve. Faced with a claim about “causation at higher levels,” what reductionists disagree with is not the object-level claim that such causation exists (I scratched my nose because it itched, not because of the Standard Model of elementary particles). Rather, they disagree with the meta-level claim that there’s anything shocking about such causation, anything that poses a special difficulty for the reductionist worldview that physics has held for centuries. I.e., they consider it true both that

  1. my nose is made of subatomic particles, and its behavior is in principle fully determined (at least probabilistically) by the quantum state of those particles together with the laws governing them, and
  2. my nose itched.
At least if we leave the hard problem of consciousness out of it—that’s a separate debate—there seems to be no reason to imagine a contradiction between 1 and 2 that needs to be resolved, but “only” a vast network of intervening mechanisms to be elucidated. So, this is how it is that reductionists can find anti-reductionist claims to be both wrong and vacuously correct at the same time.

(Incidentally, yes, quantum entanglement provides an obvious sense in which “the whole is more than the sum of its parts,” but even in quantum mechanics, the whole isn’t more than the density matrix, which is still a huge array of numbers evolving by an equation, just different numbers than one would’ve thought a priori. For that reason, it’s not obvious what relevance, if any, QM has to reductionism versus anti-reductionism. In any case, QM is not what Hoel invokes in his causal emergence theory.)

From reading the philosophical parts of Hoel’s papers, it was clear to me that some remarks like the above might help ward off the forehead-banging confusions that these discussions inevitably provoke. So standard-issue crustiness is what I offered Natalie Wolchover when she asked me, not having time on short notice to go through the technical arguments.

But of course this still leaves the question: what is in the mathematical part of Hoel’s Entropy paper? What exactly is it that the advocates of causal emergence claim provides a new argument against reductionism?

To answer that question, yesterday I (finally) read the Entropy paper all the way through.

Much like Tononi’s integrated information theory was built around a numerical measure called Φ, causal emergence is built around a different numerical quantity, this one supposed to measure the amount of “causal information” at a particular scale. The measure is called effective information or EI, and it’s basically the mutual information between a system’s initial state sI and its final state sF, assuming a uniform distribution over sI. Much like with Φ in IIT, computations of this EI are then used as the basis for wide-ranging philosophical claims—even though EI, like Φ, has aspects that could be criticized as arbitrary, and as not obviously connected with what we’re trying to understand.

Once again like with Φ, one of those assumptions is that of a uniform distribution over one of the variables, sI, whose relatedness we’re trying to measure. In my IIT post, I remarked on that assumption, but I didn’t harp on it, since I didn’t see that it did serious harm, and in any case my central objection to Φ would hold regardless of which distribution we chose. With causal emergence, by contrast, this uniformity assumption turns out to be the key to everything.

For here is the argument from the Entropy paper, for the existence of macroscopic causality that’s not reducible to causality in the underlying components. Suppose I have a system with 8 possible states (called “microstates”), which I label 1 through 8. And suppose the system evolves as follows: if it starts out in states 1 through 7, then it goes to state 1. If, on the other hand, it starts in state 8, then it stays in state 8. In such a case, it seems reasonable to “coarse-grain” the system, by lumping together initial states 1 through 7 into a single “macrostate,” call it A, and letting the initial state 8 comprise a second macrostate, call it B.

We now ask: how much information does knowing the system’s initial state tell you about its final state? If we’re talking about microstates, and we let the system start out in a uniform distribution over microstates 1 through 8, then 7/8 of the time the system goes to state 1. So there’s just not much information about the final state to be predicted—specifically, only 7/8×log2(8/7) + 1/8×log2(8) ≈ 0.54 bits of entropy—which, in this case, is also the mutual information between the initial and final microstates. If, on the other hand, we’re talking about macrostates, and we let the system start in a uniform distribution over macrostates A and B, then A goes to A and B goes to B. So knowing the initial macrostate gives us 1 full bit of information about the final state, which is more than the ~0.54 bits that looking at the microstate gave us! Ergo reductionism is false.

Once the argument is spelled out, it’s clear that the entire thing boils down to, how shall I put this, a normalization issue. That is: we insist on the uniform distribution over microstates when calculating microscopic EI, and we also insist on the uniform distribution over macrostates when calculating macroscopic EI, and we ignore the fact that the uniform distribution over microstates gives rise to a non-uniform distribution over macrostates, because some macrostates can be formed in more ways than others. If we fixed this, demanding that the two distributions be compatible with each other, we’d immediately find that, surprise, knowing the complete initial microstate of a system always gives you at least as much power to predict the system’s future as knowing a macroscopic approximation to that state. (How could it not? For given the microstate, we could in principle compute the macroscopic approximation for ourselves, but not vice versa.)

The closest the paper comes to acknowledging the problem—i.e., that it’s all just a normalization trick—seems to be the following paragraph in the discussion section:

"Another possible objection to causal emergence is that it is not natural but rather enforced upon a system via an experimenter’s application of an intervention distribution, that is, from using macro-interventions. For formalization purposes, it is the experimenter who is the source of the intervention distribution, which reveals a causal structure that already exists. Additionally, nature itself may intervene upon a system with statistical regularities, just like an intervention distribution. Some of these naturally occurring input distributions may have a viable interpretation as a macroscale causal model (such as being equal to Hmax [the maximum entropy] at some particular macroscale). In this sense, some systems may function over their inputs and outputs at a microscale or macroscale, depending on their own causal capacity and the probability distribution of some natural source of driving input."

As far as I understand it, this paragraph is saying that, for all we know, something could give rise to a uniform distribution over macrostates, so therefore that’s a valid thing to look at, even if it’s not what we get by taking a uniform distribution over microstates and then coarse-graining it. Well, OK, but unknown interventions could give rise to many other distributions over macrostates as well. In any case, if we’re directly comparing causal information at the microscale against causal information at the macroscale, it still seems reasonable to me to demand that in the comparison, the macro-distribution arise by coarse-graining the micro one. But in that case, the entire argument collapses.

Despite everything I said above, the real purpose of this post is to announce that I’ve changed my mind. I now believe that, while Hoel’s argument might be unsatisfactory, the conclusion is fundamentally correct: scientific reductionism is false. There is higher-level causation in our universe, and it’s 100% genuine, not just a verbal sleight-of-hand. In particular, there are causal forces that can only be understood in terms of human desires and goals, and not in terms of subatomic particles blindly bouncing around.

So what caused such a dramatic conversion?


By 2015, after decades of research and diplomacy and activism and struggle, 196 nations had finally agreed to limit their carbon dioxide emissions—every nation on earth besides Syria and Nicaragua, and Nicaragua only because it thought the agreement didn’t go far enough. The human race had thereby started to carve out some sort of future for itself, one in which the oceans might rise slowly enough that we could adapt, and maybe buy enough time until new technologies were invented that changed the outlook. Of course the Paris agreement fell far short of what was needed, but it was a start, something to build on in the coming decades. Even in the US, long the hotbed of intransigence and denial on this issue, 69% of the public supported joining the Paris agreement, compared to a mere 13% who opposed. Clean energy was getting cheaper by the year. Most of the US’s largest corporations, including Google, Microsoft, Apple, Intel, Mars, PG&E, and ExxonMobil—ExxonMobil, for godsakes—vocally supported staying in the agreement and working to cut their own carbon footprints. All in all, there was reason to be cautiously optimistic that children born today wouldn’t live to curse their parents for having brought them into a world so close to collapse.

In order to unravel all this, in order to steer the heavy ship of destiny off the path toward averting the crisis and toward the path of existential despair, a huge number of unlikely events would need to happen in succession, as if propelled by some evil supernatural force.

Like what? I dunno, maybe a fascist demagogue would take over the United States on a campaign based on willful cruelty, on digging up and burning dirty fuels just because and even if it made zero economic sense, just for the fun of sticking it to liberals, or because of the urgent need to save the US coal industry, which employs fewer people than Arby’s. Such a demagogue would have no chance of getting elected, you say?

So let’s suppose he’s up against a historically unpopular opponent. Let’s suppose that even then, he still loses the popular vote, but somehow ekes out an Electoral College win. Maybe he gets crucial help in winning the election from a hostile foreign power—and for some reason, pro-American nationalists are totally OK with that, even cheer it. Even then, we’d still probably need a string of additional absurd coincidences. Like, I dunno, maybe the fascist’s opponent has an aide who used to be married to a guy who likes sending lewd photos to minors, and investigating that guy leads the FBI to some emails that ultimately turn out to mean nothing whatsoever, but that the media hyperventilate about precisely in time to cause just enough people to vote to bring the fascist to power, thereby bringing about the end of the world. Something like that.

It’s kind of like, you know that thing where the small population in Europe that produced Einstein and von Neumann and Erdös and Ulam and Tarski and von Karman and Polya was systematically exterminated (along with millions of other innocents) soon after it started producing such people, and the world still hasn’t fully recovered? How many things needed to go wrong for that to happen? Obviously you needed Hitler to be born, and to survive the trenches and assassination plots; and Hindenburg to make the fateful decision to give Hitler power. But beyond that, the world had to sleep as Germany rebuilt its military; every last country had to turn away refugees; the UK had to shut down Jewish immigration to Palestine at exactly the right time; newspapers had to bury the story; government record-keeping had to have advanced just to the point that rounding up millions for mass murder was (barely) logistically possible; and finally, the war had to continue long enough for nearly every European country to have just enough time to ship its Jews to their deaths, before the Allies showed up to liberate mostly the ashes.

In my view, these simply aren’t the sort of outcomes that you expect from atoms blindly interacting according to the laws of physics. These are, instead, the signatures of higher-level causation—and specifically, of a teleological force that operates in our universe to make it distinctively cruel and horrible.

Admittedly, I don’t claim to know the exact mechanism of the higher-level causation. Maybe, as the physicist Yakir Aharonov has advocated, our universe has not only a special, low-entropy initial state at the Big Bang, but also a “postselected final state,” toward which the outcomes of quantum measurements get mysteriously “pulled”—an effect that might show up in experiments as ever-so-slight deviations from the Born rule. And because of the postselected final state, even if the human race naïvely had only (say) a one-in-thousand chance of killing itself off, even if the paths to its destruction all involved some improbable absurdity, like an orange clown showing up from nowhere—nevertheless, the orange clown would show up. Alternatively, maybe the higher-level causation unfolds through subtle correlations in the universe’s initial state, along the lines I sketched in my 2013 essay The Ghost in the Quantum Turing Machine. Or maybe Erik Hoel is right after all, and it all comes down to normalization: if we looked at the uniform distribution over macrostates rather than over microstates, we’d discover that orange clowns destroying the world predominated. Whatever the details, though, I think it can no longer be doubted that we live, not in the coldly impersonal universe that physics posited for centuries, but instead in a tragicomically evil one.

I call my theory reverse Hollywoodism, because it holds that the real world has the inverse of the typical Hollywood movie’s narrative arc. Again and again, what we observe is that the forces of good have every possible advantage, from money to knowledge to overwhelming numerical superiority. Yet somehow good still fumbles. Somehow a string of improbable coincidences, or a black swan or an orange Hitler, show up at the last moment to let horribleness eke out a last-minute victory, as if the world itself had been rooting for horribleness all along. That’s our universe.

I’m fine if you don’t believe this theory: maybe you’re congenitally more optimistic than I am (in which case, more power to you); maybe the full weight of our universe’s freakish awfulness doesn’t bear down on you as it does on me. But I hope you’ll concede that, if nothing else, this theory is a genuinely non-reductionist one."


{A blind third alternative (given my general ignorance of mathematics and quantum mechanics), but is it possible that in our little part of the cosmos, where life has evolved to a certain degree of intelligence, humanly generated ideas in the humanly lived macrosphere also exist in fragile temporary superpositions that [for some reason or reasons] fall apart completely -- decohere? What better concept could we arrive at than decoherence to describe the human world we've constructed upon the prolific ground of earth? If so, our species might well expect to revert again to an earlier stage of life on earth, another effort by nature to evolve intelligent beings, perhaps producing a species capable of learning how to live sensibly and cooperatively, even justly, within the constraints of our natural ecology. So I can't put it down, as the author, Scott Aaronson, does, to "our universe’s freakish awfulness." Why has our species failed to assume its proper role as 'shepherds of being', in Heidegger's later philosophy? That's something we could try to figure out.}

I had not read to that point ... it's an interesting conclusion, not what I had expected

Whatever the details, though, I think it can no longer be doubted that we live, not in the coldly impersonal universe that physics posited for centuries, but instead in a tragicomically evil one.

On the other hand, Steven Pinker has argued against this in The Better Angels of Our Nature.

On the third hand you could look at Controversial New Theory Suggests Life Wasn't a Fluke of Biology—It Was Physics and "dissipation driven adaptation" (which we've talked about before) ... so that the dissipation finally wins and the clowns are just playing to the lowest common denominator, making the least effort -

I guess I'm running out of hands - but I don't want the orange clowns to win. We didn't blow ourselves up in the Cold War. For example, under the Carter administration, they did figure out that the attack warnings were coming from a training tape, and they found that out at pretty much the last (literally) possible minute. But perhaps we were saved for a worse fate.

I am surprised by his conclusion and not sure I understand exactly what it is he is concluding. Maybe in the comments ...

@Constance writes:

{A blind third alternative (given my general ignorance of mathematics and quantum mechanics), but is it possible that in our little part of the cosmos, where life has evolved to a certain degree of intelligence, humanly generated ideas in the humanly lived macrosphere also exist in fragile temporary superpositions that [for some reason or reasons] fall apart completely -- decohere? What better concept could we arrive at than decoherence to describe the human world we've constructed upon the prolific ground of earth? If so, our species might well expect to revert again to an earlier stage of life on earth, another effort by nature to evolve intelligent beings, perhaps producing a species capable of learning how to live sensibly and cooperatively, even justly, within the constraints of our natural ecology. So I can't put it down, as the author, Scott Aaronson, does, to "our universe’s freakish awfulness." Why has our species failed to assume its proper role as 'shepherds of being', in Heidegger's later philosophy? That's something we could try to figure out.}

I was just talking about this ... about the idea of man's "dominion" over the Earth (a Biblical idea) ... the fallacy of the noble or wise savage (humans and closely related species, some argue, have always been poor shepherds of being; it's just that our impact has now reached global proportions).

Two ideas I've not seen in sci-fi, but have probably been explored are:

1. a balance struck on a planet among many different intelligent species. To tie in with Mysterianism, perhaps some of them are philosophically minded and learn all about the Hard Problem in the third grade, while others have great social intelligence ... now I'm thinking of "Story of Your Life" (the short story by Ted Chiang on which the film "Arrival" is based), in which an alien species sees through time and is able to save our species so we can later save theirs, but has a very different understanding of physics that complements ours ... so the idea is many intelligent species, but none dominates.

2. we run into aliens that are basically like us, with a parallel history, but who have done so much more than we have: they have good government, their population is under control, their planet is more or less a paradise, etc. And we can find no accounting for this except that they've simply done better than us. We could conceivably run into that as individuals in real life, of course - so-and-so is no better than you in any way you can tell, except for having done so much more with their life - but to explore that idea on a species level. Because we are always saying "look at all the remarkable things we humans have accomplished," but I always think "compared to whom?" It's a case of n of 1.
 
Many people striving to explain consciousness are actually striving to explain Subjective Experience, which is fine but not the same thing.

Could you define and/or differentiate these words as you are using them here?

consciousness: ________

Subjective Experience: ________
I essentially mean the difference between the Hard and Easy problems of consciousness.

Why should there be phenomenal consciousness at all (Hard Problem); why does phenomenal consciousness have the form/structure that it does (Easy Problem).

As has been well discussed, the "easy problem" is not easy at all. It just indicates that the Hard Problem is a metaphysical question and the easy problems are functional problems.

Obviously as with pretty much every facet of CS, this is controversial.

An analogy would be why does anything exist at all vs why does what does exist have the form/structure that it does?

As far as consciousness, I think we are making progress on the second question (see Anil Seth for example) but still floundering on the first.
 

@Soupie writes: "... why does phenomenal consciousness have the form/structure that it does (Easy Problem)."

David Chalmers discussed the hard and easy problems of consciousness in his 1995 paper Facing Up to the Problem of Consciousness

Facing Up to the Problem of Consciousness

"At the start, it is useful to divide the associated problems of consciousness into "hard" and "easy" problems. The easy problems of consciousness are those that seem directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms. The hard problems are those that seem to resist those methods.

The easy problems of consciousness include those of explaining the following phenomena:

  • the ability to discriminate, categorize, and react to environmental stimuli;
  • the integration of information by a cognitive system;
  • the reportability of mental states;
  • the ability of a system to access its own internal states;
  • the focus of attention;
  • the deliberate control of behavior;
  • the difference between wakefulness and sleep."
 
I would say the first five probably cover the form/structure of conscious experience. (And I don't think the above list is supposed to be exhaustive.)
 