Consciousness and the Paranormal — Part 9

Let's look again at @Soupie's post concerning Turing machines as hoped-for analogues of consciousness . . .

Not sure how I hadn't found this paper previously:

Objects of consciousness

"Definition of Conscious Agents
If our reasoning has been sound, then space-time and three-dimensional objects have no causal powers and do not exist unperceived. Therefore, we need a fundamentally new foundation from which to construct a theory of objects. Here we explore the possibility that consciousness is that new foundation, and seek a mathematically precise theory. The idea is that a theory of objects requires, first, a theory of subjects.

This is, of course, a non-trivial endeavor. Frank Wilczek, when discussing the interpretation of quantum theory, said, “The relevant literature is famously contentious and obscure. I believe it will remain so until someone constructs, within the formalism of quantum mechanics, an “observer,” that is, a model entity whose states correspond to a recognizable caricature of conscious awareness … That is a formidable project, extending well beyond what is conventionally considered physics” (Wilczek, 2006).

The approach we take toward constructing a theory of consciousness is similar to the approach Alan Turing took toward constructing a theory of computation. Turing proposed a simple but rigorous formalism, now called the Turing machine (Turing, 1937; Herken, 1988). It consists of seven components: (1) a finite set of states, (2) a finite set of symbols, (3) a special blank symbol, (4) a finite set of input symbols, (5) a start state, (6) a set of halt states, and (7) a finite set of simple transition rules (Hopcroft et al., 2006).

Turing and others then conjectured that a function is algorithmically computable if and only if it is computable by a Turing machine. This “Church-Turing Thesis” can't be proven, but it could in principle be falsified by a counterexample, e.g., by some example of a procedure that everyone agreed was computable but for which no Turing machine existed. No counterexample has yet been found, and the Church-Turing thesis is considered secure, even definitional.

Similarly, to construct a theory of consciousness we propose a simple but rigorous formalism called a conscious agent, consisting of six components. We then state the conscious agent thesis, which claims that every property of consciousness can be represented by some property of a conscious agent or system of interacting conscious agents. The hope is to start with a small and simple set of definitions and assumptions, and then to have a complete theory of consciousness arise as a series of theorems and proofs (or simulations, when complexity precludes proof). We want a theory of consciousness qua consciousness, i.e., of consciousness on its own terms, not as something derivative or emergent from a prior physical world.

No doubt this approach will strike many as prima facie absurd. It is a commonplace in cognitive neuroscience, for instance, that most of our mental processes are unconscious processes (Bargh and Morsella, 2008). The standard account holds that well more than 90% of mental processes proceed without conscious awareness. Therefore, the proposal that consciousness is fundamental is, to contemporary thought, an amusing anachronism not worth serious consideration.

This critique is apt. ..."

If I'm understanding this, he seems to eschew a materialist, reductionist, mechanistic model and aim for a systems approach starting from a ground of consciousness. Naive question: is a systems approach the same as a "relational" approach?
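For anyone less familiar with the formalism Hoffman is borrowing, here is a minimal sketch of the standard seven-component Turing machine definition quoted above. The particular toy machine (a unary incrementer) and the function names are my own illustrative assumptions, not anything from the paper:

```python
# Minimal sketch of the seven components listed in the quote above:
# states, tape symbols, a blank symbol, input symbols, a start state,
# halt states, and transition rules. The toy machine (a unary incrementer)
# is an invented illustration, not an example from Hoffman's paper.

def run_turing_machine(tape, rules, start, halts, blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right).
    """
    cells = dict(enumerate(tape))          # sparse tape; blank everywhere else
    state, head = start, 0
    for _ in range(max_steps):
        if state in halts:
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    out = "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))
    return state, out

# Transition rules: scan right past the 1s, write one more 1, then halt.
rules = {
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("done", "1", +1),
}
print(run_turing_machine("111", rules, start="scan", halts={"done"}))   # ('done', '1111')
```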
 
Let's look again at @Soupie's post concerning Turing machines as hoped-for analogues of consciousness . . .

I don't think he's saying Turing machines are possible analogues for consciousness. He is positing a formalism to use in a theory of consciousness and comparing it to the formalism of a Turing machine. In the quote above, Hoffman notes that he does not think we are machines and distinguishes between models and the things themselves.
 
If we've evolved to perceive and think about things according to what helps us survive - not according to reality - then how can we trust our conclusions about things like CR? How do we know it's not part of the UI?

CR = Conscious Realism?

UI = 'Universal Interface'?


I don't think he's saying Turing machines are possible analogues for consciousness. He is positing a formalism to use in a theory of consciousness and comparing it to the formalism of a Turing machine. In the quote above, Hoffman notes that he does not think we are machines and distinguishes between models and the things themselves.

The paper @Soupie quotes goes on to theorize six 'components' for a conscious agent, comparable to the components of the Turing machine formalism. Can the formalism of any 'model' capture consciousness in its ongoing development from prereflective experience to reflective experience? Subconscious mentation is part of consciousness. Moreover, consciousness as a whole is open-ended; it never rests in categories of 'reality', whether those proposed by Kant or the 'bad faith' analyzed by Sartre.
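For concreteness, here is a rough sketch of what a six-component 'conscious agent' might look like if coded up. The component names (spaces of experiences and actions, perception/decision/action kernels, and a counter) are my paraphrase of the paper's six-tuple, and the toy probabilities are invented placeholders, not anything Hoffman specifies:

```python
# A rough, illustrative sketch of a "conscious agent" six-tuple in the spirit of
# Hoffman & Prakash: spaces of experiences X and actions G, three Markovian
# kernels (perceive, decide, act), and a counter N. The specific spaces and
# probabilities below are invented placeholders, not taken from the paper.
import random

def sample(distribution):
    """Draw one outcome from a {outcome: probability} dict."""
    outcomes, weights = zip(*distribution.items())
    return random.choices(outcomes, weights=weights)[0]

class ConsciousAgent:
    def __init__(self, experiences, actions, perceive, decide, act):
        self.X = experiences       # space of experiences (here: a finite set)
        self.G = actions           # space of actions
        self.perceive = perceive   # world state -> distribution over X
        self.decide = decide       # experience  -> distribution over G
        self.act = act             # action      -> distribution over world states
        self.N = 0                 # counter: perceive-decide-act cycles so far

    def step(self, world_state):
        experience = sample(self.perceive(world_state))
        action = sample(self.decide(experience))
        new_world = sample(self.act(action))
        self.N += 1
        return experience, action, new_world

# Toy example: the 'world' is a coin; experiences and actions are binary.
agent = ConsciousAgent(
    experiences={"bright", "dark"},
    actions={"approach", "avoid"},
    perceive=lambda w: {"bright": 0.9, "dark": 0.1} if w == "heads" else {"bright": 0.1, "dark": 0.9},
    decide=lambda x: {"approach": 0.8, "avoid": 0.2} if x == "bright" else {"approach": 0.2, "avoid": 0.8},
    act=lambda g: {"heads": 0.5, "tails": 0.5},
)
print(agent.step("heads"))
```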

I've gotta go now to turn my computer over for technical support. I shall return, once my new Norton program (downloaded last night) is enabled in Internet Explorer as well as in Mozilla Firefox, my new browser, to follow today's interesting discussion.
 
My own appreciation of the role of conscious processing in social interaction has been revised due to the CS102 lecture I posted above. The evidence indicates quite a role for conscious processing in social interactions - and we do move things into and out of consciousness with some skill and sophistication, I believe. Meditation can be used to move things from automaticity to conscious control, even some physiological processes that normally operate entirely or almost entirely under subconscious control (body temperature, alimentation, etc.).

And I think Hoffman would agree with you to some extent. If I remember correctly, he said that what got him on this path was the question of whether we are machines, which he says he has resolved in the negative. But it raises the question: can you model something that is not a machine? If consciousness is open-ended, can we model it at all?
 
Yes.

Right. The notion that we can produce a complete model of consciousness is mistaken. This notion arises from the objectivist paradigm of the physical sciences operating on the presupposition that everything that is can be explained/accounted for in objective terms. The application of the computational-informational meme to consciousness is, as I see it, the latest gambit in the attempt to reduce subjectivity to an illusion -- to erase it -- rather than recognizing it as the other half of experienced being.
 

The idea that consciousness is open-ended helps me to think about the problem I have with CR ... which is self-contradiction. We evolved not to see reality but to see the UI, but then we figured out that the UI is just a UI ... there are some problems there and objections that could be made, but the basic intuition that there's a contradiction here sticks with me.

That also seems to be the problem @Michael Allen is pointing to ... that we are trying to figure things out from within a system - but then he also seems to be saying things as if he is standing outside of the system ... or I should say "outside" ;-) ... the logic in both cases is like that of:

"Here's what I would do if I were really smart."

Another way I think about it is the difference between an AI that is very capable in an artificial world - we know how to do that - versus one that can recognize it's in an artificial world, versus one that can move from the one world to the other.

Dawkins also dealt with something like this problem at the end of the Selfish Gene in accounting for how the mind can be brought about by genes and yet go on to make extra-genetic decisions or actions ... a problem of determinism (and related to Searle's argument for free will based on rationality).
 
If we've evolved to perceive and think about things according to what helps us survive - not according to reality - then how can we trust our conclusions about things like CR? How do we know it's not part of the UI?

We evolved not to see reality but to see the UI but then we figured out that the UI is just a UI
Just a small quibble, and based on other comments you've made I know you grok this, but the UI is based on reality.

It's forged against reality, it just doesn't veridically represent reality.

So the question is to what extent do our perceptual and conceptual representations depart from reality?

Of course if the above is true it means the question is based on non-veridical perceptions and conceptions about reality and therefore may be absurd.

But can we have our cake and eat it too? Dennett does: this, this, and this are phenomena, but this, this, and this are noumena.

The above model may be wrong. And if it is we are confronted with even harder questions! If we do perceive and think about reality veridically how might that be?

For those who adopt the materialist, reductionist, mechanistic, determined worldview, why should (and it must be should, right?) reality have ticktocked to a point of perfect self-awareness?

That seems quite odd, no?

So which is it: We do and can know nothing or we do and can know all?
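To make the "forged against reality, but not veridical" point concrete, here is a toy simulation of my own (loosely inspired by Hoffman's evolutionary-game arguments, not taken from his papers). The fitness function is an invented assumption: payoffs peak at an intermediate resource quantity, so a strategy tuned to payoff outcompetes one that perceives the true quantity:

```python
# A toy sketch (my own construction, not from Hoffman's papers) of how a
# non-veridical "interface" perception can still be forged against reality:
# fitness is non-monotonic in the true resource quantity, so an agent tuned
# to fitness outcompetes one that perceives the quantity veridically.
import random

def fitness(quantity):
    # Too little or too much of the resource is bad; a middling amount is best.
    return max(0.0, 1.0 - abs(quantity - 5) / 5)

def choose(perceive, options):
    # Each strategy picks the option whose *perceived* value is highest.
    return max(options, key=perceive)

truth_sees = lambda q: q             # veridical: perceives the true quantity
interface_sees = lambda q: fitness(q)  # interface: perceives only the payoff

random.seed(0)
truth_score = interface_score = 0.0
for _ in range(10_000):
    options = [random.uniform(0, 10), random.uniform(0, 10)]
    truth_score += fitness(choose(truth_sees, options))
    interface_score += fitness(choose(interface_sees, options))

print(f"veridical strategy: {truth_score:.0f}")
print(f"interface strategy: {interface_score:.0f}")
# The interface strategy reliably accumulates more fitness despite "seeing" less.
```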
 
The older doctrine, here called universal mechanism, is the ancient philosophy closely linked with materialism and reductionism, especially that of the atomists and, to a large extent, Stoic physics. They held that the universe is reducible to completely mechanical principles—that is, the motion and collision of matter. Later mechanists believed the achievements of the scientific revolution had shown that all phenomena could eventually be explained in terms of 'mechanical' laws, natural laws governing the motion and collision of matter that imply a thoroughgoing determinism: if all phenomena can be explained entirely through the motion of matter under physical laws, then even more surely than the gears of a clock determine that it must strike 2:00 an hour after striking 1:00, all phenomena must be completely determined, whether past, present or future.

The French mechanist and determinist Pierre Simon de Laplace formulated the sweeping implications of this thesis by saying:

We may regard the present state of the universe as the effect of the past and the cause of the future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes.

— Pierre Simon Laplace, A Philosophical Essay on Probabilities
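As a toy illustration of the determinism Laplace is describing (a complete present state plus a fixed law yields every future state), here is a trivial sketch using the clock example from the passage above; the code is only an illustration, not from the source:

```python
# Toy illustration of Laplace's claim: given the complete present state and a
# fixed deterministic law, every future state follows necessarily.
def tick(hour):                 # the "law": a clock advances one hour per tick
    return hour % 12 + 1

state = 1                       # the complete "present state of the universe"
for _ in range(3):
    state = tick(state)
print(state)                    # 4 -- fixed by the initial state and the law alone
```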
One of the first and most famous expositions of universal mechanism is found in the opening passages of Leviathan by Thomas Hobbes (1651). What is less frequently appreciated is that René Descartes was a staunch mechanist, though today, in philosophy of mind, he is remembered for introducing the mind–body problem in terms of dualism and physicalism.

Descartes was a substance dualist, and argued that reality was composed of two radically different types of substance: extended matter, on the one hand, and immaterial mind, on the other. Descartes argued that one cannot explain the conscious mind in terms of the spatial dynamics of mechanistic bits of matter cannoning off each other. Nevertheless, his understanding of biology was thoroughly mechanistic in nature:

"I should like you to consider that these functions (including passion, memory, and imagination) follow from the mere arrangement of the machine’s organs every bit as naturally as the movements of a clock or other automaton follow from the arrangement of its counter-weights and wheels." (Descartes, Treatise on Man, p.108)
His scientific work was based on the traditional mechanistic understanding that animals and humans are completely mechanistic automata. Descartes' dualism was motivated by the seeming impossibility that mechanical dynamics could yield mental experiences.

Isaac Newton ushered in a much weaker acceptation of mechanism that tolerated the antithetical, and as yet inexplicable, action at a distance of gravity. However, his work seemed to successfully predict the motion of both celestial and terrestrial bodies according to that principle, and the generation of philosophers who were inspired by Newton's example carried the mechanist banner nonetheless. Chief among them were French philosophers such as Julien Offray de La Mettrie and Denis Diderot (see also: French materialism).
 
The debate over anthropic mechanism seems to be here to stay, at least for the time being. The thesis in anthropic mechanism is not that everything can be completely explained in mechanical terms (although some anthropic mechanists may also believe that), but rather that everything about human beings can be completely explained in mechanical terms, as surely as can everything about clocks or the internal combustion engine.

One of the chief obstacles that all mechanistic theories have faced is providing a mechanistic explanation of the human mind; Descartes, for one, endorsed dualism in spite of endorsing a completely mechanistic conception of the material world because he argued that mechanism and the notion of a mind were logically incompatible. Hobbes, on the other hand, conceived of the mind and the will as purely mechanistic, completely explicable in terms of the effects of perception and the pursuit of desire, which in turn he held to be completely explicable in terms of the materialistic operations of the nervous system. Following Hobbes, other mechanists argued for a thoroughly mechanistic explanation of the mind, with one of the most influential and controversial expositions of the doctrine being offered by Julien Offray de La Mettrie in his Man a Machine (1748).

Today, as in the past, the main points of debate between anthropic mechanists and anti-mechanists are mainly occupied with two topics: the mind — and consciousness, in particular — and free will. Anti-mechanists argue that anthropic mechanism is incompatible with our commonsense intuitions: in philosophy of mind they argue that unconscious matter cannot completely explain the phenomenon of consciousness, and in metaphysics they argue that anthropic mechanism implies determinism about human action, which (they argue) is incompatible with our understanding of ourselves as creatures with free will. Contemporary philosophers who have argued for this position include Norman Malcolm and David Chalmers.

Anthropic mechanists typically respond in one of two ways. In the first, they agree with anti-mechanists that mechanism conflicts with some of our commonsense intuitions, but go on to argue that our commonsense intuitions are simply mistaken and need to be revised. Down this path lies eliminative materialism in philosophy of mind, and hard determinism on the question of free will. This option is popular with some scientists, but it is rejected by most philosophers, although not by its most well-known advocate, the eliminative materialist philosopher Paul Churchland. Some have questioned how eliminative materialism is compatible with the freedom of will apparently required for anyone (including its adherents) to make truth claims.[2] The second option, common amongst philosophers who adopt anthropic mechanism, is to argue that the arguments given for incompatibility are specious: whatever it is we mean by "consciousness" and "free will," they urge, it is fully compatible with a mechanistic understanding of the human mind and will. As a result, they tend to argue for one or another non-eliminativist physicalist theory of mind, and for compatibilism on the question of free will. Contemporary philosophers who have argued for this sort of account include J. J. C. Smart and Daniel Dennett.
 
Some scholars have debated over what, if anything, Gödel's incompleteness theorems imply about anthropic mechanism. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church-Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it.

Gödelian anti-mechanist arguments claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent and powerful enough to recognize its own consistency. Since this is impossible for a Turing machine, the Gödelian concludes that human reasoning must be non-mechanical.

However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument against mechanism.[3][4][5] This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."[6]

Source: Mechanism (philosophy) - Wikipedia
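For reference, the formal core of the argument above can be stated compactly. These are standard statements of Gödel's second incompleteness theorem and the shape of the Lucas-Penrose-style inference, not anything specific to the extract:

```latex
% Godel's second incompleteness theorem (G2), for any consistent,
% recursively axiomatizable theory T interpreting enough arithmetic:
\[
\mathrm{Con}(T) \;\Longrightarrow\; T \nvdash \mathrm{Con}(T)
\]
% The anti-mechanist inference: if human reasoning H were such a machine,
% and H is consistent and H proves Con(H), then G2 is violated; so H is not
% such a machine. The consensus reply quoted above rejects the premise
% instead: either H is not consistent, or H cannot actually establish Con(H).
```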
 

The tree I see is the way a tree appears to me.

CR seems to add another layer on to this ... and then that requires conscious agents and now there is blood ALL over Ockham's Razor, a case of epistemic hemophilia.
 

I'll see your Wikipedia and raise you an SEP

Mechanisms in Science (Stanford Encyclopedia of Philosophy)
 
To say that the UI is based on reality seems to assume something against which the fundamental activity of CAs can act ... but that doesn't make sense if all that exists is consciousness ... so I have a hard time seeing how this isn't isomorphic to the problems of materialism? That is, each has the same kind of problem.

Materialism has the hard problem of consciousness, Idealism has the hard problem of other minds (or of matter), and Panpsychism has the hard problem of combination ...

I wouldn't be too surprised if something could be put together claiming that everything is conscious activity - if Hoffman could come up with some kind of Turing-like formalism ... but without any additional explanatory power (one hard problem goes away and another comes in), on what basis would we choose?
 
I agree. Hoffman says CR is not panpsychism because particles are just the UI.

However, what is the real difference between conscious atoms interacting and conscious agents interacting?
 

This IEP article, "The Lucas-Penrose Argument about Gödel's Theorem," might be helpful:

Lucas-Penrose Argument about Gödel’s Theorem | Internet Encyclopedia of Philosophy
 
That's what I am asking.
Well, it's one thing to say that CR and panpsychism are essentially the same, but to say that physical monism or emergentism are the same is different imo.

And I agree that they present difficult problems as well, but I'm not convinced the problems are equivalent to the HP. In fact, those other problems are still present for physical monism and emergentism, on top of the HP.
 
The "hard problem" as we have used it here is based on Nagel's WILTBAB - which poses it as a problem for physicalism. A complete, physicalist explanation would leave something out ... "what it is like".

The "hard problem" as I see it is in our attempt to fully explain the result (i.e. our "qualia" and "experience") in terms and categories that are dependent on a background of being that precedes the formation and application of the "consciousness" categories and abstractions. I suppose the term used for this category is "pre-reflective" (@Constance).

What does "a background of being" mean? What are "the "consciousness" categories and abstractions?" Can I re-write this as:

The hard problem as I see it is in our attempt to explain "experience" in familiar terms - terms that came before the idea of consciousness. (Do you mean in "intuitive" or "naive" terms?)

Without a loss of meaning?

I don't think everyone makes this mistake though or thinks in these terms. A lot of people have argued that the problem dissolves when you look at it in another way. But these other ways of looking at things create other "hard problems". And the problem does persist. If it were as easy as pointing out that we are thinking about it the wrong way - would it have gone away by now? Or do we maintain that large groups of intelligent people just get it wrong?

And Nagel was being rhetorical when he wrote WILTBAB.


For the first question, I observe that terms that are useful and meaningful are always familiar (or derived from familiar ones). As for the "background," I suppose this might be the closest caricature to "noumena" ... the background is something like what you might imagine to be the visual field that is outside your field of view. It might also (being slightly esoteric here) be considered the time before you were born (or equivalently, after you die). Background is just a metaphor -- it is meant to be an illustration of the mechanism that may allow you to experience "figure" or "form."

It is interesting to see that you find "familiar terms" to precede the "idea of consciousness" -- this makes me think that our "idea" or notion of "consciousness" is in this respect not so familiar or intuitive.

The "hard problem" as I see it is in our attempt to fully explain the result (i.e. our "qualia" and "experience") in terms and categories that are dependent on a background of being that precedes the formation and application of the "consciousness" categories and abstractions. I suppose the term used for this category is "pre-reflective."

I will need to revisit this statement above...as well as the question "what is it like to be a ______"

But I think that maybe the question can be simplified to "what is it like to be" -- even if we ask it about ourselves.

Work in progress...
 