smcder
THAT sounded bad ...
Okay. I wasn't sure if you were suggesting phenomenal experiences/qualia are illusory.

Easy now ... it's just humor ... we've been on a quale hunt for much of this thread and I'm curious too.
If the sense of free will is a cognitive illusion that exists as a result of determined, evolutionary processes, what adaptive function does it provide?
The functional role of free-will illusion in cognition: “The Bignetti Model”
Abstract
When performing a voluntary action the agent is firmly convinced that he has freely decided to perform it. This raises two questions: “Is this subjective perception of free will (FW) an illusion?” and “Does it serve a useful purpose?”. The answers are tentatively given by “The Bignetti Model” (TBM) as follows: (1) The so-called “voluntary” action is decided and performed by the agent’s unconscious mind (UM) by means of probabilistic responses to inner and outer stimuli; (2) After a slight delay, the agent becomes aware of the ongoing action through feedback signals (somatosensory, etc.) that are conveyed to the brain as a consequence of its performance. Thus, the agent’s conscious mind (CM) always lags behind unconscious activity; (3) Owing to this delay, the CM cannot know the unconscious work that precedes awareness, thus the CM erroneously believes it has freely decided the action. Though objectively false, this belief is subjectively perceived as true (FW illusion). It is so persistent and deep-rooted in the mind that the CM is unwilling to abandon it; (4) The FW illusion satisfies a psychological need to secure the arousal of the senses of agency (SoA) and of responsibility (SoR) of the action. Both SoA and SoR inevitably lead the CM to self-attribute reward or blame depending on action performance and outcome; (5) Both reward and blame are motivational incentives which foster learning and memory in the CM; the updating of knowledge will provide new information and the skill required for further action (restart from point 1).
Introduction
The American philosopher John Searle believes that mind and body are not two different entities; that consciousness is an emergent property of the brain, and that consciousness is a series of qualitative states (Searle, 1997). With regard to the old philosophical question of duality and FW, Searle is astonished that the problem of duality has not yet been resolved, and thus asks himself why we find the conviction of our own FW so difficult to abandon. He writes: “The persistence of the traditional free will problem in philosophy seems to me something of a scandal”. Nevertheless, many thinkers have studied this issue and many papers have been written, but it appears that little progress has been made. He questions: “Is there some conceptual problem we have simply ignored? Why is it that we have made so little progress compared with our philosophical ancestors?” He is not able to provide a philosophical solution to the question, and rather than adding further proposals, none of which would be convincing, he bypasses the obstacle by stating that “the philosophical mind–body problem seems to me not very difficult. However, the philosophical solution kicks the problem upstairs to neurobiology, where it leaves us with a very difficult neurobiological problem. How exactly does the brain do it, and how exactly are conscious states realised in the brain? What exactly are the neuronal processes that cause our conscious experience, and how exactly are these conscious experiences realised in brain structures?”
We agree with Searle when he claims to be astonished by this evidence, but we do not agree with him when he suggests that we should “kick the question upstairs to neurobiology” as if FW were not an intriguing issue anymore. This paper will attempt to take a significant step forward on this issue.
Material events can be described by an external observer as a chain of causes and effects which, in turn, may be causes for other effects and so on. Conversely, when we voluntarily cause an event, we do not feel that we are part of a chain; rather we consider our action to be the result of free will (FW). Wegner states that scientific explanations account for our decisions and the illusion of FW (Wegner, 2002). There must always be an objective mechanism, i.e., a precise relationship between causes and effects, underlying a voluntary action. We think that we consciously will what we are doing because we feel “free from causes” and because we experience this feeling many times a day (Wegner, 2002). ...
If the sense of free will is a cognitive illusion that exists as a result of determined, evolutionary processes, what adaptive function does it provide?
Well, it made me laugh.

I'm not sure of your meaning here, but I want to be clear that I'm not trying to "trick" Marduk. It's an honest question. I'm curious to hear his thoughts.
If it's the term "quale" that you're balking at, let's just focus on phenomenal experience.
http://www.yale.edu/acmelab/articles/Morsella_2005.pdf
The Function of Phenomenal States: Supramodular Interaction Theory
Abstract
Discovering the function of phenomenal states remains a formidable scientific challenge. Research on consciously penetrable conflicts (e.g., “pain-for-gain” scenarios) and impenetrable conflicts (as in the pupillary reflex, ventriloquism, and the McGurk effect [H. McGurk & J. MacDonald, 1976]) reveals that these states integrate diverse kinds of information to yield adaptive action. Supramodular interaction theory proposes that phenomenal states play an essential role in permitting interactions among supramodular response systems—agentic, independent, multimodal, information-processing structures defined by their concerns (e.g., instrumental action vs. certain bodily needs). Unlike unconscious processes (e.g., pupillary reflex), these processes may conflict with skeletal muscle plans, as described by the principle of parallel responses into skeletal muscle (PRISM). Without phenomenal states, these systems would be encapsulated and incapable of collectively influencing skeletomotor action.
Introduction
Discovering the function of phenomenal states remains one of the greatest challenges for psychological science (Baars, 1998, 2002; Bindra, 1976; Block, 1995; Chalmers, 1996; Crick & Koch, 2003; Donald, 2001; Dretske, 1997; Jackendoff, 1990; James, 1890; Mandler, 1998; Searle, 2000; Shallice, 1972; Sherrington, 1906; Sperry, 1952; Wegner & Bargh, 1998). These enigmatic phenomena, often referred to as “subjective experience,” “qualia,” “sentience,” “consciousness,” and “awareness,” have proven to be difficult to describe and analyze but easy to identify, for they constitute the totality of our experience. Perhaps they have been best defined by Nagel (1974), who claimed that an organism has phenomenal states if there is something it is like to be that organism—something it is like, for example, to be human and experience pain, love, breathlessness, or yellow afterimages. Similarly, Block (1995) claimed, “The phenomenally conscious aspect of a state is what it is like to be in that state” (p. 227). In this article, I present a theory that addresses a simple question: What do these states contribute to the cognitive apparatus and to the survival of the human organism? ...
Software on a hard drive is made of discrete quanta of magnetic fields arranged such that you can recover the 1's and 0's.

In explaining your thoughts on how the brain and consciousness are related, you've used the analogy of a computer and software.
When it comes to computers and software (input), it's easy to see how both are physical. I think. Marduk, of what would you say computer software is constituted? How about output?
I think it's both. When you create a state machine that can consume its own output, you make a feedback loop.

And I'm wondering about whether the output of a computer and software fit into the brain/consciousness analogy. Is consciousness input, output or both?
It would be the state that gets created in your brain when you look at green or think about it.

One of the things Chalmers helped me understand is that it's not easy to see of what phenomenal experience — one aspect of consciousness — is constituted. A common example is the experience of the color green. We can call this "experience of green" a quale. (Sorry for the semantics. Shred it if needed.) Of what is a green quale "made?" Quarks? Atoms? Chemicals? Neurons?
Software on a hard drive is made of discrete quanta of magnetic fields arranged such that you can recover the 1's and 0's.
Software on a flash drive is made up of logic gates arranged to store the same 1's and 0's.
Software in memory is a series of 1's and 0's stored in a different kind of memory gate that happens to be pretty close to the processor. The processor takes those 1's and 0's and processes them through logic gates into different 1's and 0's.
Some of these 1's and 0's are interpreted by specialized hardware to paint pixels on a screen, turn a magnet on and off really fast to make sound, or just sit there and listen for other 1's and 0's from, say, a mouse.
The process of execution falls into different domains, be it single-threaded or multi-threaded, but at the end of the day you're describing a series of 1's and 0's sitting in memory registers that get shifted through logic gates to perform functions according to instruction sets described in other registers and in logic that is "frozen" into the hardware.
One can describe the process of software execution in different kinds of domains, but I've taken you pretty far into the guts of the machine and close to the metal.
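The "bits shifted through logic gates" picture above can be sketched in a few lines of Python. This is a toy illustration, not a model of any real chip: it just shows how a single functionally complete gate (NAND) can be composed into other gates, and then into a small circuit that does something useful with 1's and 0's.

```python
# Toy illustration: logic "gates" as functions, with bits shifted through them.
# Real processors do this in silicon; this only mirrors the idea in software.

def nand(a, b):
    """NAND is functionally complete: every other gate can be built from it."""
    return 0 if (a and b) else 1

def xor(a, b):
    """XOR built purely out of NAND gates, as it could be wired in hardware."""
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def half_adder(a, b):
    """A 1-bit half adder: two input bits become a sum bit and a carry bit."""
    return xor(a, b), (a and b)

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={carry}")
```

Chain enough half adders (plus carry logic) and you have the arithmetic unit of a processor; everything above that is more of the same composition.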
What are you after? How semiconductors work, how logic gates work, how processors or memory busses work, or OS's, or the applications running on the OS?
Maybe this will help. A computer by definition is a state machine. A state machine is one to which you can ascribe a series of different states, along with the rules for how you get from one state to another.
For example, you could interpret a human being as a simple state machine in one of two states: alive or dead.
The process of getting from alive to dead means you have to die; and you don't come back. So the description is simple:
A -> D, with the -> being a one-way transition from alive to dead.
A computer is like this, only really really complicated.
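That two-state description translates directly into code. Here's a minimal Python sketch of the A -> D machine (the event name "die" is just illustrative), with the transition table making the one-way nature explicit:

```python
# A minimal state machine for the alive/dead example above.
# States: "A" (alive), "D" (dead). The only transition, A -> D, is one-way.

TRANSITIONS = {
    ("A", "die"): "D",   # alive + die -> dead
    # no entries for ("D", ...): dead is an absorbing state, no way back
}

def step(state, event):
    """Return the next state, or stay put if no transition applies."""
    return TRANSITIONS.get((state, event), state)

state = "A"
state = step(state, "die")      # A -> D
state = step(state, "revive")   # no such transition: stays D
print(state)  # -> D
```

A real computer is the same shape, just with astronomically more states and transitions.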
I think the human brain in execution could also be described as a series of really really simple state machines at the neuron level, only we have a whole hell of a lot of them, with the state machines interacting and looping with each other.
I think it's both. When you create a state machine that can consume its own output, you make a feedback loop.
In other words, lucky you -- you get to influence your own software at run-time.
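A machine that consumes its own output is just as easy to sketch. In this toy Python example (the arithmetic is arbitrary, chosen only to make the loop visible), each step's output is fed back in as the next step's input, so the machine's own history keeps steering its future behavior:

```python
# Toy feedback loop: a state machine whose output becomes its next input.
# The "state" here is just a number; the update rule is arbitrary.

def machine(state, signal):
    """One step: combine the current state with an incoming signal."""
    new_state = (state + signal) % 10
    output = new_state * 2        # the machine's output...
    return new_state, output

state, output = 0, 1
trace = []
for _ in range(5):
    state, output = machine(state, output)  # ...is fed back in as the input
    trace.append(state)
print(trace)  # -> [1, 3, 9, 7, 1]
```

Nothing outside the loop drives the later steps; the machine is, in a small way, influencing its own execution.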
It would be the state that gets created in your brain when you look at green or think about it.
Here's an example of a cat's "quale" being mapped:
Sure, but now we're just talking about scale, which is an engineering problem, not a philosophical one.

That's part of the problem with interpreting experiments of that type - it shouldn't be surprising that we get a spike on an EEG (or whatever the device is) before we become aware of a thought - that's how we experience it subjectively too, and we talk about it that way: thoughts come out of nowhere or pop into our heads. We don't say "now I'm going to have this thought." But, as I said, with attention we can gain awareness, and the brain also rewires itself as a result of how we think ... so it's sometimes the chicken, sometimes the egg for free will.
The experiments I've seen have been for simple intentions or actions, so I don't know whether it would scale up ... but we talk about our intentions or actions that way too ... We may experience them coming from somewhere "outside" our consciousness (though we can usually shed light on it if we look hard) or as irresistible impulses ... at one time at least the law even recognized this.
Are you asking me what it's like for the software to be executed?

All that skips over the subjective experience ... the "what it's like to be" ...
Objectively getting to the subjective is as concise a statement of "the hard problem" of consciousness as I can formulate.
I'm sure you've read Nagel's "What Is It Like to Be a Bat?"
Are you asking me what it's like for the software to be executed?
I would have no idea.
Again, I'm speculating that consciousness is an emergent property of some self-referential highly complex systems that can receive and respond to external stimuli.
It's not like I know this to be true, but at least it's being tested somewhat, and at least it doesn't require some mystical "stuff" that's not part of the material universe to exist.
Is it fair to say that software is a pattern of information [embodied as physical bits]?

[A]t the end of the day you're describing a series of 1's and 0's...
I think it's easy to see how the brain and a computer are alike but not so easy to see how phenomenal experience and "a series of 1's and 0's" are alike.

What are you after? How semiconductors work, how logic gates work, how processors or memory busses work, or OS's, or the applications running on the OS?
As I believe the mind is information, I think it's constituted of both the incoming and outgoing (so to speak) information.

I think it's both. When you create a state machine that can consume its own output, you make a feedback loop.
In other words, lucky you -- you get to influence your own software at run-time.
I'm not sure what you mean by "the state" here. Can you go a little farther? The state of neurons? How does this state give rise to or create qualitative experience?

[A quale] would be the state that gets created in your brain when you look at green or think about it.
Is it fair to say that software is a pattern of information [embodied as physical bits]?
I think it's easy to see how the brain and a computer are alike but not so easy to see how phenomenal experience and "a series of 1's and 0's" are alike.
Having said that, my current regard of the analogy may be even more literal than yours.
I do think mind literally is a pattern of information. A very, very complex, dynamic pattern. (Having said that, I'm not suggesting that it's the complexity that gives rise to mind.)
It's not clear to me what physical substance a qualitative experience might be reduced to, nor do I see how a quale could emerge from physical processes. (That is, not exist beforehand in any form, but then exist post-hand.)
Thus I believe the constituents of qualia — phenomenal experiences — exist as a fundamental aspect of physical reality.
I think qualitative experiences — and the rest of consciousness — are constituted of information.
As I believe the mind is information, I think it's constituted of both the incoming and outgoing (so to speak) information.
I'm not sure what you mean by "the state" here. Can you go a little farther? The state of neurons? How does this state give rise to or create qualitative experience?
"Popping into our heads" and "coming from outside our consciousness" may be the subjective experience of our bodies receiving and integrating information; that is, the process of received, pre-integrated information (unconscious mind) being processed into integrated information (conscious mind).

...before we become aware of a thought - that's how we experience it subjectively too - and we talk about it that way too, thoughts come out of nowhere or pop into our heads. ...
We may experience them coming from somewhere "outside" our consciousness (though we can usually shed light on it if we look hard) or as irresistible impulses ... at one time at least the law even recognized this.
"Popping into our heads" and "coming from outside our consciousness" may be the subjective experience of our bodies receiving and integrating information; that is, the process of received, pre-integrated information (unconscious mind) being processed into integrated information (conscious mind).
This particular meditative experience you've described many times — the formation and rejecting of thoughts — may be your sense of self (conscious mind or integrated information) observing (and ultimately denying) the integration of new information.