Yes, go ahead. You were asking my permission?

Give'r?
What I'm saying is the subjective experience is the act of specific neurons firing a specific way.
And that's what I'm saying too.
Other than being an overview or map of the different ideas, what's he offering here?

The Chalmers and Nagel papers aren't linked as statements of my position; they were where we started on the thread ... so everyone who's been here a while has read them, and it means we don't have to re-hash what's been covered.
Erm, I'm not sure.

So phenomenal experience isn't constituted of neurons, but rather of the pattern of information they produce. Ergo phenomenal experience equals information, no?
I'm with you so far.

Smcder, you stated that you believe consciousness might arise in non-organic computers. This to me seems to imply even more that mind isn't constituted of any specific physical substance, but that mind is embodied by physical substances, be they organic or silicon.
It's the patterns of information they produce that's important, not the atomic composition of the physical substance producing the pattern of information.
Here's where it goes a bit pear-shaped for me.

In my thinking:
Information = Proto qualitative experience (proto mind)
Integrated information = Qualitative experience (mind)
I'm lost.

This means proto-mind has existed from the moment matter existed, and mind has existed from the moment matter gave rise to integrated information.
As I noted earlier, I wonder if the first moment of mind arising from proto-mind occurred when a physical system first arranged itself to represent another physical system.
However, the internal, affectual "information management" concept of the origin of mind is interesting too. I've not read that paper yet.
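Going back to the "information vs. integrated information" distinction above, here's a minimal toy sketch of one way you could cash it out numerically. This isn't Tononi's Φ or anything from the papers in this thread; the total-correlation measure and the made-up sample data are just my own illustration of "the whole carrying more than the parts do on their own."

```python
import math
from collections import Counter
from itertools import product

def entropy(counts):
    """Shannon entropy (bits) of a distribution given as a Counter of outcomes."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def total_correlation(samples):
    """H(A) + H(B) - H(A,B): zero when the two elements are independent,
    positive when the joint pattern carries structure beyond the parts."""
    joint = Counter(samples)
    a = Counter(s[0] for s in samples)
    b = Counter(s[1] for s in samples)
    return entropy(a) + entropy(b) - entropy(joint)

# Joint states of two hypothetical "elements" A and B sampled over time.
# Correlated case: B mostly copies A, so the whole carries extra structure.
correlated = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1), (0, 1), (1, 0)]
independent = list(product([0, 1], repeat=2)) * 2  # every combination equally often

print(total_correlation(independent))  # ~0.0  -> information, but no integration
print(total_correlation(correlated))   # > 0   -> the whole says more than the parts
```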
I think the assertion is that it is actually constituted of a physical substance: that would be the substrate doing the information processing. It's just the special arrangement and process that makes the mind.

For me it comes down to the fact that mind appears not to be constituted of any physical substance, yet appears to be intimately related to physical organisms. What we know about physical organisms is that they are information processing systems. The information comes to them via "lived experience" in a real, physical world, but all this lived experience is converted to information via neuronal spikes in the body-brain.
This converted and integrated information, I believe, must be the mind.
We've been on a qualia hunt and we've looked all over the physical body-brain and not found any qualia! But the qualia are jumping all over and around the body-brain. Where are the qualia coming from? What are they made of? Here, let's move all this information out of the way that also seems to be spilling out of the body-brain so we can have a better look...
I'm reading it now, will post later. However I'll offer this for the moment:
Chalmers trained under Hofstadter, not Nagel.
Nagel's paper is #17 of the 100 most cited papers in philosophy of mind
MindPapers: 100 most cited works by philosophers in MindPapers according to Google Scholar
... so, good, bad or indifferent, it's probably been thrown at him over the (I think) 40 years since its publication, and there have been counter-arguments and counter-counter-arguments, etc. ... so we should be able to address your concerns above - and most of them are pretty common reactions in discussions of the bat paper, for example:
That's like saying since "physicalism" doesn't currently have all the answers, it never will, and because I (at least think I) have a subjective experience, it must not exist physically!
The first bit is more in line with McGinn's position of New Mysterianism which should drive you even battier than Nagel's paper.
But the overall critique is addressed in a couple of places in the text:
What moral should be drawn from these reflections, and what should be done next? It would be a mistake to conclude that physicalism must be false. Nothing is proved by the inadequacy of physicalist hypotheses that assume a faulty objective analysis of mind. It would be truer to say that physicalism is a position we cannot understand because we do not at present have any conception of how it might be true. Perhaps it will be thought unreasonable to require such a conception as a condition of understanding. After all, it might be said, the meaning of physicalism is clear enough: mental states are states of the body; mental events are physical events. We do not know which physical states and events they are, but that should not prevent us from understanding the hypothesis. What could be clearer than the words 'is' and 'are'?
Of course he goes on, but he does recognize/anticipate your critique.
And in footnote 15:
I have not defined the term 'physical'. Obviously it does not apply just to what can be described by the concepts of contemporary physics, since we expect further developments. Some may think there is nothing to prevent mental phenomena from eventually being recognized as physical in their own right. But whatever else may be said of the physical, it has to be objective. So if our idea of the physical ever expands to include mental phenomena, it will have to assign them an objective character—whether or not this is done by analyzing them in terms of other phenomena already regarded as physical. It seems to me more likely, however, that mental-physical relations will eventually be expressed in a theory whose fundamental terms cannot be placed clearly in either category.
This is a good link that points out the rhetorical aspect of Nagel's argument that can be missed in a first reading:
Conscious Entities » What is it like to be a bat
You might find it a good tonic for your limbic system. What I'm interested in getting from Nagel is this: from what I've seen, people split neatly on this paper ... they either kind of go "yeah, that's obvious" or they go ballistic like you did ... so I'm curious if he'll share how he has taught his ideas through the years and handled various objections to them.
Here's where it goes a bit pear-shaped for me.
There's lots of kinds of information that won't lead to a mind.
I'm lost.
I'm reading it now, will post later. However I'll offer this for the moment:
My overall sense is that philosophy is a dead end in general, and in artificial intelligence in particular.
I think, of course, of all the pontificators who thought that heavier-than-air travel was philosophically impossible, even after it was done. I say it can be done, and it will be done.
Of course, I could be wrong. Mind could be the stuff of not of this universe. Of course, we could all be given consciousness by the incantations of dread Cthulhu, too.
But I doubt it.
I have a feeling that you fellas might like this (cf. pages 7 to 8):
http://www.fqxi.org/data/essay-contest-files/Klingman_Gravity_and_Nature.pdf
That's essentially what the dualist argument is: mind is a magical substance that isn't of the physical universe, yet interacts with it.

What can and will be done? AI? Chalmers thinks AI is possible; I posted a talk he gave on the topic earlier in this thread about how he thought it would happen. I don't know where Nagel stands on the issue.

I'm not sure what you mean by mind being "the stuff of not of this universe" ... I think you have a typo in there somewhere.
I know about axioms; I was just saying that if you want to bust out the math, I'll try to keep up.
Very Zen answer!
I just pulled a 185 deadlift, 5 lbs over bodyweight; the old man ain't dead yet.
I'm familiar with dualism.
Here's what I was trying to clarify with neurons, subjective experience, and action. The diagram is the best I can do; it's just two arrows pointing from neurons firing ... hope it makes sense.
neurons firing -----------(emergence)-----> subjective experience ("I will move my hand.")
|
|
|
V
action (my hand moves)
The question is: what relation, if any, is there between the subjective experience and the action?
I'll stop there, because I want to go one step at a time and make sure I understand. The kind of free will I would want would be that there is a direct relationship between my thinking, my subjective experience of willing or intending my hand to move and my hand moving ... but it looks like my subjective experience and my hand moving are both caused by the neurons firing?
As I understand it, the idea that subjective experience emerges from neurons firing is epiphenomenalism, and eliminative materialism would say there is no advantage conferred by having subjective experience; it's just what it feels like for the neurons to fire. Natural selection put together a brain, and it so happens that subjective experience is a quality of that brain.
So there is no relationship between the subjective experience of thinking "I will move my hand." and the hand moving, other than that the hand moving and the subjective experience are both caused by the neurons firing.
It seems this is one way to interpret experiments that appear to show a person becomes aware of the intention to move their hand after the impulse is already formed - under this reading, I believe free will can be eliminated and consciousness can be seen as an "illusion", in that there is no direct relationship between what you are doing and what is going on in your head; both are effects of neurons firing.
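To make sure I'm reading the diagram right, here's a toy sketch of that causal picture in code. The names and the coin-flip are made up; the point is just that the "experience" value is computed from the neural event and then feeds nothing back, while the action is driven by the neural event alone.

```python
import random

def neurons_fire():
    """Stand-in for the physical event: returns the motor command."""
    return "move_hand" if random.random() > 0.5 else "rest"

def subjective_experience(neural_event):
    """Emerges from the neural event but is never an input to anything else."""
    return f'feels like: "I will {neural_event.replace("_", " ")}"'

def action(neural_event):
    """Driven directly by the neural event, not by the experience."""
    return "hand moves" if neural_event == "move_hand" else "hand stays still"

event = neurons_fire()
experience = subjective_experience(event)   # the dead-end arrow in the diagram
behaviour = action(event)                   # the only causally effective arrow
print(experience, "|", behaviour)
```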
As I said there are other possible (maybe) relationships between neurons, subjective experience and action, but I have questions about those too.
I see it a little differently. Forgive me for going back to the computer analogy.
Every process enters the processor, goes through a series of logic gates, then comes out the other side. How many gates a process needs to go through before it comes out the other side is known as its gate depth.
So, a floating-point divide (a computationally expensive operation) may have a gate depth of, say, 20 gates. Each level of logic gates takes a finite period of time, so a floating-point divide takes 20 times the per-gate delay to deliver.
A simple "true or false" comparison may take, say, 2 gates to go through to deliver. So that's 1/10th as computationally expensive.
What I'm surmising is that there are different levels of awareness, and different levels of autonomy in our actions.
Consciousness is probably a neuronally expensive task. We have bazillions of neurons, and they're a network, but let's say just for the sake of argument they have a neuron depth of 20 to keep our higher consciousness up and running.
Pulling our hand away from something hot may have been evolutionarily optimized by our neurons a long time ago, so let's say it passes through whatever the equivalent is of 2 neuron "gates" to be done. The act may be done before we're actually aware of it.
Let's also say that we have various subcomponents of the mind. One is for autonomic functions, one is for sympathetic functions, one for parasympathetic functions, etc. My brain, for the sake of local optimization, may carve off a set of neuron pathways to drive me into work. That's its job -- just get me into work and don't make me think too much about it. Let's say it's optimized so it has a neuron depth of 10.
So it happily chugs along, and our consciousness is freed up to think as we drive into work. We may, then, take a turn or jam on the brakes before we become aware of it.
This doesn't mean that the higher level computational task of consciousness can't itself trigger a lower level event -- like taking a left hand turn around a traffic jam I just heard about on the radio.
In short, I see it kinda this way: a series of pretty elastically defined sub-processes being executed massively in parallel, with different timings.
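To put the whole picture in one place, here's a minimal sketch under the assumptions above. The per-"gate" millisecond figure is made up, and the depths are just the ones I guessed at; the point is only that the shallow processes finish long before consciousness registers the event.

```python
GATE_TIME_MS = 10  # assumed per-"gate" neuron delay (made-up figure)

# Neuron depths borrowed from the post above: a shallow, fast reflex,
# a mid-depth "drive me into work" routine, and deep, slow conscious awareness,
# all watching the same incoming events in parallel.
subsystems = {
    "reflex (pull hand back)": 2,
    "drive-to-work routine": 10,
    "conscious awareness": 20,
}

# Each subsystem's latency is just depth x per-gate delay.
for latency, name in sorted((d * GATE_TIME_MS, n) for n, d in subsystems.items()):
    print(f"{latency:4d} ms  {name}")

# Output:
#   20 ms  reflex (pull hand back)
#  100 ms  drive-to-work routine
#  200 ms  conscious awareness
```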