

Consciousness and the Paranormal — Part 12

@Soupie

And this part is interesting too ...


So early work on the perceptron was affected by Minsky and Papert's book criticizing the limits of the perceptron (for example, a single perceptron can model some logic gates but not an XOR gate). However, my reading of "Talking Nets: An Oral History of Neural Networks" indicates that Minsky and Papert knew that perceptrons could be linked together to form an XOR gate and that neural networks held promise for AI, and that they wrote the book to give support to digital electronic logic gates (integrated circuits) because they were more involved with that technology (I think that's right). Some claim that set off the first "AI Winter" by killing off work on neural nets, but others say work went on with neural networks and backpropagation... and it is only recently that they have paid off, because only recently do we have the computational power to simulate large neural networks. But see the paper above that critiques even the newest deep learning network approaches, and consider how much less energy the human brain uses to do all sorts of things that simulated neural networks cannot.
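As a quick illustration of the point about linking perceptrons (a toy sketch of my own in Python, not anything from the book): a single linear threshold unit cannot compute XOR, but wire two layers of them together and it falls right out.

```python
# A single perceptron (linear threshold unit) cannot compute XOR,
# but two layers of them can: XOR(a, b) = AND(OR(a, b), NAND(a, b)).
# Weights and biases here are hand-set, not learned.

def perceptron(weights, bias, inputs):
    """Classic linear threshold unit: fires iff w.x + b > 0."""
    return int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

def OR(a, b):   return perceptron([1, 1], -0.5, [a, b])
def NAND(a, b): return perceptron([-1, -1], 1.5, [a, b])
def AND(a, b):  return perceptron([1, 1], -1.5, [a, b])

def XOR(a, b):
    # Hidden layer: OR and NAND; output layer: AND over their outputs.
    return AND(OR(a, b), NAND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```

Each individual unit only draws a single line through its input space; stacking them is what buys the non-linearly-separable XOR.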
 
Sure.
Please explain how Dennett is a dualist.
I've made no claim about Dennett being a dualist or anything else, and therefore no explanation is required.
please explain how something can be emergent and fundamental simultaneously.
I recently explained how something can be emergent and fundamental simultaneously via the fundamental quanta model, where if whatever composes such quanta is assumed to be fundamental, and some number of them form something larger, then that something is composed of the same fundamental stuff as its constituent parts, and therefore remains fundamental, yet the overall effect on the macro scale may stand out from the random background noise.
Please explain how a fundamental consciousness field fits in the materialist paradigm. Or fundamental consciousness quanta.
As already explained; the "materialist paradigm" leaves a lot of room for personal interpretation. I prefer to use the term "physicalist" and to equate the physical with anything that we know exists. This includes all the familiar solids, liquids, gasses, plasmas, radiation, waves, fields, particles, whatever the case may be that anything in the universe ( including consciousness ) is composed of.

The logic behind this was well said by @marduk who pointed out that if something is interacting with our physical universe then it must also be part of our physical universe. I see no way to escape this logic. Consequently if consciousness had no way of interacting with our universe, we wouldn't know it existed. But we do. Therefore it must be physical.
please explain how this fundamental consciousness field interacts with matter and energy to support your claim.
It isn't necessary to explain how something does what it does to know that it does it somehow. For ages we didn't know how the Sun produced light. We just knew that it did. Eventually we figured out a lot more about it. However we still don't have it figured out on the most fundamental scale. The thing about fundamentalness is that we simply have to accept it as part of the way things are.

To add, I also don't make any claim that consciousness is fundamental. It might not be. I don't fence myself in with conditions that we haven't got sufficient evidence to be sure about. All I'm doing is proposing models that aren't already on the table or perhaps differ a little from others I've seen. Otherwise all we're doing is rehashing the same stuff over again and getting nowhere.
 
I think I finally understand this notion of intelligence without representation. It’s a bit like procedural knowledge versus semantic knowledge.

The following speaks to thoughts I’ve had in this regard:


"If this paradigm shift is achieved, Brooks' proposal for non-centralized cognition without representation appears promising for full-blown intelligent agents - though not for conscious agents and thus not for human-like AI."

I've always felt like this notion of intelligence without representation failed to account for human consciousness. But I see how it could account for intelligent behavior.

Here is a link to a pdf of R. A. Brooks's paper entitled "Intelligence without Representation":

http://people.csail.mit.edu/brooks/papers/representation.pdf


Since Dreyfus hasn't made his paper by the same title (and apparently much else) available online without paywalls, I'm going to try to get a printout of his paper titled "Intelligence without Representation" through interlibrary loan.

Thanks for this link, which I am reading now:

Müller, Vincent C. (2007), ‘Is there a future for AI without representation?’, Minds and Machines, 17 (1), 101-15.
 

Dreyfus passed away in April 2017 and I'm not surprised at the lack of online presence... Brooks was/is the roboticist, and I'm not sure of the exact relationship between the two papers other than the name, but Brooks' early work did respond somewhat to Dreyfus' critique.

Sean D. Kelly was Dreyfus' student, and I think his papers may be more accessible; there may also be some access to Dreyfus' papers through his web presence. I'll try to have a look.
 

Very interesting. It seems that there is more going on in the organically evolved human brain-mind, expressed in (and made tangible through) prereflective and reflective consciousness, than 'computation'. What do you think?
 
@Soupie

This may help with intelligence without representation, from Sean Kelly's paper Merleau Ponty On The Body:

"'It is probably because', Merleau-Ponty concludes, 'knowledge of where something is can be understood in a number of ways.' The general point of Merleau-Ponty's discussion is that the understanding of space that informs my skillful, unreflective bodily activity – activity such as unreflectively grasping the doorknob in order to go through the door, or skillfully typing at the keyboard – is not the same as, nor can it be explained in terms of, the understanding of space that informs my reflective, cognitive or intellectual acts – acts such as pointing at the doorknob in order to identify it. As Merleau-Ponty says, in skillful, unreflective bodily activity..."

So there's the distinction between cognitive, reflective acts and "coping", spoken about here as:

"skillful, unreflective bodily activity."

Which would be the kind of intelligence AI researchers were trying to bring to robots. As I said, it turns out to be hard to get a robot that can run around your house without getting stuck (and without knowing it's stuck) on different kinds of carpet or under furniture, etc., much less cope with stairs... And yet this never happens to your dog (or cat): "Honey, the dog is stuck in the shag carpet again!"

Early AI wanted to do this with procedural knowledge and representation. Rodney Brooks took another path...and the Roomba was born.
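For anyone curious what that other path looks like, here's a toy sketch (the layer names and sensor format are invented for illustration, not Brooks' actual code) of the subsumption-style idea: layered behaviors that map sensors directly to actions, with higher-priority layers overriding lower ones and no world model anywhere.

```python
# A toy sketch of Brooks-style subsumption architecture: each layer maps
# sensor readings directly to an action (or defers), and a higher-priority
# layer subsumes the lower ones whenever it has an opinion.
# There is no planner, no map, no internal model of the room.

def avoid(sensors):
    """Layer 0 (highest priority): back away when bumped."""
    if sensors["bump"]:
        return "reverse"
    return None  # no opinion; defer to lower-priority layers

def wander(sensors):
    """Layer 1: otherwise just keep moving."""
    return "forward"

LAYERS = [avoid, wander]  # highest priority first

def act(sensors):
    # First layer with an opinion wins.
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"bump": True}))   # reverse
print(act({"bump": False}))  # forward
```

The intelligence, such as it is, lives in the coupling between sensing and acting rather than in any stored description of the world.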
 
This seems to be relevant:

Philosophy of Skillful Coping. Motor Intentionality vs. Representations for Action
Cristinel Ungureanu and Irina Rotaru
Open access under a Creative Commons license

Abstract:
In many of his papers the contemporary and much debated author Hubert Dreyfus resorts to Merleau-Ponty's concept of "motor intentionality" in order to uphold the view of direct relation, not mediated by mental representations, between subject and world. He claims that as a person becomes an expert in a domain (driving, playing piano etc.), she does not respond in a rule-like way to objects, but in a flexible way to situations as wholes. No representations of objects or rules are active anymore; the whole situation in which that person is immersed requires her to act in a certain way. However, skillful coping, although not guided by rules, is not automatic action; it is intentional behavior, viz. motor intentionality – the body's adaptation to the environmental "solicitations". This paper challenges this thesis by arguing that, in order to account for skillful coping, one needs to explain how the body succeeds in dealing with complicated objects that require multiple object tracking, specific timing and spatiotemporal coordination of movements. Using contemporary research findings, it could be argued that such coping needs binding of many kinds of information (about object features, spatial details and motor commands) in the same mental file. After clarifying the concept of "mental representation", the paper argues that such files are subpersonal representations for action.

Pdf of the whole paper is available here:

Philosophy of Skillful Coping. Motor Intentionality vs. Representations for Action
 
This too, apparently printed in a conference program and summarizing a book by Michael J. Spivey entitled The Continuity of Mind.

On the Continuity of Mind
Michael J. Spivey
Department of Psychology Cornell University Ithaca, NY 14853
spivey@cornell.edu

A recent movement in the cognitive sciences is encouraging that we discard the computer metaphor of the mind in favor of a continuous (both in time and in feature-space) dynamical framework for describing cognition (e.g., Port & Van Gelder, 1995; Spivey, in preparation; Thelen & Smith, 1993; Van Orden, Holden, & Turvey, in press). As much of the advancement of this metatheoretical framework has taken place in motor movement research (e.g., Kelso, 1995), a significant proportion of the cognitive science community has conveniently been able to ignore it, more or less. However, as contemporary theorists (e.g., Ballard, Hayhoe, Pook & Rao, 1997; Barsalou, 1999; Glenberg, 1997) renew an emphasis on the physical embodiment of cognition, and the pivotal role of action in all thought, it becomes clear that motor movement -- and especially the theoretical advances of dynamical systems that have come with it -- need to figure prominently in our treatment of mind.

In this talk, I will briefly touch on a number of experimental findings, from a few different cognitive psychology laboratories, that appear more consistent with a dynamical-systems perspective on cognition than an information-processing one. These findings come from some of the core areas in traditional cognitive psychology, including categorical perception, visual attention, spoken word recognition, and sentence processing. When placed in the context of neurophysiological evidence for distributed neuronal population codes coalescing over time, and computational demonstrations of attractor network dynamics, these findings converge on a description of the mind as a graded, probabilistic, continuously flowing “event”, rather than a discrete logical stage-based “object”.

Without abandoning the vast empirical database produced by decades of traditional cognitive psychology, the new framework encourages an extension of these inquiries using continuous on-line experimental measures that can reveal the real-time dynamical nature of cognition, perception, and action. Additionally, a computational characterization of those temporal dynamics can be provided by attractor networks, which loosely approximate both the neurophysiological properties and the temporally continuous nature of real biological neural networks. In a dynamical (as well as ecological) psychology, we are compelled to treat mind as a continuous nonlinear trajectory through a high-dimensional state-space; not as a box full of boxes full of rules and symbols. As the debate continues (cf. Dietrich & Markman, 2000), the benefits of this new perspective will be witnessed in the decades to come.

References:

Ballard, D. H., Hayhoe, M. M., Pook, P. K., and Rao R. P. N. (1997). Deictic codes for the embodiment of cognition. Behavioral and Brain Sciences, 20, 723-767.

Dietrich, E. & Markman, A. (2000). Cognitive Dynamics. Mahwah, NJ: Erlbaum.

Glenberg, A. (1997). What memory is for. Behavioral and Brain Sciences, 20, 1-55.

Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577-660.

Kelso, J. A. S. (1995). Dynamic Patterns. Cambridge, MA: MIT Press.

Port, R. & Van Gelder, T. (1995). Mind as Motion. Cambridge, MA: MIT Press.

Spivey, M. (in preparation). The Continuity of Mind. New York, NY: Oxford University Press.

Thelen, E. & Smith, L. (1993). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, MA: MIT Press.

Van Orden, G., Holden, J., & Turvey, M. (in press). Self-organization of cognitive performance. Journal of Experimental Psychology: General.

csjarchive.cogsci.rpi.edu/proceedings/2003/pdfs/32.pdf
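The attractor-network idea from the abstract can be sketched in a few lines (a minimal Hopfield-style toy of my own, not Spivey's model): a stored pattern becomes a fixed point of the dynamics, and a corrupted input settles back onto it over time, which is the "continuous trajectory toward an attractor" picture in miniature.

```python
# Minimal Hopfield-style attractor network: one stored +/-1 pattern,
# Hebbian weights, synchronous sign updates. A corrupted state flows
# back to the stored pattern instead of being looked up symbolically.
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=32)          # the stored pattern

# Hebbian weights: W = p p^T, with zero self-connections.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Corrupt a few units, then let the network settle.
state = pattern.copy()
state[:5] *= -1
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True: the attractor recovers it
```

With many stored patterns the dynamics get richer (and can fail in interesting ways), but even this toy shows the contrast with a discrete stage-based lookup: the answer is a place the system flows to, not a symbol it retrieves.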
 
Hooray! I like the direction this is taking: "I will briefly touch on a number of experimental findings, from a few different cognitive psychology laboratories, that appear more consistent with a dynamical-systems perspective on cognition than an information-processing one."
Representationalism relies on this notion that 'information' from the environment gets re-presented, or converted, into meaningful information; hence the link with processing and computationalism...
The concept of information is at the heart of cognition and representationalism but is fraught with problems.

Btw... I'm Pharoah... login problems, so I re-registered... Also, I have another paper coming out in 'Think' (not open access; email me for a copy), coauthored with Anton Sukhoverkhov: 'Polo Mints: Gateway to Existential Enlightenment'
 
It's not about rejecting the "no such thing as objective and subjective states" but about absorbing them in a framework that shows them to be facets of a kind of "world" from which such terms can be "thought" as "extant."
I'm not clear on the above. The objective exists entirely independent of whether there is any "thought" about it. The whole point of the objective world is that it exists outside our individual mental "framework" ( other than in the context of a subject for discussion like this ). We can't know what the reality of the objective world is. We can only infer its existence via our senses. Now we're back to philosophy 101.
Subjective idealism is an attempt to create the whole background of existence out of a facet viewed and experienced--a recreational model created by the "heads" of a coin trying to comprehend its own backside without acknowledgement of the necessary interdependence involved in the transaction between ...
That strikes me as a very good analogy.
The reality of what we continually live and experience embedded in the entire background of being lies in-between the categories we posit as their source... (i.e., the "subjective" or "objective" are just models created by the entire system, which we know not what, because our "whatness" questioning cannot exit the very theatre or scope that brought the primordial basis for the same)
The above gets fuzzy again. There is what is. Then there is what we think about it, and what we think about it doesn't transform it into something else. It just adds a mental representation of it to the overall situation. In other words, the objective and the subjective aren't simply about mental versus material. If that were the case, then there could be no objective mental states. But there are. For example, my mental state is an objective thing in relation to you. But whatever you might think about it is your subjective experience. Bridging that divide is communication. Granted, it is not the most dependable of bridges. But it's all we've got.
 
@Constance
@Soupie


I just read this summary, not the whole paper, so ... spoiler alert! :-)

"This test has a profound implication: If a machine that uses subsumption architecture solves the task, we must register non-zero representation, indicating that the machine does in fact model processes internally. Just because you do not intend to use an internal model, and just because you claim not to use representations, doesn’t mean that they are not there. Subsumption architecture machines indeed have internal states and send information back and forth between computational subunits, which in turn could be used as representations. What I suggest is that Brooks’ machines might not have been intended to use representations, but when you apply our mathematical measures of representation, you may find them lurking here after all. The claim that you can have “Intelligence without Representations” really requires that you quantify representation to back up that claim. Historically, this never happened of course. Tentatively, it is becoming possible now, due to our work. What remains is this: We aren’t very good at programming internal models into machines, and even if you try your best to avoid it (like in the subsumption architecture), these seem to find a way to represent anyway, as does evolution."
 
I've made no claim about Dennett being a dualist or anything else, and therefore no explanation is required.

I recently explained how something can be emergent and fundamental simultaneously via the fundamental quanta model, where if whatever composes such quanta is assumed to be fundamental, and some number of them form something larger, then that something is composed of the same fundamental stuff as its constituent parts, and therefore remains fundamental, yet the overall effect on the macro scale may stand out from the random background noise.
I think you are wrong in at least two ways. And I don’t think it’s a matter of interpretation.
1) If something is fundamental (say water molecules) then the thing that emerges from them (say waves) is emergent.
The water molecules are fundamental and the waves emergent. We don’t say the waves are fundamental.
2) Something is emergent if it has new properties that its constituents don't have. Individual water molecules don't have waves. Waves are something new that emerge from interacting water molecules. Anyhow, this stuff is pretty well defined. We've already hashed this all out many posts ago. It's not a matter of interpretation or argument, as you seem to believe.

As already explained; the "materialist paradigm" leaves a lot of room for personal interpretation. I prefer to use the term "physicalist" and to equate the physical with anything that we know exists. This includes all the familiar solids, liquids, gasses, plasmas, radiation, waves, fields, particles, whatever the case may be that anything in the universe ( including consciousness ) is composed of.

The logic behind this was well said by @marduk who pointed out that if something is interacting with our physical universe then it must also be part of our physical universe. I see no way to escape this logic. Consequently if consciousness had no way of interacting with our universe, we wouldn't know it existed. But we do. Therefore it must be physical.

It isn't necessary to explain how something does what it does to know that it does it somehow. For ages we didn't know how the Sun produced light. We just knew that it did. Eventually we figured out a lot more about it. However we still don't have it figured out on the most fundamental scale. The thing about fundamentalness is that we simply have to accept it as part of the way things are.

To add, I also don't make any claim that consciousness is fundamental. It might not be. I don't fence myself in with conditions that we haven't got sufficient evidence to be sure about. All I'm doing is proposing models that aren't already on the table or perhaps differ a little from others I've seen. Otherwise all we're doing is rehashing the same stuff over again and getting nowhere.
Your logic for monism is good. However a resolution to the mbp does not follow from this. Nor does it follow that the mbp is no longer a problem for you.

The mind might be identical with the body. Or the mind might be a fundamental physical field that exists in parallel to the body. The mind might be information processing at the neural level. Or the mind might be a field emitted by neurons. Etc.

What new models have you proposed and defended in this thread? You were keen on Searle’s notion of consciousness oozing from biological neurons at one point. Then the idea of consciousness being fundamental quanta/field at one point. Am I missing something?
 

1) I'd have to bone up on my emergence, but how it shakes out for me at this moment is that the strongest way to think about emergence is with consciousness. I could argue that as soon as you have water you have waves, so they are equally fundamental (although I take your point), and even oxygen and hydrogen probably indulge in wave-like behavior. There are other problems with emergence, but it seems harder to do without the "and then a miracle occurs" quality of emergence in the case of consciousness than in other examples.

Going way back to Nagel, he opens his paper talking about reductionism and saying why the mind body problem is different from others:

"But the problems dealt with are those common to this type of reduction and other types, and what makes the mind-body problem unique, and unlike the water-H2O problem or the Turing machine-IBM machine problem or the lightning-electrical discharge problem or the gene-DNA problem or the oak tree-hydrocarbon problem, is ignored."

That's in the first paragraph.

---------------
2) "Without some idea, therefore, of what the subjective character of experience is, we cannot know what is required of a physicalist theory."

Not knowing what is required of a physicalist theory would then seem to stand in the way of defining physicalism, which is a prior problem to assuming physicalism ... first you gotta know what you are assuming.

And then there's this:

"But when we examine their subjective character it seems that such a result is impossible. The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view."

The temptation is to dismiss this on definition - subjective is one thing, objective another - but that's what Nagel is doing, is pointing out a problem we'd otherwise never see. So even if we argue that there's no getting over this divide, the physicalist is still left with a problem:

"If physicalism is to be defended, the phenomenological features must themselves be given a physical account."

Without that, physicalism, which makes the claim that everything is physical, is incomplete at best. To the same extent that you can give a physicalist account of water or light or whatever, you must give an account of "the phenomenological features". That opens the door to McGinn's cognitive closure, of course, and may provide some other exits ... but McGinn still has to trade on faith that there is a physical explanation, albeit one the human mind congenitally cannot understand.
 
@smcder

re emergence

I agree that it’s problematic. Just as the term fundamental is problematic. The point is the terms don’t refer to the same thing.
 