
Consciousness and the Paranormal — Part 9

Isn't the obvious answer that a mind is a subjective process while hurricanes and airplanes are objective processes?

Therefore the argument would be:

Given that objective (physical) process X possesses a mind, a functionally isomorphic objective process X1 will also possess a mind.

Before anyone dismisses the above argument, they would have to explain why and how physical process X has a corresponding mind.

Since no one can do this, the above argument can't be dismissed.

You could argue that simulated hurricanes can destroy simulated trailer parks, or that simulated kidneys get simulated keyboards wet.
 
p. 21

"Physical structures, in addition to falling under a physical description, can take on a functional description in the course of a functional analysis. But, whereas physical descriptions apply to structures in virtue of their physical type, functional descriptions classify by virtue of functional contributions to a system. Multiple realizability follows as a consequence of the fact that structures that differ in physical description may play the same contributing role in the systems of which they are parts. Yet, even if we are to accept Fodor’s suggestion that the mind is functional in the sense of being defined by a goal or purpose, there remains a gap between this assumption and the conclusion that minds are multiply realizable. Fodor may be right that minds or mental states ought to be identified by their function, but this does not entail that minds or mental states are multiply realizable in physical kinds. The problem is this. Whereas there are many cases in which objects of distinct physical description may bear the same functional description, there are also cases in which a functional description may apply only to a single kind of physical object. For instance, suppose one wishes to build a machine that drills for oil through surfaces composed of extremely hard minerals. Drill bit is presumably a functional kind. Like valve lifter, it appears that one can speak of drill bits without taking on a commitment to any particular physical description of the kinds of things that can do the job of a drill bit. However, if diamonds are the only substance that in fact are hard enough to drill through very hard surfaces, then drill bit picks out a physical kind no less than it refers to a functional kind."
 
"As an alternative example, suppose one wishes to build a solar cell like the kind that powers a hand calculator. To build a solar cell, one needs a substance that turns light into electricity and in which it is possible to control the flow of electrons. Existing solar cells consist of two types of silicon—n-type silicon (“n” for negatively charged) in which some of the silicon atoms have an extra electron, and p-type silicon (“p” for positively charged) in which some of the silicon atoms lack an electron. By layering n-type silicon on top of p-type silicon and exposing the resulting lattice to light, it is possible to create an electrical current. The light frees electrons, which move from the n-type silicon to the calculator, back into the p-type silicon and then up into the n-type silicon, and so on. The kind solar cell appears to be a functional kind—it is defined as that which turns light into a controlled electrical current. But suppose that n- and p-type silicon are the only substances that exhibit the properties necessary for the construction of a solar cell. If such were the case, then “solar cell,” which appears to be a functional kind, also picks out a particular type of physical structure, namely, a lattice of n- and p-type silicon atoms.

The point of these examples is not to argue that the mind, construed in the functional sense that Fodor develops, is not multiply realizable, but rather to show that adoption of a functional perspective toward the mind does not entail that the mind is multiply realizable. Perhaps it is; perhaps it is not. Whether it is requires empirical investigation of a sort that puts philosophers on the sidelines."
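A loose software analogy may help here (my own illustration, not from Shapiro's paper): treat a functional kind as an interface, and physical kinds as the concrete types that can satisfy it. Whether the kind is multiply realizable then turns on how many concrete types actually meet the functional spec. The hardness values below are approximate Mohs numbers used only for the sketch.

```python
# Sketch: a functional kind ("drill bit") and its candidate realizers.
# Approximate Mohs hardness values, for illustration only.
MOHS_HARDNESS = {"diamond": 10, "titanium_carbide": 9, "steel": 4.5}

class DrillBit:
    def __init__(self, material):
        self.material = material

    def can_drill(self, surface_hardness):
        # A bit can cut only surfaces softer than itself.
        return MOHS_HARDNESS[self.material] > surface_hardness

# "Drill bit" in general is multiply realized: many materials satisfy
# the functional description "drills soft rock" (hardness 3).
soft_realizers = [m for m in MOHS_HARDNESS if DrillBit(m).can_drill(3)]

# But "drill bit for extremely hard surfaces" (hardness 9.5) may pick
# out exactly one physical kind -- Shapiro's diamond case.
hard_realizers = [m for m in MOHS_HARDNESS if DrillBit(m).can_drill(9.5)]

print(soft_realizers)  # all three materials qualify
print(hard_realizers)  # only 'diamond' qualifies
```

The same functional vocabulary yields many realizers in one case and a single physical kind in the other, which is exactly why adopting a functional description does not by itself settle multiple realizability.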
 
 
A kidney that has "wetness" as observed from our ridiculously limited point-of-view may be no more "wet" than the quarks, hadrons and leptons that make up this "wetness."

At least our ridiculously limited pov isn't so ridiculously limited that we don't even know it's ridiculously limited ... maybe that's why we don't just give up?
 
The point of these examples is not to argue that the mind, construed in the functional sense that Fodor develops, is not multiply realizable, but rather to show that adoption of a functional perspective toward the mind does not entail that the mind is multiply realizable. Perhaps it is; perhaps it is not. Whether it is requires empirical investigation of a sort that puts philosophers on the sidelines."
Excellent stuff smcder.

If I follow, the above is saying that in principle the mind may be MR, but some functional relations may be one of a kind (OOAK).

A diamond drill bit and ultra hard substrate may be one such ooak relation.

Thus, if the human mind is contingent on one or more ooak functional relations, then minds like humans will not be MR.

One question then: if this turns out to be the case, does it follow that no minds are MR, or is it just human minds?

For example, diamond drill bit = human mind. There can be no other diamond drill bit thus no other MR human mind.

However, there can be titanium drill bits. Titanium drill bits may not realize a human mind, but might they realize some other mind?

In other words, if there are ooak relations, do they just rule out human minds being MR, or do they rule out minds being MR?

While I agree that empirical research is critical here, the problem with minds is that they are very stubborn about showing up in X-rays, MRI scanners, and microscopes.
 
Also, whether correct or not, the mainstream view is that minds (some would say consciousness but not I) are realized on the scale of neural networks/brain waves.

Is there reason to believe that functional relations between neurons/ neural networks are ooak?
 

Keep reading as he examines the empirical evidence at the end of the paper - this is only one chapter of a book, apparently; I hope the rest is available.

Note that what he sets out to do is:

In the end, I hope to have diminished significantly the support for a conception of MRT that falls somewhere between weak and standard MRT.


Weak MRT: At least some creatures that are not exactly like us in their physical composition can have minds like ours.

SETI MRT: Some creatures that are significantly different from us in their physical composition can have minds like ours.

Standard MRT: Systems of indefinitely (perhaps infinitely) many physical compositions can have minds like ours.

Radical MRT: Any (every) suitably organized system, regardless of its physical composition, can have minds like ours. Polger notes that standard and radical MRT seem to ...
 

mental constraint thesis (MCT). According to MCT, humanlike minds are not multiply realizable, or, at least, many humanlike mental capacities are not multiply realizable.
 
Evidence supporting MCT?

If the structure of minds is contingent on functional relations, then MR functional relations will realize minds with the same structure.

MCT would need to provide evidence that neural relations are ooak.
 
http://homepages.uc.edu/~polgertw/Polger-ShapiroTICS.pdf

"The very coherence of a substantial research program in artificial intelligence requires that brains are not the only way to build minds. And it is the alleged multiple realizability of minds that seems to justify cognitive psychology as an autonomous science in its own right, able to pursue its explanatory goals without waiting on progress in the neurosciences. Conceptions of minds as essentially intentional, representational, computational, or information processing systems all suppose that one can understand minds without knowing much about the stuff of which they are made. That is, they free the sciences of the mind from the sciences of the brain by taking for granted that there are many ways to construct a mind."
 

Consider two alternatives: either the physical constraints on minds are few or none (the multiple realization thesis) or else they are many (the mental constraint thesis). Clearly these alternatives stand near opposing ends of a spectrum of possibilities. What we’d like to know is where the truth about human minds falls on that spectrum.

Now we can make some predictions. If the constraints on minds are few or none, then from facts about minds we should not be able to predict much or anything about the physical structures that realize them. After all, minds could just as well be realized by many other structures. On the other hand, if we can make predictions about the realizers of minds, this suggests that there are some substantial constraints on how minds are built. Shapiro argues that we can in fact make interesting predictions about brains and bodies from facts about minds. This shows that minds are not as multiply realizable as many have supposed, and therefore that we ought to favor the mental constraint thesis.
 
http://homepages.uc.edu/~polgertw/Polger-EvidenceMR.pdf

"The belief that mental states are multiply realized is now nearly universal among philosophers concerned with psychology and neuroscience, as is the belief that this fact decisively refutes the identity (brain state) theory. Yet the empirical support for multiple realization does not justify the confidence that has been placed in it."

Putnam's definitive arguments for MRT date to the 1960s; this paper is from 2009. From what I can gather, arguments against MRT are quite new and gaining in number and strength; Shapiro's 2004 book led the way.
 
The Mind Incarnate

"While contemporary philosophers no longer view the mind as a supernatural entity—the famous "Ghost in the Machine" dogma that Gilbert Ryle ridiculed over fifty years ago—Shapiro argues that naturalistic approaches to understanding the mind retain their own naturalized varieties of ghosts.

In particular, the multiple realizability thesis holds that the connection between human minds and human brains is in some sense accidental: the tie between mental properties and neural properties is not physically necessary. According to the separability thesis, the mind is a largely autonomous component residing in the body that contributes little to its functioning. Shapiro tests these hypotheses against two rivals, the mental constraint thesis and the embodied mind thesis. Collecting evidence from a variety of sources (e.g., neuroscience, evolutionary theory, and embodied cognition) he concludes that the multiple realizability thesis, accepted by most philosophers as a virtual truism, is much less obvious than commonly assumed, and that there is even stronger reason to give up the separability thesis.

In contrast to views of mind that tempt us to see the mind as simply being resident in a brain or body, Shapiro's view is a far more encompassing integration of mind, brain, and body than philosophers have supposed."
 
Also, whether correct or not, the mainstream view is that minds (some would say consciousness but not I) are realized on the scale of neural networks/brain waves.

'The mainstream view' among which population of thinkers and researchers in which fields of inquiry?

Some would say consciousness but not I.

The irony is that computer science and its development of the project to build 'artificial intelligence' was itself a (perhaps the) primary motivation for scientific investigations of consciousness in our time, which soon generated the interdisciplinary field of Consciousness Studies, which we have been exploring here for more than two years. In their desire to achieve consciousness in computers, AI researchers themselves implicitly recognized consciousness as a prerequisite for 'mind'. After thirty years of development of interdisciplinary consciousness studies, we understand considerably more about consciousness than we did at the outset. In my view, the more we have learned about consciousness the less we are justified in equating minds with brains (i.e., neural nets).
 
from the post above

Shapiro argues that naturalistic approaches to understanding the mind retain their own naturalized varieties of ghosts.

Does intelligence require a body?: The growing discipline of embodied cognition suggests that to understand the world, we must experience the world

"Does the body have an impact on how we think? Seen from the practical perspective of artificial intelligence, it boils down to the question:

  • does the shape of the machine matter when trying to model intelligence in said machine?
Such a question would have been greeted with ridicule in the 1950s or 1960s, the early heydays of artificial intelligence. The prevalent thinking of the time was that cognition involved the manipulation of abstract symbols following explicit rules. Information about the physical world could be translated into symbols and processed according to formal logic [5,6,7,8]. “Cognition was described as a chess play,” explained Giulio Sandini, Director of Research at the Italian Institute of Technology and Professor of Bioengineering at the University of Genoa, Italy. As such, because symbol processing is abstract, it is independent of a platform. Scientists therefore claimed that cognition is similar to computation: minds run on brains as software runs on computer hardware [7,9]. In his book, Artificial Intelligence: The Very Idea (1985), John Haugeland (1945–2010), Professor Emeritus and Chair of the Philosophy Department at the University of Chicago, coined the term ‘GOFAI'—‘good old-fashioned artificial intelligence'—to describe this approach [1]."
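The GOFAI picture the quoted passage describes, cognition as explicit rules applied to abstract symbols, indifferent to the hardware running them, can be sketched as a toy forward-chaining inference (my own example, not from the article):

```python
# Toy GOFAI-style inference: world knowledge as symbolic tuples,
# cognition as rule-governed symbol manipulation.

# Facts: block "b" sits on block "a", which sits on the table.
facts = {("b", "on", "a"), ("a", "on", "table")}

def forward_chain(on_facts):
    # Rule 1: on(X, Y) => above(X, Y)
    above = {(x, y) for (x, rel, y) in on_facts if rel == "on"}
    # Rule 2 (transitivity): above(X, Y) and above(Y, Z) => above(X, Z)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(above):
            for (y2, z) in list(above):
                if y == y2 and (x, z) not in above:
                    above.add((x, z))
                    changed = True
    return above

above = forward_chain(facts)
print(sorted(above))  # ('b', 'table') is derived, not given
```

Nothing in the program depends on what it runs on, which is precisely the platform-independence that made "minds run on brains as software runs on hardware" seem so natural.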

At this point, H. Dreyfus made his Heideggerian critique of GOFAI in What Computers Can't Do, and after some of his criticism was taken on board, he wrote What Computers Still Can't Do.

“Once you are caught up in this Cartesian world view that thinking is algorithms or a computer programme, it is enormously difficult to free yourself from that. It just seems so obvious: there is input, processing, output—how else could it be?” commented Rolf Pfeifer, Professor of Computer Science at the Department of Informatics, University of Zurich and Director of the University's Artificial Intelligence Laboratory.
 
Does intelligence require a body?: The growing discipline of embodied cognition suggests that to understand the world, we must experience the world

However, there is a lot more to human intelligence than knowing about objects. Humans write and read articles, think about politics, conduct experiments and ponder their own existence or the meaning of the universe. We usually conceive of intelligence as something that goes beyond acting and sensing. Yet, it seems that our bodies remain intimately involved. Einstein conceived his theory of general relativity as he imagined that he was travelling along a light beam at the speed of light.

In the 1980s and 1990s, George Lakoff and Mark Johnson argued that even abstract thought is rooted in our experience of interacting with the world [19,20]. According to Lakoff and Johnson, there is a ground stock of concepts to which other concepts can refer and these basic concepts are related to our body and how we move in space. Indeed, this idea is reflected in our language and our use of metaphors. When we plan a project, for example, we have a goal somewhere ‘ahead of us' [5]; we equate being happy with being ‘up', or being sad with being ‘down' or ‘depressed'. With a different body, Lakoff and Johnson argue, our concept of happiness would be a different one [7].
 
@smcder

I know you're just the messenger and that you may or may not subscribe to the thesis of embodied cognition.

But I have some questions that I wonder if you can answer:

What metaphysical category would embodied cognitivists fall into, if you had to guess? Are they physical monists? Is EC similar to, or the same as, Identity Theory, on which the mind just is the body?

Do ECs distinguish between consciousness and mind? Or do they believe consciousness and mind emerge together?

At what scale do ECs theorize that minds emerge? The molecular, cellular, multi-cellular level?

Do they have an answer for what is essentially the combination problem? If ECs hold that minds emerge at the cellular level, how do they suppose that all these individual cellular minds combine into one subjective pov?
 