Consciousness and the Paranormal — Part 12

Asking if consciousness is computable is like asking if gravity is computable. A computer can simulate a star and calculate its gravitational influence, but that will never change the weight of the computer in the process. At best, computation might be able to predict better designs for the structures that make consciousness apparent. A more relevant question might be: Is it possible to engineer consciousness, or the possibility of consciousness into a computer?
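
To make the simulation/instantiation distinction concrete, here is a minimal sketch (mine, not the poster's; the constants are standard reference values and the function name is illustrative): it computes a star's surface gravity from Newton's law, and running it changes nothing about the gravity acting on the machine itself.

```python
# Minimal sketch: computing gravity describes it, it does not exert it.
# Constants are standard textbook figures (assumption: Newtonian model).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.957e8     # solar radius, m

def surface_gravity(mass_kg: float, radius_m: float) -> float:
    """Newtonian surface gravity: g = G * M / r**2."""
    return G * mass_kg / radius_m**2

print(surface_gravity(M_SUN, R_SUN))  # ~274 m/s^2 at the Sun's surface
# Evaluating this leaves the computer's own weight unchanged; the analogy
# above is that a simulation of consciousness may likewise describe it
# without instantiating it.
```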



Hypothetically I see no reason why not. It just requires the right materials and design. Now all we have to do is figure out what those materials and designs are. But before we start playing God with machines, maybe we should be asking whether or not we should engineer consciousness into machines in the first place.

Does our knowledge of what those materials and designs are warrant a "no reason why not"? Is consciousness substrate dependent, as Searle's biological naturalism argues (and does that mean no, yes, or possibly to your hypothesis), or is substrate relevant only in terms of functional organization, as Chalmers argues:


"In any case, the conclusion is a strong one. It tells us that systems that duplicate our functional organization will be conscious even if they are made of silicon, constructed out of water-pipes, or instantiated in an entire population. The arguments in this paper can thus be seen as offering support to some of the ambitions of artificial intelligence. The arguments also make progress in constraining the principles in virtue of which consciousness depends on the physical. If successful, they show that biochemical and other non-organizational properties are at best indirectly relevant to the instantiation of experience, relevant only insofar as they play a role in determining functional organization."​
Which means a lot of things could be conscious.​
What does engineering "the possibility of consciousness" into a computer mean?
 
Chalmers argues ...
This is a preliminary response. I have not yet read the thought experiments Chalmers has offered. I will have to get to them later. In the meantime, Chalmers' premise is that "experience is invariant across systems with the same fine-grained functional organization". To get a more complete picture of what this means: Absent Qualia, Fading Qualia, Dancing Qualia

So "substrate independent" in this context doesn't mean that consciousness as we experience it is capable of coming into being without a substrate. Rather it's that whatever the substrate is, it has to meet a set of specifications that makes it functionally equivalent. On this point I would agree. The problem is that we don't yet know what exactly it is about our biological substrate that imparts this functionality. It may be the case that the 'fineness of the grains' are so fine that they correspond to the atomic structure of specific materials, and therefore no other materials will work.

WARNING: Again I cite the analogy to electromagnetism ( please forgive me ). It seems ( so far ) that we could apply the same argument there, and find ourselves constructing electromagnets with wood cores and plastic windings, and expecting them to work, which of course they wouldn't.
What does engineering "the possibility of consciousness" into a computer mean?
In this context, engineering the possibility of consciousness into a computer means identifying what fine-grained functional organization works, and then designing a computer that incorporates it. The problem ( again ) is that we don't know enough yet to be able to do that with any degree of certainty.

The obvious step in that direction would be to use neuroprocessors, but they alone may not be sufficient. Large-grained functional organization might also be required. In other words, even if you make an electromagnet's winding from a suitable conductor, and the core from a suitable material, e.g. iron, unless the winding is placed around the core, it simply won't work.

If the thought experiments cited in the article are the same ones outlined in the paper ( above ), I'll comment on them later. In the meantime, I think these sorts of questions are very good to reflect on, because I see them as orienting ourselves in the right direction, even if we aren't able to move in it much, or at all. Then again maybe I'm just a blathering phoole. Yes, given the probability that I've actually identified the direction required to explore life's greatest mystery, that's more likely to be it.
 

@USI Calgary writes: "This is a preliminary response. I have not yet read the thought experiments Chalmers has offered."

I'll wait until you've done that, but the questions were:

1. Does our knowledge of what those materials and designs are warrant a "no reason why not"?

In this context, engineering the possibility of consciousness into a computer means identifying what fine-grained functional organization works, and then designing a computer that incorporates it. The problem ( again ) is that we don't know enough yet to be able to do that with any degree of certainty.

Does: "we don't know enough yet to be able to do that with any degree of certainty" warrant a "no reason why not" hypothesis? Or, if a fine-grained organization would work such that we can design a computer program that incorporates it, wouldn't we have some idea of how to proceed?

2. Is consciousness substrate dependent, as Searle's biological naturalism argues, and does that mean no, yes, or possibly to your hypothesis?

@USI Calgary: "Is it possible to engineer consciousness, or the possibility of consciousness into a computer?

Hypothetically I see no reason why not. It just requires the right materials and design. Now all we have to do is figure out what those materials and designs are."

It seems to me we have to have some apprehension of what those materials and designs are before we can offer a "no reason why not" to the hypothesis.
 
@USI Calgary writes: "This is a preliminary response. I have not yet read the thought experiments Chalmers has offered."

I'll wait until you've done that, but the questions were:

1. Does our knowledge of what those materials and designs are warrant a "no reason why not"?
Assuming we had the knowledge of what materials and designs work together to make consciousness apparent, then such knowledge would warrant a "no reason why not". However at present, that knowledge is insufficient to be reasonably certain, other than in the case of something like reproduction, cloning, or genetic engineering.
 
It seems to me we have to have some apprehension of what those materials and designs are before we can offer a "no reason why not" to the hypothesis.

Agreed.

Randel's response to your statement is "... at present, our knowledge of what materials and designs work together to make consciousness apparent is insufficient to be reasonably certain, other than in the case of something like reproduction, cloning, or genetic engineering." Biofield research strongly suggests that, instead of altering what nature has produced in terms of sentience and consciousness in living organisms, we ought to continue to explore and comprehend the actual nature of the quantum substrate and potential additional influences in the substrate of life itself. Makes sense to me. Here are two relevant papers:

"Sentience Everywhere: Complexity Theory, Panpsychism & the Role of Sentience in Self-Organization of the Universe"
Neil D. Theise*1 & Menas C. Kafatos2
1Departments of Pathology & of Medicine, Beth Israel Medical Center, Albert Einstein College of Medicine, New York, NY 10003, USA
2 Center of Excellence in Applied, Computational & Fundamental Science, Chapman University, California 92866, USA


ABSTRACT: Philosophical understandings of consciousness divide into emergentist positions (when the universe is sufficiently organized and complex it gives rise to consciousness) vs. panpsychism (consciousness pervades the universe). A leading emergentist position derives from autopoietic theory of Maturana and Varela: to be alive is to have cognition, one component of which is sentience. Here, reflecting autopoietic theory, we define sentience as: sensing of the surrounding environment, complex processing of information that has been sensed (i.e. processing mechanisms defined by characteristics of a complex system), and generation of a response. Further, complexity theory points to all aspects of the universe comprising “systems of systems.” Bringing these themes together, we find that sentience is not limited to the living, but present throughout existence. Thus, a complexity approach shifts autopoietic theory from an emergentist to a panpsychist position and shows that sentience must be inherent in all structures of existence across all levels of scale.
Key Words: sentience, complexity theory, panpsychism, self-organization, Universe, autopoiesis.

https://www.upaya.org/uploads/pdfs/TheiseSentienceEverywhere.pdf


"Biofield Science: Current Physics Perspectives"

Menas C. Kafatos, PhD (corresponding author), Gaétan Chevalier, PhD, Deepak Chopra, MD, John Hubacher, MA, Subhash Kak, PhD, and Neil D. Theise, MD

Abstract: This article briefly reviews the biofield hypothesis and its scientific literature. Evidence for the existence of the biofield now exists, and current theoretical foundations are now being developed. A review of the biofield and related topics from the perspective of physical science is needed to identify a common body of knowledge and evaluate possible underlying principles of origin of the biofield. The properties of such a field could be based on electromagnetic fields, coherent states, biophotons, quantum and quantum-like processes, and ultimately the quantum vacuum. Given this evidence, we intend to inquire and discuss how the existence of the biofield challenges reductionist approaches and presents its own challenges regarding the origin and source of the biofield, the specific evidence for its existence, its relation to biology, and last but not least, how it may inform an integrated understanding of consciousness and the living universe.
Key Words: Biofield, quantum mechanics, physics

Biofield Science: Current Physics Perspectives
 
Neural networks are interesting, and if we knew some things about the relationship of consciousness to biological neurons, they might be much more interesting. But ANNs are an abstraction of biological networks, which are composed of semi-independent biological entities that have to do a lot more than "compute", and I assume those networks arose from independent biological entities that formed complex relationships within organisms... and perhaps the connection to consciousness has something to do with those relationships, or resulted from some evolutionary shaping of those relationships: maybe a by-product, maybe integral to brain function but not intelligence (or maybe it is). Depending on how that came out, engineering consciousness into a machine might be pretty ludicrous... or it might be an absolute necessity if we want intelligence.
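
For what it's worth, the whole abstraction fits in a few lines. Here is a minimal sketch (a standard textbook unit, not anyone's specific model; the weights and inputs are arbitrary): an artificial "neuron" keeps only a weighted sum and a nonlinearity and discards everything else a living cell does.

```python
import math

def neuron(inputs, weights, bias):
    """Textbook artificial neuron: sigmoid(w . x + b), and nothing else."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Arbitrary example values, for illustration only:
print(neuron([0.5, 0.2], [0.8, -0.4], 0.1))  # ~0.60
# Metabolism, growth, neuromodulation, the cell's history as a
# semi-independent organism: all absent from the abstraction, which is
# the point made above.
```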

McGinn says (the italics are mine):

"Now I want to marshal some reasons for thinking that consciousness is actually a rather simple natural fact; objectively, consciousness
is nothing very special. We should now be comfortable with the idea that our own sense of difficulty is a fallible guide to objective
complexity: what is hard for us to grasp may not be very fancy in itself. The grain of our thinking is not a mirror held up to the facts of nature.
In particular, it may be that the extent of our understanding of facts about the mind is not commensurate with some objective estimate of their
intrinsic complexity: we may be good at understanding the mind in some of its aspects but hopeless with respect to others, in a way that cuts across
objective differences in what the aspects involve. Thus we are adept at understanding action in terms of the folk psychology of belief and desire,
and we seem not entirely out of our depth when it comes to devising theories of language. But our understanding of how consciousness develops
from the organization of matter is non-existent.
But now, think of these various aspects of mind from the point of view of evolutionary biology. Surely language and the propositional attitudes are more complex and advanced evolutionary achievements than the mere possession of consciousness by a physical organism. Thus it seems that we are better at understanding some of the more complex aspects of mind than the simpler ones. Consciousness arises early in evolutionary history and is found right across the animal kingdom. In some respects it seems that the biological engineering required for consciousness is less fancy than that needed for certain kinds of complex motor behaviour. Yet we can come to understand the latter while drawing a total blank with respect to the former. Conscious states seem biologically quite primitive, comparatively speaking. So the theory T that explains the occurrence of consciousness in a physical world is very probably less objectively complex (by some standard) than a range of other theories that do not defy our intellects. If only we could know the psychophysical mechanism it might surprise us with its simplicity, its utter naturalness."

Interesting, if true, and it might also play to the idea that we might not find consciousness in the connections of networks, artificial or biological (because isn't that just where you would expect humans to look for it?) but rather in something simpler in terms of the function and relationship of biological or other structures...
 
OK, I'll bite. What, and where, is 'the formal description that lies outside the entity creating the same'?
Also what is the referent of 'the same'?



What 'formal' structures are you referring to? And why, 'lying within the very framework of the entity', do these structures attempt to "undermine" the same? To comprehend what you've written [never say die] it is necessary to ask what the referent of 'the same' is in this sentence. Possibilities implied by your sentence include 1) the entity; 2) the entity's 'formal' structures; and/or 3) the 'framework' (a) of the entity, or (b) of the 'formal description that lies outside the entity'?



So what, finally, is 'the engine of explanation'? And why does it 'attempt to undermine its own basis of generation'?

I have to say I suspect a good bit of 'woo' in these cryptic statements. Surely you can write more clearly than this. Try.


I think I clarified most of these cryptic statements in my later posts. This was just a "warm up" post to get my brain working. The problem is one of self-reference where "consciousness" attempts to explain itself through itself and the "formal description" is something like a chain of symbols (like this sentence) created by the very mechanisms/structures/whatever attempting to explain "itself."
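
As a concrete aside (my example, not part of the exchange above), the smallest well-known case of a chain of symbols produced by the very mechanism it describes is a quine: a program whose output is exactly its own source text.

```python
# A standard Python quine: the program prints exactly its own source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

It is only a toy, but it makes the structure of the problem visible: the describing machinery and the thing described are the same object.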
 
In response to my post to Randel on the previous page:

Further to Randel -- In response to your statement captured above:

"Simply because neuroscience has only proven correlation, doesn't mean there is no causation, or that it's safe to assume consciousness doesn't require a neurophysiological framework, or some other framework that serves the same function."

To be sure, to the best of our knowledge, consciousness occurs in living beings, thus obviously occurs in beings having 'neurophysiological frameworks'. But as is quite obvious, all living creatures' 'neurophysiological frameworks' change and adapt throughout their lifespans. Each creature develops over the course of its own lifetime, evolving, through the influence (the pressure) of its ongoing situated experiences, increasingly complex neural connections and interconnections based in its own sensed experiences. In this way, ongoing lived experience influences, indeed forges, the developing neurological characteristics of the brain.

It seems that you think of 'neurophysiology' or 'neurophysiological frameworks' as static rather than as developing and expanding. In a way you are trapped in a 'synchronic' notion of what the body/mind of any creature is at a given moment, rather than recognizing that biologically embodied life is a diachronic process, continually developing in the time/temporality of a biological being's existence.

As we know, infants in our species and others begin to experience their existence at birth {and even before birth}, but we can hardly think that their brains, their neural networks, are the same at ages 1 month, 2, 6, 12, 20, and 80 years as they were at birth. It is lived experience, both protoconscious and conscious, prereflective and reflective, that forges the neural developments of the brain in any species, and lived experience depends on the nature/qualities of the lived environments within which embodied beings and their brains develop. We and other forms of life exist in change, just as nature as a whole exists in change.

Synchronic and diachronic - Oxford Reference


I don't know whatever gave you that idea. Evolution alone would seem to be sufficient evidence that our biological systems, including our brains and nervous systems are changing over time. Even if we take evolution out of the picture, our brains undergo a lot of changes during life. Not long ago I pointed to a video about how infants can acquire perfect pitch by being exposed to complex tones and music. Maybe you just want to see me as having certain perspectives in order to accommodate your arguments. I don't know.

It's not that I "want to see you as having certain perspectives in order to accommodate [my arguments]." It's that I have frequently sensed that you do hold certain 'perspectives' [indeed presuppositions] concerning the 'consciousness-mind-brain' relation, particularly your conviction that the brain causes consciousness. That claim hasn't yet been proved, and I think it cannot be proved, especially not by experiments with 'artificial intelligence' in computers and robots.

That you seem to think such experiments can explain lived sentience and consciousness in biological species strongly suggests, though you don't say it directly, that you assume a one-for-one correspondence between the operations of artificially constructed 'neural nets' and what living beings feel and think on the basis of their temporally lived and temporally changing experience. But the living brain is never 'once and for all' a static object, much less the facilitator of a static subjectivity. And we cannot at this point deny the reality of subjectivity in ourselves and other living creatures.

I don't know how to make this clearer. Maybe someone else can. Anyway, that's why I referred you to the distinction between synchronic and diachronic approaches in various attempts to describe 'human nature'. In the mid-20th century, anthropologists and specialists in other fields generated and celebrated 'structuralism' as the path to understanding human being and being in nature, and it failed to satisfy succeeding generations of thinkers and scholars in many fields, leading in cultural studies/cultural theory to the massive cross-disciplinary influence of Derrida and other postmodernists in recognizing the existentiality and open-endedness of consciousness, thought, and behavior at the foundation of meaning in all historical/cultural settings.
 
I would like to hear more about the phenomenon you refer to above.



True (the underscored statement). But what Subhash Kak actually wrote was this:

"A conscious person is aware of what they're thinking, and has the ability to stop thinking about one thing and start thinking about another — no matter where they were in the initial train of thought. But that's impossible for a computer to do."

You and Randel appear to think that an artificial intelligence built into a computer or robot experiences 'streams of consciousness' or 'thinking' not directed or routed beforehand by their engineers, and that, like us, they can change the subject of their thinking/computing spontaneously, for one reason or another, e.g., when interrupted or distracted, or in cases in which a human decides to stop pursuing one train of thought and take up another, or go out for coffee. Interesting, if true. I'll watch the newspapers.


I think I posted what Subhash Kak stated, and what you've underlined doesn't change my analysis of the fallacy presented -- it is simply NOT true that a computer cannot stop itself. Kak misrepresents the halting problem and twists it into some kind of (il)logic pretzel -- a red herring at best. It is ALSO not true that a conscious person has the ability to stop thinking about one thing and start thinking about another -- unless you redefine "thinking" as some kind of spotlight that moves from one corner to another (metaphorically speaking, no splitting hairs here). Also, the halting problem concerns Turing machines -- not massive networks of interconnected parallel-processing computational modules, which may or may not conform to the primitives of one Turing machine running a program to determine whether another program for the same machine will halt or run forever.
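
For readers who want the technical point spelled out, here is the standard diagonalization behind the halting problem, sketched in Python (the function names are hypothetical; the decider cannot actually be implemented). Note what it does and does not prove: no single algorithm decides halting for all programs, which is a different claim from "a computer cannot stop itself."

```python
def halts(program, arg):
    """Hypothetical universal decider: True iff program(arg) halts.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError("Turing: no such total decider can exist")

def paradox(program):
    # Do the opposite of whatever the decider predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:      # decider said "halts", so loop forever
            pass
    return               # decider said "loops", so halt at once

# paradox(paradox) would halt iff it does not halt -- a contradiction,
# so halts() cannot exist. Nothing here says a machine cannot simply
# stop: any program with a `return` statement stops itself.
```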

Secondly, I don't like the term "artificial" -- intelligence either is or isn't...artificial is nothing more than our own hubris at work trying to somehow construct yet another mystical wall between ourselves (as sentient "spirits") and the "crude physical matter."
 
And the answers get funnier as you move on (yes, laughter will play a role here)

(1) What is "consciousness?"

The question as framed comes from the language processing and motor centers of the very thing that leads to you clicking a button and writing symbols that may (or may not) help others understand the foundational "what" which underlies your impulse to communicate this to another person, whom you've already assumed will have some kind of comprehension of the "what" along with the term "consciousness."

I don't think a more loaded question can be constructed than "what is 'consciousness'?" -- even the embedded quotes betray the problem in asking such a question.


I will close on one of my favorite passages from Heidegger's "Being and Time"

As a seeking, questioning needs prior guidance from what it seeks. The meaning of being must therefore already be available to us in a certain way. We intimated that we are always already involved in an understanding of being. From this grows the explicit question of the meaning of being and the tendency toward its concept. We do not know what "being" means. But already when we ask, "What is being?" we stand in an understanding of the "is" without being [sic...multiordinal usage of being in english not in german] able to determine conceptually what the "is" means. We do not even know the horizon upon which we are supposed to grasp and pin down the meaning. This average and vague understanding of being is a fact.

From "The Exposition anof the Question of the Meaning of Being: The Necessity, Structure, and Priority of the Question of Being"

(Stambaugh trans.)

I think you are misreading Heidegger, the remedy for which lies in carefully reading one or more of the clarifying scholarly texts produced by his primary expositors for the benefit of other philosophers and thinkers.

For example, when H. wrote "This average and vague understanding of being is a fact," he was referring to the mis-understanding of the existential nature and meaning of 'be-ing' at the time of his writing, a situation he wrote voluminously to correct and overcome.
 
This is a preliminary response. I have not yet read the thought experiments Chalmers has offered. I will have to get to them later. In the meantime, Chalmers' premise is that "experience is invariant across systems with the same fine-grained functional organization". To get a more complete picture of what this means: Absent Qualia, Fading Qualia, Dancing Qualia

So "substrate independent" in this context doesn't mean that consciousness as we experience it is capable of coming into being without a substrate. Rather it's that whatever the substrate is, it has to meet a set of specifications that makes it functionally equivalent.

'Equivalent' to what? To consciousness as we experience it in ourselves and others, correct?

How are 'we' to accomplish this comp sci/AI goal if we do not begin with a better understanding of (a) consciousness as we experience it, and (b) the depth and complexity of the substrate of our existence and the evolution of all other sentient, protoconscious, and conscious living organisms preceding us?

Also, what persuades you that merely 'functional equivalence' between natural and artificial brains/neural nets can touch, much less exhaust, all the capacities of human consciousness and mind?
 
I think I posted what Subhash Kak stated, and what you've underlined doesn't change my analysis of the fallacy presented -- it is simply NOT true that a computer cannot stop itself. Kak misrepresents the halting problem and twists it into some kind of (il)logic pretzel -- a red herring at best. It is ALSO not true that a conscious person has the ability to stop thinking about one thing and start thinking about another -- unless you redefine "thinking" as some kind of spotlight that moves from one corner to another (metaphorically speaking, no splitting hairs here). Also, the halting problem concerns Turing machines -- not massive networks of interconnected parallel-processing computational modules, which may or may not conform to the primitives of one Turing machine running a program to determine whether another program for the same machine will halt or run forever.

So you say, Michael. Why don't you write a paper presenting a detailed critique of the article (better yet, the longer related paper) by Subhash Kak that I've linked and see if NeuroQuantology will publish it? Or alternatively you could present such a critique here, if you prefer. I for one would need something more than your brief paragraphs of declarations and exhortations to doubt the validity of what Kak has written.

Secondly, I don't like the term "artificial" -- intelligence either is or isn't...artificial is nothing more than our own hubris at work trying to somehow construct yet another mystical wall between ourselves (as sentient "spirits") and the "crude physical matter."

Wasn't it the computer scientists and computationalists who propounded the term 'artificial intelligence' and pressed for its development over the last half-century or more? Are you telling us that all these folks now see themselves [as you apparently see yourself] as some kind of 'philosophers of mind'? Or what?
 
Indeed, because the capturing, expressing, and translation into "terms" requires a formal description that lies outside the entity creating the same. Even worse, our "formal" structures lie within the very framework attempting to "undermine" the same through "explanation."

The engine of explanation attempts to undermine its own basis of generation...

If this is true, then is it possible to create any reliable statements (including the above) on consciousness?
 

“on what is more standardly called strong illusionism, phenomenal consciousness is not real. but consciousness happens. it just doesn't involve phenomenal conscious experiences as such.

on weak illusionism, consciousness just isn't *exactly* what we think it is. it may not involve things like 'qualia', but phenomenal, conscious experiences are real. ...

so, it's complicated. which is why i advocate not getting so hung up about what exactly do we deny in illusionism for now. it may suffice to say: if consciousness is exactly as it seems to some people, then there may be metaphysically challenging stuff like qualia. but consciousness may not turn out to be *exactly* as it seems. it may be *somewhat* an illusion, at least in part. it's illusion-ish. so things may not turn out to be so challenging after all.”

If perceived reality is a construct of the brain, and the substrate of our perceptions is not quite what it seems, we’re left with what kind of realism exactly? Something other than mind and matter.
 
'Equivalent' to what? To consciousness as we experience it in ourselves and others, correct?
Equivalent to whatever the properties in the substrate are that are responsible for making consciousness apparent. This relies on the assumption that a substrate of some type is necessary ( which appears to be part of Chalmers' premise ).
How are 'we' to accomplish this comp sci/AI goal if we do not begin with a better understanding of (a) consciousness as we experience it, and (b) the depth and complexity of the substrate of our existence and the evolution of all other sentient, protoconscious, and conscious living organisms preceding us?
Those are all good places to explore, but at the same time, I doubt any normal newborn human has the faintest idea about any of that, yet I believe that they are conscious nonetheless. Understanding isn't required. Only the right set of circumstances.
Also, what persuades you that merely 'functional equivalence' between natural and artificial brains/neural nets can touch, much less exhaust, all the capacities of human consciousness and mind?
Equivalence implies that there will be no difference in whatever capacities you want to assign. It may even become the case that equivalence is exceeded, leaving humans at a disadvantage. Personally I don't think we should even be attempting to engineer conscious machines, but some giant ego someplace who wants to play God will eventually succeed in it anyway. Maybe they already have and we just don't know.
 
If this is true, then is it possible to create any reliable statements (including the above) on consciousness?
This is the general position that I have settled into. (I’m sure there’s a philosophical position that captures this sentiment.)

There are some in this discussion who seem to think we’ve got it all pretty much figured out. We just need to understand a last few bits of stuff and then we’ll have it, the TOE.

I think in practice we are a long way off from a TOE, and in principle it’s impossible. (I know that sentence doesn’t make great sense haha.)

Our perceptual, conceptual, and mathematical tools are all descriptive and terribly biased/subjective.

I’m not denying objective reality, although it may sound like it. Rather I’m expressing skepticism about humanity’s or a human’s ability to grip objective reality.

What I need to do is some reading on metaphysics. @smcder any suggestions?
 
If perceived reality is a construct of the brain, and the substrate of our perceptions is not quite what it seems, we’re left with what kind of realism exactly? Something other than mind and matter.
So far I haven't found any clear explanation for exactly what illusionism is. In addition to the link in your post, I've found two completely different versions. One is about free will, and another is about phenomenal consciousness. I'm assuming we're talking about the latter, in which case I'm not certain about how to interpret the word "illusion" with respect to mental phenomena or qualia, and I'm not certain how to interpret the word "real" in the context of Illusionism, when referring to mental phenomena or qualia. Maybe someone can help me with this?

So far, the case for illusionism seems like another word game that pits different contexts for words like "illusion" and "real" against one another in an effort to discredit one position or another, but if that's the case, then it appears to be arguing apples and oranges. This seems to be how philosophers do battle. Maybe just throwing apples and oranges at each other would be simpler? Perhaps someone would be interested in the attached PDF.

In the meantime, I found it interesting that philosophers often conflate WIL ( What It's Like ) properties with sensory qualities.
 

I think you are misreading Heidegger, the remedy for which lies in carefully reading one or more of the clarifying scholarly texts produced by his primary expositors for the benefit of other philosophers and thinkers.

For example, when H. wrote "This average and vague understanding of being is a fact," he was referring to the mis-understanding of the existential nature and meaning of 'be-ing' at the time of his writing, a situation he wrote voluminously to correct and overcome.

I think you are right -- but I also think that the "mis-understanding" is the same due to self-reference.
 
If this is true, then is it possible to create any reliable statements (including the above) on consciousness?

Depends on what you mean by "reliable." Consciousness isn't something that lies outside of physical being like some kind of all-pervasive ghost... these categories are already somehow assumed within the entire framework. This is where the "halting problem" actually gives us some kind of clue as to the real problem. We are in fact nothing more than self-reproducing-organized portions of a "whole" (what physicalists will call a "universe") looking back on itself... but this isn't even at the level of fact, because a "fact" is an entity or creation of the same self-reproducing-organizing... evolving consciousness.

An analogy is the Gödel incompleteness theorem... loosely speaking, a formal system cannot prove its own fundamentally unproven axioms; and perhaps this means that a brain with consciousness cannot prove its own foundation of existence. This is actually where the "halting problem" (i.e. sometimes reformulated stupidly as "a computer cannot stop itself from thinking") actually helps us in the realization that the "we" searching for a foundational source of "we" might be a fool's errand.

I don't really know either way--but you've at least hit the nail on the head with 15 words.
 
So you say, Michael. Why don't you write a paper presenting a detailed critique of the article (better yet, the longer related paper) by Subhash Kak that I've linked and see if NeuroQuantology will publish it? Or alternatively you could present such a critique here, if you prefer. I for one would need something more than your brief paragraphs of declarations and exhortations to doubt the validity of what Kak has written.



Wasn't it the computer scientists and computationalists who propounded the term 'artificial intelligence' and pressed for its development over the last half-century or more? Are you telling us that all these folks now see themselves [as you apparently see yourself] as some kind of 'philosophers of mind'? Or what?

Quick replies to your second paragraph:

Yes.
Perhaps.

But those same "folks" may have fallen prey to GOFAI: "Good Old-Fashioned AI."




As the late Dreyfus points out, such a point of view is a dead end. The dead end is true with respect to a single static computational unit... but such units (with the help of our own designs and plans) can be combined into vast interconnected networks on whose existence humans are now more reliant than the networks are on us. Take a microcontroller foundry creating millions of chips for governing the processes of all human "for-somethings", with less human intervention than was required 70 years ago. Not one single human can recreate any one of those billion-transistor logic-gate microcontrollers without the aid of another computer...
 