

Consciousness and the Paranormal — Part 10


There is also an article we posted about how long it would take to "evolve" an AI using genetic algorithms, how complex the environment (if it's artificial) would have to be, etc., which is along the lines I am thinking. Right now we have no idea how to design a consciousness, and part of what I am saying is that, in practice, it might not be possible to design one.
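For a sense of what that "evolving" involves, here is a minimal genetic-algorithm skeleton in Python. It is a toy sketch only: the bit-string target, population size, and mutation rate are placeholders I have chosen, and evolving anything consciousness-like would require a vastly richer genome, fitness function, and environment than this.

import random

TARGET = [1] * 20                       # stand-in "environment": fitness = bits matched
POP_SIZE, GENERATIONS, MUTATION = 50, 100, 0.02

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]            # truncation selection
    children = []
    while len(parents) + len(children) < POP_SIZE:
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(TARGET))         # one-point crossover
        child = a[:cut] + b[cut:]
        child = [1 - g if random.random() < MUTATION else g
                 for g in child]                    # point mutations
        children.append(child)
    population = parents + children

print(fitness(max(population, key=fitness)), "of", len(TARGET))

The point of the toy is how little it establishes: a perfect score here says nothing about the richness of the selective environment, which is exactly the variable the article was asking about.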

I think that even if computer scientists were somehow able to produce an open-ended, existential consciousness in an artificial frame, the 'being' that resulted would not experience existence in the world in the ways that living beings do. Its world would differ radically from ours; what 'makes sense' to us would be alien to it; we and the AI would not be able to participate in the same sense of being-in-the-world. Still, if it happens it would be interesting to see how an artificial consciousness construes the nature of the being of its artificially constructed 'world' and the nature of its own being. How much of the expression of the humanly experienced world, layered/sedimented over 50,000 years or more {and which we remain connected to, carry forward subconsciously}, could we possibly convey to an AI in language, numbers, or symbols so that it might have any appreciation at all of what it means to us to be alive and thus to care, as we do, about the survival and interests of life?
 
[quoting the post above]

I think it could be a train wreck. I think we would bear a grave moral responsibility to such a conscious being, more even than a parent to a child, and we should be prepared for all imaginable outcomes. I'm not sure we could or would do that.

I don't assume that the evolution of human life was, in the main, subjectively speaking, pleasant. The extremely high rates of mental illness show the "imperfect," highly contingent nature of the evolution of intelligence and language (I think mental health is deeply related to our various "human" capacities). I think we survived endowed with a tremendous desire for life and survival, for the interests of life, and I think that had a high evolutionary cost: we are the sole surviving human species ... so far.

I think an artificial intelligence could be far less stable, far less mentally healthy than we might expect. And we should be prepared for that.

 
Right. But what I am saying is that I understand the orthodox position to hold that the whole notion of the existence of information is contingent on there being interpretants.

I.e., environmental stimuli are commonly (sloppily) referred to as information, but it is understood that various stimuli are only information/informative to systems/organisms capable of sensing and interpreting them.

That's my understanding of the orthodox position.

Having said that, some conceptions of Shannon information may hold that information exists independently of an interpretant—but I would argue that humans are the implicit interpretants in that case.
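To make the interpretant-dependence concrete, here is a toy Python illustration (mine, not from any of the works quoted): the same physical pattern of bytes is "informative" only relative to a system equipped with a decoder for it.

signal = bytes([0x48, 0x69])           # a fixed physical pattern

print(signal.decode("ascii"))          # an ASCII interpreter reads "Hi"
print(int.from_bytes(signal, "big"))   # a numeric interpreter reads 18537
# A system with no applicable decoder extracts nothing: for it, the bytes
# are mere physical structure, not information.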
@Soupie "I understand the orthodox position to hold that the whole notion of the existence of information is contingent on there being interprants."
You say that... but...
There are countless examples then of "sloppy" talk, which is fine from scientists (who freely talk of information—as meaningful content—being moved around by a system or from one system to another), but philosophers are the first to jump down your throat if a term's use is sloppy or ill-defined. Furthermore, there are many examples of philosophers talking explicitly about information existing out there, and I have quoted directly from some seminal philosophical works on information to illustrate this. Finally, if they held the view that you propose (observer-dependent), first, they would find it hard to be so sloppy, and second, they would come to the same conclusions as me about causation.
I am sorry, but just to conclude that everyone bar none is sloppy in their use of the term (and that that is just fine and dandy) raises the question of why we find it necessary to refer to such a vacuous term in the first place.
 
Yeah, if you could flesh that out for me, that would be helpful because I don't follow.

For example, it may have taken millions of years for organism X to evolve a nervous system capable of differentiating between dozens of EM wave frequencies. We could say this ability weakly emerges over millions of years.

However, we could in principle build a silicon system capable of differentiating EM waves just as well as the evolved organism. It would have the same ability, but not the same evolutionary time frame; it could be constructed in a day, say.
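As a toy sketch of such a "built in a day" discriminator (the sample rate and band boundaries below are assumptions of mine; the point is only that the capability carries no evolutionary history):

import numpy as np

FS = 1000                                      # sample rate in Hz (assumed)
BANDS = {"low": (0, 50), "mid": (50, 200), "high": (200, 500)}

def dominant_band(signal):
    spectrum = np.abs(np.fft.rfft(signal))     # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1 / FS)
    peak = freqs[np.argmax(spectrum)]          # strongest frequency present
    return next(name for name, (lo, hi) in BANDS.items() if lo <= peak < hi)

t = np.arange(0, 1, 1 / FS)
print(dominant_band(np.sin(2 * np.pi * 120 * t)))   # -> "mid"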

From whence emerges the phenomenal consciousness? Is the suggestion that the SEH plays a direct role in an organism having consciousness?

Are we saying that evolution allows such tight coupling between organism and environment that p-consciousness strongly emerges, and that in practice such tight coupling can't be artificially captured, ergo no p-consciousness via design processes, only evolutionary processes?
imo you need to have the silicon system manipulating qualitative assimilations (which are internal mechanisms). There has to be something about the sensory complex that determines or differentiates stimuli qualitatively. I think that capability has to be biochemical in nature... it will not be practically possible to replicate the biochemistry artificially... therefore, it will not be possible to create APE or AC... but in theory, you could. That is where I am at on this atm.
 
[quoting @Pharoah]

Pharoah "The alternative provides a unified concept of information as a relation of meaning in a world of interactions where self-regulatory processes lead to increasingly complex structures that have an observer-dependent informational relation to and about the world with which they interact. *The view that information is not some thing that exists independently of the observer but is solely a function of an observer’s dynamic construction inverts the syntactic–semantic dilemma for there is no requisite translation of one to the other in the derivation of meaningful content. The observer is itself a construction that defines the environment qualitatively and quantitatively and from that position then qualifies information in those self-referential terms."

Underscored above are some additional terms and phrases that I think might be revised for greater clarity.

First, "an observer-dependent informational relation" suggests that living beings are merely passive observers of their environing worlds," but we know that living beings also move and act in their environments and thereby learn experientially, directly, about the nature of things and others they encounter. Note: I know what you're getting at, but it's the word 'observer' that masks the distinction between living organisms that perceive their surroundings by acting within them and machines that are engineered to measure some physical processes in the environment in which they are placed.

The underlined sentence that follows --

"*The view that information is not some thing that exists independently of the observer but is solely a function of an observer’s dynamic construction inverts the syntactic–semantic dilemma for there is no requisite translation of one to the other in the derivation of meaningful content."

makes sense to me [since I follow your general direction], but it would make more sense if instead of referring to "an observer's dynamic construction" you would refer to 'a living animal's dynamic construction', since it is the animal's dynamic interactions with things and others in its environment that ground further development and success in surviving and even thriving. Without making clear that it is the animal's lived experience that enables it to learn and cope in its situation, the reader loses the sense that the animal -- more than an 'observer' -- actively participates in the development of its own dynamic, interactive behaviors.

Finally, the statement that "the observer is itself a construction" is misleading because it implies that living, experiencing animals actively functioning in their environments do so on the basis of their having been 'constructed'/engineered by 'outside information', in the way we would construct a machine. That last sentence reads:

"The observer is itself a construction that defines the environment qualitatively and quantitatively and from that position then qualifies information in those self-referential terms."

Granted that the living animal is not a 'predesigned construction' but rather actively makes its way in its worldly environment, I wonder how we can say that it 'defines its environment'. Rather, it "qualitatively and quantitatively" experiences its environment from a 'self-referential perspective' embedded in its awareness of being-in {situated in} an environing world.

I hope these suggestions help toward disambiguating some of the language that I think is leading to confusion for some readers.


*Now ... all you gotta do is explain how. ;-)

I think Panksepp's prodigious research and writing are probably our best available guide to explaining 'how' animals experience their actual environments and thereby come to 'know'/understand relevant aspects of those environments [safe terrain and dangerous terrain; what can be expected from their conspecifics vs. what can be anticipated from predators, etc.]. To a great extent animals 'carve out' and adapt their ecological niches and the territories adjacent to them as they explore these territories.
 
I don't think of a mechanism as being mechanical or mechanistic
 
If you are trying to resolve the question via the preconceived assumption of silicon chips and wires, then naturally it won't seem to make sense.

It's more likely to happen on an engineered "biological" substrate.

Using nanotechnology to create parallel computers

How MIT's new biological 'computer' works, and what it could do in the future - ExtremeTech

Now, researchers from MIT have taken a step toward this possible future, with cellular machines that can perform simple computational operations and store, then recall, memory.

The input problem is resolved by simply reversing the cochlear-implant-type tech we already have, i.e., hooking up a pair of real eyeballs to the system.

http://cogsci.uci.edu/~ddhoff/HoffmanComputerConsciousness.pdf

In fact, he thinks consciousness could be explained by something called “integrated information theory,” which asserts that consciousness is a product of structures, like the brain, that can both store a large amount of information and have a critical density of interconnections between their parts.

To Koch, the theory provides a means to assess degrees of consciousness in people with brain damage, in species across the animal kingdom, and even, he says, among machines. We asked Koch about computer consciousness last week during MIT Technology Review’s EmTech conference.

Computers Could Be Conscious


I gave a lecture [last week] at MIT about Integrated Information Theory, developed by Giulio Tononi at the University of Wisconsin. This is a theory that makes a very clear prediction: it says that consciousness is a property of complex systems that have a particular “cause-effect” repertoire. They have a particular way of interacting with the world, such as the brain does, or in principle, such as a computer could. If you were to build a computer that has the same circuitry as the brain, this computer would also have consciousness associated with it. It would feel like something to be this computer. However, the same is not true for digital simulations.

If I build a perfect software model of the brain, it would never be conscious, but a specially designed machine that mimics the brain could be?

Correct. This theory clearly says that a digital simulation would not be conscious, which is strikingly different from the dominant functionalist belief of 99 percent of people at MIT or philosophers like Daniel Dennett. They all say, once you simulate everything, nothing else is required, and it’s going to be conscious.

And to borrow a quote: those who say it can't be done have to be right every single time; the scientists trying to make it happen only have to be right once.


Place your bets, folks ;)
 
[quoting the post above]

Yes, we've covered that.

And to borrow a quote: those who say it can't be done have to be right every single time; the scientists trying to make it happen only have to be right once.


No one here falls into the simple "it can't be done" category.
 
I don't want to lose sight of this paper:

Explaining Emergence: Towards an Ontology of Levels

And I was listening to this SEP article on the way home:

Mechanisms in Science (Stanford Encyclopedia of Philosophy)

Which I think will be helpful in clarifying what "mechanism"/"mechanistic" means.

I wonder if @Pharoah's understanding is Cartesian, as in the first paragraph of section 2 here:

Mechanisms in Science (Stanford Encyclopedia of Philosophy)

i.e. in terms of the conservation of inertial motion through contact action

vs.

The new mechanists inherit the word “mechanism” from these antecedents, but, in their effort to capture how the term is used in contemporary science, have distanced themselves both from the idea that mechanisms are machines and especially from the austere metaphysical world picture in which all real change involves only one or a limited set of fundamental activities or forces (cf. Andersen 2014a,b).

That gives some room for:

I don't think of a mechanism as being mechanical or mechanistic.

The three most commonly cited characterizations are next described and discussed in detail.
 
In

The Emergence of Qualitative attribution, Phenomenal experience and Being | Philosophy of Consciousness

the prefix "mecha" appears 62 times:

42 instances of "mechanisms" leaving 19 instances of "mechanism" (singular) (presumably, I didn't count) and one (1) of "mechanistic".

21 instances are "biochemical mechanisms" which is what first came to mind when @Pharoah said:

I don't think of a mechanism as being mechanical or mechanistic.

(and which, yes, I am going to try to work into as many posts as many times as I possibly can from this point forward)
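For anyone who wants to reproduce the tally, a minimal Python sketch, assuming a local plain-text copy of the paper (the filename is hypothetical):

import re

text = open("hct_paper.txt").read().lower()          # hypothetical local copy

mecha       = len(re.findall(r"mecha", text))        # 62 per the count above
plural      = len(re.findall(r"mechanisms", text))   # 42
mechanistic = len(re.findall(r"mechanistic", text))  # 1
# "mechanisms" and "mechanistic" each contain "mecha", so the singular
# count falls out by subtraction: 62 - 42 - 1 = 19
singular = mecha - plural - mechanistic
print(mecha, plural, singular, mechanistic)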

So I hope (seriously) that a non-mechanical/mechanistic definition can be found in the SEP article:

Mechanisms in Science (Stanford Encyclopedia of Philosophy)
 
Unless we hold that atoms are mechanisms ...

Philosophical atomism is a reductive argument: not only that everything is composed of atoms and void, but that nothing they compose really exists: the only things that really exist are atoms ricocheting off each other mechanistically in an otherwise empty void. Atomism stands in contrast to a substance theory wherein a prime material continuum remains qualitatively invariant under division (for example, the ratio of the four classical elements would be the same in any portion of a homogeneous material).
 
[again quoting the earlier post on engineered biological substrates and IIT]

Searle, read Searle, John ... esp. biological naturalism

(read Thomas Nagel's "What Is It Like to Be a Bat?" too)

Then David Chalmers, if you haven't (he's an Aussie)
 
I have read many of these. As I've said, the question is already resolved.

Machines can be conscious.

Anyone with a brain "knows" that.

(A hint not an insult.)
 
[quoting the post above]
  • They aren't counter-arguments.
  • That question isn't the only question.
neither is an insult ...

"I have read many of these"

Can you summarize what you've read? (and no peeking at this Wikipedia of yours ...)
 
I have an entire folder of links that relate to BCIs, synthetic intelligence, etc.

But at the end of the day, to summarize:

If a machine is complex enough, it can be conscious.

https://www.quora.com/Are-humans-only-machines

Short answer: yes.

Long answer: humans are a lot more complex than the machines we've created, especially our intelligence. It may be awhile but eventually we'll be creating machines superior to ourselves in that manner, at which point anything could happen (i.e. technological singularity).

I think that we are biological machines, yes. How much of our behaviour is free will, and how much is subject to hormones, instinct, conditioning and brain chemistry (otherwise known as "programming") is a question I ask myself a lot. I suspect that a lot of the decisions and choices that I make are influenced by those things.

Science shows us that humans are nothing but biological machines. So scientifically human is actually a machine comprised of bones, flesh and blood.


I am a conscious machine by the definition of both conscious and machine.

The question is resolved.

And while I am a complex machine, as a biological one I am also quite primitive, limited by many environmental factors (though the manufacturing process is a lot of fun).

I have had software upgrades that have made me more efficient, more complex as a processor.

No one has questioned my being conscious here; I pass the Turing test.

As for the Turing test, according to McKenna, “Intelligence is the art in the eye of the beholder. How do you know that I am not a cyborg? How do I know that you are not a cyborg? The answer is we Turing test each other unconsciously at sufficient depth to satisfy ourselves. It becomes moot, or it is becoming moot.”


In other words, if AI is a product of our imagination and creativity and it passes the Turing test, then, like the theory that consciousness creates reality, the very act of observing and believing that an AI is conscious would make it so.

Terence McKenna's cyberdelic evolution of consciousness as it relates to AI - The Sociable
 
The theory does not discriminate between squishy brains inside skulls and silicon circuits encased in titanium.

http://www.klab.caltech.edu/koch/CR/CR-Complexity-09.pdf

IIT explains why consciousness requires neither sensory input nor behavioral output, as happens every night during REM sleep, in which a central paralysis prevents the sleeper from acting out his/her dreams. All that matters for consciousness is the functional relation among the nerve cells that make up the corticothalamic complex.

Integrated information theory - Wikipedia

David Chalmers has argued that any attempt to explain consciousness in purely physical terms (i.e. to start with the laws of physics as they are currently formulated and derive the necessary and inevitable existence of consciousness) eventually runs into the so-called "hard problem". Rather than try to start from physical principles and arrive at consciousness, IIT "starts with consciousness" (accepts the existence of consciousness as certain) and reasons about the properties that a postulated physical substrate would have to have in order to account for it. The ability to perform this jump from phenomenology to mechanism rests on IIT's assumption that if a conscious experience can be fully accounted for by an underlying physical system, then the properties of the physical system must be constrained by the properties of the experience.

Specifically, IIT moves from phenomenology to mechanism by attempting to identify the essential properties of conscious experience (dubbed "axioms") and, from there, the essential properties of conscious physical systems (dubbed "postulates").
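To make the "whole versus parts" intuition concrete, here is a toy Python sketch. What it computes is total correlation (sum of marginal entropies minus joint entropy), which is emphatically not Tononi's phi (real IIT evaluates cause-effect repertoires over a minimum-information partition), but it shows in miniature how one can put a number, in bits, on how much a system's joint behavior exceeds its parts taken separately.

import math
from collections import Counter

def total_correlation(samples):
    """Sum of marginal entropies minus joint entropy, in bits."""
    n_vars, n = len(samples[0]), len(samples)
    joint = Counter(samples)
    h_joint = -sum(c / n * math.log2(c / n) for c in joint.values())
    h_marginals = 0.0
    for i in range(n_vars):
        marg = Counter(s[i] for s in samples)
        h_marginals += -sum(c / n * math.log2(c / n) for c in marg.values())
    return h_marginals - h_joint

independent = [(0, 0), (0, 1), (1, 0), (1, 1)]   # parts tell the whole story
coupled     = [(0, 0), (0, 0), (1, 1), (1, 1)]   # second node copies the first
print(total_correlation(independent))            # 0.0 bits: no integration
print(total_correlation(coupled))                # 1.0 bit: integrated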

And to answer Chalmers, to resolve his distinction between the easy and hard problems: the answer is simply complexity, processing power. Given the right processing power, the "hard" problems become easy.

We are complex machines; as such, the "hard problem" is resolved. We are conscious.
Ergo: a sufficiently complex machine is conscious.

The existence of a "hard problem" is controversial and has been disputed by philosophers such as Daniel Dennett[4] and cognitive neuroscientists such as Stanislas Dehaene.[5] Clinical neurologist and skeptic Steven Novella has dismissed it as "the hard non-problem".[6]
 