

Consciousness and the Paranormal — Part 5

...Basically what I am saying is that we cannot allow our own concepts of our own systems of understanding to take over for their own existence (definition). If to understand consciousness is to destroy it (I don't know this)...then our attempts may be repeatedly thwarted by memes that are captured and released over thousands...if not millions...of years.
Point taken. I tend to think that this is where there is a gap in @Soupie's take on consciousness as information. Information is about something or another rather than the thing itself. Therefore consciousness isn't information itself as much as it is an information experiencer. How fine grained the information is depends on the nature of the language used to impart the information. So language and information are essentially synonymous, differing only in that information is a generic term and language is the form in which it is expressed ( e.g. speech, imagery, digital ). But this does not mean that information doesn't play an important role. Unless we can find a way of detecting consciousness that can be translated into sharable information, how will we be able to discern whether or not something like an AI is conscious? So there is a very practical application in looking at how information theory can be applied to this problem.
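To put a toe in the water on that last point: the basic quantity information theory would contribute to any such detection problem is Shannon entropy. Here is a minimal sketch in plain Python (my own toy illustration; the function name and examples are mine, not drawn from any consciousness-detection literature):

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Shannon entropy in bits per symbol: how much 'information'
    a stream of discrete signals carries, on average."""
    counts = Counter(symbols)
    total = len(symbols)
    return sum((n / total) * log2(total / n) for n in counts.values())

# A stream that never varies carries no information; a varied one does.
print(shannon_entropy("aaaaaaaa"))  # 0.0 bits/symbol
print(shannon_entropy("abcdabcd"))  # 2.0 bits/symbol
```

Whatever "detecting consciousness" would ultimately require, a minimal necessary condition on this view is that the system's reports carry non-zero information in this sense.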
 
I'm not insulting anyone; just throwing a different viewpoint into the mix.

Complexity manifests from simplicity in the same way that thoughts arise within the space of conscious awareness (anyone who doubts the latter point need only gain a rudimentary familiarity with meditation to have it confirmed).

Increasing complexity in an attempt to understand or apprehend the essence of simplicity is a futile approach, as is trying to arrive at the root of consciousness by generating more and more branches of thought.

It might be an interesting or stimulating exercise, but it is searching in the wrong direction entirely.
Agree 100% @TheBitterOne
This group is like an oil painter who is forever not sure whether to go over the same bits using different colours. As I said to @ufology when he first started contributing, trying to nail this group down on a topic is like arm-wrestling a slug; you think you are just about to succeed and it goes splat... It is good though... a diverse and very knowledgeable group with open minds. Bring your opinions into the mix and they will generate considered feedback.
 
This paper by Velmans -- a chapter from R. Banerjee and B.K. Chakrabarti (eds.), Models of Brain and Mind: Physical, Computational and Psychological Approaches (Oxford: Elsevier) -- provides an explanation of why this thread, and other discussions of consciousness in the published literature, continually circle around the same unresolved issues in consciousness studies:

Velmans, HOW TO SEPARATE CONCEPTUAL ISSUES FROM EMPIRICAL ONES IN THE STUDY OF CONSCIOUSNESS

Extract:

. . . "Given that we can get an empirical handle on such investigations, it is sometimes assumed in the consciousness studies literature that these problems are entirely empirical—and even that all the problems of consciousness will eventually be resolved in this way.

But it is easy to show that this is not so. One might think for example that Problem 1, the nature and location of consciousness, should be easy to resolve, as we all have access to and information about our own consciousness. However both its nature and location are much disputed in the literature—and the same may be said about the enduring puzzles and disputes surrounding the causal relationships between consciousness and the brain (Problem 2). Although empirical progress can be made with many questions (of the kind listed above) without first settling such disputes, we cannot in the end ignore them, for the simple reason that pre-theoretical assumptions, theories, and empirical problems interconnect. How, for example, could one arrive at an agreed understanding of the neural correlates of consciousness without an agreed understanding of what consciousness is—and without an understanding of how consciousness could have causal effects on matter, how can one determine its function in the workings of the brain? . . . .

The Nunez book I linked above {Brain, Mind and the Structure of Reality} deals with the same issues in additional neuroscientifically based detail.
 
Agree 100% @TheBitterOne
This group is like an oil painter who is forever not sure whether to go over the same bits using different colours. As I said to @ufology when he first started contributing, trying to nail this group down on a topic is like arm-wrestling a slug; you think you are just about to succeed and it goes splat... It is good though... a diverse and very knowledgeable group with open minds. Bring your opinions into the mix and they will generate considered feedback.

"trying to nail this group down on a topic is like arm-wrestling a slug; you think you are just about to succeed and it goes splat..."

Do you feel you've succeeded and the group hasn't acknowledged or recognized it?

Or do you mean staying on/focusing on one topic?

 
Interesting extract from that Velmans paper:

"As noted in Velmans (1991a) a process can be said to be “conscious” (a) in the sense that one is conscious of the process
(b) in the sense that the operation of the process is accompanied by consciousness (of its results) and
(c) in the sense that consciousness enters into or causally influences the process.

Why does this matter? It is only sense (c) that is relevant to claims that consciousness has a third-person causal or functional role—and, crucially, one cannot assume a process to be conscious in sense (c) on the grounds that it is conscious in senses (a) or (b). Sense (a) is also very different to sense (b). Sense (a) has to do with what experiences represent. Normal conscious states are always about something, that is they provide information to those who have them about the external world, body or mind/brain itself. Some mental processes (problem solving, thinking, planning, etc) can be said to be partially “conscious” (in this sense) in so far as their detailed operations are accessible to introspection. Sense (b) contrasts different forms of mental processing. Some forms of mental processing result in conscious experiences, while others do not. For example, analysis of stimuli in attended channels usually results in a conscious experience of those stimuli, but not in non-attended channels. Theories that attribute a third-person causal role to consciousness solely on the basis of functional contrasts between “conscious” and preconscious or unconscious processes invariably conflate these distinctions. They either take it for granted that if a process is conscious in sense (a) or sense (b) then it must be conscious in sense (c). Or they simply redefine consciousness to be a form of processing, such as focal attention, information in a “limited capacity channel,” a “central executive,” a “global workspace” and so on, thereby begging the question about the functional role of conscious phenomenology in the economy of the mind."

{ASIDE to @Soupie: This is why I suggested in our earliest conversation here concerning Tononi's Integrated Information Theory that the subconscious would likely present an intractable problem for his theory.}


Velmans continues at this point to a section entitled:

Further problems with conscious causation . . .

http://cogprints.org/5380/1/Conceptual_vs_emprical_issues.pdf

{^at page 8}


Further note: Velmans' term "the economy of the mind" points to the complexity of reflective consciousness and the prereflective and subconscious sources of 'information' on which it is based.
 
Point taken. I tend to think that this is where there is a gap in @Soupie's take on consciousness as information. Information is about something or another rather than the thing itself. Therefore consciousness isn't information itself as much as it is an information experiencer. How fine grained the information is depends on the nature of the language used to impart the information. So language and information are essentially synonymous, differing only in that information is a generic term and language is the form in which it is expressed ( e.g. speech, imagery, digital ). But this does not mean that information doesn't play an important role. Unless we can find a way of detecting consciousness that can be translated into sharable information, how will we be able to discern whether or not something like an AI is conscious? So there is a very practical application in looking at how information theory can be applied to this problem.
There is a problem with the concept of information when it is being applied to consciousness studies. This stems from its popular use and from communications theory. This flawed view has it that information is a commodity insofar as it can be carried, processed, transferred etc i.e. it is external to the observer. The problem with this concept is how "meaning" can be carried, transferred, processed i.e. how does an agency know how to get meaning from 'information' transferred to it, for example. How does information (for example in a neural impulse) inform? This problem besets computationalism and existing representationist theories.
The alternative does not have it that information exists out there waiting to be decoded for its meaningful content by the requisite agency. Rather, an agency is an information construct by virtue of the validity of its environmental correspondence, where correspondence that is accurate or valid increases the likelihood of the continuance of that informational construct.
So, for example, something in the environment is not 'red' and decoded by our brains as "redness".
Rather, something is red because our construct—our physiological makeup and neural mechanisms—has needed the feeling of "redness" because this qualitative correspondence has benefitted the construct's (that being the human physiology) survival.
 
"trying to nail this group down on a topic is like arm-wrestling a slug; you think you are just about to succeed and it goes splat..."

Do you feel you've succeeded and the group hasn't acknowledged or recognized it?

Or do you mean staying on/focusing on one topic?

I haven't succeeded in anything, but I have learnt a lot and improved my work, so... that's a result.
I am saying the group is interested in diversification, not in unity... it actively does not seek answers.
 
Information is about something or another rather than the thing itself.

[Consciousness] is about something or another rather than the thing itself.

Therefore consciousness isn't information itself as much as it is an information experiencer.
No. Consciousness does not experience; consciousness is experience. It is the physical organism that is the experiencer.

How fine grained the information is depends on the nature of the language used to impart the information. So language and information are essentially synonymous, differing only in that information is a generic term and language is the form in which it is expressed ( e.g. speech, imagery, digital ).
Speech, imagery, digital... or perhaps neural oscillations within the brain-system.

Information on the other hand is non-material. It's an abstract idea. It only has existence within the context of a system that is able to identify information, and that is not the same as simply operating on an instruction set. For example a digital weigh scale operates on an instruction set and provides a readout, but it has no comprehension of the idea that it is providing information about weight ( another abstract concept ).
A human organism is certainly no digital weigh scale operating on an instruction set.

On the other hand, information, like consciousness, is non-material. And information, like consciousness, "only has existence within the context of a system that is able to identify information." A human organism--especially its brain--would be a great candidate for a system able to identify information--information in the form of oscillating neurons.

For example an intelligent system could probably be designed to detect a range of environmental conditions and operate a series of controls to maintain specified environmental tolerances, but it wouldn't necessarily be able to experience being either hot or cold.
Unless the system is able to self-report--like humans--there is currently no way for us to know whether an information processing system is having experiences.

According to IIT, a model of the fundamental nature of consciousness (supported by some heavyweights such as Christof Koch, which I add so that the model is not simply dismissed out of hand), conscious experience is a property of "nodes containing information and causally influencing other nodes." This conclusion was reached by researching the various regions of the brain most closely associated with consciousness. It's a fascinating theory. As noted, I was very pleased to see it noted several times in the excellent article @smcder posted: http://www.pnas.org/content/110/Supplement_2/10357.abstract

At this point in time, if a system processes information by way of nodes containing information and causally influencing other nodes, I don't think we can rule out the possibility that it (the system) is having experience.

Furthermore that experience cannot be reduced to ones and zeroes or any particular configuration of materials without losing the hotness or the coldness of the experience.
Hot and cold can be considered specific qualities of consciousness. I agree that these qualities cannot be objectively reduced to ones and zeros (or oscillating neurons) without losing their subjective meaning. To observe the system reveals oscillating neurons; to be the system is to experience hot and cold.
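FWIW, the "nodes containing information and causally influencing other nodes" formula can be given a toy numerical reading. The sketch below is emphatically not Tononi's Φ (which minimizes over all partitions of a system); it just uses mutual information to show how causal influence between nodes shows up as shared information, while an uncoupled node contributes none. Everything here is my own illustrative construction:

```python
from collections import Counter
from math import log2
import random

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy 'network': node B copies node A's previous state (causal influence);
# node C flips a coin independently (no influence on or from A).
random.seed(0)
a = [random.randint(0, 1) for _ in range(10000)]
b = [0] + a[:-1]                     # B(t) = A(t-1)
c = [random.randint(0, 1) for _ in range(10000)]

print(mutual_information(list(zip(a[:-1], b[1:]))))  # ~1 bit: A drives B
print(mutual_information(list(zip(a, c))))           # ~0 bits: no influence
```

Roughly, IIT's Φ asks how much information the whole system generates beyond what its least-integrated partition would; this toy only exhibits the pairwise ingredient of that idea.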
 
[Consciousness] is about something or another rather than the thing itself.


No. Consciousness does not experience; consciousness is experience. It is the physical organism that is the experiencer.


Speech, imagery, digital... or perhaps neural oscillations within the brain-system.


A human organism is certainly no digital weigh scale operating on an instruction set.

On the other hand, information, like consciousness, is non-material. And information, like consciousness, "only has existence within the context of a system that is able to identify information." A human organism--especially its brain--would be a great candidate for a system able to identify information--information in the form of oscillating neurons.


Unless the system is able to self-report--like humans--there is currently no way for us to know whether an information processing system is having experiences.

According to IIT, a model of the fundamental nature of consciousness (supported by some heavyweights such as Christof Koch, which I add so that the model is not simply dismissed out of hand), conscious experience is a property of "nodes containing information and causally influencing other nodes." This conclusion was reached by researching the various regions of the brain most closely associated with consciousness. It's a fascinating theory. As noted, I was very pleased to see it noted several times in the excellent article @smcder posted: http://www.pnas.org/content/110/Supplement_2/10357.abstract

At this point in time, if a system processes information by way of nodes containing information and causally influencing other nodes, I don't think we can rule out the possibility that it (the system) is having experience.


Hot and cold can be considered specific qualities of consciousness. I agree that these qualities cannot be objectively reduced to ones and zeros (or oscillating neurons) without losing their subjective meaning. To observe the system reveals oscillating neurons; to be the system is to experience hot and cold.

Soupie, would you edit that post to indicate the sources of the quoted extracts you are responding to? I opened Ufology's post to see the sum of what he had said, but it does not include all the extracted statements you are responding to. Thank you ahead of time.
 
According to IIT, a model of the fundamental nature of consciousness (supported by some heavyweights such as Christof Koch, which I add so that the model is not simply dismissed out of hand), conscious experience is a property of "nodes containing information and causally influencing other nodes." This conclusion was reached by researching the various regions of the brain most closely associated with consciousness. It's a fascinating theory. As noted, I was very pleased to see it noted several times in the excellent article @smcder posted: Evolution of consciousness: Phylogeny, ontogeny, and emergence from general anesthesia

I have yet to get back to and complete that paper. Do the authors support Tononi's theory as adequate to account for consciousness?


At this point in time, if a system processes information by way of nodes containing information and causally influencing other nodes, I don't think we can rule out the possibility that it (the system) is having experience.

The system being the brain? Or _____? Are you (and Tononi in and since version 3 of IIT) claiming that it is the brain that experiences the 'world'?
 
I haven't succeeded in anything, but I have learnt a lot and improved my work, so... that's a result.
I am saying the group is interested in diversification, not in unity... it actively does not seek answers.

Nay, Pharoah. It seems to me that we are interested in the diverse phenomena experienced by consciousness and in its unity, or, perhaps better, its unification of experience as its 'own'. We have been seeking answers to how to understand these and other aspects of consciousness all along.
 
There is a problem with the concept of information when it is being applied to consciousness studies. This stems from its popular use and from communications theory. This flawed view has it that information is a commodity insofar as it can be carried, processed, transferred etc i.e. it is external to the observer. The problem with this concept is how "meaning" can be carried, transferred, processed i.e. how does an agency know how to get meaning from 'information' transferred to it, for example. How does information (for example in a neural impulse) inform? This problem besets computationalism and existing representationist theories.

That's well expressed.

The alternative does not have it that information exists out there waiting to be decoded for its meaningful content by the requisite agency. Rather, an agency is an information construct by virtue of the validity of its environmental correspondence, where correspondence that is accurate or valid increases the likelihood of the continuance of that informational construct.

That works well enough in terms of 'information' (of many types) exchanged by living organisms and their natural environments, ecological niches, etc. But I don't think the attempt to describe human beings (and perhaps some other higher animals) as 'informational constructs' can succeed given, in humans at least, the difference between the natural world and the cultural worlds in which humans work out their existences and their understandings of their existence.

So, for example, something in the environment is not 'red' and decoded by our brains as "redness".
Rather, something is red because our construct—our physiological makeup and neural mechanisms—has needed the feeling of "redness" because this qualitative correspondence has benefitted the construct's (that being the human physiology) survival.

The above hypothesis is not persuasive for the reason I've given.
 
That's well expressed.

That works well enough in terms of 'information' (of many types) exchanged by living organisms and their natural environments, ecological niches, etc. But I don't think the attempt to describe human beings (and perhaps some other higher animals) as 'informational constructs' can succeed given, in humans at least, the difference between the natural world and the cultural worlds in which humans work out their existences and their understandings of their existence.

The above hypothesis is not persuasive for the reason I've given.
@Constance you say,
"That works well enough in terms of 'information' (of many types) exchanged by living organisms and their natural environments, ecological niches, etc. But I don't think the attempt to describe human beings (and perhaps some other higher animals) as 'informational constructs' can succeed given, in humans at least, the difference between the natural world and the cultural worlds in which humans work out their existences and their understandings of their existence."

But it does work. Because with human beings we are not just talking about physiological constructs that are informational (informational in terms of their accurate—and therefore valid—correspondence with environment). The construct we are talking about with humans is conceptual. Conceptual constructs are 'just as real' as physiological ones in that they correspond with the environment and are valid; but noting that, like physiological constructs, they do adjust and accommodate to the environment in a responsive, accurate way.
So... our beliefs are a construct that corresponds with the real interactive world and they are very much embedded in our sense of culture, community, society, self etc.
 
@Constance you say,
"That works well enough in terms of 'information' (of many types) exchanged by living organisms and their natural environments, ecological niches, etc. But I don't think the attempt to describe human beings (and perhaps some other higher animals) as 'informational constructs' can succeed given, in humans at least, the difference between the natural world and the cultural worlds in which humans work out their existences and their understandings of their existence."

But it does work. Because with human beings we are not just talking about physiological constructs that are informational (informational in terms of their accurate—and therefore valid—correspondence with environment). The construct we are talking about with humans is conceptual. Conceptual constructs are 'just as real' as physiological ones in that they correspond with the environment and are valid; but noting that, like physiological constructs, they do adjust and accommodate to the environment in a responsive, accurate way.
So... our beliefs are a construct that corresponds with the real interactive world and they are very much embedded in our sense of culture, community, society, self etc.

Certainly 'conceptual constructs' are real -- they have real effects in the sociopolitical world we live in on this planet -- but they are not uniformly rational (as you seem to suggest) or even commensurable. Their effects are innumerable, producing social injustice, deadly conflict, exploitation, poverty, the destruction of the planetary ecology, and manifold human misery. How then can we understand ourselves as 'conceptual constructs' that "correspond with the environment and are valid"?

Perhaps I misunderstand what you meant to say.
 
Consciousness does not experience; consciousness is experience. It is the physical organism that is the experiencer.
We've been through this before. If I recall correctly, I used the analogy of a book. The book can be thought of as the story and the story can be thought of as the book, and either way it's the book that contains the story. So it's not a one-or-the-other situation. There are contextual nuances. You can certainly look at consciousness as being an experiential state, in which case there is no real delineation between consciousness and experience, or you can look at it as a possessive, that part of the experiencer that is doing the experiencing, as in "a person's consciousness". I also think a lot of confusion is caused by the suffix "ness" and the stack of issues that get piled on after that.
Unless the system is able to self-report--like humans--there is currently no way for us to know whether an information processing system is having experiences.
I tend to agree, but at the same time, humans have brain waves that correspond to conscious states and experiences that can be traced to certain brain patterns. So I would think that we can probably determine from a number of readouts whether or not a person is conscious, whether he or she self-reports or not. Perhaps someday something similar for AI might also be possible.
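To make the "readouts" idea concrete: a standard signal on such readouts is band power, e.g. alpha-range (8-12 Hz) activity versus slower delta rhythms. Here is a toy sketch in plain Python with synthetic signals (the "awake"/"deep" labels and the specific numbers are illustrative stand-ins of mine, not clinical criteria):

```python
from math import sin, cos, pi

def power_at(signal, fs, freq):
    """Signal power at a single frequency via one DFT bin (plain Python)."""
    n = len(signal)
    re = sum(x * cos(2 * pi * freq * k / fs) for k, x in enumerate(signal))
    im = sum(x * sin(2 * pi * freq * k / fs) for k, x in enumerate(signal))
    return (re * re + im * im) / n

fs = 256                                     # sampling rate, Hz
t = [k / fs for k in range(4 * fs)]          # 4 seconds of samples
awake = [sin(2 * pi * 10 * s) for s in t]    # 10 Hz alpha-like rhythm
deep  = [sin(2 * pi * 2 * s) for s in t]     # 2 Hz delta-like rhythm

# 10 Hz power cleanly separates the two synthetic 'states'.
print(power_at(awake, fs, 10) > power_at(deep, fs, 10))  # True
```

Real EEG classification is of course far messier than two clean sinusoids, but the principle (distinguishing states from spectral readouts rather than self-report) is the same.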
According to IIT, a model of the fundamental nature of consciousness (supported by some heavyweights such as Christof Koch, which I add so that the model is not simply dismissed out of hand), conscious experience is a property of "nodes containing information and causally influencing other nodes." This conclusion was reached by researching the various regions of the brain most closely associated with consciousness. It's a fascinating theory. As noted, I was very pleased to see it noted several times in the excellent article @smcder posted: Evolution of consciousness: Phylogeny, ontogeny, and emergence from general anesthesia
I don't have much of a problem with that, but the question in my mind is exactly how this "property" emerges from this arrangement of things. I noticed that in an earlier post you seemed to have written off the idea that the structure of consciousness might be in the form of some kind of field. I think it's much too soon for that. Until we know how the "property" of consciousness is imparted to the mind, I don't think it's safe to assume that just any particular type of node will do the job. We can liken this to the creation of an EM field. Not just any kind of wire wrapped around any kind of core will work. So a silicon or quartz node just might not be able to do the trick. Or maybe it will do it even better. We just don't know enough about that yet.
At this point in time, if a system processes information by way of nodes containing information and causally influencing other nodes, I don't think we can rule out the possibility that it (the system) is having experience.
Given the concern in my previous segment, I don't think we can say such a system is having experience either. At one time I was more in the camp that it probably would be having experience, now I'm really doubting my former position. In other words, I suspect that the emergence of consciousness isn't simply due to nodes and information, but what the nodes are made of, how they're laid out in relation to other nodes, and the mechanism of communication between nodes. What does seem to be a virtual certainty is that a system of some kind is required, and that tends to put a damper on some paranormal theories.
Hot and cold can be considered specific qualities of consciousness. I agree that these qualities cannot be objectively reduced to ones and zeros (or oscillating neurons) without losing their subjective meaning. To observe the system reveals oscillating neurons; to be the system is to experience hot and cold.
Exactly ... That's assuming the system can experience hot and cold and isn't simply nodes and info.
 
I have yet to get back to and complete that paper. Do the authors support Tononi's theory as adequate to account for consciousness?




The system being the brain? Or _____? Are you (and Tononi in and since version 3 of IIT) claiming that it is the brain that experiences the 'world'?
I think I've posted this before:

Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)

A response to a challenge by Tononi.

It's worth working through the whole article, and at the bottom there are links to a response by D. Chalmers. Tononi said he would respond, but there are so many comments that I didn't find it.

There's also a link at the end of this article to a less technical article by Eric Schwitzgebel on a similar argument.

The Splintered Mind: Why Tononi Should Think That the United States Is Conscious

 
Certainly 'conceptual constructs' are real -- they have real effects in the sociopolitical world we live in on this planet -- but they are not uniformly rational (as you seem to suggest) or even commensurable. Their effects are innumerable, producing social injustice, deadly conflict, exploitation, poverty, the destruction of the planetary ecology, and manifold human misery. How then can we understand ourselves as 'conceptual constructs' that "correspond with the environment and are valid"?

Perhaps I misunderstand what you meant to say.
The requisite of a conceptual construct is not that it is rational—that view of rationality would be the orthodoxy. Rather, the requisite of a conceptual construct is that it maintains stability, by which I mean that it must not comprise conflicting constructive parts (which would, at the extreme, instigate a neurosis or other mental coping mechanism that would seem highly irrational). That is why changing someone's views is not a question of demonstrating the rationality of your own opinion, but of presenting the concepts in a way that does not destabilize their conceptual world-view, i.e., the concept has to be incorporated somehow into the other. In fact, rationality is but a side-effect of conceptual reasoning, because conceptual reasoning is primarily about the 'valid' (i.e. TJB) consolidation of conceptual stabilities.
In society, belief structures are predicated on conceptual constructions shared by a community. When these present clashing stances through a forced integration (e.g. mass migration, war, cultural movements, etc.), they—the constructs—can act like two colliding billiard balls, i.e., they resist one another and the resistance causes fragmentation, collateral damage, violence, instabilities, etc. In this way, conceptual constructs are substantial.
So, yes, rationality is a consequence of stabilizing concepts because concepts, if they are to be valid, need to correspond with reality, and reality has a rationality to it (i.e. it is not random).
 
The requisite of a conceptual construct is not that it is rational—that view of rationality would be the orthodoxy. Rather, the requisite of a conceptual construct is that it maintains stability, by which I mean that it must not comprise conflicting constructive parts (which would, at the extreme, instigate a neurosis or other mental coping mechanism that would seem highly irrational). That is why changing someone's views is not a question of demonstrating the rationality of your own opinion, but of presenting the concepts in a way that does not destabilize their conceptual world-view, i.e., the concept has to be incorporated somehow into the other. In fact, rationality is but a side-effect of conceptual reasoning, because conceptual reasoning is primarily about the 'valid' (i.e. TJB) consolidation of conceptual stabilities.
In society, belief structures are predicated on conceptual constructions shared by a community. When these present clashing stances through a forced integration (e.g. mass migration, war, cultural movements, etc.), they—the constructs—can act like two colliding billiard balls, i.e., they resist one another and the resistance causes fragmentation, collateral damage, violence, instabilities, etc. In this way, conceptual constructs are substantial.
So, yes, rationality is a consequence of stabilizing concepts because concepts, if they are to be valid, need to correspond with reality, and reality has a rationality to it (i.e. it is not random).

Rationality is a consequence of stabilizing concepts, but not all stabilizing concepts are _____ I want to put "rational" in the blank, except you say: "which would, at the extreme, instigate a neurosis or other mental coping mechanism that would seem highly irrational" ... it would "seem" highly irrational but would in fact serve some purpose? Help me untangle that.

This feels something like Mercier and Sperber's argumentative theory of reasoning ... I know very little about it, except that it says we evolved reason in order to win arguments, to give reasons ... and it explains cognitive biases, and why we are so much better at defending our pre-existing views than at evaluating new ideas objectively.

I still struggle with the basic mechanism of HCT, which is to explain everything in terms of a hierarchy of stabilizing constructs ... it seems necessary but not sufficient, and it doesn't seem much different from the orthodox view of evolution, in the sense that things that could replicate with change showed up and were then sorted by environmental contingencies, and the rest, as they say, is history. And again, see Stephen Jay Gould's Full House: The Spread of Excellence from Plato to Darwin for an argument against the idea that evolution leads inevitably to increasing complexity (although I recall we had that discussion).
 
I read an interesting paper tonight, linked in a forum at ResearchGate, that might be relevant at this point:

"Proposal for an evolutionary approach to self-consciousness"
Christophe Menant
(Feb 8th 2014)

Abstract

It is pretty obvious to most of us that self-consciousness is a product of evolution. But its nature is unknown. We propose here a scenario
addressing a possible evolutionary nature of self-consciousness covering the segment linking pre-human primates to humans. The scenario is based on evolutions of representations and of inter-subjectivity that could have taken place within the minds of our pre-human ancestors. We begin by situating self-consciousness relative to other aspects of human consciousness. With the help of anthropology, we date a possible starting point of our scenario at a time when our non-self-conscious pre-human ancestors were able to build meaningful representations and were capable of inter-subjectivity, as our modern apes are today.

As the proposed scenario is based on an evolution of representations, we recall an existing model for meaningful representations. When our ancestors reached the capability to identify with their conspecifics, they were carrying the two types of meaningful representations presented in the previous paragraph: an auto-representation and representations of conspecifics. Identification with conspecifics brought the auto-representation and the representations of conspecifics to progressively become about a same entity. As a consequence, the two representations tended to merge their contents, and the meanings of the one became available to the other. By this process the auto-representation became able to access a characteristic of the representation of conspecifics: being about an entity existing in the environment. This brought our ancestors to slowly access the possibility to represent themselves as existing entities, like the conspecifics were. We consider that such identification with conspecifics has introduced in the mind of our ancestors an elementary and embryonic sense of being an existing entity that we name ‘ancestral self-consciousness’.

The same process has also imposed on our ancestors an identification with suffering or endangered conspecifics, which produced an important increase in anxiety that could have blocked the evolutionary process. We propose that the performances developed by our ancestors to manage that anxiety increase also generated significant evolutionary advantages that helped the development of ancestral self-consciousness and favored its evolution toward our full-fledged self-consciousness. It is also proposed that some pre-human primates avoided the anxiety increase by finding a niche where evolutionary advantages were not necessary. This may have led to today's apes. The contribution of anxiety to the proposed scenario positions anxiety management as having guided the evolution of self-consciousness and as still being a key player in today's human minds.

Regarding philosophy of mind, possible links between phenomenal consciousness and the proposed nature of self-consciousness are introduced. The conclusion presents a summary of the points addressed here. Possible continuations are highlighted as related to the human mind, to anxiety management, and to artificial intelligence.

Keywords: self-consciousness, pre-human, meaningful representation, auto-representation, intersubjectivity, evolution, anxiety, evolutionary engine, ancestral self-consciousness, primitive self-consciousness, pre-reflective self-consciousness, phenomenal consciousness.

Proposal for an evolutionary approach to self-consciousness (Feb 8th 2014)
 