
Consciousness and the Paranormal — Part 6

"Observer-dependent" yields several results starting on page 11. For your view to be more obvious, perhaps start with an introduction to your view on observer-independence, define what it is, and follow it with examples, rather than starting with the illustrative example and then saying: "From this position, we can highlight a subtle distinction ..."

How about simply: "An Objective-Subjective Bridge"

If you haven't already viewed this video ( posted earlier ) perhaps check it out. Of particular relevance is around 27:25: "... those features of the world that exist independent of our feelings and attitudes; I'll call those observer-independent, from those features of the world that are observer relative ..." and it carries on from there:

@ufology Yes this (Searle 27'25") is how the reviewer is understanding the observer-dependence/independence distinction. That is probably the way these terms are conceptually understood in philosophy and it is my fault for not clearly debunking their stance and slotting in my own. I thought I made clear in my paper... clearly not:
I argue that any agency (A) that is a unified physical construct (be it living or non-living), reacts to the world (B) in a way that is 'observer-dependent', i.e. its own dynamic construction determines what any given external influence's informational effect and content is going to be following interaction.
Thus, 'red' light has an informational effect on human perception whilst 'gamma' radiation does not, because of the dynamic construction of the physical 'agency' that (somehow) constitutes the human conscious experience. The experience of these wavelengths is observer-dependent. Red does not have an informationally independent subjective quality any more than gamma does. By extension, I say that no physical entity has independent informational content by which we might call it "a fact of that physical entity, that has this or that property of existence". Instead, I say that it is the observing agency (be it living or non-living) that, by virtue of its dynamic construction, determines the 'nature of the fact' of the observed physical entity. Thus, when an agency has the necessary dynamic construction, it is then capable of identifying its own subjective ontology in existence purely in virtue of the nature of its own agency's construction—in a period prior to the creation of that agency's construction and following its death, there is no physical existence experienced for that agency as a fact of reality—because all of reality, as experienced, is observer-dependent.
If one were to put this panpsychically, one would say that all physical entities have an experience of the world as evidenced by their reaction to it, but that only when it comes to the dynamic construction that is the human body and brain, does the agency identify and name that observer-dependent ontology as belonging to its observer-dependent perspective.
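
If it helps, here is a toy sketch of the red/gamma point (entirely my own illustration; the receptor curve, names and numbers are made up): a stimulus "carries information" only for an agency whose own construction responds to it.

```python
import math

def sensitivity(peak_nm, width_nm, wavelength_nm):
    # Gaussian-shaped receptor response; shape and numbers are invented.
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def informational_effect(agent, wavelength_nm, threshold=0.01):
    # A stimulus has an informational effect for this agent only if it
    # perturbs the agent's own construction above its response threshold.
    response = sensitivity(agent["peak_nm"], agent["width_nm"], wavelength_nm)
    return response if response > threshold else 0.0

human_eye = {"peak_nm": 565.0, "width_nm": 60.0}   # loosely modelled on a long-wavelength cone
print(informational_effect(human_eye, 650.0))      # red light: non-zero informational effect
print(informational_effect(human_eye, 0.001))      # gamma radiation: 0.0, no effect for this agent
```

Nothing in the sketch treats information as a property of the wavelength itself; what counts as informative falls out of the agent's own response curve, which is the whole point.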

I'll just slot this into the paper somewhere... lol
 
A clarifying paper concerning the developing integration between phenomenological philosophy of mind and cognitive neuroscience ~~~

"The Uroboros of Consciousness: Between the Naturalisation of Phenomenology and the Phenomenologisation of Nature"
Sebastjan Vörös • University of Ljubljana, Slovenia

Abstract

> Context • The burgeoning field of consciousness studies has recently witnessed a revival of first-person approaches based on phenomenology in general and Husserlian phenomenology in particular. However, the attempts to introduce phenomenological methods into cognitive science have raised serious doubts as to the feasibility of such projects. Much of the current debate has revolved around the issue of the naturalisation of phenomenology, i.e., of the possibility of integrating phenomenology into the naturalistic paradigm. Significantly less attention has been devoted to the complementary process of the phenomenologisation of nature, i.e., of a (potentially radical) transformation of the theoretical and existential underpinnings of the naturalist framework.
> Problem • The aim of this article is twofold. First, it provides a general overview of the resurgence of first-person methodologies in cognitive sciences, with a special emphasis on a circular process of naturalising phenomenology and phenomenologising nature. Secondly, it tries to elucidate what theoretical (conceptual) and practical (existential) implications phenomenological approaches might have for the current understanding of nature and consciousness.
> Results • It is argued that, in order for the integration of phenomenological and scientific approaches to prove successful, it is not enough merely to provide a firm naturalistic grounding for phenomenology. An equally, if not even more important, process of phenomenological contextualisation of science must also be considered, which might have far-reaching implications for its theoretical underpinnings (move from disembodied to embodied models) and our existential stance towards nature and consciousness (cultivation of a non-dual way of being).
> Implications • The broader theoretical framework brought about by the circular exchange between natural sciences and phenomenology can contribute to a more holistic conception of science, one that is in accord with the cybernetic idea of second-order science and based on a close interconnection between (abstract) reflection and (lived) experience.
> Constructivist content • The (re)introduction of first-person approaches into cognitive science and consciousness studies evokes the fundamental circularity that is characteristic of second-order cybernetics. It provides a rich framework for a dialogue between science and lived experience, where scientific endeavour merges with the underlying existential structures, while the latter remains reflectively open to scientific findings and proposals.
> Key words • Cognitive science, phenomenology, first-person approaches, naturalisation, phenomenologisation, lived experience, non-dualism.

Extract:

". . . Not until recently did the idea of the systematic study of consciousness enter the “sciences of the mind.” In this regard, cognitivism and – later – connectionism, the two predominant approaches in cognitive science since its inception in the 1950s and up until the so-called “experiential turn” in the 1990s (Froese 2011; Varela, Thompson & Rosch 1991), proved to be loyal heirs to behaviourism: although daring enough to look inside the notorious mental black box, they simultaneously precluded all talk of what is happening for the black box: “To put it in a nutshell, Cognitive Science purports to say how the cognitive mind/brain works in itself and not how it comes to seem to be working for itself […]” (Petitot et al. 1999: 12). Consciousness and lived experience were brushed aside, as cognitive scientists embarked on the study of information-processing mechanisms of either a “symbolic” or “connectionist” variety.

« 4 » Yet slowly, but persistently, the question of consciousness found its way into mainstream cognitive science. This can be seen as the end result of a two-tiered process. On the one hand, several philosophers of mind have put forward a series of challenges to the predominant view of the mind as an “information-processing machine,” arguing that such a conception inevitably leaves out something crucial: the what-is-it like (Nagel 1974), qualitative (Jackson 2002) or phenomenal (Jackendoff 1987) character of consciousness. For David Chalmers (1995), the “hard problem of consciousness” boils down to “the problem of experience”: “It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.” (Chalmers 1995: 201) There is, in other words, an “explanatory gap” (Levine 2002), which separates the conscious (phenomenological) domain from the neural (physiological) domain.

« 5 » On the other hand, the “experiential turn” in cognitive science seems to have been brought about by the fruition of some of the ideas developed in second-order cybernetics. Varela, Thompson & Rosch (1991), Dupuy (2009), and Froese (2010) argue convincingly that the seeds of the central tenets of cognitivism and connectionism were already sown during the cybernetic era in the 1940s and early 1950s, a legacy that has been deliberately belittled by the cognitive science mainstream up until recently.2 Situating itself in opposition to the introspectionist movement, first-order cybernetics was an attempt to “mechanize the mind” and explain it in terms of feedback mechanisms, algorithms, and nonlinear dynamics. However, after the so-called “Ashbyian crisis” in the early 1950s, first-order cybernetics plunged into a state of turmoil and the field split into two branches, namely cognitivism and second-order cybernetics: “[T]he one was the golden boy that became the foundation of the prestigious cognitive sciences, while the other was shunned as the ugly duckling and is still struggling for recognition.” (Froese 2010: 81)

[2 | Incidentally, by omitting its cybernetic roots, Gardner, who is famous for noting that cognitive science “has a very long past but a relatively short history” (Gardner 1985: 9), actually managed to make its history even shorter.]

« 6 » Yet it was precisely this “ugly duckling,” with its turn from first-order “observed systems” to second-order “observing systems” and its emphasis on the active role of the observer and circular causality (Foerster & Glasersfeld 1999; Scott 2004), that has provided a much needed impetus for the revival of first-person approaches in the cognitive sciences. Tom Froese (2011), for instance, argues that Francisco Varela’s “experiential turn” can actually be seen as an elaboration of Heinz von Foerster’s insights into the importance of the observer with the first-person pragmatics of phenomenology.3

« 7 » The confluence of the two processes – the experience-oriented criticism internal to cognitive science and the observer-oriented impetus external to it – is what revived interest in the first-person study of experience. Thus, from the late 1980s and the early 1990s onwards, several proposals have been put forward arguing for the need to integrate studies of consciousness into mainstream cognitive science (e.g., Chalmers 1995, 1996; Flanagan 1992) and develop improved methodologies for the study of experience (e.g., Gallagher 1997; Marbach 1993; Varela, Thompson & Rosch 1991; Varela 1996a; Varela & Shear 1999). It has been suggested that the dichotomy of either “scientific (and thus unexperiential) objectivism” or “introspectionist (and thus unscientific) subjectivism” promulgated by the adherents of the classical cognitive science is false, and that first-person approaches to consciousness are not to be conflated with naïve just-take-a-look introspectionism, but must be rigorously and systematically explored. Therefore, in searching for an appropriate first-person methodology, many authors have turned to phenomenological tradition in general and Husserlian phenomenology in particular. The reason for this extraordinary, and for many a more traditionally-minded philosopher of mind almost blasphemous, alliance was twofold: first, when it comes to (disciplined) first-person approaches, (Husserlian) phenomenology is claimed to be the best game in town, and second, there seems to be a surprising correspondence between phenomenological descriptions of experiential data and recent findings in cognitive science (Petitot et al. 1999). The spectre of phenomenality, long kept at bay by the behaviourist-cum-cognitivist suspicion towards everything experiential, has been resuscitated and has set out to haunt the sciences of the mind. . . . ."

[3 | One can already detect this convergence of interests in Varela’s ideas in his (now almost legendary) paper Not one, not two (1976); for an autobiographical account of his diverse intellectual heritage, see Varela (1996b).]

http://www.univie.ac.at/constructivism/journal/articles/10/1/096.voros.pdf

{Note: to download the pdfs of the articles I've cited from Constructivist Foundations you might need to sign in to the website with your email address and a password and indicate your own particular interests from a list of disciplines.}

Constructivist E-Print Archive (CEPA)

 
Why is it necessary that consciousness replicate its environment in order for consciousness to be the viable means by which we live and act meaningfully in the environment in which we find ourselves existing?
It's not necessary, and there's ample evidence that phenomenal consciousness is not a replica of the environment.

However, consciousness (what I have referred to as Intentional Information) must inform us about the environment in a way that can successfully guide behavior.

Re your last sentence: "Our experiences of reality, it seems, must pale in comparison to its full richness and complexity,"

I would say the opposite, that the world as experienced, sensed, felt, engaged, lived, and contemplated by our species and others is where the lights turn on and the colors and sounds arise, and meaningful action and value appear in the midst of what would otherwise remain unsensed and unknown. The handful of strong and weak forces identified so far by physicists as constituting the substructure of the physical universe never ask themselves what is 'real'. Without life and consciousness there would be no questions and no answers, however partial they are in the world as consciously and temporally lived by us and other aware beings.
That is one way to look at it, yes. If there were no such thing as phenomenal consciousness within reality, reality would be the poorer.

However, I was thinking of it in terms of all the physical stimuli/energies that must swirl about us that our nervous systems are not equipped to sense and perceive. I imagine there are many wonderful physical phenomena occurring all around us that we humans are completely ignorant of but that various other organisms are well aware of, and vice versa. It's just to say that what-is is much deeper and more complex than we can currently experience and therefore fathom.
 
@Pharoah I think you will find the following article very interesting. (@Burnt State I think you'll find this very interesting as well. I think this pretty conclusively and concisely explains how our perceived reality is indeed virtual. And why novel stimuli might cause our perceptual system to go cray cray.)

See article for illustrations.

Perception and Reality: Why a Wholly Empirical Paradigm is Needed to Understand Vision

A widely accepted concept of vision in recent decades stems from studies carried out by Stephen Kuffler, David Hubel and Torsten Wiesel beginning in the 1950s (Kuffler, 1953; Hubel and Wiesel, 2005). This seminal work showed that neurons in the primary visual pathway of cats and monkeys respond to light stimuli in specific ways, implying that the detection of retinal image features plays a central role in visual perception. Based on the properties of simpler input-level cells, Hubel and Wiesel discovered that neurons in V1 respond selectively to retinal activation elicited by oriented bars of light, bars of a certain length, bars moving in different directions, and stimuli with different spectral properties. These and other findings earned Hubel and Wiesel a Nobel Prize in 1981 (Kuffler had died in 1980), and inspired a generation of scientists to pursue similar electrophysiological and neuroanatomical research in a variety of species in the ongoing effort to reveal how vision works.

A seemingly straightforward interpretation of these observations is that the visual system operates analytically, extracting features from retinal images, efficiently filtering and processing image features in a series of computational steps, and ultimately combining them to provide a close approximation of physical reality that is then used to guide behavior. This concept of visual perception is logical, accords with electrophysiological and anatomical evidence, and has the further merit of being similar to the operation of computers, providing an analogy that connects biological vision with machine vision and artificial intelligence (Marr, 1982). Finally, this interpretation concurs with the impression that we see the world more or less as it really is and behave accordingly. Indeed, to do otherwise would seem to defy common sense and insure failure.

Attractive though it is, this interpretation fails to consider an axiomatic fact about biological vision: retinal images conflate the physical properties of objects, and therefore cannot be used to recover the objective properties of the world (Figure 1). Consequently, the basic visual qualities we perceive—e.g., lightness, color, form, distance, depth and motion—cannot specify reality. ...

Although it is possible to model how neural activity in different sensory systems could be combined using Bayesian decision theory (Fetsch et al., 2013), such models cannot indicate how information about the physical world could be obtained in a way that avoids the quandary illustrated in Figure 1. Indeed, any model based on recovering or estimating real-world parameters, statistically or otherwise, will fail as a canonical explanation of visual perception (see also Jones and Love, 2011; Bowers and Davis, 2012). Biological vision must therefore depend on some other strategy that does not require accessing the real-world parameters of image sources. ...

[In other words, it has been conclusively shown that the human eye is incapable of recovering accurate, objective, real-world parameters (data) of environmental stimuli/energies. Therefore, any theory of perception which assumes it can is wrong, and any theory that assumes perceived reality is a representation (replica) of reality is wrong.]

The aim of the visual system in these approaches is assumed to be the recovery of real world properties, however imperfectly, from information in retinal stimuli. A different supposition is that since retinal images cannot specify the measurable properties of objects (see Figure 1), achieving this goal is impossible. It follows that visual perceptions must therefore arise from a strategy that does not rely on real world properties as such. In a wholly empirical conception of vision, the perceptual values we experience are determined by ordering visual qualities according to the frequency of occurrence of image patterns and how this impacts survival (Purves and Lotto, 2003, 2011; Purves et al., 2011, 2014).

In general terms, understanding this strategy is straightforward. Imagine a population of primitive organisms whose behavior is dictated by rudimentary collections of photoreceptors and associated neural connections. As stipulated by neo-Darwinian theory, the organization of both the receptors and their connections in the population is subject to small random variations in structure and function that are acted on by natural selection. Based on interactions with the environment, variations of pre-neural and neural configurations that promote survival tend to be passed down to future generations. As a result, the ranks of visual qualities an agent perceives over some evolved range (darkest-lightest, largest-smallest, fastest-slowest, etc.) reflect biological utility rather than the physically measureable properties of objects and conditions in the world. In short, the role of perceptual states is not to reveal the physical world, but to promote useful behaviors. In this scheme, the world is simply the arena in which the utility of perceptions and other behavioral responses pertinent to survival and reproduction is tested, with feedback from the environment acting as the driving force that gradually instantiates the needed circuitry (Figure 3). ...
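
[To make the empirical-ranking strategy above concrete, here is a toy sketch. The luminance history, the percentile rule, and every number are mine (hypothetical), not from Purves et al.; it only illustrates how a perceived rank can track frequency of occurrence rather than the physical value itself.]

```python
from collections import Counter

past_luminances = [3, 3, 3, 5, 5, 8, 8, 8, 8, 20, 90]   # made-up history of encounters
freq = Counter(past_luminances)
total = len(past_luminances)

def perceived_rank(luminance):
    # Percentile of this value within the agent's history of image patterns:
    # equal physical steps need not map to equal perceptual steps.
    below_or_equal = sum(n for value, n in freq.items() if value <= luminance)
    return below_or_equal / total

for lum in (3, 8, 20, 90):
    print(lum, round(perceived_rank(lum), 2))
# Physical luminances 20 and 90 differ enormously, but their empirical ranks
# are nearly the same because both are rare in this (invented) history.
```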

@Constance

Consider the above ideas with those of Maturana and Varela that you most recently shared!

7. Constructivist approaches focus on self-referential and organizationally closed systems

Such systems strive for control over their inputs rather than their outputs. Cognitive system (mind) is operationally closed. It interacts necessarily only with its own states (Maturana & Varela 1979). The nervous system is “a closed network of interacting neurons such that any change in the state of relative activity of a collection of neurons leads to a change in the state of relative activity of other or the same collection of neurons” (Winograd & Flores 1986, p. 42). This is a consequence of the neurophysiological principle of undifferentiated encoding: “The response of a nerve cell does not encode the physical nature of the agents that caused its response.” (Foerster 1973/2003, p. 293). Humberto Maturana (1978) suggests that we can compare the situation of the mind with a pilot using instruments to fly the plane. All he does is “manipulate the instruments of the plane according to a certain path of change in their readings” (p. 42). In other words, the pilot doesn’t even need to look “outside.” The enactive cognitive science paradigm expresses clearly: “...autonomous systems stand in sharp contrast to systems whose coupling with the environment is specified through input/output relations. ...the meaning of this or that interaction for a living system is not prescribed from outside but is the result of the organization and history of the system itself.” (Varela, Thompson & Rosch 1991, p.157).
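
A minimal sketch of what "operational closure" amounts to computationally (my own toy model, not Maturana and Varela's): the next state is always a function of the network's own current state, and an external event appears only as a transient perturbation of a parameter, never as the state itself.

```python
import math

def step(state, coupling, perturbation=0.0):
    # Each unit's next activity depends only on the current activities of the
    # network's own units; the perturbation merely nudges the coupling.
    k = coupling + perturbation
    return [math.tanh(k * sum(state) - s) for s in state]

state = [0.1, -0.2, 0.3]
for t in range(5):
    # The "pilot" never sees outside: an external event shows up only as a
    # transient change in coupling at t == 2.
    state = step(state, coupling=0.8, perturbation=0.3 if t == 2 else 0.0)
    print(t, [round(s, 3) for s in state])
```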
 
@ufology Yes this (Searle 27'25") is how the reviewer is understanding the observer-dependence/independence distinction. That is probably the way these terms are conceptually understood in philosophy and it is my fault for not clearly debunking their stance and slotting in my own. I thought I made clear in my paper... clearly not:
I argue that any agency (A) that is a unified physical construct (be it living or non-living), reacts to the world (B) in a way that is 'observer-dependent', i.e. its own dynamic construction determines what any given external influence's informational effect and content is going to be following interaction.
That's what I got from your draft too. However if you don't mind a suggestion. To avoid confusion with other notions of "observers" and "effects" I would suggest that you discard the phrase "observer-dependent" and use the term "autologous", e.g. "I argue that any agency (A) that is a unified physical construct (be it living or non-living), reacts to the world (B) in a way that is autologous." Normally used in medicine, it means "derived from the patient's ( or subject's ) own body.", but is easily adaptable to a philosophical context to include other facets that are "of the subject" be they "of the body" or something other e.g. the mind.
Thus, 'red' light has an informational effect on human perception whilst 'gamma' radiation does not, because of the dynamic construction of the physical 'agency' that (somehow) constitutes the human conscious experience. The experience of these wavelengths is observer-dependent. Red does not have an informationally independent subjective quality any more than gamma does. By extension, I say that no physical entity has independent informational content by which we might call it "a fact of that physical entity, that has this or that property of existence".
I don't see the relevance of the extension. It would seem to be self-evident. It seems to be the same as saying something doesn't have anything which it cannot possess. But maybe it's just as well that you impress the point for some people. I dunno. Minor quibble.
Instead, I say that it is the observing agency (be it living or non-living) that, by virtue of its dynamic construction, determines the 'nature of the fact' of the observed physical entity. Thus, when an agency has the necessary dynamic construction, it is then capable of identifying its own subjective ontology in existence purely in virtue of the nature of its own agency's construction—in a period prior to the creation of that agency's construction and following its death, there is no physical existence experienced for that agency as a fact of reality—because all of reality, as experienced, is observer-dependent.
No argument there. And again I suggest that the word autologous would continue to fit well.
If one were to put this panpsychically, one would say that all physical entities have an experience of the world as evidenced by their reaction to it, but that only when it comes to the dynamic construction that is the human body and brain, does the agency identify and name that observer-dependent ontology as belonging to its observer-dependent perspective.

I'll just slot this into the paper somewhere... lol

Now if you could clarify for me how this helps to create an objective-subjective bridge. The phrase "observer-dependent", or as I suggest "autologous", is pretty much synonymous with "subjective".

I'll leave you with my own brief thought on it. Using the bridge analogy, the objective and the subjective are simply the two ends of the same bridge, which are ultimately connected to a common foundation. One can elaborate more if they muse about it, e.g. perhaps it's a kind of drawbridge, where when the sides come down and connect, raw stimuli become types of traffic ( information ) about the objective side that flow into the subjective side. One could probably imagine all kinds of other neat little details like toll gates and special lanes that act like filters that create certain types of information out of the raw stimuli. It would make a fun YouTube animation project :) .
 
@Constance

Consider the above ideas with those of Maturana and Varela that you most recently shared!

7. Constructivist approaches focus on self-referential and organizationally closed systems

Such systems strive for control over their inputs rather than their outputs. Cognitive system (mind) is operationally closed. It interacts necessarily only with its own states (Maturana & Varela 1979). The nervous system is “a closed network of interacting neurons such that any change in the state of relative activity of a collection of neurons leads to a change in the state of relative activity of other or the same collection of neurons” (Winograd & Flores 1986, p. 42). This is a consequence of the neurophysiological principle of undifferentiated encoding: “The response of a nerve cell does not encode the physical nature of the agents that caused its response.” (Foerster 1973/2003, p. 293). Humberto Maturana (1978) suggests that we can compare the situation of the mind with a pilot using instruments to fly the plane. All he does is “manipulate the instruments of the plane according to a certain path of change in their readings” (p. 42). In other words, the pilot doesn’t even need to look “outside.” The enactive cognitive science paradigm expresses clearly: “...autonomous systems stand in sharp contrast to systems whose coupling with the environment is specified through input/output relations. ...the meaning of this or that interaction for a living system is not prescribed from outside but is the result of the organization and history of the system itself.” (Varela, Thompson & Rosch 1991, p.157).

Soupie, that paragraph from the constructivist* source I copied a day or two ago hardly expresses, in its truncated two-sentence quotation, the ideas of Varela, Thompson & Rosch 1991 (The Embodied Mind: Cognitive Science and Human Experience) or the thought of Varela and Thompson as a whole. If you go to page 157 of the Google Books sample of the book you can read the whole paragraph from which the constructivist author quoted fragments and I think get a better grasp of what they were saying. The quasi-quotation from Maturana is also misleading.

But since you like that brief quotation from The Embodied Mind, I have to ask you whether you've yet had time to read that book as a whole (the necessary context for understanding that sentence). It's a long time since I've read it (which was in drafts online during the period of its writing), and I need to read it again. The book is essential to recognizing the distinctions among emergentist, connectionist, and enactivist theories of consciousness (the latter being the major work of Varela and Thompson). I just read a very instructive review of The Embodied Mind written by a computer scientist at Yale, which I recommend as prefatory reading before reading the book:

https://www.cise.ufl.edu/~anand/pdf/airevhd.pdf

I'd post some extracts from it (especially a section that will interest you on color vision beginning at page 8), but unfortunately the text can't be copied in a readable form.


*As I noted in a previous post, the author of that brief overview of constructivist premises acknowledges that piece is written with a "broad brush." The paragraph you've cited was the major reason why I commented that constructivism seems, in this summary of its premises, to lack philosophical depth. But there are years' worth of the journal Constructivist Foundations available online to correct that impression for anyone wanting to invest time in pursuing constructivism as it has taken shape.
 
@Pharoah I think you will find the following article very interesting. (@Burnt State I think you'll find this very interesting as well. I think this pretty conclusively and concisely explains how our perceived reality is indeed virtual. And why novel stimuli might cause our perceptual system to go cray cray.)

See article for illustrations.

Perception and Reality: Why a Wholly Empirical Paradigm is Needed to Understand Vision

A widely accepted concept of vision in recent decades stems from studies carried out by Stephen Kuffler, David Hubel and Torsten Wiesel beginning in the 1950s (Kuffler, 1953; Hubel and Wiesel, 2005). This seminal work showed that neurons in the primary visual pathway of cats and monkeys respond to light stimuli in specific ways, implying that the detection of retinal image features plays a central role in visual perception. Based on the properties of simpler input-level cells, Hubel and Wiesel discovered that neurons in V1 respond selectively to retinal activation elicited by oriented bars of light, bars of a certain length, bars moving in different directions, and stimuli with different spectral properties. These and other findings earned Hubel and Wiesel a Nobel Prize in 1981 (Kuffler had died in 1980), and inspired a generation of scientists to pursue similar electrophysiological and neuroanatomical research in a variety of species in the ongoing effort to reveal how vision works.

A seemingly straightforward interpretation of these observations is that the visual system operates analytically, extracting features from retinal images, efficiently filtering and processing image features in a series of computational steps, and ultimately combining them to provide a close approximation of physical reality that is then used to guide behavior. This concept of visual perception is logical, accords with electrophysiological and anatomical evidence, and has the further merit of being similar to the operation of computers, providing an analogy that connects biological vision with machine vision and artificial intelligence (Marr, 1982). Finally, this interpretation concurs with the impression that we see the world more or less as it really is and behave accordingly. Indeed, to do otherwise would seem to defy common sense and insure failure.

Attractive though it is, this interpretation fails to consider an axiomatic fact about biological vision: retinal images conflate the physical properties of objects, and therefore cannot be used to recover the objective properties of the world (Figure 1). Consequently, the basic visual qualities we perceive—e.g., lightness, color, form, distance, depth and motion—cannot specify reality. ...

Although it is possible to model how neural activity in different sensory systems could be combined using Bayesian decision theory (Fetsch et al., 2013), such models cannot indicate how information about the physical world could be obtained in a way that avoids the quandary illustrated in Figure 1. Indeed, any model based on recovering or estimating real-world parameters, statistically or otherwise, will fail as a canonical explanation of visual perception (see also Jones and Love, 2011; Bowers and Davis, 2012). Biological vision must therefore depend on some other strategy that does not require accessing the real-world parameters of image sources. ...

[In other words, it has been conclusively shown that the human eye is incapable of recovering accurate, objective, real-world parameters (data) of environmental stimuli/energies. Therefore, any theory of perception which assumes it can is wrong, and any theory that assumes perceived reality is a representation (replica) of reality is wrong.]

The aim of the visual system in these approaches is assumed to be the recovery of real world properties, however imperfectly, from information in retinal stimuli. A different supposition is that since retinal images cannot specify the measurable properties of objects (see Figure 1), achieving this goal is impossible. It follows that visual perceptions must therefore arise from a strategy that does not rely on real world properties as such. In a wholly empirical conception of vision, the perceptual values we experience are determined by ordering visual qualities according to the frequency of occurrence of image patterns and how this impacts survival (Purves and Lotto, 2003, 2011; Purves et al., 2011, 2014).

In general terms, understanding this strategy is straightforward. Imagine a population of primitive organisms whose behavior is dictated by rudimentary collections of photoreceptors and associated neural connections. As stipulated by neo-Darwinian theory, the organization of both the receptors and their connections in the population is subject to small random variations in structure and function that are acted on by natural selection. Based on interactions with the environment, variations of pre-neural and neural configurations that promote survival tend to be passed down to future generations. As a result, the ranks of visual qualities an agent perceives over some evolved range (darkest-lightest, largest-smallest, fastest-slowest, etc.) reflect biological utility rather than the physically measureable properties of objects and conditions in the world. In short, the role of perceptual states is not to reveal the physical world, but to promote useful behaviors. In this scheme, the world is simply the arena in which the utility of perceptions and other behavioral responses pertinent to survival and reproduction is tested, with feedback from the environment acting as the driving force that gradually instantiates the needed circuitry (Figure 3). ...

@Constance

Consider the above ideas with those of Maturana and Varela that you most recently shared!

7. Constructivist approaches focus on self-referential and organizationally closed systems

Such systems strive for control over their inputs rather than their outputs. Cognitive system (mind) is operationally closed. It interacts necessarily only with its own states (Maturana & Varela 1979). The nervous system is “a closed network of interacting neurons such that any change in the state of relative activity of a collection of neurons leads to a change in the state of relative activity of other or the same collection of neurons” (Winograd & Flores 1986, p. 42). This is a consequence of the neurophysiological principle of undifferentiated encoding: “The response of a nerve cell does not encode the physical nature of the agents that caused its response.” (Foerster 1973/2003, p. 293). Humberto Maturana (1978) suggests that we can compare the situation of the mind with a pilot using instruments to fly the plane. All he does is “manipulate the instruments of the plane according to a certain path of change in their readings” (p. 42). In other words, the pilot doesn’t even need to look “outside.” The enactive cognitive science paradigm expresses clearly: “...autonomous systems stand in sharp contrast to systems whose coupling with the environment is specified through input/output relations. ...the meaning of this or that interaction for a living system is not prescribed from outside but is the result of the organization and history of the system itself.” (Varela, Thompson & Rosch 1991, p.157).
@Soupie
Yes. This is excellent empirical support for my theorising. All your underlined sections are on the pulse. Thanks. It would be great to quote some of it in my paper... trouble is I'm at the word limit as it is.
@ufology
Thanks for the feedback: really useful.
1. Sometimes changing terms is very beneficial to allay confusion. I will have a think about your suggestion re. autologous.

2. you say, "I don't see the relevance of the extension. It would seem to be self-evident."
Self-evident?? If you look at my draft paper on information (recently linked #404), you will be puzzled as to why virtually nobody thinks about information in this manner. Alternatively, I am misunderstanding what you mean here.
To clarify, the orthodoxy is that information is a commodity that exists out there in the world, and that can be transferred, modified, transmitted etc. The extension above ("By extension, I say that no physical entity has independent informational content by which we might call it "a fact of that physical entity, that has this or that property of existence") states that nothing is informational in and of itself. The meaning of informational content is dictated by the nature of the observing agency's dynamic construction. Thus information is meaning attributed by an agency to an external entity that interacts with and thereby has an impact on it.
This is why the phenomenal experience of redness is observer-dependent: redness does not exist in the world as a qualitative phenomenon. Consequently, there is no representation of worldly redness as such (this is where representational theories get it wrong). Where does the representation come into it? — The qualitative nature of experience is a representation of the comparative relevancies and merit of environmental interactions. A species' physiologies determine the relative significance of one experience over another and in this way the world becomes one of meaning that is informed—phenomenally: in terms of its qualitative merits.

3. re: objective–subjective bridge
The Searle video recently linked (26'10") might help here:
We start from the premise that information is not a commodity out there in the world but that it is determined by the nature of the dynamic construction of the observing agency—in virtue of the meaningful impact that interaction has on the agency's dynamic construction.
This premise imbues agency with an ontological status.
This status confers meaning to worldly interaction and thereby an epistemological interpretation to the interactive world.
When an agency becomes informed epistemically of its own ontological status, it identifies the subjective nature of being.
That subjective identification is populated by a qualitative meaning, because the world has a meaningful impact on the agency's construction.
Perhaps that sounds ridiculous... I have no idea.
Alternatively, the dynamic construction of an agency determines what the world means to it. The information it accrues is a reflection of what it is. And it is meaningful and relevant to it. Humans have, as part of that informational construction, a recognition of this relation, which confers a subjective identity populated with the qualitative meaning of its existence.
 
@ufology
2. you say, "I don't see the relevance of the extension. It would seem to be self-evident." Self-evident??
What I meant by that, is that assuming that the reader gets what you are saying, particularly in the previous sentence, then the extension doesn't do much more than impress the point. Maybe I've begun to take for granted how difficult some of the ideas might be for those who haven't waded into this. Not sure. If you think it's worth adding, then don't worry about it. Go with your experience and instinct on it.
3. re: objective–subjective bridge
The Searle video recently linked (26'10") might help here:
We start from the premise that information is not a commodity out there in the world but that it is determined by the nature of the dynamic construction of the observing agency—in virtue of the meaningful impact that interaction has on the agency's dynamic construction.
This premise imbues agency with an ontological status.
This status confers meaning to worldly interaction and thereby an epistemological interpretation to the interactive world.
When an agency becomes informed epistemically of its own ontological status, it identifies the subjective nature of being.
That subjective identification is populated by a qualitative meaning, because the world has a meaningful impact on the agency's construction.
Perhaps that sounds ridiculous... I have no idea.
It seems internally coherent, but whether or not it actually constitutes a bridge or explains anything is another matter.
Alternatively, the dynamic construction of an agency determines what the world means to it. The information it accrues is a reflection of what it is. And it is meaningful and relevant to it. Humans have, as part of that informational construction, a recognition of this relation, which confers a subjective identity populated with the qualitative meaning of its existence.
Same thing. The typical problem with respect to the subjective and the objective, in a nutshell, is that they appear to constitute two different types of reality, and some schools of philosophy have a problem with that. I don't. I just accept it as a fact. Philosophically, I suppose it's sort of a Yin-Yang thing. Essentially you've described that same situation; but neither of our views actually constitutes an "objective-subjective bridge". Instead for my part, I'm just saying, "Look, this is how it is: We've got these two different types of reality within a larger construct. So what's next?" It seems that's ultimately where you're headed as well.
 
Soupie, that paragraph from the constructivist* source I copied a day or two ago hardly expresses, in its truncated two-sentence quotation, the ideas of Varela, Thompson & Rosch 1991 (The Embodied Mind: Cognitive Science and Human Experience) or the thought of Varela and Thompson as a whole. If you go to page 157 of the Google Books sample of the book you can read the whole paragraph from which the constructivist author quoted fragments and I think get a better grasp of what they were saying. The quasi-quotation from Maturana is also misleading.

But since you like that brief quotation from The Embodied Mind, I have to ask you whether you've yet had time to read that book as a whole (the necessary context for understanding that sentence). It's a long time since I've read it (which was in drafts online during the period of its writing), and I need to read it again. The book is essential to recognizing the distinctions among emergentist, connectionist, and enactivist theories of consciousness (the latter being the major work of Varela and Thompson). I just read a very instructive review of The Embodied Mind written by a computer scientist at Yale, which I recommend as prefatory reading before reading the book:

https://www.cise.ufl.edu/~anand/pdf/airevhd.pdf

I'd post some extracts from it (especially a section that will interest you on color vision beginning at page 8), but unfortunately the text can't be copied in a readable form.


*As I noted in a previous post, the author of that brief overview of constructivist premises acknowledges that piece is written with a "broad brush." The paragraph you've cited was the major reason why I commented that constructivism seems, in this summary of its premises, to lack philosophical depth. But there are years' worth of the journal Constructivist Foundations available online to correct that impression for anyone wanting to invest time in pursuing constructivism as it has taken shape.
While I have not read "The Embodied Mind," I have, as you know, read Thompson's "Mind in Life." As I noted at the time, what I found was support for my current approach to consciousness. That is, phenomenal consciousness is largely--if not wholly--Intentional Information. However, this is not computational information as we generally think of it (as I've tried to explain many times). The article I posted above clarifies this concept in a way that I had not yet encountered, even after reading all of Mind in Life.

That is, the environment isn't to be thought of as a computer "program" or "input" that the organism is "running" or "reading" which it then translates as "output" or "behavior." As I've tried to gently (and at times not-so-gently) hint at, I think your interpretation of Neurophenomenology may not be completely on target.

Enactivism - Wikipedia, the free encyclopedia

"Enactivism is one of a cluster of related theories sometimes known as the 4Es, the others being embodied, embedded and extended aspects of cognition.[10][11] It proposes an alternative to dualism as a philosophy of mind, in that it emphasises the interactions between mind, body and the environment, seeing them all as inseparably intertwined in mental processes.[12] The self arises as part of the process of an embodied entity interacting with the environment in precise ways determined by its physiology. In this sense, individuals can be seen to "grow into" or arise from their interactive role with the world.[13]

"Enaction is the idea that organisms create their own experience through their actions. Organisms are not passive receivers of input from the environment, but are actors in the environment such that what they experience is shaped by how they act."[14]

In The Tree of Knowledge Maturana & Varela proposed the term enactive[15] "to evoke the view of knowledge that what is known is brought forth, in contraposition to the more classical views of either cognitivism[Note 1] or connectionism.[Note 2] They see enactivism as providing a middle ground between the two extremes of representationalism and solipsism. They seek to "confront the problem of understanding how our existence-the praxis of our living- is coupled to a surrounding world which appears filled with regularities that are at every instant the result of our biological and social histories.... to find a via media: to understand the regularity of the world we are experiencing at every moment, but without any point of reference independent of ourselves that would give certainty to our descriptions and cognitive assertions. Indeed the whole mechanism of generating ourselves, as describers and observers tells us that our world, as the world which we bring forth in our coexistence with others, will always have precisely that mixture of regularity and mutability, that combination of solidity and shifting sand, so typical of human experience when we look at it up close."[Tree of Knowledge, p. 241]

Enactivism also addresses the hard problem of consciousness, referred to by Thompson as part of the explanatory gap in explaining how consciousness and subjective experience are related to brain and body.[16] "The problem with the dualistic concepts of consciousness and life in standard formulations of the hard problem is that they exclude each other by construction".[17] Instead, according to Thompson's view of enactivism, the study of consciousness or phenomenology as exemplified by Husserl and Merleau-Ponty is to complement science and its objectification of the world. "The whole universe of science is built upon the world as directly experienced, and if we want to subject science itself to rigorous scrutiny and arrive at a precise assessment of its meaning and scope, we must begin by reawakening the basic experience of the world of which science is the second-order expression" (Merleau-Ponty, The phenomenology of perception as quoted by Thompson, p. 165). In this interpretation, enactivism asserts that science is formed or enacted as part of humankind's interactivity with its world, and by embracing phenomenology "science itself is properly situated in relation to the rest of human life and is thereby secured on a sounder footing."[18][19]"

Enaction has been seen as a move to conjoin representationalism with phenomenalism, that is, as adopting a constructivist epistemology, an epistemology centered upon the active participation of the subject in constructing reality.[20][21] However, 'constructivism' focuses upon more than a simple 'interactivity' that could be described as a minor adjustment to 'assimilate' reality or 'accommodate' to it.[22] Constructivism looks upon interactivity as a radical, creative, revisionist process in which the knower constructs a personal 'knowledge system' based upon their experience and tested by its viability in practical encounters with their environment. Learning is a result of perceived anomalies that produce dissatisfaction with existing conceptions.[23]

How does constructivism relate to enactivism? From the above remarks it can be seen that Glasersfeld expresses an interactivity between the knower and the known quite acceptable to an enactivist, but does not emphasize the structured probing of the environment by the knower that leads to the "perturbation relative to some expected result" that then leads to a new understanding.[23] It is this probing activity, especially where it is not accidental but deliberate, that characterizes enaction, and invokes affect,[24] that is, the motivation and planning that lead to doing and to fashioning the probing, both observing and modifying the environment, so that "perceptions and nature condition one another through generating one another."[25] The questioning nature of this probing activity is not an emphasis of Piaget and Glasersfeld.

Sharing enactivism's stress upon both action and embodiment in the incorporation of knowledge, but giving Glasersfeld's mechanism of viability an evolutionary emphasis,[26] is evolutionary epistemology. Inasmuch as an organism must reflect its environment well enough for the organism to be able to survive in it, and to be competitive enough to be able to reproduce at sustainable rate, the structure and reflexes of the organism itself embody knowledge of its environment. ( @Pharoah ) This biology-inspired theory of the growth of knowledge is closely tied to universal Darwinism, and is associated with evolutionary epistemologists such as Karl Popper, Donald T. Campbell, Peter Munz, and Gary Cziko.[27] According to Munz, "an organism is an embodied theory about its environment... Embodied theories are also no longer expressed in language, but in anatomical structures or reflex responses, etc."[27][28]

Consider the above in regard to this excellent quote from Perception and Reality: Why a Wholly Empirical Paradigm is Needed to Understand Vision

"Understanding vision as reflexive (i.e., hard-wired at any given moment but subject to modification by subsequent experience) also affords the ability to account for visual perceptions generated within a few tens of milliseconds in response to complex stimuli such as wind-blown leaves, running water, animal movements and numerous other circumstances."

This is Enactivism in a nutshell!

An organism is a system "forged" slowly over time via evolution--its very structure embodies "knowledge about" (intentional information) the environment--yet it also remains flexible and malleable so that it can "learn" via experience (interaction) with the environment.
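
A toy sketch of that last point (my own, with purely illustrative names and numbers): fixed "evolved" reflex weights that embody knowledge of a hypothetical environment, plus a small malleable adjustment learned within the agent's lifetime.

```python
evolved_weights = {"looming_shadow": -1.0, "food_odour": +1.0}   # hard-wired by selection
learned_adjust  = {"looming_shadow": 0.0, "food_odour": 0.0}     # malleable within a lifetime
learning_rate = 0.1

def respond(stimulus):
    # Approach (positive) or avoid (negative), combining both sources of "knowledge".
    return evolved_weights[stimulus] + learned_adjust[stimulus]

def learn(stimulus, outcome):
    # Outcome feedback slowly reshapes only the malleable part.
    learned_adjust[stimulus] += learning_rate * (outcome - respond(stimulus))

print(respond("looming_shadow"))             # -1.0: innate avoidance
for _ in range(20):                          # repeated harmless encounters
    learn("looming_shadow", outcome=0.0)
print(round(respond("looming_shadow"), 2))   # avoidance habituates; the reflex is not erased
```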
 
And here is an extended, excellent quote from this article I recently posted: Phenomenal Expectations and the Developmental Origins of Knowledge of Objects | The Brains Blog

"Coming to know facts about physical objects is a matter of rediscovering things already implicit in a system of object indexes, or so the guess about phenomenal expectations implies. Some might object that development can’t require such rediscovery because it would be hopelessly inefficient to require things already encoded to be learnt anew. But rediscovery is an elegant solution to a practical problem. If you are building a survival system you want quick and dirty heuristics that are good enough to keep it alive: you don’t necessarily care about the truth. If, by contrast, you are building a thinker, you want her to be able to think things that are true irrespective of their apparent survival value. This cuts two ways. On the one hand, you want the thinker’s thoughts not to be constrained by heuristics that ensure her survival. On the other hand, in allowing the thinker freedom to pursue the truth there is an excellent chance she will end up profoundly mistaken or deeply confused about the nature of physical objects (especially if she’s a philosopher, it seems). So you don’t want thought contaminated by survival heuristics and you don’t want survival heuristics contaminated by thought. Or, even if some contamination is inevitable, you want to limit it. This combination is beautifully achieved by giving your thinker a system or some systems for tracking objects and their interactions which appear early in development [phenomenal vision/perception], and also a mind which allows her to acquire knowledge of physical objects gradually over months or years, taking advantage of interactions with objects as well as social interactions about objects [flexible, conceptual learning]—providing, of course, that the two are not directly connected but rather linked only very loosely, via phenomenal expectations."

This is a beautiful illustration of the way in which our bodies are "hard-wired" with knowledge of the environment, while at the same time possessing flexible mechanisms that allow us to adapt and learn from our interactions with the environment. (Above brackets are mine.)
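
Here is a toy sketch of that loose coupling (all names and numbers are mine, purely hypothetical): a fast, hard-wired object-index tracker and a slow conceptual learner that communicate only through a coarse expectation-violation signal, never by sharing representations.

```python
def object_index_tracker(frame, indexes):
    # Fast heuristic: an indexed object missing from the current frame
    # counts as a violated expectation.
    return [i for i in indexes if i not in frame]

concept_strength = 0.0          # slow, explicit "object permanence" knowledge

def conceptual_learner(violation_signal):
    # Learns gradually, and only from the coarse signal -- never from the
    # tracker's internal states.
    global concept_strength
    if violation_signal:
        concept_strength += 0.05

indexes = {"ball", "cup"}
frames = [{"ball", "cup"}, {"cup"}, {"ball", "cup"}, {"cup"}]   # ball intermittently occluded
for frame in frames:
    violated = object_index_tracker(frame, indexes)
    conceptual_learner(bool(violated))
print(round(concept_strength, 2))   # 0.1 after two surprising frames
```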
 
What I meant by that, is that assuming that the reader gets what you are saying, particularly in the previous sentence, then the extension doesn't do much more than impress the point. Maybe I've begun to take for granted how difficult some of the ideas might be for those who haven't waded into this. Not sure. If you think it's worth adding, then don't worry about it. Go with your experience and instinct on it.

It seems internally coherent, but whether or not it actually constitutes a bridge or explains anything is another matter.

Same thing. The typical problem with respect to the subjective and the objective, in a nutshell, is that they appear to constitute two different types of reality, and some schools of philosophy have a problem with that. I don't. I just accept it as a fact. Philosophically, I suppose it's sort of a Yin-Yang thing. Essentially you've described that same situation; but neither of our views actually constitutes an "objective-subjective bridge". Instead for my part, I'm just saying, "Look, this is how it is: We've got these two different types of reality within a larger construct. So what's next?" It seems that's ultimately where you're headed as well.

It would impress the point to the reviewer of my paper, because he/she did not appreciate the extension. He/she recognised the sense of 'observer' only in terms of the human relation to the world. I would agree that it is obvious, but my obvious rarely seems to translate into obvious for others.
Internally coherent... wow. That's one degree better than mad isolation :) at last.
Your last point... recognising two different types of reality is not the same as having one explain why the other must exist. Subjectivity must come into existence in a universe of our kind. I appreciate that I haven't convinced you (or anyone for that matter) that my HCT does this... but my internal coherence tells me it is obvious; somehow I have to make it obvious to everyone else.
 
It would impress the point to the reviewer of my paper, because he/she did not appreciate the extension. He/she recognised the sense of 'observer' only in terms of the human relation to the world. I would agree that it is obvious, but my obvious rarely seems to translate into obvious for others.
Internally coherent... wow. That's one degree better than mad isolation :) at last.
Seriously. Describing a situation coherently is important, and it's not always easy, even if it's obvious to one's self.
Your last point... recognising two different types of reality is not the same as having one explain why the other must exist.
True.
Subjectivity must come into existence in a universe of our kind.
That assumes that some version of awareness "must come into existence". I'm not convinced that it "must" come into existence. Hypothetically our universe might never have evolved anything with subjectivity. Indeed, there was a long period of time when, so far as we know, there was no life at all, let alone life complex enough to have experiences of a subjective nature. However, I would agree that, assuming life does evolve subjectivity, your logic surrounding the relationship between such life and the kind of subjective experiences it has appears to be sound and in harmony with scientific fact.

That being said, there is still a dividing line between simple awareness ( detection of and reaction to stimuli ) and having a consciousness that experiences "what it's like" to detect and react to stimuli.

I appreciate that I haven't convinced you (or anyone for that matter) that my HCT does this... but my internal coherence tells me it is obvious; somehow I have to make it obvious to everyone else.
It seems to me you've made your points well during this short exchange. What I think would be helpful is an example of how having this understanding can advance something in a practical sense. For example what medical or AI advances could be made by taking these observations into account?
 
Seriously. Describing a situation coherently is important, and it's not always easy, even if it's obvious to one's self.

True.

That assumes that some version of awareness "must come into existence". I'm not convinced that it "must" come into existence. Hypothetically our universe might never have evolved anything with subjectivity. Indeed, there was a long period of time when, so far as we know, there was no life at all, let alone life complex enough to have experiences of a subjective nature. However, I would agree that, assuming life does evolve subjectivity, your logic surrounding the relationship between such life and the kind of subjective experiences it has appears to be sound and in harmony with scientific fact.

That being said, there is still a dividing line between simple awareness ( detection of and reaction to stimuli ) and having a consciousness that experiences "what it's like" to detect and react to stimuli.


It seems to me you've made your points well during this short exchange. What I think would be helpful is an example of how having this understanding can advance something in a practical sense. For example what medical or AI advances could be made by taking these observations into account?
@ufology The practical application that interests me is artificial consciousness... unfortunately I need 50 million in research funding. The other practical application of philosophical interest is the realisation of a predicted hierarchical construct that has yet to emerge and evolve, whose significance to humanity will be as profound as the evolution of early hominid to human. The hierarchy is not complete.
I wasn't making an assumption btw, but stating what HCT says must be the case... that subjectivity must evolve in a universe like ours.
 
@Pharoah, I'm impressed by the ramifications of your recent posts quoted below and hope we will discuss them (and perhaps hear further elaborations of them) here.

The qualitative nature of experience is a representation of the comparative relevancies and merit of environmental interactions. A species' physiologies determine the relative significance of one experience over another and in this way the world becomes one of meaning that is informed—phenomenally: in terms of its qualitative merits.


. . . 2. you say, "I don't see the relevance of the extension. It would seem to be self-evident."
Self-evident?? If you look at my draft paper on information (recently linked #404), you will be puzzled why virtually nobody thinks in this manner about information. Alternatively, I am misunderstanding what you mean here.
To clarify, the orthodoxy is that information is a commodity that exists out there in the world, and that can be transferred, modified, transmitted etc. The extension above ("By extension, I say that no physical entity has independent informational content by which we might call it "a fact of that physical entity, that has this or that property of existence") states that nothing is informational in and of itself. The meaning of informational content is dictated by the nature of the observing agency's dynamic construction. Thus information is meaning attributed by an agency to an external entity that interacts with and thereby has an impact on it.
This is why the phenomenal experience of redness is observer-dependent: redness does not exist in the world as a qualitative phenomenon. Consequently, there is no representation of worldly redness as such (this is where representational theories get it wrong). Where does the representation come into it? — The qualitative nature of experience is a representation of the comparative relevancies and merit of environmental interactions. A species' physiologies determine the relative significance of one experience over another and in this way the world becomes one of meaning that is informed—phenomenally: in terms of its qualitative merits.


3. re: objective–subjective bridge
The Searle video recently linked (26'10") might help here:
We start from the premise that information is not a commodity out there in the world but that it is determined by the nature of the dynamic construction of the observing agency—in virtue of the meaningful impact that interaction has on the agency's dynamic construction.
This premise imbues agency with an ontological status.
This status confers meaning to worldly interaction and thereby an epistemological interpretation to the interactive world.
When an agency becomes informed epistemically of its own ontological status, it identifies the subjective nature of being.
That subjective identification is populated by a qualitative meaning, because the world has a meaningful impact on the agency's construction.

Perhaps that sounds ridiculous... I have no idea.
Alternatively, the dynamic construction of an agency determines what the world means to it. The information it accrues is a reflection of what it is, and it is meaningful and relevant to it. Humans have, as part of that informational construction, a recognition of this relation, which confers a subjective identity populated with the qualitative meaning of its existence.


Please clarify this comment to, I think, Soupie:

Your last point... recognising two different types of reality is not the same as having one explain why the other must exist. Subjectivity must come into existence in a universe of our kind. I appreciate that I haven't convinced you (or anyone for that matter) that my HCT does this... but my internal coherence tells me it is obvious; somehow I have to make it obvious to everyone else.

It seems to me that your developments of HCT have in recent months become increasingly more attuned to phenomenological insights into/descriptions of consciousness. Today I'll read that second paper concerning 'information'. Thanks for the corrected link.
 
While I have not read "The Embodied Mind," I have, as you know, read Thompson's "Mind in Life." As I noted at the time, what I found was support for my current approach to consciousness. That is, phenomenal consciousness is largely--if not wholly--Intentional Information. However, this is not computerized information as we generally think of it (as I've tried to explain many times). The article I posted above clarifies this concept in a way that I had not yet encountered, even after reading all of Mind in Life.

That is, the environment isn't to be thought of as a computer "program" or "input" that the organism is "running" or "reading" which it then translates as "output" or "behavior." As I've tried to gently (and at times not-so-gently) hint at, I think your interpretation of Neurophenomenology may not be completely on target.

That might be the case. We'd have to read and discuss a number of texts in detail in order to find out. My continuing sense, though, is that, while your thinking about consciousness and mind seems to be changing, what you write does not generally seem to express an acceptance of the new ontology suggested by Varela and Thompson, which accords with the ontology MP developed in his phenomenological philosophy. Do you in general agree with @ufology's view expressed in a post today that Pharoah is presenting an argument for "two 'realities'" rather than one (thus necessarily two separate ontologies to describe the world we live in)?

I'm not sure that the wikipedia entry you quote in the rest of your post is adequate to describe the philosophical significance of enactivism. I'll reproduce those quotations here because they might provoke some useful dialogue:

Enactivism - Wikipedia, the free encyclopedia

"Enactivism is one of a cluster of related theories sometimes known as the 4Es, the others being embodied, embedded and extended aspects of cognition.[10][11] It proposes an alternative to dualism as a philosophy of mind, in that it emphasises the interactions between mind, body and the environment, seeing them all as inseparably intertwined in mental processes.[12] The self arises as part of the process of an embodied entity interacting with the environment in precise ways determined by its physiology. In this sense, individuals can be seen to "grow into" or arise from their interactive role with the world.[13]

"Enaction is the idea that organisms create their own experience through their actions. Organisms are not passive receivers of input from the environment, but are actors in the environment such that what they experience is shaped by how they act."[14]

In The Tree of Knowledge Maturana & Varela proposed the term enactive[15] "to evoke the view of knowledge that what is known is brought forth, in contraposition to the more classical views of either cognitivism[Note 1] or connectionism.[Note 2] They see enactivism as providing a middle ground between the two extremes of representationalism and solipsism. They seek to "confront the problem of understanding how our existence-the praxis of our living- is coupled to a surrounding world which appears filled with regularities that are at every instant the result of our biological and social histories.... to find a via media: to understand the regularity of the world we are experiencing at every moment, but without any point of reference independent of ourselves that would give certainty to our descriptions and cognitive assertions. Indeed the whole mechanism of generating ourselves, as describers and observers tells us that our world, as the world which we bring forth in our coexistence with others, will always have precisely that mixture of regularity and mutability, that combination of solidity and shifting sand, so typical of human experience when we look at it up close."[Tree of Knowledge, p. 241]

Enactivism also addresses the hard problem of consciousness, referred to by Thompson as part of the explanatory gap in explaining how consciousness and subjective experience are related to brain and body.[16] "The problem with the dualistic concepts of consciousness and life in standard formulations of the hard problem is that they exclude each other by construction".[17] Instead, according to Thompson's view of enactivism, the study of consciousness or phenomenology as exemplified by Husserl and Merleau-Ponty is to complement science and its objectification of the world. "The whole universe of science is built upon the world as directly experienced, and if we want to subject science itself to rigorous scrutiny and arrive at a precise assessment of its meaning and scope, we must begin by reawakening the basic experience of the world of which science is the second-order expression" (Merleau-Ponty, The phenomenology of perception as quoted by Thompson, p. 165). In this interpretation, enactivism asserts that science is formed or enacted as part of humankind's interactivity with its world, and by embracing phenomenology "science itself is properly situated in relation to the rest of human life and is thereby secured on a sounder footing."[18][19]"

Enaction has been seen as a move to conjoin representationalism with phenomenalism, that is, as adopting a constructivist epistemology, an epistemology centered upon the active participation of the subject in constructing reality.[20][21] However, 'constructivism' focuses upon more than a simple 'interactivity' that could be described as a minor adjustment to 'assimilate' reality or 'accommodate' to it.[22] Constructivism looks upon interactivity as a radical, creative, revisionist process in which the knower constructs a personal 'knowledge system' based upon their experience and tested by its viability in practical encounters with their environment. Learning is a result of perceived anomalies that produce dissatisfaction with existing conceptions.[23]

How does constructivism relate to enactivism? From the above remarks it can be seen that Glasersfeld expresses an interactivity between the knower and the known quite acceptable to an enactivist, but does not emphasize the structured probing of the environment by the knower that leads to the "perturbation relative to some expected result" that then leads to a new understanding.[23] It is this probing activity, especially where it is not accidental but deliberate, that characterizes enaction, and invokes affect,[24] that is, the motivation and planning that lead to doing and to fashioning the probing, both observing and modifying the environment, so that "perceptions and nature condition one another through generating one another."[25] The questioning nature of this probing activity is not an emphasis of Piaget and Glasersfeld.

Sharing enactivism's stress upon both action and embodiment in the incorporation of knowledge, but giving Glasersfeld's mechanism of viability an evolutionary emphasis,[26] is evolutionary epistemology. Inasmuch as an organism must reflect its environment well enough for the organism to be able to survive in it, and to be competitive enough to be able to reproduce at sustainable rate, the structure and reflexes of the organism itself embody knowledge of its environment. ( @Pharoah ) This biology-inspired theory of the growth of knowledge is closely tied to universal Darwinism, and is associated with evolutionary epistemologists such as Karl Popper, Donald T. Campbell, Peter Munz, and Gary Cziko.[27] According to Munz, "an organism is an embodied theory about its environment... Embodied theories are also no longer expressed in language, but in anatomical structures or reflex responses, etc."[27][28]


You concluded the above series of quotations as follows:

Consider the above in regards to this excellent quote from Perception and Reality: Why a Wholly Empirical Paradigm is Needed to Understand Vision

"Understanding vision as reflexive (i.e., hard-wired at any given moment but subject to modification by subsequent experience) also affords the ability to account for visual perceptions generated within a few tens of milliseconds in response to complex stimuli such as wind-blown leaves, running water, animal movements and numerous other circumstances."

This is Enactivism in a nutshell!

An organism is a system "forged" slowly over time via evolution--its very structure embodies "knowledge about" (intentional information) the environment--yet it also remains flexible and malleable so that it can "learn" via experience (interaction) with the environment.
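To make the two timescales concrete, here is a toy sketch in which selection across generations settles an innate parameter that comes to embody a regularity of the environment, while each individual still adjusts its own copy of that parameter through lifetime experience. It is only an illustrative sketch under assumptions of my own; ENV_OPTIMUM, lifetime_learning, and all the numbers are invented and appear in none of the quoted sources.

```python
import random

# Toy illustration only (not a model from the quoted sources): selection over
# generations "forges" an innate parameter that embodies an environmental
# regularity, while each individual also adjusts its own copy of that
# parameter through experience during its lifetime.

ENV_OPTIMUM = 0.7                               # the regularity to be "known"

def lifetime_learning(innate, experiences=20, rate=0.05):
    value = innate
    for _ in range(experiences):
        value += rate * (ENV_OPTIMUM - value)   # learning from interaction
    return value

def fitness(value):
    return -abs(ENV_OPTIMUM - value)            # closer to the optimum = fitter

population = [random.uniform(0.0, 1.0) for _ in range(30)]
for generation in range(50):
    ranked = sorted(population,
                    key=lambda innate: fitness(lifetime_learning(innate)),
                    reverse=True)
    parents = ranked[:10]                       # selection
    population = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.02)))
                  for _ in range(30)]           # reproduction with mutation

print("mean innate value after selection:", sum(population) / len(population))
```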

I think Enactivism is ontologically far more ramifying than what these quotations from wikipedia suggest.
 
@ufology The practical application that interests me is artificial consciousness... unfortunately I need 50 million in research funding.
To get the 50 million I think you'd need to be more specific ... LOL. So for example, given our recent discussion, and using vision as an example, because humans have evolved the perception of color through sensors that detect RGB and a processing system that combines those signals in such a way that there are millions of possible variations that somehow emerge as color perception, building color perception into an Engineered Consciousness analogous to humans would also require an RGB detection and processing system. That makes perfect sense.
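As a back-of-envelope check on the "millions of possible variations" figure: assuming 8-bit intensity levels per channel (an assumption on my part; the post doesn't give a bit depth), three channels yield about 16.8 million combinations.

```python
# Back-of-envelope arithmetic only, assuming 8-bit intensity per channel
# (my assumption; the post above doesn't specify a bit depth).
levels_per_channel = 256   # 8 bits per channel
channels = 3               # R, G, B
print(levels_per_channel ** channels)   # 16777216, i.e. roughly 16.8 million
```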

The only problem I have with the approach above, and continuing with the example, is that it often seems to have been assumed that electronic RGB detection and processing systems should do as well or better than the stock human version. I question the validity of that assumption. I suggest that it may not simply be a matter of sheer processing power, but something more subtle in the properties and architecture of the human system that gives rise to the experience of color. In other words, the autologous nature of human consciousness may be dependent upon its unique configuration of specific materials; and nothing substantially different will actually produce consciousness at all, let alone anything like that of humans.

The other practical application of philosophical interest is the realisation of a predicted hierarchical construct that has yet to emerge and evolve, whose significance to humanity will be as profound as the evolution of early hominid to human.
Possibly. Or it might also be the case that there's no wiggle room. The engineering of a consciousness might end up requiring exactly the same materials and configuration for it to work as in normal humans, in which case we run into all the same ethical issues that we do with "real humans". The trick is, to use the field analogy I mentioned previously, knowing how far from the baseline various facets can be taken before the field collapses.
The hierarchy is not complete. I wasn't making an assumption btw, but stating what HCT says must be the case... that subjectivity must evolve in a universe like ours.
The only way that I can see for such a claim to not be based on assumption is if you're just playing Captain Obvious: e.g. Any universe "like ours" would by definition, in order to be "like ours" also have subjectivity because, after all, our universe has subjectivity, and therefore any universe that doesn't, would not be like ours. Basically so what? Apart from that, if you're not making any assumptions, what do you base the certainty of your claim upon? It can't be statistics, because statistical probabilities are all based on assumptions about what is "likely" to happen, but that's not the same as being certain. Simply because it can happen doesn't mean it "must happen". For all we know, there are hundreds of other universes like ours, and only a handful have evolved subjectivity. Maybe some never will.
 
@Pharoah, I'm impressed by the ramifications of your recent posts quoted below and hope we will discuss them (and perhaps hear further elaborations of them) here.

Please clarify this comment to, I think, Soupie:

It seems to me that your developments of HCT have in recent months become increasingly more attuned to phenomenological insights into/descriptions of consciousness. Today I'll read that second paper concerning 'information'. Thanks for the corrected link.
@Constance
1. I am up for discussing ramifications.

2. The comment that you want clarified was a response to ufology #410:
"The typical problem with respect to the subjective and the objective, in a nutshell, is that they appear to constitute two different types of reality, and some schools of philosophy have a problem with that. I don't. I just accept it as a fact. Philosophically, I suppose it's sort of a Yin-Yang thing. Essentially you've described that same situation; but neither of our views actually constitutes an "objective-subjective bridge". Instead for my part, I'm just saying, "Look, this is how it is: We've got these two different types of reality within a larger construct. So what's next?" It seems that's ultimately where you're headed as well."

The two types of reality ufology refers to are the ontological status and the epistemological status. The discussion stems from @ufology's reference to a video of a Searle lecture, where Searle explains that the ontologically subjective nature of consciousness does not prevent us from having an epistemically objective science of it: 'no fact about the subjective ontology of consciousness makes it impossible to have an epistemically objective science' (26'20"). This, I think, ties in with Nagel's View from Nowhere notion for an expansionist explanation of the subjective in objective terms.

3. And @Constance you are correct in noting that phenomenology is influencing the way I talk about HCT. It is not intentional, but I am aware when I am utilising the language. I think that HCT can be explored from a phenomenological perspective, but that approach is virgin territory to me.
 
To get the 50 million I think you'd need to be more specific ... LOL. So for example, given our recent discussion, and using vision as an example, because humans have evolved the perception of color through sensors that detect RGB and a processing system that combines those signals in such a way that there are millions of possible variations that somehow emerge as color perception, building color perception into an Engineered Consciousness analogous to humans would also require an RGB detection and processing system. That makes perfect sense.

The only problem I have with the approach above, and continuing with the example, is that it often seems to have been assumed that electronic RGB detection and processing systems should do as well or better than the stock human version. I question the validity of that assumption. I suggest that it may not simply be a matter of sheer processing power, but something more subtle in the properties and architecture of the human system that gives rise to the experience of color. In other words, the autologous nature of human consciousness may be dependent upon its unique configuration of specific materials; and nothing substantially different will actually produce consciousness at all, let alone anything like that of humans.


Possibly. Or it might also be the case that there's no wiggle room. The engineering of a consciousness might end up requiring exactly the same materials and configuration for it to work as in normal humans, in which case we run into all the same ethical issues that we do with "real humans". The trick is, to use the field analogy I mentioned previously, knowing how far from the baseline various facets can be taken before the field collapses.

The only way that I can see for such a claim to not be based on assumption is if you're just playing Captain Obvious: e.g. Any universe "like ours" would by definition, in order to be "like ours" also have subjectivity because, after all, our universe has subjectivity, and therefore any universe that doesn't, would not be like ours. Basically so what? Apart from that, if you're not making any assumptions, what do you base the certainty of your claim upon? It can't be statistics, because statistical probabilities are all based on assumptions about what is "likely" to happen, but that's not the same as being certain. Simply because it can happen doesn't mean it "must happen".
You are correct in your last paragraph: you say, "Basically so what?" I suppose I am just reaffirming Nagel (The View from Nowhere), who argues that the objective world can explain the inevitable evolution of subjectivity.

I base my assertions on the axioms underpinning my HCT, which I don't think I have articulated yet because we have never got that far into the theory.

"Wiggle room"? Just to clarify. Hierarchical construct theory identifies distinct hierarchical levels that have emerged and evolve. Each level emerging from the previous, and each level evolving complexities of form. The theory predicts that there must be another hierarchical level that is yet to emerge from the human 'awareness of reality' perspective, and that this level is ... transformative.

The AC discussion is a huge topic. My understanding of the realisation of AC is predicated on my acceptance of HCT's validity. It shares no common ground with current AI modelling or theory.
 