1. The privacy constraint on consciousness
On a naturalistic view of ourselves, we are entirely physical beings who are also conscious, but thus far there is no consensus on the nature of consciousness. The central difficulty is that the defining feature of conscious experience – the subjective, qualitative ‘what it’s like’ or phenomenal character of tasting a mango, seeing a red rose, or dreaming about a blue lake – is not available to intersubjective observation or measurement (Gamez, 2014). If it were, there would be no problem of consciousness, nor of other minds, for instance whether fish feel pain. Pain would be out there in public, so we’d know that they either do or don’t suffer when hooked. But pain isn’t public, unlike whatever its neural or otherwise physical correlates might be, whether in fish, fowl, or us. We don’t and won’t see pain when peering into the brain, at whatever level of magnification. And so it is with all experiences: they are only available to, only exist for, individual conscious subjects. Unlike the subjects’ brains, they aren’t observables.
Despite its subjective nature, many approaches to explaining consciousness hypothesize that experience is an objective, physically-embodied phenomenon; optimistic physicalists suppose that consciousness will eventually find its place in the material world as described by science.
Theories aiming to objectify consciousness range widely: reductive identity theories that equate experiences with physically-realized states or functions; non-reductive, property dualist theories in which consciousness supervenes on material or functional states; panpsychist theories in which phenomenology is a fundamental property of matter; theories involving quantum or magnetic fields as instantiators of consciousness; and radically enactivist theories that identify conscious states with ordinary physical objects (Manzotti, 2011).
Although we can’t conclusively discount these possibilities, I’d suggest we not hold our collective breath. The existential privacy of an experience – its subjectivity, the fact that it exists only for the mind undergoing it – isn’t likely to go away. We can call this the privacy constraint on consciousness. We can then ask why conscious experience isn’t objectifiable even though as conscious creatures we are physically objective. The answer I propose has to do with what I’ll call the representational relation: the world is only known by cognitive systems, including ourselves, using content-bearing representations. Conscious experience is arguably a species of representational content – qualitative, phenomenal content – and the world appears to each of us as a conscious subject in terms of that content.
The reason we don’t find conscious experience in the world, the reason we can’t objectify it, is that as a rule we don’t and won’t find representational content in the world as modeled by it. We only find the physical objects and phenomena characterized in terms of such content, including the physically-instantiated content vehicles. Theories that suppose we can objectify consciousness, putting it in the public domain, are thus barking up the wrong tree. ...
2. The representational relation
It is a commonplace that as knowers we deploy various sorts of representations in negotiating our contact with the world. The world is represented by us conceptually and quantitatively in the ‘manifest image’ of ordinary human discourse and the ‘scientific image’ of physical theory (Sellars, 1962).
That we are in a representational relation to reality seems an unavoidable condition of our being limited, situated creatures with particular perspectives on the world, whether individual or collective. This means that the world never appears to us undressed, so to speak, but always clothed in perspectivally conditioned models. Still, the models we humans deploy generally include a vague but plausible (and perhaps unavoidable) realist assumption: the world exists mind-independently and includes various mind-independent entities and processes, some of which appear to us as having spatio-temporal, physical properties as given in both science and everyday experience. Among those entities are composite, complex, and integrated systems that constitute minds – mind-systems – at least some of which, like ourselves, are conscious.
On the face of it, conscious experience seems to be a representational, informationally rich phenomenon that mediates our contact as individual subjects with the world. There’s usually a non-coincidental and behaviorally crucial correspondence between our waking experience and what’s the case in our immediate surroundings. This correspondence is underwritten by causal interaction with the environment via our information-gathering, behavior-guiding sensory modalities, with whose operation consciousness is closely associated. Most of the time we unreflectively take the world as given in experience to be the spatio-temporal manifold as it is in itself, directly presented to us. But we can infer, on the basis of dreams, hallucinations, and optical illusions, that experience is a selective and fallible individual-level model of what’s outside the head.
This shouldn’t be construed as saying that we somehow see experience – the model – instead of the world; we shouldn’t suppose we observe consciousness (T. Clark, 2005). We can avert our gaze and otherwise perceptually distance ourselves from physical objects, but cannot divorce ourselves from the experience in terms of which objects appear and disappear for us since, as subjects, we consist of experience.
To be conscious is for us to subjectively constitute an experiential world-model (Revonsuo, 2015) – what Thomas Metzinger (2009) calls an ‘ego-tunnel’ –
that is modulated and constrained by our direct, physical contact with the world itself (including the body) via our sensory-perceptual systems.
We have developing theories of such contact, most recently and notably Bayesian predictive coding (Seth et al., 2011; A. Clark, 2013): impinging stimuli activate sensory channels that inform the brain’s current multi-modal world-model, helping to minimize mismatches between neural representations and the world in service to behavior control. The continually updated mappings and covariances between the world and brain – the neurally-realized representational relation of sensory perception – allow for successful action and system maintenance, given the nature and needs of the organism (Kanwisher, 2001; Dehaene & Changeux, 2011; Sterling & Laughlin, 2015).
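The error-minimizing dynamic just described can be illustrated with a toy sketch: an internal estimate stands in for the brain’s world-model, sensory samples stand in for impinging stimuli, and each update shrinks the prediction error. This is purely illustrative – the scalar estimate, fixed learning rate, and all names are my own assumptions, not an implementation of the Bayesian models cited above.

```python
def update_estimate(estimate: float, observation: float, lr: float = 0.1) -> float:
    """One step of error-driven updating: move the internal estimate
    toward the observation in proportion to the mismatch (prediction error)."""
    prediction_error = observation - estimate
    return estimate + lr * prediction_error

# Repeated sensory contact with a stable feature of the environment
# progressively reduces the mismatch between model and world.
estimate = 0.0
for obs in [1.0] * 50:
    estimate = update_estimate(estimate, obs)

print(round(estimate, 2))  # prints 0.99 – the estimate has converged toward 1.0
```

Nothing in the toy requires that the updating be conscious, which fits the point of the passage: the representational tracking relation is specified in causal-informational terms, while its phenomenal character remains private to the subject.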
The information-bearing neural processes associated with conscious experience – call them conscious processes – can be identified by contrasting the neural networks active when performing tasks only possible when conscious (e.g., complex learning, planning, reporting) with those networks subserving behavior that can be handled unconsciously (e.g., habitual or automatized behaviors) (Baars, 1997; T. Clark, 2005, 52-55). Experience, since it closely correlates with conscious processes that carry information about the world, itself tracks the world, at least when we’re awake and in perceptual contact with our body and environment. Consciousness can thus carry representational content as inherited from its neural correlates, but couched in qualities available only to the subject (the privacy constraint).1