Exactly. A person's belief is "true" (def. 1), and therefore the perception of that belief represents a reality (the belief exists), as opposed to imaginative, unsubstantiated, illusory, or delusory (def. 2); but the content of the experiences themselves remains unsubstantiated, and may well be imaginative, illusory, or delusory. Simply because someone honestly thinks something happened, and there happens to be some tenuous circumstantial evidence, does not establish the objective veridicality of a claim. And simply adding the word "veridical" as a pre-qualifier to a claim doesn't actually make it any more veridical unless that claim can be backed up with substantial evidence, which you may think exists, but which others may find questionable (at best).

Veridical:
from Wiktionary, Creative Commons Attribution/Share-Alike License
- adj. True.
- adj. Pertaining to an experience, perception, or interpretation that accurately represents reality; as opposed to imaginative, unsubstantiated, illusory, or delusory.
from the GNU version of the Collaborative International Dictionary of English
- adj. Truth-telling; truthful; veracious.
from The Century Dictionary and Cyclopedia
- Truth-telling; veracious; truthful.
- True; being what it purports to be.
from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
- adj. coinciding with reality
veridical - definition and meaning
I certainly "entertain" ideas, and part of that includes an analysis of them so as to get some idea where they fall in the spectrum between fact and fiction. Within that spectrum a lot of connections can be made, and because of previous analysis, I have a better idea what is and isn't reasonable to believe than someone who doesn't bother. Whether or not that makes me happy isn't relevant because I don't choose to believe things on whether or not they make me happy. In fact, I enjoy being proven wrong more than making claims I believe are true.I've been more entertaining of a lot of ideas than others in this thread have been. Instead of paring down, I try to keep a lot of things in mind and then come back to them as I've found in the past there are sometimes surprising connections to be made. And because it makes me happy.
Thanks. I've been somewhat aware of this viewpoint on what can be achieved (or at least hoped/intended to be achieved) in and through meditation: the surcease of sorrow in the world. It doesn't negate the reality of the hard problem, though. It demonstrates the effort required to come to terms with, find peace with, our condition in the world.
I certainly "entertain" ideas, and part of that includes an analysis of them so as to get some idea where they fall in the spectrum between fact and fiction. Within that spectrum a lot of connections can be made, and because of previous analysis, I have a better idea what is and isn't reasonable to believe than someone who doesn't bother. Whether or not that makes me happy isn't relevant because I don't choose to believe things on whether or not they make me happy. In fact, I enjoy being proven wrong more than making claims I believe are true.
I'm reading the paper right now. It's excellent. It expresses precisely what I've been clumsily trying to express throughout this discussion. (The mind is green and the discussion about "experiencing" consciousness.)

On a somewhat related note, have you read "Killing the Observer?" yet? If so, have a look at this exchange and help me understand it:
Constance, I was referring to the apparent fact that while we can't seem to locate consciousness in physical space, we can seem to locate it in time. See the following excerpt from an article by Evan Thompson:
Is Consciousness a “Stream”? | The Brains Blog
"For example, recent experiments show that whether a visual stimulus is consciously detected or not depends on when it arrives in relation to the phases of the brain’s ongoing alpha (8–12 Hz) and theta (5–7 Hz) rhythms (see also this study). You’re more likely to miss the stimulus when it occurs during the trough of an alpha wave; as the alpha wave crests, you’re more likely to detect it.
The moral of these new studies isn’t that perception is strictly discrete, but rather that it’s rhythmic; it happens through successive rhythmic pulses (an idea James also proposed), instead of as one continuous flow. Like a miniature version of the wake-sleep cycle, neural systems alternate from moment to moment between phases of optimal excitability, when they’re most “awake” and responsive to incoming stimuli, and phases of strong inhibition, when they’re “asleep” and least responsive. Moments of perception correspond to excitatory or “up” phases; moments of nonperception to inhibitory or “down” phases. A gap occurs between each “up” or “awake” moment of perception and the next one, so that what seems to be a continuous stream of consciousness may actually be composed of rhythmic pulses of awareness."
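To get a feel for what phase-dependent detection amounts to, here's a minimal sketch in Python. It isn't a model of the actual experiments; it just assumes, purely for illustration, that detection probability is modulated sinusoidally by the phase of a 10 Hz alpha rhythm, with made-up numbers for the baseline rate and modulation depth.

```python
import numpy as np

rng = np.random.default_rng(0)

ALPHA_HZ = 10.0   # assumed alpha frequency (middle of the 8-12 Hz band)
BASE_P = 0.5      # assumed baseline detection probability (made up)
DEPTH = 0.4       # assumed depth of the phase modulation (made up)

def detection_probability(t):
    """Detection probability for a stimulus arriving at time t (seconds).

    Arrivals near the crest of the alpha cycle get a higher probability,
    arrivals near the trough a lower one, mirroring the effect described above.
    """
    phase = 2 * np.pi * ALPHA_HZ * t
    return np.clip(BASE_P + DEPTH * np.cos(phase), 0.0, 1.0)

# Present identical near-threshold stimuli at random times and compare
# hit rates for crest-aligned versus trough-aligned arrivals.
times = rng.uniform(0, 1, size=10_000)
detected = rng.random(times.size) < detection_probability(times)

phase = (2 * np.pi * ALPHA_HZ * times) % (2 * np.pi)
near_crest = (phase < np.pi / 4) | (phase > 7 * np.pi / 4)
near_trough = np.abs(phase - np.pi) < np.pi / 4

print("hit rate near crest: ", detected[near_crest].mean())
print("hit rate near trough:", detected[near_trough].mean())
```

Running it, the crest-aligned hit rate comes out far higher than the trough-aligned one, which is the qualitative pattern the studies report: the same stimulus is seen or missed depending on when it lands in the ongoing rhythm.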
That is some pretty interesting, exciting stuff!
Another point is that getting a grasp on any one "approach" to consciousness requires an enormous investment, and with that comes a tendency not to see the errors in the approach (cognitive dissonance) ...
Consciousness - a stream?
Overall, Thompson concludes that while conscious perception is continuous, it isn’t smoothly regular, but comes in pulses. Perhaps we could say that it’s more like the flow of a bloodstream than that of a river.
Still, though – is consciousness actually continuous? Suppose in fact that it was composed of a series of static moments, like the succeeding frames of a film. In a film the frames follow quickly, but we can imagine longer intervals if we like. However long the gaps, the story told by the film is unaffected and retains all its coherence; the discontinuity can only be seen by an observer outside the film. In the case of consciousness our experience actually is the succession of moments, so if consciousness were discontinuous we should never be aware of it directly. If we noticed anything at all, it would seem to us to be discontinuity in the external world.
It’s not, of course, as simple as that; there are two particular issues. One is that consciousness is not automatically self-consciousness. To draw conclusions about our conscious state requires a second conscious state which is about the first one. We’ve remarked here before on Comte’s objection that the second state necessarily disrupts the first, making reliable introspection impossible: James’ view was that the second state had to be later, so that introspection was always retrospection.
This obviously raises many potential complications; all I want to do is pick out one possibility: that when we introspect the first and second order states alternate. Perhaps what we do is a moment of first-order thinking, then a moment of second order reflection on the moment just past, then another moment of simple first-order thought and so on; a process a bit like an artist flicking his gaze back and forth between subject and canvas.
...
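Just to make that "artist flicking his gaze" picture concrete, here is a toy sketch of my own (nothing from the post itself): discrete moments alternate between first-order perception and second-order reflection on the moment just past, so that introspection is always retrospection.

```python
# Toy model of alternating first-order and second-order moments.
# Even steps are first-order perceptions; odd steps are second-order
# reflections on the immediately preceding moment, never the current one.
stimuli = ["red patch", "loud tone", "itch", "green patch"]

moments = []
for stimulus in stimuli:
    moments.append(("first-order", f"perceiving the {stimulus}"))
    # The reflection takes the moment just past as its object (retrospection).
    moments.append(("second-order", f"noticing that I was just {moments[-1][1]}"))

for i, (order, content) in enumerate(moments):
    print(f"moment {i}: [{order}] {content}")
```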
Psycho-physical parallelism. The activity of the brain does not cause, generate, or excrete consciousness; rather, the activity of the brain is consciousness.

Trond says:
I don’t understand Tom Clark’s argument. On one hand he says that the brain and consciousness are completely detached and have no effect on each other. On the other hand he acknowledges that the content of consciousness depends on the brain. Isn’t that the very epiphenomenalism he denies?
Tom Clark says:
Trond, thanks for the feedback. What I’m suggesting is that the sense in which one’s consciousness depends on being a representational system, like a brain, isn’t a causal relation. Epiphenomenalists usually think that there’s a one way causal relation from brain to mind, that the physical somehow produces or generates the mental, but that the mental has no effect on the physical. What I’m suggesting is a psycho-physical parallelism between phenomenal consciousness and the brain, with no causal interaction and in which the physical doesn’t have ontological priority (as it does for epiphenomenalists). But of course I reserve the right to be wrong about all this!
Trond says:
So in a nutshell you are suggesting that phenomenal consciousness *exists* independently from the brain, but in our case the brain affects consciousness?
While it's true that so far we only find phenomenal states associated with brain states, it's possible that phenomenal states could be associated with the states of other non-brain systems.

Tom Clark says:
I wouldn’t say that consciousness exists independently of the brain. We only find phenomenal states associated with brain states, so there’s clearly a relation. But it doesn’t seem to be a causal relation, since if it were we’d see consciousness as something in addition to brain states that those states produce or generate, and we don’t.
One can't see consciousness; one can only be consciousness.

If you want to say that consciousness just *is* those states, perhaps at some functional or representational level, then again there’s no causal relation, but rather an identity. But that would mean when we look at the operations of the brain, we would be literally seeing consciousness, and that seems wrong.
Consciousness isn’t observable from the outside (or inside for that matter); it exists for the instantiating system alone, not as an object of observation but as the (phenomenal) medium of representation which includes the conscious phenomenal subject as an element.
Psycho-physical parallelism. The activity of the brain does not cause, generate, or excrete consciousness; rather, the activity of the brain is consciousness.
So we wouldn't say, in this case, that brain states cause consciousness; rather, we would say (some) brain states *are* consciousness.
One confounding factor, however, is that not all brain states are conscious states. Only some brain states are conscious states.
Some brain states are intentional, and some intentional brain states are conscious. Why some intentional brain states are conscious we do not know.
It's easy to see how some brain states can be intentional, but it's not easy to see how some intentional brain states can be conscious. But it's easy for me to see that conscious states are intentional states.
Some intentional brain states are conscious states. All conscious states are intentional states.
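To keep those categories straight, here's how I picture the nesting, sketched as a toy Python example (the state labels are invented; only the inclusions matter): conscious states sit inside intentional brain states, which sit inside brain states generally.

```python
# A toy universe of labelled brain states, purely to illustrate the inclusions:
# conscious states < intentional brain states < brain states.
brain_states = {"b1", "b2", "b3", "b4", "b5", "b6"}
intentional_states = {"b2", "b3", "b4", "b5"}   # some, but not all, brain states
conscious_states = {"b4", "b5"}                 # some, but not all, intentional states

# "Some brain states are intentional, and some intentional brain states are
# conscious" -- proper subsets, since not all qualify in either case.
assert intentional_states < brain_states
assert conscious_states < intentional_states

# "All conscious states are intentional states."
assert conscious_states <= intentional_states
```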
While it's true that so far we only find phenomenal states associated with brain states, it's possible that phenomenal states could be associated with the states of other non-brain systems.
Not only does consciousness not appear to be something the brain generates, creates, or excretes, it does not even appear to be something that serves an objective function.
In other words, consciousness is not something brain states create; rather, consciousness *is* brain states.
One can't see consciousness; one can only be consciousness.
Consciousness is neither objective nor physical. Thus, it is not a contradiction to say that consciousness shares an identity with some brain states, but that when one looks at those brain states one does not objectively see consciousness.
The shared identity of certain brain states and conscious states is special, no doubt. There is a physical and phenomenal identity.
As Clark indicates, and as I have long suspected, this phenomenal identity is essentially an intentional identity, and the intentional identity is essentially an informational identity.
What this means is that some brain states have two properties: their physical properties and, in the context of the dynamic brain system, their informational properties.
The physical properties are objective and can be observed from the 3rd-person perspective; the informational, intentional, phenomenal properties are subjective and constitute a 1st-person perspective.
I'd be interested in those papers. I didn't see anything of interest on his naturalism page.

I read several of Clark's papers and found an exchange where he talks about closing the gap through the weight of more and more specific evidence about the mind/brain correlation.
I'd be interested in those papers. I didn't see anything of interest on his naturalism page.
Re evidence of correlation:
http://m.medicalxpress.com/news/2015-08-brain-signature-human-emotions.html
"Chang and his colleagues studied 182 participants who were shown negative photos (bodily injuries, acts of aggression, hate groups, car wrecks, human feces) and neutral photos. Thirty additional participants were also subjected to painful heat. Using brain imaging and machine learning techniques, the researchers identified a neural signature of negative emotion—a single neural activation pattern distributed across the entire brain that accurately predicts how negative a person will feel after viewing unpleasant images.
"This means that brain imaging has the potential to accurately uncover how someone is feeling without knowing anything about them other than their brain activity," Chang says. "This has enormous implications for improving our understanding of how emotions are generated and regulated, which have been notoriously difficult to define and measure. In addition, these new types of neural measures may prove to be important in identifying when people are having abnormal emotional responses - for example, too much or too little—which might indicate broader issues with health and mental functioning."
Unlike most previous research, the new study included a large sample size that reflects the general adult population and not just young college students; used machine learning and statistics to develop a predictive model of emotion; and, most importantly, tested participants across multiple psychological states, which allowed researchers to assess the sensitivity and specificity of their brain model.
"We were particularly surprised by how well our pattern performed in predicting the magnitude and type of aversive experience," Chang says. "As skepticism for neuroimaging grows based on over-sold and -interpreted findings and failures to replicate based on small sizes, many neuroscientists might be surprised by how well our signature performed. Another surprising finding is that our emotion brain signature using lots of people performed better at predicting how a person was feeling than their own brain data. There is an intuition that feelings are very idiosyncratic and vary across people. However, because we trained the pattern using so many participants - for example, four to 10 times the standard fMRI experiment—we were able to uncover responses that generalized beyond the training sample to new participants remarkably well.""
As noted above, seeing the brain state pattern in the 3rd person is not the same as being consciousness in the 1st person. We will never observe consciousness via the 3rd person.
Moreover, the brain wasn't "generating" consciousness. Rather, the global brain state pattern—in the context of the intentional brain system—is negative emotion.
While some intentional brain states are conscious intentional brain states, not all of them are. In my discussions with Robin Faichney, he had noted re IIT that he didn't feel there would be a physical cause of consciousness per se (I'm paraphrasing; I may have misunderstood). That is, since brain states don't cause consciousness but rather are consciousness, there is some other, non-physical mechanism that instantiates consciousness. It's possible HOT theories could provide a model.
Note: these informational patterns need not be computationally derived patterns. DST, I think, offers such a non-computational model.
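To show the kind of thing I mean by a non-computational, dynamical informational pattern, here's a rough toy in Python: two weakly coupled oscillators whose phase relation settles into a stable pattern. The simulation is of course itself computed, but the pattern being pointed at lives in the continuous dynamics rather than in any discrete, rule-following symbol manipulation. This is just my own illustration, not anything from DST proper or from Clark.

```python
import numpy as np

# Two weakly coupled oscillators (a stripped-down Kuramoto-style system).
# The "informational pattern" here is their evolving phase relation, a
# property of the continuous dynamics rather than of discrete computation.
K = 0.8                       # coupling strength (arbitrary)
omega = np.array([1.0, 1.3])  # natural frequencies (arbitrary)
theta = np.array([0.0, 2.0])  # initial phases
dt = 0.01

phase_difference = []
for _ in range(5000):
    coupling = K * np.sin(theta[::-1] - theta)  # each oscillator pulled toward the other
    theta = theta + dt * (omega + coupling)
    phase_difference.append((theta[1] - theta[0]) % (2 * np.pi))

# With coupling strong enough relative to the frequency mismatch, the phase
# difference settles toward a stable value (phase locking).
print("early phase difference:", round(phase_difference[10], 2))
print("late  phase difference:", round(phase_difference[-1], 2))
```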