Consciousness and the Paranormal — Part 5

A further extract from that paper:

"Assumption 5 is pancomputationalism: everything is computational. There are two ways to defend assumption 5. Some authors argue that everything is computational because describing something as computational is just one way of interpreting it, and everything can be interpreted that way [19, 23]. We reject this interpretational pancomputationalism because it conflates computational modeling with computational explanation. The computational theory of cognition is not limited to the claim that cognition can be described (modeled) computationally, as the weather can; it adds that cognitive phenomena have a computational explanation [28, 31, 34]. Other authors defend assumption 5 by arguing that the universe as a whole is at bottom computational [56, 57]. The latter is a working hypothesis or article of faith for those interested in seeing how familiar physical laws might emerge from a “computational” or “informational” substrate. It is not a widely accepted notion, and there is no direct evidence for it.

The physical form of pancomputationalism is not directly relevant to theories of cognition because theories of cognition attempt to find out what distinguishes cognition from other processes—not what it shares with everything else. Insofar as the theory of cognition uses computation to distinguish cognition from other processes, it needs a notion of computation that excludes at least some other processes as noncomputational (cf. [28, 31, 34]). . . . .

We agree that if cognition involves computation, then the job of neuroscience and psychology is to discover which specific computations cognition involves. But the if is important. The job of psychology and neuroscience is to find out how cognition works, regardless of whether it involves computation. The claim that brains compute was introduced in neuroscience and psychology as an empirical hypothesis to explain cognition by analogy with digital computers. Much of the empirical import of the computational theory of cognition is already eliminated by stretching the notion of computation from digital to generic (see below). Stretching the notion of computation even further, so as to embrace pancomputationalism, erases all empirical import from the claim that brains compute."
 
continuing . . .

"Here is another way to describe the problem. The view that cognition involves computation has been fiercely contested. Many psychologists and neuroscientists reject it. If we adopt an all-encompassing notion of computation, we have no way to make sense of this debate. It is utterly implausible that critics of computationalism have simply failed to notice that everything is computational. More likely, they object to what they perceive to be questionable empirical commitments of computationalism. For this reason, computation as it figures in pancomputationalism is a poor foundation for a theory of cognition. From now on, we will leave pancomputationalism behind."
 
. . . "Let us now turn to another debate sometimes conflated with the one just described, namely the debate between classicists and connectionists. By the 1970s, McCulloch and Pitts were mostly forgotten. The dominant paradigm in cognitive science was classical (or symbolic) AI, aimed at writing computer programs that simulate intelligent behavior without much concern for how brains work [9]. It was commonly assumed that digital computationalism is committed to classicism, that is, the idea that cognition is the manipulation of linguistic, or sentence-like, structures. On this view, cognition consists of performing computations on sentences with a logico-syntactic structure akin to that of natural languages but written in the language of thought [71, 72].

It was also assumed that, given digital computationalism, explaining cognition is an autonomous activity from explaining the brain, in the sense that figuring out how the brain works tells us little or nothing about how cognition works: neural descriptions and computational descriptions are at “different levels” [72, 73].

During the 1980s, connectionism reemerged as an influential approach to psychology. Most connectionists deny that cognition is based on a language of thought and affirm that a theory of cognition should be at least “inspired” by the way the brain works [17].

The resulting debate between classicists and connectionists (e.g., [10, 18]) has been somewhat confusing. Part of what’s confusing about it is that different authors employ different notions of computation, which vary in both their degree of precision and their inclusiveness (see [30] for more details). But even after we factor out differences in notions of computation, further confusions lie in the wings.

Digital computationalism and neural network theories are often described as being at odds with one another. This is because it is assumed that digital computationalism is committed to classical computation (the idea that the vehicles of digital computation are language-like structures) and to autonomy from neuroscience, two theses flatly denied by many prominent connectionists and computational neuroscientists. But many connectionists also model and explain cognition using neural networks that perform computations defined over strings of digits, so perhaps they should be counted among the digital computationalists [15–17, 19, 74–77]. To make matters worse, other connectionists and computational neuroscientists reject digital computationalism—they maintain that their neural networks, while explaining behavior, do not perform digital computations [1–3, 5, 7, 78]."
 
. . . . . "To sum up, everyone is (or should be) a connectionist or computational neuroscientist in the general sense of embracing neural computation, without thereby being committed to either strong or weak associationism. Some people are classicists, believing that, in order to explain cognition, neural networks must amount to manipulators of linguistic structures. Some people are nonclassicist (but still digital computationalist) connectionists, believing that cognition is explained by nonclassical neural network digital computation. Finally, some people are anti-digital-computationalist connectionists, believing that cognition is explained by neural network processes, but these do not amount to digital computation (e.g., because they process the wrong kind of vehicles). To find out which view is correct, in the long run, the only effective way is to study nervous systems at all levels of organization and find out how they produce behavior (Fig. 4)."
 
continuing to section on 'information':

"We now turn our attention to the notion of information. Computation and information processing are commonly assimilated in the sciences of mind. As we have seen, computation is a mongrel concept that comprises importantly different notions. The same is true of information. Once the distinctions between notions of information are introduced, we will be in a position to consider whether the standard assimilation of information processing and computation is warranted and useful for a theory of cognition.

Information plays a central role in many disciplines. In the sciences of mind, information is invoked to explain cognition and behavior (e.g., [86, 87]). In communication engineering, information is central to the design of efficient communication systems such as television, radio, telephone networks, and the internet (e.g., [39, 88, 89]). A number of biologists have suggested explaining genetic inheritance in terms of the information carried by DNA sequences (e.g., [90]; see also [91, 92]). Animal communication theorists routinely characterize nonlinguistic communication in terms of shared information [93]. Some philosophers maintain that information can provide a naturalistic grounding for the intentionality of mental states, namely their being about states of affairs [46, 47]. Finally, information plays important roles in several other disciplines, such as computer science, physics, and statistics.

To account for the different roles information plays in all these fields, more than one notion of information is required. In this paper, we distinguish between three main notions of information: Shannon’s nonsemantic notion plus two notions of semantic information. . . .

Shannon information (nonsemantic)
Semantic information
Natural (semantic) information

. . .
Broadly understood, semantic notions of information pertain to what a specific signal broadcast by an information source means. To address the semantics of a signal, it is neither necessary nor sufficient to know which other signals might have been selected instead and with what probabilities. Whereas the selection of, say, any of 25 equiprobable but distinct words will generate the same nonsemantic information, the selection of each individual word will generate different semantic information depending on what that particular word means.

Semantic and nonsemantic notions of information are both connected with the reduction of uncertainty. In the case of nonsemantic information, the uncertainty has to do with which among many possible signals is selected. In the case of semantic information, the uncertainty has to do with which among many possible states of affairs is the case.
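The paper's point about equiprobable words can be made concrete with a small sketch (Python, using an invented 25-word source; only Shannon's nonsemantic measure is computed, and the word identities are deliberately irrelevant):

```python
import math

# Hypothetical source: 25 equiprobable but distinct words. For Shannon's
# (nonsemantic) measure only the probability distribution matters, not
# what any word means.
words = [f"word{i}" for i in range(25)]
probability = {w: 1 / len(words) for w in words}

def surprisal_bits(word):
    """Nonsemantic information generated by selecting this word: -log2 p(word)."""
    return -math.log2(probability[word])

# Selecting any of the 25 words generates the same nonsemantic information,
# log2(25) ≈ 4.644 bits, whatever the word happens to mean.
values = {round(surprisal_bits(w), 6) for w in words}
print(values)  # a single value: {4.643856}
```

Nothing in this calculation distinguishes one word from another, which is exactly why a semantic story cannot be read off Shannon's measure.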

One of the foundational problems for the sciences of mind is that no uncontroversial theory of semantic information—comparable in rigor and scope to Shannon’s theory—has emerged. This explains the temptation felt in many quarters to appropriate Shannon’s theory of nonsemantic information and squeeze into it a semantic story. The temptation is in effect to understand information transmission in terms of reduction of uncertainty, use Shannon’s measures to quantify the information transmitted, and draw conclusions about the meanings of the transmitted signals.

This will not do. As we have stressed, Shannon information does not capture, nor is it intended to capture, the semantic content, or meaning, of signals. From the fact that Shannon information has been transmitted, no conclusions follow concerning what semantic information, if any, has been transmitted.

To tackle the meaning relation, we begin with Grice’s distinction between two kinds of meaning that are sometimes conflated, namely, natural and nonnatural meaning [96].
Natural meaning is exemplified by a sentence such as “those spots mean measles,” which is true—Grice claimed—just in case the patient has measles. Nonnatural meaning is exemplified by a sentence such as “those three rings on the bell (of the bus) mean that the bus is full” ([96], p. 85), which is true even if the bus is not full.

We extend the distinction between Grice’s two types of meaning to a distinction between two types of semantic information: natural information and nonnatural information. Spots carry natural information about measles by virtue of a reliable physical correlation between measles and spots. By contrast, the three rings on the bell of the bus carry nonnatural information about the bus being full by virtue of a convention. We will now consider each notion of semantic information in more detail. . ."
 
If we are to consider anything to be a substrate in the brain-body system ( BBS ), it would be the material of which the system is built. That is the usual sense of a substrate. With respect to the brain, the substrate is analogous to the board on which a computer chip is mounted; in fact, the board material is often referred to as the substrate. Information, on the other hand, is non-material. It's an abstract idea. It only has existence within the context of a system that is able to identify information, and that is not the same as simply operating on an instruction set. For example, a digital weigh scale operates on an instruction set and provides a readout, but it has no comprehension of the idea that it is providing information about weight ( another abstract concept ).

Hypothetically, an intelligent system should be able to identify information and perform various tasks with it. However, consciousness is on yet another level. For example, an intelligent system could probably be designed to detect a range of environmental conditions and operate a series of controls to maintain specified environmental tolerances, but it wouldn't necessarily be able to experience being either hot or cold. Furthermore, that experience cannot be reduced to ones and zeroes or any particular configuration of materials without losing the hotness or the coldness of the experience. So in this sense, although information can be imparted about the present state of one's consciousness, that information is not the same as consciousness itself.
 
"The most basic task of an account of natural semantic information is to specify the relation that has to occur between a source (e.g., fire) and a signal (e.g., smoke) for the signal to carry natural information about the source. Following Dretske [46], we discuss natural information in terms of correlations between event types. On the view we propose, an event token a of type A carries natural information about an event token b of type B just in case A reliably correlates with B. (Unlike [46], in addition to natural information, we posit nonnatural information; see below.) . . .
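As a rough illustration of the correlational reading, here is a sketch in Python with invented probabilities for fire and smoke. "Reliable correlation" is simplified here to a high estimated conditional probability P(fire | smoke), which is only one way of cashing out Dretske-style reliability:

```python
import random

random.seed(0)

# Hypothetical event-type tokens: does an event of type A (smoke) reliably
# correlate with an event of type B (fire)? The rates below are invented
# purely for illustration.
def generate_events(n=10_000):
    events = []
    for _ in range(n):
        fire = random.random() < 0.05           # fires are rare
        smoke = fire or random.random() < 0.01  # smoke almost always comes from fire
        events.append((smoke, fire))
    return events

def conditional_probability(events):
    """Estimate P(fire | smoke) from (smoke, fire) event tokens."""
    fire_given_smoke = [fire for smoke, fire in events if smoke]
    return sum(fire_given_smoke) / len(fire_given_smoke)

p_fire_given_smoke = conditional_probability(generate_events())
# With these assumed rates, smoke is a reliable (if imperfect) indicator,
# so on this sketch a smoke token carries natural information about fire.
print(p_fire_given_smoke)
```

Note that the physical connection does the work: if the correlation between the event types were broken, no amount of convention or interpretation would restore the natural information.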

Nonnatural (semantic) information
Cognitive scientists routinely say that cognition involves the processing of information. Sometimes, they mean that cognition involves the processing of natural information. At other times, they mean that cognition involves the processing of nonnatural information. This second notion of information is best understood as the notion of representation, where a (descriptive) representation is by definition something that can get things wrong.

We should point out that some cognitive scientists simply assimilate representation with natural semantic information, assuming in effect that what a signal represents is what it reliably correlates with. This is a weaker notion of representation than the one we endorse. Following the usage that prevails in the philosophical literature, we reserve the term “representation” for states that can get things wrong or misrepresent (for further discussion, see for instance [47]).

Bearers of natural information, we have said, “mean” states of affairs in virtue of being physically connected to them. We have provided a working account of the required connection: there must be a reliable correlation between a signal and its source. Whatever the right account may be, one thing is clear: in the absence of the appropriate physical connection, no natural information is carried.

Bearers of nonnatural information, by contrast, need not be physically connected to what they are about in any direct way. Thus, there must be an alternative process by which bearers of nonnatural information come to bear nonnatural information about things they may not reliably correlate with. A convention, as in the case of the three rings on the bell of the bus, is a clear example of what may establish a nonnatural informational link. Once the convention is established, error (misrepresentation) becomes possible . . . ."

The convention may either be explicitly stipulated, as in the rings case, or emerge spontaneously, as in the case of the nonnatural information attached to words in natural languages. But we do not wish to assume that nonnatural information is always based on convention (cf. [96]). There may be other mechanisms, such as learning or biological evolution, through which nonnatural informational links may be established. What matters for something to bear nonnatural information is that, somehow, it stands for something else relative to a signal recipient.

An important implication of our account is that semantic information of the nonnatural variety does not entail truth. On our account, false nonnatural information is a genuine kind of information, even though it is epistemically inferior to true information. The statement “water is not H2O” gets things wrong with respect to the chemical composition of water, but it does not fail to represent that water is not H2O. By the same token, the statement “water is not H2O” contains false nonnatural information to the effect that water is not H2O.
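The bus-bell convention, and the possibility of misrepresentation it opens up, can be sketched minimally (the mapping and function names below are illustrative, not from the paper):

```python
# A stipulated convention, after the paper's bus example: three rings on the
# bell mean "bus is full". The mapping is arbitrary; nothing about three
# rings physically correlates with fullness.
CONVENTION = {3: "bus is full"}

def interpret(rings):
    """Read a signal under the convention; returns None for unassigned signals."""
    return CONVENTION.get(rings)

def misrepresents(rings, bus_actually_full):
    """Once the convention exists, a signal can get things wrong: it carries
    the nonnatural information 'bus is full' even when the bus is not full."""
    return interpret(rings) == "bus is full" and not bus_actually_full

print(interpret(3))                               # bus is full
print(misrepresents(3, bus_actually_full=False))  # True
```

On the paper's account, the second call illustrates false nonnatural information: the signal still carries its conventional content, it just gets the world wrong.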

Most theorists of information have instead followed Dretske [46–48] in holding that false information, or misinformation, is not really information. This is because they draw a sharp distinction between information, understood along the lines of Grice’s natural meaning, and representation, understood along the lines of Grice’s nonnatural meaning.

The reason for drawing this distinction is that they want to use natural information to provide a naturalistic reduction of representation (or intentionality). For instance, according to some teleosemantic theories, the only kind of information carried by signals is of the natural variety, but signals come to represent what they have the function of carrying natural information about. According to theories of this sort, what accounts for how representations can get things wrong is the notion of function: representations get things wrong whenever they fail to fulfill their biological function.

But our present goal is not to naturalize intentionality. Rather, our goal is to understand the central role played by information and computation in cognitive science. If cognitive scientists used “information” only to mean natural information, we would happily follow the tradition and speak of information exclusively in its natural sense. The problem is that the notion of information as used in the special sciences often presupposes representational content. . . ."

Information processing, computation, and cognition
 
Good video, even if we've already covered all the essential issues. I liked his iceberg analogy. I had thought of using it myself, but I find the surfer analogy more appealing due to its dynamic and experiential nature. The part of icebergs that is above the water, I suspect, doesn't have much fun :). On the issue of embodiment, there are two distinct contexts. I'm not sure which you're referring to, e.g. embodied cognition or embodied imagination, but in any case, you may recall that I have often used the phrase brain-body system ( BBS ), and that is because in the context of our material selves, the body plays a crucial role in identity and appears to directly influence the mind via a chain of biological processes.

Regarding Velmans and the model he calls Reflexive Monism, he's onto some key points that I believe are perfectly valid, but I think that perhaps not all the inferences drawn from it are necessary. For example, on one level one could argue that, depending on one's model of Physicalism, Reflexive Monism and Physicalism are essentially the same thing, the only difference being that Reflexive Monism focuses on the relationship between consciousness and other physical phenomena.

With respect to your emphasis on speech production, I suggest that this is the essential point, to quote:

"Literally it is the case that I'm only conscious of what I want to say, I've only realized what I want to say, once I've actually spoken. So the actual experience follows the processing to which it most obviously relates."

The above is entirely expected and logical when considered in the context of the BBS as the source of speech. This reminds me that I should rewind a bit, because if I recall correctly, at some point when I mentioned how I create music, you suggested ( to paraphrase ) that if I create the work as a whole in my mind, then the outcome could be looked at simply as brain function, not unlike a tape recorder, and therefore consciousness isn't really necessary, or is just along for the ride. It's not quite that simple. The work doesn't just appear in a flash as a completed work.

Typically, sound samples are drawn out of physical memory and evaluated for suitability and then pieced together with other sound samples. So this particular type of creative process requires the subjective experience of sound in order to work. However once it's all done, and after enough rehearsal, all the pieces come together, and that tape recorder analogy then works. I literally become a human jukebox and don't have to think much about what I'm doing. Indeed, if I did, it would cause some serious problems. Some riffs are so fast that they can't be played while trying to consciously "think them through".

@ufology
@Soupie

Would it be possible to discuss some issues of terminology/jargon?

The way I think of the discussion, we are working "idiosyncratically" - by that I mean without consistent reference to an established body of thought - so we don't have access to a common source of word definitions/meanings and context. The individual using a particular term idiosyncratically knows exactly what they mean, but this may not come across to a reader who may have other associations with the terms used.

For example, let's say we were all phenomenologists - if there is a question about a term, we could then point back to an established body of thought and texts to help resolve what words and concepts mean. This can be very difficult even in such circumstances, of course - but when we work idiosyncratically we need to be particularly careful because the meanings aren't established and widely available.

Here's an example:

@ufology you write:
On the issue of embodiment, there are two distinct contexts. I'm not sure which you're referring to, e.g. embodied cognition or embodied imagination, but in any case, you may recall that I have often used the phrase brain-body system ( BBS ), and that is because in the context of our material selves, the body plays a crucial role in identity and appears to directly influence the mind via a chain of biological processes.

My use of the word "embodiment" was unclear, so you responded with two possible contexts: "embodied cognition" and "embodied imagination". I had to look up "embodied imagination" - what I found is that it is a form of dream work developed by Jungian analyst Robert Bosnak. As far as I can tell, that has nothing to do with the discussion, so now I would have to go back and look at my original post to see what I meant and if it matches "embodied cognition" or some other term.

A good goal, then, would be to either use everyday meanings and phrases, or to mark any questionable term with a source, or to provide context or a definition at each use, so that the terms in every post are clear. Post-by-post clarity as a goal, so that a reader doesn't have to look to multiple posts for clarity. Obviously this is an ideal. But maybe it will reduce having to go back through multiple posts to see what is being referred to - even in established bodies of thought, word meanings are contested (see the article I posted above on "information processing").

@ufology you continue:
"you may recall that I have often used the phrase brain-body system ( BBS ), and that is because in the context of our material selves, the body plays a crucial role in identity and appears to directly influence the mind via a chain of biological processes."


Two things here. First, I have seen you, @Soupie, use something like this term - but otherwise I don't find a clear definition of "brain-body system" by doing a search. You could well say the meaning is clear ... but again, this may be a case where you know exactly what you mean but it's not clear to the reader. We also have the appearance of another red flag: the acronym. You do provide context for the phrase in the sentence above: "the body plays a crucial role in identity ..." but could we just use this phrase, or something like it, to convey precise meaning rather than relying on "brain-body system" to convey a range of meanings?

@ufology this isn't an issue of vocabulary, but it's a source of confusion:

The above is entirely expected and logical when considered in the context of the BBS as the source of speech. This reminds me that I should rewind a bit, because if I recall correctly, at some point when I mentioned how I create music, you suggested ( to paraphrase ) that if I create the work as a whole in my mind, then the outcome could be looked at simply as brain function, not unlike a tape recorder, and therefore consciousness isn't really necessary, or is just along for the ride. It's not quite that simple. The work doesn't just appear in a flash as a completed work.

When I see the phrase "if I recall correctly" I prepare myself to go look for the initial instance of what's being recalled ... what makes the thread/discussion format so difficult for me is that I so often don't recall correctly. You probably do - but as in the example above, I don't, and so I still need to go back and check my memory. Looking at your paraphrase, I remember the post but not exactly; I remember a dream and a song by Bruce Cochran (and this was about the time I had a dream with music in it), and I remember some other discussion, but I can't confirm that your paraphrase gets at what I meant exactly without hunting down the reference. So if possible, provide quotes or links to things that other people said. This can be hard in the flow of a discussion, and for me it feels like it slows me down, but once confusion enters, the discussion comes to a stop anyway.

So: vocabulary, sourcing or quoting, and finally, posting in small bits - I am terrible at this, but in the discussion format it's so easy to get confused ... so instead of dealing with multiple concepts in one post, can we do one question or concept at a time? Or, what might work better: if you want to soar and post a lot of thoughts and ideas, that's great, but at the end, if you want a specific response, post a single question or concept you want addressed?

I'm open to all feedback on this and any other ideas to make the discussion more productive.
 
I hope my first post doesn't come across as overly negative, but this thread really is a bunch of pretentious, headache-inducing twaddle.

I would argue that all this pseudo-scientific gibberish was originally designed and continues to be perpetuated SOLELY to over-complicate and obscure the direct apprehension and self-understanding of consciousness itself.
 
I hope my first post doesn't come across as overly negative, but this thread really is a bunch of pretentious, headache-inducing twaddle.

I would argue that all this pseudo-scientific gibberish was originally designed and continues to be perpetuated SOLELY to over-complicate and obscure the direct apprehension and self-understanding of consciousness itself.
Like any good philosophical discussion this group is really just getting warmed up. I'm sure soon enough the paranormal part of the discussion will unfold, as we know that at the heart of the paranormal & UFO discussion is consciousness studies. It is necessary work. And it's not all twaddle, just a different and more specific & self-involved language.
 
And it's not all twaddle, just a different and more specific & self-involved language.

Every freshly uttered word of this 'language' strikes me as an increasing degree of blind fragmentation leading away from consciousness, like: "Leaving your elephant at home and going to look for its footprints in the forest."
 
Every freshly uttered word of this 'language' strikes me as an increasing degree of blind fragmentation leading away from consciousness, like: "Leaving your elephant at home and going to look for its footprints in the forest."
@TheBitterOne have you actually got something to say... or is your mind an empty void that just sucks?
 
!?! Pharoah, that comment is out of line and not at all like you. I understand exactly what she is saying and I'm quite sure others here do as well.
 
!?! Pharoah, that comment is out of line and not at all like you. I understand exactly what she is saying and I'm quite sure others here do as well.
I apologise to him/her and everyone... I thought he/she was being a plonker, but I must have got the wrong idea.
 
Every freshly uttered word of this 'language' strikes me as an increasing degree of blind fragmentation leading away from consciousness, like: "Leaving your elephant at home and going to look for its footprints in the forest."
Because this sentence doesn't ooze with pretentiousness and doublespeak. Please.

I believe all consistent participants in this discussion have been sincere. Have we often presented contradictory and confusing explanations of our approach to and understanding of the origin and nature of consciousness? Yes.

Is that surprising considering there is no consensus regarding the origin and nature of consciousness among the world's leading philosophers and scientists? No.
 
Would it be possible to discuss some issues of terminology/jargon?
The way I think of the discussion, we are working "idiosyncratically" - by that I mean without consistent reference to an established body of thought - so we don't have access to a common source of word definitions/meanings and context. The individual using a particular term idiosyncratically knows exactly what they mean, but this may not come across to a reader who may have other associations with the terms used.

I think that's exactly right, Steve. The paper you linked last night goes to the same point, the need to define terms used in what we write here. The two authors of that paper make it clear how much divergence there is in the positions taken by various neuroscientists in the ways they understand (or not) what 'computation' is, and additionally in the current ambiguity of the term 'information' in neuroscience.

But the language problem goes even deeper, significantly deeper, given the lack of shared understanding of what is meant by the phenomenology of consciousness. Varela, Thompson, Gallagher, Zahavi, and others have gone to great lengths to educate neuroscientists concerning the 'inescapability of phenomenology', the title of a recent paper by Taylor Carman, linked again here:

On the Inescapability of Phenomenology | Taylor Carman - Academia.edu

Another link you posted yesterday, to a lecture by Max Velmans on language, speech, and speaking, is also highly relevant along with the many papers we've read by Velmans. I wish the second edition of his Understanding Consciousness were available online [his publisher, Routledge, seems to want to sell some bound books before allowing that], but Velmans has also published many papers on his theory of reflexive monism that are available online.

Another scholar I've cited in the past, Richard Lanigan, will also be most helpful in clarifying the problems of ambiguity in language itself if we focus on this problem in this discussion.

I think the problem of ambiguity in the terminology used by neuroscientists and information theorists can only be overcome if more of them read papers and books by phenomenologists and neurophenomenologists. Only accumulating pressure placed upon them by those of their colleagues who have read and understood phenomenology is likely to make a difference in time.
 
"The most basic task of an account of natural semantic information is to specify the relation that has to occur between a source (e.g., fire) and a signal (e.g., smoke) for the signal to carry natural information about the source. Following Dretske [46], we discuss natural information in terms of correlations between event types. On the view we propose, an event token a of type A carries natural information about an event token b of type B just in case A reliably correlates with B. (Unlike [46], in addition to natural information, we posit nonnatural information; see below.) . . .

Nonnatural (semantic) information
Cognitive scientists routinely say that cognition involves the processing of information. Sometimes, they mean that cognition involves the processing of natural information. At other times, they mean that cognition involves the processing of nonnatural information. This second notion of information is best understood as the notion of representation, where a (descriptive) representation is by definition something that can get things wrong.

We should point out that some cognitive scientists simply assimilate representation with natural semantic information, assuming in effect that what a signal represents is what it reliably correlates with. This is a weaker notion of representation than the one we endorse. Following the usage that prevails in the philosophical literature, we reserve the term “representation” for states that can get things wrong or misrepresent (for further discussion, see for instance [47]).

Bearers of natural information, we have said, “mean” states of affairs in virtue of being physically connected to them. We have provided a working account of the required connection: there must be a reliable correlation between a signal and its source. Whatever the right account may be, one thing is clear: in the absence of the appropriate physical connection, no natural information is carried.

Bearers of nonnatural information, by contrast, need not be physically connected to what they are about in any direct way. Thus, there must be an alternative process by which bearers of nonnatural information come to bear nonnatural information about things they may not reliably correlate with. A convention, as in the case of the three rings on the bell of the bus, is a clear example of what may establish a nonnatural informational link. Once the convention is established, error (misrepresentation) becomes possible . . . .".

The convention may either be explicitly stipulated, as in the rings case, or emerge spontaneously, as in the case of the nonnatural information attached to words in natural languages. But we do not wish to assume that nonnatural information is always based on convention (cf. [96]). There may be other mechanisms, such as learning or biological evolution, through which nonnatural informational links may be established. What matters for something to bear nonnatural information is that, somehow, it stands for something else relative to a signal recipient.
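The contrast between the two kinds of informational link can also be sketched in code. Below is a toy version of the paper's bus-bell case: the ring-count convention is stipulated (the particular mapping is my invention for illustration), and once a convention fixes a content, the signal can misrepresent, which is exactly what a merely correlational signal cannot do.

```python
# Toy illustration of a conventional (nonnatural) informational link,
# loosely modeled on the paper's bus-bell example. The specific mapping
# below is a stipulated convention invented for illustration.

CONVENTION = {1: "stop at next corner", 2: "continue", 3: "bus is full"}

def interpret(rings):
    """The signal's content under the convention."""
    return CONVENTION.get(rings, "no conventional meaning")

def misrepresents(rings, actual_state):
    """Once the convention fixes a content, error becomes possible:
    the signal misrepresents when its content mismatches the facts."""
    return interpret(rings) != actual_state

# Three rings conventionally mean the bus is full...
print(interpret(3))                           # bus is full
# ...but ringing three times on a half-empty bus yields FALSE
# nonnatural information: misrepresentation.
print(misrepresents(3, "bus is half empty"))  # True
```

The design point matches the quoted passage: the mapping needs no physical connection between rings and fullness, and precisely because it is stipulated rather than correlational, a token of the signal can occur when the represented state does not obtain.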

An important implication of our account is that semantic information of the nonnatural variety does not entail truth. On our account, false nonnatural information is a genuine kind of information, even though it is epistemically inferior to true information. The statement “water is not H2O” gets things wrong with respect to the chemical composition of water, but it does not fail to represent that water is not H2O. By the same token, the statement “water is not H2O” contains false nonnatural information to the effect that water is not H2O.

Most theorists of information have instead followed Dretske [46, 48] in holding that false information, or misinformation, is not really information. This is because they draw a sharp distinction between information, understood along the lines of Grice’s natural meaning, and representation, understood along the lines of Grice’s nonnatural meaning.

The reason for drawing this distinction is that they want to use natural information to provide a naturalistic reduction of representation (or intentionality). For instance, according to some teleosemantic theories, the only kind of information carried by signals is of the natural variety, but signals come to represent what they have the function of carrying natural information about. According to theories of this sort, what accounts for how representations can get things wrong is the notion of function: representations get things wrong whenever they fail to fulfill their biological function.

But our present goal is not to naturalize intentionality. Rather, our goal is to understand the central role played by information and computation in cognitive science. If cognitive scientists used “information” only to mean natural information, we would happily follow the tradition and speak of information exclusively in its natural sense. The problem is that the notion of information as used in the special sciences often presupposes representational content. . . ."


The last part of the neuroscience paper that Steve linked last night, and that I quoted from, appeared in this post. There I provided a link to the paper itself so that others could read for themselves its discussion of 'information' of various types, some semantic and some not, and discern the further distinctions it draws between 'natural' and 'nonnatural' semantic information, with which conscious individuals must attempt to think. Here again is the link to the whole paper, where you can scroll down to that discussion:

Information processing, computation, and cognition
 
Carman, On the Inescapability of Phenomenology

"Language can be misleading, however, if regarded as a direct and literal reflection of the cognitive realities underlying it, and we should no more expect intentional content to be carved up in discrete sentence-like chunks than we should expect landscapes to look like road maps. Rather, as Dennett puts it, 'the multidimensional complexities of the underlying processes are projected through linguistic behavior, which creates an appearance of definiteness and precision, thanks to the discreteness of words’. The reality of intentional content lies in ‘the brute existence of pattern’, albeit patterns of behavior, indeed patterns we could in principle dismiss as less real than the underlying physics, provided the physical stance offered greater explanatory and predictive power than the intentional stance."
 