smcder
@smcder (I can't seem to quote select parts of your posts, but the following speaks to some of the issues you've raised.)
Minsky - Matter, Mind and Models
This is the "craziness" I've alluded to in reference to metacognition (thinking about thinking about thinking, etc.) and what might come of it.
Marvin L. Minsky
1. Introduction
This chapter attempts to explain why people become confused by questions about the relation between mental and physical events. When a question leads to confused, inconsistent answers, this may be because the question is ultimately meaningless or at least unanswerable, but it may also be because an adequate answer requires a powerful analytical apparatus. It is the author's view that many important questions about the relation between mind and brain are of that second kind, and that some of the necessary technical and conceptual tools are becoming available as a result of work on the problems of making computer programs behave intelligently. We shall suggest a theory to explain why introspection does not give clear answers to these questions. Technical solutions to the questions will not be attempted, but there is probably some value in finding at least a clear explanation of why we are confused.
2. Knowledge and Models
If a creature can answer a question about a hypothetical experiment without actually performing it, then it has demonstrated some knowledge about the world. For, his answer to the question must be an encoded description of the behavior (inside the creature) of some sub-machine or "model" responding to an encoded description of the world situation described by the question.

We use the term "model" in the following sense: To an observer B, an object A* is a model of an object A to the extent that B can use A* to answer questions that interest him about A. The model relation is inherently ternary. Any attempt to suppress the role of the intentions of the investigator B leads to circular definitions or to ambiguities about "essential features" and the like. It is understood that B's use of a model entails the use of encodings for input and output, both for A and for A*. If A is the world, questions for A are experiments. A* is a good model of A, in B's view, to the extent that A*'s answers agree with those of A, on the whole, with respect to the questions important to B.

When a man M answers questions about the world, then (taking on ourselves the role of B) we attribute this ability to some internal mechanism W* inside M. It would be most convenient if we could discern physically within M two separate regions, W* and M-W*, such that W* "really contains the knowledge" and M-W* contains only general-purpose machinery for coding questions, decoding answers, or administering the thinking process. However, one cannot really expect to find, in an intelligent machine, a clear separation between coding and knowledge structures, either anatomically or functionally, because (for example) some "knowledge" is likely to be used in the encoding and interpreting processes. What is important for our purposes is the intuitive notion of a model, not the technical ability to delineate a model's boundaries. Indeed, part of our argument hinges on the inherent difficulty of discerning such boundaries.
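(An aside of mine, not part of Minsky's text: one way to make the ternary relation concrete is a rough Python sketch in which A* only counts as a model of A relative to the questions observer B cares about. The function name is_model_of, the toy "world", and the tolerance parameter are all my own inventions for illustration.)

```python
# Toy illustration of Minsky's ternary model relation:
# "A* is a model of A, to observer B, to the extent that B can use A*
#  to answer the questions about A that interest B."

def is_model_of(a_star, a, questions, tolerance=1.0):
    """Observer B's judgment: does A* answer B's questions the way A does?

    a_star, a -- callables mapping a question to an answer (B's encodings)
    questions -- the questions that interest B; the relation is relative to these
    tolerance -- fraction of questions on which A* and A must agree
    """
    agreements = sum(1 for q in questions if a_star(q) == a(q))
    return agreements >= tolerance * len(questions)

# The same A* can be a model of A for one observer and not for another,
# because the two observers ask different questions.
world = lambda q: {"is water wet": True, "mass of the moon (kg)": 7.3e22}[q]
crude_model = lambda q: {"is water wet": True, "mass of the moon (kg)": None}[q]

print(is_model_of(crude_model, world, ["is water wet"]))                          # True
print(is_model_of(crude_model, world, ["is water wet", "mass of the moon (kg)"])) # False
```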
3. Models of Models
Questions about things in the world are answered by making statements about the behavior of corresponding structures in one's model W* of the world. For simple mechanical, physical, or geometric matters one can imagine, as did Craik (1), some machinery that does symbolic calculation but when read through proper codings has an apparently analogue character. But what about broader questions about the nature of the world? These have to be treated (by M) not as questions to be answered by W*, but as questions to be answered by making general statements about W*. If W* contains a model M* of M, then M* can contain a model W** of W*; and, going one step further, W** may contain a model M** of M*. Indeed, this must be the case if M is to answer general questions about himself. Ordinary questions about himself, e.g., how tall he is, are answered by M*, but very broad questions about his nature, e.g., what kind of a thing he is, etc., are answered, if at all, by descriptive statements made by M** about M*.
The reader may be anxious, at this point, for more details about the relation between W* and W**. How can he tell, for example, when a question is of the kind that requires reference to W** rather than to W*? Is W** a part of W*? (Certainly W*, like everything else, is part of W.) Unfortunately, I cannot supply these details yet, and I expect serious problems in eventually clarifying them. We must envision W** as including an interpretative mechanism that can make references to W*, using it as a sort of computer-program subroutine, to a certain depth of recursion. In this sense W** must contain W*, but in another, more straightforward, sense W* can contain W**. This suggests first that the notion "contained in" is not sufficiently sophisticated to describe the kinds of relations between parts of program-like processes, and second that the intuitive notion of "model" used herein is likewise too unsophisticated to support developing the theory in technical detail. It is clear that in this area one cannot describe inter-model relationships in terms of models as simple physical substructures. An adequate analysis will need much more advanced ideas about symbolic representation of information-processing structures. ...
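(Again not Minsky's, but one crude way to picture the W*, M*, W**, M** nesting and the "depth of recursion" remark is as structures that each contain a coarser model of their container, queried only to a bounded depth. Everything here, including the Model class, the "broad" question tag, and max_depth, is a hypothetical illustration of that regress and its cutoff, not anything from the paper.)

```python
# Toy picture of Minsky's nested models: a model contains a coarser model
# of the thing that contains it, and so on, until the allowed depth of
# recursion runs out. Ordinary questions are answered at the current level;
# "broad" questions about the model itself are passed to the next model in.

class Model:
    def __init__(self, name, depth=0, max_depth=3):
        self.name = name
        # Build the next model in the chain unless the recursion budget is spent.
        self.inner = Model(name + "*", depth + 1, max_depth) if depth < max_depth else None

    def answer(self, question, level=0):
        """Answer ordinary questions here; defer 'broad' questions about this
        model to the model of it, if one exists."""
        if question == "broad" and self.inner is not None:
            return self.inner.answer(question, level + 1)
        return f"{self.name} answers at nesting level {level}"

w_star = Model("W*")                   # M's model of the world
print(w_star.answer("how tall is M"))  # handled at the first level
print(w_star.answer("broad"))          # passed along the chain of models of models
```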
Published in 1965 ... see Dreyfus' argument against Object Oriented Ontology and why the first round of AI, directed by Minsky, would (and did) fail ... then see Dreyfus again for why the second round would (and did) fail ... then look up Minsky's admission that Dreyfus was right. I've posted this in the forum, I think in Part Two, or you can Google it; let me know if you don't find it.
Nagel's paper came out in '78, for reference.