Computational Mind

The purpose of this paper is to present John Searle's Chinese room argument, which challenges the notions of the computational paradigm, specifically the claim that computational systems can possess intentionality. I will then outline two of the commentaries that followed: the first by Bruce Bridgeman, who opposes Searle and uses a super robot to exemplify his point, and the second by John Eccles, who agrees with Searle in general but raises a few objections to his definitions and comparisons. My own argument will take a minimalist computational approach, delineating understanding and its importance to the concepts of the computational paradigm.

Searle's argument delineates what he believes to be the invalidity of the computational paradigm's and artificial intelligence's (AI) view of the human mind. He first distinguishes between strong and weak AI. Searle finds weak AI a perfectly acceptable line of investigation, in that it uses the computer as a powerful tool for studying the mind. Weak AI makes no claims about how the mind actually operates; the computer is simply another psychological, investigative instrument. In contrast, strong AI holds that a computer can be created such that it actually is a mind. We must first describe what this entails. To be a mind, the computer must not only understand but have cognitive states. Moreover, the programs by which the computer operates are the focus of the computational paradigm, and these programs are taken to explain the mental states. Searle's argument is directed against the claims of Schank and other computationalists, the authors of programs such as SHRDLU and ELIZA, that their programs can (1) be ascribed understanding and (2) explain human understanding. To explicate his view, Searle uses the example of the Chinese room.

Understanding the notion of the Chinese room requires a bit of explanation. Imagine you are a monolingual English speaker alone in a room, armed with a pencil, and the only things on the walls are a series of instructions and rules. There is a door in the room, and on the other side is a Chinese speaker. The Chinese speaker slides cards under the door on which Chinese symbols and sentences are written. The instructions on the walls allow you to respond appropriately to each symbol, well enough that the Chinese speaker is fooled into thinking you have a formidable grasp of Chinese. Now imagine that instead of a Chinese speaker outside the room there is an English speaker, and the same kinds of messages are passed. You would still respond appropriately, convincing the other that you are a native English speaker, which of course you are. Searle holds that the two situations differ fundamentally: in the first, you are "manipulating uninterpreted formal symbols," which is simply an instantiation of a computer program; in the second, you actually understand the English being given to you.
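To make Searle's picture concrete, here is a minimal sketch in Python, assuming a toy rule table stands in for the instructions on the walls; the phrases and the table itself are hypothetical illustrations of mine, not part of Searle's paper.

```python
# A toy Chinese Room: the "room" maps each input card to a prescribed reply
# using nothing but a rule table. No meaning is ever consulted.

RULES = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",      # "What is your name?" -> "I have no name."
}

def room_reply(card: str) -> str:
    """Match the shape of the symbols against the table and copy the answer."""
    return RULES.get(card, "对不起，我不明白。")  # default: "Sorry, I don't understand."

# From outside the door the replies look fluent; inside there is only
# pattern matching on uninterpreted formal symbols.
print(room_reply("你好吗？"))
```

However large the rule table grows, the room is still doing no more than this, which is precisely Searle's charge against programs said to "understand."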

In response to the computationalists' first claim, Searle states that although you respond appropriately, you in no way understand the Chinese you are given and respond with. As for the second claim, he counters that the computer is simply "functioning and there is no understanding," and with understanding comes meaning. He concedes that although these two claims do not capture the whole of understanding, they may provide part of it; the rest must be left to empirical inquiry. He sees no compelling reason to suppose that humans operate under the restrictions of formal rules at all.

Searle then outlines various replies and his own responses.

(1) The Systems Reply: Although the person in the room does not understand the Chinese, it is the system as a whole that does.

Response: a.) Simply have the person in the room represent all parts of the system. The person can memorize all the rules and visualize their execution. b.) This reply merely demonstrates that there are two types of machines that can pass the Turing test: one that understands and one that merely processes. c.) If the systems reply is correct, then anything that takes input, runs it through a program, and supplies output is cognitive. Take the eye, for instance: it receives photons, processes them, and produces early vision, yet we would not ascribe cognitive processes to it.

(2) The Robot Reply: Take a computer and give it eyes, ears, legs, and arms: essentially all the perceptual apparatus of a human. This robot would then have understanding and mental states.

Response: a.) This reply in and of itself concedes that cognition is not merely formal symbol manipulation. b.) The robot has no intentional states; it simply operates according to the apparatus it has been given.

(3) The Brain Simulator: Create a system of neurons that fires exactly like the human brain. If this system does not understand, then neither does the native Chinese speaker.

Response: This system only simulates formal structure; it fails to capture what matters most: causal properties and intentional states.

(4) The Combination Reply: Create a robot combining the notions of replies one through three, with a brain-shaped computer, all the necessary synapses, and all the perceptual abilities of a human. This would be a unified system that could be described as intentional.

Response: Ascribing intentionality to this robot in no way refers to the actual programs of the computer, which AI claims are a sufficient condition for intentionality. Moreover, once we know how the robot arrives at its conclusions, we can no longer credit it with intentionality.

The comparison of mind to brain as program to hardware is what Searle feels falls short of reality. Computers can be given the ability to realize certain functions, but what they lack is intentionality. These intentional states are important because of their content, not because of the formal operations of a program. In addition, mental states are a result and product of operations within the brain, but for the computer the programs are independent and separate; the computer itself is not necessary. To have mental capacities, computational processes are not enough: causal powers giving rise to intentionality are a necessity. Moreover, in examining the human mind we are not merely discussing the sequences in which neural synapses fire, but also the properties inherent in those sequences. Essentially, computer programs in Searle's eyes lack semantics and consequently reference and meaning; they are purely syntactic. It is this distinction that makes intentionality impossible for them. Likewise, when cognitive science speaks, as it so often does, of information processing, there is no place within the processing for intentionality; there must be something more.

Most importantly, computational theorists create a dualism by isolating the mind from the brain and placing all the importance on programs. This assumption of dualism is necessary to continue within the computational paradigm, and what follows from it is that programs can be instantiated on any type of machine. Searle states that you simply cannot isolate off the brain. He uses the example that a computer can simulate the production of milk and sugar, but in the end we have no milk and sugar. As Searle puts it, "intentionality is … a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as … any other biological phenomena."

In response to Searle's article, Bruce Bridgeman of the University of California, Santa Cruz wrote a commentary in support of the computational theory, offering a more extensive version of the Robot Reply. Simply supply the robot with more information. For example, accompany its visual fields with spatial-processing programs; its auditory systems would need temporal-processing programs. Program the robot to avoid pain and other harmful stimuli and to embrace necessities. All of this information can be built into a database, much like human memory, so that the robot is able to create representations and eventually learn how to interact with the environment. In what way would this differ from humans, if human intentionality is not that different from machine states? Bridgeman then voices three problems that he feels demonstrate the invalidity of Searle's argument: 1) The human brain only receives and gives out strings of input and output. Apart from this, the brain is deaf, dumb, and blind; it is only a function of electrical impulses (Bridgeman chooses to ignore hormonal levels and their interaction with the brain). 2) Insofar as the brain has things like genetic information and experience that enrich its database, these two aspects do not bring intentionality with them and therefore do not challenge the computational argument. The brain must be characterized by neuronal properties alone; to go beyond this is a form of dualism. 3) Searle fails to provide a suitable criterion for how far intentionality extends. He is willing to grant intentionality to certain animals such as the ape, but at what juncture does this stop? With these three points, according to Bridgeman, "we are left with a human brain that has an intention-free, genetically determined structure, on which are superimposed the results of storms of tiny nerve signals." He challenges Searle's use of mathematics to demonstrate that humans understand something machines do not: Bridgeman claims that neither he nor any other human understands numbers; we merely apply a system of rules learned in childhood, the same basic system of rules a machine makes use of. Bridgeman goes so far as to suggest that intentionality is a "cognitive illusion" and that "consciousness is a neurological system like any other, with functions such as the long-term direction of behavior (intentionality?), access to long-term memories, and others … that make it … a processor of biologically useful information." Perhaps the focus should not be that humans and machines are hopelessly different, but that "programs, being designed without extensive evolution, have more restricted goals and motivations." This condition, however, does not rule out computers "making plans and achieving goals."
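Bridgeman's picture of the brain as a device that turns input strings into output strings can be sketched in a few lines of Python. The sketch below is purely illustrative; the class, its fields, and the stimuli are hypothetical names of mine, not Bridgeman's.

```python
# A minimal caricature of Bridgeman's super robot: strings of input come in,
# strings of output go out, and a growing database plays the role of memory.

from typing import List

class SuperRobot:
    def __init__(self) -> None:
        self.memory: List[str] = []  # the database that stands in for experience

    def step(self, visual: str, auditory: str) -> str:
        """One perception-action cycle, as pure string processing."""
        percept = f"see:{visual}|hear:{auditory}"  # crude spatial + temporal processing
        self.memory.append(percept)                # enrich the database, as experience would
        if "harmful" in visual:                    # programmed to avoid harmful stimuli
            return "withdraw"
        return "approach"                          # and to embrace necessities

robot = SuperRobot()
print(robot.step("harmful:flame", "crackling"))    # -> withdraw
```

On Bridgeman's view, nothing in principle separates this loop, scaled up enormously, from what the brain does with its storms of nerve signals.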

John Eccles of Switzerland also provides a commentary on Searle's argument, with which he has a few basic disagreements but in which he also finds a wealth of accurate observations. As for Searle's proposition that intentionality is a product of causal features of the brain, Eccles, being a dualist-interactionist, holds that "intentionality is a property of the self-conscious mind, the brain being used as an instrument in the realization of intentions." Eccles would support Searle completely if intentionality were forsaken as a property of the brain; the result, he thinks, would be an appropriately strong and rational argument against artificial intelligence. He feels the intrinsic flaw in all of AI is demonstrated by Premack's experiments with the chimpanzee Sarah, who was taught to use symbols representing human language. Premack felt that Sarah was learning language at a simplistic level. The same system was later taught to high-school students by Lenneberg, an experimenter who challenged this notion; although the students' error rate was lower, when asked what they were doing they could not identify the task as a symbolic representation of English. In either case, then, no understanding was being produced; each was, in other words, a version of the Chinese room. In Eccles' own words, the accomplishments of such a program are "no more than a triumph for the computer designer in simulation." The only other problem Eccles has with Searle's argument concerns Searle's handling of the analogy that mind is to brain as program is to hardware: he especially criticizes Searle's statement that the mind is a product of the brain, arising from its biological processes. As a dualist, Eccles cannot accept this. Overall, Eccles feels it was "high time that strong AI was discredited."

Much like Searle, I myself have found the computational paradigm to be fraught with overestimations and unqualified claims. I do not embrace all of Searle's dogmas; I feel his argument and rebuttals fail to address certain important issues, such as: what exactly are the appropriate causal powers of which he speaks so often? At no point does Searle tell the reader precisely what this is supposed to mean. Indeed, the failure to define key notions and terminology properly is a trend throughout cognitive science: the very concepts of intentionality and understanding remain vague and poorly specified. The question, as I have gathered it here, is whether computers have the ability, or the possibility, to understand in the sense of human cognition. First, then, we must settle on an exact definition of understanding. It has been suggested to me by peers and through discussion that understanding must consist of three points: 1) symbols that represent concepts and items of the environment; 2) a means of referring these symbols to the things they represent; and 3) rules for manipulating and combining symbols in new and complex ways. All three of these notions I can accept as pertinent and necessary to a discussion of understanding, but I must confess I find them still short of capturing it.

To expound on this definition, I must turn to what I feel is the strongest of the replies, the Robot Reply, which I also found to be the target of the weakest of Searle's counter-replies; more specifically, I want to discuss the version of the Robot Reply expressed by Bridgeman. In terms of the three points of understanding, I feel this super robot satisfies them, but, as I mentioned, I do not find this sufficient for understanding. Take, for example, the idea of a banana. Someone says "banana" to you, and you instantly refer to it in some manner. Perhaps you think of its sweet taste, or that it is rich in potassium; maybe a visual of a yellow, oblong, curving shape speckled with brown spots comes to you, or perhaps the memory of some bad experience you once had with a banana. The point is that in some way you have referred to the banana. Now the super robot, with all its appendages and perceptual achievements, will also be able to refer in some manner to the banana. The question is, how? Well, Joi, you might respond, perhaps we can equip the super robot with some kind of random generator that picks one form of reference or another. Aha, I will say, this is just my point. Our brain does not simply receive input strings, process them, and output strings; there is a very specific and nonrandom association going on, based on the motivations and inclinations of that moment. In other words, it is directly influenced by those hormonal levels which Bridgeman is so eager to disregard. For instance, I may think, "yum, a banana tastes very good," because I am hungry right then. At another moment I might call up a visual representation of the banana, because I am painting a still life and a banana will do well for my composition. My fourth point, then, is that understanding is hormonally and motivationally specific, changing perhaps even from moment to moment. In summary, I feel computational understanding can be achieved at a secondary level, but the primary motivations are lacking.
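The contrast can be made concrete with a small sketch, under my own assumptions: the associations, state names, and functions below are hypothetical illustrations of the fourth point, not anyone's published model.

```python
# Which association "banana" evokes: a random draw (the patched super robot)
# versus selection by the current motivational state (the fourth point).

import random

ASSOCIATIONS = {
    "hungry":   "yum, a banana tastes very good",
    "painting": "a yellow, oblong, curving shape speckled with brown spots",
    "default":  "rich in potassium",
}

def refer_randomly(word: str) -> str:
    # The random-generator patch: any stored reference, with no regard
    # for what the agent currently wants or feels.
    return random.choice(list(ASSOCIATIONS.values()))

def refer_by_motivation(word: str, state: str) -> str:
    # Reference selected by the moment's motivations (and, in humans,
    # hormonal levels): specific and nonrandom.
    return ASSOCIATIONS.get(state, ASSOCIATIONS["default"])

print(refer_by_motivation("banana", "hungry"))    # taste association
print(refer_by_motivation("banana", "painting"))  # visual association
```

Both functions satisfy the three points of understanding equally well; only the second reflects the motivational specificity I am claiming understanding requires.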

