Individual Understanding Essay, Research Paper
Individual Understanding
I agree with functionalists, specifically the strong Artificial Intelligence (AI) camp, concerning the concept of understanding. While John Searle poses a strong non-functionalist case in his "Chinese Room" argument, I find that his definition of "to understand" falls short and hampers his point. I criticize his defense that understanding rests on a standardized knowledge of meaning, but not before outlining the general background of the issue.
Functionalists define thought and mental states in terms of input and output. They claim that what we see, hear, smell, taste, and touch (input) creates a mental state or belief, and that particular mental state in turn creates our reaction (output). If I see it's raining outside, I believe that if I go outside I will get wet, and therefore I take an umbrella with me. The functionalists define a mental state strictly through its cause and effect relationships, through its function.
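The functionalist picture can be put in computational terms. Below is a minimal sketch of the rain-and-umbrella example as an input-state-output pipeline; the function names and the single rule are my own illustrative assumptions, not part of any functionalist's formal account.

```python
# A toy rendering of the functionalist claim: a mental state is defined
# entirely by what causes it (input) and what it in turn causes (output).

def form_belief(perception: str) -> str:
    """Input stage: a perception produces a belief (a mental state)."""
    if perception == "it is raining outside":
        return "if I go outside, I will get wet"
    return "no relevant belief"

def act_on_belief(belief: str) -> str:
    """Output stage: the belief produces a reaction."""
    if belief == "if I go outside, I will get wet":
        return "take an umbrella"
    return "do nothing in particular"

perception = "it is raining outside"
belief = form_belief(perception)   # the mental state, picked out by its causal role
action = act_on_belief(belief)     # the state's functional effect
print(f"{perception} -> {belief} -> {action}")
```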
This thinking leads to the conclusion that the human brain is little more than a big, complex computer. All we humans do is take input, process it, and accordingly create output, just like a computer. In fact, functionalists who support strong AI go so far as to say that an appropriately programmed computer actually has all the same mental states and capabilities as a human. In "Minds, Brains, and Programs," John Searle outlines this argument:
"It is a characteristic of human beings' story understanding capacity that they can answer questions about [a] story even though the information they give was never explicitly stated in the story. . . . [Strong AI claims that m]achines can similarly answer questions about [stories] in this fashion. . . . Partisans of strong AI claim that in this question and answer sequence the machine is not only simulating a human ability but also (1) that the machine can literally be said to understand the story . . . and (2) that what the machine and its program do explains the human ability to understand the story and answer questions about it" (354).
While strong AI claims that a machine can understand just as a human understands, Searle himself disagrees. He claims that a strictly input-output system, such as a computer, cannot understand anything, nor does it explain humans' ability to understand. In criticizing strong AI, Searle creates his famous "Chinese Room" argument: suppose that Searle is locked in a room with a large batch of Chinese writing. Searle knows absolutely no Chinese, but he understands English fluently. For Searle, "Chinese writing is just so many meaningless squiggles" (355). Then someone slips another set of Chinese writing under the door, along with an English rulebook. The rulebook shows Searle how to simply correlate one Chinese symbol with another, identifying them only by shape and not by meaning. Searle then strings together his meaningless Chinese symbols according to the English rulebook and slips his "writing" under the door. More Chinese symbols come in, and in response Searle simply pieces new ones together and sends them back out.
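Searle's procedure inside the room is, in effect, nothing more than shape-based lookup. The sketch below is a caricature of that procedure under my own assumptions: the rulebook is a made-up table of symbol pairs, and the Chinese strings are placeholders rather than real questions and answers; the point is only that no step involves meaning.

```python
# A caricature of the Chinese Room: the rulebook pairs incoming symbol
# strings with outgoing symbol strings purely by shape. No part of the
# program represents what any symbol means.

# Hypothetical rulebook: keys stand in for incoming Chinese "questions,"
# values for the prescribed Chinese "answers." The program treats both as
# opaque shapes.
RULEBOOK = {
    "符号一": "符号二",
    "符号三": "符号四",
}

def room(incoming: str) -> str:
    """Match the incoming squiggles against the rulebook and return the
    prescribed squoggles; nothing is translated or interpreted."""
    return RULEBOOK.get(incoming, "符号二")  # fall back to some well-formed reply

# Slips of paper come under the door; replies go back out.
for question in ("符号一", "符号三"):
    print(question, "->", room(question))
```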
The question at hand: does Searle understand Chinese? If he becomes good enough at piecing the symbols together, a native Chinese speaker outside the room would say yes, Searle does understand Chinese. Chinese questions were sent in, and flawless answers came back out. There was input (the Chinese questions), obvious processing (Searle's matching of symbols according to the rulebook), and appropriate output (Searle's pieced-together responses), just as strong AI proposes. But is this understanding?
Searle claims it is not. He argues that locked in the room, he certainly does not understand Chinese. "I have inputs and outputs that are indistinguishable from those of [a] native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reason," Searle claims, "[strong AI's] computer understands nothing of any stories . . ." He goes on to argue that with English sentences, he knows what they mean, and therefore understands them, but with the Chinese symbols, he knows nothing of their meaning and therefore does not understand them (357).
One of the many functionalist responses to Searle's Chinese Room argument claims that while "Searle may not understand Chinese, the room does." This view concedes that, in looking at Searle as an independent individual, he obviously does not understand the meaning of his "writing." However, this response also claims that Searle is not really independent but rather a piece of a larger whole, and that in looking at the room as a whole system, there actually does exist an understanding of the Chinese writing.
Inside the room exist three things: Searle, an English rulebook, and a big batch of Chinese writing symbols. While Searle, one of the three, may not individually understand Chinese, the functioning of the three together creates an understanding. The symbols already exist but are not structured in any particular order; they need direction to create meaning. The English rulebook gives that direction but cannot execute its own instructions; it needs a processing unit. Searle can follow the rulebook's instructions and give proper order to the Chinese symbols. As the third and final "processor piece," he may not understand the symbols nor why he places one after another, but an understanding exists in the working of the three pieces together. According to certain functionalists, therefore, the room, as a whole system composed of smaller parts, understands Chinese.
Searle is not convinced. His response, he claims, "is simple: Let the individual [inside the room] internalize all of these elements of the system." Let him memorize all the symbols, of course not by their meaning but by their shapes only. Let him memorize the rules of the English rulebook and let him, instead of physically piecing symbols together, do all the work in his head. "We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him." Give the man all the elements of the system, and he still doesn't understand Chinese, claims Searle. Therefore, the system never understood Chinese in the first place.
Searle goes on, however, to address the meaning of the verb "to understand." If the man did internalize the entire system and was independently capable of such input-process-output action, some functionalists would claim that he in fact did understand Chinese. Searle refuses this, claiming that then many different levels, or subsystems, of understanding must exist. "The subsystem that understands English knows that the stories are about restaurants and eating hamburgers . . . . But the Chinese subsystem knows none of this. Whereas the English subsystem knows that 'hamburgers' refers to hamburgers, the Chinese subsystem knows only that 'squiggle squiggle' is followed by 'squoggle squoggle'" (359). The difference between the man's understanding of English and his understanding of Chinese is that in English he actually knows the meaning of the writing, whereas in Chinese he does not. It is knowledge of meaning, therefore, that defines Searle's "understanding."
Without this knowledge of meaning, Searle continues, true understanding does not really exist. "[I]f we accept the systems reply, then it is hard to see how we avoid saying that stomach, heart, liver, and so on are all understanding subsystems . . ." (360). Searle argues that if we only need an input, a process, and an output to attribute understanding, many objects would be said to have understanding. The stomach certainly has input (undigested food), a process (chemical breakdown and the numerous processes it includes), and an output (sugar to the blood, protein to the muscles, waste to the anus, etc.). But has the stomach ever been said to understand anything at all?
Searle does not accept that an understanding exists in a strictly input-process-output system. Clearly, he argues, understanding must come from a knowledge of the meaning of input and output. The Chinese room system does not at any point display a knowledge of the meaning of the symbols it works with, and so it accordingly does not understand the Chinese language.
I find the functionalists to be more convincing, although I do not agree with them entirely. I do believe that our brains are similar to highly complex computers. Our senses provide us with input, and this input is stored as memory. As we live our lives, we constantly record input; we note and remember everything we experience. I put my hand in a fire and soon feel intense pain; I record the experience through my sensory perception of it. When we come across a specific experience again, we recall the old data. I see another fire, recall the old input of touching fire and burning myself, remember not being happy with that, and so my new output is to walk away from the fire. My brain functions as a constant, complex computer, always recording input and consequently creating new output.
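My "recording computer" picture can be sketched in the same spirit. The snippet below is only a toy under my own assumptions: the memory structure and the fire example are illustrative, not a claim about how memory is actually implemented.

```python
# A toy version of the "recording computer" model: experiences are stored
# together with their outcomes and reactions, and a repeated situation is
# answered by recalling the reaction that was recorded for it.

memory = []  # list of (situation, outcome, reaction) records

def record(situation: str, outcome: str, reaction: str) -> None:
    """Store a new experience as raw input plus the reaction it produced."""
    memory.append((situation, outcome, reaction))

def recall(situation: str) -> str:
    """Reuse the reaction recorded for a previously encountered situation."""
    for past_situation, _outcome, reaction in memory:
        if past_situation == situation:
            return reaction
    return "no prior experience: act, then record the result"

record("touching fire", "intense pain", "walk away from the fire")
print(recall("touching fire"))   # -> walk away from the fire
```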
According to my model, would I say Searle's Chinese room demonstrates understanding? My thoughts fall to his definition of the verb "to understand." I do not agree with him that to understand something we must know the meaning of it, at least not in the way he uses the phrase "to know the meaning." With my idea of how the brain works solely as a big, recording computer, to know the meaning of something is only to have had experience with it. The man in the Chinese room understands English because he uses it every day, knows what object the word "hamburger" refers to, and has memory of eating a hamburger. In my view, he also understands the Chinese symbols, just not as extensively. He understands the "squiggle" to mean that a "squoggle" must follow. Granted, that is not an exciting knowledge of what Chinese symbols mean, nor is it the same knowledge that the native Chinese speaker has outside the room, but it is still a knowledge of meaning. No matter how limited in scope, the man does understand Chinese; he just doesn't understand it in the traditional way. Without question, he has had experience, albeit brief, with Chinese, and he can produce output. While his understanding of English is deep and flavored, his understanding of Chinese is thin and formal. Although not in the same way, he certainly understands both.
Searle differs with the functionalists in defining "understanding." His main argument rests on the idea that one cannot understand without knowledge of meaning, but he fails to define what proper "knowledge of meaning" is. I do not agree with him that we need to know the traditional, standard meaning of Chinese symbols in order to understand them. The functionalist view allows for individual interpretation, for individual creation of meaning. Granted, this provides a very open and broad definition of "to understand"; understanding soon becomes a question not of whether a system understands, but of how and how much it understands. As fresh as that may seem, if that is how humans function, we must recognize it as such.