Computers can have minds

As first formulated, the Chinese Room scenario was a direct analogy of a story-understanding program created by computer scientist Roger C. Schank.

computers and the human mind

Is thinking a kind of computation? Allen Newell and Herbert Simon created Logic Theorist, a program that modeled human problem-solving methods in order to prove mathematical theorems.

On the other hand, arguments for strong AI typically describe the lowest levels of the mind in order to assert its mechanical nature.

But when closely examined, the history of their efforts is revealed to be a sort of regression, as the layer targeted for replication has moved lower and lower. In modern computers this layer is composed of transistors, minuscule electronic switches with properties corresponding to basic Boolean logic.
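A minimal sketch of the point above, with function names of my own choosing: a single switch-like gate that implements one Boolean operation can be composed, layer by layer, into arithmetic, which is how transistor circuits are stacked in a real machine.

```python
def nand(a: bool, b: bool) -> bool:
    """A single NAND gate -- realizable in hardware as a pair of transistors."""
    return not (a and b)

# Every other Boolean operation can be built from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a: bool, b: bool):
    """Add two one-bit numbers, returning (sum_bit, carry_bit)."""
    return xor(a, b), and_(a, b)

# One plus one: sum bit 0, carry bit 1.
print(half_adder(True, True))   # (False, True)
```

Nothing in the composed circuit "knows" it is doing arithmetic; the behavior emerges entirely from the wiring of identical low-level switches.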

Yes, AI may have solved the game of checkers, but this is a far cry from being able to simulate consciousness. One of the most pervasive abuses has been the purely functional description of mental processes.


If so, then current machines will come up short. To do so, you must be able to represent the problem in terms that the computer can understand, but the computer only knows what numbers and memory slots are, not titles or shelves. You can use the concepts that the computer understands to symbolize the concepts of your problem: assign each letter to a number so that the numbers sort in the same order (1 for A, 26 for Z), write a title as a list of letters represented by numbers, and represent the shelf in turn as a list of titles.

But the similarity between computers and brains isn't just superficial: at their most fundamental levels, both process data in a similar binary fashion. If a computer understands false beliefs, it may know how to induce them in people. And the makers of socialized robots go beyond believing that their mimicry makes them sufficient human companions: they often state that their creations actually possess human traits. Empathy is a necessary component of good human-computer interaction. Searle argues that the experience of consciousness can't be detected by examining the behavior of a machine, a human being, or any other animal.
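The shelf-sorting encoding described above can be sketched directly; the function name here is hypothetical, but the mapping is the one the text gives (1 for A through 26 for Z), chosen so that numeric order coincides with alphabetical order.

```python
def encode_title(title: str) -> list[int]:
    """Represent a title as a list of numbers: A -> 1, B -> 2, ..., Z -> 26.

    The mapping is order-preserving, so comparing the number lists
    gives the same result as comparing the titles alphabetically.
    """
    return [ord(ch) - ord('A') + 1 for ch in title.upper() if ch.isalpha()]

# The "shelf" is just a list of titles; the computer sees only numbers.
shelf = ["Walden", "Emma", "Dracula"]
sorted_shelf = sorted(shelf, key=encode_title)
print(sorted_shelf)   # ['Dracula', 'Emma', 'Walden']
```

The machine never manipulates titles or shelves as such, only the numeric symbols standing in for them, which is exactly the point of the passage.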

This dualism means that symbolic systems and their physical instantiations are separable in two important and mirrored ways. If the strong AI project is to be redefined as the task of duplicating the mind at a very low level, it may indeed prove possible — but the result will be something far short of the original goal of AI.

To be sure, some programs can be defined by what output they return for a particular input.

your mind is not a computer

The program must still contain some internal structures and properties. This abstraction is useful because the objects involved in the algorithm can easily be represented by symbols that describe only these relevant properties. Psychology and physics, for example, can each be used to answer a distinct set of questions about a single physical system — the brain.
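The abstraction described above can be illustrated with a sketch (the example is mine, not the author's): the algorithm below never inspects what its items "really are"; it relies only on the one relevant property, that items can be compared with `<`.

```python
def binary_search(items, target):
    """Find target in a sorted sequence using only '<' comparisons.

    Returns the index of target, or -1 if it is absent. The items'
    other properties are irrelevant and never represented.
    """
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1
        elif target < items[mid]:
            hi = mid
        else:
            return mid
    return -1

# The same symbol-level procedure works for numbers or strings alike,
# because only the ordering property is captured by the representation.
print(binary_search([2, 3, 5, 7, 11], 7))            # 3
print(binary_search(["ant", "bee", "cat"], "bee"))   # 1
```

This is the sense in which symbols "describe only the relevant properties": one procedure serves every domain whose objects carry the property it depends on.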

This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese.
