As first formulated, the Chinese Room scenario was a direct analogy of a program created by computer scientist Roger C. Schank.
Is thinking a kind of computation? In 1956, Allen Newell and Herbert Simon created the Logic Theorist, a program that modeled human problem-solving methods in order to prove mathematical theorems.
On the other hand, arguments for strong AI typically describe the lowest levels of the mind in order to assert its mechanical nature.
In modern computers this layer is composed of transistors: minuscule electronic switches whose behavior corresponds to basic Boolean logic. But when closely examined, the history of their efforts turns out to be a sort of regression, as the layer targeted for replication has moved lower and lower.
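The idea that transistors correspond to basic Boolean logic can be sketched as follows. This is an illustrative model only (the function names and the choice of NAND as the primitive are assumptions, not something the text specifies): a single switch-like primitive, layered on itself, yields the rest of Boolean logic.

```python
# Illustrative sketch: treating a transistor pair as a single
# switch-like primitive (NAND), then layering the other Boolean
# operations on top of it.

def nand(a: bool, b: bool) -> bool:
    """The primitive switch: outputs False only when both inputs are True."""
    return not (a and b)

def not_(a: bool) -> bool:
    # NOT is NAND with its inputs tied together.
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    # AND is a negated NAND.
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    # OR follows from De Morgan's law: a OR b = NOT(NOT a AND NOT b).
    return nand(not_(a), not_(b))
```

The point is the layering: each higher operation is defined purely in terms of the layer beneath it, which is the sense in which the lowest level of a computer is "mechanical."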
Yes, AI may have solved the game of checkers, but this is a far cry from being able to simulate consciousness. One of the most pervasive abuses has been the purely functional description of mental processes.
This dualism means that symbolic systems and their physical instantiations are separable in two important and mirrored ways. If the strong AI project is to be redefined as the task of duplicating the mind at a very low level, it may indeed prove possible — but the result will be something far short of the original goal of AI.
To be sure, some programs can be defined by what output they return for a particular input.
Even so, the program must still contain some internal structures and properties. This abstraction is useful because the objects involved in the algorithm can easily be represented by symbols that describe only these relevant properties. Psychology and physics, for example, can each be used to answer a distinct set of questions about a single physical system: the brain.
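The limits of a purely functional description can be made concrete with a small sketch (the two functions here are hypothetical examples of mine, not anything from the text): two programs can be indistinguishable by their input/output behavior while differing entirely in internal structure.

```python
# Two programs with identical input/output behavior but different
# internal structure. A purely functional description (what output
# is returned for which input) cannot tell them apart.

def sort_by_insertion(xs: list) -> list:
    """Builds the result step by step, maintaining a sorted prefix."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_builtin(xs: list) -> list:
    """Delegates everything to Python's built-in sort."""
    return sorted(xs)
```

Both functions satisfy the same functional specification, yet their internal structures and properties differ, which is the distinction the paragraph above is drawing.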
This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese.