Fodor, Johnson-Laird, and Crick95 think that conscious reasoning follows a sequential logic, whereas the rustling of imagination96 and the nonconscious mechanisms function mostly in a modular and parallel fashion. This type of model implies that the organism can generate independent responses to each stimulus it perceives at a given moment.
When one speaks of modular analysis, the term parallel is not used in the same way as when one speaks of the parallelism of Descartes, Spinoza, and Leibniz. Even if some relationship exists between these two uses of the term, it is important not to confuse them.
The limits of a mind functioning only with parallel modular procedures became apparent as soon as engineers tried to construct robots that could move on the surface of the moon. For example, they could not build a robot that was capable of climbing a hill.
When an officer orders a soldier to climb to the top of a hill, the soldier executes the order and climbs to the top with relative ease. When programmers have to detail what the soldier actually did to climb the hill, they suddenly discover the millions of small automatic skills that were accomplished in a relatively smooth way. As we have already seen, most of these activities are nonconscious, whereas others require the support of conscious dynamics. Before engineers began to write such programs, no philosopher or psychologist could have imagined the complexity involved in accomplishing such a banal task.
At first, the engineers assumed that all they had to do was to give instructions to place one foot after the other on ground ahead that was a bit higher and to stop when no higher ground could be found.97 What happened was that the machine would end up on top of a rock that was miles away from the top of the hill. A soldier who climbs a hill would have automatically realized that the top of a mound is not the top of the hill. He would also know that one must sometimes go downhill for a while before resuming the climb, as a hill does not necessarily rise in a straight line. This is how, slowly but surely, engineers began to explore all the procedures that are required to climb a hill. Allen Newell and Herbert A. Simon98 worked on similar problems from 1956 onward for more than 20 years, introducing fundamental ideas that are still the core of problem-solving theory (Newell, 1990). The crux of the analysis is that the robot needs to define long-term goals (e.g., where is the top of the hill?) and short-term goals (the next step). Long-term goals require a different kind of thinking than what is needed for the next step. Indeed, choosing the next step requires not only identifying the most relevant move but also the capacity to avoid falling into holes or bumping into a tree when one could walk around it. A poor analysis of such tasks would slow down the machine's progress and could even cause its destruction. The machine must also analyze the geography that separates it from the top of the hill to find a manageable route. A manageable route goes to the top of the hill rather than to the top of a mound and will ensure the machine's integrity.
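The failure the engineers ran into can be sketched in a few lines of code. The following is a minimal illustration, not a reconstruction of any actual lunar-robot program: the terrain function, step size, and one-dimensional landscape are all assumptions made for the sake of the example. A greedy climber that only ever moves to higher adjacent ground halts on a small mound rather than reaching the true hilltop.

```python
import math

def naive_hill_climb(height, start, step=1.0, max_iters=1000):
    """Greedy climbing: move to whichever neighbouring point is higher;
    stop as soon as no neighbour is higher than the current position."""
    x = start
    for _ in range(max_iters):
        neighbours = [x - step, x + step]
        best = max(neighbours, key=height)
        if height(best) <= height(x):
            return x  # stuck: no higher ground within one step
        x = best
    return x

def terrain(x):
    """An illustrative landscape: a small mound near x = 2
    and the true hilltop near x = 10."""
    mound = 3.0 * math.exp(-(x - 2) ** 2)
    hill = 10.0 * math.exp(-((x - 10) ** 2) / 20)
    return mound + hill

peak = naive_hill_climb(terrain, start=0.0)
# The climber halts on the mound at x = 2.0, far below the hilltop near x = 10.
```

To escape the mound, the program would need exactly what the text describes: a long-term goal (the hilltop's location) that sometimes overrides the short-term rule, accepting a temporary descent.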
The number of variables involved in such a task, from the point of view of an engineer, is staggering. Engineers found that an intellectual analysis (e.g., where is the top of the hill?) requires relatively small programs, whereas a machine that acts requires an immense amount of work. One needs software programs that coordinate hardware and environmental factors. Traditionally, it was thought that intellectual performances are the most complex accomplishments of evolution. Engineers are now showing that the coordination between mental acts and relevant behavior is the really remarkable achievement. This has led to a series of theories that differentiate computing skills from embodied intelligence:99
Processing is involved in routine behaviours such as driving, cooking, taking a walk, or manipulating everyday objects. These abilities, simple for humans, remain distant goals for robotics and seem to impose hard real-time requirements on an agent…. A local subsystem integrating sensory data or generating potential actions may have incomplete, uncertain, or erroneous information about what is happening in the environment or what should be done. But if there are many such local nodes, the information may in fact be present, in the aggregate, to assess a situation correctly or select an appropriate global action policy. (Rosenschein, 1999, pp. 410-412)
This quotation100 shows the compatibility between artificial intelligence and an evolutionary approach to the mind. It allows one to describe forms of mental computation that could have evolved into increasingly efficient dynamics, eventually capable of producing symbolic thought and conscious activity. Once again, if such mechanisms are used by human nature, consciousness does not have the means to grasp such complex dynamics. They are simple enough to be implemented in a robot, but too complex to be perceived through individual conscious perception.
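Rosenschein's point about many uncertain local nodes can be sketched as a majority vote over unreliable sensors. The error rate, the number of nodes, and the voting rule below are illustrative assumptions, not details from his account: each local node misreads the environment 30% of the time, yet the aggregate of 101 such nodes almost always assesses the situation correctly.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def local_node(true_state, error_rate=0.3):
    """A local subsystem with incomplete, possibly erroneous information:
    it reports the true state of the environment only 70% of the time."""
    flip = random.random() < error_rate
    return (not true_state) if flip else true_state

def aggregate(readings):
    """A simple global policy: majority vote over all local nodes."""
    return sum(readings) > len(readings) / 2

true_state = True  # e.g., "an obstacle lies ahead"
readings = [local_node(true_state) for _ in range(101)]
decision = aggregate(readings)
# Roughly 30 of the 101 nodes are wrong, but the majority recovers
# the true state: decision == True.
```

No single node holds reliable knowledge, yet the information is present "in the aggregate", which is precisely why such architectures need no central, conscious-style observer.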