A Few Key Concepts from Artificial Intelligence and the Neurosciences
In the previous sections, we have seen Darwin’s propensity to catalog all of the small variations of nature, as well as his aversion to those theories that, like Wallace’s, try to explain everything with a single model.88 Still more deeply anchored is his aversion to those grand theories that try to render everything coherent and functional, like those of Spinoza and Cuvier.
An admirer of Lamarck, Darwin freely accepted a systemic vision that situates the details in the elaboration of the great movements of history. However, he mostly admired how Lamarck had begun to undermine theories based on the notion that the world was a coherent entity. For Darwin, nature is an unbelievably imaginative makeshift of small mechanisms. That aspect of his thought rejoins the thought of the young Hume and was taken up in the 1970s to establish a movement that blends neurology, artificial intelligence, and psychology. This movement is called, according to its components, evolutionary psychology, the neurosciences, or the neuro-cognitive sciences. The best synthesis of these movements that I know remains the book by philosopher Andy Clark (1997): Being There: Putting Brain, Body, and World Together Again.89 I quote the title of his book in full because it aptly gives the impression of a tinkering nature.
In the following sections, I summarize concepts without which the recent developments inspired by Darwin in psychology, and consequently in psychotherapy, remain largely unintelligible. These concepts are the global/local distinction and the notions of modular and parallel activity.
I employ the procedures used to write software programs as a basic metaphor for the contemporary discussions on the mind. Building machines that can automatically reproduce certain performances of the mind has forced engineers to invent practical ways of accomplishing psychological tasks for which no one has a convincing explanation. The “tricks” they discovered are efficient enough, but are often “messy.” The term messy is used for a set of procedures that are not necessarily coherent, logical, or satisfying from a purely intellectual point of view but that are good enough to satisfy those who use computers. These messy procedures have what one sometimes calls bugs. A bug, in this context, is usually a few lines of a program that do not accomplish what one expects them to do. The problem is that huge software is composed of millions of instructions, written over decades by sometimes thousands of engineers. Some of these engineers may have left the company, died, or changed profession. Even if one can still contact an engineer who wrote an old routine, he often cannot remember all that he did when he worked on that program. Detecting where bugs are can sometimes take more time than rewriting a part of the program. Given the financial pressure of the software industry, this second strategy is often used. The problem is that there may be a few lines in another routine that still need to use some parts of the old program that still function. Such big programs, like the operating systems of computers,90 are typically composed of old routines and new routines, forming an incredibly messy entity that has no coherence. Given the size of the program and the teams that participated in its construction, there is probably no single engineer who knows how such large software really operates.
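To make the notion of a bug concrete for readers unfamiliar with programming, here is a minimal, hypothetical sketch in Python (the routine names and the task are my own illustration, not from the source): a few lines that are meant to add the numbers from 1 to n, but that quietly stop one short. The routine runs without complaint and often returns plausible values, which is precisely why such defects can hide in large programs for years.

```python
def sum_first_n(n):
    """Intended to return 1 + 2 + ... + n, but contains a bug."""
    total = 0
    for i in range(1, n):  # bug: range(1, n) stops at n - 1, so n is never added
        total += i
    return total

def sum_first_n_fixed(n):
    """Corrected routine: range(1, n + 1) includes n itself."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_first_n(10))        # prints 45: the expected answer is 55
print(sum_first_n_fixed(10))  # prints 55
```

The buggy routine is only one character away from the correct one; multiplied across millions of instructions written by thousands of engineers, this is the kind of local, hard-to-see flaw the text describes.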
Engineers have a general idea of the particularities of the program they work with, of the general rules used by all the engineers who developed that program, and a more intimate understanding of those parts of the program they have worked on. But they cannot know how each individual procedure (also called a subroutine) operates and associates with a web of other procedures. The present situation is even more complicated when one considers that the set of general rules used to coordinate all the engineers who work on a program today may not always be exactly the same ones that were used 10 years ago.
An increasing number of psychologists and neurologists observe that the brain and the mind have the same sort of messy architecture. No one knows the exact biological history of each mechanism, how each mechanism is influenced by other mechanisms, and what changed when the organism that contained this mechanism had a mutation and developed new forms of habitual behavior. I use the term messy to designate this type of loose organization that somehow manages to survive and reproduce, because it is often used in the contemporary literature. This type of approach to the mind is becoming increasingly popular. In the chapters on philosophers, I opposed schools that assumed a coherent organization of the mind and schools that had an intuition about the messiness of the mind. In artificial intelligence, these two trends may describe forms of organization that exist in parallel, as illustrated in the following quotation:
Over the years, I’ve written a number of books in praise of the Computational Theory of Mind…. It is, in my view, by far the best theory of cognition that we’ve got; indeed, the only one we’ve got that’s worth the bother of a serious discussion…. But it hadn’t occurred to me that anyone could think that it’s a very large part of the truth; still less that it’s within miles of being the whole story about how the mind works. (Fodor, 2000, p. 1)