Introducing Military Industrial Cognition

Everyone knows the story of Oppenheimer ruefully quoting the Bhagavad Gita after a successful test of the atomic bomb, and his subsequent opposition to nuclear proliferation. Cognitive science has a parallel story, although it is rarely told. During World War II, Norbert Wiener developed the basic principles of cybernetics while working on automated control mechanisms for artillery and anti-aircraft guns. Wiener quickly saw "the very pressing menace of the large-scale replacement of labor by machine on the level not of energy, but of judgment." He refused to work further on projects that could be useful for the automation of production, and he sought out leaders in the labor movement to offer his services and warn them of what he saw coming:

"I do not wish to contribute in any way to selling labor down the river, and I am quite aware that any labor, which is in competition with slave labor, whether the slaves are human or mechanical, must accept the conditions of work of slave labor. For me merely to remain aloof is to make sure that the development of these ideas will go into other hands which will probably be much less friendly to organized labor."

Warren Weaver, who had supervised Wiener's work during the war, would go on to be instrumental in securing continued military support for research in cybernetics and related areas. Weaver was convinced that machine translation could be accomplished by borrowing techniques from cryptography, and this view was reflected in the Air Force's funding of a machine translation project at MIT. Noam Chomsky briefly led this project, despite being convinced that it would never work, and used the time and resources to write Syntactic Structures. This is a curious inversion of Wiener's relationship to automation: Wiener was convinced that automation would lead to mass unemployment within a decade or so, and refused to participate. Chomsky was convinced that machine translation would never work in the way his funders imagined, so he took the money and ran.

Military funding was so central to the cognitive revolution that by the time Dick Neisser's landmark textbook Cognitive Psychology came out, it focused almost entirely on research funded by military agencies — even a brief section on the unconscious draws largely on research conducted as part of the MK-ULTRA program. Foundational studies of perception, memory, and attention, for example, were carried out in the service of developing radar equipment.

On the one hand, we might be tempted to shrug this off. There is little in Syntactic Structures to suggest that it was funded by the Air Force Office of Scientific Research, and in any case, had machine translation succeeded by 1960, surely it would have found its way into a wide variety of perfectly benign civilian uses, as it has today. Even signal detection theory, with its origins in the detection of enemy planes on radar screens, is today most widely used in medical research.

But for a basic science, and one that is supposedly about universal human capacities, it is deeply weird that the lifeworld of the cognitive subject so closely resembles the command, control, communication, and information infrastructure developed during WWII and elaborated since. To understand much of cognitive science, we must first assume a cyborg assemblage in which cognition is happening: humans sitting at a computer, taking some input and producing some output, their responses quantified and analyzed in an attempt to reverse engineer their various functions. In the 1960s a very thin sliver of humanity had any direct contact with computers, but somehow an entire field decided to understand humans using computers as both the primary research tool and the driving metaphor.

Of course, today things are different. Regular people talk about algorithms the way previous generations talked about fairies and gremlins. Calling a restaurant to order take-out and then showing up to pay with cash feels like a determined act of Luddism. It is common sense that the mind is "a kind of computer," and that computers do, or soon will, evince some kind of "intelligence."

This project is about how we got here. In the current AI hype cycle, there is intense interest in how computational systems became so ubiquitous. I'll have a bit to say about that, but, frankly, others are already doing this better than I have the bandwidth or expertise for. My focus is on the ways the basic science of understanding the mind has been shaped by the demands and interests of capital.

I'll argue that the story we are often told about the cognitive revolution in psychology – that the field was dominated by behaviorism, all talk of the mind punishable by a stern lecture from B. F. Skinner, until a few brilliant mavericks decided to start playing with computers – is a Whig history.

In fact, at the time of the Dartmouth conference (one of the first milestones for both the cognitive revolution and the history of AI), psychology had several important strands that were centrally concerned with the mind. Leaders in the field were focused on developing statistical techniques for mental testing – an area that itself became entrenched thanks to generous military support during WWI. Various "schools" – functionalists, Gestalt psychologists, and even many card-carrying behaviorists – filled the pages of leading journals with arguments about the nature of human experience, largely understood in terms of a whole organism adapting to its environment. If there was a paradigm shift after the introduction of the computer, it was from this open world of people interacting with the environment and one another to the closed world of the computer systems then in their infancy.

Without the investment of a small number of military funding agencies and private foundations in understanding the mind as a kind of computer, we might have a completely different understanding of the mind today. Maybe we still can.