Fully Intelligent Machine? An Anarchy of Methods


An important early milestone in the history of artificial intelligence was the 1956 Dartmouth Summer Research Project on Artificial Intelligence, often referred to as the Dartmouth seminar. More than a mere organizational event, the seminar marked the first explicit attempt to define artificial intelligence as a coherent scientific field and to articulate its foundational methodological assumptions. It was here that a fundamental discussion took place concerning how intelligence itself should be studied: as symbolic reasoning, as learning from experience, or as processes inspired by biological systems.

Image: Dartmouth seminar.

John McCarthy coined the term artificial intelligence in the Dartmouth proposal, partly to distinguish the emerging field from the closely related tradition of cybernetics. McCarthy later admitted that the name was not especially popular – after all, the aim was real intelligence, not something “artificial”.

The research agenda proposed for the Dartmouth seminar rested on a bold and explicitly stated conjecture:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” (10, p. 2)

This statement not only defined the scope of artificial intelligence research but also expressed the remarkable optimism characteristic of the field’s early years. Even though the most powerful computers of the mid-1950s were many orders of magnitude slower than today’s machines, the authors of the proposal believed that rapid and decisive progress was within reach.

Although the discussions at Dartmouth were lively and often stimulating, they lacked overall coherence. As was typical of such interdisciplinary gatherings, participants arrived with sharply differing views, strong intellectual commitments, and considerable enthusiasm for their own approaches. Nevertheless, the seminar succeeded in establishing artificial intelligence as a distinct research domain and in setting the stage for the methodological debates – between logic and learning, symbols and neurons – that continue to shape the field to this day.

Symbolic AI: Intelligence as Reasoning

Early pioneers of AI believed that human thought – especially logical reasoning – operates through the manipulation of symbols. Digital computers, operating on sequences of zeros and ones, appeared to offer a natural substrate for such an approach. Logic, rules, and formal languages promised a way to capture intelligence in explicit, inspectable structures.

At the same time, optimism was high. Researchers no longer treated intelligence as an ineffable human mystery; instead, they defined it as a set of processes that could, in principle, be described with sufficient precision to allow machine simulation. This optimism culminated in early research programs that aimed not merely at narrow problem solving but at general intelligence.

Symbolic AI treated intelligence as fundamentally rule-based. Its practitioners encoded knowledge as symbols – words, predicates, and formal expressions – together with rules for manipulating them. Researchers framed problem solving, planning, and reasoning as searches through symbolic state spaces guided by logical constraints.
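To make this concrete, here is a minimal sketch of problem solving as search through a symbolic state space, using the classic two-jug puzzle. The puzzle, the operators, and the Python encoding are illustrative assumptions, not a reconstruction of any historical system.

```python
from collections import deque

# Symbolic problem solving as state-space search (illustrative sketch).
# A state is a symbolic description (litres in jug A, litres in jug B);
# operators are explicit rules; the solver searches for a goal state.

CAPACITY = (4, 3)   # a 4-litre jug and a 3-litre jug
GOAL = 2            # reach a state with exactly 2 litres in either jug

def successors(state):
    """Apply the explicit, rule-like operators to a symbolic state."""
    a, b = state
    yield (CAPACITY[0], b)            # fill jug A
    yield (a, CAPACITY[1])            # fill jug B
    yield (0, b)                      # empty jug A
    yield (a, 0)                      # empty jug B
    pour = min(a, CAPACITY[1] - b)    # pour A into B
    yield (a - pour, b + pour)
    pour = min(b, CAPACITY[0] - a)    # pour B into A
    yield (a + pour, b - pour)

def solve(start=(0, 0)):
    """Breadth-first search through the space of symbolic states."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if GOAL in path[-1]:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

print(solve())  # [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2)]
```

Every state, operator, and intermediate step here is an explicit, human-readable symbolic structure – the kind of transparency symbolic systems promised.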

This approach drew heavily on mathematical logic and on introspective accounts of conscious thought. Patterns of human reasoning evident in puzzles, mathematical proofs, and language use appeared to support the view that intelligence could be captured through explicit representations. Symbolic systems were transparent: their internal states were, at least in principle, understandable to human designers.

As Allen Newell and Herbert A. Simon argue,

“one of the fundamental contributions to knowledge of computer science has been to explain, at a rather basic level, what symbols are. This explanation is a scientific proposition about nature. It is empirically derived, with a long and gradual development” (6, p. 83).

However, symbolic AI also faced limitations. It struggled with perception, ambiguity, learning from raw data, and the informal, context‑dependent aspects of everyday cognition. As tasks moved beyond toy problems, the brittleness of purely rule‑based systems became increasingly apparent.

The Logical Roots of Symbolic Artificial Intelligence

The roots of the symbolic approach to intelligence lie in formal logic, most notably in the work of Bertrand Russell and Alfred North Whitehead, especially their monumental Principia Mathematica. Closely related developments in early twentieth-century philosophy – particularly logical positivism – played a crucial role in shaping the intellectual climate in which symbolic conceptions of intelligence emerged.

In parallel, traditions within linguistic philosophy, including the later work of Ludwig Wittgenstein, contributed an influential framework for understanding language as a system of rule-governed practices. Taken together, these philosophical currents reinforced the idea that reasoning, language, and intelligence could be analyzed in terms of formal rules, symbolic representations, and purely syntactic operations.

This intellectual lineage is explicitly acknowledged by Allen Newell and Herbert A. Simon, who describe the foundations of symbolic AI as follows:

“The roots of the hypothesis go back to the program of Frege and of Whitehead and Russell for formalizing logic: capturing the basic conceptual notions of mathematics in logic and putting the notions of proof and deduction on a secure footing. This effort culminated in mathematical logic—our familiar propositional, first-order, and higher-order logics. It developed a characteristic view, often referred to as the ‘symbol game’. Logic, and by incorporation all of mathematics, was a game played with meaningless tokens according to certain purely syntactic rules. All meaning had been purged. One had a mechanical, though permissive (we would now say nondeterministic), system about which various things could be proved. Thus progress was first made by walking away from all that seemed relevant to meaning and human symbols. We could call this the stage of formal symbol manipulation.” (6, p. 88)

This conception – of intelligence as symbol manipulation governed by formal rules – became a cornerstone of symbolic artificial intelligence. It framed cognition as a computational process operating over abstract symbols, independent of biological implementation, perception, or learning from raw sensory data. The resulting research program emphasized logic, deduction, and explicit representations, laying the methodological foundation for early AI systems such as theorem provers, expert systems, and the General Problem Solver.

Subsymbolic AI: Intelligence as Learning

In contrast, subsymbolic approaches took inspiration from biology, particularly neuroscience. Rather than encoding knowledge explicitly, these systems relied on large numbers of simple units whose interactions produced intelligent behavior. Learning, not reasoning, was the central mechanism.

Early neural models suggested that intelligence might emerge from adaptive connections shaped by experience. Instead of symbols and rules, subsymbolic systems consisted of numerical parameters adjusted through exposure to examples. Their internal workings were opaque, but their flexibility was striking.

As Mitchell explains in her accessible introduction to AI,

“Unlike the symbolic General Problem Solver system that I described earlier, a perceptron doesn’t have any explicit rules for performing its task; all of its ‘knowledge’ is encoded in the numbers making up its weights and threshold. In his various papers, Rosenblatt showed that given the correct weight and threshold values, a perceptron … can perform fairly well on perceptual tasks such as recognizing simple handwritten digits. But how, exactly, can we determine the correct weights and threshold for a given task? Again, Rosenblatt proposed a brain-inspired answer: the perceptron should learn these values on its own. And how is it supposed to learn the correct values? Like the behavioral psychology theories popular at the time, Rosenblatt’s idea was that perceptrons should learn via conditioning” (5, pp. 13-14).
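A minimal sketch of the scheme Mitchell describes might look as follows, with all “knowledge” stored in the weights and threshold and the values adjusted by simple error-driven conditioning. The logical-AND task, the learning rate, and the epoch count are illustrative assumptions, not Rosenblatt’s exact procedure.

```python
import numpy as np

# Illustrative perceptron learning on logical AND (a linearly separable
# toy task). All "knowledge" lives in w and threshold; training nudges
# them after each mistake, reward/punishment style.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 0, 0, 1])     # AND of the two inputs

w = np.zeros(2)       # connection weights
threshold = 0.0       # firing threshold
lr = 0.1              # learning rate (assumed value)

def predict(x):
    return int(x @ w > threshold)    # fire if weighted sum exceeds threshold

for epoch in range(20):
    for x, t in zip(X, targets):
        error = t - predict(x)       # +1 (should have fired), -1, or 0
        w += lr * error * x          # strengthen or weaken active inputs
        threshold -= lr * error      # make firing easier or harder

print([predict(x) for x in X])       # -> [0, 0, 0, 1]
```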

This shift reflected a deeper philosophical divide. Where symbolic AI emphasized explicit knowledge and deduction, subsymbolic AI emphasized adaptation, statistics, and pattern recognition. Intelligence, on this view, was less about following rules and more about acquiring sensitivities to structure in the environment.

Conflict, Critique, and the AI Winter

The coexistence of these approaches led not to synthesis but to conflict. Advocates of symbolic AI criticized early neural models as limited and theoretically shallow. Conversely, proponents of learning‑based methods viewed symbolic systems as fragile and disconnected from real cognition.

These disputes were not merely intellectual; they shaped funding priorities and institutional power. Periods of excessive optimism were followed by disappointment when promised breakthroughs failed to materialize. These cycles of hype and collapse – often referred to as “AI winters” – slowed progress and deepened methodological divisions.

Conclusion: Symbolism vs. Connectionism

After the 1956 Dartmouth seminar, the symbolic approach came to dominate the AI landscape. In the early 1960s, while Frank Rosenblatt was working with great enthusiasm on the perceptron, the four widely acknowledged “founders” of artificial intelligence – all strong proponents of symbolic AI – established influential and well-funded research laboratories: Marvin Minsky at MIT, John McCarthy at Stanford University, and Herbert A. Simon together with Allen Newell at Carnegie Mellon University. Notably, these three institutions remain among the most prestigious centers for AI research to this day.

Minsky, in particular, regarded Rosenblatt’s brain-inspired approach as a dead end and argued that it diverted research funding away from more promising symbolic AI efforts. In 1969, Minsky and his MIT colleague Seymour Papert published the book Perceptrons, in which they presented a mathematical proof demonstrating that the class of problems a single-layer perceptron can solve perfectly is extremely limited, and that the perceptron learning algorithm does not scale well to tasks requiring large numbers of weights and thresholds.
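Their best-known example is the XOR function: no single set of weights and a threshold can separate its positive cases, (0, 1) and (1, 0), from its negative cases, (0, 0) and (1, 1). The brute-force sweep below merely illustrates the point on an arbitrary grid of candidate values; it is suggestive, not a substitute for Minsky and Papert’s proof.

```python
import itertools

# Sweep a grid of (w1, w2, threshold) settings and count how many
# single-layer perceptrons classify all four cases of a target function
# correctly. XOR gets zero hits; linearly separable AND gets many.

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
XOR = [0, 1, 1, 0]
AND = [0, 0, 0, 1]

def computes(w1, w2, theta, targets):
    """True if perceptron (w1, w2, theta) matches every target output."""
    return all(int(x1 * w1 + x2 * w2 > theta) == t
               for (x1, x2), t in zip(X, targets))

grid = [i / 4 for i in range(-8, 9)]   # candidate values in [-2.0, 2.0]
settings = list(itertools.product(grid, grid, grid))

print(sum(computes(*s, XOR) for s in settings))  # 0: no setting works
print(sum(computes(*s, AND) for s in settings))  # many settings work
```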

More broadly, the emergence of digital computers in the 1950s gave rise to two fundamentally different ways of understanding computation. One group viewed computers as physical symbol systems, capable of instantiating formal representations of the world. The other regarded them as simulations of neuronal interactions. One approach framed intelligence in terms of problem solving, the other in terms of learning; one emphasized logic, the other statistics. These opposing perspectives led to deep and enduring methodological debates within AI.

Alongside the symbolic tradition, a second intuition developed, drawing in particular on the ideas of Donald O. Hebb and Warren McCulloch. Hebb famously proposed in 1949 that a collection of neurons could learn if the simultaneous activation of neuron A and neuron B strengthened the connection between them. This insight led Rosenblatt to the idea that if intelligent behavior proves difficult to formalize symbolically, AI might instead be investigated by studying how neurons learn to recognize patterns and generate responses.
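In modern notation Hebb’s rule is often summarized as Δw = η · a_A · a_B: the connection grows only when both units are active at once. The sketch below uses an assumed learning rate and made-up activity patterns.

```python
# Illustrative Hebbian update for a single A -> B connection.
# The learning rate and the binary activity episodes are assumptions.

eta = 0.1      # learning rate
w_ab = 0.0     # strength of the A -> B connection

# Each pair records (activity of neuron A, activity of neuron B).
episodes = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]

for a, b in episodes:
    w_ab += eta * a * b    # strengthened only by co-activation

print(round(w_ab, 2))      # 0.3, from the three co-activations
```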

First AI Winter

Initially, both approaches appeared equally promising. By the 1970s, however, the connectionist approach based on the perceptron had fallen out of favor. One major reason was that single-layer perceptrons, running on the limited hardware of the time, could handle only relatively simple tasks and failed at more complex ones. This decline is often referred to as the first “AI winter.” Contemporary multilayer perceptrons, by contrast, can solve far more challenging problems and have played a central role in the modern resurgence of AI. Rosenblatt himself, however, focused primarily on the single-layer perceptron and did not live to see the later revival of connectionist methods.
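For contrast with the single-layer case, here is a minimal two-layer perceptron trained by backpropagation on XOR, the very task a single-layer perceptron cannot solve. The hidden-layer size, learning rate, and iteration count are illustrative assumptions, not tuned values.

```python
import numpy as np

# A tiny multilayer perceptron (2 inputs, 4 hidden units, 1 output)
# trained by backpropagation to compute XOR.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer
lr = 1.0

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(10_000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)         # backprop through output
    d_h = (d_out @ W2.T) * h * (1 - h)          # ...and through hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round().ravel())  # typically [0. 1. 1. 0.]
```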

Bibliography

  1. Boden, Margaret A. AI: Its Nature and Future. 2nd ed. Oxford: Oxford University Press, 2016.
  2. Boden, Margaret A. Mind as Machine: A History of Cognitive Science. Oxford: Oxford University Press, 2006.
  3. Hebb, Donald O. The Organization of Behavior: A Neuropsychological Theory. New York: John Wiley & Sons, 1949.
  4. McCulloch, Warren S., and Walter Pitts. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics 5 (1943): 115–133. https://doi.org/10.1007/BF02478259.
  5. Mitchell, Melanie. Artificial Intelligence: A Guide for Thinking Humans. New York: Farrar, Straus and Giroux, 2019.
  6. Newell, Allen, and Herbert A. Simon. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19, no. 3 (1976): 113–126.
  7. Rosenblatt, Frank. The Perceptron: A Perceiving and Recognizing Automaton (Project PARA). Report No. 85-460-1. Buffalo, NY: Cornell Aeronautical Laboratory, 1957.
  8. Rosenblatt, Frank. “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review 65, no. 6 (1958): 386–408. https://doi.org/10.1037/h0042519.
  9. Rosenblatt, Frank. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, DC: Spartan Books, 1962.
  10. McCarthy, John, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Dartmouth College, August 31, 1955.

Image: First page of the seminal 1955 proposal that coined the term “artificial intelligence” and launched the Dartmouth Summer Research Project (August 31, 1955) by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon.

Source: Wikimedia Commons (public domain scan) — https://commons.wikimedia.org/wiki/File:A_Proposal_for_the_Dartmouth_Summer_Research_Project_on_Artificial_Intelligence,_by_John_McCarthy_et_al,_1955.pdf

License: Public domain (no copyright restrictions; free to use, modify, and distribute).