The story of automating intelligence
A historical odyssey through the origins and evolution of our understanding of intelligence, from early mathematics to modern large language models like ChatGPT.
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” — Edsger W. Dijkstra
On New Year's Eve of 1879, thousands of residents of Menlo Park, New Jersey, along with hundreds of excited visitors from out of town, left their lodgings lit by flickering candles, walked through dark and snowy streets feebly illuminated by gas lamps, and emerged into a dazzling block speckled with hundreds of incandescent light bulbs, each like a miniature sun. It was the first public demonstration of Edison's incandescent light bulb. Nothing like it had ever been witnessed before, and only a few had any inkling that what Edison had achieved was the conquest of darkness itself.
Artificial Intelligence (AI) had its light-bulb moment in November 2022, when ChatGPT was released and the world watched rapt with awe and wonder. From simple prompts provided by a human user, the bot spat out sentence after sentence displaying a sparkling intelligence and understanding of the English language — long held as the holy grail of AI — as well as of the world it was describing. With the ability to produce humanlike poems, paintings, and even videos, Large Language Models (LLMs) exploded into public consciousness like a spectacular firework display, evoking the awe of those who had witnessed Edison's grand demonstration. ChatGPT, like the light bulbs of Menlo Park, was a fantasy that had found its way into the real world.
Even more remarkable was the effect it had on the experts who had been directly involved in its development for decades. Geoffrey Hinton, widely considered one of the founding pioneers of the field, who had long held the view that computers could never become as powerful as the human brain, called his own invention an “existential threat”. Hinton said, “it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence. Biological intelligence evolved to create digital intelligence, which can absorb everything humans have created and start getting direct experience of the world.” Another leading pioneer, Stuart Russell, wrote in an op-ed in The Guardian, “Humanity has much to gain from AI, but also everything to lose.”
On 29 March 2023, the Future of Life Institute released an open letter, signed by some of the world’s leading researchers in the field, calling for a pause on “giant AI experiments.” Below is an excerpt from their cautionary message.
Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
The human race, it seems, has been confronted by its Frankenstein moment, having created a beast far beyond its own comprehension, one which, even in its infancy, seems well beyond its control.
The Quest for Intelligence
How did we get here? How did a primitive bipedal ape, wandering from jungle to jungle, subsisting on herbs, fruits, and half-roasted carcasses, embark on this extraordinary journey from stone tools to thinking machines that now threaten to outsmart their creators? What were the pivotal stages of this process, unfolding gradually over millennia across cultures, continents, and civilizations? Who were the pioneers, what were their stories, and what were the times like in which they lived and worked?
Over the next four articles, we’ll trace the human journey into the heart of intelligence. We’ll travel to remarkable periods of history, to distant pasts and forgotten worlds, to see how scholars living modest lives, sometimes under the patronage of emperors and sultans, sometimes in the chambers of great libraries, and sometimes alone in a corner next to a flickering candle, charted a brilliant voyage into the processes we call human cognition: thinking, reasoning, feeling, communicating, and creating.
The story of AI is intertwined with the history of the computer (no surprises there), and both trace their origins back to early mathematics: the common language between humans and computers. Although modern Artificial Intelligence can work with language, images, speech, and navigation, these tasks are ultimately achieved by expressing them in mathematical terms that can be used for calculation and prediction. Everything a computer does reduces to mathematical operations, which in turn rest on logic implemented by tiny electrical circuits on a chip, as conceived by Alan Turing, father of the modern computer, in the early 20th century.
This physical realization of mental logic on an electrical circuit was made possible by a new kind of mathematics known as binary arithmetic, developed by the mathematician Leibniz. Inspired by the ancient Chinese classic, the I Ching, this new math was based on just two symbols — 0 and 1, hence binary — instead of the familiar ten that we use (0, 1, 2, …, 9). With it came a new kind of algebra that precisely defined how these symbols could be logically combined and manipulated, much as addition and subtraction do for ordinary numbers. All the numbers, as well as the mathematical operations discovered until then, could be expressed using just the two symbols of Leibniz’s new world, 0 and 1. Binary numbers could then be physically represented by a simple electric ON-OFF switch whose ON position represents a 1 and OFF a 0. These switches can then be connected by electrical wires to form circuits that perform the operations of both math and logic, as we shall see later.
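To make the idea concrete, here is a minimal sketch in Python (my own illustration, not part of the historical account): the two symbols 0 and 1 stand in for switch positions, a few logic gates combine them, and the gates compose into a circuit that adds binary numbers. The function names (AND, OR, XOR, full_adder, add_binary) are hypothetical, chosen just for this example.

```python
# A minimal sketch of how two-symbol logic builds arithmetic: each "switch" is a
# bit (0 or 1), basic gates combine bits, and gates compose into an adder circuit.

def AND(a, b): return a & b   # 1 only if both inputs are 1
def OR(a, b):  return a | b   # 1 if either input is 1
def XOR(a, b): return a ^ b   # 1 if exactly one input is 1

def full_adder(a, b, carry_in):
    """Add two bits plus a carry bit, returning (sum_bit, carry_out)."""
    partial = XOR(a, b)
    sum_bit = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return sum_bit, carry_out

def add_binary(x_bits, y_bits):
    """Add two numbers given as equal-length bit lists, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 5 (binary 101) + 3 (binary 011), written least significant bit first:
print(add_binary([1, 0, 1], [1, 1, 0]))  # [0, 0, 0, 1], i.e. binary 1000 = 8
```

Real chips carry out exactly this kind of composition, with transistors playing the role of the ON-OFF switches.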

The power of artificial intelligence comes from trillions of such circuits switching trillions of times per second, powered by electricity, giving computers godlike abilities. So a real understanding of AI will require some insight into what mathematics is, how it evolved, and why it works, since mathematics is the language that computers and AI speak.
In this series, we’ll look at how these ideas were conceived and developed over the course of human history, focusing on the thinkers, philosophers, mathematicians, scientists, and engineers who furnished the pieces of the grand puzzle that we understand today as Artificial Intelligence.

The Evolution of AI
The next four articles will be as follows.
The history of early mathematics, going back to the earliest civilizations of Egypt, Mesopotamia, Babylonia, India, China, and Greece, passes through the Islamic Golden Age, when the crucial idea of an algorithm was first formulated by Al-Khwarizmi as he developed algebra using a new number system adopted from India. The story continues to the European Enlightenment, with its emphasis on science and reason, culminating in the calculating machines of Pascal, Leibniz, and Babbage. This phase of understanding intelligence witnessed the formalization of both mathematics and deductive logic, turning intuitive leaps into rule-based processes: rigorous, repeatable, and amenable to automation.
The next phase of AI’s development picks up from the great mathematician and philosopher Leibniz’s development of binary numbers, which laid the groundwork for the information revolution of the 20th century. His ideas would be realized in practice by an ingenious French weaver named Joseph Jacquard, who invented the punch card to encode and automate the warp-and-woof patterns of woven fabrics. More than a century later, this idea would inspire the father of the modern computer, English mathematician Alan Turing, to develop the concept of a universal computer, kicking off the information age.
Three simultaneous breakthroughs — Turing’s conception of machine intelligence, Claude Shannon’s idea of quantifying information, and the first brain-inspired artificial neuron of Warren McCulloch and Walter Pitts — led to an explosion of early research, culminating in the Dartmouth Conference, where the term Artificial Intelligence was first coined. This was followed by a period of inactivity and declining research funding, the AI winter of the 70s and 80s, but a breakthrough in 1989 led to a resurgence that culminated in the watershed moment of 1997, when IBM’s Deep Blue beat Garry Kasparov.
A whole new paradigm of intelligence known as neural networks was inspired by findings about the brain, the primary organ of intelligence. New breakthroughs in two areas of artificial intelligence known as deep learning and reinforcement learning, both biologically inspired, revived the lost enthusiasm for AI research in the 90s. These ideas received a tremendous boost from the vast amounts of data made available by the internet, the exponential growth in computing power, and the diminishing cost of memory, an explosive combination that led to a series of remarkable breakthroughs. This progress gradually delivered what has long been the holy grail of AI: understanding and generating natural languages like English, Hindi, and Spanish, including translating between them, and it reached a climax with Large Language Models like ChatGPT, which we are all witnessing at this moment in our history.
I hope these four essays will illuminate this mysterious and momentous technology, which poses a grave threat to future generations of humans while also holding immense potential for positive change, with the promise of solving some of the greatest challenges of the 21st century. So join me as we travel back to Samos and Athens, visit the House of Wisdom in Baghdad and the libraries of Alexandria, and enter the salons of Paris and the research labs of Europe and America where the modern computer was born.
If you find this type of content interesting, please consider subscribing and following. Thank you for reading.
Cheers and Peace.