Artificial Intelligence (AI), New?

History of AI

Rich Brown
6 min read · Aug 11, 2022

The history of the development of artificial intelligence can be traced back to ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like servants out of gold. Engineers in ancient Egypt built statues of gods animated by priests.

Throughout the centuries, thinkers from Aristotle to the 13th-century Spanish theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic of their times to describe human thought processes as symbols, laying the foundation for AI concepts such as general knowledge representation.

The late 19th and first half of the 20th centuries brought forth the foundational work that would give rise to the modern computer. In the 1830s, Cambridge University mathematician Charles Babbage, working with Augusta Ada Byron, Countess of Lovelace, designed the Analytical Engine, a machine that could be programmed to carry out any calculation that could be done by hand.

In the late 1930s, George Stibitz at Bell Laboratories built the Complex Number Calculator, a relay-based machine that could perform arithmetic on complex numbers. Developments like these set the stage for Alan Turing’s landmark paper “Computing Machinery and Intelligence” (1950), which proposed what is now known as the Turing test: a way to determine whether a computer can be said to be thinking.

1940s

John von Neumann’s idea of the stored-program computer revolutionized computing. This architecture kept both the program and its data in the computer’s memory, making it far easier to process information. Around the same time, Warren McCulloch and Walter Pitts’ 1943 model of artificial neurons laid the groundwork for neural networks and, ultimately, modern machine learning.

In 1942, Isaac Asimov introduced the Three Laws of Robotics in his short story “Runaround”:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws would become an important part of Asimov’s later work in science fiction, and would also inspire other writers to explore the ethical implications of artificial intelligence.

The late 1940s and early 1950s saw the first real steps toward artificial intelligence programs. Around 1948, Alan Turing and David Champernowne worked out Turochamp, a chess-playing program that had to be executed by hand on paper, and Donald Michie devised a rival paper program of his own. In 1951, Christopher Strachey wrote a checkers (draughts) program that ran on early British computers. Turing’s 1950 paper also speculated about “learning machines” that could be taught much as a child is, anticipating ideas that would later shape natural language processing and machine learning.

1950s

The 1950s was a decade of great excitement in the field of AI, with numerous important programs and concepts being developed. In 1950, Claude Shannon’s paper “Programming a Computer for Playing Chess” described how a machine could play chess by searching a game tree with minimax and scoring positions with a heuristic evaluation function. This paper laid the foundations for modern game-playing AI.
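Shannon’s scheme can be illustrated with a minimal minimax search over a game tree. The sketch below is not Shannon’s actual program; the GameState interface it assumes (legal_moves, apply, is_terminal, evaluate) is a hypothetical placeholder for whatever game representation is plugged in.

```python
# Minimal sketch of minimax game-tree search in the spirit of Shannon's
# 1950 proposal: look ahead a fixed depth, score leaf positions with a
# heuristic evaluation function, and back the values up the tree.
# The GameState interface below is hypothetical, not from any real library.

def minimax(state, depth, maximizing):
    """Return the best achievable evaluation for the side to move."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()          # heuristic score of the position

    if maximizing:
        best = float("-inf")
        for move in state.legal_moves():
            best = max(best, minimax(state.apply(move), depth - 1, False))
        return best
    else:
        best = float("inf")
        for move in state.legal_moves():
            best = min(best, minimax(state.apply(move), depth - 1, True))
        return best


def best_move(state, depth=3):
    """Pick the move whose resulting position minimax rates highest."""
    return max(state.legal_moves(),
               key=lambda m: minimax(state.apply(m), depth - 1, False))
```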

In 1952, Arthur Samuel wrote a checkers program that learned from experience, eventually playing well enough to beat capable human players. This was one of the first examples of machine learning. In 1957, Frank Rosenblatt developed the perceptron, a simple trainable neural network that could learn to recognize patterns in data.
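Rosenblatt’s learning rule is simple enough to sketch in a few lines. The following is a minimal illustration on a made-up, linearly separable toy problem (logical AND), not Rosenblatt’s original Mark I hardware:

```python
# Minimal sketch of Rosenblatt's perceptron learning rule: nudge the
# weights toward examples the unit misclassifies until (for linearly
# separable data) it stops making mistakes.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            predicted = 1 if activation > 0 else 0
            error = target - predicted          # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the (linearly separable) AND function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print(w, b)   # weights and bias defining a line that separates AND
```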

In the late 1950s, researchers also began exploring practical applications such as computer-aided medical diagnosis. Around the same time, Allen Newell, J.C. Shaw, and Herbert A. Simon created the General Problem Solver, a program that attacked problems by breaking them into smaller subproblems, an early example of the means-ends analysis that shaped later work on automated reasoning and planning.

1960s

In 1965, Edward Feigenbaum, Joshua Lederberg, and their colleagues at Stanford began building DENDRAL, generally considered the first expert system; it inferred the molecular structure of organic compounds from mass-spectrometry data. This was an important milestone in AI, as it showed that computers could capture and apply the knowledge of human specialists.

In 1969, Marvin Minsky and Seymour Papert published “Perceptrons”, a book that critiqued the perceptron and neural networks more generally. They showed that a single-layer perceptron can only represent linearly separable functions and therefore cannot compute even something as simple as exclusive-or (XOR). This critique had a significant impact on the field, contributing to a sharp decline in interest in neural network research.
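The heart of Minsky and Papert’s argument is that a single threshold unit can only separate classes with a straight line (or hyperplane), and the four XOR examples cannot be separated that way. A minimal demonstration, assuming the same kind of threshold unit sketched above:

```python
# Minsky and Papert's key point: XOR is not linearly separable, so no
# single-layer perceptron can compute it. Training one on XOR never
# reaches zero errors, no matter how long it runs.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y_xor = [0, 1, 1, 0]

weights, bias = [0.0, 0.0], 0.0
for _ in range(1000):                      # far more passes than AND needed
    for x, target in zip(X, y_xor):
        error = target - predict(weights, bias, x)
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

errors = sum(predict(weights, bias, x) != t for x, t in zip(X, y_xor))
print(f"misclassified {errors} of 4 XOR examples")   # always at least 1
```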

The late 1960s also saw early successes in robotics: Shakey, developed at the Stanford Research Institute, became the first mobile robot able to perceive its surroundings, plan its own actions, and navigate its environment without step-by-step human intervention.

1970s

In the mid-1970s, AI research experienced a downturn, due in part to criticisms like those leveled by Minsky and Papert and to results that fell short of the field’s early promises. Government funding for AI projects dried up and many researchers left the field. This period came to be known as the first “AI winter”.

Despite the difficulties of the AI winter, some important progress was made during the decade. Around 1970, Terry Winograd wrote SHRDLU, a program that could understand and respond to natural language commands about a simulated world of blocks; it was an important achievement in natural language processing. In the mid-1970s, David Marr began developing his influential computational theory of vision, later published in the book “Vision” (1982), which proposed a hierarchy of representations for understanding images.

The 1970s also saw the development of expert systems, programs able to reason like human specialists within narrow domains. MYCIN, developed at Stanford University, could diagnose bacterial infections of the blood and recommend antibiotic treatments, an important early application of AI technology.

1980s

The 1980s brought renewed interest in AI, thanks in part to the success of expert systems like MYCIN. In 1982, Japan’s Fifth Generation Computer Systems project was launched with the goal of creating computers that could reason like humans. Although the project fell short of its most ambitious goals, it spurred substantial investment in AI research around the world.

During the decade, Patrick Winston’s textbook “Artificial Intelligence” introduced many students to the field. The late 1980s also saw a revival of neural networks, inspired by the brain’s structure and function and driven by the popularization of the backpropagation training algorithm in 1986, which made it practical to train multi-layer networks from data. Neural networks once again became an important area of AI research.
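What changed in the 1980s can be sketched concretely: a network with a hidden layer, trained by backpropagation, can learn the XOR function that defeated the single-layer perceptron. The following is a minimal NumPy illustration, assuming a tiny 2-4-1 network and the four XOR examples as the whole dataset; it is a sketch of the technique, not any historical system.

```python
# Minimal two-layer neural network trained by backpropagation, the
# technique whose popularization in 1986 helped revive neural networks.
# The hidden layer lets it learn XOR, which a single perceptron cannot.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(pred, 2).ravel())   # typically close to [0, 1, 1, 0]
```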

1990s

During the 1990s, AI regained momentum as its techniques found their way into commercial applications such as web search engines and spam filters. In 1997, IBM’s Deep Blue became the first computer to defeat a reigning world chess champion, Garry Kasparov, in a match, furthering the perception that AI was becoming more powerful.

The late 1990s also saw machine learning, the subfield of AI concerned with algorithms that improve their performance through experience and data, move to the center of the field. This shift allowed computers to get better automatically at tasks like image recognition and machine translation.

2000s

The 2000s was a decade of significant progress in AI, thanks to the success of machine learning algorithms. In 2005, Stanford’s autonomous vehicle Stanley won the DARPA Grand Challenge, completing a 132-mile desert course without human intervention and showing that AI could be used to automate driving.

In 2007, Google launched Street View, which used computer vision algorithms to stitch photographs into panoramic images of streets.

2010s and beyond

In the years following 2010, artificial intelligence technology continued to develop at a rapid pace, driven largely by advances in deep learning. In 2011, IBM’s Watson beat two of the best human players on the game show Jeopardy!. In 2016, DeepMind’s AlphaGo defeated world champion Lee Sedol at the game of Go, a complex board game that had long been considered a grand challenge for AI. By 2020, several further achievements were beginning to have a significant impact on society.

One such achievement was the emergence of large language models, such as OpenAI’s GPT-3 (released in 2020), that could read and generate text with striking fluency. These systems could pick up not only the literal meaning of text but also much of its sentiment and tone, which made it practical to use machines in customer service roles where they could communicate with customers more naturally.

Another major development was the maturing of generative models, such as generative adversarial networks (GANs), that could produce realistic images. These systems could create pictures that looked very similar to actual photographs, a breakthrough that made convincingly realistic digital scenes possible.

Today, AI is used in a wide variety of applications, from search engines and self-driving cars to medical diagnosis and robot assistants. AI technology has clearly come a long way since its early days, and it will only continue to grow in importance in the years to come.

Written by Rich Brown

Passionate about using AI to enhance daily living, boost productivity, and unleash creativity. Contact: richbrowndigital@gmail.com
