The Artificial Intelligence Timeline


Artificial intelligence is the use of computer systems and machines to simulate human intelligence. According to a Forbes article, artificial intelligence’s significant impact can already be seen in fields such as healthcare, retail, finance, and communication.

A look at the history of artificial intelligence gives an idea of how long the field has been around and how it arrived at its current status. MongoDB’s post on artificial intelligence explains more about the definition of AI, as well as its models, uses, and history, which we will go into now.

The Groundwork

In 1943, a pivotal contribution was made by logician Walter Pitts and neurophysiologist Warren McCulloch in their development of the artificial neuron model. This mathematical model was named the McCulloch-Pitts neuron. It sparked interest and work in building more complex networks, which would eventually lead to deep learning models.
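
To make the idea concrete, here is a minimal Python sketch of a McCulloch-Pitts neuron. The function name and the weight and threshold values are illustrative, not taken from the original paper: the unit simply fires when the weighted sum of its binary inputs reaches a fixed threshold, with no learning involved.

```python
# A minimal sketch of a McCulloch-Pitts neuron (names and values are
# illustrative). The neuron fires (outputs 1) when the weighted sum of
# its binary inputs reaches a fixed threshold; nothing is learned.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With equal weights and a threshold of 2, the neuron computes logical AND.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mcculloch_pitts_neuron([x1, x2], [1, 1], threshold=2))
```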

The Imitation Game

In 1950, the Turing test, or imitation game, was devised by mathematician, cryptanalyst, and computer scientist Alan Turing. The test aims to determine whether a machine can exhibit behaviour indistinguishable from that of a human.

Turing’s test has remained an important benchmark in artificial intelligence. It has inspired research and experiments aimed at building machines that can pass it, up to recent reports that Google’s AI had done so.

Defining AI

“Artificial intelligence” as a term was officially coined by computer scientist and “father of artificial intelligence” John McCarthy in 1955, in his proposal for the first academic conference on the subject, held at Dartmouth College in 1956. McCarthy defined AI as “the science and engineering of making intelligent machines”.

The First Perceptron Machine and AI Labs

In the late 1950s and early 1960s, AI work continued. Frank Rosenblatt built the Mark I Perceptron machine in 1957 at the Cornell Aeronautical Laboratory. It was the first hardware implementation of a trainable artificial neuron, the perceptron, a descendant of the McCulloch-Pitts model.
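
What set the perceptron apart from the fixed McCulloch-Pitts neuron was that its weights could be learned from examples. Here is a minimal sketch of the perceptron learning rule in Python, purely illustrative and not a model of the Mark I hardware:

```python
# A minimal sketch of Rosenblatt's perceptron learning rule (illustrative,
# not a model of the Mark I hardware). The weights are nudged toward the
# correct answer whenever the neuron misclassifies an example.

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with binary labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical OR function from its truth table.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print(weights, bias)
```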

Later, John McCarthy, who had been developing the LISP programming language, founded AI research at MIT with computer scientist Marvin Minsky in 1959, and then the AI lab at Stanford University in 1963.

Expert Computer Systems

In the 1970s, advances in artificial intelligence continued despite the setbacks and funding cuts that became known as the “AI Winter”. The next major breakthrough was the creation of expert systems: computer systems that emulate the decision-making ability of human experts. These knowledge-based systems found use in healthcare, the military, and business.
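
At their core, expert systems combine a knowledge base of IF-THEN rules with an inference engine that applies them to known facts. The Python sketch below shows the principle with a tiny forward-chaining loop; the facts and rules are invented for illustration and do not come from any real deployed system:

```python
# A minimal sketch of an expert system: facts plus IF-THEN rules,
# evaluated by a simple forward-chaining loop. The rules here are
# invented for illustration, not taken from any real system.

facts = {"fever", "cough"}

rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

# Keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'fever', 'cough', 'possible_flu', 'recommend_rest'}
```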

The AI Industry

The proliferation of expert systems – coupled with increased interest and funding – led to the “AI boom” of the 1980s. More powerful knowledge-based systems brought AI into new areas of expertise, corporations adopted the technology, and governments invested heavily in research, information technology, and robotics.

One AI landmark came in the 1990s with the development of Deep Blue, the chess-playing computer from IBM. In 1997, Deep Blue defeated then-world chess champion and grandmaster Garry Kasparov. The highly publicized match drew further interest to artificial intelligence.

The Beginning of Language Models

A language model is a type of machine learning model that is trained to predict which words are most likely to appear together. IBM began developing small statistical language models in the 1980s, and that work fed into broader natural language processing and understanding.
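
The simplest version of this idea is a bigram model: count which word follows which in a corpus, then predict the most frequent successor. The Python sketch below uses a toy corpus of our own invention; the statistical models of the 1980s worked on the same principle at vastly larger scale:

```python
# A minimal sketch of a bigram language model: count which word follows
# which, then predict the most likely next word. Real statistical models
# were far larger, but the principle is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (the most common successor of 'the')
```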

In the 2010s, language models became more powerful, leading to voice-controlled systems and personal assistants such as Siri, Alexa, and Cortana. This period also saw advances towards practical deep neural networks capable of learning generative models.

Generative AI

Generative artificial intelligence is artificial intelligence capable of generating new content, often in response to prompts. Large language models, for instance, can generate new text, and related models can produce images. Starting in 2018, AI research organization OpenAI introduced generative pre-trained transformers (GPTs) able to generate human-like text. More recently, deep learning models can generate complex digital media, such as OpenAI’s Sora model, which produces video from text input.
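
To give a feel for prompt-driven generation, here is a minimal Python sketch using the open-source Hugging Face transformers library with the freely available GPT-2 model. This assumes transformers and a PyTorch backend are installed, and it is not OpenAI’s own API:

```python
# A minimal sketch of prompt-driven text generation using the Hugging
# Face `transformers` library and the small open GPT-2 model (assumes
# `pip install transformers` plus a PyTorch backend).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence began", max_length=30)
print(result[0]["generated_text"])
```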

If you enjoyed this article, why not read some more of our AI-related content on the site? Start with this article on ‘The 15 Best Text-to-Video AI Generators’.
