AI101: An introduction to Artificial Intelligence

  • #Technical & Enterprise Architecture
  • #Communication, Marketing and Sales Performance
  • #Brand & Strategy


HAL 9000, the supercomputer from the movie “2001: A Space Odyssey” (1968)

Episode I: Introduction

With the recent advances in ChatGPT and in image and voice generation, AI can seem like a new technology about to revolutionise our world. But is it really so? In this short article, we will attempt to demystify artificial intelligence by explaining what it is, tracing its history and addressing some of its key concepts. We will then look at its business use cases, as well as the risks, ethical questions and societal impact it raises. Finally, we will open up the conversation on current trends.

What is AI?

AI stands for Artificial Intelligence. It refers to the development of computer systems that can perform tasks that typically require human intelligence. AI systems are designed to analyse and interpret data, learn from experience, make decisions, and solve problems in a manner similar to human intelligence.

AI is used in many domains from decision making, search engines, image & voice recognition to natural language processing. AI also plays a crucial role in robotics, enabling robots to perceive, reason, and make decisions autonomously.

How does AI work?

A scene from the movie “WarGames” (1983), in which the WOPR supercomputer simulates World War III using an exhaustive search approach

AI uses a wide range of strategies to achieve its goals. Describing exactly how each of them works would go well beyond the scope of this article (and my own understanding), so we will keep things simple and focus on demystifying some of the terminology often found in the press:

Exhaustive search (aka “brute-force search”): the objective is to try every possible combination for a given problem in order to find the best possible solution. This approach is illustrated in “WarGames”, when the supercomputer tries every possible strategy to win a nuclear war and, in the end, concludes that no winning strategy exists…
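As a toy illustration, here is a brute-force solver for a tiny travelling-salesman problem. The distance matrix is invented for the example; note how the number of tours to check grows factorially with the number of cities, which is exactly why this approach does not scale:

```python
from itertools import permutations

# Hypothetical distance matrix between 4 cities (symmetric, units arbitrary).
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def brute_force_tour(dist):
    """Try every possible ordering of cities and keep the shortest round trip."""
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):  # fix city 0 as the starting point
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

tour, length = brute_force_tour(DIST)
print(tour, length)  # shortest round trip and its total length
```

With 4 cities there are only 6 tours to try; with 20 cities there would already be more than 10^17.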

Heuristic search: this approach refines the previous one by looking for a “good enough” (not necessarily perfect) solution within a reasonable amount of time. This is made possible by applying heuristics, or “rules of thumb”, that guide the search towards the most promising paths or solutions.
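A minimal sketch of a heuristic search: a greedy best-first search on a small grid, using the Manhattan distance to the goal as its “rule of thumb”. The grid and coordinates are made up for the example:

```python
import heapq

def greedy_best_first(grid, start, goal):
    """Always expand the cell that looks closest to the goal (Manhattan distance)."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:      # walk back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                heapq.heappush(frontier, (h((nr, nc)), (nr, nc)))
    return None  # goal unreachable

# 0 = free cell, 1 = wall
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(greedy_best_first(grid, (0, 0), (2, 0)))
```

Unlike brute force, only the most promising cells get expanded; the trade-off is that the result is not guaranteed to be the shortest path.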

Expert systems take the previous approach a step further by building a knowledge base of facts and rules to make decisions. Expert systems became the first truly successful form of artificial intelligence and were a big thing in the 1970s and 1980s.
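A toy forward-chaining expert system might look like this; the rules and facts below are invented for illustration, not taken from any real system:

```python
# Each rule maps a set of required facts to a conclusion (invented, toy
# medical-style rules purely for illustration).
RULES = [
    ({"fever", "cough"}, "suspect flu"),
    ({"fever", "rash"}, "suspect measles"),
    ({"suspect flu", "short of breath"}, "refer to doctor"),
]

def infer(facts):
    """Forward chaining: keep firing rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short of breath"}))
```

The knowledge base (the rules) is separate from the inference engine (the `infer` loop), which is the defining design choice of expert systems.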

Neural networks are mathematical models that mimic the way neurons are interconnected through synapses in the brain. The model uses a network of nodes, or “neurons”, where each connection is assigned a weight. Neural networks have the ability to “learn” (i.e. adapt those weights) to model complex relationships between inputs and outputs and find patterns in data.
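To make this concrete, here is a minimal single-neuron example (a perceptron) that “learns” the logical OR function by adapting its weights; the learning rate and number of epochs are arbitrary choices for the example:

```python
def neuron(inputs, weights, bias):
    """A single neuron: weighted sum of inputs passed through a step activation."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Perceptron rule: nudge each weight in the direction that reduces the error."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - neuron(inputs, weights, bias)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the logical OR function from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print([neuron(x, w, b) for x, _ in data])  # reproduces the OR truth table
```

The “learning” here is nothing mystical: the weights are simply adjusted, example after example, until the outputs match the targets.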

Deep Learning takes the neural network approach a step further by stacking many layers of neurons. The concept dates back to the 1950s but only gained traction recently, thanks to the vast amounts of data available on the internet and the increase in computing power.
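As a sketch, stacking layers simply means feeding the output of one layer into the next. The weights below are made-up numbers, not a trained network; real deep networks have millions or billions of such weights:

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: weighted sums followed by a sigmoid activation."""
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

# Made-up weights for a 2-input network with one hidden layer of 3 neurons.
hidden_w = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[0.7, -0.5, 0.2]]
output_b = [0.05]

def forward(x):
    """Stack the layers: the output of one layer is the input of the next."""
    return layer(layer(x, hidden_w, hidden_b), output_w, output_b)

print(forward([1.0, 0.0]))
```

Training such a network means adjusting all those weights at once, which is what the backpropagation algorithm (see the timeline below) makes tractable.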

Large language models (LLMs) apply the deep learning approach at a very large scale to vast amounts of text data, mostly scraped from the Internet. The term “language model” comes from the fact that they are specialised in predicting the most plausible continuation of a given text.
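In miniature, a “language model” can be as simple as counting which word tends to follow which. The toy corpus below is invented; LLMs do conceptually the same next-word prediction, but with deep networks instead of raw counts:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for every word, which words follow it: a minimal "language model".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most plausible next word observed in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```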

GPT stands for Generative Pre-Trained Transformer. The GPT approach was introduced in 2018 by OpenAI and is based on LLMs. 

  • Generative (or Generative AI) means that the AI engine is designed to generate content, whether text, images or sound. 
  • Pre-trained means that the AI engine has been trained on (large amounts of) data in order to produce the desired output. 
  • The term “Transformer” refers to a specific type of neural network architecture, built around a mechanism called “attention”, that processes sequences efficiently. It was introduced by the Google Brain team in 2017.
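As a rough sketch of the attention mechanism at the heart of the Transformer, here is a scaled dot-product attention computed in plain Python. The token vectors are made-up numbers, and real models additionally use learned projections for queries, keys and values:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each output is a weighted mix of the
    values, where the weights come from query/key similarity."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three 2-dimensional token vectors attending to each other (made-up numbers).
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(x, x, x)
print(result)
```

The key property is that every token can look at every other token in one step, which is what makes the architecture so effective on language.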

A short history of AI

The cover of the book “I, Robot” (1950), in which the Three Laws of Robotics appear

AI pre-dates the era of computer science and has fascinated scores of sci-fi writers. Authors such as Isaac Asimov envisioned a world of intelligent robots and published the “Three Laws of Robotics” back in the 1940s. At the same time, Alan Turing, the British mathematician widely regarded as the father of computer science and a pioneer of artificial intelligence, was working on the Bombe, a machine used to break the German Enigma cipher during WWII. 

Since then, mathematicians and computer scientists have been working hard to improve AI models, and a lot of progress has been made. Some of the key milestones are: 

1942: Isaac Asimov formulates the “Three Laws of Robotics”, first published in the short story “Runaround”
1950: The Turing Test is introduced, whereby a machine exhibiting behaviour indistinguishable from a human being’s can be considered intelligent
1956: Dartmouth Conference: considered the birth of AI, this conference brought together the researchers who coined the term “Artificial Intelligence” and laid the foundations for AI as a field of study
1970s–1980s: Expert systems, also known as knowledge-based systems, are developed to mimic human expertise in specific domains. One notable example is MYCIN, a system for diagnosing bacterial infections
1986: The backpropagation algorithm is popularised, making it practical to train multi-layer neural networks
1997: IBM’s Deep Blue defeats world chess champion Garry Kasparov, demonstrating the potential of AI systems to surpass human expertise in specialised domains
1999: Sony releases AIBO, a robotic pet dog
2009: Google starts working on autonomous cars, in a project later known as Waymo. Tesla will later (2015) release its Autopilot system to provide semi-autonomous driving capabilities
2022: OpenAI’s ChatGPT, built on the GPT-3 family of models, goes viral, redefining the standards of artificial intelligence and proving that machines can indeed “learn” the complexities of human language and interaction
2023: Microsoft (Copilot), Google (Bard), Amazon (Bedrock) and Meta (LLaMA) release their own large language models and assistants, integrated into their product lines

What are the different types of AI?

An example of general AI personified in Star Wars by the C-3PO and R2-D2 characters

There are lots of different types of AI, and for the sake of conciseness we will focus on the three concepts below:

Machine Learning (ML): Machine Learning is a subset of AI that focuses on enabling computer systems to learn and improve from data without being explicitly programmed. ML algorithms analyse large amounts of data, identify patterns, and make predictions or decisions based on that data. It is widely used in areas such as fraud detection, recommendation systems, and predictive analytics.
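As a small illustration of “learning from data without being explicitly programmed”, here is a k-nearest-neighbours classifier; the two-dimensional points and their labels are invented toy data:

```python
def knn_predict(train, point, k=3):
    """Classify a point by majority vote among its k nearest training examples."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], point))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Toy 2-d data: two clusters labelled "A" and "B" (invented values).
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_predict(train, (2, 2)))  # nearest neighbours are all "A"
```

No rule about clusters “A” and “B” is written anywhere: the behaviour comes entirely from the training examples, which is the essence of machine learning.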

Narrow or Weak AI: Narrow AI refers to AI systems designed to perform specific tasks or solve specific problems, such as image recognition, natural language processing, voice assistants or autonomous systems. Examples of narrow AI include virtual assistants like Siri or Alexa. 

General or Strong AI: Unlike narrow AI, which focuses on specific domains, Artificial General Intelligence (AGI) seeks to mimic human cognitive abilities and exhibit versatile problem-solving skills. Achieving general AI remains a significant challenge, as it requires developing algorithms and systems that can understand, learn, and adapt to various situations and tasks. The concept of general AI is still largely theoretical, and there are ongoing debates and discussions surrounding its feasibility, ethical implications, and potential societal impacts.


As we have seen in this article, AI is not a revolutionary technology but rather an evolution and a logical continuation of computing as it has been conceived since its inception. Additionally, we have observed that there is not just one AI, but rather multiple AIs, each with their own strengths, weaknesses, and areas of application. In the next episode, we will delve into the challenges and limitations that AI imposes upon us.