Brilliant minds behind AI – why it’s not magic

AI is new, but the maths behind it is old.

Ideas like linear algebra, calculus, and probability have been around for centuries. So why is AI only taking off now?

Brilliant Minds behind AI

In the 1800s, Ada Lovelace wrote the first computer instructions. She believed machines could do more than just add numbers—they could follow rules and work with symbols. That’s a key idea in AI today.

In the mid-1900s, Grace Hopper created the first compiler. It let people write computer programs using words, not just numbers. This made computers easier to use and more powerful.

In the 1930s, Alan Turing described a machine that could follow any set of rules—a model for modern computers.

Soon after, John von Neumann designed a system where programs and data could live in the same memory. Today’s computers still use this setup.

Catching up to the maths

By the 1950s and 60s, the maths for AI was ready. Scientists had built models that could learn and adjust. But the technology wasn't: there wasn't enough data or computing power to make it work well.

That changed around 2012. Three big things came together:

  • The internet and smartphones gave us huge amounts of data

  • Cloud computing made it easy and affordable to use powerful computers

  • Graphics chips (GPUs), intended for video games, turned out to be perfect for training AI

The breakthrough came in 2012, when researchers used GPUs to win a global image recognition contest called ImageNet. That moment showed that deep learning really worked, and that GPUs were the key.

GPUs can do lots of simple maths really fast. A normal computer chip has a few cores. A GPU has thousands. That makes it great at handling the maths AI needs, especially linear algebra. Most GPUs today are designed by companies like NVIDIA and often manufactured in Taiwan.
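
To make that concrete, here is a small sketch in Python (the numbers are illustrative, not taken from the article). It counts the simple multiplications and additions hidden inside one layer of a modest AI model; every one of them is independent of the others, which is exactly the kind of work a chip with thousands of cores can share out.

    # Rough count of the simple operations inside one layer of a small model.
    inputs_per_example = 1_000   # 1,000 numbers describing one input
    neurons_in_layer = 1_000     # 1,000 outputs feeding the next layer

    multiplies = inputs_per_example * neurons_in_layer
    additions = (inputs_per_example - 1) * neurons_in_layer

    # Roughly two million simple sums and products for a single layer,
    # and each one can be computed independently of all the others.
    print(multiplies + additions)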

Thanks to all this, AI took off. Now we have tools that can see, listen, translate, write, and more. But under the hood, it’s still the same maths:

Linear algebra helps AI work with lots of numbers at once. Think of it like a massive spreadsheet—AI scans rows and columns of numbers to find patterns.
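
As a rough illustration (the data and weights below are made up), this Python sketch using the NumPy library shows that spreadsheet view: a small table of numbers, and one matrix operation that combines every column with a set of weights to score every row at once.

    import numpy as np

    # Three rows of data, four columns of numbers - a tiny spreadsheet.
    data = np.array([
        [0.2, 1.5, 3.0, 0.7],
        [1.1, 0.3, 2.2, 0.9],
        [0.5, 2.4, 1.8, 0.1],
    ])

    # One weight per column. In a real model these would be learned.
    weights = np.array([0.4, -0.2, 0.1, 0.8])

    # A single matrix-vector product scores all the rows in one go.
    scores = data @ weights
    print(scores)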

Calculus helps AI learn by showing how to change its answers step by step. Like measuring how fast a car speeds up or slows down, it helps AI adjust in the right direction.
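
Here is a minimal sketch of that idea in Python (the task and numbers are invented). The code repeatedly measures the slope of its error, the derivative from calculus, and nudges its one adjustable number a small step in the direction that makes the error shrink.

    # Learn a single weight w so that w * 3 comes out close to 12
    # (the right answer is w = 4).
    w = 0.0
    learning_rate = 0.01

    for step in range(200):
        prediction = w * 3
        error = prediction - 12
        # Slope of the squared error (error ** 2) with respect to w.
        gradient = 2 * error * 3
        # Step downhill: adjust w in the direction that reduces the error.
        w -= learning_rate * gradient

    print(round(w, 3))  # close to 4.0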

Probability helps AI guess and deal with uncertainty. Like checking the weather forecast and deciding if you need an umbrella, AI uses it to make smart, informed predictions.
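
As a small sketch (the scores are made up), here is the standard softmax recipe in Python for turning a model's raw scores into probabilities, so the model can say how confident it is in each possible answer rather than committing blindly to one.

    import math

    # Raw scores a model might assign to three possible answers.
    scores = {"cat": 2.0, "dog": 1.0, "bird": 0.1}

    # Softmax: exponentiate each score and divide by the total, so the
    # results are all positive and add up to 1 - a probability for each.
    total = sum(math.exp(s) for s in scores.values())
    probabilities = {label: math.exp(s) / total for label, s in scores.items()}

    for label, p in probabilities.items():
        print(f"{label}: {p:.2f}")  # cat: 0.66, dog: 0.24, bird: 0.10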

AI isn’t magic—like most technology, it’s applied maths.
