AI news about new tools and services appears almost daily, and I find it overwhelming to keep up with the constant buzz. I prefer to start by understanding the fundamentals and core concepts. Building a strong foundation pays off in the long run, guiding deeper learning as I become more familiar with a topic. So what is the relationship between AI, machine learning, deep learning, and large language models (LLMs)?
AI
AI is basically using computers to simulate, or even exceed, human intelligence. Intelligence here means the ability to learn, infer, and reason, abilities we usually associate with humans.
AI isn’t new. Early systems were built in languages like LISP and Prolog, and expert systems pushed the field forward in the 1980s. All of this came long before machine learning became popular.
Machine Learning
Machine Learning (ML) is a subset of AI. It uses training data to:
- Make predictions
- Spot patterns
- Detect outliers
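To make "learning from training data" concrete, here is a minimal sketch of the prediction idea: fit a line to a handful of toy data points with ordinary least squares, then use the fitted line to predict a new value. The data (hours studied vs. exam score) is made up for illustration; real ML uses far larger datasets and richer models.

```python
def fit_line(xs, ys):
    """Return slope w and intercept b minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Classic least-squares formulas for a single feature.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# Training data: hours studied vs. exam score (toy numbers).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

w, b = fit_line(hours, scores)
print(f"Predicted score for 6 hours: {w * 6 + b:.1f}")
```

The point is the workflow, not the math: the model's parameters come entirely from the training data, and predictions for unseen inputs follow from those learned parameters.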
ML started to take off around 2010 and has improved a lot since then. It became the foundation for today’s large language models (LLMs).
Deep Learning
Deep Learning is a subset of machine learning.
It uses neural networks, built from many stacked layers, loosely inspired by how the human brain works. With so many layers interacting, it’s hard to understand exactly what’s going on inside, which is why the results can sometimes be surprising or unpredictable.
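A tiny sketch of what "many layers" means in practice: data flows through stacked layers, each applying weighted sums followed by a nonlinearity (here ReLU). The weights below are made-up illustrative numbers, not trained values, and real networks have millions or billions of them.

```python
def relu(x):
    # Nonlinearity: pass positives through, clamp negatives to zero.
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums plus ReLU."""
    return [relu(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [1.0, 2.0]                                         # input features
h = layer(x, [[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1])   # hidden layer
y = layer(h, [[1.0, -1.0]], [0.0])                     # output layer
print(y)
```

Training a real network means adjusting all those weights from data; the opacity comes from how the layers' numbers interact, not from any single layer being complicated.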
Deep learning powers most of today’s Generative AI.
Foundation Model
Large language models (LLMs) are one kind of foundation model.
Foundation models are trained on huge amounts of unlabeled, unstructured data. LLMs model language by predicting what comes next, one token at a time; chaining those predictions produces full sentences, paragraphs, or sections.
They generate new content, not just repeat stored text.
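As a toy illustration of "predict what comes next," here is a bigram model: count which word follows which in a tiny corpus, then pick the most frequent follower. Real LLMs learn these statistics over tokens with huge neural networks rather than simple counts, but the predict-the-next-item framing is the same. The corpus below is made up.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Greedy prediction: most frequent follower seen in training.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Generating longer text is just this step in a loop: predict a word, append it, and predict again from the new context.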
There are many kinds of models—not just for text, but also for audio, video, and code. These models became popular around 2020 and have driven a major shift in AI adoption.
By feeding them labeled data, we can fine-tune these general models for specific tasks.
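One common fine-tuning pattern can be sketched like this: keep a "pretrained" feature extractor frozen and train only a small classification head on labeled examples. Everything here is a hypothetical stand-in (the feature extractor, the toy sentiment labels); a real setup would use an actual pretrained model and far more data.

```python
import math

def pretrained_features(text):
    # Frozen stand-in for a pretrained model's features (hypothetical).
    return [len(text) / 10.0, float(text.count("!"))]

# Labeled data for the downstream task (toy sentiment: 1 = positive).
examples = [("great product!!", 1), ("love it!", 1),
            ("terrible", 0), ("awful service", 0)]

# Train a logistic-regression head with plain gradient descent;
# only w and b change, the feature extractor stays frozen.
w, b = [0.0, 0.0], 0.0
for _ in range(500):
    for text, label in examples:
        f = pretrained_features(text)
        p = 1 / (1 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))
        err = p - label
        w = [wi - 0.1 * err * fi for wi, fi in zip(w, f)]
        b -= 0.1 * err

def classify(text):
    f = pretrained_features(text)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
```

The appeal is exactly the "productivity" advantage in the table below: the general model supplies the representation, so the task-specific part needs only a small amount of labeled data.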
Types of Models
There are different types of foundation models, such as:
- Language
- Code
- Voice
- Vision
- And more …
Pros and Cons
| Advantages | Disadvantages |
|---|---|
| Performance – trained on enormous amounts of data | Compute cost – very expensive to train |
| Productivity – needs far less labeled data per task | Trust – training data provenance is often unclear |