The Modern AI Stack (Topic 1) in Module 1 – AI-Landscape-Essentials (BG)

The Modern AI Stack

When you use ChatGPT, Copilot, or any AI-powered tool, you're interacting with the application layer — the tip of a larger technological stack. Understanding the three layers helps you make better decisions about which tools to use and why.

Layer 1: Foundation Models

At the base are foundation models — massive AI systems trained on enormous datasets of text, code, images, and more. Examples include GPT-5 (OpenAI), Claude Opus (Anthropic), Gemini Ultra (Google), and Llama (Meta). These are general-purpose reasoning engines.

Key traits:
  • Trained once at enormous expense (millions to hundreds of millions of dollars)
  • Can perform many tasks without task-specific retraining
  • Accessed via APIs (application programming interfaces) — essentially a standardized plug
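What "accessed via APIs" looks like in practice: the application sends a small JSON request and receives generated text back. The sketch below only assembles such a request; the endpoint URL and model name are illustrative placeholders, though the JSON shape mirrors the chat-style pattern several providers share.

```python
import json

# Placeholder endpoint -- each provider publishes its own URL and auth scheme.
API_URL = "https://api.example-provider.com/v1/chat/completions"

def build_request(prompt: str, model: str = "example-model") -> dict:
    """Assemble the request body most chat-style model APIs expect."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": 256,
    }

payload = build_request("Summarize our Q3 sales report in three bullets.")
print(json.dumps(payload, indent=2))
```

Everything product-specific lives outside this payload: the application layer decides what goes into the prompt and what to do with the response.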

Layer 2: APIs and Platforms

Between the raw model and end-user products sit platforms and APIs. OpenAI's API, Anthropic's API, and Google's Vertex AI let developers call a foundation model from their own software. Companies like Microsoft integrate these APIs into Office 365 (Copilot). Google integrates Gemini into Workspace.

This layer also includes:
  • Retrieval-Augmented Generation (RAG): connecting models to your private documents
  • Fine-tuning services: adjusting a base model for a specific domain
  • Orchestration tools: LangChain, LlamaIndex, and others that chain AI calls together
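The RAG pattern named above can be sketched in a few lines: retrieve the most relevant private document, then prepend it to the prompt before calling the model. This is a toy version under stated simplifications; production systems use vector embeddings and a real document store, while simple word overlap stands in here.

```python
# Tiny in-memory "document store" of private company documents.
DOCS = {
    "vacation-policy": "Employees accrue 1.5 vacation days per month.",
    "expense-policy": "Meals under 50 dollars need no receipt; travel needs pre-approval.",
}

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query
    (a stand-in for embedding-based similarity search)."""
    q_words = set(query.lower().split())
    best = max(DOCS, key=lambda k: len(q_words & set(DOCS[k].lower().split())))
    return DOCS[best]

def build_prompt(query: str) -> str:
    """Prepend the retrieved context so the model answers from private data."""
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many vacation days do employees accrue each month?"))
```

The model never needs retraining on your documents: the relevant text simply travels inside the prompt on every request, which is why RAG sits at the platform layer rather than the model layer.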

Layer 3: Applications

At the top are the products you actually use: ChatGPT, Microsoft Copilot, Google Gemini for Workspace, Claude.ai, Perplexity, GitHub Copilot, Cursor, and thousands of others. These are user interfaces and workflows wrapped around foundation models.

Why This Matters Practically

  • When a product advertises AI capability, it may simply be licensing access to someone else's model (many SaaS tools call the OpenAI or Anthropic APIs under the hood)
  • Data flows to the model provider regardless of which application you use, unless you're using a private or self-hosted deployment
  • Evaluating an AI product means evaluating both the application experience and the underlying model
  • Because foundation models are largely interchangeable, a competitor can build a similar product by switching providers — durable differentiation usually comes from the application layer, not the model alone
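The last point above can be made concrete: if the application layer codes against a thin interface rather than one vendor's SDK, swapping foundation model providers is a one-line change. The provider classes and canned replies below are illustrative stand-ins, not real API clients.

```python
from typing import Protocol

class ModelProvider(Protocol):
    """Minimal interface the application layer depends on."""
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] response to: {prompt}"

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] response to: {prompt}"

def summarize(provider: ModelProvider, text: str) -> str:
    """An application-layer feature; it has no idea which model backs it."""
    return provider.complete(f"Summarize: {text}")

# Swapping the foundation model provider touches only this line:
print(summarize(ProviderA(), "quarterly report"))
print(summarize(ProviderB(), "quarterly report"))
```

This is why evaluating an AI product means looking at both layers: the interface makes the model replaceable, so the lasting value sits in the workflow, data, and user experience built around it.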