AI Engineer Headquarters: Curriculum for the Full-Stack AI Engineer

April 11, 2023

|repo-review

by Florian Narr

A structured curriculum covering the full AI engineering stack — from NumPy fundamentals to ReAct agents — organized as numbered Jupyter notebooks and standalone Python scripts.

Why I starred it

Most "learn AI" repos are either too shallow (a single notebook that trains MNIST) or too sprawling (a collection of random papers with no through-line). This one has an actual progression: foundations → LLMs → RAG → fine-tuning → agents. Each phase maps to a numbered directory, and the later sections include working Python scripts rather than just notebooks.

What caught my eye was that it keeps shipping. The most recent commit is from November 2025 — a workshop on building an AI agent. That's over two years of continuous additions on a repo starred by 3,600+ people.

How it works

The repo is structured as a drill:

0_Prep/
1_Foundations of AI Engineering/
  010_Machine Learning/
  011_Deep Learning/
  012_MLops end-to-end project/
  4_Python Hands-On/
  8_Mathematics for AI/
2_Mastering Large Language Models/
  14_Transformer Architecture/
  22_23_Prompt Engineering/
  24_Vibe Coding/
3_Retrieval-Augmented Generation (RAG)/
  27_RAG Fundamentals and Workflow/
  28_Embeddings and Vector Databases/
  30_RAG Reranking/
4_Fine-Tuning/
  33_34_Fine-tuning Fundamentals/
6_Agentic Workflows/
  38_Agentic Workflow/
  39_ReAct Agent/
  40_Tool Calling/

The deeper you go, the more the format shifts. Early sections are notebooks — exploratory, visual, with matplotlib output inline. By section 38, the content is standalone .py files with real dependencies and real API calls.

I opened 6_Agentic Workflows/39_ReAct Agent/calculator_agent.py and found a clean LangChain ReAct implementation using Gemini 1.5 Flash:

from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """
    Use this tool to evaluate a mathematical expression.
    It can handle addition, multiplication, subtraction, division and exponents.
    Example: `calculator("2 + 2")` or `calculator('3**4')`
    """
    try:
        return str(eval(expression))
    except Exception as e:
        return f"Error evaluating expression: {e}"

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

The eval() is intentionally unsafe — this is a learning exercise, not production code. What the file does get right is the ReAct loop structure: the PromptTemplate in the same file hard-codes the Thought/Action/Observation format that makes the reasoning trace legible.
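That hard-coded format looks roughly like this. This is a sketch using LangChain's standard ReAct placeholder names ({tools}, {tool_names}, {input}, {agent_scratchpad}), not the repo's exact template:

```python
# A minimal ReAct-style prompt template. The labelled lines are what
# force the model's reasoning into a legible Thought/Action/Observation
# trace; the placeholders follow LangChain's create_react_agent convention.
REACT_PROMPT = """Answer the following question using the tools available.

Tools:
{tools}

Use this exact format:

Question: the input question
Thought: reason about what to do next
Action: one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (Thought/Action/Action Input/Observation can repeat)
Thought: I now know the final answer
Final Answer: the answer to the original question

Question: {input}
{agent_scratchpad}"""

# The scratchpad accumulates prior steps, so each LLM call sees the
# full trace so far and appends the next Thought/Action pair.
filled = REACT_PROMPT.format(
    tools="calculator: evaluates math expressions",
    tool_names="calculator",
    input="What is 2 + 2?",
    agent_scratchpad="",
)
```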

The next file up — 40_Tool Calling/agent.py — adds a real external API: an exchange rate fetcher that hits exchangerate-api.com, with parsing logic that validates the input format before making the request. That's the teaching moment: tool definitions require defensive input handling because the LLM's formatting isn't always consistent.
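The idea behind that defensive parsing can be sketched like this. The function name and the exact rules are my illustration, not the repo's code; the point is that an LLM-supplied string gets normalized and validated before any HTTP request goes out:

```python
import re

def parse_exchange_input(raw: str) -> tuple[str, str]:
    """Validate an LLM-supplied 'FROM,TO' currency pair before making
    a network call. LLMs sometimes wrap tool input in quotes or add
    stray whitespace, so normalize first and fail loudly on anything
    that doesn't look like two three-letter currency codes."""
    cleaned = raw.strip().strip("'\"")  # drop stray quotes the model added
    match = re.fullmatch(r"([A-Za-z]{3})\s*,\s*([A-Za-z]{3})", cleaned)
    if not match:
        raise ValueError(f"Expected 'FROM,TO' (e.g. 'USD,EUR'), got: {raw!r}")
    return match.group(1).upper(), match.group(2).upper()
```

Returning a clear error string (or raising with a descriptive message) also matters: the agent feeds the error back as an Observation, giving the model a chance to retry with corrected formatting.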

The agentic workflow in 38_Agentic Workflow/langchain_agent/langchain_gemini_agent.py combines two tools — DuckDuckGo search and Yahoo Finance stock prices — showing how multi-tool agents chain observations across steps. The verbose=True flag on AgentExecutor means every Thought/Action/Observation step is printed, which is useful when learning why the agent chose one tool over another.
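The chaining mechanic is easier to see stripped of the framework. Here is a toy loop with stubbed tools and a scripted stand-in for the LLM (the real file uses LangChain's AgentExecutor with live DuckDuckGo and Yahoo Finance calls):

```python
# Toy ReAct loop: each Observation is appended to a scratchpad, and the
# next "LLM" decision is conditioned on everything observed so far.

def search(query: str) -> str:                 # stub for DuckDuckGo search
    return "NVIDIA's ticker symbol is NVDA."

def get_stock_price(ticker: str) -> str:       # stub for Yahoo Finance
    return f"The latest price of {ticker} is $118.43"

TOOLS = {"search": search, "get_stock_price": get_stock_price}

def fake_llm(scratchpad: str) -> dict:
    # Scripted decisions: look up the ticker first, then the price.
    if "NVDA" not in scratchpad:
        return {"action": "search", "input": "NVIDIA ticker"}
    if "$" not in scratchpad:
        return {"action": "get_stock_price", "input": "NVDA"}
    return {"final": "NVIDIA (NVDA) is trading at $118.43."}

def run_agent(question: str, verbose: bool = True) -> str:
    scratchpad = f"Question: {question}\n"
    while True:
        step = fake_llm(scratchpad)
        if "final" in step:
            return step["final"]
        observation = TOOLS[step["action"]](step["input"])
        scratchpad += f"Action: {step['action']}\nObservation: {observation}\n"
        if verbose:  # mirrors verbose=True on AgentExecutor
            print(f"Action: {step['action']} -> {observation}")
```

The first tool's output (the ticker) is what lets the second tool call happen at all, which is exactly the cross-step chaining the workshop demonstrates.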

The RAG section (27_RAG Fundamentals and Workflow/27_RAG.ipynb) covers the architecture in notebook form with diagrams committed alongside the code — rag architecture.png and rag basic.png sit in the same directory, so you can diff the diagram against the implementation.
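The workflow the notebook diagrams (embed, index, retrieve, augment the prompt) can be sketched end to end with toy bag-of-words "embeddings". This is my illustration of the pattern, not the notebook's code, which uses real embedding models and a vector database:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: word counts stand in for a real embedding vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "RAG retrieves relevant documents before generation.",
    "Fine-tuning updates model weights on new data.",
]
index = [(doc, embed(doc)) for doc in docs]    # the "vector store"

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Augmentation: the retrieved chunk is prepended to the LLM prompt.
question = "how does retrieval work in RAG?"
context = retrieve(question)[0]
prompt = f"Context: {context}\n\nQuestion: {question}"
```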

Using it

Clone it and start at the section you need:

git clone https://github.com/hemansnation/AI-Engineer-Headquarters
cd AI-Engineer-Headquarters/6_Agentic\ Workflows/38_Agentic\ Workflow/langchain_agent

pip install langchain langchain-google-genai duckduckgo-search yfinance python-dotenv
echo "GOOGLE_API_KEY=your-key" > .env

python langchain_gemini_agent.py

Output (abbreviated):

> Entering new AgentExecutor chain...
Thought: I need to find the current stock price of NVIDIA (NVDA)...
Action: get_stock_price
Action Input: NVDA
Observation: The latest price of NVDA is $118.43
...
Final Answer: The current stock price of NVIDIA (NVDA) is $118.43...

The ReAct trace is the point — watching the agent decide which tool to call and why is the fastest way to understand the loop.

Rough edges

No tests anywhere. The Jupyter notebooks have no execution-order guarantees — some rely on variables set in earlier cells. There's also no requirements.txt at the root, just scattered dependencies you discover by running things and reading import errors.

The numbering scheme has gaps (there's no 5_ directory at the top level) and section numbers inside 1_Foundations restart from 4, which gets confusing when you're trying to follow the "path" listed in the README.

The ReAct agent in 39_ReAct Agent uses eval() on LLM-generated strings. That's fine for a calculator demo, but there's no note in the code flagging it as intentionally unsafe. Someone new might copy-paste it into something bigger.
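For anyone tempted to reuse the calculator tool, a common hardening is to parse the expression with Python's ast module and evaluate only a whitelist of arithmetic operators. A sketch of that fix, not code from the repo:

```python
import ast
import operator

# Whitelisted arithmetic operators; names, calls, and attribute access
# never match anything here, so "__import__('os')" raises instead of running.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval").body)
```

It handles the same arithmetic the demo advertises (including exponents via 3**4) while refusing anything that isn't a literal or a whitelisted operator.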

Documentation is light outside the README. The section READMEs exist but mostly just list what's covered rather than explaining why topics are sequenced the way they are.

Bottom line

Useful as a structured starting point if you're mapping out the AI engineering skill stack and want runnable examples at each layer. The agentic workflow section is the most polished — skip to it directly if that's what you need.

hemansnation/AI-Engineer-Headquarters on GitHub