Sunday, April 12 – Monday, April 13
Chicago Teachers' Institute · San Francisco, CA
Sunday, April 12
Claude API Fundamentals & Prompt Engineering
Priya Sharma
4:00 PM – 6:00 PM
Start with the core API surface: messages, roles, tokens, and context windows. Work through progressively complex prompts and learn how model behavior shifts with system prompt design. You'll leave with a clear mental model for reasoning about latency, cost, and quality trade-offs. Hands-on: build a basic chat interface from scratch using the SDK.
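The core mental model above can be sketched as a chat loop over an alternating message history. This is a minimal offline sketch: `call_model` is a stub standing in for a real SDK call, and the function names are illustrative, not part of the API.

```python
# Minimal chat-loop skeleton. The Messages API takes a system prompt plus an
# alternating list of user/assistant turns; call_model is a stub standing in
# for a real client.messages.create(...) call.

def call_model(system: str, messages: list[dict]) -> str:
    # Stub: a real implementation would call the Claude API here.
    return f"(reply to: {messages[-1]['content']})"

def chat_turn(history: list[dict], user_input: str, system: str) -> str:
    # Append the user turn, get a reply, append the assistant turn.
    history.append({"role": "user", "content": user_input})
    reply = call_model(system, history)
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
chat_turn(history, "Hello!", system="You are a concise assistant.")
```

Keeping the full history in one list is what makes context-window and cost reasoning concrete: every turn you append is re-sent (and re-tokenized) on the next call.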
Tool Use & Function Calling
Priya Sharma
6:15 PM – 7:30 PM
A deep dive into Anthropic's tool use API. Learn how to define tools, handle multi-step tool call loops, and design safe, deterministic tool schemas. We'll build a real agent that fetches weather data, queries a database, and sends emails — all orchestrated by Claude. Hands-on: build and test a multi-tool agent end-to-end.
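The tool call loop described above boils down to routing model-requested tool calls to Python functions and returning their results. A sketch, with the model's response simulated as plain dicts so the routing logic runs offline; `get_weather` and the block shapes here are illustrative:

```python
# Sketch of a tool-dispatch loop. Tool definitions follow a JSON-schema shape;
# the assistant response is simulated as a list of content blocks.
import json

def get_weather(city: str) -> str:
    # Hypothetical tool implementation; a real one would call a weather API.
    return json.dumps({"city": city, "temp_f": 61})

TOOLS = {
    "get_weather": {
        "fn": get_weather,
        "schema": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
}

def dispatch(content_blocks: list[dict]) -> list[dict]:
    """Route tool_use blocks to Python functions, return tool_result blocks."""
    results = []
    for block in content_blocks:
        if block["type"] != "tool_use":
            continue
        fn = TOOLS[block["name"]]["fn"]
        results.append({
            "type": "tool_result",
            "tool_use_id": block["id"],
            "content": fn(**block["input"]),
        })
    return results

# Simulated assistant response asking to call a tool:
fake_response = [{"type": "tool_use", "id": "tu_1",
                  "name": "get_weather", "input": {"city": "San Francisco"}}]
tool_results = dispatch(fake_response)
```

In a real agent this runs in a loop: send messages, check whether the model stopped to request tools, dispatch, append the results, and call again until the model produces a final text answer.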
Building a RAG Pipeline from Scratch
James Okafor
8:30 PM – 10:30 PM
Retrieval-augmented generation demystified. Cover the full pipeline: chunking strategies, embedding models, vector store options (pgvector, Pinecone, Weaviate), retrieval scoring, and re-ranking. Walk through the architectural decisions that separate a quick demo from a production system. Hands-on: build a document Q&A tool over a custom knowledge base.
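The chunk → embed → score portion of the pipeline can be shown end-to-end in a few lines. This toy sketch uses fixed-size character chunking with overlap and a bag-of-words "embedding" scored by cosine similarity; a production system would swap in a real embedding model and a vector store, and all function names here are illustrative:

```python
# Toy retrieval sketch: chunking with overlap, bag-of-words vectors,
# cosine-similarity scoring, top-k retrieval.
import math
from collections import Counter

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    # Slide a fixed-size window with overlap across the text.
    chunks, step = [], size - overlap
    for i in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[i:i + size])
    return chunks

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query, return the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

Notice how much of the quality question lives in `chunk` and `embed`: overlap size, chunk boundaries, and the embedding model dominate retrieval quality long before re-ranking enters the picture.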
Streaming Responses & Real-Time UX
Priya Sharma
10:45 PM – 12:00 AM
How to stream tokens from Claude to your frontend and why it matters for perceived performance. Covers server-sent events, React streaming patterns with the Next.js App Router, skeleton loading, and how to handle tool call interleaving in streamed responses. Hands-on: add streaming to the chat interface you built in session 1.
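On the server side, the SSE piece is just framing: each token goes out as a `data:` line terminated by a blank line. A minimal sketch, where `token_stream` stands in for the SDK's streaming iterator and the `[DONE]` sentinel is a common convention rather than part of any spec:

```python
# Wrap a token stream in server-sent-event frames.
import json

def sse_frames(token_stream):
    """Yield one SSE frame per token, then a terminal done frame."""
    for token in token_stream:
        # Each SSE frame is "data: <payload>" followed by a blank line.
        yield f"data: {json.dumps({'token': token})}\n\n"
    yield "data: [DONE]\n\n"

frames = list(sse_frames(["Hel", "lo"]))
```

The frontend's job is the mirror image: parse frames as they arrive and append each token to the visible message, so the user sees text within the first few hundred milliseconds instead of waiting for the full completion.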
Monday, April 13
Testing & Evaluating LLM Outputs
James Okafor
12:15 AM – 1:00 AM
The discipline most teams skip. Learn how to write deterministic unit tests for LLM features, build eval harnesses that catch regressions, use Claude-as-judge patterns, and instrument your app for production monitoring. We'll cover Braintrust, PromptFoo, and a hand-rolled baseline. Hands-on: build an eval suite for the RAG pipeline from session 3.
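The hand-rolled baseline mentioned above can be small: run each case through the system under test, score against expected substrings, and flag anything that drops below a recorded baseline. All names here are illustrative, and the toy `fake_rag` stands in for the actual RAG pipeline:

```python
# Hand-rolled eval harness: substring scoring plus regression detection.

def score(output: str, must_contain: list[str]) -> float:
    # Fraction of required substrings present (case-insensitive).
    hits = sum(1 for s in must_contain if s.lower() in output.lower())
    return hits / len(must_contain)

def run_evals(system_fn, cases: list[dict]) -> dict:
    # Run every case through the system under test and record its score.
    results = {}
    for case in cases:
        out = system_fn(case["input"])
        results[case["id"]] = score(out, case["must_contain"])
    return results

def regressions(results: dict, baseline: dict) -> list[str]:
    # A regression is any case scoring below its recorded baseline.
    return [cid for cid, s in results.items() if s < baseline.get(cid, 0.0)]

# Toy system under test (a real one would be the RAG pipeline):
fake_rag = lambda q: "Vector stores index embeddings for similarity search."
cases = [{"id": "vs-1", "input": "What do vector stores do?",
          "must_contain": ["embeddings", "similarity"]}]
results = run_evals(fake_rag, cases)
```

Deterministic substring checks catch the cheap regressions; Claude-as-judge patterns layer on top of exactly this structure, replacing `score` with a model call for the cases a string match can't express.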