Prompt Engineering is Dead. Long Live Context Engineering.
What We're Actually Doing with AI...Beyond Chat
After three years of building AI features at CoderPad, here's what I've learned: it's not about the "prompt"—the words you type into chat.
The real skill? Nailing the context sweet spot. Give the LLM just enough context to solve the task, but not so much that it hallucinates or costs explode.
And "context" isn't just what you type. It's images, documents, code snippets, audio, video—every mode of information. It's the delicate art of filling that context window—the AI's working memory—with exactly the right stuff.
"Context Engineering" captures this perfectly. It's the right term for what we're actually doing.
Why "Prompt Engineering" needs to retire
Simon Willison, co-creator of the Django framework, recently wrote about why it's time to ditch the old terminology:
The term context engineering has recently started to gain traction as a better alternative to prompt engineering. I like it. I think this one may have sticking power.
Willison gets why "Prompt Engineering" became a problem, despite initially supporting it:
I've spoken favorably of prompt engineering in the past - I hoped that term could capture the inherent complexity of constructing reliable prompts. Unfortunately, most people's inferred definition is that it's a laughably pretentious term for typing things into a chatbot!
Ouch. But he's not wrong.
Why context engineering captures what we're really doing
Context engineering nails it because it describes the actual work. Context is way more than just what you type into a chat box. Even when you're using ChatGPT's chat application, you're doing context engineering every time you upload an image, drag in your company's quarterly report, paste code snippets, add audio files, or simply provide more conversational details and examples that help the AI understand exactly what you need.
Here's what's happening underneath: even your images and audio get converted to text-like tokens. System prompts, chat history, current input, file contents—it all becomes carefully orchestrated data that the LLM processes.
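To make that flattening concrete, here's a minimal sketch of a chat request becoming one ordered sequence. The role markers and whitespace tokenizer are hypothetical stand-ins for a real chat template and BPE tokenizer, but the point holds: the model never sees "a chat," just tokens.

```python
def tokenize(text: str) -> list[str]:
    # Stand-in for a real tokenizer (e.g. BPE); splits on whitespace.
    return text.split()

def build_context(system_prompt, history, user_input, file_contents=None):
    """Flatten every context source into a single token stream."""
    parts = [f"[SYSTEM] {system_prompt}"]
    for role, text in history:
        parts.append(f"[{role.upper()}] {text}")
    if file_contents:
        parts.append(f"[FILE] {file_contents}")
    parts.append(f"[USER] {user_input}")
    return tokenize("\n".join(parts))

tokens = build_context(
    system_prompt="You are a code reviewer.",
    history=[("user", "Review my PR"), ("assistant", "Paste the diff.")],
    file_contents="def add(a, b): return a + b",
    user_input="Here it is. Any issues?",
)
# System prompt, chat history, and uploaded file all end up in one sequence.
```

Notice there's nothing special about the "file" here: once it's in the sequence, it's just more tokens competing for the same window.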
When you're building AI systems that users actually want to use, you're architecting entire information environments:
Deciding what data to include (text prompts, images, videos, audio, code snippets, documents, emails, entire databases—the works)
Structuring that information so it makes sense
Timing when to pull in more context
Filtering out the noise that confuses everything
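The four steps above can be sketched end to end: candidate sources get scored, noisy ones get filtered, and the survivors get packed under a token budget. The relevance scores, threshold, and budget here are hypothetical stand-ins for whatever retrieval signals and model limits a real app would use.

```python
BUDGET = 50  # max tokens we allow (a real context window is far larger)

def token_count(text: str) -> int:
    return len(text.split())  # rough stand-in for a real tokenizer

def assemble_context(task: str, sources: list[dict]) -> str:
    # 1. Filter out the noise: drop sources below a relevance threshold.
    relevant = [s for s in sources if s["relevance"] >= 0.5]
    # 2. Decide what to include: most relevant first.
    relevant.sort(key=lambda s: s["relevance"], reverse=True)
    # 3. Pull sources in only until the budget is spent.
    chosen, used = [], token_count(task)
    for s in relevant:
        cost = token_count(s["text"])
        if used + cost <= BUDGET:
            chosen.append(s)
            used += cost
    # 4. Structure the result so the model can tell the sources apart.
    sections = [f"## {s['name']}\n{s['text']}" for s in chosen]
    return "\n\n".join([f"# Task\n{task}"] + sections)

sources = [
    {"name": "quarterly_report", "relevance": 0.9,
     "text": "Revenue grew 12% quarter over quarter."},
    {"name": "old_memo", "relevance": 0.2,
     "text": "Reminder: the office fridge will be cleaned Friday."},
    {"name": "code_snippet", "relevance": 0.7,
     "text": "def churn_rate(lost, total): return lost / total"},
]
context = assemble_context("Summarize our Q3 performance.", sources)
# The fridge memo never makes it in; the report and code snippet do.
```

Swap in a real embedding-based retriever for the hand-written scores and this is the skeleton of most RAG pipelines.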
But even perfect context management isn't enough. You also need to nail the use case selection—finding spots where AI genuinely adds value. The sweet spots are tasks like synthesis across multiple sources, summarization of complex information, and creative problem-solving where there's no single "right" answer. These are also inherently areas where improving models make your features better and better over time.
Look for use cases where AI can handle ambiguity, connect disparate information, or generate novel combinations. Avoid trying to force AI into rigid, deterministic workflows where users expect perfect consistency every time. And then there's the user experience challenge: making AI features feel natural and helpful, not like you're forcing users to learn a new way to work.
It's information architecture meets product strategy meets UX design—definitely not just creative writing.
The cool kids are already on board
This isn't just academic theorizing. Industry heavyweights are jumping on the context engineering train.
Shopify CEO Tobi Lutke weighed in:
I really like the term “context engineering” over prompt engineering.
It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM.
And Andrej Karpathy (OpenAI founding member, former head of Tesla's Autopilot team, and the guy who gave us "vibe coding") responded:
+1 for "context engineering" over "prompt engineering".
People associate prompts with short task descriptions you'd give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step. Science because doing this right involves task descriptions and explanations, few shot examples, RAG, related (possibly multimodal) data, tools, state and history, compacting... Too little or of the wrong form and the LLM doesn't have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down. Doing this well is highly non-trivial. And art because of the guiding intuition around LLM psychology of people spirits.
Of course, Karpathy (as he often does) goes deeper into the nitty-gritty details of building full AI applications:
On top of context engineering itself, an LLM app has to:
- break up problems just right into control flows
- pack the context windows just right
- dispatch calls to LLMs of the right kind and capability
- handle generation-verification UIUX flows
- a lot more - guardrails, security, evals, parallelism, prefetching, ...
So context engineering is just one small piece of an emerging thick layer of non-trivial software that coordinates individual LLM calls (and a lot more) into full LLM apps. The term "ChatGPT wrapper" is tired and really, really wrong.
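One slice of that coordinating layer can be sketched as a simple control flow that dispatches each request to a model of the right capability. The `call_llm` stub, the model names, and the keyword router are all hypothetical; a real app would hit a provider API and likely use a learned classifier rather than heuristics.

```python
def call_llm(model: str, prompt: str) -> str:
    # Stub standing in for a real provider API call.
    return f"[{model}] response to: {prompt[:30]}"

def route(prompt: str) -> str:
    # Crude control flow: long or code-heavy prompts go to the big model,
    # everything else to the cheap one.
    hard = len(prompt.split()) > 100 or "```" in prompt
    return "big-model" if hard else "small-model"

def handle(prompt: str) -> str:
    return call_llm(route(prompt), prompt)

print(handle("What's 2 + 2?"))  # dispatched to the small, cheap model
```

Multiply this by guardrails, retries, evals, and caching, and you get the "thick layer of non-trivial software" Karpathy is describing.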
When the people building the future start using new terminology, it's probably time to pay attention. In a rapidly evolving industry, using the right terms signals you understand what's actually happening—not just what was trendy six months ago.
Bottom line: Context engineering isn't just more accurate terminology—it captures the complex reality of building AI products that actually work. It's not about getting better at prompting; it's about leveling up your entire approach to AI product development.
Plus, you'll never have to explain that you're not just "typing things into a chatbot" ever again.
Time to update our vocabulary.