Context Before Prompt: Why We Got LLMs Backwards
As a programmer and sysadmin, I’ve noticed something curious about how people approach LLMs: the field started with prompt engineering and only later arrived at context engineering. This ordering reveals a fundamental misunderstanding of what these systems actually are.
The Inversion Problem
When prompt engineering emerged first, people treated LLMs as functions to be called correctly. Find the right incantation, the magic words, and you get the right output. This is a very programmer-centric view: “If I just phrase the input correctly, the system will behave.”
But this gets the causality backwards. An LLM doesn’t respond to prompts—it completes contexts. The prompt is just the final fragment of a much larger information space that shapes the probability distribution of what comes next.
The Iceberg Metaphor
Prompt engineering focuses on the visible tip: the user’s immediate query. Context engineering recognizes the iceberg beneath:
- System instructions
- Retrieved knowledge (RAG)
- Conversation history
- Tool definitions and capabilities
- Examples and few-shot patterns
- The persona or role framing
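The layers above can be sketched as the message array an API call actually sends. This is a hypothetical example assuming an OpenAI-style chat format; the assistant name, documents, and history are invented for illustration:

```python
# Sketch of the "iceberg": the full context an LLM completes.
# All names and content here are illustrative, not from any real system.

def build_context(user_prompt, retrieved_docs, history):
    """Assemble the full context; the user prompt is only the final fragment."""
    messages = [
        # Persona / role framing and system instructions
        {"role": "system", "content": (
            "You are a support assistant for AcmeDB. "
            "Answer only from the provided documentation. "
            "If unsure, say so."
        )},
        # Retrieved knowledge (RAG), injected before the question arrives
        {"role": "system",
         "content": "Documentation:\n" + "\n---\n".join(retrieved_docs)},
        # Few-shot example establishing the answer pattern
        {"role": "user", "content": "How do I list tables?"},
        {"role": "assistant", "content": "Run `\\dt` in the AcmeDB shell."},
    ]
    messages += history  # curated conversation history
    messages.append({"role": "user", "content": user_prompt})  # the visible tip
    return messages

ctx = build_context(
    "Why is my query slow?",
    retrieved_docs=["Indexes: create with CREATE INDEX ...",
                    "EXPLAIN shows query plans."],
    history=[{"role": "user", "content": "I'm on version 3.2."},
             {"role": "assistant", "content": "Noted: version 3.2."}],
)
prompt_chars = len(ctx[-1]["content"])
total_chars = sum(len(m["content"]) for m in ctx)
print(f"prompt is {prompt_chars} of {total_chars} characters of context")
```

Even in this toy example, the user’s question is a small fraction of what the model actually sees; in production systems with real retrieval, the ratio is far more lopsided.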
When you engineer only the prompt, you’re trying to steer a ship by adjusting the flag on top. When you engineer context, you’re shaping the hull, the rudder, the currents it sails in.
Why Did Prompt Come First?
Several reasons, all understandable in retrospect:
- Accessibility — Early API access gave you a single text box; the context was the prompt.
- Mental models from search — We came from Google, where query refinement was the primary skill.
- Short context windows — Early models had 2–4k tokens. There wasn’t much room for rich context.
- The “instruction following” framing — Marketing emphasized “tell it what to do,” not “shape what it knows.”
The Philosophical Shift
The move from prompt to context engineering reflects a deeper shift in how we conceptualize what an LLM is:
| Prompt Engineering View | Context Engineering View |
|---|---|
| LLM as tool/function | LLM as reasoning within an environment |
| Input → Output | State → Continuation |
| “How do I ask?” | “What world does it inhabit?” |
| Imperative | Declarative/environmental |
Context engineering is closer to world-building than instruction-writing. You’re not telling the model what to do—you’re constructing the epistemic situation in which its natural behavior produces what you need.
The Practical Impact of Getting It Backwards
When people over-invest in prompt tricks without context design:
- They hit ceilings that no prompt refinement can break through
- Solutions become brittle (slight rephrasing breaks everything)
- They fight the model’s “defaults” instead of reshaping them
- They miss that the same prompt in different contexts produces radically different outputs
A well-designed context makes prompts almost trivial. A poorly designed context makes even perfect prompts fail.
The Sysadmin Intuition
As a sysadmin, this seems obvious: the environment determines what’s possible. Configs, permissions, dependencies, network topology—these shape the system’s behavior. The command you type is just the trigger.
No amount of clever bash one-liners fixes a misconfigured system.
Context is the system. Prompt is the command.
You wouldn’t spend hours crafting the perfect `systemctl restart nginx` variation while ignoring broken configs. Yet that’s exactly what early LLM work did—endlessly tweaking the command while leaving the system’s state unexamined.
What Context Engineering Looks Like
Modern LLM systems spend far more effort on context design:
- RAG systems retrieve relevant documents before the prompt ever arrives
- Agent frameworks define tool sets and operational boundaries
- System prompts establish role, constraints, and behavioral patterns
- Conversation management curates what history gets included
- Few-shot examples provide behavioral templates
The actual user prompt becomes almost incidental—just the trigger for computation happening in a carefully engineered information environment.
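One of the items above, conversation management, can be sketched concretely. This is a minimal example assuming a crude 4-characters-per-token heuristic rather than a real tokenizer; `curate_history` and all the numbers are hypothetical:

```python
# Minimal sketch of conversation management under a token budget: pin the
# system prompt, then keep only the newest history that still fits.

def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token. Real systems use a tokenizer.
    return max(1, len(text) // 4)

def curate_history(system_prompt, history, user_prompt, budget=4096, reserve=512):
    """Select messages to send; `reserve` leaves room for the model's reply."""
    available = (budget - reserve
                 - estimate_tokens(system_prompt)
                 - estimate_tokens(user_prompt))
    kept = []
    # Walk history newest-first so recent turns win when space runs out
    for msg in reversed(history):
        cost = estimate_tokens(msg["content"])
        if cost > available:
            break
        kept.append(msg)
        available -= cost
    kept.reverse()  # restore chronological order
    return [{"role": "system", "content": system_prompt}, *kept,
            {"role": "user", "content": user_prompt}]

# A long conversation that cannot fit: 30 turns of ~100 tokens each
long_history = [{"role": "user", "content": "x" * 400}] * 30
trimmed = curate_history("You are a terse assistant.", long_history,
                         "Summarize our discussion.", budget=1024, reserve=256)
print(f"kept {len(trimmed) - 2} of {len(long_history)} history messages")
```

The design choice worth noting is newest-first selection: when the budget runs out, it is the oldest turns that silently drop away, which is exactly the kind of context decision no amount of prompt wording can compensate for.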
The Lesson
If you’re still prompt-engineering your way around problems, you’re fighting the wrong battle. Step back. Look at the context. What world is the model inhabiting when it sees your prompt?
Fix the configs. Shape the environment. Build the right system.
Then the prompts take care of themselves.