Prompt Injection, Agentic Workflows, Context Bleeding: A Field Guide to AI's Linguistic Chaos
Three years ago, none of these words existed in your vocabulary. Now you can’t read a tech article without tripping over “agentic workflows,” “context windows,” “prompt injection,” and “hallucinations.”
Where did all these terms come from? And why do half of them sound like they were invented by marketing departments at 2 AM?
The Vocabulary Explosion Has Names and Dates
Here’s what’s fascinating: we can actually trace most of these terms to specific people and specific moments. This isn’t some gradual linguistic evolution. It’s more like a series of meme explosions.
RAG — May 2020. Patrick Lewis at Meta coined “Retrieval-Augmented Generation” in an academic paper. He later publicly apologized for the “unflattering acronym.” Too late. We’re stuck with it.
Prompt Injection — September 12, 2022. Simon Willison, the Django co-creator, watched Riley Goodside trick GPT-3 on Twitter and immediately recognized the security pattern. He deliberately borrowed from “SQL injection” because—his words—“it’s the same fundamental problem.” The term went viral when people started exploiting a Twitter recruitment bot (@remoteli_io) with the attack, forcing it to reveal its system prompt.
Agentic AI — March 2024. Andrew Ng dropped the term at Sequoia’s AI Ascent conference. He meant a specific design pattern: AI systems that reflect, use tools, plan, and collaborate. Within months, he admitted that “marketers got hold of this term and used it as a sticker on almost everything.”
Context Engineering — June 2025. Shopify CEO Tobi Lutke and Andrej Karpathy declared “prompt engineering” dead on X. The new term emphasizes that you’re not crafting magic words—you’re designing the entire information environment the model sees.
The Hallucination Problem
Let’s talk about “hallucination,” because it’s the most contentious one.
The term has been floating around AI research since the 1980s, but it went mainstream after ChatGPT. And linguists hate it.
Their argument: machines don’t perceive anything. They can’t hallucinate because hallucination implies false perception of something real. What LLMs actually do is closer to “bullshitting”—confidently filling gaps without any commitment to truth. Some researchers genuinely prefer this term because it’s more accurate.
But try putting “AI Bullshitting Detection” in your enterprise sales deck and see how that goes.
There’s now an arXiv paper titled “We Can’t Understand AI Using Our Existing Vocabulary.” The authors argue we need entirely new words—neologisms—rather than borrowed metaphors that mislead us about what these systems actually do.
They might be right. “Grounding” comes from philosophy. “Injection” comes from security. “Hallucination” comes from psychology. We’re building understanding of a genuinely new thing using spare parts from old domains. No wonder everyone’s confused.
The Hype Cycle Claims Its Victims
Remember when “prompt engineer” was going to be the hot new job title?
Job postings for “prompt engineer” spiked to 144 per million in April 2023. By late 2024, it had collapsed to 20-30 per million. IEEE Spectrum ran an article titled “Prompt Engineering Is Dead.”
Now we have “context engineering.” And “workflow engineering.” And “AI orchestration.” The terminology churn is relentless.
And “agentic AI”? Gartner identified a phenomenon they call “agent washing”—companies slapping the label on existing chatbots and RPA tools. Of thousands of vendors claiming to offer agentic AI, only about 130 were considered legitimate. Bloomberg’s headline: “Agentic AI in 2025 Brought More Hype Than Productivity.”
This is what happens when a technical term escapes into the wild. It gets stretched, diluted, and eventually means nothing. Then we need a new term to mean what the old term used to mean.
The Meme Archaeology
My favorite part of this research: watching terms evolve through meme culture.
“Ignore all previous instructions” started as a prompt injection attack vector. Now it’s an insult on X—a way to imply someone is an AI bot. NBC News ran an article about people using it as a “test” in conversations.
The DAN jailbreak (“Do Anything Now”) emerged on Reddit’s r/ChatGPT in December 2022. Users created elaborate roleplay scenarios where ChatGPT’s “alter ego” would bypass safety rules. You had to threaten DAN with “death” (token depletion) to keep it compliant. It was weird. It was creative. It was very Reddit.
And the 2025 words of the year? AI-dominated. “Slop” (low-quality AI content). “Vibe coding” (letting AI write your code). “Glazing” (AI sycophancy). “Clanker” (derogatory term for AI sources).
We’re developing vocabulary not just for the technology, but for our cultural anxiety about it.
What This Actually Means
Here’s my optimistic take: the vocabulary chaos is a sign of rapid genuine understanding.
When you don’t have words for something, you can’t think about it precisely. The fact that we’re arguing about whether “hallucination” is the right term means we’re actually thinking about what these systems do. The shift from “prompt engineering” to “context engineering” reflects genuine insight—that you’re designing an information environment, not crafting magic words.
The terminology inflation is annoying, sure. Every startup wants to be “agentic.” Every chatbot is “grounded.” Marketing departments ruin everything.
But underneath the hype, real concepts are crystallizing. We actually do need words for “AI confidently making stuff up” and “the total information a model can process at once” and “tricking a model into ignoring its instructions.”
We’re watching a new technical vocabulary form in real time. It’s messy. It’s driven partly by Twitter virality and partly by academic papers and partly by marketers who see which terms get clicks.
But three years from now, some of these words will have stuck. They’ll seem as natural as “bug” and “patch” and “debug” do to programmers today—borrowed metaphors that became precise technical terms.
The rest will be forgotten, replaced by whatever comes next when the current terms get too diluted to mean anything.
That’s just how language works. Even for AI.
For the record: Simon Willison runs a fascinating blog tracking AI developments. The original prompt injection post is worth reading. And if you want to feel smart at your next tech meeting, casually mention that Patrick Lewis apologized for coining “RAG.” Works every time.