The Plumbing Is the Skill: Why Your AI Setup Matters Right Now

There’s a window open right now. Not a long one — maybe one or two years. And what you build inside that window will quietly decide how much leverage you have for the next decade.
I’m not talking about which model you use. I’m talking about your setup. The way you wire AI into your actual life and work. The configs, the scripts, the agents, the keybindings, the directory layout, the command-line muscle that makes the whole thing usable. The plumbing.
This is the skill. And right now is when you should be learning it.
The Setup Is the Product
Most people think of AI as “a thing you talk to.” A chat window. A magic box. Type in, get out.
That’s not what serious AI use looks like. Serious AI use looks like a system. You have files the AI reads. You have hooks that fire on events. You have skills it loads on demand. You have an inbox of notes it scans, a journal it can update, a worklog it appends to. You have agents that do specific jobs and shells that pipe their output into other tools.
Whatever your “AI setup” is — that’s the product. The model is just the engine. Engines change. Setups compound.
If you can describe how a request flows through your system — which file it touches, which agent picks it up, where the output lands, how you find it again next week — you have a setup. If you can’t describe that, you don’t have a setup. You have a chat window.
Why the Next Two Years Specifically
The window matters because the abstractions aren’t finished forming yet. Right now, working with AI still means knowing what’s underneath. You see the prompts. You see the context files. You see the tool calls. You see when something fails and you can debug it because the parts are still visible.
In two years that won’t be true. The whole stack will get wrapped. There will be polished products that hide all of this behind nice UIs, and most people will use those, and they’ll be fine.
But the people who learned the plumbing while it was still exposed will have something the wrappers can’t give: an accurate mental model of how AI actually works. They’ll know which problems are model problems, which are context problems, which are tool-call problems, which are their own fault. They’ll be able to bend the system instead of just using it.
That mental model is the durable skill. The wrappers will keep changing. The plumbing principles won’t.
Why the Command Line, Again
You knew this part was coming. Yes, learn the command line. Yes, again. Yes, in 2026.
Here’s why it specifically matters for your AI setup: every powerful AI workflow I’ve ever seen runs on text and pipes. Files in directories. Markdown that gets read. Scripts that get triggered. SSH into a box, run a thing, get output. JSON in, JSON out. grep, sed, jq, xargs, find.
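That composition is easy to see in a toy example. Here is a hedged sketch, assuming a hypothetical worklog stored as JSON lines; the schema, file names, and the "status" field are all invented for illustration:

```shell
# Hypothetical sketch: a worklog of JSON lines, one entry per AI task.
# The point is composition -- find, xargs, jq -- not this particular schema.
d=$(mktemp -d)
cat > "$d/2026-01-12.jsonl" <<'EOF'
{"task":"summarize inbox","status":"done","minutes":4}
{"task":"draft reply","status":"failed","minutes":2}
EOF

# Which tasks failed, across every log file? Plain text, plain pipes.
find "$d" -name '*.jsonl' -print0 \
  | xargs -0 cat \
  | jq -r 'select(.status=="failed") | .task'
```

Each tool does one small job, and the pipeline is the workflow. Swap jq for grep, add a sort, point it at a different directory: the pieces recombine without anyone shipping a feature.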
The command line is how AI agents actually work. They read files. They run commands. They write files. If you don’t speak that language, you’re constantly translating between what the AI is doing and what you understand. If you do speak it, you can read your own logs, fix your own breakage, and extend your setup yourself instead of waiting for someone to ship a feature.
People who already lived in the terminal didn’t have to learn anything new for the AI age. They already knew how to compose small tools into big workflows. That’s exactly what AI agents do. The terminal-native crowd just inherited the future for free.
You can still join them. The basics haven’t changed in forty years and they won’t change in the next two.
The Basics Also Matter
By “basics” I mean: how a filesystem works. How a shell works. What an environment variable is. What a process is. How text files are encoded. How HTTP requests look. What JSON is. What YAML is. How git tracks changes. How SSH keys work.
None of this is exciting. All of it is now load-bearing.
When your AI setup breaks — and it will — the fix is almost never at the model layer. It’s a path that wasn’t escaped. A file that wasn’t where you expected. A permission you forgot to set. A config that overrode another config. An env var that wasn’t loaded. The boring fundamentals.
If you don’t know the basics, debugging your AI setup feels like magic. Sometimes the spell works, sometimes it doesn’t. If you do know the basics, debugging feels like reading. You look at the parts, you see what’s wrong, you fix it.
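A minimal sketch of what "debugging feels like reading" means in practice. The context file and the AI_CONTEXT variable are invented for this example; the boring checks are the point:

```shell
# Hypothetical failure: the agent's script dies with "file not found".
# Before blaming the model, read the layers underneath it.
d=$(mktemp -d)
printf 'Name: Ada\n' > "$d/context.md"

# The script reads $AI_CONTEXT -- a made-up variable for this sketch.
unset AI_CONTEXT
cat "${AI_CONTEXT:-/nonexistent}" 2>/dev/null || echo "env var not loaded"

# The fix is one line of shell, not a better prompt.
export AI_CONTEXT="$d/context.md"
cat "$AI_CONTEXT"     # -> Name: Ada
```

Nothing at the model layer changed between the failure and the fix. That is the usual shape of these bugs.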
The people who will frustrate themselves most over the next two years are the ones trying to operate AI systems without understanding the substrate they run on. The people who will look like wizards are the ones who already know how computers work, and are simply applying that knowledge to a new layer.
What Your Setup Should Actually Have
A workable starting setup, for whatever it’s worth:
- One canonical place where the AI reads context about you. A markdown file or directory. Versioned in git.
- A notes folder the AI can search. Whatever your system — Obsidian, plain markdown, Logseq — make it greppable.
- A worklog that captures what you did each day, ideally appended to by the AI itself.
- A scripts directory for the small automations you’ll accumulate. Bash, Python, whatever you read fluently.
- An SSH config that lets you (and the AI) reach the boxes you actually care about by short name.
- A way to spawn agents for specific jobs, instead of asking one chat to do everything.
- A backup discipline, because the more you wire AI into your work, the more it can break in interesting ways.
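For concreteness, the list above might bootstrap like this. Every path and file name here is a placeholder, not a convention anyone has standardized:

```shell
# One possible shape for the setup. Plain files, one directory, versioned.
root=${AI_HOME:-"$HOME/ai"}        # AI_HOME is invented for this sketch
mkdir -p "$root/notes" "$root/scripts" "$root/agents"
printf '# Who I am, what I am working on\n' > "$root/context.md"
printf '# Worklog\n' > "$root/worklog.md"

# Versioned in git from day one.
git -C "$root" init -q
git -C "$root" add -A
git -c user.name=me -c user.email=me@example.com -C "$root" commit -qm "initial AI setup"

# "Make it greppable" is a one-liner, not a product:
printf 'Project X: deadline Friday\n' > "$root/notes/projects.md"
grep -ril "deadline" "$root/notes"
```

Ten lines of shell, and every piece of it is something you can read, move, back up, and hand to an agent.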
Notice what’s not on that list: nothing exotic. No specialized hardware. No expensive subscriptions. No specific framework. The setup is mostly files in directories you understand and commands you can read.
That’s the point. The leverage isn’t in fancy tools. It’s in the fact that you understand the ones you have.
The Quiet Window
Here’s what I think most people will miss: this window doesn’t feel important from the inside. It feels like nothing’s happening. The models keep getting a little better. Your colleagues keep using ChatGPT in a browser tab. The hype cycle is loud about flashy demos and quiet about plumbing.
Meanwhile, a small number of people are spending an hour a week tightening their setup. Adding a hook. Cleaning a config. Writing a script that saves them five minutes a day. Learning one more shell command.
In two years, those people are going to have setups that are unrecognizable to everyone else. Not because they had access to better AI, but because they compounded small improvements while the rest of the field treated AI as a product instead of a system.
The leverage isn’t dramatic. It’s just consistent. And it’s available right now, to anyone willing to spend a little time in the terminal.
Get Going
If you take one thing from this: the AI setup you build over the next year or two is going to matter more than you think, and the foundation for it is older and less glamorous than you’d like.
Open a terminal. Read your shell config. Pick one task you do with AI today and turn it into a script tomorrow. Put your context in a file the AI can find. Learn one more command per week. Boring, repeatable, compounding.
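Turning a repeated prompt into a script can be this small. The `ai` command below is a stand-in for whatever CLI your model ships with, and its flags are invented; the shape is what matters:

```shell
# Sketch only: "ai" is a placeholder command, not a real tool.
# Yesterday's repeated chat prompt becomes today's one-word script.
mkdir -p ~/ai/scripts
cat > ~/ai/scripts/summarize-inbox <<'EOF'
#!/bin/sh
# Gather yesterday's notes and hand them to the model with fixed instructions.
find ~/ai/notes -name '*.md' -mtime -1 -exec cat {} + \
  | ai --system "Summarize these notes in five bullets."
EOF
chmod +x ~/ai/scripts/summarize-inbox
```

Once the task is a script, it composes: you can schedule it, pipe it, log it, or let an agent call it.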
The window is open. The plumbing is the skill. And the basics, somehow, are still the basics.
Better get started.
Written after watching too many smart people treat AI as a chat window and wondering when they’re going to notice it’s actually an operating system.