From AI Skeptic to Practitioner: My Journey Through the Stages of AI Adoption
I spent way too long at Stage 1.
Stage 1, The Denier: "AI is just a fad." Refuses to acknowledge AI capabilities. Still uses paper maps and handwritten grocery lists. Believes autocomplete is witchcraft.
You know the type — “AI is just a fad,” “I could do this myself,” “why would I need a robot to write my emails?” That was me. Firmly planted in denial, watching colleagues experiment with ChatGPT and Claude while I stubbornly stuck to my old workflows. And every AI failure screenshot that crossed my feed? That was justification. Every hallucination, every confidently wrong answer — I was sharing those, laughing at those, using them as evidence that the whole thing was overhyped.
But it wasn’t just skepticism. Free time is limited. My org had zero guardrails around AI usage, and I didn’t trust these tools with real data. So the calculus was simple: why carve out time to experiment with something I don’t trust, using data I’m not comfortable sharing, with no organizational guidance on what’s even okay? It was easier to let the failures confirm what I already believed than to sit down and actually try it.
Then my company offered an optional training. Four hours on Claude Code. I almost didn’t sign up.
The Turning Point
Here’s what made the difference: instead of following along with toy examples, I pointed the practice exercise at a real problem I was actually working on. Real stakes, real context, real payoff.
And it worked.
That’s when it clicked. I wasn’t learning a tool — I was getting my work done faster while learning. The skepticism didn’t stand a chance against actual results.
Where I Am Now
I’d place myself somewhere between Stage 4 (The Practitioner) and Stage 5 (The Evangelist) on the AI adoption spectrum. Not quite at “custom GPT for ordering coffee” levels, but definitely past the point of no return.
Stage 4, The Practitioner: "It's just part of how I work now." AI is embedded in daily workflows — not as a novelty, but as a tool. Knows which tasks benefit from AI and which don't. Still learning, but shipping faster.
Stage 5, The Evangelist: "Have you tried using AI for that?" Every problem has an AI solution. Runs 12 AI subscriptions. Has a custom GPT for ordering coffee. Considers non-AI users "legacy humans."
View the full interactive infographic to see all seven stages — from Denier to Balanced Human.
The Two Habits That Changed Everything
Two practices have been critical to making AI actually useful in my workflow:
1. Iterate, Don’t Restart
When the first response isn’t quite right, don’t abandon ship and re-prompt from scratch. Just tell it what’s wrong.
“This is close but the edge case when X is empty breaks.”
That kind of feedback is usually faster and gets better results than starting over. The first response is just a draft — the back-and-forth is where the magic happens.
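As a concrete sketch of what that iteration loop looks like in practice (the function and scenario here are hypothetical, not from a real session): the assistant's first draft works for normal input but crashes on the empty edge case, and one line of targeted feedback gets a guarded revision instead of a whole new conversation.

```python
def average_scores(scores):
    """First draft from the assistant: fine for normal input,
    but raises ZeroDivisionError when scores is empty."""
    return sum(scores) / len(scores)


# Feedback given instead of re-prompting from scratch:
#   "This is close, but the edge case when scores is empty breaks."

def average_scores_v2(scores):
    """Revised draft after one round of feedback: guards the empty case."""
    if not scores:
        return 0.0  # sensible default for the empty edge case
    return sum(scores) / len(scores)


print(average_scores_v2([]))        # 0.0
print(average_scores_v2([80, 90]))  # 85.0
```

The point isn't the guard itself — it's that the second draft cost one sentence of feedback, while a fresh prompt would have thrown away all the context the model already had.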
2. Learn While Doing
Instead of just accepting code that works, I ask:
- “Explain why this approach works”
- “What are the tradeoffs here?”
- “What would break if I changed X?”
This turns a code generator into a mentor. Six months from now, you’ll realize you’ve leveled up in ways that would’ve taken much longer through docs and Stack Overflow alone.
The Compound Effect
The combination is powerful: you ship faster and you learn more. That’s rare. Usually it’s one or the other.
Using AI to learn how to use AI better creates a compounding effect. Workflows build on workflows. Before you know it, you’re explaining token limits at parties and your coworkers are avoiding you at lunch.
Where the Industry Is Heading
Looking at the trajectory over the next several years, I think the adoption curve isn’t going to stop at Stage 6. The real destination — Stage 7 — is what I’d call The Balanced Human.
The hype cycle will settle. The people who get the most out of AI won’t be the ones who outsource everything to it. They’ll be the ones who figured out where the boundaries are: which tasks benefit from AI assistance, which ones need human judgment, and how to move fluidly between the two.
We’re already seeing early signs of this in the industry:
- Security teams are using AI for triage and pattern detection, but keeping humans in the loop for incident response decisions
- Development teams are shipping faster with AI-assisted coding, but learning that review and architecture still need human eyes
- Organizations are moving past the “AI everything” phase and into deliberate, targeted adoption
The next few years will be less about whether to use AI and more about how well you use it. The competitive advantage won’t be adoption — everyone will adopt. It’ll be judgment: knowing when to lean on AI and when to lean on yourself.
Where Are You on the Spectrum?
The seven stages, as I see them:
- Stage 1, The Denier: "AI is just a fad." Refuses to acknowledge AI capabilities. Still uses paper maps and handwritten grocery lists. Believes autocomplete is witchcraft.
- Stage 2: "Fine, I'll try it once." Uses ChatGPT secretly to win an argument. Immediately clears browser history. Still insists they "could have done it themselves."
- Stage 3: "It's actually pretty helpful." Uses AI for emails, recipes, and explaining things to them like they're five. Has accidentally said "as an AI" in real conversations.
- Stage 4, The Practitioner: "It's just part of how I work now." AI is embedded in daily workflows — not as a novelty, but as a tool. Knows which tasks benefit from AI and which don't. Still learning, but shipping faster.
- Stage 5, The Evangelist: "Have you tried using AI for that?" Every problem has an AI solution. Runs 12 AI subscriptions. Has a custom GPT for ordering coffee. Considers non-AI users "legacy humans."
- Stage 6: "I don't remember life before AI." Neural link pending. Outsources decisions, creativity, and small talk to AI. Has named their AI assistant. Existential crisis scheduled via AI calendar integration.
- Stage 7, The Balanced Human: "I know when to use AI and when not to." Has clear boundaries. Uses AI as a force multiplier for the right tasks. Still writes some things by hand — on purpose. Mentors others on effective adoption, not maximum adoption.
If you’re still at Stage 1 or 2, here’s my advice: find a real problem. Not a tutorial exercise, not a “let’s see what it can do” experiment. A real thing you need to solve. Point the AI at it and see what happens.
If you’re at Stage 5 or 6, here’s different advice: find the places where AI isn’t helping. Figure out your boundaries. The goal isn’t maximum AI usage — it’s maximum effectiveness.
That’s what got me moving. And the destination isn’t “cyborg” — it’s something more sustainable than that.
This blog post was drafted with assistance from Claude. The interactive infographic was also created with Claude. Meta? Maybe. Effective? Definitely.