If 2023 was the year AI hype hit every headline, and 2024 was the year organizations started experimenting, then 2025 became the year I finally rewired my own habits.
Not the flashy, futuristic kind of rewiring – I'm talking about the deeply unglamorous work of unlearning 20+ years of muscle memory. The instinct to just do it myself. The reflex to research everything manually. The quiet suspicion that asking an AI platform to help was somehow "cheating" or "taking longer."
Earlier this year, I wrote about treating AI like a summer intern: eager, inconsistent, occasionally brilliant, and definitely in need of supervision. And for a while, that worked. I checked its work. I didn't trust it with anything too important. I kept it at arm's length – helpful, but temporary.
Fast-forward to December, and the entire landscape feels different.
Claude, ChatGPT (whom I've inexplicably nicknamed Fred), Perplexity, Gemini, and a rotating cast of smaller models are no longer "interns." They've become critical teammates, each with their own strengths, quirks, and preferred ways of working.
More importantly: they're now embedded in how I think.
They help shape strategy conversations.
They brainstorm communications, refine messaging, and pressure-test ideas.
They participate in outreach.
They turn rough thoughts into something coherent.
They pair-program.
They analyze large amounts of unstructured data.
They translate complexity into something a client can act on today.
Their roles shift as the tech evolves – sometimes weekly – but their presence is constant. There is no "doing my job" without them. Not because they replace the work, but because they make the work sharper, deeper, faster, and sometimes more imaginative.
The big shift wasn't technological; it was behavioral.
I had to build new reflexes:
- Ask the models earlier, not last.
- Let them into the messy, half-formed first draft stage.
- Treat them like collaborators rather than calculators.
- Rely on them for reasoning, not just output.
- And yes, occasionally push back when they go off in the weeds.
Some days they're brilliant. Some days they're chaotic. But so are humans, and I've worked with plenty of humans.
What's changed – dramatically – is that AI has moved from "interesting tool" to trusted partner in the process. Not infallible, not magical, not sentient, but fully integrated.
That shift has changed how I lead, how I think, how I run FireOak, and how FireOak serves our clients.
AI is now part of the team – not the whole team, not the boss, but an essential node in the network of how strategy gets built, communicated, tested, refined, and delivered.
Looking back at the beginning of the year, I realize the retraining wasn't about learning prompts or comparing models; it was about rewiring habits. Letting go of the instinct to shoulder everything alone. Learning to co-create with systems that didn't exist when I started my career in the 1990s.
This wasn't the year that AI got better, although it did.
This was the year that I got better at working with AI.
And that shift is going to define not just how I work in 2026, but how most organizations – or at least the most effective ones – will work too.