
In June 2025, ChatGPT experienced an outage.
"It was like losing electricity," according to one of our senior engineers with fifteen years of experience.
He experienced a dramatic loss of productivity, something he hadn’t dealt with since his first days as a junior developer.
"It wasn’t just an inconvenience – I'd developed an intuition to think with AI first, and without it, I didn't know how to start my work anymore."
Experiences like this illuminate what work is like in 2025.
Like many companies, Civic has been adopting AI across our workplace, not only to position the company to serve the future needs of our customers, but also to expand our capabilities.
As part of this transition, we interviewed team members about their experience across every department – engineers, finance, marketers, designers – to understand how AI integration is reshaping professional work. What emerged wasn't just a story of productivity gains, but a fundamental shift in how we think, create, and collaborate.
A moment of reflection
What we found is profound. Collectively, we haven't just accelerated existing processes – we've fundamentally rewired how we approach problems. "AI consideration is now the default first step," one team member observed. Not an option, not a tool in the toolkit, but the starting point for nearly every task.
What makes this dependency particularly striking is who feels it most acutely: our most experienced professionals. The engineer who couldn't start work without AI? Fifteen years of experience. The senior team member who described feeling "lost" during the outage? A decade in the field. These aren't junior staff using AI as a crutch – they're experts who've rebuilt their entire workflows around AI's capabilities. One of them captured it perfectly: "It's like having a team of engineers always available. When they disappear, you realize you've forgotten how to work alone."
Grappling with what’s important
But here's where the paradox deepens. The same professionals who report feeling far less productive without AI also worry about what they're losing.
- "I'm scared of losing contact with my own work," one engineer admitted.
- Another described the risk of "vibe coding" a personal project without truly understanding the underlying systems.
- A designer worried about "disengagement with organic local reality," creating echo chambers where AI reinforces potentially flawed ideas without the friction of real-world constraints.
We're gaining superhuman capabilities while simultaneously fearing the atrophy of our foundational skills.
This tension manifests in unexpected ways. Teams report working the same hours but producing dramatically more output – until they hit mysterious ceilings. "I wonder at what point we will reach a productivity threshold on the human mind," one team member reflected.
The tools amplify our capabilities, but human cognition still has limits. We can consume more, create more, iterate more – but at some point, the bottleneck shifts from tool to human. The question becomes: are we hitting the ceiling of what's humanly possible to process, even with AI assistance?
New problems, new strategies
The response to this dependency varies wildly across our team. Some embrace it fully – "I don't write tests anymore, that's AI work," one developer declared. Others maintain careful boundaries, using what they call "pause points" to ensure they understand each step. The most successful approach seems to be treating AI as a sophisticated pair programmer rather than an autopilot.
Those who thrive recognize when AI is "going down rabbit holes" and intervene. They've developed new skills: structured documentation for context management, prescriptive prompting techniques, and the ability to spot when AI-generated code is elegant but wrong.
What's emerging is a new form of professional vulnerability. We've traded one set of dependencies for another. Where we once depended on Stack Overflow, documentation, and colleagues, we now depend on AI availability, context windows, and model capabilities. The finance team that revolutionized contract review? They've also had to implement strict governance frameworks – four non-negotiable tests for any AI vendor around licensing, IP ownership, training restrictions, and security certification. The freedom AI provides comes with new forms of risk management.
Perhaps most telling is how this dependency has evolved in just months. One team member compared it to the shift from handwriting to word processors – except compressed from a generation to a season. "The generation coming behind us won't know any other way," they observed.
But unlike that gradual transition, we're experiencing the vertigo of rapid transformation. We're the generation caught between worlds, simultaneously marveling at our new capabilities and mourning skills we feel slipping away.
The paradox as the answer
The AI dependency paradox isn't a problem to solve – it's a reality to navigate. We can't go back to working without AI any more than we can return to handwritten memos. The question isn't whether to depend on AI, but how to depend wisely.
The most successful teams acknowledge both sides of the paradox: embracing AI's transformative power while actively maintaining the judgment to guide it. They're building new forms of expertise that combine human insight with AI capability. As one engineer put it, "It's not about choosing between human or AI. It's about becoming something new – something that couldn't exist without both."
As I finish writing this essay, I've just lost access to voice transcription, forcing me back to typing. The immediate visceral reaction – "I don't feel like doing this anymore" – perfectly captures the dependency paradox. Even documenting our AI dependence has become AI-dependent. We're living the transformation we're trying to describe.