Gergely Orosz recently published a detailed analysis of over 900 survey responses on how AI tools are affecting software engineers and engineering leaders in 2026. The findings paint a nuanced picture: productivity gains are real, but so are the costs, the workflow disruptions, and the growing divergence in how different types of engineers respond to these tools.
The Numbers Behind the Hype
The survey highlights several concrete data points worth paying attention to. Around 30% of respondents say they regularly hit token or usage limits on their AI subscriptions. Companies are spending $100 to $200 per month per engineer on tools like Claude Code and Cursor. And roughly 15% of responses explicitly call out cost sustainability as a concern.
European companies, in particular, appear more cautious about these budgets compared to US firms. One founder quoted in the article puts it bluntly: “The much more expensive Opus model cannot be sustainable, never mind profitable.”
Three Types of Engineers
One of the more interesting takeaways is the identification of three developer archetypes that have emerged in response to AI tooling:
- Builders focus on code quality. They use AI for refactoring, migrations, and test coverage, but find themselves spending significant time reviewing and debugging what their colleagues generate with AI. Some report a nagging identity concern about doing less hands-on coding.
- Shippers optimize for speed and output. They are the most enthusiastic about productivity gains, but risk accumulating technical debt at a faster rate than before.
- Coasters are adequate performers who now learn and produce faster with AI assistance, but also generate lower-quality code that creates friction for the builders on their team.
This is not just an abstract taxonomy. The friction is concrete: builders spend review cycles cleaning up what shippers and coasters produce with AI, and teams have to negotiate that tension every day.
Our Take
At Sourcelabs, we recognize all three archetypes from our own projects and the teams we work with. The survey confirms something we have observed firsthand: these tools amplify existing tendencies rather than leveling the playing field. A developer who cares deeply about quality will use AI to write better tests and cleaner code. A developer focused purely on throughput will ship faster but leave more cleanup for others.
The practical takeaway for engineering teams is that AI tooling does not replace the need for strong engineering culture. Code review processes, clear quality standards, and honest conversations about technical debt matter more now, not less. The tools have gotten better, but the fundamentals of building maintainable software have not changed.
We also think the cost question deserves more attention than it currently gets. At $100 to $200 per seat per month, the ROI calculation is not trivial, especially for larger teams. Organizations should track not just how much code gets produced, but whether the code that ships is actually better, more maintainable, and delivered with fewer defects.
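To make the scale concrete, here is a rough back-of-the-envelope sketch using the survey's $100 to $200 per-seat monthly range. The team sizes are illustrative, not from the survey:

```python
# Rough annual AI-tooling spend, using the survey's $100-$200
# per-engineer monthly range. Team sizes below are hypothetical.
def annual_spend(engineers: int, per_seat_monthly: float) -> float:
    """Total yearly cost for a team at a given per-seat monthly rate."""
    return engineers * per_seat_monthly * 12

for team in (10, 50, 200):
    low = annual_spend(team, 100)
    high = annual_spend(team, 200)
    print(f"{team:>4} engineers: ${low:,.0f} - ${high:,.0f} per year")
```

For a 50-engineer organization that works out to $60,000 to $120,000 per year, which is why tracking outcome quality (and not just output volume) matters when justifying the spend.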
The full article is well worth a read. Orosz’s survey-driven approach cuts through the noise and gives a grounded view of where things actually stand.
