In the rush to adopt AI, don’t hobble your best performers.
The way we roll out code-generation tools in dev orgs could slow meaningful adoption and keep high-performing engineers from raising their engagement, performance, and output while maintaining quality and accountability. The risk comes from over-prescriptive definitions of “what works” and from guard rails designed around less capable engineers.
Marcus Buckingham describes this dynamic in “First, Break All the Rules.” Gallup research found that management structures designed to avoid failure from poor performers often undermine top performers. Hospitals rotate nursing shifts to prevent emotional attachment and burnout in struggling nurses, but this prevents the best nurses from forming personal connections that drive exceptional patient care.
Just as shift rotations keep poor nurses from burning out but also keep excellent ones from excelling, AI policies and practices risk similar trade-offs: budget caps that prevent iterative refinement, agentic tools that hide real-time learnings from human operators, automated workflows that discourage frequent intervention, or transcription requirements that interfere with fluid problem-solving.
We’re still in early adoption. Let your most capable people take risks, experiment, and learn. Enable them to master the tooling on their own terms, or you will drag them down to the mean.