A new study just dropped on how AI stigma is tanking engineers' reputations. Researchers ran an experiment with 1,026 engineers, each reviewing the same Python code snippet but told it was either AI-assisted or fully human-written. The verdict? If reviewers believed AI was involved, they rated the engineer's competence 9% lower on average. Same code. Different judgment.
The penalty hit women and older engineers the hardest. This shows that workplaces aren't just missing the technical side of AI adoption; they're ignoring the social fallout.
The study points out that companies focus on tools and training but overlook how bias shapes who actually uses AI and how those engineers are perceived.
Sandra Navarro led the research and shared:
> Researchers conducted an experiment with 1,026 engineers in which participants evaluated a Python code snippet that was purportedly written by another engineer, either with or without AI assistance. The code itself was identical across all conditions—only the described method of creation differed. The results were striking. When reviewers believed an engineer had used AI, they rated that engineer’s competence 9% lower on average, despite reviewing identical work—and the penalty was more severe for women and older workers. This competence penalty points to a fundamental misalignment in how organizations approach AI adoption. While companies focus on access, training, and technical infrastructure, they overlook the social dynamics that determine whether employees actually use these tools.