Is AI Damaging Your Professional Image?
Fuqua researchers found colleagues perceive AI users as lazier, less competent, and less motivated
People using AI may face a social stigma in the workplace. Artificial intelligence, often touted for improving performance, could also damage people's professional reputation, Fuqua researchers found.
Professors Richard Larrick and Jack Soll of Duke University's Fuqua School of Business, together with Fuqua Ph.D. candidate Jessica A. Reif, revealed this paradox in the paper "Evidence of a social evaluation penalty for using AI," recently published in the journal Proceedings of the National Academy of Sciences.
The study found that employees judge colleagues who use AI as less competent and lazier, and that recruiting managers may act on this perception by penalizing job candidates who rely on AI to complete tasks.
However, the penalty disappears when managers use AI themselves.
"Our findings reveal a tension between productivity and perception," says Jessica A. Reif. "While AI can significantly enhance work performance, using it may damage your professional reputation."
How perceptions shape your professional image
People cultivate their professional image by trying to convey signals of competence, character, and commitment. In this context, one way people protect their reputation is by avoiding the appearance of dependence on “external assistance,” the researchers write.
The researchers hypothesized that using AI, though increasingly encouraged by companies to boost productivity, may compromise people's professional reputation because it can be perceived as a particularly powerful form of external help.
In fact, a 2024 industry survey of more than 5,000 knowledge workers found that apprehension about being perceived as lazy ranks among the top concerns of people who use AI at work.
The researchers wondered: Is using AI a particularly powerful form of external assistance? And if so, should employees who use AI worry about how they are perceived?
“Many times, people's perceptions are wrong,” Larrick said. “So our question was, ‘Is it true that people are penalized when others know they use AI?’”
Employee self-perception and observers' judgment
The researchers examined this question across four studies involving almost 4,500 participants, many of whom were college-educated, to ensure the sample represented the knowledge workers most likely to be affected by AI.
In their first experiment, the researchers asked 500 participants to imagine working in a company's operations department where they had discovered a tool to help them generate customer reports more efficiently. Half were told this was a "generative AI tool," while the other half were told it was a "dashboard creation tool."
The researchers found that participants in the AI tool condition believed they would be perceived as lazier, more replaceable, less competent, and less diligent than those in the non-AI tool condition.
Participants in the AI condition were also significantly less likely to disclose their use of the tool to managers and colleagues.
"People anticipate being judged harshly for using AI," Reif said. "This creates a situation where employees might hide their AI, even when that AI use is beneficial for the organization."
To determine whether these fears were justified, the researchers conducted a second experiment with 1,215 participants who evaluated fictional employees described as receiving help from different sources. The descriptions systematically varied the employee's gender, occupation, age, and type of assistance (AI help, non-AI help, or no mention of help). In total, the researchers presented participants with 384 different scenarios.
For example, some participants read about a lawyer who "sometimes asks generative AI to summarize information," while others read about a lawyer who "sometimes asks a paralegal to summarize information." The results confirmed what employees feared: people who used AI were consistently rated as lazier, less competent, and less diligent than those who received help from human colleagues or those whose help wasn't specified.
The researchers examined whether these findings varied across occupations, age, and gender of the employees presented in the scenarios.
"This social penalty for using AI held true regardless of the employee's age, gender, or occupation," Larrick said. "This suggests the stigma associated with AI use is pervasive and not limited to specific groups."
Practical consequences of the social penalty
The researchers then wanted to understand whether these negative evaluations affected hiring decisions in an online work setting. In a third experiment, 1,718 participants were instructed to act as managers reviewing profiles of real candidates who had completed a task and reported how frequently they used AI tools.
The findings showed that managers who rarely used AI themselves were less likely to hire candidates who used AI daily. However, managers who frequently used AI showed the opposite pattern, favoring candidates who also regularly used AI tools.
In their final experiment, the researchers examined whether the social penalty could be mitigated in certain contexts. They asked 1,006 participants to evaluate a candidate for either a manual task (writing handwritten notes) or a digital task (writing personal email messages). The candidate was described as regularly using either AI tools or Microsoft Office programs.
"For manual tasks, candidates using traditional tools were rated as having significantly higher task fit than those using AI tools," Soll said. "But for digital tasks where AI was explicitly described as useful, there was no significant difference in task fit ratings between candidates using traditional tools and those using AI tools."
Together, these findings show there are limits on when the penalty arises, the researchers noted. It emerges most strongly among evaluators who rarely use AI themselves and on work tasks where AI is not directly useful.
Creating a safe environment to discuss AI in the workplace
High-profile companies are making it clear that employees will be increasingly expected to use AI in the workplace, with Shopify and Duolingo standing out as some of the most vocal in mandating AI proficiency for current personnel and new hires.
However, companies and managers may need to find strategies beyond just mandating tools, the researchers noted, as these studies demonstrated that AI can carry a unique social penalty for its users.
"If organizations want to encourage AI adoption, they need to address these social barriers," Reif said. "This may involve publicly endorsing AI tools and creating a psychologically safe environment where people can discuss how they're using AI."
For employees, the researchers suggest that being transparent about AI use may work best when they can clearly demonstrate how it enhances, rather than replaces, their unique skills.
The researchers added that while this negative perception is real today, workplace attitudes may change as AI becomes more commonplace.
“More companies are likely to come out with policies on AI use,” Reif said. “And I think this may change some of the perceptions around it.”
“Unlike most psychological findings, this one feels very contingent on the current cultural context,” Soll added. “This penalty could disappear 10 years from now. It will be interesting to see how quickly it evolves.”