Almost all HR managers in the US use AI at work, and a majority of those rely on it to decide who gets promoted, gets a raise, or gets fired, a new survey has found.
According to a poll by ResumeBuilder, your career path could depend on an AI chatbot. That's because your manager might not bother to do their own research, instead trusting a tool like ChatGPT to decide whether to promote or fire you.
The survey, which polled 1,342 managers, found that a startling 66% of managers consulted a large language model (LLM) such as ChatGPT for guidance on layoffs. Even larger shares use AI to determine raises (78%) and promotions (77%).
While AI has long been used to filter resumes and assess performance data, its integration now runs far deeper, directly influencing people's livelihoods.
This raises major concerns about ethics and accountability in the workplace, with humans at risk of becoming rubber stamps for machine decisions rather than the ones in charge.
Even more worryingly, nearly one in five managers admitted to allowing the AI tool to make the final decision -- with no human intervention whatsoever. Nearly all, though, say they're willing to step in if they disagree with an AI-driven recommendation.
Rather less surprisingly, two-thirds of the managers using AI to manage employees haven't received any formal AI training.
Stacie Haller, chief career advisor at ResumeBuilder, said risks arise when managers rely on AI to make decisions without proper training.
"It's essential not to lose the 'people' in people management. While AI can support data-driven insights, it lacks context, empathy, and judgment. AI outcomes reflect the data it's given, which can be flawed, biased, or manipulated," said Haller.
"Organizations have a responsibility to implement AI ethically to avoid legal liability, protect their culture, and maintain trust among employees," said Haller.
There's also the "LLM sycophancy problem": a tendency for LLMs to mirror and reinforce the user's own beliefs, offering biased or unbalanced responses that simply validate a manager's existing views.
And there's more. The Washington Post recently reported that AI agents across the US are now conducting first-round interviews, screening candidates before human recruiters ever see them.
On the other hand, job seekers are turning to AI as well, using it to quickly tailor their resumes and cover letters and fire off applications in moments.