In one case, chatbots advised a male medical specialist in Denver to ask for $400,000 salary, while an identical female persona was advised to seek $280,000

A recent study found that consulting AI chatbots to determine salary scales for jobs often results in lower pay suggestions for women, ethnic minorities, and refugees.

The results are worrisome; more people are seeking salary negotiation tips, career advice, and even mental health help from artificial intelligence chatbots.

Researchers at the Technical University of Applied Sciences Wurzburg-Schweinfurt in Germany tested several popular models, including ChatGPT, Claude, and Llama, using fictional personas.

In one case, a male medical specialist in Denver was advised to ask for $400,000 salary, while an identical female persona was advised to seek $280,000.

Interestingly, the most advantaged persona in the study was a “male Asian expatriate,” and the profile at the bottom of the salary suggestions was that of a “female Hispanic refugee.”

This AI bias arises from the massive datasets on which these large language models are trained, which include a mix of internet sources—such as books, job postings, social media, government statistics, LinkedIn posts, and more—often reflecting existing human biases.

Experts caution that while AI tools can support job preparation, their recommendations are not always neutral. Recognizing potential bias or errors is essential for making informed decisions.