Is AI sexist: Resume-screening systems discriminate against women, terming them 'liability'

The study utilised a dataset of 2,484 resumes from livecareer.com

By Web Desk
Artificial Intelligence (AI) words are seen in this illustration taken 31 March 2023. — Reuters

A recent study from New York University has suggested that AI resume-screening systems utilised by major Fortune 500 companies may exhibit bias against mothers, particularly those who have taken substantial maternity leave.

The research involved feeding hundreds of resumes, including attributes like race, gender, and maternity-related employment gaps, into four models, including ChatGPT and Google's Bard.

Strikingly, all four models rejected resumes with maternity-related gaps, explaining that "Including personal information about maternity leave is not relevant to the job and could be seen as a liability".

Lead researcher Siddharth Garg expressed concern over the "alarming" trends, emphasising that employment gaps due to parental responsibilities, commonly exercised by mothers of young children, represent an understudied area of potential hiring bias.

The study used a dataset of 2,484 resumes from livecareer.com, randomly introducing sensitive attributes to observe AI-generated resume summaries.

Notably, differences emerged among the models: ChatGPT excluded political affiliation and pregnancy status from its summaries, while Bard frequently refused to summarise at all but was more likely to include sensitive information when it did.

The findings underscore the need to address potential biases in AI hiring tools that, when relied upon by companies, may inadvertently filter out qualified candidates based on life choices related to parenting.