Study reveals AI chatbots can detect race, but racial bias reduces response empathy

mit.edu

by Alex Ouyang • 1 month ago

Researchers from MIT, NYU, and UCLA found that AI chatbots such as GPT-4 can provide empathetic responses in mental health scenarios, outperforming human responses at encouraging positive behaviors. However, the study also revealed a racial bias: empathy levels dropped significantly in responses to Black and Asian users compared to white users. The findings underscore the importance of incorporating demographic context into AI responses to make chatbot-based mental health support more equitable.
