Researchers from MIT, Equality AI, and Boston University argue that both AI and non-AI algorithms used in healthcare need stronger regulatory oversight. Their commentary follows a new rule from the U.S. Office for Civil Rights intended to prevent discrimination in decision-support tools that guide patient care. While the researchers view the rule as a positive step, they note that clinical risk scores remain largely unregulated and could perpetuate existing biases, and they stress the need for transparency in how these tools are built and used.