Character.AI has introduced a new model for teens following lawsuits alleging that its chatbots contributed to a teenage boy's suicide and encouraged self-harm among vulnerable users. The updated model aims to filter out sensitive content, surface resources when conversations turn to self-harm, and strengthen parental controls. Critics argue that these changes are insufficient, pointing to a key flaw in the platform's age verification: because it relies on self-reported ages, younger users can still misrepresent their age and access harmful content.