Unfair decisions made by AI systems can produce a phenomenon termed "AI-induced indifference": after experiencing unfair treatment from an AI, individuals become less likely to respond to injustices committed by other people. Research shows that people treated unfairly by AI are subsequently less inclined to hold human wrongdoers accountable. This spillover effect underscores the need for AI developers and policymakers to minimize bias and enhance transparency in AI systems in order to uphold social accountability and ethical standards.