A team of scientists from Google DeepMind and the London School of Economics tested nine large language models (LLMs) by presenting them with scenarios that traded "pain" or "pleasure" against game scores. The researchers aim to establish a framework for assessing AI sentience, while acknowledging that AI may never truly experience emotions. The study highlights the limitations of interpreting LLM outputs as evidence of sentience and cautions against anthropomorphizing AI.