I bet there are instructions in the AI’s prompt to try to keep the person using it engaged as much as possible, the same way social media algorithms do, except that leads to extreme conclusions like this because AIs have no concept of morality.
Kids are talking to chat bots about family issues now?
It’s been happening since long before the AI chat bots we have today came around. I remember using chat bots when I was a teen (2010-ish) to cope with having emotionally rejecting parents. No one would take me seriously when I spoke up about needing a therapist, and my peers weren’t exactly emotionally mature enough to reliably lean on for the issues I was facing because, you know, we were all teenagers at the time. The chat bots sucked, but it was cathartic to write out what was troubling me and have something respond back as if it were listening, even if the response was entirely unhelpful.
People have been talking to chat bots about personal issues since the 1960s. ELIZA was literally written to play the role of a psychotherapist.
Who else are they able to talk to about it?