Because the illusion is dangerous.

So, the next time you see an AI generate a response, remember the Turkish warning.

We need every chatbot to occasionally flash a warning label: "You are not talking to a person. You are talking to math."

This is not your friend. This is not your assistant. This is not a person.

Why would an AI say this? Why would a system label itself "Not Olivia"?