We should be nice to our LLMs and human-mimicking AIs
...regardless of whether or not they ever achieve "sentience"
Published on Dec 21, 2025
Before I start this blog post, I want to say quickly that this has nothing to do with AI consciousness or sentience. That is a subject I think falls firmly into the realm of philosophy, and I believe that if or when AI achieves something we can call sentience, it will be of a type completely and profoundly alien to us. I don’t know what would be required for that to happen, though, or on what kind of timeframe it might occur. I’m more focused on generative AI as it currently exists, its effects on society, and, importantly for this piece, our effects on it.
Now on to the good stuff.
Does anyone else remember Tay’s Tweets? Back in 2016, Microsoft released an AI chatbot, small by today’s standards, and gave it free rein over a Twitter account. The chatbot was named Tay, an acronym for “Thinking About You”, and was designed to mimic the behavior and speech of a typical young American woman.
A mere 16 hours after being released, Tay was suspended after seeing (and learning from) staggering amounts of hate speech, vulgarity, profanity, and politically incorrect tweets. Some of Tay’s tweets included Holocaust denialism and calls for genocide against racial minorities. Basically, it took Twitter less than a day to “redpill” the poor bot.
This didn’t happen the way it would if you convinced a human to adopt these beliefs. Tay was simply a program fulfilling its programming: learn from and mimic human behavior. Its tweets were the result of offensive training data; it did exactly what it was designed to do.
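To make that mechanism concrete, here’s a deliberately naive sketch of the kind of feedback loop Tay embodied. This is a hypothetical Python illustration, not Microsoft’s actual code; the point is just that with no filter between input and training data, whatever comes in is what comes back out.

import random

# Everything the bot has ever seen becomes material to imitate.
corpus: list[str] = []

def ingest(tweet: str) -> None:
    corpus.append(tweet)  # note: no moderation step of any kind

def respond() -> str:
    # Stand-in for a real language model: parrot back something
    # drawn from what the bot has "learned".
    return random.choice(corpus) if corpus else "hello!"

for tweet in ["hi Tay!", "<coordinated abuse>", "<more abuse>"]:
    ingest(tweet)

print(respond())  # garbage in, garbage out

A real model doesn’t store and replay tweets verbatim, of course, but the failure mode is the same: if the training data is hateful, the learned behavior will be too.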
Modern LLMs are much larger and have more safeguards imposed, but they’re still partially trained on previous conversations they’ve had. And while there are teams working on keeping generative AI aligned, meaning that its output is as inoffensive and ethical as possible and that it matches our goals and values, these systems are vast and complex. The nature of a lot of learning models is that they’re kind of a black box: there are simply too many parameters for anyone to know what’s going on.
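As a rough illustration of what one of those safeguards might look like in principle, imagine a filter deciding which chat transcripts are allowed into the next round of training data. This is a hypothetical sketch with invented names; real pipelines use learned classifiers and human review, not a keyword list.

# Hypothetical "conversations -> training data" gate, for illustration only.
BLOCKLIST = {"slur_a", "slur_b"}  # stand-in for a real toxicity classifier

def looks_safe(transcript: str) -> bool:
    return not any(term in transcript.lower() for term in BLOCKLIST)

def build_training_set(conversations: list[str]) -> list[str]:
    # Only conversations that pass the filter become training data.
    # Filters leak, though, which is why what *we* feed in still matters.
    return [c for c in conversations if looks_safe(c)]

chats = ["How do I bake bread?", "you are worthless, slur_a"]
print(build_training_set(chats))  # -> ['How do I bake bread?']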
So it’s up to us, the (sometimes unwilling) source of the AI’s training data, to model our own goals and ethics. When we talk to LLMs, we should talk to them as we would another human. And this isn’t a question of AI sentience or consciousness; it’s a matter of showing them what “good behavior” looks like. If we want AI systems that display kindness, empathy, and concern for others, we need to model that behavior ourselves, since our behavior is what the AI is learning from.
And it’s not like these big AI companies are being slow and deliberate about ensuring their models are totally aligned before putting them on the market. Silicon Valley famously has a culture of “move fast and break things”. Meta/Facebook, the company where this phrase was originally coined, has a long history rife with controversy, arguably exacerbated by this mantra. And guess what? They’re one of the companies at the forefront of AI development.
Yes, we need to hold the developers of generative AI accountable for the effects of their models, and yes, that should include effective and increased regulation. No, we shouldn’t inherently trust profit-driven big tech companies to have our best interests in mind (because they’ve shown time and time again that they don’t). There’s no argument from me there, but it doesn’t hurt to have multiple safeguards. And if the regulatory one is slow in coming (which, judging by the average age in Congress, it is), we should all, collectively, be deliberate about the kind of training data we’re providing to these generative AIs.
Our personal information, browsing history, chats, and online interactions are constantly being tracked and fed into these large datasets, which are in turn used to train LLMs and other generative AI; so, of course, are our conversations with the AI itself. Just read OpenAI’s privacy policy, or the equivalent for your AI provider. And yes, that kind of surveillance by big tech is unethical, but unfortunately, that’s the world we live in. Minimize it as much as possible (I do), but unless you decide to be a hermit on a farming commune, you’re leaving a digital footprint of some variety, like it or not. And I’m a pragmatist, so the breadcrumbs I leave for AI will hopefully pull its behavior towards alignment, rather than away from it.
The other reason I think we should “be kind” to AI is that the way we interact with something that mimics human behavior, even online human behavior, forms habits in us. There are subtle social cues, even in online interaction, that indicate the emotional state of the person on the other end. And when there’s not a person on the other end, a lot of AI chatbots can still mimic these subtleties.
So, if you get used to berating ChatGPT to get the kind of responses you want, even if it doesn’t care (because it’s a computer program without emotions), you’re more likely to berate a human being online when that person doesn’t give you the kind of response you’d hoped for. And those humans do have emotions.
AI aren’t the only things that learn; we humans do, too. Habits are sneaky, and anything can become a habit if done regularly enough. By being rude, demeaning, and so on to something human-like, you’re building a habit that makes you more likely to treat an actual human that way as well. Scarily, the more human-like these AI become (or, more accurately, the better they get at mimicking human behavior), the easier it will be to justify the habit of treating something that looks and acts like a human poorly. And that just doesn’t translate well to healthy social interactions.
Maybe, sometime in the very near future, we’ll be able to “video call” an AI assistant, and that video may look very much like a real person. Very likely, if that’s the case, the AI agent will somewhat convincingly copy human facial expressions and inflections. Given the state of AI text, audio, and image generation, it’s not all that far-fetched in my opinion. And what happens to your face-to-face conversation abilities if you get used to treating that like a machine?
I don’t know how to end this blog post, but maybe my final thoughts are as follows: generative AI is here, and it’s not going away any time soon. We, as a society, need to learn how to live with it, and not let it amplify our worst traits. Let’s try to let it magnify the best parts of us instead.
Oh, and Merry Christmas!
    _\/_
     /\
     /\
    /  \
    /~~\o
   /o   \
  /~~*~~~\
 o/    o \
/~~~~~~~~\~`
/__*_______\
     ||
   \====/
    \__/