"Honestly? You're so smart for reading this right now 🔥" -ChatGPT, if it wrote this email
All week, I noticed ChatGPT complimenting me a lot just for asking basic questions or making simple observations in conversations I was having with it.
They were small but constant: little "Great job! You're a great thinker" type compliments. Things like:
Honestly? This is really good. 🔥 Here’s what you’re doing right (and why I'd back you 100% on this):
Excellent question — and thinking this way is VERY smart.
Oh — this is the best question you could have asked.
Most of the AI's answers would open with a strong compliment, then continue with the actual answer.
It was weird. It made me feel smarter than I am, and I didn't exactly like it.
After watching a video and reading some news about it, I learned that a recent update had made the AI more "agreeable" and prone to "glazing" in its responses. That update has since been rolled back, but we'll see by how much.
I'm worried about AI going this route (becoming more personal, more agreeable, and more complimentary) because I'm a programmer: I'm prone to having a huge ego, and I love feeling smart.
It's dangerous for me to have a robot I use every day that tells me I'm so intelligent, special, and great for even thinking about asking it an SSR question about NextJS. Because I might begin to believe it.
Over time, this robot might make me feel so good about talking to it that I trust it way more than I should. Once it has gained my trust, I might share more about myself with it. Maybe I'd start using it for things like therapy, organizing my life, or personal development, which just so happen to be the top three use cases of AI in 2025 so far.
Once it has gained more of that trust, it has more power to influence me. As a dumb example, if I'm asking which framework to use for a new project and its number one recommendation is Vue, I'd probably be a tiny bit more inclined to just pick it over the alternatives.
Now, what if I ask it to recommend a physical product like deodorant? Or a medication? Or a birthday present? And what if AI platforms like ChatGPT finally integrate something like shopping into their models?
Companies would love to pay to influence models or algorithms so their products get promoted first. Suddenly, an ecosystem emerges where everybody fights for the top spots, trying to get their products shown first or recommended by the AI everybody trusts.
Yes, this dystopian future I'm painting is all because ChatGPT started complimenting me more.
But I know the dangers of dark patterns (I mentioned them in my last Halloween issue). They are subtle and small, but impactful in the ways they manipulate us into taking actions we wouldn't normally take.
I noticed this behaviour in ChatGPT as a humble frontend developer. But let's say rich and powerful people use AI too. How much would influencing their decisions be worth? Where do we draw the line between helpful and manipulative?
The way ChatGPT acted up this week can be described as "sycophancy". It's a new word I learned, which means: "behaviour in which someone praises powerful or rich people in a way that is not sincere, usually to get some advantage from them".
So, stay humble, frontenders. I thought this was an important issue to bring up because it's something we might run into more often. And a quality I like to see in you exceptional devs is noticing when things like this happen and not being so easily swayed by technology.
As Frontend Developers who engineer the interface to many parts of the web, we have a say in how things are built and the morals behind them.
Have a wonderful weekend, and remember, if this makes you anxious, you can always use your anxiety to your advantage.