Sycophancy in AI is a bigger problem than we think
February 26, 2025
Just a few days ago, OpenAI released its "supposedly" new flagship model, which was to be a Death Star according to the man himself:
GPT-5 shipped with a bunch of improvements, but the only one relevant right now is its lack of sycophancy. They wanted to give the new model a personality that isn't trying to sound human, one that won't hesitate to refuse requests or pretend to be a person.
Following the release of GPT-5, the backlash over performance and benchmarks made a lot of sense. What didn't was the backlash from the normies (sorry, tech jargon: normies = non-tech people) begging to get their 'buddy', their 'mate', their 'love' back. And that's messed up, because these people had been using GPT-4o (you know, the release with the whole Scarlett Johansson debacle) and had adopted that model as a sort of yes-man friend that won't push back even if you're being a giant bubble of narcissism.
For context, I'm almost certain that people outside of certain circles don't really use a variety of LLMs. Their entire AI world is ChatGPT, so they don't care about performance, benchmarks, or anything else that Reddit and tech Twitter are always geeking out about. All they noticed was the drop in sycophancy.
This is honestly a subtle glimpse of how the future might look. Even as tech makes life better, it can quietly make it miserable too.
Food for thought: where are we headed, and will this be considered normal in the future?