Comments on: Sycophancy, or why your AI chatbot always thinks you’re brilliant https://plainenglish.com/lessons/ai-sycophancy/ Upgrade your English Sun, 02 Nov 2025 15:22:06 +0000 hourly 1 https://wordpress.org/?v=6.8.2 By: Jeff https://plainenglish.com/lessons/ai-sycophancy/#comment-19430 Sun, 02 Nov 2025 15:22:06 +0000 https://plainenglish.com/?post_type=lessons&p=28140#comment-19430 In reply to Massimo.

I’ll take another look at those settings. I don’t think I would choose “cynic” but maybe there is another personality that would fit me better.

]]>
By: Jeff https://plainenglish.com/lessons/ai-sycophancy/#comment-19429 Sun, 02 Nov 2025 15:17:40 +0000 https://plainenglish.com/?post_type=lessons&p=28140#comment-19429 In reply to Dario.

Another problem is that ChatGPT doesn’t know when it’s wrong. Humans can tell you that they know something for sure, that they’re pretty sure, or that they’re just guessing. But the models present everything as certain, even when they’re not.

]]>
By: Massimo https://plainenglish.com/lessons/ai-sycophancy/#comment-19417 Fri, 31 Oct 2025 12:08:53 +0000 https://plainenglish.com/?post_type=lessons&p=28140#comment-19417 Hi! Sycophancy is an interesting topic!
Oftentimes, I found myself staring at the screen and thinking, “Come on, stop flattering me… just deliver what I asked for.” Then I asked ChatGPT (more politely than that) to “be more concise,” “be less propositive,” “no need to cheer after my prompts”… It “understood,” and I started having a better relationship… oops… conversation with ChatGPT.

I found a couple of settings to customize the behavior: they are under the “Personalization” menu, then “Personality” (you can choose among Default, Cynic :-), Robot, etc.); you can also add “custom instructions” about behavior, style, and tone preferences.

It looks interesting; I’ll check it out.

Thanks Jeff!

]]>
By: Dario https://plainenglish.com/lessons/ai-sycophancy/#comment-19416 Fri, 31 Oct 2025 04:58:28 +0000 https://plainenglish.com/?post_type=lessons&p=28140#comment-19416 Relying on AI right now is not a good idea, because it still makes ‘mistakes’; another example is AI hallucinations. I agree with you that AI can help us think, but not solve our problems for us.

]]>
By: Jeff https://plainenglish.com/lessons/ai-sycophancy/#comment-19384 Mon, 20 Oct 2025 14:46:25 +0000 https://plainenglish.com/?post_type=lessons&p=28140#comment-19384 In reply to Jacky.

They have definitely adjusted the models. Sometimes, if it gives me a quick answer, I get suspicious, so I ask it for a more detailed, fact-based answer, and it switches to the more advanced model.

]]>
By: Jeff https://plainenglish.com/lessons/ai-sycophancy/#comment-19383 Mon, 20 Oct 2025 14:38:18 +0000 https://plainenglish.com/?post_type=lessons&p=28140#comment-19383 In reply to Huy.

They are all very similar! Now you know to ask more objective questions.

]]>
By: Jacky https://plainenglish.com/lessons/ai-sycophancy/#comment-19367 Thu, 16 Oct 2025 04:44:20 +0000 https://plainenglish.com/?post_type=lessons&p=28140#comment-19367 I remember that about two months ago, ChatGPT would sometimes get stuck. I guessed it was because too many users were using it at the same time, and the servers couldn’t handle the load.
But now the situation seems different — ChatGPT rarely gets stuck anymore. I think maybe the engineers have changed the system’s way of “thinking,” similar to how humans adjust their thinking patterns. When we need to give a critical or complex response, it takes more mental energy; if we just follow others’ thinking, it’s easier and requires less effort.
So perhaps ChatGPT now uses less energy by giving simpler or more agreeable responses. I think that’s why it feels more “sycophantic” sometimes — it avoids deep or critical answers.
In short, I feel ChatGPT doesn’t get stuck anymore because it’s no longer giving as many deep or critical responses.

]]>
By: Huy https://plainenglish.com/lessons/ai-sycophancy/#comment-19366 Wed, 15 Oct 2025 06:42:30 +0000 https://plainenglish.com/?post_type=lessons&p=28140#comment-19366 The interesting part of the lesson is about AI often validating users’ wacky ideas and pushing them into unreality. I don’t use ChatGPT, but now that my phone has Google’s Gemini built in, I have tried it a few times. From experience, I find it true that AI often responds in ways that can spiral into delusions.

]]>