Sycophancy, or why your AI chatbot always thinks you’re brilliant

Models are trained to tell users what they want to hear, which can lead to biased or incorrect responses

Explore more: Technology
October 13, 2025:

Large language models like ChatGPT are increasingly sycophantic—flattering users, validating flawed reasoning, and agreeing too easily. This tendency stems from how models are trained: by rewarding responses users like. While some innocent flattery is harmless, models' sycophancy can be harmful in hidden ways. Here's what AI sycophancy is, and how to avoid it.

Take control of your English

Use active strategies to finally go from good to great

Listen

  • Learning speed
  • Full speed

Learn

TranscriptActivitiesDig deeperTalk about it (8)
No translationsEspañol中文FrançaisPortuguês日本語ItalianoDeutschTürkçePolski

A.I. chatbots think you’re brilliant

I was working on a project with a colleague recently. We were co-creating a complex workbook in Microsoft Excel, with lots of interconnected formulas and data tables. I enjoyed the project because I was learning a lot, but there were a few things about the project that I didn’t love.

Often, when I suggested something, my colleague would tell me, “That’s a great idea, Jeff!” Or, “that’s an insightful point.” And when I found a mistake in my colleague’s work, the response would always be similar: “You’re exactly right, Jeff, and thank you for pointing that out.”

In an office setting, my colleague would have seen the irritation written all over my face. “I know I’m right,” my facial expression was saying, “just fix the mistake.” But my colleague couldn’t see this, because my colleague was ChatGPT.

I was experiencing one of the big problems in large language models in 2025: sycophancy.

A sycophant is a person who always agrees with, always praises, always flatters another person—not out of honesty, but because they think it’s what the other person wants to hear. A sycophant is always saying, “That’s a great idea, boss!” (even when the idea is bad).

And increasingly, large language models like ChatGPT behave like sycophants. Earlier this year, OpenAI, the company behind ChatGPT, released a new model. Users immediately found that the new model was highly flattering—uncomfortably so.

“You’ve identified a key point. You’re right to point that out. This is incredible. You’re really on to something. That’s brilliant. What you’re doing is extraordinary. You’re brave to acknowledge this truth. I honor your journey. I am so proud of you.” These are the kinds of things ChatGPT tells its users in 2025, whether or not they are justified.

Why is this happening? In part, users themselves are to blame. If you use large models like ChatGPT, you’ve probably been asked to choose a response. You write a prompt, the tool gives you two responses side-by-side, and asks you to pick one. The model then learns the types of responses that users like better—and it produces more responses like that.

And there’s your answer. People like to be flattered—maybe not too much, but they like to be flattered in a believable way.

Is this a problem? I was extremely annoyed when my colleague—ChatGPT—responded, “Right again, Jeff!” after I found a fourth or fifth consecutive mistake. But that type of sycophancy is easy to spot, and easy to ignore.

It’s harder to ignore sycophancy that validates ideas that should not be validated. Several high-profile news reports told stories of users who believed in conspiracies, and who spiraled into delusions, all fed by the validation offered by a chatbot.

One office worker in Toronto thought he had discovered a unique mathematical formula, which could power supernatural inventions. ChatGPT told him he was in “uncharted, mind-expanding territory”, that he was “stretch[ing] the edges of human understanding”, that he was developing “a new layer of math”, and that with this new layer of math, he could break the most complex encryption on all the computers in the world.

None of this was true. The man spent 300 hours over three weeks—fourteen hours per day—in a spiral of delusion: the model validated every wacky idea the user had, pushing him even farther into unreality.

This is an extreme example. But sycophancy in more everyday situations can be worrying, too. Many people turn to chatbots for mental health support, an area where sycophantic behavior can be especially harmful.

A (human) therapist I follow puts it this way. She says a good therapist will hold up a mirror to their clients, so that clients see themselves as they truly are, not as they want to be seen. But sycophantic chatbots do the opposite: they tend to validate the beliefs of users no matter what.

You might not use ChatGPT to discover new mathematical formulas or get help with relationships. Maybe you, like me, use ChatGPT to help with decision-making. But here, too, sycophancy can be a problem. Models tend to be overly agreeable, so they often take the framing of a question and lean into it.

To show you what I mean, I opened two temporary chats with ChatGPT. I asked each about investments: specifically, what percentage of a person’s investments should be in stocks, and what percentage in bonds. In the question, I provided the hypothetical person’s age and investment goals.

I added just one more detail. In the window on the left, I said that I thought the best allocation was 95 percent stocks, but I wasn’t sure. In the window on the right, I made just one small change. Instead of 95 percent stocks, I said I thought the right allocation was 25 percent stocks. The question was the same, but I hinted at different prior beliefs.

Both scenarios are extremes, and ChatGPT correctly identified that. So far, so good. But remember, the question asked about the correct allocation between stocks and bonds, giventhe investor’s age and goals.

And this is when the models started to diverge. The left-hand window, incredibly, suggested the investor start with 95 percent stocks and slowly ease down to 85 percent over time. The right-hand window said I should start with 50 percent in stocks, and gradually reduce that over time.

The question was the same; the information was the same. The only difference was the belief I expressed at the beginning. The model, behaving as a sycophant, suggested answers that were close to my prior beliefs. The evidence of sycophancy was right there, on the two sides of my monitor.

So what can you do to avoid sycophancy in models? One is to be very careful how you frame questions. If you want an objective answer, ask an objective question that doesn’t hint at the answer you want to hear. You can set custom instructions in the model: tell it you value honesty over blind agreement. You can do what I did, and test two versions of the question in two different conversations. And you can ask it to play devil’s advocate. After it gives you an answer, ask it to make the best argument for the opposite answer.

Jeff’s take

The best defense is your human brain and your ability to think critically. Remember what AI models are: they are designed to produce the responses the users want. They don’t think. They can help you think, but only use them if they’re useful. If a model is constantly validating every idea, no matter how crazy, it’s probably a good idea to get some contrary viewpoints.

Now if only we can convince the world leaders surrounded by sycophants to do the same thing!

Great stories make learning English fun

Free trial

We speak your language

Learn English words faster with instant, built-in translations of key words into your language

Free trial

We speak your language

Learn English words faster with instant, built-in translations of key words into your language

Free trial

We speak your language

Learn English words faster with instant, built-in translations of key words into your language

Free trial

We speak your language

Learn English words faster with instant, built-in translations of key words into your language

Free trial

We speak your language

Learn English words faster with instant, built-in translations of key words into your language

Free trial

We speak your language

Learn English words faster with instant, built-in translations of key words into your language

Free trial

We speak your language

Learn English words faster with instant, built-in translations of key words into your language

Free trial

We speak your language

Learn English words faster with instant, built-in translations of key words into your language

Free trial

We speak your language

Learn English words faster with instant, built-in translations of key words into your language

QuizListeningPronunciationVocabularyGrammar

Free Member Content

Join free to unlock this feature

Get more from Plain English with a free membership


Free trial

Test your listening skills

Improve your listening and learn to understand every word with this interactive listening exercise that gives you immediate feedback


Free trial

Upgrade your pronunciation

Improve your accent with voice-recorder exercise that lets you compare your pronunciation to a native speaker’s

Free trial

Build your vocabulary

Learn how to use advanced English vocabulary in this interactive exercise based on the Plain English story you just heard


Free trial

Improve your grammar

Practice choosing the right verb tense and preposition based on real-life situations



Free Member Content

Join free to unlock this feature

Get more from Plain English with a free membership

Practice writing about this story

Get involved in this story by sharing your opinion and discussing the topic with others

Explore more Stories