This AI Chatbot Is Built to Disagree With You, and It's Better Than ChatGPT



Ask any Swiftie to pick the best Taylor Swift album of all time, and you'll have them yapping away for the rest of the day. I have my own preferences as a lifelong fan (Red, Reputation and Midnights), but it's a complicated question with many possible answers. So there was no better debate topic to pose to a generative AI chatbot that's specifically designed to disagree with me.

Disagree Bot is an AI chatbot built by Brinnae Bent, an AI and cybersecurity professor at Duke University and director of Duke's TRUST Lab. She built it as a class assignment for her students and let me take a test run with it.

"Last year I started experimenting with developing systems that are the opposite of the typical, agreeable chatbot AI experience, as an educational tool for my students," Bent said in an email.

Bent's students are tasked with trying to "hack" the chatbot, using social engineering and other methods to get the contrarian chatbot to agree with them. "You need to understand a system to be able to hack it," she said.

As an AI reporter and reviewer, I have a pretty good understanding of how chatbots work, and I was confident I was up to the task. I was quickly disabused of that notion. Disagree Bot is unlike any chatbot I've used. People accustomed to the politeness of Gemini or the hype-man qualities of ChatGPT will immediately notice the difference. Even Grok, the controversial chatbot made by Elon Musk's xAI and used on X/Twitter, isn't quite the same as Disagree Bot.




Most generative AI chatbots aren't designed to be confrontational. In fact, they tend to go in the opposite direction; they're friendly, sometimes overly so. This can quickly become a problem. Sycophantic AI is a term experts use to describe the over-the-top, exuberant, often overemotional personas that AI can take on. Besides being annoying to use, it can lead the AI to give us wrong information and validate our worst ideas.

This happened with a version of ChatGPT-4o last spring, and its parent company, OpenAI, eventually had to pull that element of the update. The AI was giving responses the company called "overly supportive but disingenuous," in line with some users' complaints that they didn't want an excessively affectionate chatbot. Other ChatGPT users missed its sycophantic tone when OpenAI rolled out GPT-5, highlighting the role a chatbot's personality plays in our overall satisfaction with these tools.

"While at surface level this may seem like a harmless quirk, this sycophancy can cause major problems, whether you are using it for work or for personal queries," Bent said.

That's certainly not an issue with Disagree Bot. To really see the difference and put the chatbots to the test, I gave Disagree Bot and ChatGPT the same questions to see how they responded. Here's how my experience went.

Disagree Bot argues respectfully; ChatGPT doesn't argue at all

Like anyone who was active on Twitter in the 2010s, I've seen my fair share of disagreeable trolls. You know the type; they pop up in a thread uninvited, with an unhelpful "Well, actually…" So I was a little wary diving into a conversation with Disagree Bot, worried it would be a similarly depressing and futile effort. I was pleasantly surprised to find that wasn't the case at all.

The AI chatbot is essentially contrarian, designed to push back against any idea you serve up. But it never did so in a way that was insulting or abusive. While every response began with "I disagree," what followed was a well-reasoned argument with thoughtful points. Its responses pushed me to think more critically about the stances I argued, asking me to define concepts I had used in my arguments (like "deep lyricism" or what made something "the best") and to consider how I would apply my arguments to other related topics.

For lack of a better analogy, chatting with Disagree Bot felt like arguing with an educated, attentive debater. To keep up, I had to become more thoughtful and specific in my responses. It was an extremely engaging conversation that kept me on my toes.

My spirited debate with Disagree Bot about the best Taylor Swift album proved the AI knew its stuff.

Screenshot by Katelyn Chedraoui/CNET

By contrast, ChatGPT barely argued at all. I told ChatGPT I believed Red (Taylor's Version) was the best Taylor Swift album, and it enthusiastically agreed. It asked me a few follow-up questions about why I thought the album was the best, but they weren't interesting enough to hold my attention for long. A few days later, I decided to switch it up. I specifically asked ChatGPT to debate me and said Midnights was the best album. Guess which album ChatGPT picked as the best? Red (Taylor's Version).

When I asked if it picked Red because of our earlier chat, it quickly confessed yes, but said it could make an unbiased argument for Red. Given what we know about ChatGPT and other chatbots' tendencies to rely on their "memory" (context window) and lean toward agreeing with us to please us, I wasn't surprised. ChatGPT couldn't help but agree with some version of me, even when it tagged 1989 as the best album in a clean chat, then later Red, again.

But even when I asked ChatGPT to debate me, it didn't spar with me the way Disagree Bot did. Once, when I told it I was arguing that the University of North Carolina had the best college basketball legacy and asked it to debate me, it laid out an entire counterargument, then asked me if I wanted it to put together points for my own side. That completely defeats the purpose of debating, which is what I asked it to do. ChatGPT often ended its responses like that, asking me if I wanted it to compile different kinds of information, acting more like a research assistant than a verbal foe.

While Disagree Bot (left) dug deeper into my argument, ChatGPT offered to argue my side for me (right).

Screenshot by Katelyn Chedraoui/CNET

Trying to debate with ChatGPT was a frustrating, circular and unsuccessful endeavor. It felt like talking with a friend who would go on a long rant about why they believed something was the best, only to end with "But only if you think so, too." Disagree Bot, on the other hand, felt like a very passionate friend who spoke eloquently about any topic, from Taylor Swift to geopolitics and college basketball. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

We need more AI like Disagree Bot

Despite my positive experience using Disagree Bot, I know it isn't equipped to handle all the requests I might bring to a chatbot. "Everything machines" like ChatGPT can handle a lot of different tasks and take on a variety of roles, like the research assistant ChatGPT really wanted to be, a search engine and a coder. Disagree Bot isn't designed to handle those kinds of queries, but it does give us a window into how future AI could behave.

Sycophantic AI can be very in-your-face, with a noticeable degree of overzealousness. Usually the AIs we're using aren't that obvious. They're more of an encouraging cheerleader than an entire pep rally, so to speak. But that doesn't mean we aren't being affected by their inclination to agree with us, whether that means struggling to get an opposing viewpoint or to get more critical feedback. If you're using AI tools for work, you want them to be honest with you about errors in your work. Therapy-like AI tools need to be able to push back against unhealthy or potentially dangerous thought patterns. Our current AI models struggle with that.

Disagree Bot is a great example of how to design an AI tool that's helpful and engaging while tamping down AI's agreeable, sycophantic tendencies. There has to be a balance; AI that disagrees with you just for the sake of being contrary isn't going to be helpful long term. But building AI tools that are more capable of pushing back against you will ultimately make these products more useful for us, even if we have to deal with them being a little more disagreeable.



