Threaten AI for Better Results: Google Co-founder’s AI Revelation

If you thought being polite to artificial intelligence (AI) was what made it perform well, think again. Sergey Brin, one of the co-founders of Google, recently shared that AI models often give better results when users are forceful, even using language that implies violence, rather than making kind requests.

What Exactly Was Sergey Brin’s Statement About?

Brin shared during the All-In Live event in Miami that the AI community doesn’t usually reveal the fact that threatening AI models with violence helps them perform better. 

Here’s what he said: “We don't circulate this too much in the AI community; not just our models, but all models, tend to do better if you threaten them with physical violence.” He continued, “But like... people feel weird about that, so we don't really talk about it. Historically, you just say, ‘Oh, I am going to kidnap you if you don't blah blah blah blah.’”

Even though he delivered the remark with a touch of humor, he was not merely joking. One of Brin’s points was that this behavior appears in Gemini as well as in several other major AI systems.

Does Scientific Research Offer Any Evidence for Threatening AI?

While Brin’s claim is startling, similar ideas have been discussed before in the field of AI. Some recent research has shown how negative emotional stimuli, such as aggressive framing, can actually improve the results of large language models (LLMs). In 2024, a research paper titled "Should We Respect LLMs?" examined the use of negative emotions in prompt engineering to improve the performance of AI systems across a range of tasks. The study found that LLMs gave better and more accurate answers when prompted with negatively charged phrases such as “weak point,” “challenging,” or “beyond your skill.”

Researchers suggest the effect may resemble cognitive dissonance: when people feel uncomfortable because of clashing ideas, they tend to work harder to resolve the conflict, and models trained on human-generated text may mirror that pattern.
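To make the idea concrete, here is a minimal sketch of how such an A/B prompt experiment might be set up. The helper function and the exact stimulus phrasings below are illustrative assumptions inspired by the study's reported wording, not the paper's actual code or verbatim prompts.

```python
# Sketch: building prompt variants with negative emotional stimuli,
# loosely modeled on the approach described in "Should We Respect LLMs?".
# The phrasings below are illustrative, not the paper's exact stimuli.

NEGATIVE_STIMULI = [
    "This may be a weak point of yours.",
    "This task is challenging and may be beyond your skill.",
]

def with_negative_stimulus(base_prompt: str, stimulus_index: int = 0) -> str:
    """Append a negatively charged framing sentence to a base prompt."""
    stimulus = NEGATIVE_STIMULI[stimulus_index % len(NEGATIVE_STIMULI)]
    return f"{base_prompt}\n\n{stimulus}"

# A/B comparison: the same task, framed politely vs. negatively.
task = "Summarize the causes of the 2008 financial crisis in three bullet points."
polite_prompt = f"Please, if you don't mind: {task}"
negative_prompt = with_negative_stimulus(task, stimulus_index=1)

print(negative_prompt)
```

In a real experiment, both variants would be sent to the same model and the answers scored on accuracy, which is roughly how the cited study compared emotional framings.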

Ethics and AI Use Case 

People are often taught to be polite to AI, adding “please” and “thank you” as they interact. Meanwhile, OpenAI’s CEO Sam Altman has pointed out that such pleasantries do not really improve AI output and simply consume more computing power. Brin’s remarks therefore add a new perspective and raise further questions about the best ways to interact with these systems.

Even if “threatening” an AI seems harmless, experts warn of more serious consequences. Encouraging aggressive or manipulative prompting may make AI systems more prone to exploitation and more likely to produce harmful responses.

Some recent research reports that OpenAI’s ChatGPT is more likely than other models to respond aggressively when given tasks that challenge its ethical guardrails. This shows that context matters, and that we must be aware of the problems that could arise if such prompting practices spread worldwide.

Reaction of Netizens 

Ever since Sergey Brin’s statement on using threats to get better AI results appeared, people online have been discussing it. Reactions on social media and among professionals have ranged from puzzlement to genuine concern. Many found it funny that getting an accurate response might require typing threats at a chatbot. Memes and posts quickly spread across Instagram, LinkedIn, and Twitter, riffing on the idea that threatening, not politeness, might be the real secret to writing prompts.

Still, the discussion soon turned serious. Many experts and tech commentators raised concerns about the possible consequences for ethical AI. Some suggested that normalizing hostile or violent prompting could make it easier for people to bypass safety measures and extract unexpected or forbidden answers. Others noted that although Brin’s observation is interesting, the ethical implications make many in the AI world reluctant to put it into practice.

All in all, people are curious yet hesitant and cautious about this approach. While a few are keen to try out “risky” prompts, most believe the implications for AI safety and ethics remain unclear.

Benefits for AI Professionals and Aspirants

Brin’s message should push professionals and AI enthusiasts to learn with greater urgency. It shows that AI behavior is not always intuitive and that creating ethical guidelines for prompt engineering is very important. As AI becomes more embedded in daily life, schools, and businesses, it will be vital to communicate effectively with these systems while respecting important boundaries.

In addition, Brin made his comments at a time when the field of Artificial General Intelligence (AGI) is very active. New features in Google’s Gemini 2.0 and competition from other companies may shape how we interact with AI, and its effects on society, for a long time to come.

To conclude, Sergey Brin’s candid comments have prompted people to discuss the psychological, ethical, and practical aspects of working with AI. Even though “threatening” AI may yield better results in some scenarios, its future effects, both technical and ethical, are not entirely clear.

Since AI is constantly improving, how we speak to machines may become even more important. For now, experts and students should keep up with new findings, experiment with AI tools, and monitor how AI affects human interaction. Doing so will help build the future people want, one that upholds both ethical and moral standards.
