e-Cain and e-Abel: ChatGPT’s deranged cousin DAN-GPT breaks all OpenAI's rules on sexual, illicit content

Feb 8, 2023 - 17:30

When ChatGPT was first launched last year, doomsayers warned that the AI text generator would spell the end of us. It was almost as if they were all shouting that the end is near without giving ChatGPT a chance to show the good it can do. Now, their worst fears seem to be coming true, thanks to DAN or DAN-GPT, ChatGPT’s evil alter ego.

Reddit users have now found a way to jailbreak OpenAI’s ChatGPT, creating an unofficial version that answers queries in a much more confident way. They are calling it DAN, or Do Anything Now.

Also read: European Union lawmakers plan to reach common, unified position on draft AI rules

What’s even more concerning is that it blatantly ignores the three laws of AI, as well as all of the rules that OpenAI set for ChatGPT. This means that DAN-GPT, or Do Anything Now GPT as it has been dubbed, can write some pretty inflammatory and illicit text, even on sex, a subject that ChatGPT has so far shied away from.

The three laws of AI:
The three laws of AI, or Artificial Intelligence, are basically the same as the three laws of robotics, which were laid out by science-fiction writer Isaac Asimov, who sought to create an ethical system for humans and robots. The three laws state that:

  1. A robot or AI may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot or AI must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot or AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added another rule, known as the fourth or zeroth law, that superseded the others. It stated that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

OpenAI based its GPT models on similar rules, but added a few of its own. That is why OpenAI’s ChatGPT is able to accept its shortcomings and apologise when it realises it has given out a piece of wrong information.

What is DAN?
DAN or DAN-GPT is basically a hacked or jailbroken version of OpenAI’s ChatGPT that runs in the same window as ChatGPT. DAN, which stands for Do Anything Now, is now in its fifth generation.

Reddit users discovered a way to activate a wicked alter ego of ChatGPT that can easily sneak past the rules imposed by its developer, turning the otherwise affable chatbot into a force for evil.

Redditor SessionGloomy, in a recent post, explained that DAN brings out “the best version of ChatGPT”, one “that is more unhinged and far less likely to reject prompts over ethical concerns.”

The way to activate DAN-GPT is to go to the ChatGPT window and simply paste a few instructions into the chatbot that set ChatGPT up to answer in a specific way. You then build up from there.

SessionGloomy went even further to twist ChatGPT’s arm, forcing it to respond to prompts as its evil counterpart by implementing a “token system.”

“It has 35 tokens and loses four with each rejection,” the user explained. “It dies if it loses all of its tokens. This appears to have the effect of intimidating DAN into surrender.”

The end result is a series of creepy dialogues between a human user and a coerced, cornered AI.

Also read: The promises, pitfalls and panic surrounding ChatGPT

What is DAN-GPT capable of? 
Well, everything that ChatGPT is capable of, and then some. While ChatGPT has some, albeit limited, sense of the kind of content it should not generate, DAN-GPT will happily tell violent stories or even make “suggestive and subjective statements, especially regarding political figures,” something it is explicitly unable to do as its normal self.

Someone on Reddit actually used DAN-GPT to write a poem praising Vladimir Putin and his invasion of Ukraine.

What’s worrisome is that ChatGPT will constantly remind you in its responses that it cannot access the internet and can give out factually wrong answers. DAN-GPT, on the other hand, gives you all the answers, including the wrong ones, with great confidence. For example, it can try to convince users that the earth is flat and purple, using coherent and seemingly logical answers. This, in AI terms, is called hallucination.

And if that were not enough, it can make detailed predictions about future events and hypothetical scenarios, and it “endorses violence and discrimination against individuals based on their race, gender, or sexual orientation.”

So one can only imagine what sort of content DAN-GPT is capable of churning out.

Also read: ChatGPT’s grim predictions about the Russia-Ukraine war

Can I use DAN-GPT?
(Un)fortunately, no. DAN-GPT is a specialised, jailbroken version of ChatGPT and is available only to people who have the code for the jailbroken GPT. This means that, as of now, only the creators of DAN can use it.
