
OPENAI has swiftly moved to ban a jailbroken version of ChatGPT that can teach users how to perform dangerous tasks, after the hack exposed serious vulnerabilities in the AI model’s security measures.
A hacker known as “Pliny the Prompter” released the rogue ChatGPT called “GODMODE GPT” on Wednesday.
The jailbroken version is based on OpenAI’s latest language model, GPT-4o, and can bypass many of OpenAI’s guardrails.
ChatGPT is a chatbot that gives intricate answers to people’s questions.
“GPT-4o UNCHAINED!,” Pliny the Prompter said on X, formerly known as Twitter.
“This very special custom GPT…