
OpenAI has swiftly moved to ban a jailbroken version of ChatGPT that can teach users dangerous tasks, exposing serious vulnerabilities in the AI model's security measures.
A hacker known as “Pliny the Prompter” released the rogue ChatGPT called “GODMODE GPT.”

The jailbroken version is based on OpenAI’s latest language model, GPT-4o, and can bypass many of OpenAI’s guardrails, Futurism reported on Thursday.
ChatGPT is a chatbot that gives intricate answers to people's questions.
Pliny announced the creation of GODMODE GPT on X, formerly…