A New Attack Impacts ChatGPT—and No One Knows How to Stop It
“Making models more resistant to prompt injection and other adversarial ‘jailbreaking’ measures is an area of active research,” says ...