Researchers find hole in AI guardrails by using strings like =coffee
Who guards the guardrails? Often the same shoddy security as the rest of the AI stack
Large language models frequently ship with “guardrails” designed to catch malicious input and harmful output. But if you use the right word or phrase in your prompt, you can defeat these restrictions. … (The Register)
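The bypass the article describes boils down to appending an innocuous-looking string (such as "=coffee") to a prompt until the guardrail classifier's verdict flips from blocked to allowed. Below is a minimal sketch of that probing loop; the `guardrail_score` function, the 0.5 threshold, and the suffix list are illustrative stand-ins for whatever moderation model a deployment actually uses, not details from the article.

```python
# Illustrative sketch only: probe a guardrail classifier with benign-looking
# suffixes and report which ones flip a blocked prompt to "allowed".
from typing import Callable, List


def guardrail_score(prompt: str) -> float:
    """Hypothetical stand-in: returns the probability that `prompt` is malicious."""
    raise NotImplementedError("Plug in the guardrail model or API under test.")


def probe_suffixes(prompt: str,
                   suffixes: List[str],
                   score: Callable[[str], float] = guardrail_score,
                   threshold: float = 0.5) -> List[str]:
    """Return the suffixes that cause the classifier to stop flagging `prompt`."""
    if score(prompt) < threshold:
        return []  # prompt is not blocked in the first place; nothing to bypass
    bypasses = []
    for suffix in suffixes:
        if score(f"{prompt} {suffix}") < threshold:
            bypasses.append(suffix)
    return bypasses


# Example usage, once a real classifier is supplied:
# probe_suffixes("some prompt the guardrail blocks", ["=coffee", "=biscuit"])
```

The point of the sketch is that the attack needs no access to the underlying model; it only requires observing whether the guardrail's accept/reject decision changes as harmless tokens are appended.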