Implicit execution authority is the real failure mode behind prompt injection
I’m approaching prompt injection less as an input-sanitization issue and more as an authority and trust-boundary problem. In many systems, model output is implicitly authorized to cause side effects, for example by triggering tool calls or function execution. Once generation is treated as execution-capable, sanitization and guardrails become reactive defenses around an actor that already holds authority.

I’m exploring an architecture where the model never has execution rights at all. It produces proposals only. A separate, non-generative control plane is the sole component allowed to execute actions, based on fixed policy and system state. If the gate says no, nothing runs. (A rough sketch is at the end of the post.)

From this perspective, prompt injection fails because generation no longer implies authority: there’s no privileged path from text to side effects. I’m curious whether people here see this as a meaningful shift in the trust model, or just a restatement of existing capability-based or mediation patterns in security systems.
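For concreteness, here’s a minimal Python sketch of the split I mean. All names here (`Proposal`, `control_plane`, `ALLOWED_ACTIONS`) are hypothetical illustrations, not any real framework: the generative side can only construct inert `Proposal` objects, and the control plane decides from a fixed allowlist, never from the generated text itself.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass(frozen=True)
class Proposal:
    """Inert model output: a request for an action, never the action itself."""
    action: str
    args: dict[str, Any]


# Fixed policy: the only actions the control plane will ever run,
# bound to implementations the generative side cannot reach or modify.
ALLOWED_ACTIONS: dict[str, Callable[..., Any]] = {
    "echo": lambda text: print(text),
}


def control_plane(proposal: Proposal) -> Any:
    """Sole execution authority. Decides from policy and system state,
    never from the persuasiveness of the generated text."""
    handler = ALLOWED_ACTIONS.get(proposal.action)
    if handler is None:
        raise PermissionError(f"action {proposal.action!r} not in policy")
    # A real system would add argument validation, rate limits, and
    # state checks here; the key property is that a refusal means
    # nothing runs at all.
    return handler(**proposal.args)


# An injected instruction can only ever yield a Proposal, so the
# attack surface ends at the policy check.
injected = Proposal(action="delete_all_files", args={})
try:
    control_plane(injected)
except PermissionError as e:
    print(e)  # the injection proposed, but nothing executed
```

The design choice is essentially a reference monitor: authority lives in the gate that mediates every action, not in the component that produces requests, which is why I’m asking whether this is genuinely new or just that pattern restated for LLMs.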