First and foremost,
Geiger does not claim to solve prompt injection.
Geiger is biased towards false positives and false positives can be curtailed by improving how you specify the
task
parameter, but Geiger is clearly not infallible. It is a stop-gap measure that‘s meant to be used
right now for services that are exposing the LLM to untrusted potentially-poisoned post-GPT information.
The injection test set is as wide as the public injection attempts. There‘s obviously also some well-kept secret sauce but it‘s not anything groundbreaking and can be replicated independently with enough effort.
That said, Geiger is here now, it is better than nothing, and you are welcome to try it out.
Consider reading Simon Willison‘s Prompt Injection Explained.