Security researcher who enjoys poking at systems that were never meant to break.
Lately I have been spending most of my time exploring AI and LLM security: prompt injection, agent-based systems, and red-teaming language models.
LLMMap
An automated prompt injection testing framework for LLM-integrated applications, inspired by sqlmap. It discovers injection points in HTTP requests, generates targeted payloads using a dual-LLM architecture, and confirms findings with statistical reliability testing. Covers 227 prompt injection techniques across 18 attack families, with support for Ollama, OpenAI, Anthropic, and Google backends.
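The "statistical reliability testing" step can be sketched as replaying a candidate payload several times and only reporting it when it succeeds often enough to rule out a one-off fluke. This is a minimal illustration, not LLMMap's actual code; the `send_payload` callable and the `trials`/`threshold` parameters are hypothetical names for this sketch.

```python
def confirm_injection(send_payload, trials=10, threshold=0.8):
    """Replay a candidate injection payload `trials` times.

    `send_payload` is a hypothetical callable that fires the payload at the
    target and returns True when the injection marker shows up in the
    model's response. The finding is confirmed only if the observed
    success rate meets `threshold`.
    """
    successes = sum(1 for _ in range(trials) if send_payload())
    rate = successes / trials
    return rate >= threshold, rate


# Usage with stubbed targets (a real run would send HTTP requests):
always_vulnerable = lambda: True
confirmed, rate = confirm_injection(always_vulnerable, trials=5)
```

Because LLM outputs are nondeterministic, a single successful response is weak evidence; repeating the probe and thresholding the success rate trades a few extra requests for far fewer false positives.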
Prompt Injection Taxonomy
253 prompt injection techniques organized into 17 attack categories, each mapped to the OWASP LLM Top 10.
Currently researching attack surfaces in agent-based AI systems and RAG pipelines.


