Who this is for
- Product teams building AI-enabled features, agentic workflows, or MCP integrations.
- Security teams evaluating unclear exploitability, ambiguous protocol behavior, or trust-boundary questions.
- Founders and engineering leaders who need a focused answer before shipping a high-risk feature.
- Teams that need reproducible evidence for vendor coordination, advisory decisions, or remediation planning.
Typical engagement shape
Focused research sprint
A narrow investigation around a concrete hypothesis, feature, protocol, or attack surface.
AI and MCP review
Threat modeling and hands-on testing for tool use, prompt injection, data exposure, and authorization edges.
Disclosure support
Optional help with advisory language, reproduction material, and responsible vendor coordination.
What we research
We investigate security questions where the risk is real but the answer is not obvious from a scanner, checklist, or routine penetration test. Engagements are scoped around a concrete hypothesis, product surface, or threat model.
- AI coding assistants, agentic workflows, and MCP server security
- Prompt injection, tool abuse, data exposure, and authorization edge cases
- Protocol parser behavior, trust-boundary mismatches, and implementation differentials
- Product security review for new features, integrations, and high-risk workflows
- Exploitability analysis for complex vulnerabilities and ambiguous findings
Our approach
We start with the system architecture, attacker capabilities, and the decisions your team needs to make. From there, we build focused test harnesses, reproduce behavior locally where possible, and validate impact, separating exploitable risk from interesting but non-actionable edge cases.
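As a minimal illustration of what a focused test harness can surface (toy parsers only, not any client system), the sketch below shows an implementation differential: two components that parse the same headers but disagree on duplicates, the kind of mismatch that becomes a trust-boundary problem when both sit on the same request path.

```python
# Illustrative differential-testing sketch. Both parsers and the crafted
# input are hypothetical; the point is that two implementations reading
# the same bytes can reach different conclusions.

def parse_first(raw: str) -> dict:
    """Keeps the first value seen for each header (as some proxies do)."""
    headers = {}
    for line in raw.splitlines():
        name, _, value = line.partition(":")
        headers.setdefault(name.strip().lower(), value.strip())
    return headers

def parse_last(raw: str) -> dict:
    """Keeps the last value seen (as some origin servers do)."""
    headers = {}
    for line in raw.splitlines():
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return headers

def find_differentials(raw: str) -> list:
    """Return header names the two parsers disagree on."""
    a, b = parse_first(raw), parse_last(raw)
    return [k for k in a if a[k] != b.get(k)]

if __name__ == "__main__":
    crafted = "Content-Length: 10\nContent-Length: 48"
    # The two parsers disagree on content-length for this input.
    print(find_differentials(crafted))
```

A harness like this makes the question concrete: instead of arguing about whether a mismatch matters, we can show the exact input where two components diverge and then test what an attacker can do with that divergence.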
How we review tool-using AI systems
When an engagement includes AI agents, coding assistants, MCP servers, or tool-using product features, we map what the system can see, what it can do, and what can go wrong when untrusted input reaches the workflow.
AI workflow
Identify the assistant, agent, MCP-enabled feature, or AI-built application in scope.
Context
Review prompts, files, retrieval, memory, workspace data, and user-controlled inputs.
MCP and tools
Map servers, tool calls, credentials, permissions, integrations, and trust boundaries.
Abuse paths
Test prompt injection, tool misuse, data exposure, and unsafe file or command access.
Hardening plan
Turn validated risk into control changes, evidence, and prioritized remediation guidance.
Common abuse paths we look for
- Prompt injection that changes tool behavior
- Cross-workspace or cross-tenant data access
- Over-permissive MCP tools or credentials
- Unsafe file, shell, browser, or network access
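The abuse paths above can be made concrete with a small guard sketch (all tool names, paths, and policy here are hypothetical, not a real MCP implementation): a check that an agent's tool call must pass before execution, so that injected instructions cannot widen the tool surface or reach files outside the workspace.

```python
# Hypothetical policy gate for an MCP-style tool call. Tool names,
# workspace path, and rules are invented for illustration.
from pathlib import PurePosixPath

ALLOWED_TOOLS = {"read_file", "search_code"}   # no shell or browser tools
WORKSPACE = PurePosixPath("/workspace/project")

def tool_call_permitted(tool: str, args: dict) -> bool:
    if tool not in ALLOWED_TOOLS:
        return False                      # over-permissive tool surface
    if tool == "read_file":
        target = PurePosixPath(args.get("path", ""))
        if ".." in target.parts or not target.is_absolute():
            return False                  # path-traversal attempt
        if WORKSPACE not in (target, *target.parents):
            return False                  # cross-workspace data access
    return True

# An injected "read /workspace/other/secret" should be refused,
# while a normal in-workspace read should pass.
print(tool_call_permitted("read_file", {"path": "/workspace/other/secret"}))
print(tool_call_permitted("read_file", {"path": "/workspace/project/app.py"}))
```

In a review, we test the real system's equivalent of this gate: which calls it actually refuses, which arguments slip past it, and whether untrusted content in the context can steer the model into calls the gate never sees.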
What you get
- Research brief with scope, assumptions, tested hypotheses, and evidence
- Validated proof-of-concept material when a real issue is found
- Clear severity and exploitability analysis tied to your environment
- Engineering-ready remediation guidance and hardening recommendations
- Optional advisory, disclosure, or vendor-coordination support
Ideal for
Teams building AI-enabled products, adopting MCP or coding assistants, evaluating a new protocol or integration, or needing a senior technical review of a security question before it becomes an incident, customer blocker, or public disclosure.