Get in Touch

Security Research

Focused investigation into emerging attack surfaces, AI systems, protocols, and product security questions that do not fit a standard assessment.

AI SecurityMCPProtocol AnalysisExploit ResearchProduct Security

Who this is for

  • Product teams building AI-enabled features, agentic workflows, or MCP integrations.
  • Security teams evaluating unclear exploitability, protocol behavior, or trust-boundary questions.
  • Founders and engineering leaders who need a focused answer before shipping a high-risk feature.
  • Teams that need reproducible evidence for vendor coordination, advisory decisions, or remediation planning.

Typical engagement shape

Focused research sprint

A narrow investigation around a concrete hypothesis, feature, protocol, or attack surface.

AI and MCP review

Threat modeling and hands-on testing for tool use, prompt injection, data exposure, and authorization edges.

Disclosure support

Optional help with advisory language, reproduction material, and responsible vendor coordination.

What we research

We investigate security questions where the risk is real but the answer is not obvious from a scanner, checklist, or routine penetration test. Engagements are scoped around a concrete hypothesis, product surface, or threat model.

  • AI coding assistants, agentic workflows, and MCP server security
  • Prompt injection, tool abuse, data exposure, and authorization edge cases
  • Protocol parser behavior, trust-boundary mismatches, and implementation differentials
  • Product security review for new features, integrations, and high-risk workflows
  • Exploitability analysis for complex vulnerabilities and ambiguous findings

Our approach

We start with the system architecture, attacker capabilities, and the decisions your team needs to make. From there, we build focused test harnesses, reproduce behavior locally where possible, validate impact, and separate exploitable risk from interesting but non-actionable edge cases.

How we review tool-using AI systems

When an engagement includes AI agents, coding assistants, MCP servers, or tool-using product features, we map what the system can see, what it can do, and what can go wrong when untrusted input reaches the workflow.

01

AI workflow

Identify the assistant, agent, MCP-enabled feature, or AI-built application in scope.

02

Context

Review prompts, files, retrieval, memory, workspace data, and user-controlled inputs.

03

MCP and tools

Map servers, tool calls, credentials, permissions, integrations, and trust boundaries.

04

Abuse paths

Test prompt injection, tool misuse, data exposure, and unsafe file or command access.

05

Hardening plan

Turn validated risk into control changes, evidence, and prioritized remediation guidance.

Common abuse paths we look for

  • Prompt injection that changes tool behavior
  • Cross-workspace or cross-tenant data access
  • Over-permissive MCP tools or credentials
  • Unsafe file, shell, browser, or network access

What you get

  • Research brief with scope, assumptions, tested hypotheses, and evidence
  • Validated proof-of-concept material when a real issue is found
  • Clear severity and exploitability analysis tied to your environment
  • Engineering-ready remediation guidance and hardening recommendations
  • Optional advisory, disclosure, or vendor-coordination support

Ideal for

Teams building AI-enabled products, adopting MCP or coding assistants, evaluating a new protocol or integration, or needing a senior technical review of a security question before it becomes an incident, customer blocker, or public disclosure.

Need this adapted to your environment?

Schedule a discovery call and leave with a clear recommendation on scope, timeline, and expected deliverables.