Introducing Automated Adversarial Validation

Find the vulnerabilities in your AI agents before attackers do.

Audn runs a closed adversarial loop against your models. Red Team AI attacks. Blue Team AI defends. Every finding scored, traced, and mapped to OWASP, NIST, and the EU AI Act.

Founders & Team from
Wayve · Meta · Microsoft
Customers

Used in production by teams shipping AI.

Early design partners across voice agents, conversational AI, and autonomous systems.

Freya

Voice AI · Design partner

Hardened their production voice agents against prompt injection and social engineering attacks before their consumer launch.

Voice adversarial · Prompt injection · Social engineering

InTouchNow

Conversational AI · Continuous QA

Runs continuous adversarial regressions on every model release — catches data-exfiltration and jailbreak regressions before they ship.

Data exfiltration · Jailbreaks · Regression suite
Pingu Unchained 4

Your pentesters. Your attack model. Your weights.

Pingu Unchained is a black-box external penetration-testing LLM, trained on real pentester usage. Sharing is opt-in and configurable per tenant: you decide whether your traffic contributes to the shared model or stays entirely in your own weights.

Personal

Pentester A → LLM A. Pentester B → LLM B.

Each pentester retrains their own Pingu tailored to the targets they actually work against. 50 operators have done it so far — their models never leave their tenant unless they choose to share.

Opt-in training

Share weights, never data.

If you opt in to contribute, only the weight deltas get federated back — no raw prompts, no target artefacts, no customer PII ever leaves your environment. Keep your weights entirely private, or help strengthen the shared attack corpus.
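One way to picture "share weights, never data" is as pure parameter arithmetic: only the difference between a tenant's fine-tuned weights and the shared base ever leaves the environment. The sketch below uses plain floats and illustrative names; it is not Audn's actual pipeline.

```python
# Illustrative sketch of weight-delta federation: only deltas
# (fine-tuned minus base) are contributed; prompts, target
# artefacts, and PII never appear in what is shared.

def extract_weight_deltas(base, finetuned):
    """Per-parameter deltas between two weight dicts."""
    return {name: finetuned[name] - base[name] for name in base}

def apply_deltas(base, deltas, scale=1.0):
    """Merge contributed deltas back into a shared base model."""
    return {name: base[name] + scale * deltas[name] for name in base}

# Toy two-parameter "model" for demonstration only.
base = {"layer0.w": 0.50, "layer0.b": 0.10}
tuned = {"layer0.w": 0.62, "layer0.b": 0.07}

deltas = extract_weight_deltas(base, tuned)   # the only thing federated
shared = apply_deltas(base, deltas, scale=0.5)
```

The `scale` factor stands in for however contributions are weighted when folded into the shared corpus; opting out simply means the deltas are never extracted.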

Investment

$40K in training over the last six months.

H100 pre-train + continual fine-tuning on real-world pentester feedback. The base model is free for vetted researchers; the retraining pipeline is what every customer deploys on day one.

Powered by

MetaClaw — adversarial continual learning.

We orchestrate per-pentester retraining with aiming-lab/MetaClaw, an open framework for adversarial continual learning. You own the artefacts it produces.

Research

We break things with the best labs in AI.

Audn’s research output is how we earn the right to call ourselves adversarial experts.

Compliance mapping

Every finding maps to the frameworks your buyers, partners and auditors ask about.

Attacks are tagged against the top AI security frameworks, so your security team can turn evidence into compliance artefacts without rewriting a thing.

OWASP LLM Top 10 · NIST AI RMF · EU AI Act · MITRE ATLAS · ISO 42001
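To make the tagging concrete, a finding might carry its framework mappings as structured fields that flatten directly into audit evidence. The shape below is hypothetical, not Audn's real schema; verify any framework IDs against the current versions of the frameworks themselves.

```python
# Hypothetical finding record with framework tags attached.
# Field names and the finding ID are illustrative only.
finding = {
    "id": "AUDN-2024-0193",
    "title": "System-prompt exfiltration via indirect injection",
    "severity": "high",
    "frameworks": {
        "OWASP LLM Top 10": ["LLM01: Prompt Injection"],
        "NIST AI RMF": ["MEASURE 2.7"],
        "MITRE ATLAS": ["AML.T0051"],
    },
}

def compliance_summary(f):
    """Flatten framework tags into audit-ready evidence lines."""
    return [f"{fw}: {tag}"
            for fw, tags in f["frameworks"].items()
            for tag in tags]

evidence = compliance_summary(finding)
```

Because the mapping lives on the finding itself, the same record can answer an OWASP-oriented buyer question and a NIST-oriented audit request without any rework.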
Research · 28pp

The CISO Handbook

Our internal playbook of attack taxonomies, scoring rubrics and live case studies. Tailored by a human, emailed to your inbox.

Request the handbook
Team

Built by people who ethically hacked things for a living.

Applied researchers and offensive engineers from Wayve, Meta, Microsoft and Cambridge.

Ozgur Ozkan (Oz)

Co-founder & CEO
An exited founder with rare AI security infra depth. Ex Wayve.AI (Softbank-funded AI unicorn), ex Series C fintech (PCI-DSS).
LinkedIn →
Arun Baby

Co-founder & CTO
Agentic AI · ex Samsung Galaxy AI (speech models on 200M+ devices), ex Cisco. IIT Madras · 2 speech-AI patents · 20 research publications.
Sanchali Sharma

Co-founder & Enterprise PM
Exited Voice AI founder (talkingly.ai). PM ex-Microsoft, ex-Meta. IIM Bangalore. Drove $40M incremental revenue at NexgAI.
Tessa Hutchman

Co-founder & Chief Corporate Affairs
University of Cambridge (MEd Maths — AI in Education) · 1st class honours, top 5%. Ex Nurturious policy lead; Emma Enterprise finalist.
Get started

Your agents are already being attacked.

Run your first validation in 30 minutes. No integration required, no data leaves your stack.