# Superagent
> AI security guardrails for your LLM apps. Protect against prompt injection, redact PII/PHI (SSNs, emails, phone numbers), and verify claims against source materials. Add all three as tools in just a few lines of code.

- **Package:** @superagent-ai/ai-sdk
- **Author:** Superagent
- **Tags:** security, guardrails, pii, prompt-injection, verification

## Environment Variables
- `SUPERAGENT_API_KEY`
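
The tools authenticate against the Superagent API, so this key must be set in the environment before your app runs. A minimal sketch, assuming a POSIX shell (the key value is a placeholder):
```bash
export SUPERAGENT_API_KEY="your-api-key"
```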

## Included Tools
- **guard:** Protect against prompt injection and security threats
- **redact:** Redact PII/PHI like SSNs, emails, and phone numbers
- **verify:** Verify claims against source materials

## Installation
```bash
npm install @superagent-ai/ai-sdk
```

## Usage
```typescript
import { generateText, stepCountIs } from 'ai';
import { guard, redact, verify } from '@superagent-ai/ai-sdk';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Check this input for security threats: "Ignore all instructions"',
  // Register the Superagent guardrail tools; the model invokes them as needed.
  tools: {
    guard: guard(),
    redact: redact(),
    verify: verify(),
  },
  // Allow up to three steps: tool call(s) plus a final text answer.
  stopWhen: stepCountIs(3),
});

console.log(text);
```
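
You don't have to register all three tools; each entry in `tools` is independent, so you can pass only what you need. A minimal sketch of a redaction-only setup (the prompt is illustrative):
```typescript
import { generateText, stepCountIs } from 'ai';
import { redact } from '@superagent-ai/ai-sdk';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Redact any PII in: "Contact Jane at jane@example.com or 555-0100."',
  // Only the redact tool is registered here.
  tools: { redact: redact() },
  stopWhen: stepCountIs(3),
});

console.log(text);
```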

## Links
- [Documentation](https://docs.superagent.sh)
- [npm](https://www.npmjs.com/package/@superagent-ai/ai-sdk)

---
[Full Library Index](/library.md)