# Client

The `supyagent()` client — fetch tools, skills, and account info from Supyagent Cloud.

The `supyagent()` factory creates a client that connects your app to Supyagent Cloud. It provides three methods: `tools()`, `skills()`, and `me()`.
```ts
import { supyagent } from '@supyagent/sdk';

const client = supyagent({
  apiKey: process.env.SUPYAGENT_API_KEY!,
  baseUrl: 'https://app.supyagent.com', // optional, this is the default
});
```

## client.tools(options?)
Fetches all available integration tools and converts them to Vercel AI SDK Tool objects. Each tool maps to an API endpoint on Supyagent Cloud.
```ts
const tools = await client.tools();
// => Record<string, Tool>

// Filter to specific providers or services
const googleTools = await client.tools({ only: ['google'] });
const noSlack = await client.tools({ except: ['slack'] });

// Enable caching (recommended for production)
const cachedTools = await client.tools({ cache: 300 }); // 5 minute TTL
```

### Options
| Option | Type | Description |
|---|---|---|
| `only` | `string[]` | Only include tools matching these providers, services, or names |
| `except` | `string[]` | Exclude tools matching these providers, services, or names |
| `cache` | `boolean \| number` | `true` = 60s cache, number = custom TTL in seconds |
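The exact matching semantics aren't spelled out here, but conceptually `only` and `except` act as an allow/deny filter over the returned tool record. A minimal sketch of that behavior — the key format (`google_gmail_send`) and the `filterTools` helper are illustrative assumptions, not the SDK's actual naming scheme or implementation:

```typescript
// Hypothetical allow/deny filter over a tool record. Keeps tools whose
// name matches any `only` term, then drops tools matching any `except` term.
type ToolRecord = Record<string, unknown>;

function filterTools(
  tools: ToolRecord,
  opts: { only?: string[]; except?: string[] } = {},
): ToolRecord {
  const matches = (name: string, terms: string[]) =>
    terms.some((t) => name.includes(t));
  return Object.fromEntries(
    Object.entries(tools).filter(([name]) => {
      if (opts.only && !matches(name, opts.only)) return false;
      if (opts.except && matches(name, opts.except)) return false;
      return true;
    }),
  );
}

// Placeholder tool objects, for illustration only:
const all = { google_gmail_send: {}, google_calendar_list: {}, slack_post: {} };
filterTools(all, { only: ['google'] });  // keeps the two google_* entries
filterTools(all, { except: ['slack'] }); // drops slack_post
```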
The returned tools are ready to pass directly to `streamText()` or `generateText()`:
```ts
import { streamText } from 'ai';

const result = streamText({
  model: yourModel,
  messages,
  tools: await client.tools({ cache: 300 }),
});
```

## client.skills(options?)
Fetches skill documentation from Supyagent Cloud and returns a system prompt plus two tools (`loadSkill` and `apiCall`). This is the recommended approach for production — it's more token-efficient than `tools()` because the agent only loads documentation for skills it actually needs.
```ts
const { systemPrompt, tools } = await client.skills({ cache: 300 });
// systemPrompt: string — lists available skills for the LLM
// tools: { loadSkill: Tool, apiCall: Tool }
```

Use it with `streamText`:
```ts
const { systemPrompt, tools: skillTools } = await client.skills({ cache: 300 });

const result = streamText({
  model: yourModel,
  system: `You are a helpful assistant.\n\n${systemPrompt}`,
  messages,
  tools: skillTools,
});
```

### How Skills Work
- The system prompt tells the LLM which skills are available (e.g., "Gmail", "Slack", "Calendar")
- The LLM calls `loadSkill({ name: "Gmail" })` to get detailed API documentation
- The LLM calls `apiCall({ method: "GET", path: "/api/v1/google/gmail/messages" })` to make authenticated requests
This two-step approach means the full API docs for every provider don't need to be in the context window — only the ones the agent actually uses.
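To make the two-step flow concrete, here's a self-contained sketch with both tools mocked in memory. The doc strings and the mocked response are placeholders, not real Supyagent Cloud output:

```typescript
// In-memory stand-in for skill documentation held on the server.
const skillDocs = new Map<string, string>([
  ['Gmail', 'GET /api/v1/google/gmail/messages — list messages'],
  ['Slack', 'POST /api/v1/slack/chat.postMessage — post a message'],
]);

// Step 1: the model requests docs only for the skill it needs,
// so other providers' docs never enter the context window.
function loadSkill(args: { name: string }): string {
  return skillDocs.get(args.name) ?? `Unknown skill: ${args.name}`;
}

// Step 2: with those docs in context, the model issues a concrete request.
// The real tool makes an authenticated call to Supyagent Cloud; this mock
// just echoes the request back.
async function apiCall(args: { method: string; path: string }): Promise<string> {
  return `${args.method} ${args.path} -> 200 (mocked)`;
}

// A single agent "turn" might look like:
const docs = loadSkill({ name: 'Gmail' }); // only Gmail's docs enter context
// ...the model reads `docs`, then decides to call apiCall(...)
```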
### Options
| Option | Type | Description |
|---|---|---|
| `cache` | `boolean \| number` | `true` = 60s cache, number = custom TTL in seconds |
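The two forms of the `cache` option collapse to a single TTL in seconds. A sketch of that normalization — the helper name is ours, and only the 60-second default and "number = seconds" behavior come from the table above:

```typescript
// Normalize a `cache` option to a TTL in seconds (0 = no caching).
function cacheTtlSeconds(cache?: boolean | number): number {
  if (cache === true) return 60;               // `cache: true` = 60-second TTL
  if (typeof cache === 'number') return cache; // explicit TTL in seconds
  return 0;                                    // omitted: fetch every time
}

cacheTtlSeconds(true); // 60
cacheTtlSeconds(300);  // 300
cacheTtlSeconds();     // 0
```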
## client.me(options?)
Returns account information, connected integrations, and usage stats.
```ts
const me = await client.me({ cache: 60 });
```

### Response
```ts
interface MeResponse {
  email: string | null;
  tier: string;
  usage: {
    current: number;
    limit: number; // -1 = unlimited (enterprise)
  };
  integrations: Array<{
    provider: string; // "google", "slack", etc.
    status: string;   // "active", "expired", etc.
  }>;
  dashboardUrl: string;
}
```

## Caching
All three methods support caching to avoid redundant API calls:
```ts
// No cache (default) — fetches every time
await client.tools();

// 60-second cache
await client.tools({ cache: true });

// Custom TTL (5 minutes)
await client.tools({ cache: 300 });
```

In a typical Next.js API route, create the client once at module scope and use caching to keep tools fresh without fetching on every request:
```ts
const client = supyagent({ apiKey: process.env.SUPYAGENT_API_KEY! });

export async function POST(req: Request) {
  // This hits Supyagent Cloud at most once every 5 minutes
  const { systemPrompt, tools } = await client.skills({ cache: 300 });
  // ...
}
```

## What's Next
- Built-in Tools — Bash, image viewing, and lower-level skill tools
- Persistence — Save chat history with Prisma
- Full API Route Example — See all pieces working together