# TypeScript SDK
The Stellar TypeScript SDK lets you embed real-time voice and text chat conversations with AI agents in your application.
## Installation

```bash
npm install @stellar-ai/agent-sdk
# or
yarn add @stellar-ai/agent-sdk
# or
pnpm add @stellar-ai/agent-sdk
```
The SDK works in:
- Web apps
- React Native / Expo apps
- Capacitor apps (via the WebView)
- Any JS environment with WebSocket support (voice conversations additionally require Web Audio or a custom audio implementation)
## Client setup

Create a client to start conversations:

```typescript
import { createStellarClient } from "@stellar-ai/agent-sdk";

const client = createStellarClient();
```
Options:

- `baseUrl` – optional base URL (defaults to Production)
- `logger` – optional logger for debugging (with `debug`, `warn`, `error` methods)
- `audioCapture` – optional custom audio capture implementation (see Custom audio)
- `audioPlayback` – optional custom audio playback implementation (see Custom audio)
From the client you can start a voice conversation with `client.startConversation()` or a text chat with `client.startChatConversation()`.
## Authentication

For public agents (embedded on websites, no user login required), use public access tokens:

1. Enable public access for your agent in the Stellar dashboard.
2. Copy the public access token and agent ID from the agent settings. Each environment (Development, Staging, Production) has its own token; use the environment selector in the Sharing tab to pick the right one.
3. Pass them when starting a conversation:
```typescript
const conversation = await client.startConversation({
  auth: {
    strategy: "publicToken",
    token: "<your-public-access-token>",
  },
  agentId: "<your-agent-id>",
});
```
Support for private/authenticated agents (where users log in via your identity provider) is available on request. Contact us to discuss your requirements.
## Initial context and variables
Pass key-value pairs to provide context to the agent at the start of any conversation. These can be used by the agent's system prompt or tools to personalize the interaction.
```typescript
const conversation = await client.startConversation({
  auth: { strategy: "publicToken", token: "<your-token>" },
  agentId: "<your-agent-id>",
  variables: {
    userName: "Alice",
    orderId: "12345",
    isPremium: true,
  },
});
```
Variables work the same way for both voice and chat conversations.
## Error handling

Errors can surface in two ways:

- As rejected promises from `startConversation` or `startChatConversation`.
- As `error` events emitted by the conversation instance.
| Code | Description |
|---|---|
| `UNAUTHENTICATED` | Authentication failed (expired or invalid token) |
| `TRANSPORT_ERROR` | Network issues (WebSocket connection failed, timeout) |
| `MISSING_AGENT_ID` | The `agentId` was not provided |
| `MIC_ACCESS_DENIED` | Microphone access denied by the user |
| `CONVERSATION_ALREADY_ACTIVE` | A conversation is already in progress; end it first |
| `INTERNAL_ERROR` | Unexpected SDK error |
```typescript
try {
  const conversation = await client.startConversation({
    auth: { strategy: "publicToken", token: "<your-token>" },
    agentId: "<your-agent-id>",
  });

  conversation.on("error", ({ error, code }) => {
    console.error("Conversation error:", code, error);
  });
} catch (err) {
  console.error("Failed to start conversation:", err);
}
```
The initial connection attempt has a timeout of 15 seconds. If the connection cannot be established, the method rejects with a `TRANSPORT_ERROR`.
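Because `TRANSPORT_ERROR` covers transient network failures, it can be worth retrying the initial connection with backoff before surfacing an error to the user. A minimal sketch, assuming the rejected error carries a `code` property matching the table above (`startFn` is a hypothetical stand-in for `client.startConversation`):

```typescript
// Sketch: retry transient TRANSPORT_ERROR failures with exponential backoff.
// Auth and configuration errors are rethrown immediately: retrying them
// cannot succeed.
async function startWithRetry<T>(
  startFn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await startFn();
    } catch (err) {
      lastErr = err;
      const code = (err as { code?: string })?.code;
      // Only network failures are worth retrying.
      if (code !== "TRANSPORT_ERROR" || attempt === maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  throw lastErr;
}

// Demo with a fake start function that fails twice, then succeeds.
let calls = 0;
const fakeStart = async () => {
  calls++;
  if (calls < 3) {
    throw Object.assign(new Error("ws failed"), { code: "TRANSPORT_ERROR" });
  }
  return { id: "conv-1" };
};

startWithRetry(fakeStart, 5, 1).then((conv) => {
  console.log(calls, conv.id); // logs: 3 conv-1
});
```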
## React Native / Expo

The SDK ships with ready-made audio implementations for Expo via the `@stellar-ai/agent-sdk/expo` subpath. These wrap `@mykin-ai/expo-audio-stream` and handle sample-rate conversion automatically.

```bash
npx expo install @mykin-ai/expo-audio-stream
```
```typescript
import { createStellarClient } from "@stellar-ai/agent-sdk";
import {
  ExpoAudioCapture,
  ExpoAudioPlayback,
} from "@stellar-ai/agent-sdk/expo";

const client = createStellarClient({
  audioCapture: new ExpoAudioCapture(),
  audioPlayback: new ExpoAudioPlayback(),
});
```
## Custom audio

The SDK uses the Web Audio API by default in browsers. For other environments, you can provide custom implementations of `IAudioCapture` and `IAudioPlayback`:

```typescript
import { createStellarClient } from "@stellar-ai/agent-sdk";
import type { IAudioCapture, IAudioPlayback } from "@stellar-ai/agent-sdk";

const client = createStellarClient({
  audioCapture: new MyCustomAudioCapture(),
  audioPlayback: new MyCustomAudioPlayback(),
});
```
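Custom implementations must exchange PCM16 24 kHz mono audio (see the interface tables below), while devices commonly capture at 44.1 or 48 kHz. A naive linear-interpolation downsampler is sketched here as an illustration of the format conversion a custom capture would need; a production implementation would typically apply a low-pass filter before decimating to avoid aliasing:

```typescript
// Sketch: downsample captured PCM16 audio to the 24 kHz mono format the SDK
// expects, using linear interpolation between neighboring samples.
function resamplePcm16(
  input: Int16Array,
  inputRate: number,
  outputRate = 24000,
): Int16Array {
  const ratio = inputRate / outputRate;
  const outLength = Math.floor(input.length / ratio);
  const out = new Int16Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio;
    const idx = Math.floor(pos);
    const frac = pos - idx;
    const a = input[idx];
    const b = input[Math.min(idx + 1, input.length - 1)];
    out[i] = Math.round(a + (b - a) * frac);
  }
  return out;
}

// 48 kHz capture -> 24 kHz: exactly half the samples survive.
const chunk = new Int16Array(960); // 20 ms at 48 kHz
console.log(resamplePcm16(chunk, 48000).length); // 480
```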
### IAudioCapture

| Method / Property | Description |
|---|---|
| `start(onAudioData: (data: Int16Array) => void)` | Start capturing. Call the callback with PCM16 24 kHz mono chunks. |
| `stop()` | Stop capturing and release resources. |
| `mute()` | Mute the microphone. |
| `unmute()` | Unmute the microphone. |
| `muted` (getter) | Whether the microphone is currently muted. |
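The interface above can be satisfied structurally, so no SDK import is needed to type-check a custom class. A minimal sketch over a hypothetical native chunk source (the `subscribe`/`unsubscribe` shape is an assumption for illustration), where muting drops chunks instead of forwarding them:

```typescript
// Hypothetical source of PCM16 24 kHz mono frames, e.g. a native module.
type ChunkSource = {
  subscribe(cb: (data: Int16Array) => void): void;
  unsubscribe(): void;
};

// Sketch of a custom capture matching the IAudioCapture shape above.
class SourceBackedCapture {
  #muted = false;
  #running = false;
  #source: ChunkSource;

  constructor(source: ChunkSource) {
    this.#source = source;
  }

  start(onAudioData: (data: Int16Array) => void): void {
    this.#running = true;
    this.#source.subscribe((chunk) => {
      // Forward chunks only while active and unmuted.
      if (this.#running && !this.#muted) onAudioData(chunk);
    });
  }

  stop(): void {
    this.#running = false;
    this.#source.unsubscribe();
  }

  mute(): void {
    this.#muted = true;
  }

  unmute(): void {
    this.#muted = false;
  }

  get muted(): boolean {
    return this.#muted;
  }
}
```

Because TypeScript types are structural, an instance of this class can be passed as `audioCapture` to `createStellarClient` as long as it matches the interface.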
### IAudioPlayback

| Method | Description |
|---|---|
| `start()` | Initialize the audio output. |
| `stop()` | Stop playback and release resources. |
| `play(pcm16Data: Int16Array)` | Queue a PCM16 24 kHz mono chunk for playback. |
| `interrupt()` | Stop all current playback immediately (used for barge-in). |
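The key behavior to get right in a custom playback is the queue: `play` enqueues chunks and `interrupt` drops everything not yet played, which is what makes barge-in feel instant. A sketch under the assumption that a hypothetical output device pulls one chunk per audio frame via `tick()`:

```typescript
// Sketch of a playback implementation matching the IAudioPlayback shape above.
// `sink` is a hypothetical stand-in for writing a chunk to the audio output.
class QueuedPlayback {
  #queue: Int16Array[] = [];
  #started = false;
  #sink: (chunk: Int16Array) => void;

  constructor(sink: (chunk: Int16Array) => void) {
    this.#sink = sink;
  }

  start(): void {
    this.#started = true;
  }

  stop(): void {
    this.#started = false;
    this.#queue = [];
  }

  play(pcm16Data: Int16Array): void {
    if (this.#started) this.#queue.push(pcm16Data);
  }

  interrupt(): void {
    // Barge-in: drop everything that has not reached the output yet.
    this.#queue = [];
  }

  // In a real implementation the output device would call this on a timer,
  // pulling one queued chunk per audio frame.
  tick(): void {
    const chunk = this.#queue.shift();
    if (chunk) this.#sink(chunk);
  }
}
```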
## Resource cleanup and lifecycle

The SDK automatically cleans up resources (WebSocket connections, microphone access) when the page unloads or the app closes. Call `conversation.end()` explicitly if you want to end a conversation before that.

In a Capacitor app, the SDK uses Web Audio for microphone capture through the WebView. The SDK doesn't manage app backgrounding; your app should end conversations when going to background if appropriate, and optionally start new ones on resume.
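The background/resume handling described above can be sketched as a small state machine that you wire to your platform's lifecycle events (e.g. Capacitor's App plugin or React Native's AppState; the event wiring itself is omitted here and the `Conversation` shape is reduced to the `end()` method documented above):

```typescript
// Minimal conversation surface assumed for this sketch.
type Conversation = { end(): Promise<void> | void };

// Ends the active conversation on background, starts a fresh one on resume.
class ConversationLifecycle {
  #active: Conversation | null = null;
  #startFn: () => Promise<Conversation>;

  constructor(startFn: () => Promise<Conversation>) {
    this.#startFn = startFn;
  }

  async handle(event: "background" | "resume"): Promise<void> {
    if (event === "background" && this.#active) {
      await this.#active.end();
      this.#active = null;
    } else if (event === "resume" && !this.#active) {
      this.#active = await this.#startFn();
    }
  }

  get active(): boolean {
    return this.#active !== null;
  }
}
```

Pass a closure over `client.startConversation(...)` as `startFn` and call `handle` from your platform's lifecycle listener.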
## Environment requirements

- Browser: a modern browser with WebSocket and Web Audio support; HTTPS in production (except localhost).
- React Native / Expo: install `@mykin-ai/expo-audio-stream` and use the built-in Expo implementations (see React Native / Expo).
- Node.js: 18+
## Next steps

- Voice conversations – real-time audio conversations with agents
- Text chat – messaging-based conversations with streaming responses