AllOnEars API
Real-time speech intelligence infrastructure
Quick Start
Get up and running with AllOnEars in under 5 minutes. Install the React SDK, connect to the WebSocket stream, and start receiving real-time context cards.
# Install the React SDK
npm install @allonears/react-sdk
# Use in your React app
import { useAllOnEars } from '@allonears/react-sdk'
const { startStream, transcript, cards } =
useAllOnEars('your_api_key', {
domain: 'general',
aggressiveness: 'medium'
})
// Start streaming audio
await startStream()
// Cards arrive automatically via WebSocket
// as the conversation flows
Authentication
All API requests require a valid JWT (JSON Web Token) generated via the /auth/token endpoint. Pass the token in the Authorization header for REST, or as a query parameter for WebSocket connections.
⚠ Never expose your API key in client-side code. Use environment variables and generate JWT tokens server-side.
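Once your server has exchanged the API key for a JWT, the client only needs to know when to ask for a fresh one. A minimal sketch in Node.js of reading the token's expiry — decodeJwtPayload and msUntilRefresh are illustrative helpers, not SDK exports, and decoding does not verify the signature (verification stays server-side):

```javascript
// Decode the payload segment of a JWT without verifying it —
// useful only for reading claims such as `exp` to schedule a refresh.
// (Illustrative helper; NOT a substitute for server-side verification.)
function decodeJwtPayload(token) {
  const payload = token.split('.')[1]
  // JWT segments are base64url-encoded JSON
  const json = Buffer.from(payload, 'base64url').toString('utf8')
  return JSON.parse(json)
}

// Milliseconds until the token should be refreshed,
// with 60 s of slack before the actual expiry.
function msUntilRefresh(token, slackMs = 60_000) {
  const { exp } = decodeJwtPayload(token) // `exp` is in seconds
  return Math.max(0, exp * 1000 - Date.now() - slackMs)
}
```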
# Generate a JWT token
curl -X POST https://api.allonears.ai/auth/token \
-H "Content-Type: application/json" \
-d '{"api_key": "your_api_key"}'
# Use in REST requests
curl -X GET https://api.allonears.ai/v1/sessions \
-H "Authorization: Bearer <jwt_token>"
# Use in WebSocket connections
wss://api.allonears.ai/v1/stream?token=<jwt_token>
Sessions
A session is configured via a WebSocket event after connecting to the stream. Send a session.config message to set the domain, aggressiveness, and buffer behavior for your use case.
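The buffer_size and overflow_behavior settings govern what happens when audio arrives faster than it can be shipped. A sketch of what drop_oldest semantics imply on the client side — the class and its exact behavior are illustrative, not part of the SDK:

```javascript
// Illustrative client-side queue mirroring `drop_oldest` semantics:
// when the queue is full, the oldest chunk is discarded to make room.
class DropOldestBuffer {
  constructor(capacity) {
    this.capacity = capacity
    this.chunks = []
  }
  push(chunk) {
    if (this.chunks.length >= this.capacity) {
      this.chunks.shift() // drop the oldest chunk
    }
    this.chunks.push(chunk)
  }
  drain() {
    // Hand back everything buffered so far and reset
    const out = this.chunks
    this.chunks = []
    return out
  }
}
```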
Parameters: domain (required), aggressiveness, language, buffer_size, data_residency
// Configure session after WebSocket connect
ws.send(JSON.stringify({
type: 'session.config',
domain: 'medical',
aggressiveness: 'low',
language: 'en',
buffer_size: 2048,
overflow_behavior: 'drop_oldest'
}))
// Send user feedback for RLHF
ws.send(JSON.stringify({
type: 'interaction.feedback',
card_id: 'c-998877',
action: 'clicked' // or 'dismissed'
}))
Streaming
Stream audio via WebSocket for real-time transcription and context card generation. Send 16-bit PCM audio chunks at 16kHz sample rate.
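The SDK handles capture and encoding for you; if you stream raw audio yourself, samples from the Web Audio API arrive as Float32 values in [-1, 1] and must be converted to 16-bit PCM first. A minimal sketch — floatTo16BitPCM is an illustrative helper, not an SDK export:

```javascript
// Convert Float32 samples (Web Audio API range [-1, 1])
// to 16-bit signed PCM, clamping out-of-range values.
function floatTo16BitPCM(float32) {
  const pcm = new Int16Array(float32.length)
  for (let i = 0; i < float32.length; i++) {
    const s = Math.max(-1, Math.min(1, float32[i]))
    // Scale negatives by 0x8000 and positives by 0x7FFF so both
    // endpoints map exactly onto the int16 range
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff
  }
  return pcm
}
```

Send `pcm.buffer` over the socket; resample to 16 kHz first if your capture context runs at a different rate.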
Server → Client Events:
transcript.partial, transcript.final, context.card, system.status
// Connect to WebSocket
const ws = new WebSocket(
'wss://api.allonears.ai/v1/stream?token=<jwt>'
)
// Configure session on connect
ws.onopen = () => {
ws.send(JSON.stringify({
type: 'session.config',
domain: 'medical',
aggressiveness: 'medium'
}))
}
// Stream audio chunks (PCM16 @ 16kHz).
// Note: MediaRecorder emits container-encoded audio (WebM/Opus),
// not raw PCM — capture raw samples with the Web Audio API
// (e.g. an AudioWorkletNode) and convert to 16-bit PCM instead.
audioWorkletNode.port.onmessage = (e) => {
  ws.send(e.data) // e.data: ArrayBuffer of PCM16 samples
}
// Receive events
ws.onmessage = (event) => {
const data = JSON.parse(event.data)
switch (data.type) {
case 'transcript.partial':
console.log('Interim:', data.text)
break
case 'transcript.final':
console.log('Final:', data.text)
break
case 'context.card':
console.log('Card:', data.title)
break
case 'system.status':
console.log('Latency:', data.latency_ms)
break
}
}
Insights
Context cards are streamed in real time over the WebSocket as the conversation flows. Each card contains a trigger entity, visual content, source information, and a relevance score.
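Because cards are pushed continuously, a common client pattern is to keep only the most relevant ones on screen. A sketch that thresholds on relevance_score and caps the visible list — the threshold and cap values are arbitrary, not SDK defaults:

```javascript
// Keep cards above a relevance threshold, highest-scoring first,
// capped at `maxVisible`. Defaults here are illustrative.
function selectVisibleCards(cards, { minScore = 0.7, maxVisible = 5 } = {}) {
  return cards
    .filter((card) => card.relevance_score >= minScore)
    .sort((a, b) => b.relevance_score - a.relevance_score)
    .slice(0, maxVisible)
}
```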
Fields: trigger_entity, card_type, content_url, relevance_score, source
// context.card event payload
{
"type": "context.card",
"id": "c-998877",
"trigger_entity": "aerodynamic loads",
"card_type": "image",
"title": "Aerodynamic Load Distribution",
"content_url": "https://cdn.allonears.ai/...",
"summary": "Distribution of pressure...",
"source": "NASA Technical Reports",
"relevance_score": 0.88
}
// Card types: image, text, graph, map
// Cards are pushed automatically —
// no polling or REST calls needed
Webhooks
Receive server-side event notifications via HTTP webhooks. Configure a webhook URL in your session config to receive context card and transcript events for backend processing.
Event Types:
context.card.created, transcript.final, session.ended
// Webhook payload example
{
"event": "context.card.created",
"session_id": "ses_a1b2c3d4e5",
"timestamp": "2026-03-15T14:32:00Z",
"data": {
"id": "c-998877",
"trigger_entity": "myocardial infarction",
"card_type": "image",
"title": "ECG Reference Chart",
"relevance_score": 0.94,
"source": "PubMed"
},
"signature": "sha256=a1b2c3..."
}
SDKs & Libraries
Official client libraries for integrating AllOnEars into your application.
# React (Web)
npm install @allonears/react-sdk
# React Native (Mobile)
npm install @allonears/react-native-sdk
# Swift (iOS/macOS)
# Add to Package.swift:
.package(
url: "https://github.com/allonears/swift-sdk",
from: "0.1.0"
)
# Kotlin (Android)
// build.gradle
implementation 'com.allonears:sdk:0.1.0'