The official TypeScript/Node.js SDK for tracing LLM calls to OpenAI, Anthropic, and Google Gemini.

Installation

npm install @ambertrace/node
Install your preferred LLM SDK(s):
# For OpenAI support
npm install openai

# For Anthropic support
npm install @anthropic-ai/sdk

# For Google support (original SDK)
npm install @google/generative-ai

# For Google support (newer SDK)
npm install @google/genai

# Or all providers at once (add @google/genai if you use the newer Google SDK)
npm install openai @anthropic-ai/sdk @google/generative-ai

Quick Start

import ambertrace from '@ambertrace/node';
import OpenAI from 'openai';

// 1. Initialize AmberTrace (one time, at app startup)
ambertrace.init({
  apiKey: process.env.AMBERTRACE_API_KEY,
  environment: 'production',
});

// 2. Use OpenAI as normal - calls are automatically traced!
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);

// 3. Before exiting, flush pending traces
await ambertrace.flush();
That’s it! Every OpenAI, Anthropic, and Google call is now traced to your AmberTrace dashboard.

Usage Examples

OpenAI (ESM)
import ambertrace from '@ambertrace/node';
import OpenAI from 'openai';

ambertrace.init({ apiKey: process.env.AMBERTRACE_API_KEY });

const openai = new OpenAI();
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Explain TypeScript' }],
});

await ambertrace.flush();
Anthropic
import ambertrace from '@ambertrace/node';
import Anthropic from '@anthropic-ai/sdk';

ambertrace.init({ apiKey: process.env.AMBERTRACE_API_KEY });

const anthropic = new Anthropic();
const response = await anthropic.messages.create({
  model: 'claude-opus-4-5-20251101',
  max_tokens: 100,
  messages: [{ role: 'user', content: 'Hello Claude!' }],
});

await ambertrace.flush();
Google (ESM)
Using the original @google/generative-ai SDK:
import ambertrace from '@ambertrace/node';
import { GoogleGenerativeAI } from '@google/generative-ai';

ambertrace.init({ apiKey: process.env.AMBERTRACE_API_KEY });

const genai = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genai.getGenerativeModel({ model: 'gemini-pro' });

const result = await model.generateContent('Explain TypeScript');
console.log(result.response.text());

await ambertrace.flush();
Using the newer @google/genai SDK:
import ambertrace from '@ambertrace/node';
import { GoogleGenAI } from '@google/genai';

ambertrace.init({ apiKey: process.env.AMBERTRACE_API_KEY });

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY! });
const response = await ai.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'Explain TypeScript',
});

console.log(response.text);

await ambertrace.flush();
CommonJS
const ambertrace = require('@ambertrace/node').default;
const OpenAI = require('openai');

ambertrace.init({ apiKey: process.env.AMBERTRACE_API_KEY });

const openai = new OpenAI();
// ... use OpenAI as normal
Express.js API
import express from 'express';
import ambertrace from '@ambertrace/node';
import OpenAI from 'openai';

// Initialize AmberTrace at app startup
ambertrace.init({ apiKey: process.env.AMBERTRACE_API_KEY });

const app = express();
const openai = new OpenAI();

app.post('/chat', async (req, res) => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: req.body.message }],
  });

  res.json({ reply: response.choices[0].message.content });
});

app.listen(3000);

// Graceful shutdown
process.on('SIGTERM', async () => {
  await ambertrace.shutdown();
  process.exit(0);
});
Next.js API Route
// app/api/chat/route.ts
import ambertrace from '@ambertrace/node';
import OpenAI from 'openai';

// Initialize once (consider using a singleton)
if (!ambertrace.isEnabled()) {
  ambertrace.init({ apiKey: process.env.AMBERTRACE_API_KEY });
}

const openai = new OpenAI();

export async function POST(request: Request) {
  const { message } = await request.json();

  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: message }],
  });

  return Response.json({ reply: response.choices[0].message.content });
}
Multi-Provider (OpenAI + Anthropic + Gemini)
import ambertrace from '@ambertrace/node';
import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';
import { GoogleGenerativeAI } from '@google/generative-ai';

// Single init() traces all providers!
ambertrace.init({ apiKey: process.env.AMBERTRACE_API_KEY });

const openai = new OpenAI();
const anthropic = new Anthropic();
const genai = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

// All calls are automatically traced
const gptResponse = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello' }],
});

const claudeResponse = await anthropic.messages.create({
  model: 'claude-opus-4-5-20251101',
  max_tokens: 100,
  messages: [{ role: 'user', content: 'Hello' }],
});

const geminiModel = genai.getGenerativeModel({ model: 'gemini-pro' });
const geminiResponse = await geminiModel.generateContent('Hello');

await ambertrace.flush();

Error Handling

The SDK follows a never-fail philosophy:
  • Network errors are logged but never thrown
  • Trace collection errors never impact your application
  • If the backend is unavailable, traces are silently dropped
  • Your LLM calls always succeed/fail based on the provider, not the SDK
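The guarantee above can be sketched as a small wrapper that swallows delivery errors instead of rethrowing them. This is illustrative only, not AmberTrace's actual internals; safeEmit and the log format are hypothetical names:

```typescript
// Illustrative sketch of the never-fail pattern. "safeEmit" is a
// hypothetical helper, not part of the AmberTrace API.
function safeEmit(send: () => void, debug = false): void {
  try {
    send(); // attempt trace delivery
  } catch (err) {
    // Errors are logged (only when debug is on) but never rethrown,
    // so tracing can never break the surrounding LLM call.
    if (debug) console.error('[trace] delivery failed:', err);
  }
}

// A failing sender does not propagate its error:
safeEmit(() => { throw new Error('backend unavailable'); });
```

Because every trace operation goes through a guard like this, your application code never needs a try/catch around tracing itself.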
Enable debug: true to see trace delivery logs:
ambertrace.init({
  apiKey: 'your-api-key',
  debug: true, // See trace collection and delivery logs
});

What’s Traced?

Successful Calls

For each successful LLM API call, AmberTrace captures:
  • Request Data
    • Model name
    • Full conversation history (all messages)
    • Parameters (temperature, max_tokens, etc.)
  • Response Data
    • Response ID
    • Model used
    • Generated messages
    • Token usage (prompt, completion, total)
    • Finish reason
  • Metadata
    • Unique trace ID
    • Timestamp (ISO 8601 UTC)
    • Duration in milliseconds
    • SDK version
    • Environment tag (if configured)
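Put together, a successful-call trace record might look like the following. This is an illustrative sketch only; the interface and field names are assumptions, not AmberTrace's actual wire format:

```typescript
// Hypothetical shape of a successful-call trace record, for illustration;
// actual AmberTrace field names may differ.
interface LLMTrace {
  traceId: string;          // unique trace ID
  timestamp: string;        // ISO 8601 UTC
  durationMs: number;       // duration in milliseconds
  sdkVersion: string;
  environment?: string;     // environment tag, if configured
  request: {
    model: string;
    messages: { role: string; content: string }[];
    params?: Record<string, unknown>; // temperature, max_tokens, etc.
  };
  response: {
    id: string;
    model: string;
    messages: { role: string; content: string }[];
    usage: { promptTokens: number; completionTokens: number; totalTokens: number };
    finishReason: string;
  };
}

const example: LLMTrace = {
  traceId: 'tr_123',
  timestamp: new Date().toISOString(),
  durationMs: 842,
  sdkVersion: '1.0.0',
  environment: 'production',
  request: {
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello!' }],
    params: { temperature: 0.7 },
  },
  response: {
    id: 'resp_456',
    model: 'gpt-4',
    messages: [{ role: 'assistant', content: 'Hi there!' }],
    usage: { promptTokens: 9, completionTokens: 4, totalTokens: 13 },
    finishReason: 'stop',
  },
};
```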

Failed Calls

When an LLM call fails, AmberTrace traces:
  • Request data (same as above)
  • Error information:
    • Exception type
    • Error message
    • Error code (if available)
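As a sketch of how a caught provider error maps onto those three fields, one could normalize it like this. The interface and the toTraceError helper are hypothetical, shown only to illustrate the mapping:

```typescript
// Hypothetical shape of the error portion of a failed-call trace;
// field names are illustrative, not AmberTrace's actual format.
interface TraceError {
  type: string;    // exception type
  message: string; // error message
  code?: string;   // error code, if the provider supplies one
}

// Normalize a caught provider error into that shape.
function toTraceError(err: unknown): TraceError {
  if (err instanceof Error) {
    return {
      type: err.name,
      message: err.message,
      code: (err as { code?: string }).code,
    };
  }
  return { type: 'UnknownError', message: String(err) };
}
```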