Documentation Index
Fetch the complete documentation index at: https://docs.ambertrace.dev/llms.txt
Use this file to discover all available pages before exploring further.
The Python SDK supports Python 3.8+ and works with both synchronous and asynchronous code.
Installation
Install with specific provider support
# Install with OpenAI support
pip install ambertrace[openai]
# Install with Anthropic support
pip install ambertrace[anthropic]
# Install with Google support (original SDK)
pip install ambertrace[gemini]
# Install with Google support (newer SDK)
pip install ambertrace[gemini-new]
# Install with all providers
pip install ambertrace[all]
# Or install core and add SDKs separately
pip install ambertrace
pip install openai anthropic google-generativeai
Requirements:
- Python 3.8+
- OpenAI SDK: openai>=1.0.0 (optional)
- Anthropic SDK: anthropic>=0.18.0 (optional)
- Google (original): google-generativeai>=0.3.0 (optional)
- Google (newer): google-genai>=0.1.0 (optional)
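The requirements above can be verified at runtime with a small stdlib-only sketch. Note that `check_requirements` is a hypothetical helper written for this page, not part of AmberTrace; the names passed to it are the PyPI distributions listed above.

```python
import sys
from importlib import metadata

def check_requirements():
    """Verify the Python version and report which optional provider SDKs are installed."""
    assert sys.version_info >= (3, 8), "ambertrace requires Python 3.8+"
    installed = {}
    for dist in ("openai", "anthropic", "google-generativeai", "google-genai"):
        try:
            installed[dist] = metadata.version(dist)
        except metadata.PackageNotFoundError:
            installed[dist] = None  # optional: install only the providers you use
    return installed

print(check_requirements())
```

A `None` entry simply means that provider SDK is not installed, which is fine as long as you do not call that provider.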
Quick Start
OpenAI Usage
import openai
import ambertrace
# 1. Initialize AmberTrace (once at startup)
ambertrace.init(api_key="your_ambertrace_api_key")
# 2. Use the OpenAI SDK normally - tracing happens automatically!
client = openai.OpenAI(api_key="your_openai_api_key")
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
# 3. (Optional) Flush traces before exit
ambertrace.flush()
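If your application can exit from several code paths, registering the flush at startup with atexit is one way to make sure buffered traces are sent. This is a sketch of the pattern; a stand-in flush is used here so it runs without the SDK, and in real code you would register ambertrace.flush instead.

```python
import atexit

# Stand-in for ambertrace.flush so this sketch runs without the SDK;
# in real code, register ambertrace.flush here instead.
def flush():
    print("flushing traces")

# Registered callbacks run at interpreter shutdown, covering early
# returns and sys.exit() as well as normal completion.
atexit.register(flush)
```

This assumes the flush function is safe to call during interpreter shutdown, which holds for ordinary blocking network calls.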
Anthropic Claude Usage
import anthropic
import ambertrace
# 1. Initialize AmberTrace (do this once at startup)
ambertrace.init(api_key="your_ambertrace_api_key")
# 2. Use Anthropic SDK normally - tracing happens automatically!
client = anthropic.Anthropic(api_key="your_anthropic_api_key")
response = client.messages.create(
    model="claude-opus-4-5-20251101",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.content[0].text)
# 3. (Optional) Flush traces before exit
ambertrace.flush()
Google Usage
import google.generativeai as genai
import ambertrace
# 1. Initialize AmberTrace (do this once at startup)
ambertrace.init(api_key="your_ambertrace_api_key")
# 2. Use Google SDK normally - tracing happens automatically!
genai.configure(api_key="your_gemini_api_key")
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Hello!")
print(response.text)
# 3. (Optional) Flush traces before exit
ambertrace.flush()
The newer google-genai SDK is also supported:
from google import genai
import ambertrace
ambertrace.init(api_key="your_ambertrace_api_key")
client = genai.Client(api_key="your_gemini_api_key")
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Hello!"
)
print(response.text)
ambertrace.flush()
Multi-Provider Usage
import openai
import anthropic
import google.generativeai as genai
import ambertrace
# Initialize once - traces all providers automatically
ambertrace.init(api_key="your_ambertrace_api_key")
# Use OpenAI
openai_client = openai.OpenAI(api_key="your_openai_api_key")
gpt_response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is Python?"}]
)
# Use Anthropic
anthropic_client = anthropic.Anthropic(api_key="your_anthropic_api_key")
claude_response = anthropic_client.messages.create(
    model="claude-opus-4-5-20251101",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What is Python?"}]
)
# Use Google
genai.configure(api_key="your_gemini_api_key")
gemini_model = genai.GenerativeModel("gemini-pro")
gemini_response = gemini_model.generate_content("What is Python?")
# All calls are traced to AmberTrace!
ambertrace.flush()
Error Handling
AmberTrace traces both successful calls and errors:
import openai
import ambertrace
ambertrace.init(api_key="your_api_key")
client = openai.OpenAI(api_key="invalid_key")
try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
except openai.AuthenticationError as e:
    # OpenAI error is raised normally to your code,
    # BUT the error is also traced and sent to AmberTrace
    print(f"Authentication failed: {e}")
Disabling/Enabling Tracing
import ambertrace
ambertrace.init(api_key="your_api_key")
# Temporarily disable tracing
ambertrace.disable()
# ... OpenAI calls here are NOT traced ...
# Re-enable tracing
ambertrace.enable()
# ... OpenAI calls here ARE traced again ...
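The disable()/enable() pair above can be wrapped in a small context manager so tracing is always re-enabled, even if the wrapped code raises. This is a sketch of the pattern, not an AmberTrace API; it takes the two callables as arguments so it runs here without the SDK (in real code, pass ambertrace.disable and ambertrace.enable).

```python
from contextlib import contextmanager

@contextmanager
def tracing_paused(disable, enable):
    """Call disable() on entry and guarantee enable() on exit, even on error."""
    disable()
    try:
        yield
    finally:
        enable()

# Exercise the pattern with stand-in callables:
events = []
with tracing_paused(lambda: events.append("off"), lambda: events.append("on")):
    events.append("untraced work")
print(events)  # ['off', 'untraced work', 'on']
```

The try/finally guarantees that an exception inside the with block cannot leave tracing silently disabled for the rest of the process.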
Check If Tracing Is Active
import ambertrace
if ambertrace.is_enabled():
    print("Tracing is active")
else:
    print("Tracing is disabled")
What’s Traced?
Successful Calls
For each successful LLM API call, AmberTrace captures:
- Request Data
  - Model name
  - Full conversation history (all messages)
  - Parameters (temperature, max_tokens, etc.)
- Response Data
  - Response ID
  - Model used
  - Generated messages
  - Token usage (prompt, completion, total)
  - Finish reason
- Metadata
  - Unique trace ID
  - Timestamp (ISO 8601 UTC)
  - Duration in milliseconds
  - SDK version
  - Environment tag (if configured)
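Put together, a successful-call trace conceptually resembles the record below. The field names and nesting are illustrative only, not AmberTrace's actual wire format.

```python
import uuid
from datetime import datetime, timezone

# Illustrative shape of a successful-call trace; the field names are
# not AmberTrace's actual schema.
trace = {
    "trace_id": str(uuid.uuid4()),                        # unique trace ID
    "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601 UTC
    "duration_ms": 412.7,
    "sdk_version": "x.y.z",                               # placeholder
    "environment": "production",                          # tag, if configured
    "request": {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello!"}],
        "params": {"temperature": 1.0, "max_tokens": 256},
    },
    "response": {
        "id": "resp-abc123",
        "model": "gpt-4",
        "messages": [{"role": "assistant", "content": "Hi!"}],
        "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12},
        "finish_reason": "stop",
    },
}
print(sorted(trace))
```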
Failed Calls
When an LLM call fails, AmberTrace traces:
- Request data (same as above)
- Error information:
  - Exception type
  - Error message
  - Error code (if available)
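The error information can be pictured as the mapping below, built from a caught exception. The field names are illustrative, and a plain ValueError stands in for a provider SDK error.

```python
# Build the error-information fields from a caught exception; a plain
# ValueError stands in for a real provider SDK error here.
try:
    raise ValueError("invalid request")
except Exception as exc:
    error_info = {
        "type": type(exc).__name__,          # exception type
        "message": str(exc),                 # error message
        "code": getattr(exc, "code", None),  # error code, if the SDK sets one
    }

print(error_info)  # {'type': 'ValueError', 'message': 'invalid request', 'code': None}
```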