The Problem
What does each user cost you?
Who's burning through tokens?
Are credit prices aligned to costs?
How do you prevent negative balances?
Core Features
Token usage and costs per end-user across all models
Reserve credits before each LLM call, confirm actual usage after; balances never go negative
Usage breakdowns by user, model, and time period
Auto-calculated or custom pricing
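The breakdowns above boil down to grouping confirmed transactions by user, model, or period. A minimal sketch of that aggregation, assuming a hypothetical transaction shape (the real SDK's records may differ):

```typescript
// Hypothetical transaction record; field names are illustrative,
// not the SDK's actual schema.
interface Txn {
  externalId: string; // end-user id
  model: string;
  credits: number;
}

// Sum credits by an arbitrary key, e.g. per user or per model.
function breakdown(txns: Txn[], key: (t: Txn) => string): Map<string, number> {
  const totals = new Map<string, number>();
  for (const t of txns) {
    totals.set(key(t), (totals.get(key(t)) ?? 0) + t.credits);
  }
  return totals;
}

const txns: Txn[] = [
  { externalId: "user_123", model: "gpt-4o", credits: 85 },
  { externalId: "user_123", model: "gpt-4o-mini", credits: 10 },
  { externalId: "user_456", model: "gpt-4o", credits: 40 },
];

console.log(breakdown(txns, (t) => t.externalId)); // totals per user
console.log(breakdown(txns, (t) => t.model));      // totals per model
```

A time-period breakdown is the same idea with a key like `t => t.createdAt.toISOString().slice(0, 10)`.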
How It Works
Hold estimated credits before the LLM call
Make your API call while credits are held
Deduct actual cost, release the rest
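The three steps above can be sketched as one reusable helper. The client interface and the in-memory stub below are assumptions for illustration, modeled on the documented createReservation/confirmReservation calls; confirming 0 on failure to release the full hold is likewise an assumption, not a documented behavior:

```typescript
// Hypothetical minimal client shape, mirroring the documented calls.
interface ReservationClient {
  createReservation(opts: { externalId: string; amount: number }): Promise<{ reservation: { id: string } }>;
  confirmReservation(id: string, actualAmount: number): Promise<{ transactionId: string }>;
}

// Hold `estimate` credits, run `work`, then confirm the actual cost.
// On failure, confirm 0 so the hold is released (assumption: confirming
// 0 charges nothing and refunds the full reservation).
async function withReservation<T>(
  client: ReservationClient,
  externalId: string,
  estimate: number,
  work: () => Promise<{ result: T; actualCost: number }>
): Promise<T> {
  const { reservation } = await client.createReservation({ externalId, amount: estimate });
  try {
    const { result, actualCost } = await work();
    await client.confirmReservation(reservation.id, actualCost);
    return result;
  } catch (err) {
    await client.confirmReservation(reservation.id, 0); // release the hold
    throw err;
  }
}

// In-memory stub so the flow runs without a network call.
const balances = new Map<string, number>([["user_123", 500]]);
const held = new Map<string, { user: string; amount: number }>();
const stub: ReservationClient = {
  async createReservation({ externalId, amount }) {
    const bal = balances.get(externalId) ?? 0;
    if (bal < amount) throw new Error("insufficient credits");
    balances.set(externalId, bal - amount);
    const id = `res_${held.size + 1}`;
    held.set(id, { user: externalId, amount });
    return { reservation: { id } };
  },
  async confirmReservation(id, actual) {
    const hold = held.get(id)!;
    held.delete(id);
    // Deduct the actual cost by refunding the unused part of the hold.
    balances.set(hold.user, (balances.get(hold.user) ?? 0) + (hold.amount - actual));
    return { transactionId: `txn_${id}` };
  },
};

// Hold 100, spend 85: the remaining 15 returns to the balance.
await withReservation(stub, "user_123", 100, async () => ({ result: "ok", actualCost: 85 }));
console.log("remaining balance:", balances.get("user_123")); // 415
```

Because the estimate is deducted up front, a user whose balance can't cover the hold is rejected before the LLM call is made, which is what keeps balances non-negative.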
Integration
import { CreditsClient } from "@credits-dev/sdk";

const client = new CreditsClient({
  apiKey: process.env.CREDITS_API_KEY,
});

// 1. Reserve before LLM call
const { reservation } = await client.createReservation({
  externalId: "user_123",
  amount: 100,
});

// 2. Make your LLM call
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: prompt }],
});

// 3. Confirm actual usage
const confirmed = await client.confirmReservation(reservation.id, 85);
console.log("Transaction id:", confirmed.transactionId);

SDKs for JavaScript, Python, and Go are coming soon.
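In practice the confirmed amount comes from the completion's reported token usage rather than a hard-coded 85. A sketch of that conversion, with an entirely illustrative pricing table (the credit rates below are made up, not real prices, and the table itself is an assumption rather than an SDK feature):

```typescript
// Hypothetical pricing table: credits per 1K tokens, by model.
// These rates are illustrative only.
const CREDITS_PER_1K: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 5, output: 15 },
};

// Convert reported token usage into a credit amount, rounding up so the
// confirmed cost never undershoots a fractional credit.
function creditsForUsage(
  model: string,
  usage: { prompt_tokens: number; completion_tokens: number }
): number {
  const price = CREDITS_PER_1K[model];
  if (!price) throw new Error(`no pricing for model ${model}`);
  const cost =
    (usage.prompt_tokens / 1000) * price.input +
    (usage.completion_tokens / 1000) * price.output;
  return Math.ceil(cost);
}

// e.g. 1,200 prompt tokens + 400 completion tokens on gpt-4o:
console.log(creditsForUsage("gpt-4o", { prompt_tokens: 1200, completion_tokens: 400 })); // 12
```

With a helper like this, step 3 becomes `client.confirmReservation(reservation.id, creditsForUsage("gpt-4o", completion.usage))`, using the `usage` object OpenAI returns on each completion.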