Track your OpenAI API costs including GPT-4, GPT-3.5-Turbo, and other models.
```shell
npm install costlens openai
```
Use the wrapper for automatic tracking:
```typescript
import { CostLens } from 'costlens';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const costlens = new CostLens({
  apiKey: process.env.COSTLENS_API_KEY
});

// Wrap your client
const tracked = costlens.wrapOpenAI(openai);

// Use it exactly like the normal OpenAI client
const result = await tracked.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});
// ✅ Automatically tracked!
```
✨ Benefits: No timing code, automatic error tracking, built-in retries, and optional caching!
For more control, track manually:
```typescript
import { CostLens } from 'costlens';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const costlens = new CostLens({
  apiKey: process.env.COSTLENS_API_KEY
});

const params = {
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
};

const start = Date.now();
const result = await openai.chat.completions.create(params);
await costlens.trackOpenAI(params, result, Date.now() - start);
```
Save costs by caching responses:
```typescript
const costlens = new CostLens({
  apiKey: process.env.COSTLENS_API_KEY,
  enableCache: true // Enable caching
});

const tracked = costlens.wrapOpenAI(openai);

// Cache this response for 1 hour
const result = await tracked.chat.completions.create(
  { model: 'gpt-4', messages: [...] },
  { cacheTTL: 3600000 }
);
```
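Conceptually, enabling the cache means a second call with identical parameters inside the TTL window can be answered without another paid API request. The sketch below illustrates a TTL cache keyed on serialized request parameters; `TTLCache` is a hypothetical helper for illustration, not CostLens's actual internals:

```typescript
// Hypothetical sketch of a TTL cache keyed by serialized request params.
// Illustrates the caching concept; CostLens's implementation may differ.
class TTLCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  // Store a value under a params object, expiring after ttlMs milliseconds.
  set(params: unknown, value: V, ttlMs: number): void {
    const key = JSON.stringify(params);
    this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  // Return the cached value, or undefined if missing or expired.
  get(params: unknown): V | undefined {
    const key = JSON.stringify(params);
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }
}
```

Keying on the serialized params means any change to the model or messages produces a fresh cache entry, so only truly identical requests are served from cache.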
Track streaming responses:
```typescript
const stream = await tracked.chat.completions.stream({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }]
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// ✅ Automatically tracked after the stream completes
```
A complete manual tracking example, from initialization to logging the response:

```typescript
import { CostLens } from 'costlens';
import OpenAI from 'openai';

// Initialize
const costlens = new CostLens({
  apiKey: process.env.COSTLENS_API_KEY
});
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// Make your API call
const params = {
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ]
};

const start = Date.now();
const result = await openai.chat.completions.create(params);

// Track the call
await costlens.trackOpenAI(params, result, Date.now() - start);

console.log(result.choices[0].message.content);
```
Pass a promptId to group and analyze specific prompts in your dashboard:
```typescript
const params = {
  model: 'gpt-4',
  messages: [...]
};

const start = Date.now();
const result = await openai.chat.completions.create(params);

// Pass promptId as the 4th parameter
await costlens.trackOpenAI(
  params,
  result,
  Date.now() - start,
  'customer-support-v2'
);
```
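If one call site always uses the same promptId, a small wrapper can bind it once so every track call stays grouped consistently. `withPromptId` below is a hypothetical convenience helper, not part of the CostLens API:

```typescript
// Hypothetical helper: pre-binds a promptId so call sites don't repeat it.
type TrackFn = (
  params: unknown,
  result: unknown,
  durationMs: number,
  promptId?: string
) => Promise<void>;

function withPromptId(track: TrackFn, promptId: string): TrackFn {
  return (params, result, durationMs) =>
    track(params, result, durationMs, promptId);
}
```

Usage would look like `const trackSupport = withPromptId(costlens.trackOpenAI.bind(costlens), 'customer-support-v2');`, after which `trackSupport(params, result, durationMs)` always reports under that promptId.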
Always track failed calls to monitor error rates:
```typescript
const start = Date.now();
try {
  const result = await openai.chat.completions.create(params);
  await costlens.trackOpenAI(params, result, Date.now() - start);
  return result;
} catch (error) {
  await costlens.trackError(
    'openai',
    params.model,
    JSON.stringify(params.messages),
    error,
    Date.now() - start
  );
  throw error;
}
```
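The try/catch pattern above can be factored into one reusable helper that times the call and records either the success or the failure. This is a sketch built only on the `trackOpenAI` and `trackError` calls shown in this guide, not an official CostLens utility:

```typescript
// Minimal interface covering the two tracking calls used in this guide.
interface Tracker {
  trackOpenAI(params: unknown, result: unknown, durationMs: number): Promise<void>;
  trackError(
    provider: string,
    model: string,
    input: string,
    error: unknown,
    durationMs: number
  ): Promise<void>;
}

// Hypothetical helper: time an OpenAI call, track success or failure,
// and rethrow errors so callers can still handle them.
async function callWithTracking<T>(
  tracker: Tracker,
  params: { model: string; messages: unknown[] },
  call: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    const result = await call();
    await tracker.trackOpenAI(params, result, Date.now() - start);
    return result;
  } catch (error) {
    await tracker.trackError(
      'openai',
      params.model,
      JSON.stringify(params.messages),
      error,
      Date.now() - start
    );
    throw error;
  }
}
```

Usage: `await callWithTracking(costlens, params, () => openai.chat.completions.create(params));` keeps the error-rate bookkeeping in one place instead of repeating the try/catch at every call site.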
All OpenAI models are supported, and default pricing is applied automatically.
💡 Custom Pricing: Set your own negotiated rates in Settings → Pricing for maximum accuracy.
⚠️ Your Responsibility: You are responsible for verifying and setting accurate pricing. CostLens is not responsible for pricing accuracy. Always check openai.com/pricing for current rates.
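Per-call cost is typically derived from the token counts on the response's `usage` object multiplied by per-token rates. The sketch below shows that arithmetic; the `Rates` values passed to it are placeholders you would replace with your configured (or negotiated) prices, not real OpenAI rates:

```typescript
// Token counts as reported on an OpenAI chat completion response.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
}

// Per-1K-token rates in USD. PLACEHOLDERS: configure real rates yourself.
interface Rates {
  inputPer1K: number;
  outputPer1K: number;
}

// Estimate the cost of one call: input and output tokens are billed
// at separate rates, quoted per 1,000 tokens.
function estimateCostUSD(usage: Usage, rates: Rates): number {
  return (
    (usage.prompt_tokens / 1000) * rates.inputPer1K +
    (usage.completion_tokens / 1000) * rates.outputPer1K
  );
}
```

Because input and output rates differ, a prompt-heavy workload and a generation-heavy workload with the same total token count can cost very different amounts.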