समस्त (samast) = Sanskrit for "all", "everything", "universal"
Quick Start • Documentation • Examples • Migration
Stop rewriting LLM integration code. Samast provides a single TypeScript interface for 100+ AI models across OpenRouter, Anthropic, Google, OpenAI, and more. Switch providers in 2 lines without touching your application logic.
// Production-ready in 30 seconds
import { Samast } from '@darshjme/samast';
const ai = new Samast();
await ai.use('openrouter', { mode: 'api_key', apiKey: process.env.OPENROUTER_KEY });
const response = await ai.complete({
messages: [{ role: 'user', content: 'Explain quantum entanglement' }],
model: 'anthropic/claude-sonnet-4'
});

Built for production: type-safe, OAuth-ready, zero vendor lock-in.
- Provider Abstraction – Unified interface for OpenRouter, Anthropic, Google, OpenAI
- OAuth + API Keys – Device code flow for Claude, API key fallback for all
- 100% TypeScript – Full type safety with IntelliSense support
- Zero Lock-in – Switch providers without code changes
- Lightweight – ~5KB minified, tree-shakeable ESM
- Extensible – Add custom providers via a clean interface
- Production-Ready – Used by Brahmand CLI
npm install @darshjme/samast
# or
yarn add @darshjme/samast
# or
pnpm add @darshjme/samast

import { Samast } from '@darshjme/samast';
const ai = new Samast();
// Initialize with any provider
await ai.use('openrouter', {
mode: 'api_key',
apiKey: 'sk-or-v1-...'
});
// Generate completions
const response = await ai.complete({
messages: [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'What is TypeScript?' }
],
model: 'anthropic/claude-sonnet-4',
temperature: 0.7,
maxTokens: 1000
});
console.log(response.content);
// Usage stats included
console.log(response.usage); // { promptTokens, completionTokens, totalTokens }

// Start with OpenRouter
await ai.use('openrouter', { mode: 'api_key', apiKey: OR_KEY });
const resp1 = await ai.complete({ messages, model: 'anthropic/claude-opus-4' });
// Switch to Google Gemini
await ai.use('google', { mode: 'api_key', apiKey: GOOGLE_KEY });
const resp2 = await ai.complete({ messages, model: 'gemini-2.0-flash-exp' });
// Switch to Anthropic direct (with OAuth!)
await ai.use('anthropic', { mode: 'oauth', accessToken: CLAUDE_TOKEN });
const resp3 = await ai.complete({ messages, model: 'claude-sonnet-4' });

Samast uses a provider registry pattern with a clean separation between the client interface and provider implementations:
graph TB
A[Your App] -->|uses| B[Samast Client]
B -->|delegates to| C[Provider Registry]
C -->|manages| D[OpenRouter Provider]
C -->|manages| E[Anthropic Provider]
C -->|manages| F[Google Provider]
C -->|manages| G[Custom Providers...]
D -->|API calls| H[OpenRouter API<br/>100+ models]
E -->|API calls| I[Anthropic API<br/>Claude models]
F -->|API calls| J[Google AI API<br/>Gemini models]
style B fill:#4A90E2,color:#fff
style C fill:#50C878,color:#fff
style H fill:#E8E8E8
style I fill:#E8E8E8
style J fill:#E8E8E8
| Component | Purpose |
|---|---|
| Samast Client | Main entry point – simple, consistent API for your app |
| Provider Registry | Manages provider lifecycle, handles switching |
| Provider Interface | Standardized contract all providers implement |
| Built-in Providers | OpenRouter, Anthropic, Google (more coming) |
Key Design Principles:
- Abstraction over integration – One interface, many backends
- Auth flexibility – OAuth when possible, API keys as fallback
- Type safety first – Compile-time guarantees prevent runtime errors
- Zero configuration – Sensible defaults, explicit overrides
See ARCHITECTURE.md for implementation details.
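The registry pattern described above can be sketched in a few lines. This is an illustrative, self-contained model of the design, not the actual Samast source; the names and the simplified `Provider` contract here are assumptions for clarity.

```typescript
// Simplified model of the design (not the actual Samast source):
// providers implement one contract, the registry owns the lifecycle,
// and calling code only ever talks to the registry.
interface Provider {
  name: string;
  complete(prompt: string): Promise<string>;
}

class ProviderRegistry {
  private providers = new Map<string, Provider>();
  private active?: Provider;

  register(provider: Provider): void {
    this.providers.set(provider.name, provider);
  }

  // Switching providers only changes which entry is active;
  // callers never hold a provider-specific client.
  activate(name: string): void {
    const provider = this.providers.get(name);
    if (!provider) throw new Error(`Unknown provider: ${name}`);
    this.active = provider;
  }

  get activeName(): string | undefined {
    return this.active?.name;
  }

  async complete(prompt: string): Promise<string> {
    if (!this.active) throw new Error('No provider activated');
    return this.active.complete(prompt);
  }
}
```

Samast's real provider contract additionally covers auth configuration and model listing; see the Custom Provider example later in this README.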
Creates a new Samast instance. No configuration required.
const ai = new Samast();

Initialize and activate a provider. Returns a Promise that resolves when the provider is ready.
Signature:
async use(providerName: string, config: ProviderConfig): Promise<void>

Parameters:
| Parameter | Type | Description |
|---|---|---|
| providerName | string | Provider identifier ('openrouter', 'anthropic', 'google') |
| config | ProviderConfig | Authentication configuration (see below) |
Config Shapes:
// API Key mode (most providers)
{
mode: 'api_key',
apiKey: string,
baseURL?: string // Optional: override API endpoint
}
// OAuth mode (Anthropic, OpenAI)
{
mode: 'oauth',
accessToken: string,
refreshToken?: string,
clientId?: string,
clientSecret?: string
}

Example:
// OpenRouter with custom base URL
await ai.use('openrouter', {
mode: 'api_key',
apiKey: process.env.OPENROUTER_KEY,
baseURL: 'https://openrouter.ai/api/v1'
});
// Anthropic with OAuth tokens
await ai.use('anthropic', {
mode: 'oauth',
accessToken: tokens.access_token,
refreshToken: tokens.refresh_token
});

Send a chat completion request to the active provider.
Signature:
async complete(request: CompletionRequest): Promise<CompletionResponse>

Request Shape:
interface CompletionRequest {
messages: Message[]; // Conversation history
model?: string; // Model ID (provider-specific)
temperature?: number; // 0-2, default 0.7
maxTokens?: number; // Max response length
stream?: boolean; // Streaming support (future)
}
interface Message {
role: 'system' | 'user' | 'assistant';
content: string;
}

Response Shape:
interface CompletionResponse {
content: string; // Generated text
finishReason: string; // 'stop', 'length', 'content_filter'
usage?: {
promptTokens: number;
completionTokens: number;
totalTokens: number;
};
model?: string; // Actual model used
}

Example:
const response = await ai.complete({
messages: [
{ role: 'system', content: 'You are a technical writer' },
{ role: 'user', content: 'Explain REST APIs in 3 sentences' }
],
model: 'anthropic/claude-sonnet-4',
temperature: 0.3, // More focused
maxTokens: 200
});
console.log(response.content);
console.log(`Used ${response.usage?.totalTokens} tokens`); // usage is optional

List all models available from the currently active provider.
Signature:
async listModels(): Promise<string[]>

Example:
await ai.use('openrouter', { ... });
const models = await ai.listModels();
console.log(models);
// ['anthropic/claude-opus-4', 'openai/gpt-4-turbo', 'google/gemini-2.0-flash-exp', ...]

List all registered provider names (both built-in and custom).
Signature:
getProviders(): string[]

Example:
console.log(ai.getProviders());
// ['openrouter', 'anthropic', 'google', 'my-custom-provider']

Get the name of the currently active provider.
Signature:
getActiveProvider(): string | undefined

Example:
await ai.use('anthropic', { ... });
console.log(ai.getActiveProvider()); // 'anthropic'

Initiate OAuth device code flow for providers that support it (Anthropic, OpenAI).
Signature:
async startOAuth(providerName: string): Promise<{ authUrl: string; state: string }>

Example:
const { authUrl, state } = await ai.startOAuth('anthropic');
console.log(`Visit: ${authUrl}`);
console.log(`Device code: ${state}`);
// User approves in browser, then you poll for token...

Complete OAuth flow after user authorization.
Signature:
async handleOAuthCallback(
providerName: string,
code: string,
state: string
): Promise<ProviderConfig>

Example:
// After user approves
const config = await ai.handleOAuthCallback('anthropic', authCode, deviceCode);
// Use the new config
await ai.use('anthropic', config);

Advanced users can access the registry directly for custom workflows.
import { registry } from '@darshjme/samast';
// Register custom provider
registry.registerProvider(new MyCustomProvider());
// Get all models across all providers
const allModels = await registry.getAllModels();
console.log(allModels);
// Map { 'openrouter' => [...], 'anthropic' => [...], 'google' => [...] }

| Provider | Auth Modes | Models | OAuth | Status |
|---|---|---|---|---|
| OpenRouter | API Key | 100+ (Claude, GPT, Gemini, Llama, Mistral, etc.) | ❌ | ✅ Production |
| Anthropic | API Key, OAuth | Claude Opus 4, Sonnet 4, Haiku 4, 3.5 Sonnet | ✅ | ✅ Production |
| Google | API Key | Gemini 3 Pro, 2.0 Flash, 1.5 Pro/Flash | ❌ | ✅ Production |
| OpenAI | API Key, OAuth | GPT-4, GPT-4 Turbo, GPT-3.5 | ✅ | 🚧 Coming Soon |
| NVIDIA NIM | API Key | Nemotron, Llama models | ❌ | 🚧 Coming Soon |
| GitHub Copilot | OAuth | Codex, GPT-4 | ✅ | 🚧 Coming Soon |
| Feature | OpenRouter | Anthropic | Google |
|---|---|---|---|
| Model variety | ⭐⭐⭐⭐⭐ (100+) | ⭐⭐⭐ (Claude family) | ⭐⭐⭐ (Gemini family) |
| Cost efficiency | ⭐⭐⭐⭐ (competitive) | ⭐⭐⭐ (premium) | ⭐⭐⭐⭐⭐ (free tier!) |
| Response quality | ⭐⭐⭐⭐ (varies by model) | ⭐⭐⭐⭐⭐ (top-tier) | ⭐⭐⭐⭐ (excellent) |
| Rate limits | ⭐⭐⭐⭐ (generous) | ⭐⭐⭐ (moderate) | ⭐⭐⭐⭐⭐ (very high) |
Recommendation:
- Development: Start with Google (free tier)
- Production: OpenRouter (model flexibility) or Anthropic (quality)
- Cost optimization: Use Samast to switch based on workload!
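One way to act on these recommendations is to derive the provider and model from the runtime environment, so that switching is pure configuration. The sketch below is illustrative; the environment names and model choices are assumptions to adapt to your own accounts.

```typescript
// Sketch: choose provider + model from the runtime environment.
// The mapping is illustrative; substitute your own models and keys.
type ProviderChoice = { provider: string; model: string };

function pickProvider(env: string | undefined): ProviderChoice {
  if (env === 'production') {
    // Model flexibility in production via OpenRouter
    return { provider: 'openrouter', model: 'anthropic/claude-sonnet-4' };
  }
  // Free tier for development and everything else
  return { provider: 'google', model: 'gemini-2.0-flash-exp' };
}
```

Then initialization becomes `const { provider, model } = pickProvider(process.env.NODE_ENV);` followed by a single `ai.use(provider, ...)` call, with no other code changes between environments.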
import { Samast } from '@darshjme/samast';
async function generateWithFallback(prompt: string) {
const ai = new Samast();
const providers = [
{ name: 'anthropic', config: { mode: 'api_key', apiKey: ANTHROPIC_KEY }, model: 'claude-sonnet-4' },
{ name: 'openrouter', config: { mode: 'api_key', apiKey: OPENROUTER_KEY }, model: 'anthropic/claude-sonnet-4' },
{ name: 'google', config: { mode: 'api_key', apiKey: GOOGLE_KEY }, model: 'gemini-2.0-flash-exp' }
];
for (const { name, config, model } of providers) {
try {
await ai.use(name, config);
const response = await ai.complete({
messages: [{ role: 'user', content: prompt }],
model
});
return response.content;
} catch (error) {
console.error(`${name} failed:`, error.message);
continue;
}
}
throw new Error('All providers failed');
}

// Route based on task complexity
async function smartRoute(task: string, complexity: 'simple' | 'complex') {
const ai = new Samast();
if (complexity === 'simple') {
// Use cheap, fast model
await ai.use('google', { mode: 'api_key', apiKey: GOOGLE_KEY });
return ai.complete({
messages: [{ role: 'user', content: task }],
model: 'gemini-2.0-flash-exp' // Fast + free tier
});
} else {
// Use premium model for hard tasks
await ai.use('anthropic', { mode: 'api_key', apiKey: ANTHROPIC_KEY });
return ai.complete({
messages: [{ role: 'user', content: task }],
model: 'claude-opus-4' // Best quality
});
}
}

import { Samast, type Message } from '@darshjme/samast';
class ChatSession {
private ai = new Samast();
private history: Message[] = [];
async initialize(provider: string, apiKey: string) {
await this.ai.use(provider, { mode: 'api_key', apiKey });
}
async sendMessage(content: string, model?: string): Promise<string> {
this.history.push({ role: 'user', content });
const response = await this.ai.complete({
messages: this.history,
model: model || 'anthropic/claude-sonnet-4'
});
this.history.push({ role: 'assistant', content: response.content });
return response.content;
}
async switchProvider(provider: string, apiKey: string, model: string) {
await this.ai.use(provider, { mode: 'api_key', apiKey });
console.log(`Switched to ${provider} (${model})`);
}
clearHistory() {
this.history = [];
}
}
// Usage
const chat = new ChatSession();
await chat.initialize('openrouter', process.env.OPENROUTER_KEY);
console.log(await chat.sendMessage('What is quantum computing?'));
console.log(await chat.sendMessage('Explain it like I\'m 5'));
// Switch mid-conversation
await chat.switchProvider('google', process.env.GOOGLE_KEY, 'gemini-2.0-flash-exp');
console.log(await chat.sendMessage('Give me an analogy'));

import type { Provider, ProviderConfig, CompletionRequest, CompletionResponse } from '@darshjme/samast';
class LocalLLMProvider implements Provider {
name = 'local-llm';
supportedAuth = ['api_key'];
private baseURL = 'http://localhost:11434'; // Ollama
async initialize(config: ProviderConfig): Promise<void> {
if (config.baseURL) this.baseURL = config.baseURL;
}
async complete(request: CompletionRequest): Promise<CompletionResponse> {
const response = await fetch(`${this.baseURL}/api/chat`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
model: request.model || 'llama2',
messages: request.messages,
stream: false
})
});
const data = await response.json();
return {
content: data.message.content,
finishReason: 'stop',
model: request.model
};
}
async listModels(): Promise<string[]> {
const response = await fetch(`${this.baseURL}/api/tags`);
const data = await response.json();
return data.models.map((m: { name: string }) => m.name);
}
}
// Register and use
import { registry, Samast } from '@darshjme/samast';
registry.registerProvider(new LocalLLMProvider());
const ai = new Samast();
await ai.use('local-llm', { mode: 'api_key', baseURL: 'http://localhost:11434' });

Before (LangChain):
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";
const chat = new ChatOpenAI({ openAIApiKey: "..." });
const response = await chat.call([new HumanMessage("Hello!")]);

After (Samast):
import { Samast } from '@darshjme/samast';
const ai = new Samast();
await ai.use('openrouter', { mode: 'api_key', apiKey: '...' });
const response = await ai.complete({
messages: [{ role: 'user', content: 'Hello!' }],
model: 'openai/gpt-4'
});

Benefits:
- ✅ Simpler API (no schema classes)
- ✅ TypeScript-native (better IntelliSense)
- ✅ Multi-provider out of the box
- ✅ Smaller bundle size (~5KB vs ~500KB)
Before:
import OpenAI from 'openai';
const openai = new OpenAI({ apiKey: '...' });
const response = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }]
});

After:
import { Samast } from '@darshjme/samast';
const ai = new Samast();
await ai.use('openrouter', { mode: 'api_key', apiKey: '...' });
const response = await ai.complete({
messages: [{ role: 'user', content: 'Hello!' }],
model: 'openai/gpt-4'
});

Benefits:
- ✅ Same simplicity
- ✅ Switch to Claude/Gemini anytime
- ✅ No code changes when migrating providers
- ✅ Unified token usage tracking
Before:
import Anthropic from '@anthropic-ai/sdk';
const client = new Anthropic({ apiKey: '...' });
const response = await client.messages.create({
model: 'claude-3-opus-20240229',
max_tokens: 1024,
messages: [{ role: 'user', content: 'Hello!' }]
});

After:
import { Samast } from '@darshjme/samast';
const ai = new Samast();
await ai.use('anthropic', { mode: 'api_key', apiKey: '...' });
const response = await ai.complete({
messages: [{ role: 'user', content: 'Hello!' }],
model: 'claude-opus-4',
maxTokens: 1024
});

Benefits:
- ✅ Unified interface across providers
- ✅ Easier to A/B test models
- ✅ Built-in OAuth support (coming)
We welcome contributions! Here's how to get started:
# Clone the repo
git clone https://github.com/darshjme-codes/samast.git
cd samast
# Install dependencies
npm install
# Build
npm run build
# Watch mode for development
npm run dev

- Create src/providers/yourprovider.ts:
import type { Provider, ProviderConfig, CompletionRequest, CompletionResponse } from '../core/types.js';
export class YourProvider implements Provider {
name = 'yourprovider';
supportedAuth = ['api_key'];
async initialize(config: ProviderConfig): Promise<void> {
// Setup your client
}
async complete(request: CompletionRequest): Promise<CompletionResponse> {
// Implement completion logic
}
async listModels(): Promise<string[]> {
// Return available models
}
}

- Register in src/core/registry.ts:
import { YourProvider } from '../providers/yourprovider.js';
// In constructor
this.registerProvider(new YourProvider());

- Export from src/index.ts:
export { YourProvider } from './providers/yourprovider.js';

- Add tests and documentation!
- ✅ Keep PRs focused (one feature/fix per PR)
- ✅ Update README if adding features
- ✅ Follow existing code style (TypeScript strict mode)
- ✅ Add JSDoc comments for public APIs
- ✅ Test with multiple providers
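On the JSDoc point, the expected shape on a public API looks roughly like this; the helper itself is made up purely for illustration, not part of the Samast API.

```typescript
/**
 * Format a token-usage summary for logging.
 *
 * @param usage - Token counts as returned in `CompletionResponse.usage`.
 * @returns A one-line, human-readable summary.
 */
function formatUsage(usage: { promptTokens: number; completionTokens: number; totalTokens: number }): string {
  return `${usage.totalTokens} tokens (${usage.promptTokens} prompt + ${usage.completionTokens} completion)`;
}
```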
MIT © Darshankumar Joshi
- Repository: https://github.com/darshjme-codes/samast
- npm Package: https://www.npmjs.com/package/@darshjme/samast
- Issues: https://github.com/darshjme-codes/samast/issues
- Used By: Brahmand CLI
समस्त – Everything, unified.
If Samast helps your project, please ⭐ star the repo!