Extensible LLM framework for TypeScript - Clean provider interface for building AI applications. Powers Brahmand CLI. Developer-friendly API for multi-model AI integration.


๐Ÿ•‰๏ธ Samast Framework

Universal LLM Provider Abstraction for TypeScript

समस्त (samast) = Sanskrit for "all", "everything", "universal"

npm version · TypeScript · License: MIT · PRs Welcome

Quick Start • Documentation • Examples • Migration


🎯 Why Samast?

Stop rewriting LLM integration code. Samast provides a single TypeScript interface for 100+ AI models across OpenRouter, Anthropic, Google, OpenAI, and more. Switch providers in 2 lines without touching your application logic.

// Production-ready in 30 seconds
import { Samast } from '@darshjme/samast';

const ai = new Samast();
await ai.use('openrouter', { mode: 'api_key', apiKey: process.env.OPENROUTER_KEY });

const response = await ai.complete({
  messages: [{ role: 'user', content: 'Explain quantum entanglement' }],
  model: 'anthropic/claude-sonnet-4'
});

Built for production: Type-safe, OAuth-ready, zero vendor lock-in.


✨ Features

  • ๐Ÿ”Œ Provider Abstraction โ€” Unified interface for OpenRouter, Anthropic, Google, OpenAI
  • ๐Ÿ” OAuth + API Keys โ€” Device code flow for Claude, API key fallback for all
  • ๐Ÿ“˜ 100% TypeScript โ€” Full type safety with IntelliSense support
  • ๐ŸŽฏ Zero Lock-in โ€” Switch providers without code changes
  • ๐Ÿชถ Lightweight โ€” ~5KB minified, tree-shakeable ESM
  • ๐Ÿ”ง Extensible โ€” Add custom providers via clean interface
  • โšก Production-Ready โ€” Used by Brahmand CLI

🚀 Quick Start

Installation

npm install @darshjme/samast
# or
yarn add @darshjme/samast
# or
pnpm add @darshjme/samast

Basic Usage

import { Samast } from '@darshjme/samast';

const ai = new Samast();

// Initialize with any provider
await ai.use('openrouter', {
  mode: 'api_key',
  apiKey: 'sk-or-v1-...'
});

// Generate completions
const response = await ai.complete({
  messages: [
    { role: 'system', content: 'You are a helpful assistant' },
    { role: 'user', content: 'What is TypeScript?' }
  ],
  model: 'anthropic/claude-sonnet-4',
  temperature: 0.7,
  maxTokens: 1000
});

console.log(response.content);
// Usage stats included
console.log(response.usage); // { promptTokens, completionTokens, totalTokens }

Switch Providers Instantly

// Start with OpenRouter
await ai.use('openrouter', { mode: 'api_key', apiKey: OR_KEY });
const resp1 = await ai.complete({ messages, model: 'anthropic/claude-opus-4' });

// Switch to Google Gemini
await ai.use('google', { mode: 'api_key', apiKey: GOOGLE_KEY });
const resp2 = await ai.complete({ messages, model: 'gemini-2.0-flash-exp' });

// Switch to Anthropic direct (with OAuth!)
await ai.use('anthropic', { mode: 'oauth', accessToken: CLAUDE_TOKEN });
const resp3 = await ai.complete({ messages, model: 'claude-sonnet-4' });

๐Ÿ—๏ธ Architecture

Samast uses a provider registry pattern with a clean separation between the client interface and provider implementations:

graph TB
    A[Your App] -->|uses| B[Samast Client]
    B -->|delegates to| C[Provider Registry]
    C -->|manages| D[OpenRouter Provider]
    C -->|manages| E[Anthropic Provider]
    C -->|manages| F[Google Provider]
    C -->|manages| G[Custom Providers...]
    
    D -->|API calls| H[OpenRouter API<br/>100+ models]
    E -->|API calls| I[Anthropic API<br/>Claude models]
    F -->|API calls| J[Google AI API<br/>Gemini models]
    
    style B fill:#4A90E2,color:#fff
    style C fill:#50C878,color:#fff
    style H fill:#E8E8E8
    style I fill:#E8E8E8
    style J fill:#E8E8E8

Core Concepts

| Component | Purpose |
| --- | --- |
| Samast Client | Main entry point — simple, consistent API for your app |
| Provider Registry | Manages provider lifecycle, handles switching |
| Provider Interface | Standardized contract all providers implement |
| Built-in Providers | OpenRouter, Anthropic, Google (more coming) |

Key Design Principles:

  1. Abstraction over integration — One interface, many backends
  2. Auth flexibility — OAuth when possible, API keys as fallback
  3. Type safety first — Compile-time guarantees prevent runtime errors
  4. Zero configuration — Sensible defaults, explicit overrides

See ARCHITECTURE.md for implementation details.
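The registry pattern described above can be sketched in a few lines. This is an illustrative, self-contained sketch of the concept, not Samast's actual internals; the `MiniProvider` interface is deliberately simplified to a single synchronous method.

```typescript
// Minimal sketch of the provider-registry pattern (illustrative, not Samast source).
interface MiniProvider {
  name: string;
  complete(prompt: string): string;
}

class MiniRegistry {
  private providers = new Map<string, MiniProvider>();
  private active?: string;

  registerProvider(p: MiniProvider): void {
    this.providers.set(p.name, p);
  }

  use(name: string): void {
    if (!this.providers.has(name)) throw new Error(`Unknown provider: ${name}`);
    // Switching is just repointing; call sites never change.
    this.active = name;
  }

  getActiveProvider(): string | undefined {
    return this.active;
  }

  complete(prompt: string): string {
    if (!this.active) throw new Error('No active provider');
    return this.providers.get(this.active)!.complete(prompt);
  }
}

const miniRegistry = new MiniRegistry();
miniRegistry.registerProvider({ name: 'alpha', complete: (p) => `alpha:${p}` });
miniRegistry.registerProvider({ name: 'beta', complete: (p) => `beta:${p}` });

miniRegistry.use('alpha');
const first = miniRegistry.complete('hi');   // 'alpha:hi'
miniRegistry.use('beta');                    // switch without touching call sites
const second = miniRegistry.complete('hi');  // 'beta:hi'
```

The key property: callers only ever talk to the registry, so swapping the active backend is invisible to application code.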


📚 API Reference

Core Client

new Samast()

Creates a new Samast instance. No configuration required.

const ai = new Samast();

ai.use(provider, config)

Initialize and activate a provider. Returns a Promise that resolves when the provider is ready.

Signature:

async use(providerName: string, config: ProviderConfig): Promise<void>

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| providerName | string | Provider identifier ('openrouter', 'anthropic', 'google') |
| config | ProviderConfig | Authentication configuration (see below) |

Config Shapes:

// API Key mode (most providers)
{
  mode: 'api_key',
  apiKey: string,
  baseURL?: string  // Optional: override API endpoint
}

// OAuth mode (Anthropic, OpenAI)
{
  mode: 'oauth',
  accessToken: string,
  refreshToken?: string,
  clientId?: string,
  clientSecret?: string
}

Example:

// OpenRouter with custom base URL
await ai.use('openrouter', {
  mode: 'api_key',
  apiKey: process.env.OPENROUTER_KEY,
  baseURL: 'https://openrouter.ai/api/v1'
});

// Anthropic with OAuth tokens
await ai.use('anthropic', {
  mode: 'oauth',
  accessToken: tokens.access_token,
  refreshToken: tokens.refresh_token
});
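The two config shapes form a natural discriminated union on `mode`, which is what gives you compile-time safety when handling them. A sketch of that idea (the type names here are illustrative, not necessarily Samast's exact exports):

```typescript
// Illustrative sketch: the documented config shapes as a discriminated union.
type ApiKeyConfig = { mode: 'api_key'; apiKey: string; baseURL?: string };
type OAuthConfig = {
  mode: 'oauth';
  accessToken: string;
  refreshToken?: string;
  clientId?: string;
  clientSecret?: string;
};
type Config = ApiKeyConfig | OAuthConfig;

// TypeScript narrows on the `mode` discriminant, so each branch only
// sees the fields that exist for that shape.
function describeAuth(config: Config): string {
  switch (config.mode) {
    case 'api_key':
      return `api key ending in ${config.apiKey.slice(-4)}`;
    case 'oauth':
      return config.refreshToken ? 'oauth (refreshable)' : 'oauth (access token only)';
  }
}

const keyDesc = describeAuth({ mode: 'api_key', apiKey: 'sk-or-v1-abcd1234' });
const oauthDesc = describeAuth({ mode: 'oauth', accessToken: 't', refreshToken: 'r' });
```

Passing an `apiKey` alongside `mode: 'oauth'` (or vice versa) then fails at compile time rather than at runtime.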

ai.complete(request)

Send a chat completion request to the active provider.

Signature:

async complete(request: CompletionRequest): Promise<CompletionResponse>

Request Shape:

interface CompletionRequest {
  messages: Message[];        // Conversation history
  model?: string;             // Model ID (provider-specific)
  temperature?: number;       // 0-2, default 0.7
  maxTokens?: number;         // Max response length
  stream?: boolean;           // Streaming support (future)
}

interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

Response Shape:

interface CompletionResponse {
  content: string;           // Generated text
  finishReason: string;      // 'stop', 'length', 'content_filter'
  usage?: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
  model?: string;            // Actual model used
}

Example:

const response = await ai.complete({
  messages: [
    { role: 'system', content: 'You are a technical writer' },
    { role: 'user', content: 'Explain REST APIs in 3 sentences' }
  ],
  model: 'anthropic/claude-sonnet-4',
  temperature: 0.3,  // More focused
  maxTokens: 200
});

console.log(response.content);
console.log(`Used ${response.usage?.totalTokens ?? 'unknown'} tokens`);  // usage is optional

ai.listModels()

List all models available from the currently active provider.

Signature:

async listModels(): Promise<string[]>

Example:

await ai.use('openrouter', { ... });
const models = await ai.listModels();
console.log(models);
// ['anthropic/claude-opus-4', 'openai/gpt-4-turbo', 'google/gemini-2.0-flash-exp', ...]

ai.getProviders()

List all registered provider names (both built-in and custom).

Signature:

getProviders(): string[]

Example:

console.log(ai.getProviders());
// ['openrouter', 'anthropic', 'google', 'my-custom-provider']

ai.getActiveProvider()

Get the name of the currently active provider.

Signature:

getActiveProvider(): string | undefined

Example:

await ai.use('anthropic', { ... });
console.log(ai.getActiveProvider()); // 'anthropic'

OAuth Support

ai.startOAuth(provider)

Initiate OAuth device code flow for providers that support it (Anthropic, OpenAI).

Signature:

async startOAuth(providerName: string): Promise<{ authUrl: string; state: string }>

Example:

const { authUrl, state } = await ai.startOAuth('anthropic');

console.log(`Visit: ${authUrl}`);
console.log(`Device code: ${state}`);

// User approves in browser, then you poll for token...
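The "poll for token" step is left to the caller. Below is a generic sketch of such a polling loop; nothing in it is Samast API, and `checkToken` stands in for whatever token-exchange request your provider requires.

```typescript
// Illustrative device-flow polling loop. `checkToken` is an assumption standing
// in for the real token-exchange request; it resolves a token once the user
// has approved in the browser.
type PollResult = { token?: string; pending: boolean };

async function pollForToken(
  checkToken: () => Promise<PollResult>,
  opts: { maxAttempts: number; intervalMs: number; sleep?: (ms: number) => Promise<void> }
): Promise<string> {
  const sleep = opts.sleep ?? ((ms: number) => new Promise<void>((r) => setTimeout(r, ms)));
  for (let attempt = 1; attempt <= opts.maxAttempts; attempt++) {
    const result = await checkToken();
    if (!result.pending && result.token) return result.token;
    await sleep(opts.intervalMs); // respect the provider's polling interval
  }
  throw new Error('OAuth device flow timed out');
}

// Simulated exchange: stays pending for two polls, then succeeds.
let polls = 0;
const token = await pollForToken(
  async () => (++polls < 3 ? { pending: true } : { pending: false, token: 'tok_123' }),
  { maxAttempts: 10, intervalMs: 1000, sleep: async () => {} }
);
```

The injectable `sleep` keeps the loop testable; in production you would omit it and let the default real delay apply.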

ai.handleOAuthCallback(provider, code, state)

Complete OAuth flow after user authorization.

Signature:

async handleOAuthCallback(
  providerName: string,
  code: string,
  state: string
): Promise<ProviderConfig>

Example:

// After user approves
const config = await ai.handleOAuthCallback('anthropic', authCode, deviceCode);

// Use the new config
await ai.use('anthropic', config);

Provider Registry

Advanced users can access the registry directly for custom workflows.

import { registry } from '@darshjme/samast';

// Register custom provider
registry.registerProvider(new MyCustomProvider());

// Get all models across all providers
const allModels = await registry.getAllModels();
console.log(allModels);
// Map { 'openrouter' => [...], 'anthropic' => [...], 'google' => [...] }

๐ŸŒ Supported Providers

| Provider | Auth Modes | Models | OAuth | Status |
| --- | --- | --- | --- | --- |
| OpenRouter | API Key | 100+ (Claude, GPT, Gemini, Llama, Mistral, etc.) | ❌ | ✅ Production |
| Anthropic | API Key, OAuth | Claude Opus 4, Sonnet 4, Haiku 4, 3.5 Sonnet | ✅ | ✅ Production |
| Google | API Key | Gemini 3 Pro, 2.0 Flash, 1.5 Pro/Flash | ❌ | ✅ Production |
| OpenAI | API Key, OAuth | GPT-4, GPT-4 Turbo, GPT-3.5 | ✅ | 🚧 Coming Soon |
| NVIDIA NIM | API Key | Nemotron, Llama models | ❌ | 🚧 Coming Soon |
| GitHub Copilot | OAuth | Codex, GPT-4 | ✅ | 🚧 Coming Soon |

Provider Comparison

| Feature | OpenRouter | Anthropic | Google |
| --- | --- | --- | --- |
| Model variety | ⭐⭐⭐⭐⭐ (100+) | ⭐⭐⭐ (Claude family) | ⭐⭐⭐ (Gemini family) |
| Cost efficiency | ⭐⭐⭐⭐ (competitive) | ⭐⭐⭐ (premium) | ⭐⭐⭐⭐⭐ (free tier!) |
| Response quality | ⭐⭐⭐⭐ (varies by model) | ⭐⭐⭐⭐⭐ (top-tier) | ⭐⭐⭐⭐ (excellent) |
| Rate limits | ⭐⭐⭐⭐ (generous) | ⭐⭐⭐ (moderate) | ⭐⭐⭐⭐⭐ (very high) |

Recommendation:

  • Development: Start with Google (free tier)
  • Production: OpenRouter (model flexibility) or Anthropic (quality)
  • Cost optimization: Use Samast to switch based on workload!

💡 Examples

Multi-Provider Fallback

import { Samast } from '@darshjme/samast';

async function generateWithFallback(prompt: string) {
  const ai = new Samast();
  const providers = [
    { name: 'anthropic', config: { mode: 'api_key', apiKey: ANTHROPIC_KEY }, model: 'claude-sonnet-4' },
    { name: 'openrouter', config: { mode: 'api_key', apiKey: OPENROUTER_KEY }, model: 'anthropic/claude-sonnet-4' },
    { name: 'google', config: { mode: 'api_key', apiKey: GOOGLE_KEY }, model: 'gemini-2.0-flash-exp' }
  ];

  for (const { name, config, model } of providers) {
    try {
      await ai.use(name, config);
      const response = await ai.complete({
        messages: [{ role: 'user', content: prompt }],
        model
      });
      return response.content;
    } catch (error) {
      console.error(`${name} failed:`, error instanceof Error ? error.message : error);
      continue;
    }
  }

  throw new Error('All providers failed');
}
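Transient failures such as rate limits and timeouts are often worth retrying on the same provider before falling through to the next one. Here is a provider-agnostic retry helper as a sketch; it is not part of Samast, and the backoff numbers are arbitrary.

```typescript
// Illustrative retry wrapper with exponential backoff. Works with any async
// call, e.g. () => ai.complete({ ... }).
async function withRetry<T>(
  fn: () => Promise<T>,
  opts: { attempts: number; baseDelayMs: number; sleep?: (ms: number) => Promise<void> }
): Promise<T> {
  const sleep = opts.sleep ?? ((ms: number) => new Promise<void>((r) => setTimeout(r, ms)));
  let lastError: unknown;
  for (let i = 0; i < opts.attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < opts.attempts - 1) {
        await sleep(opts.baseDelayMs * 2 ** i); // 1x, 2x, 4x, ... the base delay
      }
    }
  }
  throw lastError;
}

// Simulated flaky call: fails twice, then succeeds.
let tries = 0;
const result = await withRetry(
  async () => {
    tries++;
    if (tries < 3) throw new Error('rate limited');
    return 'ok';
  },
  { attempts: 5, baseDelayMs: 500, sleep: async () => {} }
);
```

Combining this with the fallback loop above gives you "retry the current provider a few times, then move on".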

Cost-Aware Routing

// Route based on task complexity
async function smartRoute(task: string, complexity: 'simple' | 'complex') {
  const ai = new Samast();

  if (complexity === 'simple') {
    // Use cheap, fast model
    await ai.use('google', { mode: 'api_key', apiKey: GOOGLE_KEY });
    return ai.complete({
      messages: [{ role: 'user', content: task }],
      model: 'gemini-2.0-flash-exp'  // Fast + free tier
    });
  } else {
    // Use premium model for hard tasks
    await ai.use('anthropic', { mode: 'api_key', apiKey: ANTHROPIC_KEY });
    return ai.complete({
      messages: [{ role: 'user', content: task }],
      model: 'claude-opus-4'  // Best quality
    });
  }
}

Building a Chat Application

import { Samast } from '@darshjme/samast';

// Message shape from the API reference above
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

class ChatSession {
  private ai = new Samast();
  private history: Message[] = [];

  async initialize(provider: string, apiKey: string) {
    await this.ai.use(provider, { mode: 'api_key', apiKey });
  }

  async sendMessage(content: string, model?: string): Promise<string> {
    this.history.push({ role: 'user', content });

    const response = await this.ai.complete({
      messages: this.history,
      model: model || 'anthropic/claude-sonnet-4'
    });

    this.history.push({ role: 'assistant', content: response.content });
    
    return response.content;
  }

  async switchProvider(provider: string, apiKey: string, model: string) {
    await this.ai.use(provider, { mode: 'api_key', apiKey });
    console.log(`Switched to ${provider} (${model})`);
  }

  clearHistory() {
    this.history = [];
  }
}

// Usage
const chat = new ChatSession();
await chat.initialize('openrouter', process.env.OPENROUTER_KEY);

console.log(await chat.sendMessage('What is quantum computing?'));
console.log(await chat.sendMessage('Explain it like I\'m 5'));

// Switch mid-conversation
await chat.switchProvider('google', process.env.GOOGLE_KEY, 'gemini-2.0-flash-exp');
console.log(await chat.sendMessage('Give me an analogy'));
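Long-running sessions eventually exceed a model's context window, so the history needs trimming before each request. A simple sketch that keeps the system prompt and drops the oldest turns first; the chars-per-token ratio is a rough heuristic for illustration, not a real tokenizer.

```typescript
// Illustrative history trimming: keep system messages, drop oldest turns until
// the estimated size fits the budget. ~4 chars per token is a crude estimate.
type Msg = { role: 'system' | 'user' | 'assistant'; content: string };

function trimHistory(history: Msg[], maxTokens: number): Msg[] {
  const estimate = (m: Msg) => Math.ceil(m.content.length / 4);
  const system = history.filter((m) => m.role === 'system');
  const turns = history.filter((m) => m.role !== 'system');

  let budget = maxTokens - system.reduce((n, m) => n + estimate(m), 0);
  const kept: Msg[] = [];
  // Walk newest-to-oldest so recent context survives.
  for (let i = turns.length - 1; i >= 0; i--) {
    budget -= estimate(turns[i]);
    if (budget < 0) break;
    kept.unshift(turns[i]);
  }
  return [...system, ...kept];
}

const trimmed = trimHistory(
  [
    { role: 'system', content: 'Be terse.' },        // ~3 tokens
    { role: 'user', content: 'x'.repeat(400) },      // ~100 tokens, oldest turn
    { role: 'assistant', content: 'y'.repeat(40) },  // ~10 tokens
    { role: 'user', content: 'z'.repeat(40) },       // ~10 tokens
  ],
  30 // budget too small for the oldest 100-token turn
);
```

In a `ChatSession` like the one above, you would run the history through a function like this just before calling `complete`.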

Custom Provider Implementation

import type { Provider, ProviderConfig, CompletionRequest, CompletionResponse } from '@darshjme/samast';

class LocalLLMProvider implements Provider {
  name = 'local-llm';
  supportedAuth = ['api_key'];
  
  private baseURL = 'http://localhost:11434';  // Ollama

  async initialize(config: ProviderConfig): Promise<void> {
    if (config.baseURL) this.baseURL = config.baseURL;
  }

  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    const response = await fetch(`${this.baseURL}/api/chat`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: request.model || 'llama2',
        messages: request.messages,
        stream: false
      })
    });

    const data = await response.json();
    
    return {
      content: data.message.content,
      finishReason: 'stop',
      model: request.model
    };
  }

  async listModels(): Promise<string[]> {
    const response = await fetch(`${this.baseURL}/api/tags`);
    const data = await response.json();
    return data.models.map((m: { name: string }) => m.name);
  }
}

// Register and use
import { registry, Samast } from '@darshjme/samast';

registry.registerProvider(new LocalLLMProvider());

const ai = new Samast();
await ai.use('local-llm', { mode: 'api_key', baseURL: 'http://localhost:11434' });

🔄 Migration Guide

From LangChain

Before (LangChain):

import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

const chat = new ChatOpenAI({ openAIApiKey: "..." });
const response = await chat.call([new HumanMessage("Hello!")]);

After (Samast):

import { Samast } from '@darshjme/samast';

const ai = new Samast();
await ai.use('openrouter', { mode: 'api_key', apiKey: '...' });
const response = await ai.complete({
  messages: [{ role: 'user', content: 'Hello!' }],
  model: 'openai/gpt-4'
});

Benefits:

  • โœ… Simpler API (no schema classes)
  • โœ… TypeScript-native (better IntelliSense)
  • โœ… Multi-provider out of the box
  • โœ… Smaller bundle size (~5KB vs ~500KB)

From Direct OpenAI SDK

Before:

import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: '...' });
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});

After:

import { Samast } from '@darshjme/samast';

const ai = new Samast();
await ai.use('openrouter', { mode: 'api_key', apiKey: '...' });
const response = await ai.complete({
  messages: [{ role: 'user', content: 'Hello!' }],
  model: 'openai/gpt-4'
});

Benefits:

  • โœ… Same simplicity
  • โœ… Switch to Claude/Gemini anytime
  • โœ… No code changes when migrating providers
  • โœ… Unified token usage tracking

From Anthropic SDK

Before:

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({ apiKey: '...' });
const response = await client.messages.create({
  model: 'claude-3-opus-20240229',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }]
});

After:

import { Samast } from '@darshjme/samast';

const ai = new Samast();
await ai.use('anthropic', { mode: 'api_key', apiKey: '...' });
const response = await ai.complete({
  messages: [{ role: 'user', content: 'Hello!' }],
  model: 'claude-opus-4',
  maxTokens: 1024
});

Benefits:

  • โœ… Unified interface across providers
  • โœ… Easier to A/B test models
  • โœ… Built-in OAuth support (coming)

๐Ÿ—๏ธ Contributing

We welcome contributions! Here's how to get started:

Development Setup

# Clone the repo
git clone https://github.com/darshjme-codes/samast.git
cd samast

# Install dependencies
npm install

# Build
npm run build

# Watch mode for development
npm run dev

Adding a New Provider

  1. Create src/providers/yourprovider.ts:
import type { Provider, ProviderConfig, CompletionRequest, CompletionResponse } from '../core/types.js';

export class YourProvider implements Provider {
  name = 'yourprovider';
  supportedAuth = ['api_key'];

  async initialize(config: ProviderConfig): Promise<void> {
    // Setup your client
  }

  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    // Implement completion logic
  }

  async listModels(): Promise<string[]> {
    // Return available models
  }
}
  2. Register in src/core/registry.ts:
import { YourProvider } from '../providers/yourprovider.js';

// In constructor
this.registerProvider(new YourProvider());
  3. Export from src/index.ts:
export { YourProvider } from './providers/yourprovider.js';
  4. Add tests and documentation!
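Put together, a toy provider that satisfies the documented contract looks like this. It is a self-contained sketch: the interfaces are redeclared locally to match the shapes documented in the API reference, and the "model" simply echoes the last message.

```typescript
// Self-contained toy provider matching the documented contract.
// Types are redeclared locally for illustration only.
interface PMessage { role: 'system' | 'user' | 'assistant'; content: string }
interface PRequest { messages: PMessage[]; model?: string }
interface PResponse { content: string; finishReason: string; model?: string }
interface PConfig { mode: string; apiKey?: string }

class EchoProvider {
  name = 'echo';
  supportedAuth = ['api_key'];
  private ready = false;

  async initialize(config: PConfig): Promise<void> {
    // No real auth here; just record that we were configured.
    this.ready = config.mode === 'api_key';
  }

  async complete(request: PRequest): Promise<PResponse> {
    if (!this.ready) throw new Error('Provider not initialized');
    const last = request.messages[request.messages.length - 1];
    return {
      content: `echo: ${last.content}`,
      finishReason: 'stop',
      model: request.model ?? 'echo-1'
    };
  }

  async listModels(): Promise<string[]> {
    return ['echo-1'];
  }
}

const echoProvider = new EchoProvider();
await echoProvider.initialize({ mode: 'api_key', apiKey: 'test' });
const echoRes = await echoProvider.complete({ messages: [{ role: 'user', content: 'hi' }] });
```

A provider like this is also handy in tests: register it, point your app at it, and exercise your logic without any network calls.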

Pull Request Guidelines

  • โœ… Keep PRs focused (one feature/fix per PR)
  • โœ… Update README if adding features
  • โœ… Follow existing code style (TypeScript strict mode)
  • โœ… Add JSDoc comments for public APIs
  • โœ… Test with multiple providers

📄 License

MIT © Darshankumar Joshi


๐Ÿ™ Acknowledgments

Built with:


๐Ÿ”— Links


समस्त — Everything, unified. 🕉️

If Samast helps your project, please ⭐ star the repo!
