
Epic: Discord REST caching proxy #1

@bakeb7j0


Goal

Eliminate Discord API rate limiting caused by multiple agents independently polling the same channels. Replace N direct pollers with a single caching proxy that mirrors Discord's REST API.

Context

Currently, each Claude Code agent runs its own discord-watcher MCP server, each polling Discord REST every 15s independently. With 6+ agents on one machine (growing), this triggers Discord's rate limits. The solution is a single service that polls Discord once and serves cached responses to all consumers via the same API shape.

Architecture:

Discord REST API  <--  scream-hole (polls, caches)  <--  N agents (poll scream-hole freely)
                    1 req/15s per channel              unlimited, local network

Agents swap one base URL and everything else (polling loop, filtering, identity resolution) stays unchanged. If scream-hole is unavailable, agents fall back to direct Discord access.
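The base-URL swap with fallback could look something like the sketch below. This is illustrative only: the `SCREAM_HOLE_URL` env var name comes from the DoD, but the `fetchMessages` helper and the fetch-then-fall-back shape are assumptions, not the watcher's actual code.

```typescript
// Hypothetical sketch of the agent-side base-URL swap with fallback.
const DISCORD_API = "https://discord.com/api/v10";

function resolveBaseUrl(): string {
  // Prefer the local caching proxy when configured; otherwise go direct.
  return process.env.SCREAM_HOLE_URL ?? DISCORD_API;
}

async function fetchMessages(channelId: string, token: string): Promise<unknown> {
  const base = resolveBaseUrl();
  try {
    const res = await fetch(`${base}/channels/${channelId}/messages`, {
      headers: { Authorization: `Bot ${token}` },
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.json();
  } catch (err) {
    // If scream-hole is unavailable, fall back to direct Discord access.
    if (base !== DISCORD_API) {
      const res = await fetch(`${DISCORD_API}/channels/${channelId}/messages`, {
        headers: { Authorization: `Bot ${token}` },
      });
      return await res.json();
    }
    throw err;
  }
}
```

Because the proxy mirrors Discord's API shape, nothing downstream of the URL choice (polling loop, filtering, identity resolution) needs to change.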

Future: The polling backend can be strangled out and replaced with a Discord Gateway (WebSocket) listener for real-time, zero-polling operation. The external API shape to agents remains the same.

Scope

In scope:

  • HTTP server that mirrors Discord REST endpoints agents actually use
  • Polling loop that fetches from Discord on a configurable interval
  • In-memory cache with per-endpoint TTL
  • Write pass-through (POST messages proxied to Discord)
  • Docker container for swarm deployment
  • Health check / metrics endpoint
  • Configuration via env vars (bot token, guild ID, poll interval)

Out of scope:

  • Discord Gateway (WebSocket) integration -- future phase
  • Full Discord API coverage -- only the 3 endpoints the watcher uses
  • Authentication/authorization between agents and scream-hole -- trusted network
  • Multi-guild support -- one guild per instance
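The in-memory cache with per-endpoint TTL could be as small as the sketch below. The key shape and the specific TTL values are illustrative assumptions, not the final design.

```typescript
// Minimal sketch of an in-memory cache with per-endpoint TTL.
type Entry = { value: unknown; expiresAt: number };

class TtlCache {
  private entries = new Map<string, Entry>();

  set(key: string, value: unknown, ttlMs: number): void {
    this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key: string): unknown | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // lazily evict expired entries on read
      return undefined;
    }
    return entry.value;
  }
}

// Per-endpoint TTLs (assumed values): channel lists change rarely,
// messages change often, so they get different lifetimes.
const TTL_MS: Record<string, number> = {
  channels: 60_000,
  messages: 15_000,
};
```

Lazy eviction keeps the cache dependency-free; with only a handful of channels per guild, a background sweep is unlikely to be needed.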

Definition of Done

  • scream-hole serves cached responses for GET /guilds/{id}/channels, GET /channels/{id}/messages, and proxies POST /channels/{id}/messages
  • Single poller fetches from Discord on configurable interval (default 15s)
  • Multiple consumers can poll scream-hole concurrently with no rate limit concern
  • discord-watcher connects to scream-hole when SCREAM_HOLE_URL is set, falls back to direct Discord when not
  • cc-workflow config (discord.json) supports scream_hole_url field
  • Docker image builds and deploys to swarm
  • All sub-issue AC checklists are satisfied
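Routing for the three endpoints in the DoD could be sketched as below. The paths are the ones the DoD names; the `Route` discriminated union and key format are illustrative assumptions.

```typescript
// Hedged sketch of request routing for the three mirrored endpoints.
type Route =
  | { kind: "cached"; key: string }   // serve from the local cache
  | { kind: "passthrough" }           // proxy writes straight to Discord
  | { kind: "unknown" };

function routeRequest(method: string, path: string): Route {
  const channelsList = path.match(/^\/guilds\/([^/]+)\/channels$/);
  if (method === "GET" && channelsList) {
    return { kind: "cached", key: `channels:${channelsList[1]}` };
  }
  const messages = path.match(/^\/channels\/([^/]+)\/messages$/);
  if (method === "GET" && messages) {
    return { kind: "cached", key: `messages:${messages[1]}` };
  }
  if (method === "POST" && messages) {
    // Writes are never cached; they pass through to Discord unmodified.
    return { kind: "passthrough" };
  }
  return { kind: "unknown" };
}
```

Keeping routing as a pure function makes it testable without standing up the HTTP server, and keeps the GET/cache vs. POST/pass-through split explicit.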

Sub-Issues

| Order | Issue | Title | Dependencies |
|-------|-------|-------|--------------|
| 1 | #2 | Project scaffolding -- Bun, TypeScript, CI, Dockerfile | None |
| 2 | #3 | Core proxy -- polling loop, cache, HTTP server | #2 |
| 3 | #4 | Write pass-through and send proxy | #3 |
| 4 | #5 | Docker image and swarm deployment | #3 |
| 5 | #6 | discord-watcher: configurable base URL with scream-hole support | #3 |
| 6 | #7 | cc-workflow: config and docs for scream-hole integration | #6 |

Wave Map

| Wave | Issues | Parallel? |
|------|--------|-----------|
| 1 | #2 | Single |
| 2 | #3 | Single |
| 3 | #4, #5 | Yes -- independent once core exists |
| 4 | #6, #7 | Yes -- cross-repo, independent |

Success Metrics

  • Discord API calls from this machine drop from ~24/min (6 agents x 4 polls/min each) to ~4/min (one scream-hole poll per 15s per active channel)
  • Zero 429 (rate limit) responses from Discord during normal operation
  • Agent message latency stays under 20s (poll interval + propagation)

Labels: type::epic (Epic — multi-issue initiative)