by tosin2013 • Data & Databases
An MCP server that reduces token usage by efficiently caching data between language model interactions.
Read and analyze large files repeatedly without re-sending full contents each time (file content caching).
Reuse expensive computation or analysis results across multiple interactions to save tokens and latency.
Fast access to frequently referenced project files or directory listings while avoiding repeated I/O and token costs.
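The file-content use case above can be sketched as a small cache keyed by path and modification time, so an unchanged file is read once and reused afterward. This is an illustrative pattern only, not the server's actual code; the `FileCache` class and its `read` method are hypothetical names chosen for the example.

```python
import os


class FileCache:
    """Illustrative file-content caching pattern (hypothetical, not the
    server's implementation): contents are keyed by path and modification
    time, so unchanged files are never re-read or re-sent."""

    def __init__(self):
        self._store = {}  # path -> (mtime, contents)

    def read(self, path):
        mtime = os.path.getmtime(path)
        hit = self._store.get(path)
        if hit is not None and hit[0] == mtime:
            return hit[1]  # cache hit: skip disk I/O entirely
        with open(path, "r", encoding="utf-8") as f:
            contents = f.read()
        self._store[path] = (mtime, contents)
        return contents
```

Keying on modification time rather than content hash keeps hit checks cheap; a stricter variant could hash the bytes at the cost of re-reading the file.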
A lightweight MCP-compatible cache server that stores file contents, computation results, and frequently accessed data so the same information need not be re-sent to language models. It works with any MCP client and any token-consuming language model, and is configurable via config.json or environment variables (TTL, memory limits, entry limits, intervals). The server manages cache eviction automatically (LRU and TTL), reports cache statistics, and speeds up repeated operations while lowering token consumption and response times.
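The combined LRU-and-TTL eviction described above can be sketched in a few lines: entries expire after a time-to-live, and the least recently used entry is dropped once a size limit is exceeded. This is a minimal sketch of the general technique, assuming nothing about the server's internals; the class name and parameters (`max_entries`, `ttl`) are hypothetical.

```python
import time
from collections import OrderedDict


class LRUTTLCache:
    """Minimal sketch of combined LRU + TTL eviction (hypothetical, not
    the server's actual code): entries expire after ttl seconds, and the
    least recently used entry is evicted when max_entries is exceeded."""

    def __init__(self, max_entries=128, ttl=60.0):
        self.max_entries = max_entries
        self.ttl = ttl
        self._data = OrderedDict()  # key -> (expires_at, value)

    def get(self, key, default=None):
        item = self._data.get(key)
        if item is None:
            return default
        expires_at, value = item
        if time.monotonic() > expires_at:  # TTL eviction on read
            del self._data[key]
            return default
        self._data.move_to_end(key)  # mark as most recently used
        return value

    def set(self, key, value):
        self._data[key] = (time.monotonic() + self.ttl, value)
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:  # LRU eviction on write
            self._data.popitem(last=False)  # drop least recently used
```

An `OrderedDict` keeps recency ordering for free; evicting lazily on read and write avoids the need for a background sweep, though a real server might also run a periodic cleanup.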
Scores are informational only and provided “as is” without warranty. AgentHotspot assumes no liability for actions taken based on these ratings.