
Data Flow, Caching & Performance

Acceso is optimized for read-heavy workloads. The core goal is low latency without melting upstream RPCs.

Data flow patterns

Most endpoints follow one of these patterns:

Cache-first read
Best for slow-changing metadata and hot resources (see the sketch after this list).

Upstream read with short-lived cache
Best for moderately volatile resources.

Derived view
Best for analytics (OHLCV, aggregates, protocol metrics).
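To make the cache-first pattern concrete, here is a minimal sketch in TypeScript using ioredis. The key scheme, TTL value, and fetchFromUpstream function are illustrative assumptions, not part of the Acceso API.

```ts
import Redis from "ioredis";

const redis = new Redis(); // assumes a local Redis instance

// Hypothetical upstream fetch; in practice this would be an RPC or DB read.
async function fetchFromUpstream(key: string): Promise<string> {
  return JSON.stringify({ key, fetchedAt: Date.now() });
}

// Cache-first read: serve from Redis when possible, fall back to the
// upstream only on a miss, then repopulate the cache with a TTL.
async function cacheFirstRead(key: string, ttlSeconds: number): Promise<string> {
  const cached = await redis.get(key);
  if (cached !== null) return cached; // cache hit: no upstream call

  const fresh = await fetchFromUpstream(key); // cache miss: go upstream
  await redis.set(key, fresh, "EX", ttlSeconds); // store with expiry
  return fresh;
}
```

The upstream-read-with-short-lived-cache pattern is the same flow with a small ttlSeconds, trading a little staleness for far fewer upstream calls.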
Caching strategy (Redis)

Redis is the shared caching layer for:
Hot reads that would be expensive on upstreams.
Rate limit counters and quota state.
Request coalescing and temporary computed results, when used (see the single-flight sketch after this list).
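The coalescing item can be illustrated with a small in-process single-flight helper: concurrent callers asking for the same key share one upstream request instead of each issuing their own. This is a sketch of the general technique; Acceso's actual coalescing mechanism is not specified here.

```ts
// In-flight loads keyed by cache key; concurrent callers share the promise.
const inflight = new Map<string, Promise<string>>();

async function coalesced(key: string, load: () => Promise<string>): Promise<string> {
  const pending = inflight.get(key);
  if (pending) return pending; // join the request already in flight

  const p = load().finally(() => inflight.delete(key)); // clean up when settled
  inflight.set(key, p);
  return p;
}
```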
Cache policy is determined by data volatility:

Fast-changing data: short TTL or no caching.
Slow-changing data: longer TTL and aggressive reuse.
Expensive derived data: cached by query signature when safe (see the sketch after this list).
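One way to encode the volatility rule is a TTL map plus a cache key derived from the query signature, so identical queries for an expensive derived view hit the same cached result. The volatility classes, TTL values, and hashing choice below are assumptions for illustration.

```ts
import { createHash } from "node:crypto";

// Illustrative TTLs per volatility class; real values are a tuning decision.
const TTL_SECONDS = {
  fast: 0,      // fast-changing: no caching
  moderate: 15, // moderately volatile: short-lived cache
  slow: 3600,   // slow-changing metadata: aggressive reuse
  derived: 300, // expensive derived views (e.g. OHLCV aggregates)
} as const;

// Derive a stable cache key from the endpoint and its (flat) query params;
// sorting the keys makes the serialization order deterministic.
function querySignature(endpoint: string, params: Record<string, unknown>): string {
  const canonical = JSON.stringify(params, Object.keys(params).sort());
  const digest = createHash("sha256").update(canonical).digest("hex");
  return `${endpoint}:${digest}`;
}
```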
Performance levers clients control
Prefer batch and aggregate endpoints over many small calls (see the batching sketch after this list).
Use pagination knobs consistently to avoid accidental fanout.
Prefer WebSockets and webhooks over REST polling for real-time needs.
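As an example of the batching lever, the sketch below collapses many per-item lookups into one aggregate call. The /v1/tokens route and its ids parameter are hypothetical; consult the API reference for the real batch endpoints.

```ts
// Hypothetical batch endpoint: one request for N tokens instead of N requests.
async function getTokens(baseUrl: string, ids: string[]): Promise<unknown[]> {
  const url = `${baseUrl}/v1/tokens?ids=${encodeURIComponent(ids.join(","))}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Versus: ids.map((id) => fetch(`${baseUrl}/v1/tokens/${id}`)) ...
// one aggregate call keeps request counts, and rate-limit pressure, low.
```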
Recommended integration defaults
Use timeouts on the client side.
Implement retries with exponential backoff for transient failures.
Treat 429 or rate-limit errors as a signal to back off and batch; a sketch combining these defaults follows.
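The three defaults above combine naturally into one request wrapper. This is a sketch assuming the Fetch API and a server that may send a Retry-After header in seconds; tune the timeout, attempt count, and base delay to your workload.

```ts
async function requestWithRetry(url: string, maxAttempts = 4): Promise<Response> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      // Client-side timeout so a stuck connection cannot hang the caller.
      const res = await fetch(url, { signal: AbortSignal.timeout(5_000) });

      if (res.ok) return res;

      // 429 and 5xx are transient: back off (honoring Retry-After when
      // present and numeric) before trying again.
      if (res.status === 429 || res.status >= 500) {
        const retryAfter = Number(res.headers.get("retry-after"));
        const backoffMs = retryAfter > 0
          ? retryAfter * 1000
          : 2 ** attempt * 250 + Math.random() * 100; // exponential + jitter
        await new Promise((r) => setTimeout(r, backoffMs));
        continue;
      }
      return res; // other 4xx: not transient, surface to the caller
    } catch (err) {
      // Timeouts and network errors are transient: back off and retry.
      if (attempt === maxAttempts - 1) throw err;
      await new Promise((r) => setTimeout(r, 2 ** attempt * 250));
    }
  }
  throw new Error("retries exhausted");
}
```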