Scalability & Performance Model
Horizontal scaling (stateless edge)
Acceso is designed for read-heavy workloads and bursty traffic. Scaling is primarily horizontal and cache-assisted.
Gateway/API servers are stateless.
Instances scale out behind a load balancer.
Rolling deploys are safe because there is no sticky session requirement.
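The statelessness property above can be sketched in a few lines. This is an illustrative handler, not Acceso's actual gateway code: the point is that identity is resolved from the request itself (here, a bearer token), so no server-side session store exists and any instance behind the load balancer can serve any request.

```python
def handle_request(headers: dict, path: str) -> tuple[int, str]:
    """Resolve identity from the Authorization header alone.

    No server-side session is consulted, so instances can be added,
    removed, or rolled without draining sticky sessions.
    """
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return 401, "missing or malformed bearer token"
    api_key = auth.removeprefix("Bearer ")
    if not api_key:
        return 401, "empty token"
    # Hypothetical success path; real validation would check the key
    # against an auth service (itself cacheable, since keys rarely change).
    return 200, f"ok for key ending in {api_key[-4:]}"
```

Because the handler reads everything it needs from the request, a rolling deploy can replace instances mid-stream without breaking in-flight clients.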
The system is built to avoid “scaling by melting upstreams”:
Domain services isolate hot spots per protocol or venue.
Caching reduces repeat upstream reads and tail latency.
Rate limits prevent burst traffic from cascading.
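The caching point above can be made concrete with a minimal TTL cache sketch (illustrative only; the class and its parameters are assumptions, not Acceso's implementation). Repeated reads of the same key within the TTL are served from memory, so the upstream sees one request instead of many, which is what trims both load and tail latency.

```python
import time
from typing import Any, Callable

class TTLCache:
    """Tiny TTL cache: reads within `ttl` seconds of each other hit
    memory instead of the upstream dependency."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store: dict[Any, tuple[float, Any]] = {}

    def get_or_fetch(self, key: Any, fetch: Callable[[], Any]) -> Any:
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]            # cache hit: no upstream call
        value = fetch()              # cache miss: exactly one upstream call
        self._store[key] = (now, value)
        return value
```

A burst of identical reads therefore costs the upstream one request per key per TTL window, rather than one per client request.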
Caching and data flow (canonical)
Caching strategy and client performance levers are documented in one place: see Data Flow, Caching & Performance.
Real-time delivery (avoid polling)
For high-frequency workloads, use streams or webhooks. Polling makes rate limits your bottleneck.
Use WebSockets for low-latency streaming.
Use webhooks for event-driven automation.
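For the webhook path, consumers typically verify that a delivery really came from the sender by recomputing an HMAC over the raw body and comparing it to a signature header. The sketch below assumes an HMAC-SHA256 hex signature; the actual header name and signing scheme are whatever Acceso's webhook docs specify, and are not shown here.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Recompute HMAC-SHA256 over the raw request body and compare it
    to the received signature in constant time."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Note the constant-time comparison (`hmac.compare_digest`): a plain `==` on signatures can leak timing information.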
Quotas and rate enforcement
Rate limits are enforced per API key to keep usage fair and predictable. They also protect upstream dependencies.
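Per-key enforcement of this kind is commonly implemented as a token bucket: each key accrues tokens at a steady rate up to a burst capacity, and a request is admitted only if a token is available. The sketch below is a generic illustration with caller-supplied timestamps, not Acceso's actual limiter; the rate and burst numbers are arbitrary.

```python
class TokenBucket:
    """Per-API-key token bucket: refills at `rate` tokens/sec up to
    `burst` capacity. One bucket instance per key."""

    def __init__(self, rate: float, burst: float, now: float = 0.0):
        self.rate = rate
        self.burst = burst
        self.tokens = burst   # start full: allow an initial burst
        self.last = now

    def allow(self, now: float) -> bool:
        """Admit one request at time `now` (seconds) if a token exists."""
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False          # caller would return 429 here
```

Bounding the burst is what stops a traffic spike from cascading into upstream dependencies: excess requests are rejected at the edge instead of queued indefinitely.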
For batching, pagination, timeouts, retries, and 429 handling: Data Flow, Caching & Performance.