Getting Started

From zero to rate-limited in under 5 minutes.

Prerequisites

  - A working Go toolchain (the steps below use go mod, go run, and go test)
  - curl (Linux / macOS) or PowerShell (Windows) for the HTTP examples

Quick Start (Local)

Clone & install dependencies

git clone https://github.com/suresh-p26/RLAAS.git
cd RLAAS
go mod tidy

Run the server

go run ./cmd/rlaas-server

The server starts on :8080 (HTTP) and :9090 (gRPC) by default.

Environment Variables

RLAAS_POLICY_FILE — policy file path (default: examples/policies.json)
RLAAS_GRPC_ADDR — gRPC listen address (default: :9090)
RLAAS_INVALIDATION_TARGETS — comma-separated sidecar base URLs

Make your first check

curl (Linux / macOS)

curl -X POST http://localhost:8080/v1/check \
  -H "Content-Type: application/json" \
  -d '{
    "request_id": "req-1",
    "org_id": "acme",
    "tenant_id": "retail",
    "signal_type": "http",
    "operation": "charge",
    "endpoint": "/v1/charge",
    "method": "POST",
    "user_id": "u1"
  }'

PowerShell (Windows)

$body = @{
  request_id = "req-1"
  org_id = "acme"
  tenant_id = "retail"
  signal_type = "http"
  operation = "charge"
  endpoint = "/v1/charge"
  method = "POST"
  user_id = "u1"
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "http://localhost:8080/v1/check" `
  -ContentType "application/json" -Body $body

Expected Response

{
  "allowed": true,
  "action": "allow",
  "reason": "within_limit",
  "remaining": 99
}

(Optional) Run the sidecar

go run ./cmd/rlaas-agent

The sidecar listens on :18080 by default. It syncs policies from the upstream server and serves decisions locally.

Sidecar Variables

RLAAS_AGENT_LISTEN — listen address (default: :18080)
RLAAS_UPSTREAM_HTTP — upstream server (default: http://localhost:8080)
RLAAS_AGENT_SYNC_SECS — sync interval in seconds (default: 30)

Run tests & benchmarks

# Run all tests
go test ./...

# Run benchmarks
go test ./benchmarks -run ^$ -bench . -benchmem

Choose Your Integration Model

Option A: Centralized HTTP

  1. Send RequestContext to POST /v1/check
  2. Read allowed, action, reason, remaining, retry_after
  3. Enforce behavior in your service

Option B: Centralized gRPC

  1. Generate stubs from api/proto/rlaas.proto
  2. Call CheckLimit before protected work
  3. Use Acquire/Release for concurrency-limited sections

Option C: Sidecar Local Mode

  1. Run app and sidecar together
  2. Call sidecar POST /v1/check locally
  3. Let sidecar handle sync and invalidation

Option D: Non-Go SDK Client

  1. Install SDK for Python, TypeScript, Java, or .NET
  2. Initialize with server base URL
  3. Call check() and enforce in your code

API Examples (Copy/Paste)

Create a Policy

curl -X POST http://localhost:8080/v1/policies \
  -H "Content-Type: application/json" \
  -d '{
    "policy_id": "payments-limit",
    "name": "Payments limit",
    "enabled": true,
    "priority": 100,
    "scope": {
      "org_id": "acme",
      "signal_type": "http",
      "operation": "charge"
    },
    "algorithm": {
      "type": "fixed_window",
      "limit": 100,
      "window": "1m"
    },
    "action": "deny",
    "failure_mode": "fail_open",
    "enforcement_mode": "enforce",
    "rollout_percent": 100
  }'

Validate Before Deploying

curl -X POST http://localhost:8080/v1/policies/validate \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Test",
    "enabled": true,
    "scope": {"signal_type": "http"},
    "algorithm": {"type": "fixed_window", "limit": 10, "window": "1m"},
    "action": "deny",
    "rollout_percent": 50
  }'

# → {"valid":true}

Gradual Rollout

# Start at 25%
curl -X POST http://localhost:8080/v1/policies/payments-limit/rollout \
  -H "Content-Type: application/json" \
  -d '{"rollout_percent": 25}'

# Bump to 100% when confident
curl -X POST http://localhost:8080/v1/policies/payments-limit/rollout \
  -H "Content-Type: application/json" \
  -d '{"rollout_percent": 100}'

Rollback to Previous Version

curl -X POST http://localhost:8080/v1/policies/payments-limit/rollback \
  -H "Content-Type: application/json" \
  -d '{"version": 1}'

View Audit Trail & Versions

curl http://localhost:8080/v1/policies/payments-limit/audit
curl http://localhost:8080/v1/policies/payments-limit/versions

Analytics Summary

curl http://localhost:8080/v1/analytics/summary
curl "http://localhost:8080/v1/analytics/summary?top=5"

Production Readiness Checklist

✅ TLS termination — run behind an ingress/proxy with TLS
✅ Distributed counters — use Redis for multi-node counter sharing
✅ Health probes — configure K8s liveness/readiness on /healthz
✅ Invalidation targets — set RLAAS_INVALIDATION_TARGETS for sidecars
✅ Monitoring — monitor decision + analytics endpoints
✅ Benchmark baseline — run the benchmark suite before rollout
✅ Shadow mode first — deploy new policies in shadow mode, validate, then enforce

What's Next?

Explore All Features → algorithms, actions, matching dimensions, backends, and performance details.

Full API Reference → every endpoint, request/response format, and object model.

Read the Design Doc → architecture, principles, implementation phases, and package structure.

SDK Documentation → code examples for Go, Python, TypeScript, Java, and .NET.

README → the full project README, everything in one place.

Rate Limiting As A Service (RLAAS)

RLAAS (Rate Limiting as a Service) is a policy-driven platform for enforcing limits, quotas, and traffic control across APIs and service workloads.

It supports three deployment models: centralized HTTP, centralized gRPC, and sidecar local mode.

Completed Capabilities

  - Core rate limiting
  - Control plane APIs
  - Service & runtime
  - Available backends

Remaining Roadmap

Project Stage

RLAAS is ready for customer integration in controlled production environments, with the remaining roadmap focused on enterprise persistence backends and a broader SDK/UX surface.