Managing high loads


Use this guide to set up your integration to scale your use of Prolific. It explains what causes rate limits and latency on the API, how to avoid them, and what to do if you encounter them. This is most relevant when you are running more than 100 concurrent studies, approving or paying several hundred submissions at a time, or otherwise making high volumes of API requests.

What triggers rate limits

Prolific does not enforce fixed rate limits on API calls. Instead, limits are implicit and arise from contention on shared resources — primarily the workspace wallet.

Operations that involve payments (approving submissions, paying bonuses) require the platform to verify available funds before each transaction. When many of these requests arrive concurrently within a single workspace, they contend for the same wallet and you may receive a 429 Too Many Requests response.

High request volumes across other endpoints can also create latency within the affected workspace, even if they don’t trigger 429s directly.

A 429 is transient, but if the underlying load is sustained, the errors will continue until the pressure eases.

Avoiding rate limits

Use bulk endpoints for payments

Rather than making many individual approval or bonus requests concurrently, use the bulk endpoints to process multiple operations in a single call.

If you have more items to process than a single bulk call allows, send them in sequential batches rather than in parallel. This eliminates wallet contention and is the most reliable way to avoid 429s on payment operations.
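A minimal sketch of sequential batching, assuming a bulk approval endpoint that accepts a list of submission IDs. The endpoint path, request shape, and batch limit here are illustrative placeholders, not confirmed API details; check the API reference for the real values.

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"  # placeholder
BATCH_LIMIT = 100  # assumption: check the API reference for the real per-call limit

def chunk(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def approve_in_batches(submission_ids):
    """Send bulk approvals one batch at a time, never in parallel,
    so the batches don't contend for the workspace wallet."""
    for batch in chunk(submission_ids, BATCH_LIMIT):
        # Endpoint path is illustrative -- consult the API reference.
        response = requests.post(
            "https://api.prolific.com/api/v1/submissions/bulk-approve/",
            headers={"Authorization": f"Token {API_TOKEN}"},
            json={"submission_ids": batch},
        )
        response.raise_for_status()
```

The key point is the sequential `for` loop: each batch completes before the next is sent, so only one payment operation at a time touches the wallet.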

Bulk endpoints process requests asynchronously — a successful API response means the batch has been accepted, not that all items are complete. Track progress using resource-level webhook events.

Subscribe to webhooks rather than polling

Instead of repeatedly calling endpoints to check whether submissions have been processed or study statuses have changed, subscribe to webhook events. Webhooks deliver real-time notifications to your system when events occur, removing the need to poll and ensuring your integration responds promptly without generating unnecessary load.
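Your webhook receiver only needs to route each delivered payload to the right handler. A minimal dispatch sketch is below; the event type names and payload fields are illustrative assumptions, so substitute the event types your subscription actually delivers.

```python
def handle_event(payload):
    """Route an incoming webhook payload to the appropriate handler.
    Event names and fields here are illustrative -- check the webhooks
    documentation for the exact events your subscription delivers."""
    event_type = payload.get("event_type")
    if event_type == "submission.status.change":
        return f"submission {payload['resource_id']} -> {payload['status']}"
    if event_type == "study.status.change":
        return f"study {payload['resource_id']} -> {payload['status']}"
    return None  # ignore event types you don't handle
```

With this in place there is no polling loop at all: your integration is idle until Prolific notifies it, which keeps your request volume near zero.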

Distribute activity across workspaces

For sustained high loads — such as running many studies concurrently or processing large volumes of payments — we recommend distributing activity across multiple workspaces.

As a general guideline, keep the number of concurrent active studies in a single workspace under 100 at any one time. Beyond this, a single workspace can become a bottleneck, and any rate limit pressure will affect all studies within it.
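One simple way to honour the guideline is to assign new studies to workspaces round-robin. This is a sketch, not an API call: the workspace IDs are placeholders, and in practice you would also track how many studies are already active in each workspace before assigning more.

```python
from itertools import cycle

MAX_ACTIVE_PER_WORKSPACE = 100  # guideline from this guide

def assign_studies(study_ids, workspace_ids):
    """Spread studies across workspaces round-robin so load is even
    and no single workspace becomes the bottleneck. IDs are placeholders."""
    workspaces = cycle(workspace_ids)
    return {study: next(workspaces) for study in study_ids}
```

Round-robin keeps each workspace's active-study count within a factor of one of the others, which is usually enough to stay under the 100-study guideline if you provision enough workspaces up front.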

Multiple workspaces also provide natural segregation of studies, participants, and payments, which can simplify reporting and operational management at scale.

If you need help structuring your workspaces or moving funds between them, please reach out to your Account Manager, who can assist with fund transfers and user access management.

Stagger bulk study launches

Launching a large number of studies simultaneously won’t cause rate limit errors, but it may result in delays before studies are visible to participants and before submissions start coming in. If you need to launch many studies at once, consider staggering the launches to ensure each study reaches participants promptly.
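Staggering can be as simple as a pause between launch calls. In this sketch, `launch_fns` stands in for whatever zero-argument callables publish your studies, and the 30-second interval is an assumption to tune, not a documented requirement.

```python
import time

def staggered_launch(launch_fns, interval_seconds=30.0):
    """Launch studies one at a time with a pause between each.
    `launch_fns` is a list of zero-argument callables that each publish
    one study; the interval is an assumption -- tune it to how quickly
    your studies need to reach participants."""
    results = []
    for i, launch in enumerate(launch_fns):
        if i > 0:
            time.sleep(interval_seconds)  # spread launches over time
        results.append(launch())
    return results
```

Launches still complete in order, so you keep a deterministic record of what was published, while participants see a steady trickle of new studies rather than a burst.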

Handling 429 errors

When you receive a 429 Too Many Requests response:

  1. Check for a Retry-After header in the response. Some endpoints include this when the 429 is caused by lock contention (e.g. submission transitions) — if present, wait at least that many seconds before retrying.
  2. If no Retry-After header is present, use exponential back-off: start with a 1 second wait, then double on each retry (2s, 4s, 8s, and so on) up to a maximum of around 60 seconds.
  3. Add a small random jitter to your wait time (e.g. ±20%) to avoid multiple concurrent retries re-colliding at the same interval.
  4. After 5–6 retries without success, treat it as a sustained issue and stop retrying automatically.

A basic Python example:

```python
import time
import random
import requests

def request_with_backoff(method, url, **kwargs):
    """Call `method` (e.g. requests.get) with retries on 429 responses."""
    max_retries = 6
    delay = 1.0
    for attempt in range(max_retries):
        response = method(url, **kwargs)
        if response.status_code != 429:
            return response
        if attempt < max_retries - 1:
            retry_after = response.headers.get("Retry-After")
            if retry_after:
                # The server told us how long to wait -- honour it.
                time.sleep(float(retry_after))
            else:
                # Exponential back-off with +/-20% jitter, capped at 60s.
                jitter = random.uniform(0.8, 1.2)
                time.sleep(delay * jitter)
                delay = min(delay * 2, 60)
    return response  # return final response after exhausting retries
```

You can monitor API and platform availability at status.prolific.com. Check this first if you’re seeing widespread errors — the issue may not be load-related.

If retrying doesn’t resolve the issue

If you’re receiving persistent 429 errors and back-off retries aren’t recovering the situation, contact our support team. Include:

  • Your workspace ID
  • The endpoint(s) affected
  • The approximate volume and pattern of requests (e.g. “~500 approval requests over 2 minutes”)

Some scenarios require investigation into the specifics of your workspace and usage pattern to resolve.