N8N Split In Batches Node Cookbook

Independent third-party notes: n8n is a trademark of its owner and is referenced only for compatibility and troubleshooting context.

Quick Answer

Use Split In Batches, now documented as Loop Over Items, when a workflow needs to process many items in controlled batches instead of all at once.

Key Facts

Node role: Loop through items in smaller batches.
Best fit: Rate-limited APIs, large lists, and workflows that should avoid processing every item at once.
Common benefit: Lower memory pressure and more controlled external API calls.
Common companion nodes: HTTP Request, Code, database nodes, IF, and Wait.

Recommended Steps

  1. Place the loop node after the node that outputs the item list.
  2. Choose a batch size appropriate for the external API or workflow load.
  3. Put the processing nodes inside the loop path (see the sketch after this list).
  4. Connect the last node in the loop path back to the loop node so the loop continues until all items are processed.
  5. Test with a small list before running a full production batch.
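
For step 3, here is a minimal sketch of a processing node inside the loop path, written for n8n's JavaScript Code node in "Run Once for All Items" mode. The added fields (batchIndex, processedAt) are illustrative assumptions, not anything the loop node requires:

```javascript
// Runs once per batch on the loop node's "loop" output.
// $input.all() returns only the current batch, not the full
// list, so per-batch memory stays small.
const batch = $input.all();

return batch.map((item, i) => ({
  json: {
    ...item.json,
    batchIndex: i,                          // illustrative field
    processedAt: new Date().toISOString(),  // illustrative field
  },
}));
```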

Verification

  • All expected items are processed exactly once (the check below can verify this).
  • API rate limit errors decrease or disappear.
  • Workflow memory usage is lower than processing the full list at once.
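
One way to check the first point is a Code node on the loop's "done" output that fails the run if any item appears twice. This sketch assumes each item carries a unique id field; adjust the key to whatever uniquely identifies your items:

```javascript
// Placed after the loop's "done" output, which emits every
// processed item. Throws if any id was processed more than once.
const items = $input.all();
const seen = new Set();
const duplicates = [];

for (const item of items) {
  const id = item.json.id; // assumption: items carry a unique `id`
  if (seen.has(id)) duplicates.push(id);
  seen.add(id);
}

if (duplicates.length > 0) {
  throw new Error(`Duplicate items detected: ${duplicates.join(', ')}`);
}

return [{ json: { processed: seen.size } }];
```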

Warnings

  • Batch processing can still create duplicate side effects if retries are not idempotent (see the sketch below).
  • A very large batch size defeats the point: memory use and API pressure stay close to processing the whole list at once.
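
A simple way to make retries safer is to give each item a deterministic key before the side-effecting node, then use that key as the unique or upsert key in the database node. A minimal Code node sketch, assuming each item has a stable email field (the field name and key format are assumptions):

```javascript
// Attach a deterministic idempotency key so a retried batch
// updates the same record instead of creating a duplicate.
return $input.all().map((item) => ({
  json: {
    ...item.json,
    idempotencyKey: `lead-${item.json.email}`, // assumption: stable `email`
  },
}));
```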

Common Mistakes

  • Setting batch size too high and still hitting memory or API limits.
  • Forgetting idempotency before retrying a partially completed batch.
  • Not testing the final partial batch.
  • Building a loop that never reaches completion, usually because the loop branch is not wired back into the loop node.

Examples

Batch an API enrichment workflow: reduce pressure on external APIs and n8n memory.
Input: 1,000 leads
Batch size: 25
Inside loop: HTTP Request -> Set -> database update
After loop: Slack summary
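
With 1,000 leads and a batch size of 25, the loop path runs 40 times. The Slack summary can be fed by a Code node on the loop's "done" output; this sketch assumes each processed lead carries a boolean enriched flag, which is an illustrative field name:

```javascript
// After the loop completes, aggregate results for the Slack node.
const leads = $input.all();
const enriched = leads.filter((l) => l.json.enriched === true).length;

return [{
  json: {
    text: `Enrichment finished: ${enriched}/${leads.length} leads enriched.`,
  },
}];
```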

FAQ

Does batching make a workflow faster?

Not always. It usually makes the workflow more controlled and reliable, especially with large data or rate-limited APIs.

What batch size should I use?

Start small enough to avoid API and memory pressure, then increase after measuring execution time and failure rate.
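
As a back-of-the-envelope example with assumed numbers (an API limit of 100 requests per minute, one request per item), a batch of 25 needs roughly a 15-second pause between batches, which a Wait node inside the loop can provide:

```javascript
// Plain JavaScript pacing check, not an n8n node.
// All numbers here are assumptions for illustration.
const rateLimitPerMinute = 100; // assumed API limit
const requestsPerItem = 1;      // assumed cost per item
const batchSize = 25;

// Minimum wait between batches to stay under the limit.
const waitSeconds = (batchSize * requestsPerItem * 60) / rateLimitPerMinute;
console.log(waitSeconds); // 15 -> configure a Wait node of ~15 seconds
```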
