MuktoSource
October 24, 2026 5 min read

Optimizing Laravel Queues for High Throughput

By Tanvir Hasan, Lead Architect

When building enterprise applications, handling background jobs efficiently is crucial. In this post, we explore how we configured Laravel Horizon, backed by Redis, to handle 10,000+ jobs per minute for a logistics client.

The Problem

Our client's system receives massive bursts of data from IoT devices. Processing these synchronously would kill the API response time. We needed a robust queue system that could scale horizontally.

Configuration

First, we ensured our config/horizon.php was optimized for memory usage. Here is the configuration we deployed:

config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default', 'notifications', 'iot-data'],
            'balance' => 'auto',   // shift workers between queues on demand
            'minProcesses' => 5,
            'maxProcesses' => 20,
            'memory' => 128,       // restart a worker once it exceeds 128 MB
            'tries' => 3,
            'timeout' => 60,       // seconds before a running job is failed
        ],
    ],
],

The key here is the 'balance' => 'auto' setting. Horizon automatically scales the number of worker processes allocated to each queue based on its current workload, so a burst on the iot-data queue pulls in capacity without starving the others.

Handling Failures

Even with perfect code, external APIs fail. We implemented an exponential backoff strategy for our notification jobs.
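A backoff like this can be sketched with Laravel's built-in retry hooks. The job name, handler body, and delay values below are illustrative, not taken from the client's codebase:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

// Hypothetical notification job showing exponential backoff.
class SendNotification implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    // Matches the supervisor's 'tries' => 3: one initial attempt, two retries.
    public $tries = 3;

    // Wait 10s after the first failure, 60s after the second.
    public function backoff(): array
    {
        return [10, 60];
    }

    public function handle(): void
    {
        // Call the external notification API here.
    }
}
```

Returning an array from backoff() gives a different delay per retry; returning a single integer would apply a fixed delay instead.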

"Reliability isn't about never failing; it's about recovering gracefully when you do."

Monitoring with Prometheus

We didn't just stop at Horizon's dashboard. We exported queue metrics to Prometheus to alert us via Slack if the wait time exceeded 30 seconds.
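As a sketch of the export side: Horizon ships a WaitTimeCalculator (the same class its dashboard uses) that estimates per-queue wait times, which can be rendered in Prometheus' text exposition format. The route path and metric name here are our own choices:

```php
<?php

// routes/web.php (sketch): expose queue wait times for Prometheus to scrape.
use Illuminate\Support\Facades\Route;
use Laravel\Horizon\WaitTimeCalculator;

Route::get('/metrics', function (WaitTimeCalculator $waitTimes) {
    $lines = [
        '# HELP horizon_queue_wait_seconds Estimated wait time per queue.',
        '# TYPE horizon_queue_wait_seconds gauge',
    ];

    // calculate() returns an array keyed by queue, e.g. 'redis:iot-data'.
    foreach ($waitTimes->calculate() as $queue => $seconds) {
        $lines[] = sprintf(
            'horizon_queue_wait_seconds{queue="%s"} %.2f',
            $queue,
            $seconds
        );
    }

    return response(implode("\n", $lines)."\n", 200)
        ->header('Content-Type', 'text/plain; version=0.0.4');
});
```

From there, a Prometheus alert rule on horizon_queue_wait_seconds > 30 routed through Alertmanager handles the Slack notification.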


Key Takeaways

  • Always use a persistent driver like Redis for production queues.
  • Configure maxProcesses carefully to avoid exhausting server RAM.
  • Use unique job IDs to prevent duplicate processing.
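On the last point, Laravel's ShouldBeUnique contract enforces uniqueness at dispatch time. The job name, constructor fields, and lock window below are illustrative:

```php
<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;

// Hypothetical IoT ingestion job: only one instance per device reading
// may sit on the queue at a time.
class ProcessIotReading implements ShouldQueue, ShouldBeUnique
{
    public function __construct(
        public string $deviceId,
        public int $readingId,
    ) {}

    // Release the uniqueness lock after an hour even if the job never runs.
    public $uniqueFor = 3600;

    // Dispatches with the same uniqueId are dropped while one is pending.
    public function uniqueId(): string
    {
        return $this->deviceId.':'.$this->readingId;
    }
}
```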
#Laravel #Redis #DevOps #Performance
