I'm using Laravel's database queue connection to process frequent robot heartbeat data. Every second, robots send data that the Laravel server processes by updating the corresponding robot entries. The job processing itself is fast (about 0.5 seconds). However, job retrieval occasionally slows down, taking around 2.5 seconds to fetch a job from the jobs table. This creates a bottleneck: jobs pile up faster than they can be cleared, so more jobs come in than can be processed.
The query responsible for retrieving jobs is the default one used by the Laravel framework:
select * from jobs where queue = 'test_queue' and ((reserved_at is null and available_at <= 1738554694) or (reserved_at <= 1738554694)) order by id asc limit 1 for update;
To resolve the slowdown, I have to delete the piled-up jobs and rebuild the table by running OPTIMIZE TABLE jobs, after which subsequent job retrieval queries run fast again.
What preventive measures can I take to keep this issue from occurring and maintain optimal performance?
I tried adding a composite index on the queue, reserved_at, and available_at columns, but the query still only uses the index on the queue column.
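For reference, the composite index can be added with a migration roughly like the sketch below (the migration class and index name are mine, not from the original post). Note that because the query combines reserved_at and available_at with an OR, MySQL can typically only use the leading queue column of this index for filtering, which matches the behaviour described above.

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration {
    public function up(): void
    {
        Schema::table('jobs', function (Blueprint $table) {
            // Composite index on the columns referenced in the worker's WHERE clause.
            $table->index(
                ['queue', 'reserved_at', 'available_at'],
                'jobs_queue_reserved_available_index'
            );
        });
    }

    public function down(): void
    {
        Schema::table('jobs', function (Blueprint $table) {
            $table->dropIndex('jobs_queue_reserved_available_index');
        });
    }
};
```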
Laravel's database queue is not designed for high-frequency job processing. Consider switching to Redis or Amazon SQS, which are optimized for fast queue operations. Redis, in particular, is in-memory, eliminating slow database queries.
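Switching drivers is a one-line change in .env, since Laravel ships with a Redis queue connection out of the box (this assumes a Redis server is reachable and the phpredis extension or predis package is installed):

```php
<?php

// In .env, switch the queue driver:
//     QUEUE_CONNECTION=redis
//
// config/queue.php already includes a 'redis' connection; the stock
// definition reads roughly like this:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',            // Redis connection from config/database.php
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 90,                  // seconds before a reserved job is retried
],
```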
This may not be the answer you are looking for, but using the database as the queue driver is not a good idea. Using Redis as the queue driver instead would be good enough.
If you still want the database driver, you might want to move the jobs table onto a separate database connection from your main one, so queue polling does not contend with your application's queries.
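Laravel's database queue connection accepts a 'connection' key for exactly this purpose. A sketch of the config change, assuming a hypothetical 'queue_db' connection has been defined in config/database.php pointing at a separate MySQL instance or schema:

```php
<?php

// config/queue.php: give the database queue its own connection.
'connections' => [
    'database' => [
        'driver' => 'database',
        'connection' => 'queue_db',   // dedicated connection instead of the app default
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 90,
    ],
],
```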