I came across a problem where a slow processing job needed to run.
The results did not need to be available in real time, so waiting
5-10 minutes for them was no problem.
The trouble began when the underlying data changed and the results were no
longer valid. A new job would be enqueued to Sidekiq to handle the re-processing.
This is where things got interesting: the original job had already begun
work on those rows, so they were already locked.
At first, letting MySQL handle the deadlocks seemed to work: just rescue the
exception and requeue the worker, letting Sidekiq Unique Jobs filter out
any duplicate processing jobs. However, it would be much simpler to handle this
in the worker and avoid sending extra requests to MySQL.
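That first pattern looks roughly like the sketch below. It is a self-contained
simulation, not the real worker: `Deadlocked` stands in for the database
adapter's deadlock error, and `FAKE_QUEUE` stands in for Sidekiq's queue
(where Sidekiq Unique Jobs would drop exact duplicates).

```ruby
# Stand-in for the deadlock exception raised by the MySQL adapter.
class Deadlocked < StandardError; end

# Stand-in for Sidekiq's queue; a real worker would call perform_async.
FAKE_QUEUE = []

class ProcessingWorker
  def perform(row_id)
    do_work(row_id)
  rescue Deadlocked
    # Requeue the job; duplicate-filtering keeps this from piling up.
    FAKE_QUEUE << row_id
    nil
  end

  def do_work(row_id)
    # Simulate hitting rows the original job has locked.
    raise Deadlocked if row_id == :locked_row
    "processed #{row_id}"
  end
end
```

The cost of this approach is that every collision still makes a round trip to
MySQL before the deadlock is detected.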
Using the redis-lock gem looked promising: just rescue the
Redis::Lock::LockNotAcquired exception and requeue, retrying in this fashion
until the new job can start. This worked until I realized that the locks have
a timeout, which guarantees any holder only a finite time with the lock
unless they ask for more time.
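The acquire-or-requeue shape can be sketched as follows. `FakeLock` is a
stand-in for a redis-lock style API so the example runs without a Redis
server; only the `LockNotAcquired` rescue mirrors the real pattern.

```ruby
# Stand-in for Redis::Lock::LockNotAcquired.
class LockNotAcquired < StandardError; end

# In-memory lock with a redis-lock style block API.
class FakeLock
  @held = {}
  class << self
    def lock(key)
      raise LockNotAcquired if @held[key]
      @held[key] = true
      begin
        yield
      ensure
        @held.delete(key)
      end
    end
  end
end

# Stand-in for requeueing to Sidekiq.
RETRIES = []

def process_with_lock(key)
  FakeLock.lock(key) { "did work under #{key}" }
rescue LockNotAcquired
  RETRIES << key
  nil
end
```

If the lock is contended the job simply requeues itself and tries again later;
no database round trip is needed to discover the conflict.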
Before this, I had typically avoided writing my own multi-threaded code.
Thinking in concurrent terms is rarely worth the hassle; my brain was just not
meant for it. For a simple task like this, though, I was fine with it. I needed
two threads: one to keep the lock alive, and a second to do the work.
After writing a class that creates the two threads, the API looked something like this.
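A minimal sketch of that two-thread shape is below. `KeepaliveLock` and
`with_keepalive` are hypothetical names of my own, with an in-memory TTL
standing in for the Redis lock's timeout; the real class would call the gem
to extend the lock instead.

```ruby
# Simulated lock whose TTL can be extended, mimicking a Redis lock timeout.
class KeepaliveLock
  def initialize(ttl:)
    @ttl = ttl
    @expires_at = Time.now + ttl
    @mutex = Mutex.new
  end

  def extend!
    @mutex.synchronize { @expires_at = Time.now + @ttl }
  end

  def expired?
    @mutex.synchronize { Time.now > @expires_at }
  end
end

# One thread does the work; a second extends the lock until the work is done.
def with_keepalive(lock, interval: 0.1)
  worker = Thread.new { yield }
  keeper = Thread.new do
    # Thread#join with a timeout returns nil while the worker is still running.
    until worker.join(interval)
      lock.extend! # the real gem would ask Redis for more time here
    end
  end
  keeper.join
  worker.value
end
```

The keepalive thread exits on its own once the worker finishes, so the lock
stops being extended and expires naturally after the job is done.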
For those with a hungrier appetite, here is the full source for the RedisLockUtils class.