I want to build an application where a user hits an endpoint to save a job model to storage. The model includes metadata describing a long-running computation that gets offloaded to a distributed task queue, and users can then look up their job's run status or computation results via the saved model's ID. However, I don't want this job enqueued to the messaging queue just once; instead, I want the saved job to run on a schedule, say once every hour.
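To make the shape of the problem concrete, here is a minimal sketch of the save/lookup flow, using stdlib stand-ins for the real pieces (the `jobs_table` dict stands in for a database and `task_queue` for a distributed queue like Redis or RabbitMQ; all names are hypothetical):

```python
from dataclasses import dataclass, field
from queue import Queue
from uuid import uuid4

# In-memory stand-ins; the real versions would be a database table
# and a distributed message queue.
jobs_table: dict = {}
task_queue: Queue = Queue()

@dataclass
class Job:
    metadata: dict           # parameters for the long-running computation
    status: str = "pending"  # pending -> running -> done
    result: object = None
    id: str = field(default_factory=lambda: uuid4().hex)

def save_job(metadata: dict) -> str:
    """Endpoint handler: persist the job and return its ID for later lookups."""
    job = Job(metadata=metadata)
    jobs_table[job.id] = job
    return job.id

def get_job(job_id: str) -> Job:
    """Endpoint handler: look up run status / result by ID."""
    return jobs_table[job_id]
```

The open question is what sits between `save_job` and `task_queue.put` so the enqueue happens on a recurring schedule rather than once at save time.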
The problem is how to get the job enqueued multiple times a day. I know I could use a cron job that reads the entire jobs table, iterates over every row, and pushes each one onto the queue, but that sounds like a faux pas that would inadvertently recreate the very bottleneck a task queue is meant to avoid. Furthermore, I see a lot of folks putting their cron job directly into the same process as their API server, which probably doesn't scale well. Even as a standalone process, I don't see how you could feasibly partition the table and scale the scheduler horizontally. Any suggestions on a good architecture or best practices?
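For reference, the naive scheduler I'm skeptical of would look roughly like this (a sketch only; `jobs_table` and `task_queue` are hypothetical stand-ins for the persisted table and the distributed queue):

```python
from queue import Queue

# Stand-ins for the persisted jobs table and the distributed queue.
jobs_table = {
    "job-1": {"metadata": {"n": 10}},
    "job-2": {"metadata": {"n": 20}},
}
task_queue: Queue = Queue()

def enqueue_all_jobs() -> int:
    """Invoked hourly by cron: scan every saved job and push it onto the queue.

    This is the part that feels wrong: a full, unpartitioned table scan
    running in a single process, with no obvious way to shard the work
    across multiple scheduler instances without double-enqueueing jobs.
    """
    count = 0
    for job_id, job in jobs_table.items():
        task_queue.put((job_id, job["metadata"]))
        count += 1
    return count
```

It works for a handful of rows, but the single scan-and-loop is exactly what I'd expect to fall over as the table grows.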