On certain occasions there can be a very large spike in job submissions into a given FME Server queue. What would be really nice would be a way to move this glut of submitted jobs to a new queue, freeing up the original queue to process new jobs again. While there are a lot of creative ways in which queues and priorities can be assigned, they can't cover all bases, and the right priority can change based on current server load.
Example Scenario:
- FME Server has several engines
- Queue X processes jobs from Repo A and splits them between Engines 1 & 2 in order to handle multiple requests
- A very large spike of jobs comes into Queue X (usually submitted by the same user via some automated process)
- Queue X is now blocked up potentially for hours or even days
- New requests from other users are blocked until the backlog has cleared
- Being able to move this glut of jobs to a low-priority queue post-submission would let new jobs get processed while still letting the backlog work its way through
Ways around this currently include the following (and I'm sure there are more), but each has its own drawbacks:
- Design for autoscaling to handle these spikes (really great, but complex and often not needed, or not even possible in the current IT infrastructure)
- Assign more engines to the queue until the backlog is processed (this can overwhelm the server and tie up even more engines)
- Design smarter job routing that tries to detect larger jobs and gives them lower priority or puts them in their own queue (probably the best option, but not always possible)
- Temporarily increase the priority for newly submitted jobs using job directives
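As a rough illustration of that last workaround, here's a minimal sketch of what submitting a job with queue/priority directives through the FME Server v3 REST API might look like. The endpoint path, the `tag` and `priority` directive names, and the queue name are assumptions based on my reading of the REST API docs, so verify against your own server before relying on this:

```python
import json

def build_submit_payload(queue_tag, priority, published_parameters=None):
    """Build the JSON body for a hypothetical
    POST /fmerest/v3/transformations/submit/<repo>/<workspace> request.
    Directive names ("tag" for the queue, "priority") are assumptions."""
    return {
        "TMDirectives": {
            "tag": queue_tag,       # queue the job should be routed to (assumed directive)
            "priority": priority,   # job priority within that queue (assumed directive)
        },
        "publishedParameters": published_parameters or [],
    }

# Route a backlog-style job to an assumed low-priority queue
payload = build_submit_payload("low_priority_queue", 1)
print(json.dumps(payload, indent=2))
```

The point is only that directives are fixed at submission time, which is exactly why a post-submission move would help.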
Depending on licensing, this problem can also become expensive. If using CPU-based pricing for engines, you might want to move these slower jobs to a standard engine instead.
The way I see it working would be an option in the UI when viewing the list of queued jobs: check one or more jobs > Options > Move Queue | Set New Priority.