Change submissions to be queued rather than wait for completion of the last job

Currently the system works by sending and compiling a job, retrieving and analysing it, and then sending the next job. This puts the sender at a disadvantage compared to the SGE advanced job system, where a number of jobs can be queued operating with different task ids. The better way to do it would be to send all the jobs at once and just work on them as they finish.
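
A rough sketch of the queued approach, assuming hypothetical submit_job(), job_finished() and analyse_job() helpers standing in for the existing submit/compile/analyse steps (the names are illustrative, not part of flightdeck.py):

    # Sketch only: submit every job up front, then analyse each one as it
    # finishes, rather than waiting for the previous job to complete.
    import time

    def run_all(jobs, submit_job, job_finished, analyse_job):
        pending = {}
        for job in jobs:
            task_id = submit_job(job)      # hand each job to the queue immediately
            pending[task_id] = job

        while pending:
            for task_id in list(pending):
                if job_finished(task_id):  # poll for completed tasks
                    analyse_job(pending.pop(task_id))
            time.sleep(30)                 # don't hammer the queue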

Details

Id: cadcfae225d5f213a9efe7950e00227f3c712e47
Type: task
Creation time: 2011-02-07 17:08
Creator: Mathew Topper <mathew.topper@...>
Release: 0.4 (unreleased)
Component: flightdeck.py
Status: unstarted

Issue log

2011-02-09 14:41 Mathew Topper <mathew.topper@...> assigned to release 0.4 from unassigned
2011-02-07 22:02 Mathew Topper <mathew.topper@...> commented
Done a bit more looking at this one and there is a module called multiprocessing that might be able to help. Unfortunately, it is only available from Python 2.6 onwards.
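As a rough sketch of how multiprocessing might be used here (Python 2.6 or later), with monitor_job() as a hypothetical stand-in for the existing submit/poll/analyse cycle:

    # Sketch only: give each job its own monitor process so checking the
    # licence server or the queue for one job never blocks the others.
    from multiprocessing import Process

    def monitor_job(job):
        pass  # submit the job, poll the queue, analyse on completion

    def launch_all(jobs):
        workers = [Process(target=monitor_job, args=(job,)) for job in jobs]
        for worker in workers:
            worker.start()
        for worker in workers:
            worker.join()   # wait for every monitor process to finish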
2011-02-07 17:29 Mathew Topper <mathew.topper@...> commented
Some reading leads to os.fork() as an option. Given these subprocesses will be draining resources, perhaps it is better to have no more than two jobs being monitored, i.e. two queued or one running and one queued?
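A minimal fork-based sketch of that limit, assuming a hypothetical wait_on_job() helper doing the per-job monitoring; the parent never keeps more than two children alive:

    # Sketch only: cap the number of live monitor processes at two.
    import os

    MAX_ACTIVE = 2

    def run_forked(jobs, wait_on_job):
        active = []
        for job in jobs:
            while len(active) >= MAX_ACTIVE:
                pid, _ = os.wait()        # block until a child exits
                active.remove(pid)
            pid = os.fork()
            if pid == 0:                  # child process: handle one job
                wait_on_job(job)
                os._exit(0)
            active.append(pid)            # parent: note the child and carry on
        while active:
            pid, _ = os.wait()
            active.remove(pid)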
2011-02-07 17:14 Mathew Topper <mathew.topper@...> commented
This might be a bit harder than it sounds, as you have to monitor different jobs at the same time. For instance, how do you organise checking the licence server for a waiting job while probing the queue for the condition of a running job, or even analysing the waiting job? You kind of need another process to start to do this...
2011-02-07 17:10 Mathew Topper <mathew.topper@...> assigned to component flightdeck.py from fifthwave
This is a flightdeck issue as the cycle method controls the sending of the jobs at the moment.
2011-02-07 17:08 Mathew Topper <mathew.topper@...> created