
Is it
possible to set up an FME scheduler in such a way that it runs at a given interval,
but only if the previous job submitted by that scheduler has already finished?

For example,
let's say we have a scheduler set up to run once every hour, and for some
reason a job takes more than 1 hour. I would like the next job to be executed once the
previous one has finished, and probably shift the subsequent submission times.

Hi @witos,

In the Advanced parameters of the Schedule section there is the option Running Job Expiry Time.


It's possible to configure the expiry time in seconds, minutes, hours, days, and weeks.

Thanks,

Danilo


Hi @danilo_fme

Thank you for the answer, but I believe that isn't what I need. If a job runs for more than 1 hour I'm still OK with that, but until it has finished I don't want the scheduler to submit the next job.



Hi @witos, I think a behavior close to your requirement could be achieved by setting a very short time (e.g. 1 second; I don't know if 0 is allowed) for the "Queued Job Expiry Time" of the scheduled task, if you can create a Queue specific to that task and assign an engine exclusively to the Queue.


Hi @witos


I don't think there's a nice way of doing that. On successful completion of a job you could get it to trigger a workspace subscription, but then you wouldn't be running it on a schedule.


You could perhaps adapt your scheduled workspace and add a REST API call at the beginning. You could use it to return the records for all running and completed jobs, and then, if an instance is already running, use a Terminator to stop the scheduled job; if it isn't running, just let the scheduled job run as normal.
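A minimal sketch of that REST check, to run from a PythonCaller at the start of the workspace. The endpoint path follows the FME Server REST API v3 (`/fmerest/v3/transformations/jobs/running`), but the host name, token, and workspace name below are placeholders you would replace with your own, and you should verify the endpoint against your server's API docs:

```python
import json
from urllib.request import Request, urlopen

FMESERVER = "https://myfmeserver.example.com"   # hypothetical host
TOKEN = "my-api-token"                          # hypothetical token
WORKSPACE = "my_scheduled_workspace.fmw"        # hypothetical workspace


def is_workspace_running(jobs, workspace):
    """Return True if any job in a 'running jobs' listing is for this workspace."""
    return any(job.get("workspace") == workspace for job in jobs)


def fetch_running_jobs():
    """Query the (assumed) v3 endpoint for currently running jobs."""
    req = Request(
        FMESERVER + "/fmerest/v3/transformations/jobs/running",
        headers={"Authorization": "fmetoken token=" + TOKEN,
                 "Accept": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp).get("items", [])


# Decision logic demonstrated on a canned response instead of a live call:
sample = [{"id": 101, "workspace": "my_scheduled_workspace.fmw"}]
already_running = is_workspace_running(sample, WORKSPACE)  # route to Terminator if True
```

If `already_running` is true, route the feature to a Terminator; otherwise let it continue into the rest of the workspace.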

We are doing this using a file and python code.

This is how we are doing it, but you can adapt it to fit your case:

  • To make sure the file is unlocked if there is a failure or a crash, we restart the engine after every job
  • Our scheduled workspace starts with a Creator and a PythonCaller
  • The PythonCaller locks the file, and the file remains locked until the engine is recycled
  • If the PythonCaller fails to lock the file, we only log a message and don't let the feature through, so the workspace ends right away because there is already an instance running
  • This has been working in production since 2012 (we are now on 2017)
  • This does not prevent the scheduler from pushing the job

Here's the python code:

import fmeobjects
import os


class SynchroSchedule(object):
    def __init__(self):
        # Path to the lock file, passed in as a published parameter
        self.lockFilePath = FME_MacroValues['lockFile']
        self.output = False

        # Try to remove any stale lock file; while another engine still
        # holds the file open this remove fails, leaving the lock in place
        try:
            os.remove(self.lockFilePath)
        except Exception as e:
            fmeobjects.FMELogFile().logMessageString("Not able to remove the file: %s" % str(e))

        # Create the lock file atomically; O_EXCL makes this fail
        # if the file already exists (i.e. another instance holds the lock)
        try:
            os.open(self.lockFilePath, os.O_CREAT | os.O_EXCL)
            self.output = True
        except Exception as e:
            fmeobjects.FMELogFile().logMessageString("There is already an instance running at this moment... %s" % str(e), fmeobjects.FME_WARN)

    def input(self, feature):
        # Only pass the feature on if we acquired the lock
        if self.output:
            self.pyoutput(feature)
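For anyone who wants to try the locking trick outside FME first, here is the same `os.O_CREAT | os.O_EXCL` pattern extracted into a standalone sketch (the temp-directory path and function name are mine, not part of the workspace above). `O_EXCL` makes the file creation atomic: it fails if the lock file already exists, so only one caller can acquire the lock.

```python
import os
import tempfile

lock_path = os.path.join(tempfile.gettempdir(), "fme_demo.lock")

# Clear any stale lock left over from a previous demo run
try:
    os.remove(lock_path)
except FileNotFoundError:
    pass


def try_lock(path):
    """Return a file descriptor if we acquired the lock, or None if it is held."""
    try:
        return os.open(path, os.O_CREAT | os.O_EXCL)
    except FileExistsError:
        return None


fd1 = try_lock(lock_path)   # first caller acquires the lock
fd2 = try_lock(lock_path)   # second caller is refused: the file already exists

# Clean up: in the FME setup the engine restart does this implicitly
if fd1 is not None:
    os.close(fd1)
    os.remove(lock_path)
```

In the workspace version the file descriptor is deliberately never closed, so the lock is only released when the engine is recycled after the job.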

