Question

Best way of synchronizing jobs between 2 servers?


anteboy65
Contributor

We are slowly moving from an old platform to a newer one, in this case from FME 2018 to FME 2023. We can't move all our scripts at the same time, so we need a solution that can be handled over a period of time.

We have a lot of flows, each using a start script and a number of subscripts. Each flow depends on specific other flows having ended before the next one starts.

Example:

We have three flows, A, B and C. A must be done before B can start. B must be done before C can start and so on. 

In today's solution we run the flows on the same engine, which ensures that one flow has ended before the next one starts. All our jobs are scheduled, but it doesn't matter if flow A takes a little longer than normal: even if flow B is scheduled to run at 02:00 AM and flow A finishes at 02:25 AM, flow B is queued and starts as soon as flow A ends.

But if we just move flow B to the new server, we have a problem: it can't start before flow A, which runs on the old server, has finished. What is best practice for solving this type of problem?

5 replies

geomancer
Evangelist
  • June 13, 2024

Not an answer to your question, but a suggestion on how to implement running workspaces one after the other.

In FME Flow 2023 you can use Automations to daisy-chain workspaces. This way you can set workspace B to start only after workspace A has finished successfully.


lifalin2016
Contributor
  • June 18, 2024

Apart from using an Automation, as suggested above: if you daisy-chain the workspaces by having A call B and B call C, rather than having a master workspace call A, B, and C, you should be OK using both servers. It requires a bit of finesse, but it is doable.

The FMEServerJobSubmitter transformer takes the target server as a parameter, so a workspace running on 2018 should be able to submit a job on 2023, and vice versa.

If you’re using a master workspace to orchestrate the three jobs, you will need to have it wait for job completion, which is also a parameter of FMEServerJobSubmitter. You can still run A, B, and C on whichever server each of them resides on.
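For anyone who would rather script this than use the transformer, roughly the same pattern can be built against the FME Server REST API. The sketch below (Python, using the requests library) submits flow A on the old server, polls until it finishes, and only then submits flow B on the new server. The host names, tokens, repository and workspace names are placeholders, and the exact job-status values are assumptions to verify against your own servers.

```python
import time
import requests

# Hypothetical hosts, tokens, repository and workspace names -- replace with your own.
OLD_SERVER = "https://fme2018.example.com"   # FME Server 2018
NEW_SERVER = "https://fme2023.example.com"   # FME Flow 2023
TOKEN_2018 = "<token-for-2018>"
TOKEN_2023 = "<token-for-2023>"

def auth_headers(token):
    return {"Authorization": f"fmetoken token={token}",
            "Accept": "application/json",
            "Content-Type": "application/json"}

def submit_job(server, token, repository, workspace):
    """Queue a job asynchronously and return its job id."""
    url = f"{server}/fmerest/v3/transformations/submit/{repository}/{workspace}"
    resp = requests.post(url, json={"publishedParameters": []}, headers=auth_headers(token))
    resp.raise_for_status()
    return resp.json()["id"]

def wait_for_job(server, token, job_id, poll_seconds=30):
    """Poll a job until it leaves the queued/running states, then return its status."""
    url = f"{server}/fmerest/v3/transformations/jobs/id/{job_id}"
    while True:
        status = requests.get(url, headers=auth_headers(token)).json()["status"]
        # Status names below are assumptions based on the v3 API; check your server's values.
        if status not in ("SUBMITTED", "QUEUED", "PULLED"):
            return status                      # e.g. SUCCESS or FME_FAILURE
        time.sleep(poll_seconds)

# Flow A on the old server must finish before flow B starts on the new one.
job_a = submit_job(OLD_SERVER, TOKEN_2018, "Production", "flow_A.fmw")
if wait_for_job(OLD_SERVER, TOKEN_2018, job_a) == "SUCCESS":
    job_b = submit_job(NEW_SERVER, TOKEN_2023, "Production", "flow_B.fmw")
    wait_for_job(NEW_SERVER, TOKEN_2023, job_b)
```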


anteboy65
Contributor
  • Author
  • June 24, 2024

Thanks for the input. The problem here is that the jobs start in FME 2018 and the jobs later in the chain run in FME 2023. That means I can't daisy-chain them from FME 2023, and the functionality needed does not exist in FME 2018. Also, I would like to touch the FME 2018 scripts as little as possible.


lifalin2016
Contributor
  • June 25, 2024
anteboy65 wrote:

Thanks for the input. The problem here is that the jobs start in FME 2018 and the jobs later in the chain run in FME 2023. That means I can't daisy-chain them from FME 2023, and the functionality needed does not exist in FME 2018. Also, I would like to touch the FME 2018 scripts as little as possible.

So you can’t start a job on 2023 from 2018? Will it work the other way around?

If so, a master workspace on 2023 might be able to orchestrate the flow of jobs on both servers. Just wait for completion whenever necessary.


anteboy65
Contributor
  • Author
  • June 27, 2024

I was a little unclear. What I meant was that I cannot use Automation functionality in FME 2018 in the same way as in FME 2023. However, I can start a job on FME 2023 from 2018 with the FMEServerJobSubmitter transformer.

After some experimentation, I think the best way to get full control is to move all the startup scripts to FME 2023 but have the subscripts execute on 2018. That way I don't have to convert all the subscripts at once, but I still have control over the runs via Automations in FME 2023, where I can daisy-chain a number of jobs and start them on a schedule.
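If it helps anyone with a similar setup: the "run the 2018 subscript and wait" behaviour that FMEServerJobSubmitter provides can also be approximated with the synchronous transact endpoint of the REST API, which only returns once the job has ended. The sketch below is just an illustration of that pattern; the host, token, repository and subscript names are made up, and the fields read from the response are assumptions to verify against your own server.

```python
import requests

OLD_SERVER = "https://fme2018.example.com"   # hypothetical FME Server 2018 host
TOKEN_2018 = "<token-for-2018>"

def run_and_wait(server, token, repository, workspace):
    """Run a workspace synchronously; the request returns only after the job has ended."""
    url = f"{server}/fmerest/v3/transformations/transact/{repository}/{workspace}"
    headers = {"Authorization": f"fmetoken token={token}",
               "Accept": "application/json",
               "Content-Type": "application/json"}
    resp = requests.post(url, json={"publishedParameters": []}, headers=headers)
    resp.raise_for_status()
    return resp.json()

# A startup workspace (or Automation) on FME Flow 2023 can run the legacy
# subscripts on 2018 one after the other, without converting them yet.
for subscript in ("sub_A.fmw", "sub_B.fmw", "sub_C.fmw"):
    result = run_and_wait(OLD_SERVER, TOKEN_2018, "Legacy", subscript)
    if result.get("status") != "SUCCESS":
        raise RuntimeError(f"{subscript} failed: {result.get('statusMessage')}")
```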



