Hello,

I am using an FME Server 2014 SP1 installation with two engines and the corresponding FME Desktop version. We designed a process that should trigger other workspaces in a specified order, and the complete process should run on only one engine. According to the tutorial this is possible, and I could confirm it (see point #4 in the list below).

To do so, we use the FMEServerJobSubmitter transformer. Our process is very similar to the following tutorial (at least with regard to the technical aspects): https://knowledge.safe.com/articles/1413/fme-serve...

Since we are running different services, we need a job routing configuration, which we have set up in FMEServerConfig.txt:

TM_DEFAULT_TAG=default 
TM_QUEUE_TYPE=DEFAULT
# Assign tag
TM_REPOSITORY_2=internalProcesses:internalProcessesTag
TM_ENGINE_1=Host_Engine1:default
TM_ENGINE_2=Host_Engine2:internalProcessesTag

Using this config, every workspace in the "internalProcesses" repository gets the tag "internalProcessesTag", and the engine "Host_Engine2" handles every job carrying that tag. All other jobs get the "default" tag and are served by "Host_Engine1", so Host_Engine2 is completely reserved for the internal processes.
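
Just for reference: a tag can also be attached to an individual job at submission time through the FME Server REST API, independently of the repository rule above. The sketch below only illustrates that idea; the v3 endpoint and the TMDirectives field are taken from newer FME Server releases and may differ on our 2014 installation, and the host, token, repository and workspace names are placeholders.

import requests

# Placeholder values - adjust host, token, repository and workspace to your setup.
FME_HOST = "http://fmeserver.example.com"
TOKEN = "my-fme-token"

url = FME_HOST + "/fmerest/v3/transformations/submit/internalProcesses/load_data.fmw"
headers = {
    "Authorization": "fmetoken token=" + TOKEN,
    "Accept": "application/json",
}
body = {
    # Route this particular job to the engines serving the given tag/queue.
    "TMDirectives": {"tag": "internalProcessesTag", "priority": 100},
    "publishedParameters": [],
}

response = requests.post(url, json=body, headers=headers)
print(response.status_code, response.text)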

To ensure that the problem is not caused by our custom workspaces, I followed the above-mentioned tutorial and published the workspaces it defines to my FME Server. Working with this configuration, I have observed the following behaviour and problems:

  1. Starting the process from FME Workbench, everything is fine. All triggered child processes (load_data.fmw, process_data.fmw, rasterize_data.fmw) are allocated to Host_Engine2. The parent workspace (chain_jobs.fms) is not shown in the jobs history on my FME Server.
  2. Triggering the job via the FME Server web application, the controller workspace (job_chain.fmw) is started on the right engine, but the first child process (load_data.fmw) stays in the queue until I cancel the process manually. None of the following jobs are triggered.
  3. Disabling the job routing configuration, the process runs without problems, but it is not confined to a single engine; jobs also run on the second engine, which is not acceptable.
  4. Disabling one engine as well as the job routing configuration, no problems occur, but then only one engine is available, which is of course not acceptable either.
  5. Changing the job routing configuration as shown below, where I have disabled the line for the first engine (which should be responsible for all jobs except those with the internalProcessesTag tag, since it used the "default" tag), my job runs successfully, but it runs on both engines, although the jobs from the internalProcesses repository should run on Host_Engine2.
TM_DEFAULT_TAG=default 
TM_QUEUE_TYPE=DEFAULT
# Assign tag
TM_REPOSITORY_1=internalProcesses:internalProcessesTag
# TM_ENGINE_1=Host_Engine1:default
TM_ENGINE_2=Host_Engine2:internalProcessesTag

So... are there any known limitations regarding the combination of the FMEServerJobSubmitter and an active job routing configuration?

Hello @schlomm

The answer to your final question is yes, there is a limitation regarding the FMEServerJobSubmitter and job routing. The answer comes from our training manual at:

https://s3.amazonaws.com/gitbook/Server-Authoring-2016/ServerAuthoring2RunningWorkspaces/2.09.AuthoringJobChains.html

The relevant information concerns where the master (control) workspace is run...

Interestingly [...] the initial/control workspace can be run on either FME Desktop (e.g. Workbench) or FME Server. The FMEServerJobSubmitter works on both platforms.

However, there's a difference. On FME Desktop the control workspace runs immediately, but each child job executed by an FMEServerJobSubmitter transformer is submitted to the FME Server queue and may have to wait for an engine. On FME Server - if you have Wait for Job to Complete = Yes - it's the reverse: the control workspace is submitted to the queue, but each child job executed by an FMEServerJobSubmitter bypasses the queue and runs immediately.

This means that on Desktop the child processes are affected by the FMEServerJobSubmitter Job Priority and Job Tag parameters. But on Server (when Wait for Job = Yes) those parameters are ignored because the child processes are run immediately and not queued. In short, those FMEServerJobSubmitter parameters only apply when the call comes from FME Desktop, because only then are the jobs queued.
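
If you want to confirm which of your jobs are actually being queued (and which engine picks them up), you can poll the job lists through the REST API while the chain runs. A rough sketch, assuming the v3 jobs endpoints from newer FME Server releases (they may not match 2014 exactly) and placeholder host and token values:

import requests

FME_HOST = "http://fmeserver.example.com"  # placeholder
TOKEN = "my-fme-token"                      # placeholder

headers = {"Authorization": "fmetoken token=" + TOKEN, "Accept": "application/json"}

# Child jobs submitted from Desktop should show up under "queued";
# child jobs run by a Server-side control workspace should not.
for state in ("queued", "running"):
    url = FME_HOST + "/fmerest/v3/transformations/jobs/" + state
    response = requests.get(url, headers=headers)
    print(state.upper())
    # Field names vary between versions, so just dump the raw items.
    for item in response.json().get("items", []):
        print(" ", item)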

That seems to me to be the limitation you are describing (in points 1 and 2). I have to say that I don't know of a way to work around it. For that you would be best to get in touch with our support team and ask them. You can contact them through http://safe.com/support

I hope this information is useful, even if it probably hasn't moved you much further forward in terms of a solution.

Regards

Mark

Mark Ireland
Product Evangelist
Safe Software Inc.

Hello @mark2catsafe,

Thanks for your answer. This is helpful information! Unfortunately it does not solve my problem, so I will get in contact with our local support. Two last questions:

  1. It seems that the workflow I am aiming for is in principle possible, isn't it? If I disable one engine, all processes (including the control workspace and all child processes) run completely without any problems (cf. the list in the first post, point #4).
  2. Is there any difference with FME Server 2015 or even 2016?

Thanks a lot,

Dominik


Hi @schlomm, @mark2catsafe,

Thanks for the post. This is somewhat under-documented; as far as I can see, it is only documented in fmeServerConfig.txt itself.

Regards.

Helmoet.


Regarding #2, I think you may have an issue.

If you have a parent/controller workspace on FME Server (in 2014) that uses the FMEServerJobSubmitter to call child processes, this runs as one process (not many), so you don't even need to deal with job routing.

However, you mention that the first called job sits in the queue and never starts (basically because the first job is waiting for a response and never gets it, since the called job stays queued). This makes me think that the URL you are calling is not what FME Server thinks it is called, so it treats the call as going to an external FME Server. If you set your server up as < servername > but the URL is something different (e.g. www.gofme.com), it will submit a new job; if they are both the same, it will run under the same job.

If that is the case, change the FMEServerJobSubmitter to call < servername > and try again.
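
If you are not sure whether the URL configured in the FMEServerJobSubmitter actually points at the same machine FME Server was set up with, a quick check along these lines can help (plain Python, no FME API involved; the two URLs below are placeholders for your own values):

import socket
from urllib.parse import urlparse

# Placeholders: the host FME Server was set up with, and the URL the
# FMEServerJobSubmitter is configured to call.
configured_server = "http://servername"
submitter_url = "http://www.gofme.com"

def resolve(url):
    """Return the host part of the URL and the IP address it resolves to."""
    host = urlparse(url).hostname
    return host, socket.gethostbyname(host)

server_host, server_ip = resolve(configured_server)
submitter_host, submitter_ip = resolve(submitter_url)

print("FME Server host:", server_host, "->", server_ip)
print("Submitter host: ", submitter_host, "->", submitter_ip)

if server_ip == submitter_ip:
    print("Both names resolve to the same machine.")
else:
    print("Different machines or names - FME Server may treat the call as external.")

Keep in mind this only checks name resolution; FME Server compares the configured name itself, so a differing host name can still cause a separate job submission even when both names point at the same machine.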

