
Hi,


FME noob here (I worked with FME about 10 years ago). As I've mentioned before, I've inherited a project that includes administering around 30 FME workbenches, which are scheduled to run via the Jenkins task server.


I understand the reasoning behind creating a workbench for each (Esri) feature class to be synced to AGOL, but I'm thinking of combining them into one or a few workbenches. The issue is that once one transformer fails, the rest of the workbench is not executed.


So I'm wondering if there is any transformer, or perhaps a wrapper, that would allow me to catch errors but still continue the workbench execution?


Thank you

Hi @vajnorcan, unfortunately I don't think there is any functionality similar to the try-catch concept in the current version of FME.

In some cases, you can split the workspace into a parent and a child, and call the child from the parent using a WorkspaceRunner (FMEServerJobSubmitter on FME Server) with the "Wait for Job to Complete" option enabled, so the parent can catch errors raised by the child. However, I don't know whether this approach can be applied to your case.
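The parent/child idea can be sketched in plain Python. This is a minimal, hypothetical sketch of the control flow only: `run_child_workspace` is a stand-in for whatever actually submits the child job (the WorkspaceRunner transformer, or a scripted equivalent), not a real FME call.

```python
def run_child_workspace(workspace):
    """Stand-in for submitting a child workspace; raises on failure."""
    if "broken" in workspace:
        raise RuntimeError(f"{workspace} failed")

def run_all(workspaces):
    """Run every child workspace, collecting failures instead of aborting."""
    failures = {}
    for ws in workspaces:
        try:
            run_child_workspace(ws)
        except Exception as exc:  # the "catch" half of the try-catch pattern
            failures[ws] = str(exc)
    return failures

# One failing child doesn't stop the remaining children from running.
failed = run_all(["roads.fmw", "broken_parcels.fmw", "hydrants.fmw"])
print(failed)  # only the failing job is reported
```

The key point is that the parent owns the loop, so a failure is recorded and the iteration continues, which is exactly what a single flat workbench cannot do today.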

Please vote up this Idea and add your feedback: ErrorCatcher Transformer


Hi Takashi,

thank you for your response. FME Server is not yet on the table, but it appears it would be exactly what I need. I will definitely vote for the ErrorCatcher transformer idea.


We've been trying to implement this via a Rejected port, but the FeatureWriter doesn't have one yet. That would be one way of continuing (once we add it). Until then, Takashi's suggestion is the best we've got.


I would love to have a callback function that you could define in the Python startup script that would act as a catch-all whenever any error or Rejected-port is triggered. For example:

import fmeobjects

def CatchAnyError(feature, errormessage):
    # Do something meaningful here
    pass

fmeobjects.FMESession().setErrorCallBack(CatchAnyError)

@vajnorcan It often makes sense to consolidate your workflows into a single workbench. It can make maintenance easier, for example when you have to update the feature types because of data model changes.

However, this doesn't mean you are committed to reading and writing all the data in one job. You can use the "Feature Types to Read" parameter to select an individual feature type, or two or more interconnected ones. The same workspace can then run multiple times, converting different feature types on each run. If a specific feature type fails, your other jobs will still have run, but you still only have one workspace to look after and maintain.
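From Jenkins, that pattern could look like one scheduled step per feature type, each invoking the same workspace with a different "Feature Types to Read" override. The sketch below only builds the command lines; the fme.exe path, the workspace name, and the `--FEATURE_TYPES` parameter name are all assumptions to check against your own published parameters.

```python
FME = r"C:\Program Files\FME\fme.exe"   # assumed install path
WORKSPACE = "sync_to_agol.fmw"          # hypothetical workspace name

def build_commands(feature_types):
    """One command line per feature type, so each runs as its own job."""
    return [
        [FME, WORKSPACE, "--FEATURE_TYPES", ft]
        for ft in feature_types
    ]

commands = build_commands(["roads", "parcels", "hydrants"])
for cmd in commands:
    print(" ".join(cmd))
# In Jenkins, each command would be its own build step (or a loop calling
# subprocess.run(cmd)); a nonzero exit from one run doesn't stop the others.
```

Because each feature type is a separate OS process, Jenkins reports pass/fail per job, which gives you the "keep going after a failure" behaviour without any extra transformers.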

