
I have an automation set up that monitors a directory for XML files on create and modify. Once the automation is triggered, it passes the file to a workbench I published. The workbench reads the XML file and the ArcGIS SDE feature class and uses a FeatureMerger to check whether the unique key already exists (if the key doesn't exist, the XML record gets inserted into the ArcGIS SDE feature class; if the key does exist, the record gets updated through a ChangeDetector).
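
In simplified form, the insert-or-update decision looks roughly like the sketch below (written here with arcpy cursors rather than the FME transformers; the feature class path, key field, and record layout are all placeholders):

```python
# Simplified sketch of the insert-or-update ("upsert") logic, expressed with
# arcpy cursors. FC_PATH, KEY_FIELD, and the record layout are placeholders.
import arcpy

FC_PATH = r"C:\connections\prod.sde\my_feature_class"  # placeholder SDE path
KEY_FIELD = "UNIQUE_KEY"                               # placeholder key field
FIELDS = [KEY_FIELD, "ATTR_A", "ATTR_B"]               # placeholder fields

def upsert(record):
    """Update the row whose key matches the record, or insert a new row."""
    where = "{0} = '{1}'".format(KEY_FIELD, record[KEY_FIELD])
    updated = False
    # Look for an existing row with this key and update it in place.
    with arcpy.da.UpdateCursor(FC_PATH, FIELDS, where) as cursor:
        for row in cursor:
            cursor.updateRow([record[f] for f in FIELDS])
            updated = True
    # No match: the key is new, so insert the record.
    if not updated:
        with arcpy.da.InsertCursor(FC_PATH, FIELDS) as cursor:
            cursor.insertRow([record[f] for f in FIELDS])
```

This check-then-write pattern is only safe when a single job runs at a time; two overlapping jobs can both see the key as missing and both insert.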


I ran into a problem where the FME Server job must have been triggered twice, with the second job created 7 seconds after the first. On the first (create) trigger the record was inserted properly, but on the second (modify) trigger the workbench created a duplicate record instead of updating the existing one.


I suspect this happened because the first job started but didn't finish before the second job began. Is there a workflow I can follow to prevent this from happening?


I would like the automation to run first in, first out (the first record in is the first one processed), but I do not know how to accomplish it.

Can you check the logs for the automation (so not the jobs triggered by the automation, but the log of the automation itself) to see what's going on?


I receive an email each time it is processed, which is why I know the runs happened 7 seconds apart from one another.



Fair enough, but the automation log will actually tell you when and especially why the Directory Watch was triggered.



Yes, and this is passed through to my email as well, along with the file. I know an insert was triggered both for the initial create and for the following modify that happened 7 seconds after the create.


If you have more than one engine in the Server install, then the second job may be started by a different engine before the first engine finishes the first job. One way around this may be to create a specific repository for this workflow and assign a specific engine to it, so only one engine processes jobs triggered by this process. You can do this by creating a queue in the Engines section of FME Server.

This may not be ideal if you have a large volume of jobs coming through this process, but it would eliminate the overlap of jobs by having the one engine process all jobs for this workflow one at a time.
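
If a dedicated queue isn't an option, a cruder, non-FME-specific alternative is to have the workspace guard itself with a lock file (for example from its startup and shutdown Python scripts), so a second run waits until the first releases the lock. A minimal sketch, where the lock path, timeout, and poll interval are all assumptions:

```python
# Generic lock-file guard: the second run waits until the first run releases
# the lock. LOCK_PATH, the timeout, and the poll interval are assumptions.
import os
import time

LOCK_PATH = r"C:\temp\xml_loader.lock"  # placeholder shared location

def acquire_lock(timeout=300, poll=2):
    """Create the lock file atomically; wait while another run holds it."""
    deadline = time.time() + timeout
    while True:
        try:
            # O_CREAT | O_EXCL fails if the file already exists, so only
            # one run at a time can create the lock.
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return
        except FileExistsError:
            if time.time() > deadline:
                raise TimeoutError("gave up waiting for the previous run")
            time.sleep(poll)

def release_lock():
    """Delete the lock file so the next queued run can proceed."""
    try:
        os.remove(LOCK_PATH)
    except FileNotFoundError:
        pass
```

The caveat is that a job that dies without releasing the lock leaves a stale file to clean up by hand, which is why the single-engine queue is the tidier option.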

