The following workflow, which compares tables in two separate databases and updates an existing (non-spatial SQL) database, is taking forever. It has been running for 12 hours now, and in that time only 5424 records have been compared. I am expecting about 20,000 records to be updated weekly based on the UpdateDetector comparison. The ChangeDetector transformer is still in the workbench because I tried that approach as well, but it failed. How can I make the workflow run faster?

Is the target table indexed on the key value that you are using in the fme_where?

That could speed up the writing.
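As a minimal sketch of that index on the SQL side (the table and column names here are hypothetical; substitute your actual target table and the key attribute referenced in the fme_where clause):

    -- Hypothetical names: index the key column that fme_where matches on,
    -- so each update can locate its row without a full table scan.
    CREATE INDEX ix_target_table_key ON target_table (feature_id);

Without an index on that column, each of the ~20,000 updates forces the database to scan the whole table to find the matching record.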

Second, could you group the source records some way?

Comparing 1000 with 1000 twenty times is a lot faster than comparing 20000 with 20000 once.
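As a rough sketch of that batching idea (table and column names are made up), the reader could pull the source in key-range chunks, for example via a WHERE clause, so each pass through the comparison only handles about 1000 records:

    -- Hypothetical names: read and compare one key range at a time
    -- instead of all 20,000 records in a single pass.
    SELECT * FROM source_table WHERE feature_id BETWEEN 1 AND 1000;
    SELECT * FROM source_table WHERE feature_id BETWEEN 1001 AND 2000;
    -- ...and so on until the full key range is covered.

Each chunk then only has to be matched against the corresponding chunk of the target, which keeps the comparison sets small.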

Hope this helps point you in the right direction.


+1 for indexing the table key attribute; it's super important and can make a huge difference.
Agree that an appropriate index on the target table is most likely to be helpful, but it would be interesting to know how fast the translation runs if you disable the writer. In other words, how fast are the reading and change detection on their own?

 

