
I have a process that uses a ChangeDetector transformer to compare address data between two databases and update the second database based on differences in the dataset. Normally it works great, but the other night the update ran and left me with an extra 85,000 records that turned out to be duplicates of existing records. Both databases are SQL Server Esri geodatabases on the same SQL Server instance.

What I think happened is that the database instance failed over while the FME process was running. The session was disconnected temporarily, and on reconnecting FME appears to have started reading records again from the beginning, while still keeping the records it had already read. Is there something I can do or set so that a failover in the middle of a running process doesn't cause this?

Could it be as simple as putting in a DuplicateFilter and terminating the process when a duplicate feature is detected?

This isn't failover-specific, but it would at least prevent duplicates in all cases.
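If you want the check and the abort in one place, a PythonCaller can do both. Here's a minimal sketch, assuming an attribute named ADDRESS_ID uniquely identifies each record (substitute your real key attribute); raising fmeobjects.FMEException aborts the translation before any duplicate reaches the writer:

```python
import fmeobjects

class DuplicateGuard(object):
    def __init__(self):
        # Keys of every feature seen so far in this translation.
        self.seen = set()

    def input(self, feature):
        # ADDRESS_ID is a placeholder; use whatever attribute is unique per record.
        key = feature.getAttribute('ADDRESS_ID')
        if key in self.seen:
            # Abort the whole translation so nothing gets written twice.
            raise fmeobjects.FMEException(
                'Duplicate key %s encountered; possible re-read after failover' % key)
        self.seen.add(key)
        self.pyoutput(feature)
```

The no-code equivalent is to route the DuplicateFilter's DUPLICATE port into a Terminator, which likewise stops the translation on the first duplicate.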


I would expect FME to fail completely if a database connection dropped while reading. Since it apparently kept running instead, consider sending the workspace and the complete log to Safe support for analysis.

