Question

ArcGIS File GDB Writer Slow

  • 24 January 2022
  • 5 replies
  • 19 views

I made a simple workflow to compare two datasets using the ChangeDetector and apply the changes to one of those datasets using a writer. The process works fine, but it takes roughly a second per updated record. Is that a normal speed? It's alright for small numbers of changes, but this dataset can have thousands of changes from day to day, which would make the workflow run for over an hour, and that's only one dataset when we need to update dozens of them every night.

 

I've looked at the performance tuning article, and it either doesn't seem applicable or I don't know enough about the software to know how to check. My data isn't versioned, so I'm using a Transactions writer; I saw another question with this problem, but the answer there was to not use the versioning writer. The writer is set to fme_db_operation (so that the Updated, Inserted, and Deleted ports of the ChangeDetector will work) with Use Existing as the table handling. If there's other information I can provide that would help, let me know.

 

Any ideas on how to make it faster, or is a second per updated record a normal speed?


5 replies

Badge +2

@whoffman​ I'm surprised this is so slow on a File Geodatabase writer. I don't think a File Geodatabase can be versioned, so it can't be that. Look at the log file and see where the time is going; it could be in the ChangeDetector. Is your File Geodatabase on a network drive? That can really hurt performance when working with File Geodatabases.

Badge +2

@whoffman​ For incremental updates using fme_db_operation, performance improved 4 or 5 times when the writer feature type's Match Columns key attribute is indexed in the File Geodatabase. In ArcCatalog, use the Feature Class Properties dialog to create an index on the field that will be used for Match Columns in FME.
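If you'd rather script the index than click through ArcCatalog, here's a minimal arcpy sketch; the geodatabase path, feature class, and field name below are placeholders for whatever your Match Columns key actually is:

```python
import arcpy

# Placeholders -- substitute your own geodatabase, feature class,
# and the field used as Match Columns in the FME writer.
gdb = r"C:\data\mydata.gdb"
fc = "roads"
match_field = "ROAD_ID"

# Add an attribute index on the Match Columns key so that
# fme_db_operation updates can look up rows without a full table scan.
arcpy.management.AddIndex(f"{gdb}\\{fc}", match_field, f"idx_{match_field}")
```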

Thanks for the tip, @Mark Stoakes​. I am having a similar issue with file geodatabases, change detection, and fme_db_operation. Each record change takes about a second to process, and while it is happening the computer's memory isn't even close to maxed out. In my case the FME workspace and geodatabase are located directly on the C:\ drive of the computer. I will try the attribute index that you suggested and report back.

Thanks again for the note about the key attribute index @Mark Stoakes​ . I set this up on our destination file geodatabase feature class and an update process that originally took 22 hours now finishes in 30 minutes. Huge win there!!

Badge +1


"Nothing should take longer than a cup of coffee!" If it does, finish your coffee, interrupt the process, and find a better way. Indexes are critical in file geodatabases. My motto is Index Everything. Especially spatial indexes. Even if there was one, repeat it after editing so that a new clean index is rebuilt. I index all fields that have an _ID suffix and anything that has a code or is an integer. It doesn't take long and it saves a heap of time later. Use a script to automate this. There is also a Batch option to enable easy spatial reindexing of every featureclass in a file geodatabase. Use with a bit more caution on a real relational database such as Oracle, SQL Server, PostGIS but it is worth a sweep.

 

If an index doesn't fix it, that's when you need the 'better way'. Spatial queries are not well optimised and do not scale, so it is up to us programmers to use a different tool or approach. Partitioning is a good technique (think Group By), or simply switch to a different transformer.
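As a plain-Python illustration of that partitioning idea (the feature tuples and cell size are invented for the example), grouping features by a grid key means each feature is only compared against candidates in the same cell instead of the entire dataset, which is essentially what a Group By achieves:

```python
from collections import defaultdict

CELL = 1000.0  # grid cell size in map units; tune to your data density

def grid_key(x, y):
    """Return the grid cell that contains the coordinate."""
    return (int(x // CELL), int(y // CELL))

def partition(features):
    """Group (id, x, y) tuples by grid cell so comparisons stay local."""
    cells = defaultdict(list)
    for feat_id, x, y in features:
        cells[grid_key(x, y)].append((feat_id, x, y))
    return cells

# Invented sample data: two snapshots of the same features.
old = [(1, 10.0, 20.0), (2, 1500.0, 40.0)]
new = [(1, 10.0, 20.0), (2, 1510.0, 45.0)]

old_cells, new_cells = partition(old), partition(new)
for key, group in new_cells.items():
    # Change detection now only compares within one small cell.
    changed = set(group) - set(old_cells.get(key, []))
    for feat in changed:
        print("changed or moved:", feat)
```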
