Good evening,

I am wondering what the community thinks about this situation:

FME Form 2024.1.3.0 (20240927 - Build 24627)
Writing to Esri Enterprise Geodatabase (SDE) 10.9.1 - MS SQL Server 2022

I have almost 700,000 records that need to be written to a single feature class in our SDE.
Currently I have 1 Writer using the Esri Geodatabase (ArcSDE Geodb) format.

The 1st attempt to write the data failed after 22 hours because I had a lock on the destination feature dataset.  Doh!

On the 2nd attempt to write the data to the SDE, the job had been running for 12 hours when I cancelled it. Below are snippets from the log file showing how many records had been written and how much time had passed. The pattern is clear: each successive batch of records takes progressively longer to write. I am thinking of breaking the data into chunks of 100,000 records, with the first chunk truncating the table before loading and the subsequent chunks appending to it (a rough sketch of this batching idea follows).
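
For illustration, here is a minimal Python sketch of driving that chunked load from the FME command line. The workspace path and the published parameters CHUNK_START, CHUNK_SIZE and TABLE_HANDLING are all hypothetical; they stand in for whatever parameters your workspace actually exposes.

import subprocess

WORKSPACE = r"C:\fme\load_sde.fmw"  # hypothetical workspace path
CHUNK = 100_000
TOTAL = 700_000

for start in range(0, TOTAL, CHUNK):
    # First chunk truncates the target table; later chunks append.
    handling = "TRUNCATE_EXISTING" if start == 0 else "USE_EXISTING"
    subprocess.run(
        ["fme", WORKSPACE,                 # assumes fme.exe is on PATH
         "--CHUNK_START", str(start),
         "--CHUNK_SIZE", str(CHUNK),
         "--TABLE_HANDLING", handling],
        check=True,  # abort the whole load if any chunk fails
    )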

Is this performance hit expected?
Are there ways to mitigate this?

Over a year ago I was able to run this job, and I think it took about 3-4 hours. I don’t know what changed, and I didn’t investigate when the issues first started.

Thank you in advance for your thoughts.
 

LOG FILE: 

Timestamp | CPU | Elapsed | Severity | Message

2025-10-31 11:15:33|   0.0|  0.0|INFORM|System Status: 16.00 GB of physical memory available
2025-10-31 11:15:33|   0.0|  0.0|INFORM|System Status: 64.00 GB of virtual memory available
2025-10-31 11:15:33|   0.0|  0.0|INFORM|START - ProcessID: 1724, peak process memory usage: 42180 kB, current process memory usage: 42180 kB

2025-10-31 11:15:47|   7.0|  4.4|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 500

2025-10-31 11:18:29|  86.4|  4.1|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 10000

2025-10-31 11:22:02| 172.9|  4.3|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 20000

2025-10-31 11:26:12| 261.4|  4.5|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 30000

2025-10-31 11:30:43| 350.3|  4.4|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 40000

2025-10-31 11:35:40| 441.7|  4.8|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 50000

2025-10-31 12:06:17| 873.0|  4.3|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 100000

2025-10-31 13:40:47|1741.0|  4.1|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 200000

2025-10-31 16:17:27|2613.8|  4.4|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 300000

2025-10-31 19:26:02|3480.8|  4.4|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 400000

2025-10-31 22:58:55|4345.3|  4.4|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 500000
 

Summary:
First 50,000 records = 20 min
Next 50,000 records = 31 min
Next 100,000 records = 1 hour 34 min
Next 100,000 records = 2 hours 37 min
Next 100,000 records = 3 hours 9 min
Next 100,000 records = 3 hours 32 min

Total: 500,000 records in 11 hours 43 min, at which point I cancelled the job because other jobs needed to run.
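
The per-batch times in the summary come straight from the log timestamps; here is a quick Python sketch of that arithmetic, using the checkpoints quoted above:

from datetime import datetime

# Checkpoints copied from the log above (feature count, timestamp);
# feature 0 uses the job start time.
checkpoints = [
    (0,      "2025-10-31 11:15:33"),
    (50000,  "2025-10-31 11:35:40"),
    (100000, "2025-10-31 12:06:17"),
    (200000, "2025-10-31 13:40:47"),
    (300000, "2025-10-31 16:17:27"),
    (400000, "2025-10-31 19:26:02"),
    (500000, "2025-10-31 22:58:55"),
]

parsed = [(n, datetime.strptime(t, "%Y-%m-%d %H:%M:%S")) for n, t in checkpoints]
for (n0, t0), (n1, t1) in zip(parsed, parsed[1:]):
    minutes = (t1 - t0).total_seconds() / 60
    print(f"{n0:>6} -> {n1:>6}: {minutes:4.0f} min")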

 

Update:
I deleted the regular Writer and replaced it with a FeatureWriter, and now the data is written in 11 minutes!

Why is there such a difference?

From what I can see, both Writers have the same settings:

  • Transaction Type: Transactions
  • Feature Dataset Handling: Write Feature Dataset

Oh wait… I see that the regular Writer had its Transaction Type set to “Non-versioned Edit Session”. I will have to test running the regular Writer with the Transaction Type set to Transactions to see what difference that makes.

Writing the entire table to a versioned geodatabase will definitely be slow: versioning introduces significant overhead, such as maintaining the existing indexes, the additional delta tables that track each version’s edits, reconciliation, etc.

https://support.safe.com/hc/en-us/articles/25407680189453-Improving-Performance-when-working-with-Esri-Geodatabases

If you must work with versioned geodatabases, you may want to implement a Change Detection workflow and only write the inserts/updates/deletes via the fme_db_operation format attribute. This article gives some tips on adding additional SQL: https://support.safe.com/hc/en-us/articles/25407411133581-Editing-Versioned-Geodatabases-using-SQL
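
As a rough illustration of the fme_db_operation idea, a PythonCaller could stamp the operation onto each feature before the writer. The change_type attribute here is hypothetical (in a real workspace a ChangeDetector’s output ports tell you this, and an AttributeCreator on each port achieves the same thing without code):

import fmeobjects

class FeatureProcessor(object):
    def input(self, feature):
        # Map a hypothetical change_type attribute onto the
        # fme_db_operation format attribute that the writer honours.
        operation = {
            "INSERTED": "INSERT",
            "UPDATED": "UPDATE",
            "DELETED": "DELETE",
        }.get(feature.getAttribute("change_type"))
        if operation:
            feature.setAttribute("fme_db_operation", operation)
            self.pyoutput(feature)  # unchanged features are not written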

If you use a Non-versioned Edit Session for the transactions, you can set Table Handling to Truncate Existing. In contrast to versioned writes, non-versioned writes are direct and do not require the extra steps described above, so they are much faster. You can also use Drop and Create, which rebuilds the table and its database indexes from scratch and therefore runs very quickly in comparison.

This article goes into a lot of detail on your options when writing to databases: https://support.safe.com/hc/en-us/sections/25523790342285-Database-Operations