Good evening,
I am wondering what the community thinks about this situation:
FME Form 2024.1.3.0 (20240927 - Build 24627)
Writing to Esri Enterprise Geodatabase (SDE) 10.9.1 - MS SQL Server 2022
I have almost 700,000 records that need to be written out to our SDE into 1 feature class.
Currently I have 1 Writer using the Esri Geodatabase (ArcSDE Geodb) format.
The 1st attempt to write the data failed after 22 hours because I had a lock on the destination feature dataset. Doh!
On the 2nd attempt to write the data to the SDE, the job had been running for 12 hours when I cancelled it. Below are snippets from the log file showing how many records had been written and how much time had passed. The pattern is clear: as the job progresses, each batch of records takes progressively longer to write. I am thinking of breaking the data into chunks of 100,000 records, with the first chunk truncating the table before loading and the subsequent chunks appending to it.
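For what it's worth, the chunking logic I have in mind looks roughly like this (a plain-Python sketch; `truncate` and `append` are placeholders for whatever the actual writer would do, e.g. a truncate-then-load first run followed by append runs):

```python
def chunked(records, size=100_000):
    """Yield successive chunks of at most `size` records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

def load_in_chunks(records, truncate, append, size=100_000):
    """Truncate the target table once, then append each chunk.

    `truncate` and `append` are hypothetical callables standing in
    for the real SDE operations; only the batching logic is shown.
    """
    first = True
    for chunk in chunked(records, size):
        if first:
            truncate()   # clear the table before the first chunk
            first = False
        append(chunk)    # all later chunks append to the table
```

The point is simply that only the first run clears the feature class; every later run adds to it.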
Is this performance hit expected?
Are there ways to mitigate this?
Over a year ago I was able to run this job and it took about 3-4 hours. I don't know what changed, and I didn't look into it when the issue first appeared.
Thank you in advance for your thoughts.
LOG FILE:
Timestamp | CPU | Elapsed | Severity | Message
2025-10-31 11:15:33| 0.0| 0.0|INFORM|System Status: 16.00 GB of physical memory available
2025-10-31 11:15:33| 0.0| 0.0|INFORM|System Status: 64.00 GB of virtual memory available
2025-10-31 11:15:33| 0.0| 0.0|INFORM|START - ProcessID: 1724, peak process memory usage: 42180 kB, current process memory usage: 42180 kB
2025-10-31 11:15:47| 7.0| 4.4|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 500
2025-10-31 11:18:29| 86.4| 4.1|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 10000
2025-10-31 11:22:02| 172.9| 4.3|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 20000
2025-10-31 11:26:12| 261.4| 4.5|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 30000
2025-10-31 11:30:43| 350.3| 4.4|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 40000
2025-10-31 11:35:40| 441.7| 4.8|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 50000
2025-10-31 12:06:17| 873.0| 4.3|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 100000
2025-10-31 13:40:47|1741.0| 4.1|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 200000
2025-10-31 16:17:27|2613.8| 4.4|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 300000
2025-10-31 19:26:02|3480.8| 4.4|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 400000
2025-10-31 22:58:55|4345.3| 4.4|INFORM|AttributeManager_2_OUTPUT_-1_3116_Player (RecorderFactory): Playing back feature 500000
Summary:
First 50,000 records = 20 min
Next 50,000 records = 31 min
Next 100,000 records = 1 hour 34 min
Next 100,000 records = 2 hour 37 min
Next 100,000 records = 3 hours 9 min
Next 100,000 records = 3 hours 32 min
Total: 500,000 records in 11 hours 43 min, at which point I cancelled the job because other jobs needed to run.
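To double-check the slowdown, the per-batch times can be computed straight from the timestamps in the log above:

```python
from datetime import datetime

# (feature count, timestamp) pairs copied from the log above
milestones = [
    (0,       "2025-10-31 11:15:33"),
    (100_000, "2025-10-31 12:06:17"),
    (200_000, "2025-10-31 13:40:47"),
    (300_000, "2025-10-31 16:17:27"),
    (400_000, "2025-10-31 19:26:02"),
    (500_000, "2025-10-31 22:58:55"),
]

def batch_minutes(milestones):
    """Minutes spent on each 100k batch, from consecutive timestamps."""
    times = [datetime.strptime(ts, "%Y-%m-%d %H:%M:%S") for _, ts in milestones]
    return [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]

print([int(m) for m in batch_minutes(milestones)])
```

Each successive 100k batch takes longer than the one before it, which is what makes the total runtime blow up.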
Update:
I deleted the regular Writer and replaced it with a FeatureWriter, and now the data is written in 11 minutes!
Why is there such a difference?
From what I can see both Writers have the same settings:
- Transaction Type: Transactions
- Feature Dataset Handling: Write Feature Dataset
Oh wait… I see that the regular Writer actually had its Transaction Type set to "Non-versioned Edit Session". I will test the regular Writer with a Transaction Type of Transactions to see what that changes.
