Hey!

I was wondering what the best practices are when it comes to updating an SDE table that has archiving enabled.

 

I have a few tables in an SDE database that I need to update regularly. These tables have archiving enabled and contain between 100,000 and 900,000 records.

I have noticed that when I use “Truncate Existing” in my SDE writer, it takes a long time (I’ve never actually had the patience to let it run to completion; I usually cancelled it after a few hours).

If I use “SQL To Run Before Write” to truncate the table before writing to it, all the archived data gets lost.

If I use the arcpy function TruncateTable in a PythonCaller right before I write the data to the table, it works really well: it doesn’t take much time, and archiving keeps working.

 

I guess one other option would be to detect all the changes and only update those rows, but for various reasons that makes the workflow a bit more complicated. Maybe it’s worth looking into more, though?

 

Thanks for any help and inspiration :) 

 

/Vera

Archiving does slow down the processing time. Waiting a couple of hours is what I would expect, especially given the volume of data you’re dealing with.

I would suggest it’s worth your time going through a change detection process to only update what needs updating. Do you have last-edited dates on your source data? That makes it much faster to figure out which changes in the source data need to be applied to the destination.
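As a rough illustration of the last-edited-date idea: filter the source down to only the rows touched since the previous successful load, and write just those. This is a minimal sketch, assuming source rows expose a last-edited timestamp and that the field and variable names here are hypothetical:

```python
from datetime import datetime

def rows_to_update(source_rows, last_sync):
    """Return only the source rows edited since the last successful sync.

    source_rows: iterable of dicts, each with a 'last_edited' datetime
    last_sync:   datetime of the previous successful load
    """
    return [row for row in source_rows if row["last_edited"] > last_sync]

# Example: only the row edited after the last sync is selected.
source = [
    {"id": 1, "name": "A", "last_edited": datetime(2024, 1, 10)},
    {"id": 2, "name": "B", "last_edited": datetime(2024, 3, 5)},
]
changed = rows_to_update(source, last_sync=datetime(2024, 2, 1))
```

In practice the same filter would usually be pushed down as a WHERE clause on the reader rather than done in Python, so only the changed rows ever leave the database.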

If you need to do a full load and don’t need the archive kept (though it appears you do), then disabling archiving, truncating, loading, and turning archiving back on is going to be the most efficient.


@hkingsbury Thank you! That’s what I already suspected. Unfortunately I do need to keep archiving on; otherwise I agree that would have been a good approach.

I’ll try to make it work with a change detection process. Unfortunately there are no last-edited dates, but there might be other ways of making it work. Thanks so much for the help and ideas!
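Without last-edited dates, one common workaround is to hash each row’s attribute values and compare against the hashes stored from the previous run: a new key is an insert, a changed hash is an update, and a key that disappeared is a delete. A sketch of the idea, where the key and column names are hypothetical:

```python
import hashlib

def row_hash(row, columns):
    """Stable digest of a row's attribute values, used to spot changes."""
    joined = "|".join(str(row[c]) for c in columns)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

def diff_rows(source_rows, previous_hashes, key="id", columns=("name", "status")):
    """Classify source rows as inserts/updates/deletes against the last run."""
    inserts, updates, seen = [], [], set()
    for row in source_rows:
        k = row[key]
        seen.add(k)
        if k not in previous_hashes:
            inserts.append(row)
        elif previous_hashes[k] != row_hash(row, columns):
            updates.append(row)
    deletes = [k for k in previous_hashes if k not in seen]
    return inserts, updates, deletes

# Hashes saved from the previous run:
prev_rows = [
    {"id": 1, "name": "A", "status": "ok"},
    {"id": 2, "name": "B", "status": "ok"},
]
prev = {r["id"]: row_hash(r, ("name", "status")) for r in prev_rows}

# Current source: id 1 unchanged, id 2 edited, id 3 new.
source = [
    {"id": 1, "name": "A", "status": "ok"},
    {"id": 2, "name": "B", "status": "edited"},
    {"id": 3, "name": "C", "status": "new"},
]
ins, upd, dels = diff_rows(source, prev)
```

Applying only the inserts/updates/deletes keeps the archive meaningful, since each archived change then corresponds to a real edit rather than a full reload.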


I’d suggest a change-only detection process; repeated truncates and loads with archiving turned on will generate a huge amount of archive data with no real value, on top of the processing time.


@ebygomm Yeah I also had some concerns about this… Thanks for the tip!