
I'm running an automated script that gets data from another source to map. I have a table in SDE (10.3.1) and the writer is set to Insert with Truncate Existing, but the table is not truncating. It appears to work with Drop and Create; however, we want to keep the permissions and such intact, so we're using Truncate. This feed started today with about 15 rows, changed to 8 rows, then to 0 rows, then back to 8 rows. This data is being used in a Dashboard on Esri Portal for real-time reporting.

FME 2018.1.0.2 (64-bit, Build 18547), ArcGIS Desktop 10.3.1

Thanks,

Edit: when there is NO data to be written, the table is not truncating. It seems to me that the table should truncate whether or not there is new data to be written.

Thanks


Hi @lorinda

As mentioned in the Geodatabase writer documentation, both the 'Truncate Table' and 'Drop and Create' parameters only take effect when at least one feature reaches the writer.

 


That seems to be the issue. I got around it by creating a model in ArcMap to 'Delete Features' from the feature class first, then exported it to Python and have it running as a script before the rest of the process starts. That seems to have solved the issue, at least until we get hit with very large files for this process. I'm hoping that at that point, when running on the server, the speed between the servers will compensate for the increased processing time for the larger number of features. I have another FME script that will use this same approach to see if it helps with processing time; it was taking over 4 hours (and was sometimes killed after 36 hours due to not processing at all) to truncate the feature class before inserting the new rows (approximately 125K features).
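
For anyone following along, the pre-process step described above boils down to something like this sketch (the connection file and feature class paths are placeholders, not the actual names from this workflow):

# Sketch of the pre-process script described above: clear the SDE
# feature class before FME inserts the new rows.
import arcpy

# Placeholder paths: substitute your own .sde connection file and feature class.
sde_connection = r"C:\connections\production.sde"
feature_class = sde_connection + "\\SCHEMANAME.TABLENAME"

# Delete Features removes every row but leaves the table, its schema,
# and the permissions on it intact (unlike Drop and Create).
arcpy.DeleteFeatures_management(feature_class)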

 

 


To do what you want, I suggest putting the following in the "SQL To Run Before Write" parameter:

TRUNCATE TABLE SCHEMANAME.TABLENAME


If you're using arcpy.DeleteFeatures_management() in your Python script, I would suggest using arcpy.TruncateTable_management() instead.
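
Assuming the same kind of placeholder paths as in the earlier sketch, the swap is a one-line change. Just a sketch; note that Truncate Table cannot be run on versioned data:

import arcpy

# Placeholder path: substitute your own .sde connection file and feature class.
feature_class = r"C:\connections\production.sde\SCHEMANAME.TABLENAME"

# TruncateTable removes all rows much faster than a row-by-row delete,
# but it cannot be used on versioned data.
arcpy.TruncateTable_management(feature_class)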

 

 

As noted in the Esri documentation for Delete Features, deleting all features can be slow and it is recommended to use Truncate Table instead if you intend to delete all rows.

 


Using the SQL To Run Before Write approach would be fine for a regular SQL table, but Esri has many dependencies on the base table (schemaname.tablename). Therefore, using a model translated to Python works in the Esri world to clean out all of the tables associated with that one.

 

 



I do this for our SDE tables when they are massive, hundred-thousand-plus record tables, as it is fast. It doesn't allow you to revert the way a delete does, but it works just fine and doesn't cause any issues.

 



Also note that ArcPy in ArcGIS Desktop is Python 2.7, so that should be considered until the script is upgraded to Python 3.4+, since future versions won't support 2.7.
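
If there is any chance the script gets launched with a different interpreter later on (for example ArcGIS Pro's Python 3), a simple guard makes that assumption explicit; just a sketch:

import sys

# ArcPy that ships with ArcGIS Desktop 10.x runs on Python 2.7;
# fail early if the script is started with another interpreter.
if sys.version_info[0] != 2:
    raise RuntimeError("This script expects the Python 2.7 interpreter bundled with ArcGIS Desktop.")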

Hi @lorinda,

 

For the SDE table to be truncated, it must be in an "edit" session, so you will need to set your Transaction Type to at least "Versioned edit session" or "Non-versioned edit session", depending on your data. You can either do this when you add the writer, or change it afterwards through the writer parameters under "Transaction Type".

Let me know if it works for you.

