
Hi, I have a problem with overwriting an existing GDB file.

 

Scenario: there is new data that I need to insert/update into the GDB file without having to create a new GDB file.

 

I have tried using SHP as the reader and GDB as the writer (it works), but when I use GDB as the reader and the same GDB as the writer (because I need the updated/new data to be in the same GDB file), it fails. I tried to look into the error, but there is no clear explanation in the translation log. Can anyone help me out on this?

 

[screenshot of the workspace attached]

The likely reason is that there is a Schema Lock on the FGDB. Until all read processes have finished reading from the FGDB and released their Schema Locks, data can't be written to the same FGDB schema objects.

 

In your workflow, you have "Slope" as the Initiator, initiating a downstream FeatureReader 51 times and then trying to write to the same FGDB 51 times with what appears to be the same data, so it is likely the first FeatureWriter hasn't released its Schema Lock before the workflow attempts to start the final Writer on the same FGDB Feature Class.

 

To step back somewhat and look at the overall Workflow design, on first impressions this seems a very convoluted way of performing Updates/Inserts and not the way most workflows would do this.

 

Most workflows would Read the data in with a single Reader (or single FeatureReader), transform the data inside the Workflow, and then write any Inserts/Updates with a single Writer (or single FeatureWriter). So I can't see why there is a need for any intermediate FeatureReaders or FeatureWriters here.

 

For FGDBs in particular, using this conventional method makes the Workflow:

1st: Read all the required data into FME memory/file cache, then close the FGDB connection and release its Schema Locks

2nd: Open a new FGDB connection and write the data

 

Which generally avoids any Schema Lock issues on FGDBs.
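For anyone who wants to see the same two-phase pattern in code: below is a minimal sketch of the read-everything-first, then-write approach, done outside FME using Python with GeoPandas and GDAL's OpenFileGDB driver (writing FGDBs this way needs GDAL 3.6 or later). The path, layer name, and transformation are hypothetical placeholders, not taken from the workspace in the question.

```python
# Minimal sketch (not FME itself): read everything into memory first,
# then open a fresh connection to write back to the same FGDB.
# Assumes GDAL >= 3.6 for OpenFileGDB write support; names are placeholders.
import geopandas as gpd

gdb_path = "data.gdb"   # hypothetical FGDB path
layer = "Slope"         # hypothetical Feature Class name

# 1st: read the whole Feature Class into memory. Once read_file()
# returns, the read connection is closed and its locks are released.
features = gpd.read_file(gdb_path, layer=layer)

# ...apply any transformations to the in-memory features here...
features["slope_area"] = features.geometry.area  # illustrative edit only

# 2nd: open a new connection and write the updated data back to
# the same FGDB Feature Class.
features.to_file(gdb_path, layer=layer, driver="OpenFileGDB")
```

The point of the sketch is the ordering, not the library: all reading finishes (and the reader's connection closes) before any writing starts, which is exactly what a single Reader feeding a single Writer gives you inside FME.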



So I see... Thank you for your answers. This is very helpful for my task :)

