I've got a bunch of input files stored in sub-directories. I use a path and directory reader to grab these and send them into a workspace runner. A bunch of things take place, and then I want to write them all out to 2 layers within a gdb, so dozens of files are being brought together into 2 central layers (point, polyline).

That's all fine... as long as I don't run the child workspaces concurrently. If I run multiple processes, I of course get layer locks.

Ideally I'd like to run multiple processes and have each feature inserted/appended to the table.

Any suggestions on how to get around this gdb locking? I'm happy to write out to another spatial db format that I can later update.

From ESRI's FAQ:

Question

Can multiple users edit a file geodatabase at the same time?

Answer

Yes, it is possible to have multiple users edit a file geodatabase at the same time provided that they are editing:

• separate stand-alone feature classes at the root level

OR

• separate feature datasets (this is regarded as a workspace containing one or more feature classes)

OR

• the same feature datasets in the file geodatabase but different feature classes

If these rules are not followed, a schema lock error is received by all users editing the geodatabase, other than the primary user that locked the file geodatabase. If a file geodatabase is not editable at the current time, an appropriate message is received by the client application.

source: https://support.esri.com/en/technical-article/000012031

If I understand it correctly, you can't work around this: you need a real (not file-based) database to do concurrent writes, something like SDE, Oracle Spatial or PostGIS.


Thanks Niels,

I do have a real (SDE) db at my disposal, but it's production. I guess we use gdbs as our working areas, then push to SDE. They don't have identical rules, but they do have similar ones.

Maybe I'll spin up a 'working' SDE.

I'm still not sure how this will go, though; the same issue might be present. I don't want to have to tell each running child workbench to write to a different db version.


As I don't know the next steps, it's hard to say what a possible solution might be.

What is the current workflow? Collect the changes in one gdb and manually check and submit to SDE?

You could dynamically generate a file geodatabase per input file and merge these gdbs into one session gdb.
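The "one database per worker, then merge" pattern can be sketched in plain Python. SQLite files stand in for the per-file gdbs here, and every file and table name is made up for illustration:

```python
# Sketch of the per-worker-database pattern: each child writes to its
# own file (no lock contention), then one merge step builds the session db.
# SQLite is a stand-in; the real workflow would use file geodatabases.
import glob
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()

# Each child workspace writes to its own database, so no locks collide.
for i in range(3):
    worker_db = sqlite3.connect(os.path.join(workdir, f"worker_{i}.db"))
    worker_db.execute("CREATE TABLE points (id INTEGER, x REAL, y REAL)")
    worker_db.execute("INSERT INTO points VALUES (?, ?, ?)", (i, i * 1.0, i * 2.0))
    worker_db.commit()
    worker_db.close()

# A single merge step appends every worker's rows into one session database.
session = sqlite3.connect(os.path.join(workdir, "session.db"))
session.execute("CREATE TABLE points (id INTEGER, x REAL, y REAL)")
for part in sorted(glob.glob(os.path.join(workdir, "worker_*.db"))):
    session.execute("ATTACH DATABASE ? AS part", (part,))
    session.execute("INSERT INTO points SELECT * FROM part.points")
    session.commit()
    session.execute("DETACH DATABASE part")

print(session.execute("SELECT COUNT(*) FROM points").fetchone()[0])  # 3
```

In FME terms this would presumably be the parent workspace handing each child its own output gdb path, plus a final merge workspace that reads every worker gdb and writes the two session layers.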


Yep, the current workflow is to collect the data in the 2 layers in the gdb, give them a vague QA (gross-error check), and then push into an SDE layer.

 

I guess I COULD produce 80-odd gdbs; it just feels cumbersome.


If you go the SDE route, I would add a staging table to write the results into. When the check is OK, push to production.
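A minimal sketch of that staging idea, with hypothetical table names and SQLite standing in for the SDE database: the concurrent loaders all insert into the staging table, the gross-error check runs there, and only rows that pass are promoted to production.

```python
# Staging-table pattern sketch. Table and column names are made up;
# a real SDE setup would run the same SQL against the server database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE polyline_staging (id INTEGER, length REAL)")
db.execute("CREATE TABLE polyline (id INTEGER, length REAL)")

# Concurrent loaders would all write here first.
db.executemany("INSERT INTO polyline_staging VALUES (?, ?)",
               [(1, 10.5), (2, -3.0), (3, 7.2)])

# Gross-error check: a negative length is clearly wrong, so it stays
# behind in staging; everything that passes is promoted.
db.execute("INSERT INTO polyline SELECT * FROM polyline_staging WHERE length >= 0")
db.execute("DELETE FROM polyline_staging WHERE length >= 0")
db.commit()

print(db.execute("SELECT COUNT(*) FROM polyline").fetchone()[0])          # 2
print(db.execute("SELECT COUNT(*) FROM polyline_staging").fetchone()[0])  # 1
```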


Just an informal staging area? polyline_staging?


Yeah, just like the way you used the "staging" file geodatabase.

