
Hi,

I'm trying to create a dynamic workspace that produces file geodatabase extracts of data out of ArcSDE. Ideally, I would like to make use of change detection in some form, so that data that hasn't changed isn't reprocessed unnecessarily. Even better, I'd like to query the database system tables to quickly narrow down which objects have changed at all (I have some ideas on how to satisfy that second part, but haven't gotten to them yet).
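
For the system-table idea, what I have in mind is roughly the sketch below: query a modification-tracking table and only pass the changed tables on to the extract step. The table and column names here (sde.table_registry, sde.sde_tables_modified, registration_id, time_last_modified) are assumptions on my part and vary by RDBMS and geodatabase version, so treat them as placeholders:

# Rough sketch: list geodatabase tables changed since the last extract.
# The system table and column names below are assumptions; they differ by
# database platform and ArcSDE/geodatabase version, so verify them against
# your own instance before relying on this.
import datetime
import pyodbc

last_run = datetime.datetime(2017, 1, 1)  # in practice, read this from a state file

conn = pyodbc.connect("DSN=my_sde_database")  # placeholder connection
cur = conn.cursor()

# Join the modification-tracking table to the table registry to resolve
# registration ids into owner.table names. time_last_modified may be stored
# as an integer or a timestamp depending on the platform.
cur.execute(
    """
    SELECT r.owner, r.table_name
    FROM sde.table_registry r
    JOIN sde.sde_tables_modified m ON m.registration_id = r.registration_id
    WHERE m.time_last_modified > ?
    """,
    last_run,
)

changed_tables = ["{0}.{1}".format(owner, name) for owner, name in cur.fetchall()]
print(changed_tables)  # feed these table names into the dynamic extract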

The SDE extracts in FGDB form will end up on a network share and be replicated out to the users.

This article is pretty much what I want to do, I think.

https://knowledge.safe.com/articles/1157/dynamic-workflow-tutorial-destination-schema-as-a.html

To keep this flexible across the many SDE databases we'll be running it against, I have to presume that I know neither the schema nor the quality of the data.

My workspace looks like this:


Where I'm getting stuck, right now anyway, is applying the delete operation to rows that no longer exist in SDE. When I specify the fme_db_operation keyword as the 'Feature Operation' method on the writer, I must supply criteria (which seem to be part of the FGDB schema) so that the right rows are removed. Screenshot:

If the task here is to mirror an SDE database with any number of tables, all with different fields, how can I do this reliably? More frustrating still: I have seen an example that uses Esri's Data Interoperability extension and doesn't seem to be restricted by this requirement, even though it's using the FGDB writer.

Am I missing something obvious here?

Thanks,

Chris

Hi @chris2,

Thanks for your question! I believe you're on the right track in using a dynamic schema here. For where you're getting stuck, try setting the Row Selection to Match Columns by your primary key attribute. For more background, check out the Mixed Operations section in this article on incremental updates, which uses fme_db_operation. For setting up fme_db_operation, I'd also suggest looking through this article, which has a downloadable workspace template that might be a helpful reference even though it doesn't use a dynamic schema. Hope this helps a bit!
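
If you do end up tagging features yourself (for example from a ChangeDetector upstream), a small PythonCaller along the lines of the sketch below is one way to set fme_db_operation per feature. It's only a sketch: the "_change_type" attribute name is an assumption about what your upstream transformer writes, not something FME sets automatically.

# PythonCaller sketch: map an upstream change flag onto fme_db_operation.
# Assumes an earlier transformer (e.g. a ChangeDetector) wrote an attribute
# named "_change_type" with values like INSERTED, UPDATED or DELETED; that
# attribute name is an assumption, not an FME standard.
import fme
import fmeobjects

OPERATION_MAP = {
    "INSERTED": "INSERT",
    "UPDATED": "UPDATE",
    "DELETED": "DELETE",
}

class FeatureProcessor(object):
    def input(self, feature):
        change = feature.getAttribute("_change_type")
        operation = OPERATION_MAP.get(change)
        if operation:
            # The writer's Feature Operation must be set to fme_db_operation
            # for this attribute to take effect.
            feature.setAttribute("fme_db_operation", operation)
        self.pyoutput(feature)

    def close(self):
        pass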

- Jo



Hi @JovitaAtSafe,

Thanks for your reply. One of the challenges I think I have is that I don't know my primary keys beforehand. Let's say I replicate ten database tables on Monday; on Tuesday there might be eleven, and I know nothing about that eleventh table: field names, keys, no idea. Is that kind of flexibility possible?

Thanks,

Chris



Hi @chris2,

My apologies for the lapse in response. I'm not sure whether this would be possible without knowing the primary key. There are ways to test against a schema or match columns, and it may be possible to test for unique values in a column, but I'm afraid this is outside my expertise. Would you be open to posting a new question directed at this specific problem? That would open it up and invite some new perspectives from the community. Sorry I'm not able to offer more help on this particular issue.
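
Just to illustrate the "test for unique values" idea, below is a rough, standalone Python sketch that scans a table's columns and reports which ones look like candidate keys. The connection string and table name are placeholders, and a column that happens to be unique today isn't guaranteed to be a real primary key, so treat the result as a hint only.

# Sketch: guess candidate key columns by checking which columns contain
# unique, non-null values. Only suitable for small tables since it pulls
# everything into memory, and uniqueness today does not prove a column is
# the table's true primary key.
import pyodbc

def candidate_key_columns(connection, table_name):
    cur = connection.cursor()
    cur.execute("SELECT * FROM {0}".format(table_name))  # table name from a trusted source only
    columns = [d[0] for d in cur.description]
    rows = cur.fetchall()

    candidates = []
    for index, column in enumerate(columns):
        values = [row[index] for row in rows]
        if None not in values and len(set(values)) == len(values):
            candidates.append(column)
    return candidates

conn = pyodbc.connect("DSN=my_sde_database")  # placeholder connection
print(candidate_key_columns(conn, "owner.some_table"))  # placeholder table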

