Forgive me if this is a stupid question, but I haven't found this process to be very intuitive and I haven't found a solution in the documentation.

 

Let's say I have a database with 10 different tables. There are some common columns in each of these tables, but they also all have unique columns. I want to create a dynamic workspace so that a user can open it, run it, and specify the name of the table they're interested in, so that only the features from that one table are sent on to the writer.

 

So I create a reader, I select "single merged feature type", I specify the names of all the tables in the Table Names box in the reader parameters, create it and add it to the workspace. I create a writer (let's say it's an ESRI shapefile), I set the User Attributes to Dynamic, and I connect the two. I run the workspace and in the "Feature Types to Read" parameter I provide the name of just one of the tables.

 

The output looks okay at first: the new shapefile only contains the features from the table I specified. However, when I look at the attributes, it has every column from every table - even columns that don't exist in the table I specified.

 

I also tried not merging the feature types, instead connecting each one to the writer individually and sending only one table at a time, thinking the writer would only pick up the schema from that one table, but I still ended up with a bunch of blank columns.

 

I could probably come up with a shutdown Python script that removes all these extra columns, but I feel like there must be something I'm missing that would allow this process to be automatic.
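In case it's useful, the sort of shutdown script I had in mind would look roughly like the sketch below: it reopens the freshly written shapefile with GDAL/OGR and deletes any field that is blank on every feature. The hard-coded path is just a placeholder for the example; in an actual FME shutdown script the destination could probably be pulled from fme.macroValues instead.

```python
# A rough sketch of the shutdown-script workaround: reopen the shapefile that the
# writer just produced and delete any field that is blank on every feature.
from osgeo import ogr

# Placeholder path; a real FME shutdown script could pull the destination
# dataset from fme.macroValues instead of hard-coding it.
SHAPEFILE_PATH = r"C:\output\my_table.shp"

ds = ogr.Open(SHAPEFILE_PATH, update=1)
layer = ds.GetLayer(0)
defn = layer.GetLayerDefn()

# Collect the indices of fields that are unset or empty on every feature.
empty_fields = []
for i in range(defn.GetFieldCount()):
    layer.ResetReading()
    has_value = False
    for feature in layer:
        if feature.IsFieldSet(i) and feature.GetFieldAsString(i).strip():
            has_value = True
            break
    if not has_value:
        empty_fields.append(i)

# Delete from the highest index down so the remaining indices stay valid
# (the Shapefile driver supports DeleteField).
for i in sorted(empty_fields, reverse=True):
    layer.DeleteField(i)

ds = None  # close the datasource and flush the changes to disk
```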

Okay, I figured it out. I was creating a blank workspace, adding the reader, then adding the writer. When I connected the two, the writer had all of the attributes listed in its User Attributes list; even though I had selected the Dynamic attribute definition, it was still trying to include every attribute in that list (i.e., every attribute from all of the tables!).

 

I removed all of the columns from that list, tried again, and now only the attributes from the selected table are included in the output.


Hi @bgfield​ ,

 

Dynamic workflows are not as easy as one might expect. I spent a lot of time getting them to work (sometimes I had to give up and look for a workaround) and never found a standard technique to follow, but if you read this tutorial carefully, maybe you can find a solution.

 

Hope that helps!


Thanks @davtorgh, I was actually able to find a solution (as mentioned above) by going through that tutorial and comparing their workspace with mine.

 

I'm glad to know I'm not the only one having difficulty creating a dynamic workflow!

