Forgive me if this is a stupid question, but I haven't found this process very intuitive, and I couldn't find a solution in the documentation.
Let's say I have a database with 10 different tables. There are some common columns in each of these tables, but they also all have unique columns. I want to create a dynamic workspace so that a user can open it, run it, and specify the name of the table they're interested in to send only the features from that one table on to the writer.
So I create a reader with "Single Merged Feature Type" selected, list all of the tables in the Table Names box in the reader parameters, and add it to the workspace. I then create a writer (let's say it's an Esri Shapefile), set its User Attributes to Dynamic, and connect the two. When I run the workspace, I provide the name of just one of the tables in the "Feature Types to Read" parameter.
The output looks okay at first: the new shapefile only contains the features from the table I specified. However, its attribute table has every column from every table, including columns that don't exist in the table I specified.
I also tried not merging the feature types, connecting each one to the writer individually and sending only one table at a time, thinking the writer would then pick up the schema from just that one table, but I still ended up with a bunch of blank columns.
I could probably come up with a shutdown Python script that removes all these extra columns, but I feel like there must be something I'm missing that would make this happen automatically.
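For what it's worth, the cleanup I have in mind would look something like this. This is only a sketch of the idea, not FME-specific code: it assumes the output rows have already been read back into Python as dicts (with whatever shapefile library), and it drops any column that is blank in every row, i.e. the columns merged in from the other tables.

```python
def drop_empty_columns(rows):
    """Return a copy of rows with columns removed that are blank in every row.

    rows: list of dicts, one per feature, all sharing the same keys.
    A value counts as blank if it is None, an empty string, or whitespace.
    """
    if not rows:
        return rows

    def is_blank(value):
        return value is None or (isinstance(value, str) and value.strip() == "")

    # Columns where every single row is blank are assumed to come from
    # the merged schema of the other tables, not the table we asked for.
    empty_columns = {
        col for col in rows[0]
        if all(is_blank(row.get(col)) for row in rows)
    }
    return [
        {k: v for k, v in row.items() if k not in empty_columns}
        for row in rows
    ]


# Hypothetical example: "extra_col" came from one of the other 9 tables.
features = [
    {"id": 1, "name": "road_a", "extra_col": None},
    {"id": 2, "name": "road_b", "extra_col": ""},
]
print(drop_empty_columns(features))
```

Obviously I'd rather the workspace just not write those columns in the first place than post-process the output like this.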