I seem to be running into the same problem again and again: how do I pass a series of attributes to a transformer using what would be considered a variable in a scripting language? I can generate an attribute or variable containing a list (or a string) of attribute names, but I can't figure out how to pass that value into a transformer and have it produce the desired result. I know it has to be possible (you can't know every attribute you want to keep/expose/use in every workflow every time, so passing them dynamically has to be a thing); I just can't figure it out. Plus, I can see needing dynamic inputs in a hundred different ways I can't imagine yet, so I have to get this bit down. Can anyone point me to a resource where I can learn the technique? Thank you.
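In case it helps frame the question: the only workaround I've found so far is to drop into a PythonCaller and prune the attributes myself, roughly like the untested sketch below (ATTRS_TO_KEEP is a made-up published parameter name holding a comma-separated list). I'd much rather do this with native transformers, which is really what I'm asking about.

    import fme
    import fmeobjects

    class DynamicAttributeKeeper(object):
        def input(self, feature):
            # Comma-separated attribute names from a published parameter,
            # e.g. "PARCEL_ID,OWNER_NAME,SITUS_ADDR"
            keep = set(a.strip() for a in fme.macroValues['ATTRS_TO_KEEP'].split(','))
            for name in feature.getAllAttributeNames():
                # Leave FME's internal attributes alone
                if name not in keep and not name.startswith('fme_'):
                    feature.removeAttribute(name)
            self.pyoutput(feature)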
Most FME workflows tend to be for specific source and destination formats/datasets, so they are actually fairly static in terms of attributes.
Can you describe a situation where you think you need to do this dynamically? With an actual example, it would be easier for people here to come up with solutions.
Sorry for how long this is... We get parcel data from 60+ jurisdictions. Sometimes it's a simple shapefile or a single feature class in a file geodatabase, but sometimes it's more complex: a feature class containing just a unique identifier, with related tables that hold the more specialized data about each parcel. We are taking this data from the 60+ sources and standardizing it to a single format, the end result (hopefully) being one large parcel feature class containing polygons for all the jurisdictions in a standardized form.

To assist our efforts, we have developed a database containing tables that drive the process. One table defines table relationships; another defines which fields in each jurisdiction's source data map to our standardized output schema.

As a test we picked a jurisdiction that requires a table join. We set up a file geodatabase reader, read just the tables we wanted, used an AttributeKeeper to keep just the attributes we wanted in each table, joined them, and used a SchemaMapper to write our output. All worked well, but it was not dynamic: we "hardcoded" the database name, table names and attribute names into the transformers. Now we want to automate this process so it works when the workspace is given a list of jurisdictions. Each will have unique table and attribute names, and only some will require a join. Hence I find that I would like to pass dynamic sets of attribute names to different transformers, and I'm having a problem.
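To make the driving tables concrete: each row in our field-mapping table is roughly (jurisdiction, source table, source field, target field); the column names below are simplified. Inside a PythonCaller I can apply that mapping with something like this rough sketch (MAPPING_CSV and JURISDICTION are made-up parameter names, and I've exported the table to CSV just for the example), but I'm hoping there is a more native way:

    import csv
    import fme
    import fmeobjects

    class FieldMapper(object):
        def __init__(self):
            # Load this jurisdiction's rows from the mapping table
            # (a CSV export here for the sketch; the real version
            # would query the driving database directly).
            self.mapping = {}
            with open(fme.macroValues['MAPPING_CSV']) as f:
                for row in csv.DictReader(f):
                    if row['jurisdiction'] == fme.macroValues['JURISDICTION']:
                        self.mapping[row['source_field']] = row['target_field']

        def input(self, feature):
            # Rename each source field to its standardized target name
            for src, dst in self.mapping.items():
                value = feature.getAttribute(src)
                if value is not None:
                    feature.setAttribute(dst, value)
                    feature.removeAttribute(src)
            self.pyoutput(feature)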
It sounds like the SchemaMapper might be useful for at least part of your input. You can use it to change the dataset's schema based on an external table. There's a series of tutorials for it here.
It won't do table joins though.
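The lookup table the SchemaMapper reads is just a plain table (CSV, Excel, a database table...). Something like the sketch below, where the column names are whatever you choose and you point the transformer's attribute-mapping action at them:

    SourceAttribute,DestinationAttribute
    PIN,PARCEL_ID
    OWNER,OWNER_NAME
    SITUS,SITUS_ADDR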
My gut feeling says you're probably going to end up with a hybrid solution. So part SchemaMapper and part jurisdiction-specific workflows (e.g. the one with table joins). If that is going to be the case, I think you should also strongly consider whether you want everything to happen in a single workspace (which can potentially grow into a very large one; we've come across one with 1800+ transformers...) or split it up into smaller ones. My preference would be to split it up. Mind you, that may not be the most efficient solution, but I think it's the most manageable (especially if you need to transfer management of it to somebody else).
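If you do split it up, you can drive the per-jurisdiction workspaces from a parent workspace with a WorkspaceRunner, or from a small Python script. A rough sketch of the scripted approach using fmeobjects (the workspace path, parameter name and jurisdiction list are placeholders):

    import fmeobjects

    jurisdictions = ['CountyA', 'CountyB']  # would come from your driving database

    runner = fmeobjects.FMEWorkspaceRunner()
    for j in jurisdictions:
        try:
            # Each child run gets the jurisdiction as a published parameter
            runner.runWithParameters(r'C:\fme\parcels_per_jurisdiction.fmw',
                                     {'JURISDICTION': j})
        except fmeobjects.FMEException as e:
            print('Failed for %s: %s' % (j, e))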