I have a workflow that takes input parcel data from several different municipalities, each delivered at a different time. The steps of the workflow are as follows (a rough sketch of the flow comes after the list):
- Load raw data into a GDB
- ETL the data into target schema and load into municipality-specific GDB
- Run a validation on the data
- Load each individual municipality into a single aggregated feature class in a single GDB.
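To make that concrete, here is a rough plain-Python sketch of how I picture one delivery moving through the steps. The functions are just print stubs standing in for the actual FME workspaces, and all the paths and names are placeholders:

```python
import os

def load_raw(delivery_path: str, municipality: str) -> str:
    """Step 1: load the raw delivery into a staging GDB (stub)."""
    print(f"Loading raw data for {municipality} from {delivery_path}")
    return f"{municipality}_raw.gdb"

def etl_to_schema(raw_gdb: str, municipality: str) -> str:
    """Step 2: ETL into the target schema, municipality-specific GDB (stub)."""
    print(f"Transforming {raw_gdb} into the target schema")
    return f"{municipality}_staged.gdb"

def validate(staged_gdb: str) -> None:
    """Step 3: run validation on the transformed data (stub)."""
    print(f"Validating {staged_gdb}")

def append_to_aggregate(staged_gdb: str, aggregate_gdb: str) -> None:
    """Step 4: append the municipality into the aggregated feature class (stub)."""
    print(f"Appending {staged_gdb} into {aggregate_gdb}")

def run_delivery(delivery_path: str) -> None:
    # Derive the municipality from the delivery folder name so that
    # only the municipality that was actually delivered gets processed.
    municipality = os.path.basename(os.path.dirname(delivery_path))
    raw = load_raw(delivery_path, municipality)
    staged = etl_to_schema(raw, municipality)
    validate(staged)
    append_to_aggregate(staged, "parcels_aggregate.gdb")

run_delivery(r"C:\deliveries\springfield\parcels.zip")
```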
I ultimately have to deploy these workspaces to FME Server and have the entire workflow run for JUST the single municipality whose data was delivered. Right now I have it set up so that the ETL and validation for each municipality are separated out into municipality-specific workspaces.
What I am wondering is whether it is possible to build all of the municipalities into a single initial data-load ETL and a single aggregation ETL, and still have ONLY the process for the municipality that was delivered run (and not the processes for all of the other municipalities that don't have a new delivery). The alternative would be building a municipality-specific workspace for each step of the process, which would result in over 200 different workspaces.
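What I am picturing is that the shared workspaces could be driven entirely by a municipality published parameter, so a delivery only ever triggers one job. Something like this sketch, assuming the FME Server REST API v3 transformations/submit endpoint; the repository, workspace, and parameter names below are placeholders, not anything I actually have set up:

```python
import requests

FME_SERVER = "https://my-fme-server.example.com"  # placeholder host
TOKEN = "my-fme-token"                            # placeholder API token

def submit_for_municipality(municipality: str, source_gdb: str) -> None:
    """Submit the single shared workspace, scoped to one municipality."""
    # Hypothetical repository/workspace names
    url = f"{FME_SERVER}/fmerest/v3/transformations/submit/ParcelETL/load_and_aggregate.fmw"
    body = {
        "publishedParameters": [
            # Published parameter that would scope the run to one municipality
            {"name": "MUNICIPALITY", "value": municipality},
            # Placeholder name for the reader's source dataset parameter
            {"name": "SourceDataset_GEODATABASE_FILE", "value": source_gdb},
        ]
    }
    resp = requests.post(
        url,
        json=body,
        headers={"Authorization": f"fmetoken token={TOKEN}"},
    )
    resp.raise_for_status()
    print(f"Job {resp.json().get('id')} queued for {municipality}")

submit_for_municipality("Springfield", r"C:\deliveries\springfield\parcels.gdb")
```

Is a setup along those lines workable, or does the fan-out of readers/writers per municipality make a single shared workspace impractical?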
Thank you!