
I have a workflow that processes input parcel data from several different municipalities, each of which will be delivered at a different time. The steps of the workflow are as follows:


  1. Load raw data into a GDB
  2. ETL the data into target schema and load into municipality-specific GDB
  3. Run a validation on the data
  4. Load the individual municipalities into a single aggregated feature class in a single GDB
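The four steps above can be sketched as a single parameterized pipeline that runs for one municipality at a time. This is only an illustration of the control flow: every function and GDB name here is a hypothetical placeholder, not an FME API.

```python
# Hypothetical orchestration of the four-step workflow for ONE municipality.
# The step functions are stand-ins for the corresponding FME workspaces.

def load_raw(municipality):
    # Step 1: load the raw delivery into a staging GDB
    return f"raw_{municipality}.gdb"

def etl_to_schema(raw_gdb, municipality):
    # Step 2: ETL into the target schema, writing a municipality-specific GDB
    return f"{municipality}.gdb"

def validate(muni_gdb):
    # Step 3: run validation; return True if the data passes
    return True

def aggregate(muni_gdb, target="parcels_aggregate.gdb"):
    # Step 4: append this municipality into the single aggregated feature class
    return f"{target}:{muni_gdb}"

def run_pipeline(municipality):
    raw = load_raw(municipality)
    muni_gdb = etl_to_schema(raw, municipality)
    if not validate(muni_gdb):
        raise ValueError(f"Validation failed for {municipality}")
    return aggregate(muni_gdb)

# Only the delivered municipality is processed; the other 200+ are untouched.
print(run_pipeline("springfield"))  # parcels_aggregate.gdb:springfield.gdb
```

The point of the sketch is that the municipality is an input parameter to one shared pipeline, rather than being baked into 200+ copies of it.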


I ultimately have to deploy these workspaces to FME Server and have the entire workflow run for JUST a single municipality upon data delivery for that municipality. I have it set up so that the ETL and validation steps for each municipality are separated out into municipality-specific workspaces.


What I am wondering is whether it is possible to build all of the different municipalities into a single initial data-load ETL and a single aggregation ETL, and still have the whole process run ONLY for the municipality that was delivered (and not for all of the others that don't have a new data delivery). The alternative would be building a municipality-specific workspace for each step of the process, which would result in over 200 different workspaces.


Thank you!

Hi @nking,


Not sure what version of FME Server you are running, but this seems like a strong use case for an Automation that uses the new (as of 2020.1) Dynamic Workspaces Action. The idea behind Dynamic Workspaces is that the data determines what happens in the Automation - i.e., which workspace should run based on the data that is currently being processed.
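As a rough illustration of that idea (not FME syntax - the directory layout and workspace naming convention here are assumptions): the Automation inspects something about the incoming delivery, such as the folder it landed in, and derives the name of the workspace to run from it.

```python
# Sketch of the Dynamic Workspaces concept: an attribute of the incoming
# data (here, the delivery's parent folder) selects which workspace runs.
# The /deliveries/<municipality>/ layout and the etl_<municipality>.fmw
# naming convention are hypothetical.
from pathlib import Path

def workspace_for_delivery(delivery_path):
    """Derive the municipality-specific ETL workspace name from where
    the delivered file landed, e.g. /deliveries/Springfield/parcels.zip."""
    municipality = Path(delivery_path).parent.name.lower()
    return f"etl_{municipality}.fmw"

print(workspace_for_delivery("/deliveries/Springfield/parcels.zip"))
# etl_springfield.fmw
```

Because the workspace name is computed from the delivery itself, one Automation can serve all 200+ municipalities while only ever running the one that was just delivered.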


Don explains this new feature in greater detail in the Keeping Cool, Calm, and Connected with FME 2020.1 webinar, around the 35-minute mark.


On the authoring side of things, if you want to set these up as single workspaces, you may need to look into building Dynamic Workspaces. Whether on FME Server or FME Desktop, the goal is the same: increase flexibility within a workspace/automation and minimize long-term maintenance.
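If you go the single-parameterized-workspace route, the municipality can also be passed in as a published parameter when the job is submitted to FME Server's REST API v3 transformations/submit endpoint. A minimal sketch of building such a request (the host, repository, workspace, and the MUNICIPALITY parameter name are all hypothetical examples):

```python
# Sketch: construct the URL and JSON body for submitting one job to FME
# Server's REST API v3, with the municipality as a published parameter.
# Host/repository/workspace/parameter names are assumptions for illustration;
# authentication headers are omitted for brevity.
import json

def build_submit_request(host, repository, workspace, municipality):
    url = (f"https://{host}/fmerest/v3/transformations/submit/"
           f"{repository}/{workspace}")
    body = {
        "publishedParameters": [
            {"name": "MUNICIPALITY", "value": municipality}
        ]
    }
    return url, json.dumps(body)

url, body = build_submit_request(
    "fme.example.com", "Parcels", "etl_parcels.fmw", "springfield")
print(url)
# https://fme.example.com/fmerest/v3/transformations/submit/Parcels/etl_parcels.fmw
```

This way the same workspace serves every municipality, and each data delivery only triggers one job with its own parameter value.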
