Hi everyone,
I have a simple workbench with a reader that reads 20+ feature classes from a nationwide dataset stored in a file geodatabase.
From there the workbench selects 11 of those feature classes and uses a Clipper transformer (the clip polygon is a province) to clip out a certain area, then writes the result to a new file geodatabase.
The problem is that the dataset has millions of features, and the machine I run it on runs out of memory after 5+ hours of processing.
Is there a way to split up the workbench so it reads/transforms/writes only 1 to 3 feature classes at a time, and waits until the memory is freed again before starting the next 1-3 feature classes? (Maybe by embedding multiple workbenches?) I've put a rough sketch of what I have in mind below.
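To illustrate the idea, here is a minimal sketch of batching the runs from outside the workbench: a small Python script that calls fme.exe once per group of feature classes, so each run finishes and releases its memory before the next one starts. The paths, the feature class names, and the published parameter FEATURE_TYPES_TO_READ are all placeholders/assumptions on my side; the workspace would need a published parameter that restricts which feature classes the reader loads.

```python
import subprocess

# Placeholders -- adjust to your own FME install and workspace.
FME_EXE = r"C:\Program Files\FME\fme.exe"
WORKSPACE = r"C:\fme\clip_province.fmw"

# Placeholder names for the 11 feature classes, split into small batches.
FEATURE_CLASSES = [
    "featureclass_01", "featureclass_02", "featureclass_03",
    "featureclass_04", "featureclass_05", "featureclass_06",
    # ... remaining feature classes ...
]
BATCH_SIZE = 3

for i in range(0, len(FEATURE_CLASSES), BATCH_SIZE):
    batch = FEATURE_CLASSES[i:i + BATCH_SIZE]
    # FEATURE_TYPES_TO_READ is a hypothetical published parameter that limits
    # which feature classes the reader loads in this run.
    cmd = [FME_EXE, WORKSPACE, "--FEATURE_TYPES_TO_READ", " ".join(batch)]
    print("Running batch:", batch)
    # subprocess.run blocks until fme.exe exits, so the next batch only starts
    # after the previous run has finished and its memory has been released.
    subprocess.run(cmd, check=True)
```

Would something like this be a reasonable approach, or is there a better way to do it within a single workbench?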
thanks in advance!