
For my workspace I have one input dataset from an Oracle Spatial reader. The dataset has over 2 million features. At the moment the workspace takes over 12 hours to run because the features get stuck at a FeatureJoiner transformer near the end of the workspace.

@buckrogers​ I think you can probably leverage a WorkspaceRunner and a BETWEEN query against your database. In the 'parent' workspace, calculate how many records you have (COUNT(*)) and divide that by your batch size. From that you can calculate value1 and value2 for the BETWEEN clause, then pass those to the WorkspaceRunner, which calls the 'child' workspace that actually does the work. Something along those lines...
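Purely as an illustration of the arithmetic (not something from the original workspaces), here is a minimal Python sketch of what the parent would compute: count the records, divide by a batch size, and emit value1/value2 pairs for the BETWEEN clause. The column name row_id and the batch size of 250,000 are hypothetical; in FME this would typically be done with an SQLCreator or Counter plus AttributeCreators feeding the WorkspaceRunner.

```python
import math

def batch_ranges(total_records: int, batch_size: int):
    """Yield (value1, value2) pairs covering 1..total_records in batch_size chunks."""
    num_batches = math.ceil(total_records / batch_size)
    for i in range(num_batches):
        value1 = i * batch_size + 1
        value2 = min((i + 1) * batch_size, total_records)
        yield value1, value2

# Example: 2,000,000 records split into batches of 250,000.
# Each pair would be passed to the WorkspaceRunner as published parameters,
# and the child workspace's WHERE clause would use them, e.g. on a
# hypothetical sequential column called row_id.
for value1, value2 in batch_ranges(2_000_000, 250_000):
    print(f"row_id BETWEEN {value1} AND {value2}")
```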

I've included a couple of example workspaces and sample data that you can use as a possible starting point.


@Mark Stoakes​ That's working perfectly and has quartered the run time. Thank you so much!

