I have a reader that has two sets of variables: roads and nodes. I want it to go through and process all the roads, run those through the transformers, and save them to the database before the nodes part of the file gets read and processed. I need some of the road info to help process the nodes. Is there a way to ensure that the roads are completed before the nodes start to process in a single workbench, or do I need to break it up and use an FMEServerJobSubmitter? If possible I would like to avoid this and keep everything in one workspace.
You could use the FeatureHolder transformer to hold the nodes until the roads are processed. This will use more resources though.
I thought the FeatureHolder only holds features until the reader has completely finished. So it would wait until all the nodes and roads are read, then release both?
You basically have two options (well, a couple more, but I think they're less than ideal).
One is to break it up into several workspaces using the FMEServerJobSubmitter, as you suggest.
The second option is to wait for FME 2016 and to use the new FeatureWriter, which will enable you to chain the writing of different datasets, using e.g. a FeatureHolder in between to make sure one dataset has been fully written before writing the next.
Bonus suggestion: If you feel up for it, use a SQLCaller as some sort of poor-man's FeatureWriter.
David
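To illustrate the "poor-man's FeatureWriter" idea outside of FME: the point is simply that you issue the database writes yourself, so you control when each dataset is committed. Here is a minimal Python sketch of the two-phase pattern using sqlite3; the table and column names are invented for the example, not taken from the original workspace:

```python
import sqlite3

# Hypothetical stand-in for the FME flow: every road is inserted and
# committed before any node is processed, so the node phase can query
# road info that is already in the database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE roads (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, road_id INTEGER)")

roads = [(1, "Main St"), (2, "High St")]
nodes = [(10, 1), (11, 2)]

# Phase 1: write all roads, then commit so the data is queryable.
conn.executemany("INSERT INTO roads VALUES (?, ?)", roads)
conn.commit()

# Phase 2: nodes can now look up road info written in phase 1.
for node_id, road_id in nodes:
    road_name = conn.execute(
        "SELECT name FROM roads WHERE id = ?", (road_id,)
    ).fetchone()[0]
    conn.execute("INSERT INTO nodes VALUES (?, ?)", (node_id, road_id))
conn.commit()
```

In FME terms, phase 1 and phase 2 would each be a SQLExecutor, with a FeatureHolder in between to guarantee the first has finished.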
@david_r SQLCaller? Do you mean SQLExecutor?
The beta for FME 2016 is available at:
http://www.safe.com/support/support-resources/fme-downloads/beta/
And yes, the FeatureWriter has an output port for all written features.
Yes I do, good catch. Apparently I'm ready to go home for the weekend ;-)
So here's something to consider.
You'd use this pattern at the END of your translation flows.
By using the trick I'm showing here, you can control the order in which two groups of features are released. This would go just before their respective feature types in a single writer.
(Now, having written that, I can see that a VERY EASY way to ensure two tables get written one after the other is simply to use two writers in one workspace, one for each table. The second writer will not start until the first writer is done, so that is the simplest option of all. But I'll still show the trick below in case someone finds it useful in some other situation.)
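The hold-and-release trick can be sketched in plain Python. This is a simplified model, not FME's actual implementation: a FeatureHolder-like buffer keeps one group of features until a release signal arrives from the other group:

```python
# Minimal sketch of "hold group B until group A is finished".
# The Holder buffers incoming features; once release() is called,
# it flushes the buffer and passes further features straight through.
class Holder:
    def __init__(self):
        self.buffer = []
        self.released = False
        self.output = []

    def input(self, feature):
        if self.released:
            self.output.append(feature)
        else:
            self.buffer.append(feature)

    def release(self):
        # Flush everything held so far, then become a pass-through.
        self.released = True
        self.output.extend(self.buffer)
        self.buffer.clear()

written = []
holder = Holder()

roads = ["road-1", "road-2"]
nodes = ["node-1", "node-2"]

for n in nodes:          # nodes arrive first but are held back
    holder.input(n)
for r in roads:          # roads are written immediately
    written.append(r)
holder.release()         # signal: roads are done, let the nodes through
written.extend(holder.output)
# written == ["road-1", "road-2", "node-1", "node-2"]
```

The key point is that the release signal, not arrival order, decides when the held group moves on, which is how you force roads to finish before nodes start.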
If your writer is a SQLExecutor, you can put a FeatureHolder in front of it to enforce the ordering; the FeatureHolder is powerful in workflows like this. But the most robust option is the FMEServerJobSubmitter with the "Wait for Job to Complete" option set to Yes.