Good morning all - hoping that you can help me. I've got a fairly lengthy workbench which we run daily, updating our internal data warehouse from a variety of sources. Due to the complexity, there are several NoFeaturesTesters included in the workbench which continue the translation when no data feeds in at certain points. However, there are certain instances where we would want the translation to stop (no new source file available, source system not ready for update, etc.). We currently use Terminators to stop the subsequent NoFeaturesTesters from triggering; however, it would make it easier for us to keep on top of our monitoring if this didn't then appear as a failed job in our server log. Is there any alternative way of stopping the translation without it appearing as a fail?
I’ve achieved similar outcomes with a NoFeaturesTester and a FeatureMerger.
Essentially, the FeatureMerger acts as a gate: you send a single feature to the Supplier port and the rest of the features to the Requestor port (merged on 1 = 1). Depending on how your process is configured, the Supplier feature can either ‘allow’ features out of the Merged port or, by its absence, ‘block’ them so they come out of the UnmergedRequestor port instead.
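The Supplier/Requestor gate behaviour can be sketched in plain Python (this is only an illustration of the logic, not FME code; the function and port-dictionary names are made up):

```python
def merge_gate(supplier_present, requestor_features):
    """Mimic a FeatureMerger used as a gate on a constant 1 = 1 join.

    If a single 'supplier' feature arrived, every requestor feature
    matches it and exits the Merged port; if no supplier feature
    arrived, everything falls out of the UnmergedRequestor port.
    """
    if supplier_present:
        return {"Merged": requestor_features, "UnmergedRequestor": []}
    return {"Merged": [], "UnmergedRequestor": requestor_features}

# Supplier feature present: the gate is open.
open_gate = merge_gate(True, ["f1", "f2", "f3"])
# No supplier feature: the gate is closed.
closed_gate = merge_gate(False, ["f1", "f2", "f3"])
```

The point is that nothing “fails”: blocked features simply leave by a different port, which you can route to logging or nowhere at all.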
Yeah, I was looking at that as a possible solution - the only issue being that, due to the size of the workbench and a series of subroutines contained in custom transformers, I’ve got something like 21+ NoFeaturesTesters at various stages in the routine. Doing it this way means that if the fail/terminate occurred on the first one, I’d have to carry that through to all of the subsequent ones with a combination of mergers/testers/NoFeaturesTesters to prevent them being triggered, as there could be unintended consequences if earlier parts of the routine haven’t run.
It’s doable, but could end up being long-winded, so I was just hoping there might be a more elegant way around it; as it stands, I think I’d be happier just living with the fail notice coming through.
Breaking the workbench down into multiple workbenches and using WorkspaceRunners could help keep an overview?
If you don't run the workbench with Feature Caching enabled, you could also use VariableSetters and VariableRetrievers. If a trigger concludes that parts do not have to run, you set a NoRun variable with a VariableSetter, then use a VariableRetriever followed by a Tester to check whether NoRun is true and block the features if it is.
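The VariableSetter/VariableRetriever pattern is essentially a shared flag checked before each stage. A plain-Python sketch of that idea (illustrative only; the function names and the string values for the flag are invented, not FME API):

```python
# Shared 'workspace variables', standing in for VariableSetter/VariableRetriever.
variables = {}

def variable_setter(name, value):
    """VariableSetter stand-in: store a named value."""
    variables[name] = value

def variable_retriever(name, default="false"):
    """VariableRetriever stand-in: read a named value back."""
    return variables.get(name, default)

def gate(features):
    """Tester after the VariableRetriever: block features when NoRun is true."""
    if variable_retriever("NoRun") == "true":
        return []          # downstream stages receive nothing - no Terminator needed
    return features        # otherwise the features pass through unchanged

# A NOINPUT trigger somewhere upstream sets the flag...
variable_setter("NoRun", "true")
# ...and every later stage drops its features instead of terminating the translation.
blocked = gate(["f1", "f2"])
```

Because the flag is set once and read everywhere, you avoid threading a merger/tester chain through all 21+ stages by hand.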
In my case workbenches look like this:
Creator → FeatureReader → do stuff → Sampler (1) → Keeper → other FeatureReader → do more stuff → FeatureWriter → Sampler (1) → Keeper → FeatureReader etc.
This way I only read additional features if needed. I hate it when my workbench reads many features first, only to find out that my input (the first reader) is wrong to begin with.
And if you only want to proceed when something reads 0, 1, or more than 1 feature, you can test that with a FeatureMerger solution and use its output as the next trigger.
Rebuilding an existing workbench could indeed be harder.
Without working this through fully: I can potentially remove the Terminators and replace them with LogWriters to make a note of what has happened. Each instance where we would previously have had a Terminator can be replaced with a VariableSetter; each NoFeaturesTester can then be followed by a VariableRetriever and a Tester against the NOINPUT output, which will prevent anything further from running if the VariableRetriever brings back a ‘No Run’ value.
Thanks for your input - hopefully this will give me what I need.
Another nice thing:
Replace the Terminator with an AttributeCreator with 3 attributes:
Logtype
Logseverity
Logmessage
Then send all those AttributeCreator outputs to Aggregators to create a nice overview which you can write to a results file or mail to a user - because you don't want to read through all the logging just to find out that something didn't have input.
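The aggregated overview could end up looking something like this plain-Python sketch, which collects the Logtype/Logseverity/Logmessage attributes into one summary line per type/severity pair (the record contents and function name are invented for illustration):

```python
from collections import Counter

# Each record stands in for a feature leaving one of the AttributeCreators.
log_features = [
    {"Logtype": "NOINPUT", "Logseverity": "WARN",
     "Logmessage": "No new source file available"},
    {"Logtype": "NOINPUT", "Logseverity": "WARN",
     "Logmessage": "Source system not ready for update"},
    {"Logtype": "OK", "Logseverity": "INFO",
     "Logmessage": "Customer feed processed"},
]

def summarise(features):
    """Aggregator stand-in: count features per (Logtype, Logseverity) pair."""
    counts = Counter((f["Logtype"], f["Logseverity"]) for f in features)
    return [f"{t}/{s}: {n} message(s)" for (t, s), n in sorted(counts.items())]

overview = summarise(log_features)
# The overview lines can then be written to a results file or mailed to a user.
```

A two-line “NOINPUT/WARN: 2 message(s)” style summary is far quicker to scan in the morning than the full translation log.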