
Dear all,

 

I am trying to do the following: I have a yes/no user parameter that determines whether additional output files are written or not. At several steps of the workflow, if this parameter is set to 'yes', intermediate files are written (before the final output file).

 

To implement that, before each of the intermediate writers I put a Sampler (so that the filter only sees a single feature instead of the >300,000 features of my data) followed by a TestFilter in which I test the value of the parameter.

 

It works, but I don't find it very "elegant". And as I am new to FME, I was wondering: what would be a more "FME-ic" way to implement it?

 

Does a transformer (or something like a junction?) exist which would act as a "gateway" and pass the data further downstream ONLY if a certain condition is met?

 

Thanks in advance for any information

I'm not sure how what you implemented works for all features. When I do this, I just put a Tester on the connection to the FeatureWriter with the condition $(Debug) = Yes. This performs quite well.
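If you ever want the same gate in script form, a PythonCaller could do the equivalent check. Here is a minimal sketch, assuming a published parameter named Debug (a hypothetical name, swap in your own parameter) and the standard PythonCaller class template:

import fme

class FeatureProcessor(object):
    """Pass features downstream only when the Debug user parameter is 'Yes'."""

    def __init__(self):
        # Published parameter values are exposed to Python via fme.macroValues.
        # 'Debug' is a hypothetical parameter name; use your own parameter here.
        self.debug = fme.macroValues.get('Debug', 'No')

    def input(self, feature):
        if self.debug == 'Yes':
            # pyoutput() is provided by the PythonCaller at runtime.
            self.pyoutput(feature)

    def close(self):
        pass

In practice the Tester on the connection is simpler; a script like this is only worth it if the condition becomes more complicated than a single parameter check.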


Same here. I wouldn't think the Tester has much of a measurable impact on performance.


But let's say the data you want to write to the file contains a lot of features (>100,000). If you put a Tester between the Writer and the transformer that precedes it, all of those >100,000 features will still be processed by the filter, won't they? Is that still more efficient than using a Sampler before the Tester? (Sorry if the questions sound naive, I am really a newbie to FME.)


Sure, but then only the sampled number of features will be written. If that is what you need, no problem.


FME (at least when it's processing in Bulk Mode) can process 100,000+ features very quickly. Attached is an example that creates 1,000,000 features, runs the test, and then opens an Inspector. The workspace takes only 3 seconds; without the inspectors it's more like 1-2 seconds.
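For a rough sense of scale outside of FME, here's a purely illustrative Python sketch (plain dicts standing in for features) showing that a single comparison per feature over a million records is a sub-second operation:

import time

# Hypothetical stand-in for 1,000,000 features: plain dicts with one attribute.
features = [{"id": i} for i in range(1_000_000)]

debug = "Yes"  # value the Debug user parameter would have in this scenario

start = time.perf_counter()
# Equivalent of a Tester on $(Debug) = Yes: one comparison per feature.
passed = [f for f in features if debug == "Yes"]
elapsed = time.perf_counter() - start

print(f"{len(passed)} features passed the test in {elapsed:.3f} s")

The per-feature cost of the test itself is tiny; the time is dominated by whatever the writer does afterwards.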

 


Thanks to both of you for your insights!

