Hello,


I use a Reader to list files inside a directory, connected to a FeatureReader to read the features.


Currently, all features from all files are merged, i.e. if I have 100 files with 100 features each, the output of the FeatureReader will be 10,000 features.


Is it possible to iterate over each file incrementally, i.e. to execute the pipeline independently for each input file, so that the FeatureReader is executed 100 times, producing 100 output features each time?


Thanks!

@0x974 The FeatureReader is triggered each time a feature enters it, so in your case it is already "executed" 100 times. After the FeatureReader you'll need to group by fme_basename or another attribute value that is unique to each input file. Those 10,000 features should come out of the FeatureReader in the same order that the 100 files went in.
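To illustrate the grouping idea, here is a minimal Python sketch (not FME itself) of what grouping the merged stream by fme_basename does. The feature dicts and attribute values are hypothetical stand-ins; in FME you would use a Sorter/Aggregator or a transformer's Group By option with fme_basename.

```python
from itertools import groupby

# Hypothetical stand-in for features leaving the FeatureReader: each record
# carries its source file's basename (FME exposes this as fme_basename).
features = [
    {"fme_basename": "file_a", "id": 1},
    {"fme_basename": "file_a", "id": 2},
    {"fme_basename": "file_b", "id": 3},
]

# Because features exit the FeatureReader in file order, an ordered grouping
# splits the merged stream back into per-file batches.
batches = {
    name: [f["id"] for f in group]
    for name, group in groupby(features, key=lambda f: f["fme_basename"])
}
print(batches)  # {'file_a': [1, 2], 'file_b': [3]}
```

Note that itertools.groupby, like FME's group-by-with-ordered-input options, only works because the features arrive sorted by source file; unordered input would need an explicit sort first.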


If you put the FeatureReader and everything downstream into a separate workspace, you could feed your 'files' reader into a WorkspaceRunner. For each file that enters the WorkspaceRunner, it would process that independently in the 2nd workspace.
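Conceptually, the WorkspaceRunner launches one independent run of the child workspace per incoming feature. A rough Python sketch of that behaviour using the FME command line is below; the workspace name, file pattern, and the SourceDataset published parameter are assumptions you would replace with your own.

```python
import subprocess
from pathlib import Path

def build_fme_command(workspace: str, source: str) -> list:
    # One FME command line per input file. "SourceDataset" is a hypothetical
    # published parameter name - match it to your child workspace.
    return ["fme", workspace, "--SourceDataset", source]

def run_per_file(workspace: str, folder: str, pattern: str = "*.gml") -> None:
    # Mimics the WorkspaceRunner: each file gets its own independent run,
    # so the FeatureReader in the child workspace only ever sees one file.
    for path in sorted(Path(folder).glob(pattern)):
        subprocess.run(build_fme_command(workspace, str(path)), check=True)
```

This is only a sketch of the pattern; in practice the WorkspaceRunner transformer handles the process launching (and optional parallelism) for you.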

Yes, I did this, and it worked like a charm. Thanks!
