Hello everyone,
I’m working on an FME workspace where I need to process multiple input files sequentially within the same workflow, and I want to make sure each file is processed independently, with no data or state carried over in memory from the previous file.
Conceptually, what I’m looking for is a way to flush or reset the internal state between files, so that:
- cached features,
- temporary aggregates,
- statistics,
- or in-memory objects
from a previously processed file do not affect the processing of the next one.
Context
- I’m looping over multiple files (using a file-based workflow, not separate workspaces).
- The workflow includes transformers that may keep internal state (e.g. aggregations, statistics, grouping, geometry operations).
- The goal is to guarantee deterministic, per-file processing, similar to a “clean run” for each file.
My questions
- Is there a native way in FME to explicitly flush / reset the internal state of a workspace between input files?
- Are there specific transformers or workspace settings that:
- automatically reset state per file, or
- should be avoided when processing multiple files in the same run?
- Is the recommended approach to:
- use a workspace per file,
- use feature-based partitioning (e.g. Group By),
- or rely on specific transformers (e.g. FeatureHolder, FeatureMerger) to isolate state?
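For what it’s worth, one way to reason about the “workspace per file” option: launching a fresh FME process per file from a small driver script guarantees a clean run, because all transformer caches, aggregates, and statistics die with the process. Below is a minimal sketch of such a driver, assuming the file list is known outside FME; the workspace name (`per_file.fmw`), the published parameter name (`SourceDataset_CSV`), and the input glob are hypothetical placeholders, not anything from your setup.

```python
import glob
import subprocess

def build_fme_command(fme_exe, workspace, source_param, src_file):
    """Build the command line for one isolated FME run.

    Each invocation starts a fresh FME process, so nothing
    (caches, aggregates, statistics) survives from the
    previous file's run.
    """
    return [fme_exe, workspace, f"--{source_param}", src_file]

if __name__ == "__main__":
    # Hypothetical paths and parameter name -- adjust to your workspace.
    for src in sorted(glob.glob("input/*.csv")):
        cmd = build_fme_command("fme", "per_file.fmw", "SourceDataset_CSV", src)
        # subprocess.run(cmd, check=True)  # enable on a machine with FME installed
        print(" ".join(cmd))
```

The same isolation can be had from inside FME with a WorkspaceRunner calling a child workspace per file, which is usually preferred over Group By when the goal is a guaranteed clean run rather than just per-group aggregation.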


