In large and complex FME workspaces, memory usage can become a major performance bottleneck, especially when feature caching is enabled for debugging or when large feature streams are processed sequentially.
It would be incredibly useful to have a dedicated transformer (e.g., DropFeatureCache, ClearMemory, or similar) that could be placed mid-flow to explicitly clear cached data or release memory held by earlier processing paths.
This would be particularly helpful in long chains of transformers, loops, or branching logic where intermediate data is no longer needed downstream but is still retained in memory.
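In the meantime, a partial workaround is to shrink each feature's payload mid-flow by dropping bulky attributes (or geometry) that downstream transformers no longer need. Below is a minimal PythonCaller sketch of that idea; the attribute names in BULKY_ATTRS are hypothetical placeholders, and note that this only trims per-feature data, it does not release FME's internal feature caches, which is exactly what the proposed transformer would address.

```python
# PythonCaller sketch: strip heavyweight attributes mid-flow to reduce each
# feature's memory footprint. This is a workaround only -- it does not clear
# feature caches or memory already held by earlier parts of the workspace.
import fmeobjects

# Hypothetical list of attributes known to be large and unused downstream.
BULKY_ATTRS = ["raw_xml_blob", "scratch_geometry_wkt"]

class FeatureProcessor(object):
    def input(self, feature):
        # Remove heavyweight attributes produced by earlier transformers.
        for name in BULKY_ATTRS:
            if feature.getAttribute(name) is not None:
                feature.removeAttribute(name)
        # Optionally replace the geometry with a null geometry if it is
        # no longer needed downstream:
        # feature.setGeometry(fmeobjects.FMENull())
        self.pyoutput(feature)

    def close(self):
        pass
```

Combined with disabling feature caching on the heavy branches, this can help, but a first-class transformer that explicitly releases cached data would be far cleaner.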