I was wondering if anyone had any information on best practices for using junctions and bookmarks when building a custom transformer?

 

I have a pretty extensive custom transformer that takes hours to complete. It is a looping transformer that utilizes multiple blocking transformers (Group By), lists, and many pipelines branching off junctions. I am currently taking steps to tune its performance. The log file had junctions appearing over and over, leading me to suspect that eliminating them would reduce overall compute time. I have experimented with collapsed vs. expanded bookmarks using the caching feature, but have yet to find any differences--my logic was that collapsing bookmarks reduces feature caching and therefore speeds up compute.

 

Thanks and cheers!!

 

-Alex

 

 

@alexlynch3450​ Junctions shouldn't add any significant stress to your workflow, apart from the fact that they also cache data when Feature Caching is turned on.

When you collapse a bookmark, feature caching is turned off for all of its contents, so if you run your workspaces with feature caching on, collapsing bookmarks will significantly reduce the processing load.

For production jobs, we'd recommend turning off feature caching.

Other performance tips are here. If you're using feature caching a lot, try setting your FME_TEMP to an SSD (solid-state drive).
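On Linux/macOS, pointing FME_TEMP at an SSD is just a matter of exporting the environment variable before launching FME. A minimal sketch — the directory path below is only an example, so substitute any SSD-backed location on your machine:

```shell
# Point FME's temporary-file location (used for feature caching, among
# other things) at a directory on a fast SSD.
# /mnt/fast_ssd/fme_temp is a placeholder path -- use your own SSD mount.
export FME_TEMP=/mnt/fast_ssd/fme_temp

# Make sure the directory actually exists before FME tries to write to it.
mkdir -p "$FME_TEMP"
```

On Windows you would set FME_TEMP as a system environment variable instead (e.g. via System Properties or `setx`), then restart FME Workbench so it picks up the change.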
