Solved

Clipper-Transformer: stop clipper-reader from re-reading when batching

  • July 8, 2021
  • 1 reply
  • 23 views

m_vollmer
Participant

Hi!

 

I have an issue with the Clipper transformer, which I would like to use in a batch deploy.

 

The clipper input is a Shapefile with about 1.7 million polygons defining a grid. The clippees are a few hundred point clouds in LAZ format that should be clipped according to the grid. I use batch deploy because otherwise the point clouds fill up the disk space and make FME crash.

 

The issue with batch deploy is that the Shapefile (clipper) is re-read every time a new clippee is loaded, which is very time-consuming.

 

Thanks a lot and best regards,

Matthias



1 reply

cfvonner
Supporter
  • 46 replies
  • Best Answer
  • June 13, 2023

If you use a FeatureReader to read in the clipper features, you can specify caching settings in the FeatureReader parameters:

 

[Image: FeatureReader parameters showing the caching settings]

If the clipper feature data does not change frequently, set Cache Timeout to a high number of hours. The first time the workspace is run, the clipper features will be read from the source and cached. Subsequent runs within the configured timeout period will use the cache instead of re-reading the data from the source.
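To illustrate the effect of the Cache Timeout outside of FME, here is a minimal Python sketch of the same read-once-and-reuse pattern. The names used here (read_clipper_grid, clipper_cache.pkl, grid.shp, CACHE_TIMEOUT_HOURS) are hypothetical placeholders, not FME API calls; in the workspace itself the caching is handled entirely by the FeatureReader parameters shown above.

```python
import os
import pickle
import time

CACHE_FILE = "clipper_cache.pkl"   # hypothetical cache location
CACHE_TIMEOUT_HOURS = 24           # analogous to the FeatureReader "Cache Timeout"

def read_clipper_grid(shapefile_path):
    # Placeholder for the expensive read of ~1.7 million grid polygons;
    # swap in your actual Shapefile reader here.
    print(f"Reading clipper grid from {shapefile_path} (expensive)")
    return ["grid polygon features would go here"]

def load_clipper_features(shapefile_path):
    # Reuse the cached features if the cache file is younger than the timeout.
    if os.path.exists(CACHE_FILE):
        age_hours = (time.time() - os.path.getmtime(CACHE_FILE)) / 3600
        if age_hours < CACHE_TIMEOUT_HOURS:
            with open(CACHE_FILE, "rb") as f:
                return pickle.load(f)

    # Otherwise read from the source once and refresh the cache.
    features = read_clipper_grid(shapefile_path)
    with open(CACHE_FILE, "wb") as f:
        pickle.dump(features, f)
    return features

# Each batch run (one LAZ clippee at a time) calls this; only the first
# run within the timeout pays the cost of reading the full grid.
clipper = load_clipper_features("grid.shp")
```

The design point is the same in both cases: the expensive clipper read happens once, and every subsequent batch run within the timeout works from the cached copy instead of hitting the source again.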