
Hello


I have an FME Server (2019) instance hosted on FME Cloud (standard instance size). One of the hosted workspaces allows users to upload PDFs as input; it reads the pages as raster data and carries out some operations on that imagery.


Most of the time this workspace runs without issue, but a large PDF (or a small PDF with many pages) will cause it to use all available memory (8 GB on a standard instance), and FME Server becomes unresponsive. The only way out is to reboot the instance and cancel the job.


I have a couple of ways to mitigate the risk of crashing the server (a PDF file size limit, a job expiry time, etc.), but I'm wondering whether there is any way to manage the memory FME Server commits to a job, e.g. so that no more than 4 GB is ever committed to this workspace.


Or should I be revisiting the workspace to make it use memory more efficiently?
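(In case it helps frame the question: the general pattern I'd be aiming for is streaming pages one at a time rather than holding every rasterized page in memory at once. A minimal, FME-agnostic sketch of the idea in plain Python, where `render_page` is a hypothetical stand-in for whatever actually rasterizes a page, not a real FME or PDF-library call:)

```python
def render_page(page_number):
    # Hypothetical stand-in for the real rasterization step;
    # returns dummy bytes here so the sketch is self-contained.
    return bytes(1024)

def pages(page_count):
    """Yield one rendered page at a time, so peak memory is bounded
    by a single page rather than the whole document."""
    for n in range(page_count):
        yield render_page(n)

# Each page is processed and then released before the next is rendered.
total = 0
for raster in pages(100):
    total += len(raster)  # stand-in for the real per-page operation

print(total)
```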


I'm looking for a pointer in the right direction; any help is appreciated.

Revisiting the workspace to see if it can be improved is always a good idea, of course; this article has some helpful hints, although it's mostly written for FME Desktop.

Another thing to look at is your temporary disk space: if memory fills up, that's where it will swap to.

Also, are you sure the job hangs, or is it just taking a long time? You can check that by going to the Running jobs list, selecting the job, and looking at its log. Refresh it every now and then, and you should see lines being added if the job is still running.
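On the temporary disk space point, here's a quick stdlib-Python check of free space in the system temp directory. This is a generic sketch: the actual temp location FME uses on your instance may be configured elsewhere.

```python
import shutil
import tempfile

# Locate the system temp directory and report its free space.
tmp = tempfile.gettempdir()
usage = shutil.disk_usage(tmp)
print(f"Free space in {tmp}: {usage.free / 1024**3:.1f} GB")
```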

There is no way to specify how much memory can be allocated to a single workspace. It might be a useful feature, though.
