
I get this error only for this scheduler job. All other jobs run fine.

Insufficient memory
available -- error code was 2 - please read the FME Help section 'Adjusting
Memory Resources' for workarounds.

Nothing much to clean up in /temp folder. How can I make this job run?

It would probably help to explain the context of the job, the server environment (32/64-bit), etc.

 


Also, how much data is there? Sometimes this is due to genuinely running out of memory (keep an eye on the system resources while running, to see if available memory is constantly reduced), but I have also seen such an error occur because of a piece of bad data. If the dataset has only a few features, it might be a corrupt feature rather than a lack of memory.
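If it helps to quantify "constantly reduced", here is a minimal, hypothetical Python sketch (the function name and sample values are my own, not anything from FME) for checking whether a series of available-memory readings trends steadily downward rather than just fluctuating:

```python
def steadily_decreasing(samples, tolerance=0):
    """Return True if each available-memory reading is no higher than
    the previous one (within tolerance) -- i.e. memory is being
    constantly reduced rather than just fluctuating."""
    return all(b <= a + tolerance for a, b in zip(samples, samples[1:]))

# Example: available MB sampled once a minute while the job runs.
print(steadily_decreasing([8000, 7200, 6500, 5100]))  # True: steady decline
print(steadily_decreasing([8000, 7500, 7900, 6000]))  # False: recovers at 7900
```

On Windows you could collect the samples with Performance Monitor or `typeperf "\Memory\Available MBytes"`; the trend check itself is the point here.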

 



 

@Mark2AtSafe - Yes, system resources (memory) are being used up, almost 90% of them. There is nothing wrong with the dataset, as the job is successful in other environments.

 



 

@mark_1spatial - It runs on FME 2015.1.3.2, 32-bit.
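For context, a 32-bit process can address only about 2 GB (up to 4 GB if built large-address-aware), no matter how much RAM the machine has installed. If you're ever unsure which build a process is, a quick sketch in Python (e.g. run from an FME PythonCaller, checking the interpreter's own pointer width):

```python
import sys

def pointer_width_bits(maxsize=sys.maxsize):
    # sys.maxsize is 2**31 - 1 on a 32-bit build and 2**63 - 1 on a
    # 64-bit build, so comparing against 2**32 distinguishes the two.
    return 64 if maxsize > 2**32 else 32

print(pointer_width_bits())  # 32 or 64, depending on the interpreter
```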

 



 

Fair enough. Then I think you'll have to post the workspace so we can see what it is doing. Either it's a memory leak (and we'd need to run it at Safe to figure that out) or it's just more data than that environment can handle (so if we can see the workspace we might be able to figure out how to cut back on memory usage).

 

Either way, if it's using 90% of system memory resources then changing to 64-bit won't help. Still surprising it doesn't cache data first before failing, but I guess that can happen sometimes. What is the environment, btw? How much memory does it have installed?

 


 

Not FME Server but I saw significant improvements between FME Desktop 2015 vs 2016.

 

@mark_1spatial, we have yet to upgrade to FME 2017.

 



 

@mark_1spatial, FYI, more detailed specs:

 

 

OS: Windows 2012 R2

CPU: 4

RAM: 16 GB

 


As @mark_1spatial says, it's nearly impossible to give a precise answer to such a broad question; there are so many factors that influence memory consumption, and most of them are related to your data and how your workspace is authored.

However, Safe has published a great resource with a lot of general tips and tricks here: Performance Tuning FME. It's a very good starting point for narrowing down the issue.

My number one tip: if you have a lot of data passing through blocking transformers (e.g. the FeatureMerger), try your hardest to "unblock" them by carefully manipulating the data flow and how they're configured. Done right, this can drastically reduce memory consumption while executing your workspace. More info here: https://knowledge.safe.com/articles/38700/clearing-blocking-transformers.html
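As a rough illustration of why unblocking helps (a plain-Python analogy, not FME's actual implementation), compare accumulating every feature before emitting any group with streaming groups off already-sorted input, which is roughly what the group-by "input is ordered" style options let a transformer do:

```python
from itertools import groupby
from operator import itemgetter

def blocking_group(features):
    # Blocking: every feature is held in memory until the last one
    # arrives, then all groups are emitted at once.
    groups = {}
    for key, value in features:
        groups.setdefault(key, []).append(value)
    return list(groups.items())

def streaming_group(sorted_features):
    # Streaming: if the input is already sorted by key, each group can
    # be emitted as soon as the key changes, so at most one group is
    # held in memory at a time.
    for key, items in groupby(sorted_features, key=itemgetter(0)):
        yield key, [value for _, value in items]
```

Same output, very different peak memory on large inputs; the FME-side equivalent is sorting upstream and telling the transformer the input is ordered by group.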



 

 

@fmeuser_gc you said: "There is nothing wrong with the dataset as the job is successful in other environments."

 

Can you elaborate - other FME Server environments? Running on FME Desktop? What are the differences between these and the one where it fails? Version, Memory, CPU?
