
Hi there,

I'm having problems with my workspace. It's quite big:

  • reading an Oracle database
  • transforming and calculating data
  • writing to SpatiaLite

And all this on a virtual machine.

I cannot provide the workspace and data here as they are sensitive, but I'll try to explain what I already did:

  • The whole workspace throws an error: sometimes because of memory issues, sometimes unspecified (just 'Error during translation').
  • I already set FME_TEMP to another disk (see the sketch after this list).
  • I ran it in debug mode: no hints about the abrupt termination.
  • I disabled parts of the workspace to narrow things down and activated feature caching: running it part by part, including writing to SpatiaLite, it didn't fail! But I saw the temp directory grow to 40 GB.
  • I ran the workspace in batch mode - it failed.
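
For reference, a minimal sketch of how I drive the batch run with FME_TEMP redirected (the install path, temp path, and workspace name are placeholders, not my real setup):

```python
import os
import subprocess

# Point FME_TEMP at a drive with more room (placeholder path)
env = os.environ.copy()
env["FME_TEMP"] = r"D:\fme_temp"

# Run the workspace in batch mode; fme.exe location and workspace
# path are placeholders for the actual install
result = subprocess.run(
    [r"C:\Program Files\FME\fme.exe", r"C:\workspaces\oracle_to_spatialite.fmw"],
    env=env,
    capture_output=True,
    text=True,
)
print(result.returncode)
print(result.stdout[-2000:])  # tail of the translation log
```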

 

What else can I try? It seems to be a space issue (disk and/or memory). Does anyone have experience with this (especially on a virtual machine)?

 

Cheers,

Maria

You should make sure that the C drive has enough space to support the maximum allocated virtual memory (this is separate from FME_TEMP). You should see a line about this near the top of the log file.

Typically this is about 3 times the amount of physical memory.

Virtual memory uses the hard disk, so in a situation where you have limited space you can run into issues.

This is just one thing to try. Of course, it might be that you simply need to make more memory available to the process.
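
If you want to sanity-check this quickly, here's a rough Windows-only Python sketch (stdlib only; the 3x factor is just the rule of thumb above, not a hard limit):

```python
import ctypes
import shutil

def physical_memory_bytes():
    """Total physical RAM via the GlobalMemoryStatusEx Windows API."""
    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", ctypes.c_ulong),
            ("dwMemoryLoad", ctypes.c_ulong),
            ("ullTotalPhys", ctypes.c_ulonglong),
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]
    stat = MEMORYSTATUSEX()
    stat.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
    return stat.ullTotalPhys

ram = physical_memory_bytes()
free = shutil.disk_usage("C:\\").free
print(f"RAM: {ram / 2**30:.1f} GiB, free on C: {free / 2**30:.1f} GiB")
if free < 3 * ram:
    print("Warning: less than 3x RAM free on C: -- virtual memory may be squeezed")
```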


Seeing the temp directory grow to 40 GB isn't uncommon when using feature caching; I filled up a C:\ drive with it yesterday! In my case I was getting a large JSON response from a web service and exploding it into 5,000 features, each of which then had the response attached to it, as well as a formatted version, and this was repeated over several transformers!

Go through each transformer and get rid of any attributes that are no longer needed as soon as possible, particularly large ones like JSON responses.
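
If it's easier than cleaning up in many places, a PythonCaller along these lines could strip oversized attribute values in one spot (the 10 kB threshold and the keep-list are just assumptions, adapt them to your schema; the usual AttributeRemover/AttributeKeeper transformers do the job too):

```python
# PythonCaller body: drop any non-FME attribute whose string value
# exceeds a size threshold (e.g. raw JSON responses).
KEEP = {"id", "name"}  # hypothetical attributes you still need downstream

class FeatureProcessor(object):
    def input(self, feature):
        for attr in feature.getAllAttributeNames():
            if attr in KEEP or attr.startswith("fme_"):
                continue  # keep wanted attributes and FME format attributes
            value = feature.getAttribute(attr)
            if isinstance(value, str) and len(value) > 10_000:
                feature.removeAttribute(attr)
        self.pyoutput(feature)
```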


One thing to check, which I've had problems with: on the VM host, make sure the RAM allocation is reserved solely for that machine, i.e. no over-provisioning or dynamic allocation of RAM.



Yes, I did clean the attributes; only the important ones go through the workspace. Still an error ... 😶



Thanks for the hint about the virtual memory. We did adapt it, but the workspace still fails :-(


Hi all,

well, we made some adjustments to the virtual memory and cleaned the attributes, but the workspace still fails :-(

Now even a simpler workspace fails. The reader is still Oracle. I cannot provide the workspace and data (security restrictions), but I've attached the log file (including debug information) and a screenshot of the workspace. Hope this helps.




@gpt_geoinfo

Could you share how long this takes and how many records are expected to be read (just to get a sense of the size of the data being processed)?

Something you can try is to add some Recorder transformers after the Oracle readers and record the data locally (all other transformers disabled, feature caching turned off).

If this completes... you now have the data locally.

Next, disable the Oracle readers and add Player transformers to your workspace (to read back the FFS files the Recorders produced). Run the workspace... does it fail again? This might help you isolate the transformer that is causing the process to fail.

 

Check your disk space before and after; it sounds like you are pushing the system to its limits. More RAM? More disk space? Moving FME_TEMP is a good option, but again... the drive may still be filling up, so watch the disk space closely (a small monitor sketch is below).
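
If it helps, a tiny monitor you can leave running in a second window while the translation runs will show whether the temp drive fills up right before the failure (the drive letter is a placeholder):

```python
import shutil
import time

TEMP_DRIVE = "D:\\"  # wherever FME_TEMP points; placeholder

# Log free space every 5 seconds; stop with Ctrl+C
while True:
    free_gb = shutil.disk_usage(TEMP_DRIVE).free / 2**30
    print(f"{time.strftime('%H:%M:%S')}  free on {TEMP_DRIVE}: {free_gb:.1f} GB")
    time.sleep(5)
```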

 

Are there any other competing applications on the server that could be taking memory or resources away from FME?

Good luck!

