Question

Out of memory - Linux

  • 10 February 2016
  • 5 replies
  • 6 views

Badge

Hey All,

So I'm running a pretty intensive FME workflow consisting of multiple workspaces, some memory-intensive and some which I didn't think were too bad:

Workspace 1: clip large dataset A by large dataset B;

Workspace 2: take a large dataset from Postgres, rename some attributes and push it back into Postgres.

My problem is... the memory-intensive workspace (workspace 1) ran successfully, but workspace 2 terminated unexpectedly with the following error:

WARN  |The system is running low on memory. FME is at risk of being terminated by the OS. Please read the FME Help section 'Adjusting Memory Resources' for workarounds.

I'm running 64-bit FME 2015 on a Linux box with 32 cores and 64 GB of RAM.

So getting the above error is odd. I thought 64-bit versions of FME managed the memory usage?

I thought that, as FME should be managing the memory and the workspaces are run sequentially one after the other, the memory-intensive workspace may possibly not have released all the memory after it terminated... is this possible?


5 replies

Badge

This looks to be a multi-writer issue.

As the dataset (~32 million records) needs to be written to 2 different schemas in the database, and FME can only have 1 writer open at a time, all the data written to the 1st schema is also held in a cache until writer 1 closes. Once writer 1 closes, writer 2 opens its connection and inserts the data from the cache!
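As a rough back-of-envelope (the per-feature size below is purely a guess, not something I've measured), that cache alone could swallow a big chunk of the 64 GB:

    # Rough estimate of the memory held in FME's feature cache while writer 1
    # is still open. The per-feature size is an assumption, not a measurement.
    records = 32_000_000          # ~32 million records in the dataset
    bytes_per_feature = 1024      # assumed ~1 KB per cached feature (guess)

    cache_gb = records * bytes_per_feature / 1024 ** 3
    print(f"Estimated cache size: {cache_gb:.1f} GB")   # ~30.5 GB on these assumptions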

The sample data I was using wasn't large enough to raise this as an issue.

I'll have to come up with a workaround! :)

Userlevel 4

You're probably on to something. You'll want to reorder your writers so that the one that receives the most data comes first in the Navigator. Have a look here for more info.

You can add swap space to your Linux machine to give FME access to more memory. The result is FME running much faster than if it tried to manage memory itself.

For more information see this FME documentation on adjusting memory resources, and Adding more swap on Linux.
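If it helps, the whole procedure is only a handful of commands. Here's a minimal sketch wrapped in Python (the 16 GB size and the /swapfile path are arbitrary choices for illustration, and it needs to run as root):

    # Sketch of creating and enabling a swap file on Linux (requires root).
    # The size (16G) and the path (/swapfile) are arbitrary, for illustration only.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["fallocate", "-l", "16G", "/swapfile"])   # reserve space for the swap file
    run(["chmod", "600", "/swapfile"])             # swap files must not be world-readable
    run(["mkswap", "/swapfile"])                   # format it as swap
    run(["swapon", "/swapfile"])                   # enable it immediately
    run(["swapon", "--show"])                      # confirm the new swap area is active

Add an /etc/fstab entry (e.g. "/swapfile none swap sw 0 0") if the swap file should survive a reboot.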

Badge

The same data is written to both schemas, so no matter what order the writers are in, you get the out-of-memory error. One schema gets 2 additional attributes, so I've written to that one and then created a Postgres function to replicate the data to the second schema. :)
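For anyone curious, the idea is roughly an INSERT ... SELECT from one schema into the other. A sketch (the schema, table and column names plus the connection string are placeholders, and psycopg2 is just one way to drive it from Python):

    # Sketch: replicate the freshly loaded table from schema_a to schema_b
    # inside Postgres, so FME only ever needs one writer open.
    # All names here (schemas, table, columns, DSN) are placeholders.
    import psycopg2

    REPLICATE_SQL = """
    INSERT INTO schema_b.my_table (id, geom, name)
    SELECT id, geom, name          -- copy only the columns schema_b needs
    FROM schema_a.my_table;
    """

    conn = psycopg2.connect("dbname=gis user=fme host=localhost")
    try:
        with conn, conn.cursor() as cur:   # commits on success, rolls back on error
            cur.execute(REPLICATE_SQL)
    finally:
        conn.close()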

Badge

Given the size of the dataset, FME running fast would be a bonus. Cheers Chris, I'll give this a go!

Reply