Hi all,

I'd like to read a large Smallworld dataset (millions of features) in several tranches in FME Workbench, perhaps in blocks of 500,000 records at a time, since my workspace takes a long time to complete because of the complexity of the geometries.

If I use the "Start Feature" and "Max Features to Read" parameters, can I assume the reading order is always respected, without any features being skipped?

Are there any other techniques for reading large datasets?

Thanks in advance!!!

Which Smallworld reader are you using (the Smallworld edition or the SpatialBiz plugin)?


Hi eric_jan,

I'm using the Smallworld edition.

I tried to skip millions of records, but I got "Insufficient memory available -- error code was 2". My RAM was exhausted!

FME reads all the records instead of skipping them!


Instead of placing the entire dataset in the "Export to FME" group in Explorer, you can also run a Smallworld query that selects a subset of it, and then export only that selection. One way to divide and conquer is to export the records by spatial region, using a set of non-overlapping administrative boundaries.
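
If you script this, the same workspace can be run once per region from outside FME. Below is a minimal sketch of that batch loop, assuming a hypothetical published parameter REGION_NAME that drives the Smallworld query; the workspace path, the fme.exe location, and the region list are placeholders you would replace with your own.

```python
import subprocess

# Hypothetical region identifiers; replace with the actual
# non-overlapping administrative boundaries in your dataset.
REGIONS = ["NORTH", "SOUTH", "EAST", "WEST"]

WORKSPACE = r"C:\fme\export_smallworld.fmw"  # assumed workspace path
FME_EXE = r"C:\Program Files\FME\fme.exe"    # assumed install location

for region in REGIONS:
    # Run one translation per boundary. REGION_NAME is a hypothetical
    # published parameter that the Smallworld query inside the
    # workspace would use to limit the selection.
    result = subprocess.run(
        [FME_EXE, WORKSPACE, "--REGION_NAME", region],
        capture_output=True, text=True,
    )
    print(f"{region}: exit code {result.returncode}")
```

Each run then only holds one region's features in memory, which is the point of the divide-and-conquer approach.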

I think even setting "Start Feature" will mean that FME Workbench has to at least read in all of the input features. If that process itself consumes a lot of memory, it won't help much.
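
For completeness, here is what the tranche approach from the original post could look like when driven from outside Workbench. It assumes "Start Feature" and "Max Features to Read" have been exposed as hypothetical published parameters named START_FEATURE and MAX_FEATURES; the paths and the total feature count are placeholders. Given the caveat above, this bounds the size of each translation but may not reduce the cost of scanning the skipped features.

```python
import subprocess

WORKSPACE = r"C:\fme\export_smallworld.fmw"  # assumed workspace path
FME_EXE = r"C:\Program Files\FME\fme.exe"    # assumed install location

BLOCK_SIZE = 500_000        # records per tranche, as in the original post
TOTAL_FEATURES = 3_000_000  # rough upper bound on the dataset size

# Launch one translation per block of BLOCK_SIZE features.
for start in range(0, TOTAL_FEATURES, BLOCK_SIZE):
    subprocess.run(
        [
            FME_EXE, WORKSPACE,
            "--START_FEATURE", str(start),      # hypothetical parameter
            "--MAX_FEATURES", str(BLOCK_SIZE),  # hypothetical parameter
        ],
        check=True,  # stop if any tranche fails
    )
```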

A couple of other potential ways to reduce memory consumption: simplify the input feature schema by removing attributes that are not necessary, and drop any pseudo fields that aren't needed.


Hi,

Do you have any other suggestions or ideas to solve this?

