Hi,

I created a custom transformer to analyse the relationship between a polygon's vertices and segments; it uses some group-based transformers inside.

Now I have millions of polygons, but I don't know how to make the transformer process the polygons one by one. The translation needs too much memory to run.

Thanks
Hi,

It's difficult to suggest a general approach. The concrete solution will differ depending on the actual conditions, or there may not be any solution...

What kind of processing does the custom transformer perform? If the processing is not so complicated, replacing those group-based transformers with a PythonCaller could be a solution.
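If the analysis really is just vertex-versus-segment within the same polygon, a feature-by-feature PythonCaller needs no Group By at all and never holds more than one polygon in memory. A minimal sketch under that assumption, for a simple single-ring polygon (no holes); the attribute name "_min_gap" and the vertex-to-segment distance test are only placeholders for whatever your real analysis does:

    import fmeobjects  # part of the standard PythonCaller template; not used directly below

    def _point_segment_distance(px, py, ax, ay, bx, by):
        # distance from point P to segment AB
        abx, aby = bx - ax, by - ay
        apx, apy = px - ax, py - ay
        ab2 = abx * abx + aby * aby
        if ab2 == 0.0:                                  # degenerate segment
            return (apx * apx + apy * apy) ** 0.5
        t = max(0.0, min(1.0, (apx * abx + apy * aby) / ab2))
        cx, cy = ax + t * abx, ay + t * aby
        return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

    class FeatureProcessor(object):
        def input(self, feature):
            # one polygon in, one polygon out: nothing is buffered, so memory
            # stays flat no matter how many polygons pass through
            coords = [(c[0], c[1]) for c in feature.getAllCoordinates()]
            if len(coords) > 1 and coords[0] == coords[-1]:
                coords = coords[:-1]                    # drop the duplicated closing vertex
            m = len(coords)
            min_gap = None
            for i in range(m):
                px, py = coords[i]
                for j in range(m):
                    if j == i or j == (i - 1) % m:
                        continue                        # skip the two edges touching vertex i
                    ax, ay = coords[j]
                    bx, by = coords[(j + 1) % m]
                    d = _point_segment_distance(px, py, ax, ay, bx, by)
                    if min_gap is None or d < min_gap:
                        min_gap = d
            feature.setAttribute("_min_gap", -1.0 if min_gap is None else min_gap)
            self.pyoutput(feature)

        def close(self):
            pass

The loop is O(n^2) per polygon, but because each feature is output immediately, memory use does not grow with the number of polygons.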

Takashi
Hi,

The transformer finds a polygon's narrow parts. A PythonCaller would be an easy way, but I can't write such complicated code.

In the transformer I first assign each polygon a unique ID, and the other group-based transformers Group By that ID.

This morning I tried the custom transformer's "Parallel Process By" function: I split the 10 source polygons into 5 parts, assigned each part a partID, and then sent them into the custom transformer in order. The custom transformer does Parallel Process By the partID (I set the "Parallel Process Groups are Ordered" option to Yes).

Although the result is right, I saw in the log window that FME created 10 fmeworker sessions to finish the job. This method uses little memory, but much more time is spent starting and stopping the FME sessions.

If this method created just 5 sessions it might be a good one. I could split the source data into parts to balance memory and time.
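For the real data, the part assignment itself could probably be a very small counter-based PythonCaller instead of splitting the source by hand. Just a sketch; NUM_PER_PART and the attribute name partID are placeholders for tuning the memory/time balance:

    NUM_PER_PART = 200000  # placeholder: polygons per part; bigger = fewer sessions, more memory each

    class FeatureProcessor(object):
        def __init__(self):
            self.count = 0

        def input(self, feature):
            # contiguous blocks: the first NUM_PER_PART polygons get partID 0,
            # the next block gets 1, and so on, so each group stays together
            # and the groups still arrive in order
            feature.setAttribute("partID", self.count // NUM_PER_PART)
            self.count += 1
            self.pyoutput(feature)

        def close(self):
            pass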

Endest
