
Hello everyone,

I have a line feature class which I have to buffer and dissolve. When I use the Bufferer and Dissolver, it takes a very long time because I have a large number of features. If I buffer the lines with Group By set to fme_feature_type, the output buffer polygons are already completely dissolved, so I do not need to use the Dissolver, but that also takes a lot of time. I tried this approach with a small set of features and it is faster than using the Dissolver, but it shows the messages below and keeps running for a long time. Do I have to use any additional transformer to improve the workspace? I attached my workflow below.

 


 


Constructing donuts from 11750 features...
PolygonDissolveFactory: Completed 100% of processing on pass 5.
Completed 379339 segments of intersection processing
Performing low-level intersection at phase #1... 3.44% done
Performing low-level intersection at phase #1... 100% done
Completed intersection processing, phase #2. 2 new nodes were generated among 11599 intermediate lines
Finish splitting 11600 lines into 11611 intermediate lines for phase 2

Thanks

If you're dissolving buffer overlaps, try generalizing the buffers as much as possible before dissolving.
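As an illustration of the generalize-then-dissolve idea outside FME, here is a minimal sketch using the Python shapely library (the library, sample geometries, buffer distance and tolerance are assumptions for the example, not part of the original workspace):

from shapely.geometry import LineString
from shapely.ops import unary_union

# Sample lines standing in for the real road features
lines = [LineString([(0, 0), (100, 0)]), LineString([(80, 5), (180, 5)])]

buffer_dist = 10.0   # buffer distance (assumed)
tolerance = 1.0      # generalization tolerance; must be tuned to the data

buffers = [ln.buffer(buffer_dist) for ln in lines]                              # Bufferer step
generalized = [b.simplify(tolerance, preserve_topology=True) for b in buffers]  # Generalizer step
dissolved = unary_union(generalized)                                            # Dissolver step
print(dissolved.geom_type)

Simplifying first reduces the number of vertices the dissolve has to intersect, which is where most of the time goes.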


If you're dissolving buffer overlaps, try generalizing the buffers as much as possible before dissolving.

Thanks david_r. I have also tried generalizing the buffers before dissolving, but it also takes a very long time. Is it possible to do this task with another approach in a reasonable time?

 

 


Are you using rounded caps on your buffers? If yes, is that really necessary or could you consider using square caps?

Also, how many buffer features are you sending into the Dissolver?
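For what it's worth, the difference is easy to see outside FME; a small sketch with shapely (an assumed stand-in, not the Bufferer itself) shows how many more vertices round caps add to each buffer:

from shapely.geometry import LineString

line = LineString([(0, 0), (100, 0)])
round_buf = line.buffer(10, cap_style=1)   # 1 = round caps: ends are approximated by many arc vertices
square_buf = line.buffer(10, cap_style=3)  # 3 = square caps: only a few corner vertices
print(len(round_buf.exterior.coords), "vs", len(square_buf.exterior.coords))

Fewer vertices per buffer means much less work for every later intersection and dissolve step.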


Hi @yasinmelon1, I think the performance of geometric operations depends strongly on the data conditions (sizes, number of features, their spatial relations, etc.), and there is no generic way to improve it. However, in my experience, a dissolving operation can take a long time (or, in the worst case, generate unexpected results) when there are very narrow overlap areas between adjacent polygons. In some cases, it can be effective to use the Snapper to resolve the narrow overlap areas before dissolving.


Hi @yasinmelon1, I think the performance of geometric operations depends strongly on the data conditions (sizes, number of features, their spatial relations, etc.), and there is no generic way to improve it. However, in my experience, a dissolving operation can take a long time (or, in the worst case, generate unexpected results) when there are very narrow overlap areas between adjacent polygons. In some cases, it can be effective to use the Snapper to resolve the narrow overlap areas before dissolving.

Snapping Type: Segment Snapping
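To illustrate the idea (a rough stand-in using shapely's snap, not the FME Snapper's segment snapping), pulling near-coincident boundaries together within a tolerance lets the dissolve produce one clean polygon instead of leaving slivers; the geometries and tolerance below are made up for the example:

from shapely.geometry import Polygon
from shapely.ops import snap, unary_union

tol = 0.05  # snapping tolerance (assumed; must be tuned to the data)

a = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
b = Polygon([(10.02, 0), (20, 0), (20, 10), (10.02, 10)])   # boundary sits 0.02 away from a: a narrow sliver

b_snapped = snap(b, a, tol)            # pull b's nearby vertices onto a's boundary
merged = unary_union([a, b_snapped])
print(merged.geom_type)                # Polygon; without snapping it would stay a MultiPolygon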

 

 


You could also use the 'Group By' function in the Bufferer transformer, referring to the same attributes as you use in the Dissolver transformer.
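Conceptually, Group By just partitions the features before buffering and dissolving each partition on its own. A minimal sketch of that idea in Python/shapely (the attribute values, geometries and buffer distance are made up for the example):

from collections import defaultdict
from shapely.geometry import LineString
from shapely.ops import unary_union

# (highway class, geometry) pairs standing in for FME features
features = [
    ("primary",   LineString([(0, 0), (50, 0)])),
    ("primary",   LineString([(40, 2), (90, 2)])),
    ("secondary", LineString([(0, 20), (50, 20)])),
]

groups = defaultdict(list)
for highway, geom in features:
    groups[highway].append(geom.buffer(5))            # Bufferer with Group By

dissolved = {hw: unary_union(bufs) for hw, bufs in groups.items()}   # one dissolved area per group
for hw, geom in dissolved.items():
    print(hw, geom.geom_type)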


Are you using rounded caps on your buffers? If yes, is that really necessary or could you consider using square caps?

Also, how many buffer features are you sending into the Dissolver?

Thanks for your suggestion. I tried rounded caps on the buffers and will definitely try square caps.

 

I am sending 6 million buffer features into the Dissolver.

 

 


Hi @yasinmelon1, I think the performance of geometric operations depends strongly on the data conditions (sizes, number of features, their spatial relations, etc.), and there is no generic way to improve it. However, in my experience, a dissolving operation can take a long time (or, in the worst case, generate unexpected results) when there are very narrow overlap areas between adjacent polygons. In some cases, it can be effective to use the Snapper to resolve the narrow overlap areas before dissolving.

Thanks for your suggestion, takashi. As you know, I have been trying to complete this task with a huge number of features for the last 2 months, but I still have no better workflow to finish it.

 

 


If you have about 6 million buffered road segments (which by their nature I guess are all connected in some way), won't dissolving all those just leave you with one single, monstrously complex polygon? If so, even if you wanted to wait long enough for FME to generate it, that sounds pretty unusable.

How about using the Tiler (with a fixed seed coordinate) on your buffers and then using the _row and _column attributes as Group By for the Dissolver? That way you get smaller, more usable polygons in the end.
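A simplified sketch of the tiling idea (here whole buffers are assigned to a tile by their centroid rather than being clipped, which is cruder than the real Tiler; shapely, the tile size and the seed coordinate are assumptions):

import math
from collections import defaultdict
from shapely.geometry import LineString
from shapely.ops import unary_union

TILE = 100.0                 # tile size, same units as the data
SEED_X, SEED_Y = 0.0, 0.0    # fixed seed coordinate so tiles line up between runs

def tile_key(geom):
    # Stand-in for the Tiler's _row/_column attributes
    c = geom.centroid
    return (math.floor((c.y - SEED_Y) / TILE), math.floor((c.x - SEED_X) / TILE))

buffers = [LineString([(x, 0), (x + 80, 0)]).buffer(10) for x in range(0, 1000, 60)]  # sample buffers

tiles = defaultdict(list)
for b in buffers:
    tiles[tile_key(b)].append(b)

per_tile = {key: unary_union(geoms) for key, geoms in tiles.items()}   # Dissolver grouped by _row, _column
print(len(per_tile), "tiles dissolved")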


Snapping Type: Segment Snapping

 

 

What value should be assigned for the snapping tolerance?

 

 

Just a heads up, doing segment snapping on 6 million features will take a loooong time...

 

 

You need to find a reasonable value depending on the data conditions if you want to try the Snapper. However, @david_r is right; the number of features is too large in your case. I would recommend trying David's suggestion first.

 


If you have about 6 million buffered road segments (which by their nature I guess are all connected in some way), won't dissolving all those just leave you with one single, monstrously complex polygon? If so, even if you wanted to wait long enough for FME to generate it, that sounds pretty unusable.

How about using the Tiler (with a fixed seed coordinate) on your buffers and then using the _row and _column attributes as Group By for the Dissolver? That way you get smaller, more usable polygons in the end.

Should I use any Group By (I mean "highway") in the Bufferer? And I set the tile size to 100 x 100 m. Is that okay?

 

 


If you have about 6 million buffered road segments (which by their nature I guess are all connected in some way), won't dissolving all those just leave you with one single, monstrously complex polygon? If so, even if you wanted to wait long enough for FME to generate it, that sounds pretty unusable.

How about using the Tiler (with a fixed seed coordinate) on your buffers and then using the _row and _column attributes as Group By for the Dissolver? That way you get smaller, more usable polygons in the end.

I tried this workflow with the Tiler. I set the tile size to 100*100 and tried it with 10,000 features. It took a long time (15 minutes of running) and shows:

ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 30771553
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31083930
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31385846
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31683072
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31931492
ResourceManager: Optimizing Memory Usage. Please wait...

On the other hand, when I tried 10,000 features using the buffer + generalizer + dissolver workflow, it took only 1 minute and 18 seconds.

 

 



 

 

I tried this workflow with the Tiler. I set the tile size to 100*100 and tried it with 10,000 features. It took a long time (15 minutes of running) and shows:

ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 30771553
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31083930
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31385846
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31683072
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31931492
ResourceManager: Optimizing Memory Usage. Please wait...

On the other hand, when I tried 10,000 features using the buffer + generalizer + dissolver workflow, it took only 1 minute and 18 seconds.

 

 

 


Tile the workspace and use parallel processing, grouped by tile rows and columns (or whatever ID you gave them).

When done, do another dissolve run without the tile Group By.
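A rough, non-FME sketch of that two-pass idea, using Python multiprocessing and shapely as a stand-in for running the tile groups in parallel (the function names and process count are assumptions; 'tiles' is a dict like the one built in the tiling sketch above):

from multiprocessing import Pool
from shapely.ops import unary_union

def dissolve_tile(geoms):
    # One worker dissolves the buffers of one tile
    return unary_union(geoms)

def dissolve_in_parallel(tiles, processes=4):
    # 'tiles' maps (row, col) -> list of buffer polygons
    # (call this from under "if __name__ == '__main__':" on Windows)
    with Pool(processes) as pool:
        per_tile = pool.map(dissolve_tile, list(tiles.values()))
    # Second pass without the tile group-by: merge the per-tile results across tile boundaries
    return unary_union(per_tile)

Each tile is dissolved independently, so the expensive intersection work happens on small pieces; the final pass only has to merge polygons along the tile edges.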


I tried this workflow with the Tiler. I set the tile size to 100*100 and tried it with 10,000 features. It took a long time (15 minutes of running) and shows:

ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 30771553
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31083930
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31385846
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31683072
ResourceManager: Optimizing Memory Usage. Please wait...
Adding Clipper 31931492
ResourceManager: Optimizing Memory Usage. Please wait...

On the other hand, when I tried 10,000 features using the buffer + generalizer + dissolver workflow, it took only 1 minute and 18 seconds.

 

 

It seems like FME is running out of memory; you should consider switching to the 64-bit version of FME, if possible.
