Hello everyone,

 

I am reading a line geometry with a reader. I then buffered the line feature and used the Generalizer (Algorithm: Douglas Generalize, Share Boundaries: No, Tolerance: 30) to simplify the buffered feature, and I need the output written to an Esri Geodatabase (File Geodb API).

But it shows this error:

Failed to write Geometry to feature class 'way_r' with geometry type 'esriGeometryPolyline'. Dropping containing feature

I have also attached the ERROR text. Please help me.

Thanks.

Hi @yasinmelon1, according to the log, the failure seems to be caused by a mismatch between the geometry type of the features (polygon) and the writer feature type setting (line).

"FileGDB Writer: Failed to write Geometry to feature class 'way_r' with geometry type 'esriGeometryPolyline'. Dropping containing feature"

 

Try changing the geometry type of the writer feature type to polygon.



Like this (see the attached screenshot of the writer feature type settings).

 

 


You could use a "GeometryCoercer" and assign "fme_line" as the geometry type to convert the polygons into polylines before you write them to the database.
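For illustration only (this is plain Python with shapely, not FME, and the coordinates are made up), the sketch below shows what that coercion amounts to geometrically: take the buffer polygon and keep only its outline as a line.

from shapely.geometry import LineString

line = LineString([(0, 0), (100, 0), (200, 50)])
buffered = line.buffer(30)     # what the Bufferer produces: a Polygon
boundary = buffered.boundary   # what the coercion keeps: the outline as a line

print(buffered.geom_type)      # Polygon
print(boundary.geom_type)      # LineString (or MultiLineString if there are holes)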



The Bufferer outputs Polygon features, so you cannot write them into a Line feature class as-is. If you need the boundary lines of the buffer areas, you can use the GeometryCoercer (Geometry Type: fme_line) to transform the buffers into their boundary lines, as @ioannakostara suggested already.

 



 

Hi takashi! You were right, I am working with polygon features after the buffer. I have to buffer a country highway (line) feature and dissolve the buffered areas, and after that generalize the polygons. At the final stage, I have to clip them with the country administrative boundary.

 

The attached picture shows my workflow. Is it correct to use an AreaOnAreaOverlayer after the Bufferer?

 

This workflow has now been running for 17 hours. It is stuck and shows:

 

ResourceManager: Optimizing Memory Usage. Please wait...

 

Could you please tell me whether my workflow is correct? Any suggestions are welcome.

 

Thank you.

 

 

 

(Attached images: w-1.jpg, w2.jpg)

 

 


According to the log you have posted, the Dissolver is taking a very long time because it consumes a huge amount of memory to process all the buffer areas at once.

Since the buffer areas will finally be clipped by the administrative boundary anyway, I think you can clip them first and then dissolve them for each administrative boundary, so that the memory usage in the Dissolver is reduced. That is, my idea is to modify the workflow (skeleton) to:

Bufferer -> Generalizer -> Clipper -> Sorter -> Dissolver

These points are important for increasing the performance; all of them are essential:
  • Read the administrative boundary polygons first (i.e. move the reader to the top of the Navigator), and set the Clipper Type parameter in the Clipper to "Clippers First".

  • In the Clipper, check "Merge Attributes" so that a unique ID attribute of the administrative boundary polygons (say 'admin ID') is merged onto the clipped buffer areas.

  • Use a Sorter to sort the clipped buffer areas by 'admin ID'.

  • In the Dissolver, set the Group By parameter to 'admin ID', and set the Input Ordered parameter to "By Group".

If the administrative boundary polygons don't have a unique ID attribute originally, use a Counter to add a sequential number to them and use it as a temporary ID.

I'm just afraid that the Sorter might consume a large amount of memory at run time, but I expect the overall performance will be better than with the current workflow.

Hope this works.
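As a conceptual sketch only (plain Python with shapely and itertools, not FME; the sample lines and the single admin ID are invented), this is the idea behind clip -> sort -> dissolve per group: once the clipped pieces are ordered by admin ID, each group can be dissolved on its own instead of holding every buffer in memory at once.

from itertools import groupby
from shapely.geometry import LineString
from shapely.ops import unary_union

# Hypothetical Clipper output with Merge Attributes on: (admin_id, clipped buffer piece)
lines = [LineString([(0, 0), (10, 0)]), LineString([(5, -5), (5, 5)])]
clipped = [(1, ln.buffer(1)) for ln in lines]

clipped.sort(key=lambda rec: rec[0])                  # Sorter: order by admin ID
dissolved = {
    admin_id: unary_union([geom for _, geom in grp])  # Dissolver with Group By = admin ID
    for admin_id, grp in groupby(clipped, key=lambda rec: rec[0])
}
# With Input Ordered = By Group, only one group's features need to be held
# at a time, which is what keeps the Dissolver's memory footprint small.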



Hmm, I assumed that you need to finally create dissolved buffer areas inside each administrative boundary area. Was I wrong? What are the geometries you finally require?

 

 



The "Clipped Indicator" attribute (you have named 'Admin_ID') just stores 'yes' (clipped) or 'no' (not clipped). It cannot be used as ID of administrative boundary areas. Why not use a Counter as I suggested?

 

Reader feature type (administrative boundary areas) -> Counter -> Clipper [Clipper port]
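The built-in Counter is the simplest way to do this. Just for illustration, a rough scripted equivalent inside a PythonCaller could look like the sketch below (this assumes the standard PythonCaller class interface; the attribute name '_admin_id' is only an example).

import fme          # standard imports available inside FME's Python environment
import fmeobjects

class FeatureProcessor(object):
    def __init__(self):
        self._next_id = 0

    def input(self, feature):
        # Give each administrative boundary polygon a sequential temporary ID,
        # just like a Counter would.
        feature.setAttribute('_admin_id', self._next_id)
        self._next_id += 1
        self.pyoutput(feature)

    def close(self):
        pass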

 

 



 

 

I'm getting confused...

 

If you need to finally get areas which are outside of the buffer areas of highways, I think you will have to clip the administrative boundary area by the buffer areas.

 

Do the opposite of your current workflow: send the buffer areas to the Clipper port and send the administrative boundary areas to the Clippee port. The features output via the Outside port would be your desired areas, I think.

 

 



 

 

No, the "Clipped Indicator Attribute" parameter should be the name of an attribute which will be added to every output feature to indicate whether it has been clipped ('yes') or not ('no'). See the help on the Clipper to learn more. Anyway, I don't think you have to use this parameter in this case.

 

 

I'm still unclear about what your requirement is. In this super-simplified example, which areas do you need to create finally - [A], [B], or [C]?

 

  • Gray: Highway Line, Blue: Boundary of Administrative Areas
  • [A] (Orange): Highway Buffer Areas within Administrative Areas
  • [B] (Green): Administrative Areas outside of Highway Buffer Areas
  • [C] (Light Blue): Highway Buffer Areas outside of Administrative Areas
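In set-operation terms, the three candidates are as in this toy sketch (plain Python with shapely, not FME; the highway and administrative polygon coordinates are invented):

from shapely.geometry import LineString, Polygon

highway = LineString([(-5, 5), (15, 5)])
admin = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
buffer_area = highway.buffer(2)

area_a = buffer_area.intersection(admin)   # [A] buffer areas within admin areas
area_b = admin.difference(buffer_area)     # [B] admin areas outside the buffers
area_c = buffer_area.difference(admin)     # [C] buffer areas outside admin areas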

 



 

 

OK. I think this workflow creates the green areas. You don't need to dissolve the buffers. To increase performance, read the highway lines first and set the Clipper Type parameter in the Clipper to "Clippers First".

 

 



 

 

A possible reason is a mismatch in the coordinate system settings between the Clipper features and the Clippee features, or an incorrect coordinate system setting.

 

In your workspace, "ESRIWKT|..." has been set as the coordinate system for the source datasets, but in my understanding, a coordinate system from the FME Coordinate System Gallery should be selected here.

 

How FME Identifies Coordinate Systems

 

How/why did you set the "ESRIWKT|..." coordinate system?
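If you want to check what that WKT string actually describes, here is a small sanity check you can run outside FME (an assumption: pyproj is installed; the WKT shown is the standard Esri definition of WGS84, used as an example):

from pyproj import CRS

esri_wkt = ('GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",'
            'SPHEROID["WGS_1984",6378137.0,298.257223563]],'
            'PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]]')

crs = CRS.from_user_input(esri_wkt)
print(crs.name)            # the coordinate system name the WKT describes
print(crs.is_geographic)   # True means the coordinates are in degrees
print(crs.to_epsg())       # 4326 if it maps cleanly to an EPSG code, else None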

 

 



 

 

When I added the reader, I set the geographic coordinate system WGS84. I did not set the "ESRIWKT|..." coordinate system.

 



 

 

Strange. In my Workbench, "ESRIWKT|..." is shown in the Coordinate System parameter for the reader and it cannot be changed.

 

That aside, I found an issue.

 

You have set the Group By parameter in the Bufferer. Because of this setting, the Bufferer works as a blocking transformer, and therefore the Clipper won't work as expected in "Clippers First" mode.

 

If you need to create buffer areas grouped by "highway", you cannot use the "Clippers First" mode.

 

Is it necessary to group buffer areas?

 



 

 

Try clearing (not setting) the Group By parameter in the Bufferer.

 

 



 

 

Yes ([addition] should I say 'No' in English logic?), you don't need to set Group By in the Bufferer in this case.

 

Regarding coordinate system, make sure that Clipper and Clippee have the same coordinate system.

 

 



 

 

Is the Buffer Amount you've set in the Bufferer reasonable for the distance units in the coordinate system?

 

 



 

 

If the data is created in degrees (latitude, longitude), a buffer amount of 1000 is an 'astronomical' number.
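To make the point concrete, here is a rough sketch in plain Python with shapely and pyproj (not FME; the coordinates, the EPSG:3857 target and the 1 km distance are just examples). Buffering by 1000 while the units are degrees produces a polygon far larger than the Earth, whereas buffering after reprojecting to a metre-based coordinate system gives a real 1000 m buffer:

from shapely.geometry import LineString
from shapely.ops import transform
from pyproj import Transformer

highway_ll = LineString([(90.40, 23.70), (90.45, 23.75)])  # lon/lat in degrees

too_big = highway_ll.buffer(1000)       # a radius of ~1000 degrees: nonsense

# Reproject to a metre-based CRS first, then 1000 really means 1000 metres.
to_metres = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
highway_m = transform(to_metres.transform, highway_ll)
buffer_1km = highway_m.buffer(1000)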

 

 



 

 

Hi @yasinmelon1, a general approach is to divide the source dataset into small subsets and process each of them separately, but I'm not sure this approach can be applied to your situation. Regarding the performance, to be honest, it's hard to find an appropriate solution without knowing all the conditions of your situation.

 

If you post a new question containing a detailed and precise explanation of your situation, you might get some good answers.

 

Anyway, this thread has become too cluttered already. I would recommend closing this thread and then posting a new question specific to the performance issue.

 

