
Hi All, 

 

I am seeking assistance in developing an FME workflow to perform clearance analytics for trees along both roads and power lines (utility vegetation management). Our source data includes:

  • Mobile Laser Scanner (MLS) Point Clouds: Representing the surrounding environment, including canopy and branch classification for each individual tree.
  • Pre-defined Clearance Profiles (Vectors): Defining the acceptable clearance zones for roads and power lines.
  • Linear Reference Data (Polylines): Representing the road edges or power line geometry.

Workflow Goal:

The objective is to identify and classify obstacles within the pre-defined clearance profiles. This involves:

  1. Spatial Analysis: Identify point cloud data falling within the designated clearance profiles.
  2. Obstacle Classification: Classify the identified points as potential obstacles.
  3. Obstacle Dimensioning: Measure the dimensions of the classified obstacles.

Risk Assessment Output:

Based on the identified obstacles, we aim to generate a risk assessment report with three categories:

  • Immediate Action: Tree branches or canopies requiring urgent attention.
  • Review Needed: Tree branches or canopies requiring further on-site investigation.
  • No Review Needed: Tree branches or canopies within acceptable clearance tolerances.
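To make these categories concrete, the bucketing could be driven by each obstacle's minimum distance to the protected asset (road edge or conductor). Here is a minimal Python sketch; the threshold values are placeholders we would replace with the actual clearance profile tolerances:

```python
# Illustrative risk bucketing by minimum clearance distance.
# Both thresholds below are assumed placeholders, not real standards.
IMMEDIATE_ACTION_M = 0.5   # assumed: obstacles closer than this are urgent
REVIEW_NEEDED_M = 1.5      # assumed: closer than this needs an on-site check

def risk_category(min_clearance_m):
    """Map an obstacle's minimum clearance (metres) to a report category."""
    if min_clearance_m < IMMEDIATE_ACTION_M:
        return "Immediate Action"
    if min_clearance_m < REVIEW_NEEDED_M:
        return "Review Needed"
    return "No Review Needed"
```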

Request for Expertise:

We are asking whether anyone in the FME community has experience with similar workflows. We would be grateful for any insights or suggestions that could help us achieve this functionality.

Hi @gyulafekete,


Thanks for posting. After reviewing your post, here is a rough draft/idea of how you could approach it:

  1. Clip the Point Cloud with the Clearance Profile to extract all the points inside.
  2. Round point coordinates to a specific value to create a regular grid.
  3. Remove duplicate points in XYZ, and the remaining points will provide the volume measurement.
  4. Classify points by their height above the lowest point within a specified tolerance; for instance, classify points more than 1 meter above the lowest point as medium vegetation, and points more than 3 meters above it as high vegetation. We have a webinar that explains this process in detail (second chapter).
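Steps 2-4 above can be sketched in plain Python (in FME this logic could live in a PythonCaller, or be built from point cloud transformers). The grid size, height thresholds, and class names below are illustrative assumptions, not FME defaults:

```python
GRID = 0.5  # assumed voxel edge length in metres

def voxelize(points, grid=GRID):
    """Steps 2-3: round XYZ to a regular grid and drop duplicate cells."""
    return {(round(x / grid) * grid,
             round(y / grid) * grid,
             round(z / grid) * grid) for x, y, z in points}

def estimate_volume(voxels, grid=GRID):
    """Each surviving voxel contributes grid^3 of occupied volume."""
    return len(voxels) * grid ** 3

def classify(voxels):
    """Step 4: class each cell by height above the lowest remaining point."""
    z_min = min(z for _, _, z in voxels)
    classes = {}
    for x, y, z in voxels:
        height = z - z_min
        if height > 3.0:
            classes[(x, y, z)] = "high_vegetation"
        elif height > 1.0:
            classes[(x, y, z)] = "medium_vegetation"
        else:
            classes[(x, y, z)] = "low_vegetation"
    return classes
```

This assumes the point cloud has already been clipped to the clearance profile (step 1); the set-based dedupe stands in for the duplicate removal described in step 3.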

Please note: Determining dimensions may be more complex, but it could be limited to a bounding box. Identifying obstacles in 3D, rather than just projecting to the ground (for example, distinguishing branches from the ground or low vegetation), may present additional challenges. The PointCloudDuplicateRemover might be a good transformer to look into; its concepts and algorithms can handle the duplicate removal described in step 3.
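The bounding-box idea above amounts to reducing each classified obstacle (a group of points) to its axis-aligned 3D extents. A minimal illustrative sketch, not tied to any specific FME transformer:

```python
def bounding_box(points):
    """Return (width, depth, height) of the axis-aligned box around
    a group of obstacle points given as (x, y, z) tuples."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
```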

A process like this will take some discovery and testing; you might want to test on a subset of the data, confirm the results, and then apply the workflow to the larger dataset. Hope this information helps you get started!


Great!

