
I have a number of GPS tracking records that contain straight line segments, i.e. noise arising from loss of signal. I would like to remove lines > 1 km to make the data easier to visualise.


The workbench would need to:

  • Find straight line segments > 1 km within each dataset
  • Break/split the line at the vertex
  • Delete the lines > 1 km
  • The lines that have been split at either end can remain as they are, i.e. they do not need to be merged or joined (see the sketch after this list)
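
Purely for reference, the behaviour described above can be sketched in plain Python (outside FME). This is only an illustration of the logic, not the workflow itself; the haversine helper, the (lat, lon) tuples and the hard-coded 1 km threshold are assumptions made for the example:

```python
import math

def haversine_m(p1, p2):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def split_track(points, max_seg_m=1000):
    """Drop segments longer than max_seg_m and return the remaining
    runs of vertices as separate lines (no merging across the gaps)."""
    runs, current = [], [points[0]]
    for a, b in zip(points, points[1:]):
        if haversine_m(a, b) > max_seg_m:
            if len(current) > 1:      # keep the piece before the gap
                runs.append(current)
            current = [b]             # start a new line after the gap
        else:
            current.append(b)
    if len(current) > 1:
        runs.append(current)
    return runs
```

A track that jumps from, say, (51.000, 0.000) to (51.500, 0.000) (roughly 55 km) would come back as two separate lines with that segment removed, and the pieces are deliberately not rejoined across the gap.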


Please can someone briefly describe the workflow that I would need to create to achieve this?

What I did in this situation:

  • UUIDGenerator to create a unique ID for each polyline.
  • Keeper to remove all attributes except _uuid. (For performance; the original attributes are added back later.)
  • Chopper (2 vertices) to chop all polylines into lines with 2 points.
  • LengthCalculator to calculate the length of each line.
  • Tester to filter out lines > 1 km.
  • LineCombiner, group by _uuid, to restore the polylines (a sketch of this rejoin step follows the list).
  • FeatureMerger, based on _uuid, to restore the original attributes.
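
For anyone who wants to see the rejoin step spelled out, here is a rough plain-Python sketch of what LineCombiner, grouped by _uuid, does with the surviving 2-point segments. It is not the transformer's actual implementation, and it assumes the segments arrive in their original order as dicts carrying a _uuid and a coords pair:

```python
from itertools import groupby

def combine_lines(segments):
    """Rejoin consecutive 2-point segments that share an endpoint,
    grouped per _uuid, leaving a break wherever a segment was removed."""
    combined = []
    for uuid, group in groupby(segments, key=lambda s: s["_uuid"]):
        current = None
        for seg in group:
            a, b = seg["coords"]
            if current is not None and current[-1] == a:
                current.append(b)           # touches the previous segment
            else:
                if current is not None:     # gap: the segment in between was dropped
                    combined.append({"_uuid": uuid, "coords": current})
                current = [a, b]
        combined.append({"_uuid": uuid, "coords": current})
    return combined
```

With segments [{"_uuid": 1, "coords": ((0, 0), (0, 1))}, {"_uuid": 1, "coords": ((0, 2), (0, 3))}] the gap between (0, 1) and (0, 2) is preserved, so two separate lines come back for the same _uuid, which is exactly the "do not merge across the gap" behaviour asked for.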


Excellent idea regarding the UUIDs to merge them back together! This was the part I had been missing... I have played around with this before (tried to group by timestamps etc.) but to no avail. I look forward to giving this a blast. Thank you @nielsgerrits

You probably don't need to merge the attributes back together, but if you explode tracks into 2-vertex lines with all of the original attributes attached, you generate a lot of data, which consumes RAM. The FeatureMerger is a blocking transformer, which has its disadvantages as well, but my feeling is that this is probably the more efficient way to do it.

@nielsgerrits Thanks again for your help. This looks like an excellent solution so far...


However, I'm having an issue combining the lines - do you know what is going wrong here?

Due to the data and attribute info I'm working with, I need to get back to the original 107 records, but the LineCombiner seems to be having trouble. Do you know how to fix this?

[image: FME]


I have some suggestions, but without data it is always hard to be sure.

  • Is it possible the data had aggregates (collections of lines) to begin with? You can test this with an AggregateFilter. If this is the case, you can recreate the collections after the LineCombiner using the Aggregator, group by _uuid.
  • In the Chopper, at least 4 records seem to disappear because you leave the Untouched output port unconnected. I would connect that to the LengthCalculator as well. It is probably data you want to filter out, but now you have one more place to check.
  • It is possible that some of the records only have large segments, so you might lose whole records in the Tester. This is correct, but it might influence the number of records at the end of the process (the sketch below shows one way to see which records drop out between stages).
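
If the count still doesn't add up, one debugging aid (outside FME, under the assumption that you can export the features with their _uuid after each transformer) is to diff the per-_uuid feature counts between stages:

```python
from collections import Counter

def lost_uuids(before, after, key="_uuid"):
    """Return the _uuids (and how many features each lost) that have
    fewer features after a pipeline stage than before it."""
    return dict(Counter(f[key] for f in before) - Counter(f[key] for f in after))
```

Running this on the features before and after the Tester, for example, points directly at the records that were dropped entirely because all of their segments were > 1 km.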
