Solved

FME 2012 vs 2019 - AutoCAD feature types and Self Intersections

  • 28 February 2020
  • 2 replies
  • 8 views


Hello.

This is a two-part question, I hope that's ok. It's a bit of a long post, but both questions are related. I'll start by saying I don't know much about AutoCAD, but...

Question #1: I have a question about the Autodesk AutoCAD DWG/DXF reader in FME 2012 vs FME 2019. I know this is a very big leap in software versions, but we're doing an upgrade at the moment, so I am going through and updating scripts. Where I can, I've been updating transformers to their newest versions. What I've noticed is that the same dataset read by FME 2012 identifies fme_type / autocad_entity areas and ellipses differently than FME 2019 does. For example (using the exact same dataset):

FME 2012 reads this feature as autocad_entity = autocad_polygon / fme_type = fme_area:

But FME 2019 reads the exact same feature as autocad_entity = autocad_ellipse / fme_type = fme_ellipse:

I guess I'm just trying to confirm whether this makes sense. I'm assuming geometry handling has been improved over the years and this is why?

 

Question #2: In our 2012 script, we use a SelfIntersector followed by a Snapper transformer to remove fishtails from the area features. The SelfIntersector has been replaced by the GeometryValidator since FME 2012, so I'm using the "Self Intersections in 2D" test in the GeometryValidator instead. Does this single test, output via the Passed port, accomplish the same thing as the SelfIntersector + Snapper (output via the Passed port on the Snapper)? I ask because I'm seeing a slight difference in feature counts between the 2012 script and the 2019 version (in two ways; see examples below). Question #1 above may be related, in that many of the features identified as areas in FME 2012 are identified as ellipses in 2019; I don't believe ellipses would benefit from going through the "Self Intersections in 2D" test anyway. For example (using the exact same dataset):

In the FME 2012 script running in FME 2012, the features written equaled 21122, with 19667 being AREA and 1458 being ELLIPSE (a total of 21125 read). Snapping the AREA features (after the SelfIntersector) removed 3, giving us the 21122 written:

But the 2012 script running in FME 2019, without changing anything at all (leaving the SelfIntersector and Snapper in place), reads the same number of features (21125) but as different types, and only snaps a handful of the area features, so many are 'lost' (or would be 'untouched'). The end result is far fewer written features:

And with the 2019 script running in FME 2019, having been updated to use the GeometryValidator in place of the SelfIntersector + Snapper, the same total number of features is read (21125), but there is 1 additional feature written in total. As with the above example, I recognize the difference in feature types (so this ties in to Question #1). Hence my question about whether the "Self Intersections in 2D" test in the GeometryValidator accomplishes the same thing as the SelfIntersector + Snapper:

 

Basically I'm trying to make sense of it all. I was hoping to validate this upgraded script by the number of features written, but I can't seem to get the counts to match. I'm wondering if it's simply the difference in geometry handling between 2019 and 2012, with 2019 presumably having improved handling.

I'm hoping this has enough info to provide an answer or two, or further discussion. The dataset is quite large and sensitive so I don't think I can share it, but I could likely share some logs if it helps.

Thanks in advance to anyone who may have some info, I know someone out there has all the answers!

Tim

 

FME 2019.1.3.1


Best answer by chrisatsafe 5 March 2020, 00:47




Hi @timboberoosky,

I reached out to our development teams to get a little bit more info about this and here is the information they were able to provide me with.

Question 1:

 

Correct, this is a change in how the geometry is represented, for better support throughout FME. In 2012 the circular geometry is actually stroked with 73 vertices (it is not an arc, but a series of line segments). In 2019 you can see that it is an fme_ellipse, meaning that it is composed of real arc(s) and no longer stroked to line segments.
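As a toy illustration of what "stroked" means here, a minimal plain-Python sketch (not FME's internals; the 5° chord spacing is my assumption, chosen because 72 segments plus a repeated closing vertex is one plausible way to arrive at the 73-vertex count mentioned above):

```python
import math

def stroke_circle(cx, cy, r, segments=72):
    """Approximate a circle as a closed ring of straight line segments.

    Returns segments + 1 vertices: the ring closes by returning to
    (approximately) the starting point, so a 72-segment stroked circle
    is stored as 73 vertices.
    """
    ring = []
    for i in range(segments + 1):
        angle = 2 * math.pi * i / segments
        ring.append((cx + r * math.cos(angle), cy + r * math.sin(angle)))
    return ring

ring = stroke_circle(0.0, 0.0, 10.0)
print(len(ring))  # 73
```

A true fme_ellipse, by contrast, stores the arc parameters (center, radii, sweep) rather than any sampled vertices, which is why the 2019 reader reports a different fme_type for the same drawing entity.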

 

 

Question 2:

Likely no. If the SelfIntersector > Snapper is either removing parts or separating single features into multiple features, then no. If you want that as the output, you may want to use a Deaggregator after the Repaired port of the GeometryValidator, as it will break up the multipolygons that you may be getting as output from that port.

Additionally, it's possible that one of the two self-intersecting input features has two self-intersection points. If these are regular areas and not multis/aggregates, then 'InvalidParts' will emit the whole feature that failed the test. 'Issue locations' will emit problem points with attributes that help track down the problem, and their count will be >= the number of repaired + failed features.
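As a rough sketch of the condition a 2D self-intersection test is looking for (plain Python, not FME's implementation), here is a check for proper crossings between non-adjacent edges of a closed ring, using the classic "bowtie" polygon as the failing case:

```python
def orient(p, q, r):
    """Twice the signed area of triangle pqr: >0 CCW, <0 CW, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    """True if segments ab and cd properly cross (straddle each other)."""
    d1, d2 = orient(c, d, a), orient(c, d, b)
    d3, d4 = orient(a, b, c), orient(a, b, d)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def self_intersects(ring):
    """Check a closed ring (first vertex repeated at the end) for proper
    self-intersections between non-adjacent edges."""
    n = len(ring) - 1  # number of edges
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # first and last edges share the closing vertex
            if segments_cross(ring[i], ring[i + 1], ring[j], ring[j + 1]):
                return True
    return False

bowtie = [(0, 0), (2, 2), (2, 0), (0, 2), (0, 0)]  # edges cross at (1, 1)
square = [(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)]
print(self_intersects(bowtie), self_intersects(square))  # True False
```

A repair step then has to decide what to do at each crossing point, e.g. split the bowtie into two triangles, which is why the Repaired port can emit multipart geometry that a Deaggregator would break apart.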

There is a bit more background information about this in the Data QA: Identifying Self-Intersections with FME tutorial and the Self-Intersections in 2D section of the GeometryValidator documentation if you are wanting to learn more.

Hope that helps!


Thanks so much @chrisatsafe for your reply.

The geometry handling differences between 2012 and 2019 are noted, I'm glad this is expected and has been explained.

On the SelfIntersector > Snapper front, I'm still having trouble re-creating the logic this had in FME 2012. Below are snippets of log files from the 2012 and 2019 scripts - from a different workspace that I hope illustrates the differences better, since quite a few features get processed differently in this 2012 script. Did the SelfIntersector split out or identify vertices of areas so the Snapper could use them to snap features back together (for example, "Incorporated 484150 vertices into SurfaceModel")? In 2012, the Snapper running after the SelfIntersector snaps a good percentage of the total features, leaves several untouched, and drops 40K+. But in the 2019 script, only 4 features fail the self-intersection test, and then all remaining Passed features sail through the Snapper 'untouched'.
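For what it's worth, a snapper-style operation only changes geometry when vertices fall within its tolerance of one another, which would explain features passing through 'untouched'. A toy illustration of the idea (plain Python, grid-based snapping; this is not FME's actual Snapper algorithm, just the general technique):

```python
def snap_vertices(vertices, tolerance):
    """Snap each vertex to a grid of the given spacing, then collapse
    consecutive duplicates. Vertices closer together than the tolerance
    tend to land in the same grid cell and merge into one; vertices
    farther apart come out effectively untouched."""
    snapped = [(round(x / tolerance) * tolerance,
                round(y / tolerance) * tolerance) for x, y in vertices]
    out = [snapped[0]]
    for v in snapped[1:]:
        if v != out[-1]:
            out.append(v)  # keep only vertices that moved to a new cell
    return out

# Two nearly coincident vertices (a "fishtail" pinch) merge into one:
print(snap_vertices([(0, 0), (1.02, 0.01), (1.0, 0.0), (0, 1)], 0.1))
# [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
```

If the upstream transformer no longer produces those near-coincident vertices (for example because arcs are no longer stroked to line segments in 2019), the downstream snapper has nothing within tolerance to merge, which could account for the difference in counts.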

Admittedly I did not try the Deaggregator after the Repaired port as suggested in the previous post, but that is because, for this example, only 1 feature even passes through that port, so it seems there's something different about the GeometryValidator > Passed features. In the log snippets attached you can see the total features written for "Storm_Sewer_Area" differ greatly between 2012 and 2019 (I tried to keep these logs simple to identify the specific areas I'm looking at, but let me know if the full log for each would have any added benefit).

Any advice?

2012 log:

2012_Log_Snippet.txt

2019 log:

2019_Log_Snippet.txt

Thanks again!
