Question

Smallworld to file geodatabase writing changed in 2021

  • 24 March 2022
  • 4 replies
  • 0 views

Badge +5

I have a process that reads a dataset from Smallworld and writes it to a point feature class in an Esri file geodatabase. It works fine in 2019. In 2021 it now fails because apparently there are a few fme_aggregates in this dataset. How can I get these features to write again? GeometryCoercer and Deaggregator aren't working. I need this to work in 2021 but can't figure out how.


4 replies

Badge +2

@swach​ I would suggest running the workspace with feature caching on to determine exactly the type of geometry that you're dealing with. Perhaps also look at the FME log - if the feature is being rejected by the Geodb writer then there will be an example recorded there. The most likely causes of an aggregate when reading from Smallworld are a multiple geometry (two or more spatial columns) and a multi-point (several points on the same geometry).
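The inspection step above can be mimicked outside FME. A minimal sketch, assuming hypothetical dictionary records in place of real FME features, that tallies geometry types the way feature caching or the log would reveal them:

```python
# Hypothetical sketch: tally geometry types to spot stray aggregates.
# The feature records below are illustrative stand-ins, not real FME objects.
from collections import Counter

features = [
    {"id": 1, "fme_geometry": "fme_point", "fme_type": "fme_point"},
    {"id": 2, "fme_geometry": "fme_aggregate", "fme_type": "fme_no_geom"},
    {"id": 3, "fme_geometry": "fme_point", "fme_type": "fme_point"},
]

# Group features by their reported geometry kind.
counts = Counter(f["fme_geometry"] for f in features)
print(counts)  # Counter({'fme_point': 2, 'fme_aggregate': 1})
```

A tally like this makes it obvious whether the aggregates are a handful of outliers or a systematic pattern across the dataset.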

Badge +5


When I run them through a GeometryFilter they come out as Null. The log says `fme_geometry` has value `fme_aggregate` and `fme_type` has value `fme_no_geom`.

Badge +2

@swach​ You can try the AggregateFilter. But you probably need to establish where the aggregates originate and their actual structure, as described above.
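What a deaggregation step does can be sketched in plain Python. This is a minimal illustration, with nested lists standing in for aggregate geometries (not real FME objects): it recursively flattens aggregates into their simple parts, which is the behaviour the Deaggregator provides inside FME.

```python
# Hypothetical sketch of recursive deaggregation: nested lists model
# aggregates; any other value models a simple geometry part.
def deaggregate(geom):
    if isinstance(geom, list):            # treat a list as an aggregate
        parts = []
        for child in geom:
            parts.extend(deaggregate(child))  # recurse into nested aggregates
        return parts
    return [geom]                         # a simple geometry passes through

nested = [["point_a", "point_b"], "point_c", [["point_d"]]]
print(deaggregate(nested))  # ['point_a', 'point_b', 'point_c', 'point_d']
```

If the aggregates in your data are nested like this, a single deaggregation pass may not be enough, which can explain why the Deaggregator appeared not to work on the first try.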

Badge +5


Their structure in Smallworld is just a tabular feature. I decided to avoid the problem altogether and just write those features out to a separate table to be written back into the dataset later. Thanks for the help.
