Question

Rounding to whole number only when decimal figures = 0

  • 21 August 2019
  • 8 replies
  • 23 views

I am trying to manage updates between a WFS and a GDB. Note that I am using the UpdateDetector for this, as my organisation's currently installed version of ArcGIS Pro (via the Data Interoperability tool) restricts me to FME 2018.

The issue is that a number of fields are provided by the WFS in float format, so even whole numbers carry trailing decimals (eg. 1700.0000). When the value is stored in the GDB it is automatically trimmed to the whole number (1700), so it isn't detected as a matching value and the change detection gives inaccurate information about updated records. I could in theory use the AttributeRounder to overcome this, but some fields have decimal figures containing useful information, eg. dollar values (2357.87), so rounding to 0 decimal places would destroy them. I'd like a blanket solution that can be applied across all attributes: I need to apply the template to ~60 feature classes that require update management, and if the data changed so that a field previously containing only irrelevant decimals started carrying meaningful ones, I would need to change how that field is managed.

Ideally I'd like to round to a whole number when the decimals are zero, and otherwise leave the value unchanged. I've tried to work out whether this can be determined mathematically (ie. test whether the decimal portion equals 0), but I can't figure out how to isolate just the decimals.


8 replies


The following formula will equal 0 whenever there are no decimals

@fmod(@Value(attr),1)
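As a quick sanity check outside FME, the same test can be sketched in plain Python (`math.fmod` behaves like FME's `@fmod` here):

```python
import math

# math.fmod(x, 1) returns the fractional part of x,
# so it is exactly 0.0 when x is a whole number.
print(math.fmod(1700.0, 1))   # 0.0  -> no meaningful decimals
print(math.fmod(2357.87, 1))  # ~0.87 -> decimals carry information
```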

Thanks @egomm

So I think this will work if I use it in a conditional statement for the AttributeRounder. However, as I need to test each attribute value without knowing beforehand which fields are to be tested, I need a way to state it more generically. I've tried @fmod(@Value(@CurrentAttribute()),1) and @fmod(@CurrentAttribute(),1), but both generate errors stating that 'CurrentAttribute is not valid for this transformer'. Do I need to define CurrentAttribute at an earlier stage in the model? Or is there a way to use @Value in a more general sense to apply to a loop over all the attributes in the feature?
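If the expression route stays stubborn, one generic alternative (a sketch of my own, e.g. the kind of logic a PythonCaller could hold) is to loop over every attribute and collapse floats whose fractional part is zero, leaving everything else untouched. With the feature modelled as a plain dict:

```python
import math

def normalize(value):
    """Collapse a float with no meaningful decimals to an int;
    leave all other values unchanged."""
    if isinstance(value, float) and math.fmod(value, 1) == 0.0:
        return int(value)
    return value

# Applied across every attribute of a feature (modelled here as a dict):
feature = {"length": 1700.0, "cost": 2357.87, "name": "road"}
feature = {k: normalize(v) for k, v in feature.items()}
# -> {"length": 1700, "cost": 2357.87, "name": "road"}
```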


Hi @r.houg,

This is a really interesting question! One possibility is to write your WFS to an intermediary geodatabase if you are only comparing tabular (non-spatial) data. This would then convert your '.00' floats to whole numbers for you to compare accurately without having to apply a rounding effect or test to each incoming attribute. You can use the FeatureWriter to get your WFS to geodatabase and the FeatureReader to bring it in to compare to your original geodatabase. Using something like a TempPathnameCreator might help to create your temporary file/folder for use in the translation, that will then be deleted upon translation completion.

Thanks @jovitaatsafe, this is also something I had considered, but I wasn't sure how to create a temporary GDB that would only persist for runtime, so I'll look into this. I am comparing the spatial component as well; is there a reason why this methodology would only work for attribute data and not spatial?


No problem! Perhaps you can do the geospatial comparison in the original WFS format then?

I would have confidence in the tabular data being represented accurately in the GDB when I compare for changes, but I'd be more hesitant with spatial data: from WFS to GDB, geometries may be handled differently due to the formats being different (possibly arcs, for example). This could then cause features to be flagged as changed when they were barely different at all, purely through the translation.

Hi @jovitaatsafe, perhaps you could give me a quick rundown of the steps you would take to implement the TempPathnameCreator -> FeatureWriter -> FeatureReader workflow. I've been playing around with it, but I don't think it's correctly writing to GDB format, as the reader gives me an error on the first feature, and running the workflow without the reader results in unchanged decimal data (ie. the same issue).

In my TempPathnameCreator, am I supposed to specify .gdb in the extension?


Hi @r.houg,

You can add the TempPathnameCreator as-is, without changing the parameters. It will give you the attribute _pathname, which you can then use for the FeatureWriter's dataset, setting it in the text editor:

@Value(_pathname)\temp.gdb

Then set the rest of the FeatureWriter as you would for a file geodatabase. I found this article really helpful, as it shows an example of using the transformer with the FeatureWriter and FeatureReader. Dave uses a StringSearcher to replace the slashes in the file path with the correct ones, saving the result as the attribute _dataset, and then uses _dataset for the Dataset in the FeatureReader.
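The slash fix amounts to normalising the Windows-style temporary path before handing it to the reader. A rough Python equivalent (the example path and the temp.gdb name are illustrative stand-ins, not values from the thread):

```python
# Stand-in for the value of @Value(_pathname) from the TempPathnameCreator:
pathname = r"C:\Users\me\AppData\Local\Temp\wb-cache"

# Normalise backslashes to forward slashes, then append the dataset name,
# mirroring the @Value(_pathname)\temp.gdb expression above.
dataset = pathname.replace("\\", "/") + "/temp.gdb"
# -> "C:/Users/me/AppData/Local/Temp/wb-cache/temp.gdb"
```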

This will take some tweaking, I think, especially if you have different geometries in your geodatabase. The simplest method, though not very pretty, is to repeat the workflow for each geometry type or feature class (reader feature type in FME). It'll take some experimenting, but I think it should be possible!


Hi @r.houg, if your purpose is to check whether the numeric values in the WFS dataset and the GDB dataset match, how about unifying the precision (the number of decimal places) of all the values read from the two datasets, rather than rounding?

For example, this StringFormatter setting unifies the number of decimal places into 6.

Format String:

.6f
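In Python terms, that format string is equivalent to fixed-point formatting with six decimal places on both sides before comparing, so 1700 and 1700.0000 produce identical strings while meaningful decimals survive:

```python
# Unify precision before comparing: both sides become "1700.000000".
wfs_value = 1700.0000   # float as delivered by the WFS
gdb_value = 1700        # trimmed whole number from the GDB
print(f"{wfs_value:.6f}" == f"{gdb_value:.6f}")  # True

# Dollar values keep their decimals through the same formatting:
print(f"{2357.87:.6f}")  # 2357.870000
```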
