Hi all,

I noticed that the coordinate precision of the objects changes when they are loaded into an ESRI FGDB.

Using the coordinate rounder, all objects in my migration process are rounded to three decimals (to a mm), and even the data in the source system have only 4 decimals.

I want to load them as-is into the ESRI FGDB using the FGDB writer. The source for writing is an FFS file.

After a successful loading process, I checked the results in ArcMap and in the Inspector. What I can see is that the objects and vertices have many more decimals than in the data I loaded.

The resolution of the database is 0.0001 and the tolerance is 0.001. It is also not clear to me how I managed to write so many digits with a 0.0001 resolution.

I removed all coordinate systems in order to avoid coordinate transformations. I also tried setting the same coordinate system on the target as on the source, but unfortunately the result is the same.

Does someone have an idea why these extra decimals are appearing? How can I simply write the data that I have, no more and no less?

Thanks in advance.

 

This is normal; it's due to limitations of floating-point arithmetic when numbers are represented in binary internally.

See for example: https://floating-point-gui.de/
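To make this concrete, here is a minimal plain-Python sketch (outside FME/ArcGIS, just to show the binary-representation side): a value rounded to three decimals is still stored as the nearest binary double, so any tool that formats more digits than the default display will show "extra" decimals.

    from decimal import Decimal

    x = round(123456.789, 3)   # a coordinate rounded to mm precision
    print(x)                   # short form: 123456.789
    print(f"{x:.15f}")         # with more digits, the trailing digits are usually not all zero
    print(Decimal(x))          # the exact decimal expansion of the double actually stored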


Hi @david_r,

Thanks for your answer.

I understand that we have problems when trying to represent decimal numbers with floats, and I wouldn't have a problem if the difference were in the 16th decimal like in the example you shared. But I have a difference already in the fourth decimal, and I consider that a significant difference in value.
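For scale (a rough check, not from the thread): for coordinate values around 10^5, the spacing between adjacent doubles is on the order of 10^-11, so pure binary representation error stays far below the fourth decimal.

    import math
    from decimal import Decimal

    x = round(123456.789, 3)
    print(math.ulp(x))                              # spacing of doubles near this magnitude (~1.5e-11); Python 3.9+
    print(abs(Decimal(x) - Decimal("123456.789")))  # actual representation error, well below 1e-4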


There's a lot of stuff happening under the hood when writing to a file geodatabase: some of it is due to decimal number imperfections, and some of it is (usually) due to the ArcGIS "snapping grid" that is implied by the resolution and tolerance settings.

My suggestion would be to write the same data to a feature class with a different resolution and tolerance setting and see how that affects the result.
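I'm not sure of the exact internals, but here is a rough sketch of the snapping-grid idea (the formula and origin handling below are assumptions for illustration, not the actual ESRI implementation): coordinates are stored as integer steps of size "resolution", and converting those steps back to a double reintroduces binary noise that shifts with the resolution you choose.

    def snap(value: float, resolution: float = 0.0001, origin: float = 0.0) -> float:
        """Snap a coordinate to the nearest grid step of size `resolution` (illustrative only)."""
        steps = round((value - origin) / resolution)  # what would be stored internally as an integer
        return origin + steps * resolution            # what you get back as a double when reading

    x = 123456.789                   # on paper, already an exact multiple of 0.0001
    print(f"{snap(x):.15f}")         # usually shows binary noise in the trailing digits
    print(f"{snap(x, 0.001):.15f}")  # a different resolution gives slightly different trailing digits

Which is consistent with the suggestion above: changing the resolution changes the grid, so the trailing digits you see in ArcMap will change too.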


Here is a small update in case someone else encounters a similar issue.

If the FGDB into which we are loading data is created using ArcGIS Pro and not ArcCatalog, I get the expected results, with the decimal difference only in the ninth decimal.

The FGDB is created in ArcGIS Pro using the same parameters as in ArcCatalog.

