
I want to write to an existing FGDB and I'm getting "Esri Geodatabase Writer: XXX attribute value(s) were truncated" (in my case XXX = 500+) in my translation log. As far as I can tell, the attribute values being written and their data types are fine. How can I determine which attribute values are being truncated, and why? I can't find any further information on this.

Are there any other warnings in the log file? I would expect a bit more information about what was truncated, at least which attribute or column.

 

You can check the output feature type's attribute definitions and see if any of the column lengths look smaller than you expect. In some cases an attribute might be expecting some kind of short code value rather than the full text string.

 

If you really can't tell by looking there, then there are a couple of options.

 

  1. Add an AttributeValidator and test for lengths longer than what is defined in the output schema. This is really tedious if there are lots of attributes.
  2. Try using a Matcher to compare the input data with what's in the file geodatabase. You could first try to match by some known common id, and then by the attribute values. If you get matches on the id but not on the attribute values, this should help identify the problem features.
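The Matcher idea in option 2 can be sketched in plain Python as a comparison of source and written records keyed by a common id. This is a minimal sketch, assuming the attributes are available as dictionaries (e.g. collected via a PythonCaller); the `ID` key and field names are hypothetical, not from the original post.

```python
# Compare source rows against what was actually written, matching on a
# common id. Rows that match on id but differ on an attribute value are
# candidates for having been truncated on write.
def find_mismatches(source_rows, written_rows, key="ID"):
    """Return (id, field, source_value, written_value) for every attribute
    that differs between a source row and its written counterpart."""
    written_by_id = {row[key]: row for row in written_rows}
    mismatches = []
    for row in source_rows:
        other = written_by_id.get(row[key])
        if other is None:
            continue  # no match on id at all; a different problem
        for field, value in row.items():
            if field != key and other.get(field) != value:
                mismatches.append((row[key], field, value, other.get(field)))
    return mismatches
```

For example, a source value of "abcdef" written back as "abc" would show up as `(1, "NAME", "abcdef", "abc")`, pointing directly at the truncated field.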

Are you sure the attribute definition in the writer is the same as the existing feature class? You can import existing feature class definitions from a GDB into the (Feature)Writer to make sure they match.


The GDB truncation warnings are useless, unless they have been improved in a more recent FME version: no attribute or column information at all.

 

"The level of detail you get with truncation warnings seems to vary depending on the format being written. For example, the shapefile writer provides the truncated value in the log (although not which field it's in), while writing to ArcSDE you only get "Geodatabase Writer: 3 attribute value(s) were truncated".

I've just written a little bit of Python that checks the string lengths against a schema list, which at least avoids exploding all the data (the schema data needs to be hardcoded at the moment)."

 

See https://community.safe.com/s/bridea/a0r4Q00000Hbr5EQAR/enhanced-logging-of-data-truncation-warnings
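The hardcoded-schema checker described above might look something like this. A minimal sketch: the field names and widths are made-up examples, and in practice each feature's attributes would be read inside a PythonCaller rather than from a plain dict.

```python
# Check string attributes against the widths defined in the output schema
# and report any value that would be truncated on write.
# SCHEMA_WIDTHS is hardcoded here, mirroring the approach in the post;
# these names and widths are hypothetical.
SCHEMA_WIDTHS = {"NAME": 50, "DESCRIPTION": 100, "CODE": 10}

def find_truncations(feature, schema=SCHEMA_WIDTHS):
    """Return (field, value_length, max_width) for every string attribute
    longer than the width defined in the output schema."""
    problems = []
    for field, max_width in schema.items():
        value = feature.get(field)
        if isinstance(value, str) and len(value) > max_width:
            problems.append((field, len(value), max_width))
    return problems
```

Logging the returned tuples gives exactly what the format writer's warning omits: which field overflowed, by how much, and against which limit.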


Hello,

is there any update on this? It looks like the idea is still active but not implemented.

I am transforming a million XML files to FGDB, so it's hard to find which attribute is truncated and where.

 

 

As @virtualcitymatt mentioned, I took a look at the AttributeValidator, but I'm not sure whether it can check the string length dynamically. Or is there a way to gather statistics after the FeatureMerger and then use that value in the AttributeManager, where the final field type is set?
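The "statistics" idea above, collecting the longest value seen per attribute so the field widths can be set (or checked) afterwards, can be sketched like this. A minimal sketch assuming features are available as dictionaries of attribute name to value; the attribute names in the example are hypothetical.

```python
# Collect the maximum string length observed for each attribute across
# all features. The result can be compared against the output schema's
# field widths, or used to choose widths in the first place.
from collections import defaultdict

def max_attribute_lengths(features):
    """Given an iterable of dicts (attribute name -> value), return a dict
    mapping each attribute name to the longest string length seen."""
    maxima = defaultdict(int)
    for feature in features:
        for name, value in feature.items():
            if isinstance(value, str):
                maxima[name] = max(maxima[name], len(value))
    return dict(maxima)
```

In an FME workspace the same statistic could likely be gathered with a StatisticsCalculator, but a Python pass like this works when the data volume makes inspecting features by hand impractical.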

 

 

I see several posts here, but so far nothing that would help. Have you found a workaround for this, @killphil84?

 

Thank you.


Hi, @vilemrousi.

Thank you for replying. To best help you out, we recommend posting a new question thread detailing your issue for better community visibility and a faster response time.

Thanks!


@vilemrousi This may assist.

 

