Question

Once attributes are resolved, is there a way to remove the original (coded) attributes automatically?


nwse
Contributor

In an SDE reader, you have the option to ‘Resolve Domains’:

In this example, this will take fields that have a domain, create a new field with the suffix ‘_resolved’, and populate it with the descriptions from the coded values.

For example:

The original field with the domain codes is ‘Accuracy_Spatial_Source’, while ‘Accuracy_Spatial_Source_resolved’ holds the domain descriptions.

Once the codes are resolved, I no longer need the original fields. Is there a simple way to remove them automatically?

Sure, I can use an AttributeManager and remove them manually, but that is neither practical nor efficient when there are more than 50 fields.

One way to somewhat ‘cheat’ is to use a BulkAttributeRenamer to strip the suffix ‘_resolved’, effectively ‘overwriting’ the original fields; however, this causes schema issues.

Another method I tested was using a PythonCaller to generate a list of all the “unresolved” fields; however, I could not find a way to bulk remove the attributes from that list afterwards.
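For what it's worth, a minimal sketch of that PythonCaller approach, assuming FME's `fmeobjects` feature API (`getAllAttributeNames` and `removeAttribute` per feature); the `fields_to_remove` helper and the `_resolved` suffix handling are illustrative, not an official recipe:

```python
# Sketch: per feature, drop any attribute that has a "_resolved" twin.
# Assumes the standard FMEFeature API exposed inside a PythonCaller.

def fields_to_remove(names, suffix="_resolved"):
    """Pure helper: return original field names that have a resolved twin."""
    name_set = set(names)
    return [n for n in names
            if not n.endswith(suffix) and n + suffix in name_set]

def processFeature(feature):  # PythonCaller function entry point
    # Remove each original (coded) field; the _resolved fields survive.
    for name in fields_to_remove(feature.getAllAttributeNames()):
        feature.removeAttribute(name)
```

Note the removal only affects the features themselves; the writer's user attributes would still need to match the surviving `_resolved` fields (e.g. via dynamic schema or Automatic attribute definition).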

Has anyone figured out a way to do this?

4 replies

hkingsbury
Celebrity
  • April 29, 2025

I’m not quite sure what you mean by a schema issue with the BulkAttributeRenamer…

There is also the BulkAttributeRemover (and Keeper). Have you had a look at either of these?


nwse
Contributor
  • Author
  • April 29, 2025

By schema issue I mean: if you simply remove the suffix from the _resolved fields, the description values end up in the original field.

When you resolve domains, the original field has type ‘coded_domain’, while the resolved field has type ‘char’.

Therefore, if you use the BulkAttributeRenamer to strip “_resolved”, all the values go into a coded_domain field, causing schema issues when you go to write.

If you try to use a GDB writer, your translation will fail.

With the BulkAttributeRemover, yes, this technically could work, but it is the same manual approach as the AttributeManager. I want to avoid doing it by hand if possible.


hkingsbury
Celebrity
  • April 29, 2025

Are you using a dynamic writer (with a schema feature), or are you using a standard writer?


If you’re using a schema feature, you’ll be able to break down the schema and adjust the field types from coded_domain to the desired type. This may require a bit of python to achieve smoothly.
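A hedged sketch of that “bit of python” for the dynamic route, assuming the schema feature exposes its fields as the usual list attributes (`attribute{i}.name`, `attribute{i}.fme_data_type`) and that `fme_char(200)` is an acceptable target type; verify both against your own schema feature before relying on this:

```python
# Sketch: on an FME schema feature, rewrite any coded_domain field type.
# Assumptions: list attributes attribute{i}.fme_data_type exist, and
# fme_char(200) is a suitable replacement type (adjust as needed).

def replace_type(fme_type, old="coded_domain", new="fme_char(200)"):
    """Pure helper: swap the unwanted type, leave everything else alone."""
    return new if fme_type == old else fme_type

def processFeature(feature):  # PythonCaller function entry point
    i = 0
    while True:
        key = "attribute{%d}.fme_data_type" % i
        current = feature.getAttribute(key)
        if current is None:  # no more schema fields
            break
        feature.setAttribute(key, replace_type(current))
        i += 1
```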

If you’re just using a standard writer, you can change the type under the User Attributes tab.

nwse
Contributor
  • Author
  • April 29, 2025

I’m just using a standard writer; however, if I use a FeatureWriter, the translation is successful. I’m not sure what the difference is between a GDB writer and a FeatureWriter, but one works and the other does not.

To reiterate, I am simply trying to find a fast and efficient way to remove the coded fields, as I only need the resolved ones to be written. Changing things one by one is tedious and, depending on how many coded fields a dataset has, could take quite a bit of time, not to mention the risk of user error.



