
I am running a verification process and there can be one or more validation errors in my data for each feature. I would like to define my own error messages and push them to a list so that I can later easily combine them into an error report.

To solve this I am thinking of using lists, but I cannot figure out how to create an empty list instead of an attribute.

 

I am thinking of a list looking something like this:

 

errors{0}.height: "Negative values are not allowed"
errors{1}.height: "Value must not be empty"
errors{0}.quality: "Quality code invalid"
errors{1}.quality: "Quality code must not be empty"
errors{0}.bbox: "Feature is out of bounds, check coordinate system"

 

I hope you get the idea.

Later I want to use the list to report all the errors each feature might have.

 

But how can I define an empty list so that I can later populate it with values as the feature goes through a set of tests? Can I use the Feature manager to run my tests and add the validation results to the list?

I think I actually solved it myself, unless someone out there has a smarter way to do it.

By running the validation steps in parallel instead of chaining them, and writing the verbose error messages with an AttributeManager, I can at the end of the process use a ListBuilder to group the features by their ID and build the list.

It will be an awful lot of AttributeManagers, though (one for each potential error!).

Maybe using Python is a better way to handle it?
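For reference, a minimal sketch of what that Python route could look like inside a PythonCaller, based on the standard FeatureProcessor template. The height and quality checks and their messages are taken from the example list above; the "Value must be numeric" message and the exact test conditions are illustrative assumptions, not part of the original workflow.

```python
import fme
import fmeobjects


class FeatureProcessor(object):
    """Sketch: run the basic checks in one place and build an errors{} list attribute."""

    def input(self, feature):
        height_msgs = []
        height = feature.getAttribute('height')
        if height is None or str(height).strip() == '':
            height_msgs.append('Value must not be empty')
        else:
            try:
                if float(height) < 0:
                    height_msgs.append('Negative values are not allowed')
            except ValueError:
                height_msgs.append('Value must be numeric')  # illustrative extra message

        quality_msgs = []
        quality = feature.getAttribute('quality')
        if quality is None or str(quality).strip() == '':
            quality_msgs.append('Quality code must not be empty')

        # Write the collected messages as list elements:
        # errors{0}.height, errors{1}.height, errors{0}.quality, ...
        for i, msg in enumerate(height_msgs):
            feature.setAttribute('errors{%d}.height' % i, msg)
        for i, msg in enumerate(quality_msgs):
            feature.setAttribute('errors{%d}.quality' % i, msg)

        self.pyoutput(feature)

    def close(self):
        pass
```

A single caller like this could stand in for the attribute-based Testers and AttributeManagers, with each failed check simply becoming another element in the list.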



Hi @so_much_more! I would assume there is some type of query transformer in your workspace to verify the transformation. In that case, would it be an option to generate a list from that transformer and connect an AttributeManager to customize the message? How are you currently assigning/defining your error messages?

 

These articles and past posts might be helpful as well



Hi Jenna, thanks for your reply and the supplied links.

Most of my validations are very basic and done with a Tester or AttributeManager, using the AttributeManager to apply a verbose message to a field. A few of the spatial tests I am doing with existing transformers that, I think/assume, can output a list as you say.

I played with the PythonCaller and think that might be my best route, since I can append errors to a Python dict by category with a series of "if this..." then append <error message> checks.

In the end I assume I can combine all the lists and explode them into a table to be used in my final result report.
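For what it is worth, here is a minimal sketch of that last "explode into a table" step, assuming the errors{}.<category> list structure from the first post has already been built on each feature; the id attribute is a placeholder for whatever feature identifier is actually used. It emits one feature per error message, much like a ListExploder would, ready to be written to the report table.

```python
import fme
import fmeobjects


class FeatureProcessor(object):
    """Sketch: emit one report row per error message (a Python take on the ListExploder)."""

    CATEGORIES = ('height', 'quality', 'bbox')

    def input(self, feature):
        feature_id = feature.getAttribute('id')  # placeholder for the real feature ID

        for category in self.CATEGORIES:
            # Walk the list elements errors{0}.<category>, errors{1}.<category>, ...
            i = 0
            while True:
                message = feature.getAttribute('errors{%d}.%s' % (i, category))
                if message is None:
                    break
                row = fmeobjects.FMEFeature()
                if feature_id is not None:
                    row.setAttribute('id', feature_id)
                row.setAttribute('category', category)
                row.setAttribute('message', message)
                self.pyoutput(row)
                i += 1

    def close(self):
        pass
```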

 

