The workbench I am putting together connects a feature reader to a dynamic writer in several locations.

The workbench itself detects shapefiles in a folder location and validates the projection, geometry, and AssetID attributes, then writes any identified errors out to a shapefile for upload in another step.

The dynamic write of the "errors" shapefile works; however, the dynamic write of the upload file encounters a warning, and the following appears in the log:

Cannot define schema for '@Value(ENCODED,fme_feature_type)' as the feature does not contain schema information.

Despite the schema mismatch, the remaining features arrive correctly.

The features arrive with the new layer name that I want to use.

Testing my workbench, I have found that the dynamic writer performs as expected (and outputs the shapefile) at every stage up to exposing fme_feature_type, mapping the new layer name, and setting the dynamic reader to the new layer name.

I have tested this in the latest version (2023.1.1.1) and received a similar result.

Has anyone experienced this encoded-value schema mismatch before, and what is the best way around it?

Thank you.

Michael.

Hi @michaelbreen, I can't seem to reproduce the issue you're having with the @Value(ENCODED,fme_feature_type) message when trying to write out the features. One thing I noticed is that the fme_feature_type coming from the schema has the value "PATH" (originating from the PATH Reader), whereas the features reaching the writer have the value "Draft_Footpath_Valuation_2022_Map" for fme_feature_type. This mismatch could be what's causing the writer to fail.
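
If you need a stopgap while investigating, one manual workaround is to force the value yourself. The sketch below is a hypothetical PythonCaller placed just before the dynamic writer that overwrites fme_feature_type on every data feature so it agrees with the schema; the layer name is an assumption taken from your log, so adjust it to your dataset.

# Hypothetical PythonCaller: align fme_feature_type on the data features.
# The layer name below is an assumption taken from the log message.
class FeatureTypeAligner(object):
    def input(self, feature):
        # Overwrite whatever fme_feature_type the feature arrived with.
        feature.setAttribute('fme_feature_type',
                             'Draft_Footpath_Valuation_2022_Map')
        self.pyoutput(feature)  # pyoutput is provided by the PythonCaller

    def close(self):
        pass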

In instances like these where the Writer doesn't seem to accept the schema it's being sent, I like to use the SchemaScanner before the writer to re-scan the schema mid-workspace and ensure a clean schema exists with the correct values for fme_feature_type and fme_feature_type_name.

When using the SchemaScanner, you'll need to set the Schema Definition Name to fme_feature_type_name so the writer receives the correct schema.

I've attached an edited version of your workspace that runs to completion if you wanted to take a look at the SchemaScanner method.
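
If you want to double-check what the SchemaScanner is emitting, a small PythonCaller connected to its schema output can log the schema feature's contents. This is only a sketch: it assumes FME's usual schema-feature convention of an fme_feature_type value plus attribute{n}.name / attribute{n}.fme_data_type pairs.

import fmeobjects

# Hedged sketch: log what the schema feature actually carries, so you can
# confirm its fme_feature_type matches the incoming data features.
class SchemaInspector(object):
    def __init__(self):
        self.log = fmeobjects.FMELogFile()

    def input(self, feature):
        for name in feature.getAllAttributeNames():
            if name == 'fme_feature_type' or name.startswith('attribute{'):
                self.log.logMessageString(
                    '%s = %s' % (name, feature.getAttribute(name)),
                    fmeobjects.FME_INFORM)
        self.pyoutput(feature)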

Hope this helps!


Hi @danminneyatsaf.

Thank you very much for your suggestion.

The schema scanner works like magic and has resolved the issue I was running into.

That said, the customers are trying to move the goalposts again, so I may be coming back to this earlier than I expect.

I am thinking of replacing all my schema work in other workbenches with the scanner as it seems to make the process much easier.

Is there any reason not to, or are there limitations where we should still be managing the schema separately?

Thanks again.

Michael.


I would recommend changing the schema work in your other workbenches if you find the SchemaScanner easier to work with. It adds flexibility to your workflows and an extra layer of correction to ensure that your Writer is receiving a valid schema.

I've compiled a list of a few things to be aware of when working with the SchemaScanner. These won't break your workflow but can upset the schema in some cases:

  1. Keep an eye on the Ignore Attributes Containing parameter. Features/rows coming from different reader formats can introduce additional schema attributes such as xlsx_col_id. To get rid of these you would use the regular expression xlsx_ (see the sketch after this list).
  2. If you're working with any date attributes, have a play with the Date Handling parameters if you run into any issues with the dates not getting output.
  3. Make sure to use the Group Processing parameter and group your features by fme_feature_type so you get a separate schema for each of your incoming feature types.
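
To sanity-check the pattern from point 1 outside FME, here's a plain-Python sketch; the attribute names are examples only.

import re

# 'xlsx_' should match format attributes such as xlsx_col_id while
# leaving genuine schema attributes alone.
pattern = re.compile(r'xlsx_')
for attr in ['xlsx_col_id', 'xlsx_row_id', 'AssetID', 'Shape_Length']:
    print(attr, '->', 'ignored' if pattern.search(attr) else 'kept')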

After those few considerations you should be good to go! Let me know if you have any additional questions or concerns.

