
Hi.

I'm trying to use "schema features" to define the schema for dynamic output, but I'm having a hard time grasping exactly how FME does and doesn't use these schema features.

The FeatureReader has a special "<Schema>" output port, but readers don't seem to have a similar mechanism, even though one can designate them as the "Schema Source" for dynamic output.

Does this mean that there is an underlying hidden connection between the readers and the writers?

Specifying "Schema From Schema Feature" does _not_ seem to enable the writer to read schema features, but requires the schema feature content to be superimposed upon all data features. Is this correct?

I have several cases where I need to manipulate the schema before output, so understanding this functionality is important to me.

One need is to "clean up" a schema definition before output, especially if the input is WFS (FME adds all sorts of nonsense into the schema, e.g. "*.xsi_nil").

Another need is to take a fixed schema from some source (e.g. a template database table) and use it to control the output.

Is there any documentation on how the whole schema feature/dynamic output mechanism works in detail?

Has anyone else been messing around with this functionality and gained some valuable insights?

Cheers.

Your best bet is probably to check out this article: https://community.safe.com/s/article/dynamic-workflows-advanced-example-modifying-the-s

 

There is indeed a special link between readers and writers. The link is based on the feature type name: when a writer feature type takes its schema dynamically from a reader, the writer looks up the reader's schema for that feature type name and uses it. This means any changes made to the schema within the workspace will not be picked up; however, you can always add manual attributes to the output feature type, and these are used in addition to the dynamic attributes.

 

Short note: FME also has a SchemaReader, which produces the same thing as the <Schema> port of the FeatureReader.

 

As for the schema feature: it is important that the schema feature for each feature type reaches the writer before the data features arrive. You can also have the schema defined on every data feature; however, typically only the first feature will be used to build the schema. I think this depends on the format, although I'm not 100% sure.

 

From my experience, there are two important attributes on the schema feature, although which ones matter will depend on your use case: 'fme_feature_type_name' and 'fme_schema_handling'.

 

The fme_feature_type_name attribute controls which layer/feature type gets the defined schema. If you rename the output feature type, the fme_feature_type_name attribute on the schema feature needs to be updated to match.
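
For example, a minimal sketch of keeping that in sync using a PythonCaller and the standard fmeobjects API (the feature type names here are made-up examples, not anything FME-defined):

```python
import fmeobjects

class SchemaRenamer(object):
    """Sketch: keep schema features in sync with a renamed output
    feature type. 'roads_wfs' and 'roads' are hypothetical names."""

    RENAMES = {'roads_wfs': 'roads'}

    def input(self, feature):
        current = feature.getAttribute('fme_feature_type_name')
        if current in self.RENAMES:
            feature.setAttribute('fme_feature_type_name', self.RENAMES[current])
        # pyoutput() is supplied by the PythonCaller at runtime
        self.pyoutput(feature)

    def close(self):
        pass
```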

 

When 'fme_schema_handling' is set to 'schema_only', it tells the writer not to use the feature as a data feature (only use it for the schema).
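
As a tiny sketch of what that looks like (an AttributeCreator setting the same attribute works just as well):

```python
import fmeobjects

# Sketch: tag a feature so the writer uses it for its schema only
# and does not write it out as a data record.
schema_feature = fmeobjects.FMEFeature()
schema_feature.setAttribute('fme_schema_handling', 'schema_only')
```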

 

In my experience most formats work well with this; however, in some cases/formats there can be unexpected results.

 

Transformers that might help you create and work with schema features are the SchemaSetter and the AttributePivoter.
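
If you end up building a schema feature by hand instead, a rough sketch in a PythonCaller could look like the following. It assumes the attribute{}.name / attribute{}.fme_data_type list layout that the FeatureReader's <Schema> port produces; the field names and data types below are illustrative only:

```python
import fmeobjects

def build_schema_feature(feature_type, fields):
    """Sketch: build a schema-only feature for a dynamic writer.
    'fields' is a list of (name, fme_data_type) tuples, e.g.
    [('id', 'fme_int32'), ('name', 'fme_varchar(50)')]."""
    schema = fmeobjects.FMEFeature()
    schema.setAttribute('fme_feature_type_name', feature_type)
    schema.setAttribute('fme_schema_handling', 'schema_only')
    for i, (name, data_type) in enumerate(fields):
        schema.setAttribute('attribute{%d}.name' % i, name)
        schema.setAttribute('attribute{%d}.fme_data_type' % i, data_type)
    return schema
```

The schema feature then just needs to reach the writer before the matching data features do.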

 

I haven't found much documentation on how it all works; mostly it's been the tutorials plus trial and error.



Thanks Matt.

One of your last comments made things click into place for me: "When 'fme_schema_handling' is set to 'schema_only', it tells the writer not to use the feature as a data feature (only use it for the schema)."

This explains perfectly why a single feature in one of my workspaces fails to produce any output: I merged the schema feature from a FeatureReader onto all data features and used dynamic output. I have another output with multiple features that worked, but I'm guessing that there the first feature was also set aside as a schema-only feature; I just didn't notice.

