
I've read a number of articles here regarding "dynamic schemas", but it appears I'm missing something fundamental, so I will try asking my question a different way.

I want to create a workflow that reads a CSV file that will change from time to time, either in the number and names of its columns or in the number of rows. The workflow then "enriches" the data found in the source table by querying an API using the value of one of the source columns (whose name will not vary), and finally writes a CSV with the original columns AND the "enrichment" columns.

My problem is not with the API calls or the "enrichment" process, but rather that I can't figure out how to set up the workflow so that the correct columns are read and interpreted unless I manually press the "parameters" button in the FeatureReader transformer. (I'm using the FeatureReader and FeatureWriter transformers in lieu of Readers and Writers.) The reason I want to avoid pressing the "parameters" button is that I want to fully automate the process by publishing to FME Server and creating an associated Server App, so that coworkers without FME Desktop can process their own input CSV files.

Any help or clues will be appreciated.

The easiest is probably to do something like this:

  • Dynamic CSV reader (or FeatureReader, makes no difference)
  • AttributeExposer to expose the attributes that are the same for all files
  • ...do whatever you need to call the API to enrich the features
  • SchemaSetter from the FME Hub; don't forget to exclude format attributes, e.g. those prefixed "fme_" and "csv_"
  • Dynamic CSV writer (or FeatureWriter), with schema source set to "Schema from schema feature"

The SchemaSetter basically analyses the passing features and generates an associated schema definition that is passed on to the writer.
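
For the curious, here is a minimal PythonCaller sketch of that same idea, not the SchemaSetter's actual internals: buffer the incoming features, collect the attribute names that pass through (skipping format attributes), and emit a schema feature ahead of the data. It assumes the dynamic writer's schema-feature convention of attribute{}.name and attribute{}.fme_data_type list attributes plus fme_schema_handling set to "schema_only"; the class name and the blanket fme_varchar(200) data type are illustrative assumptions only.

    import fmeobjects

    class SchemaFeatureBuilder(object):
        """Rough sketch of the SchemaSetter idea: collect attribute names
        from passing features, then emit a schema feature ahead of the data."""

        def __init__(self):
            self.names = []     # attribute names in first-seen order
            self.buffer = []    # hold features back until the schema is known

        def input(self, feature):
            for name in feature.getAllAttributeNames():
                # Exclude format attributes, as recommended above.
                if name.startswith('fme_') or name.startswith('csv_'):
                    continue
                if name not in self.names:
                    self.names.append(name)
            self.buffer.append(feature)

        def close(self):
            schema = fmeobjects.FMEFeature()
            for i, name in enumerate(self.names):
                schema.setAttribute('attribute{%d}.name' % i, name)
                # Assumption: treat everything as text, which is fine for CSV output.
                schema.setAttribute('attribute{%d}.fme_data_type' % i, 'fme_varchar(200)')
            # Assumption: flag the feature as schema-only so it is not written as a data row.
            schema.setAttribute('fme_schema_handling', 'schema_only')
            self.pyoutput(schema)
            for feature in self.buffer:
                self.pyoutput(feature)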


David,

Thanks for the quick response. I had not tried the SchemaSetter transformer, but it sounds like a good approach.

Phil


Hi @david_r, I am trying to do a similar thing: I read a CSV file of point locations and want to rename all attribute names that contain spaces and brackets, so that I can use the ChangeDetector to compare them with the names held in an ArcSDE database (whose attributes have already been renamed). The differences will be flagged in a report and the source data written to a new layer in ArcSDE.

The SchemaSetter seems one way to allow for dynamic changes in the source attributes (fields being added or removed in the future). In the attached example the attribute fields get renamed correctly, but the values are not written out to the renamed fields (I am writing back to CSV just as proof to myself that it works). I also get the same result if I remove the SchemaSetter completely, so can the list that is built during the process be used by itself?

Any ideas here? I think I am misunderstanding something fundamental!


You may want to look into using one or several BulkAttributeRenamers before the SchemaSetter and the dynamic writer.

You cannot rename or introduce new attributes after the SchemaSetter: they won't be included in the schema definition it creates, and will therefore be ignored by the dynamic writer.
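
To illustrate the intent, here is a small, hypothetical PythonCaller sketch of the kind of renaming a BulkAttributeRenamer (in regular-expression replace mode) would perform when placed before the SchemaSetter: spaces and brackets in attribute names are replaced with underscores. The class name and regex are assumptions for illustration; in practice the BulkAttributeRenamer itself is the simpler choice.

    import re

    class SpaceAndBracketRenamer(object):
        """Hypothetical sketch: rename attributes whose names contain spaces or
        brackets, replacing those characters with underscores. Must run BEFORE
        the SchemaSetter so the new names end up in the schema definition."""

        def input(self, feature):
            for name in feature.getAllAttributeNames():
                # Leave format attributes alone.
                if name.startswith('fme_') or name.startswith('csv_'):
                    continue
                new_name = re.sub(r'[ ()\[\]]', '_', name)
                if new_name != name:
                    feature.setAttribute(new_name, feature.getAttribute(name))
                    feature.removeAttribute(name)
            self.pyoutput(feature)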

