
Hi

I have a workbench that takes a CSV file and creates 10 CSV files, one for each site; it does this using an AttributeFilter.

This works great, but the CSV reader gives the last 8 months of data with each month as a column. So the first version we ran had Nov to Jun; this latest run has Jan to Aug. Every time we run this the attributes change, and I had to update them manually last time.

Is there a way of automatically picking up new attributes from the reader and adding them to the writer (and removing any no longer needed), please?

I'd ideally like to schedule this with FME Server, but I can't unless this is automatic.

Thanks

Hi @lorraine_s

One suggestion might be to re-add the CSV reader in dynamic mode, i.e. as a 'single merged feature type'. This mode may work better for your workflow and be better at picking up changes to the schema. You can read more about it here.

My testing seemed to show that the writer won't honour new attributes in subsequent CSV input files (attributes that were not in the original CSV file used when the merged reader was added). The symptom is that attributes which were not in the original merged reader won't be written.

@lorraine_s You might have better luck if you do what Matt says and ensure you select all possible CSVs that cover all your attributes (all months in this case). This would then create the appropriate schema for writing out.

Another way that worked for me is to create a dummy CSV file with all the attributes and one line of data, then add this as the reader. I found that when I read in other files with different selections of months, the output was appropriate; however, you do get empty attributes for the months that are not found.
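The dummy-file approach above can be scripted so you never have to hand-build it. This is a minimal sketch, not part of the original workspace: the file name `dummy_schema.csv`, the `Site` column, and the use of all twelve month abbreviations are assumptions for illustration.

```python
import csv

# All twelve month columns, so the dummy schema is a superset of any
# 8-month window the real extracts might contain.
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def write_dummy_schema_csv(path, extra_columns=("Site",)):
    """Write a one-row dummy CSV whose header lists every attribute
    the writer might ever need (site ID plus all twelve months)."""
    header = list(extra_columns) + MONTHS
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        # One placeholder data row so the reader registers every column.
        writer.writerow(["placeholder"] + ["0"] * len(MONTHS))
    return header
```

Adding the resulting file as the reader's source should give you a stable superset schema, at the cost of the empty month attributes mentioned above.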


@MattAtSafe What I found might be a bug, but I'm not sure... 🙂 Let's chat!

Thank you for your quick answers @MattAtSafe @SteveAtSafe. I have tried re-adding the writer as dynamic, and this seems to have worked. Thanks

Lorraine


@lorraine_s I think I have a solution for you. You can use the Schema (Any Format) reader to get the schema of your CSV files and pass it across to the destination CSV file.

Add a second reader to read the actual data, but ignore and skip the column names.

Schema (Any Format) needs a second CSV reader to set the parameters for the CSV reader it uses internally; otherwise it won't return the correct column names.

An AttributeManager can be used to map the column numbers (col0, col1, ...) to the correct column name in attribute{}.name using "@Value(attribute{1}.name) = col1". A small PythonCaller script would give you a more flexible approach if the number of columns in the source CSV varies a lot.
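The core of that PythonCaller could look something like the sketch below. It shows only the renaming logic on a plain dict; the function name and the sample column names are illustrative, not taken from the attached workspace.

```python
def rename_columns(generic_attrs, column_names):
    """Map generic attributes {'col0': v0, 'col1': v1, ...} onto the real
    column names read by the Schema (Any Format) reader."""
    renamed = {}
    for index, name in enumerate(column_names):
        key = "col%d" % index
        if key in generic_attrs:  # tolerate rows shorter than the schema
            renamed[name] = generic_attrs[key]
    return renamed
```

Inside an actual PythonCaller, the same loop would call `feature.getAttribute('col%d' % index)` and `feature.setAttribute(name, value)` on the incoming feature instead of working on a plain dict.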

 

A little trickier than I would have hoped, but I think this should give you a starting point. I've attached the workspace, built in FME 2017.1.

 

csvdynamicschema.fmwt


Hi @lorraine_s, the FeatureReader might be a good choice in this case. If the source CSV table has 9 columns (one column called "Site" that stores the site ID, and the other eight columns are the target month names), this workflow performs dynamic destination schema configuration and also feature type fanout based on the site ID.

Writer Feature Type Properties
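For intuition, the fanout behaviour described above is equivalent to grouping rows by the site ID and writing one file per group, as in this plain-Python sketch (the `Site` column name and the `site_` file-name prefix are illustrative assumptions, not settings from the workspace).

```python
import csv
from collections import defaultdict

def fanout_by_site(rows, site_column="Site"):
    """Group CSV rows (as dicts) into one list of rows per site ID."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[site_column]].append(row)
    return dict(groups)

def write_per_site(groups, fieldnames, prefix="site_"):
    # One output CSV per site, all sharing the same dynamic schema.
    for site, rows in groups.items():
        with open("%s%s.csv" % (prefix, site), "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)
```

In FME the FeatureReader supplies the rows and the writer feature type's fanout setting does the per-site splitting, so no scripting is actually required.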

