
Hi all,

I've run into a bit of an issue and need to figure out the following. Bear with me, as my thought process is still recovering -

 

  • Each month, Postcodes and data come to us in a zip file.
  • I read in the starting path with the "Path" reader, as the names are not the same each month - that's the key to this approach. It could be something like "Postcodes_Nov20.zip", for example.
  • Parsing that detail, I use the Unzipper to unzip the contents to a working folder.
  • I feed these through, with the path_windows value, to the FeatureReader, as that attribute holds the complete paths, e.g. Z:\FME\Working Folder\Postcodes_Nov20\data\Part1.csv
  • These are then connected to a FeatureReader, but it just passes the same file paths through rather than reading the CSV files themselves (a rough sketch of the intended logic is below).
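
For what it's worth, the logic I'm aiming for boils down to something like this plain-Python sketch (the incoming folder is made up; the zip name and working folder are just the examples above):

    import zipfile
    from pathlib import Path

    # Hypothetical incoming location -- the real zip name changes each month,
    # e.g. Postcodes_Nov20.zip; the working folder is the one from this post.
    incoming_zip = Path(r"Z:\FME\Incoming\Postcodes_Nov20.zip")
    working_dir = Path(r"Z:\FME\Working Folder") / incoming_zip.stem

    # Unzip the monthly delivery into a working folder named after the zip.
    working_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(incoming_zip) as zf:
        zf.extractall(working_dir)

    # Collect the full Windows paths of the CSV parts -- this is what the
    # path_windows attribute should be handing to the next FeatureReader.
    csv_paths = sorted(working_dir.rglob("*.csv"))
    for p in csv_paths:
        print(p)  # e.g. Z:\FME\Working Folder\Postcodes_Nov20\data\Part1.csv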

 

I've attached the workspace, as well as the FeatureReader parameters, as this has stumped me and is just the start of a longer process.

 

Any help is appreciated. Thank you

I'm not sure I understand what the issue is.

What you need to do next is connect an AttributeExposer to the Generic output port and expose the columns you need, copied from the Data Inspector view. Or, if you are on 2020, select them from the feature-cached values.


Just to be sure, is this not a FeatureCaching issue? If you run the workspace, change the files in the directory, and then run the workspace again, it won't know the files in the input folder changed, so it will display the cached features.


Hi niels, I can confirm I've kept the temporary files purged, so as to avoid errors.

I'm using 2020, and there are quite a lot of columns to expose, so I wondered whereabouts the feature-cached values are. I am carrying attributes through from the first FeatureReader, but they aren't the column names in the CSV.


When I select a feature (coming from the Generic output port) in the Data Inspector, I can see the attribute names and values in the Feature Information window.

You can manually add them to the AttributeExposer, or open the AttributeExposer and hit the Import button: point it to one of the CSV files, click Next, Next, and select the column values.

Which exact version are you using? I checked in 2020.2.
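
If typing them all in is the pain, a rough plain-Python sketch (the path is just the example from your post) will print the header row so you can paste the names into the AttributeExposer:

    import csv
    from pathlib import Path

    # Example path from the thread; any one of the part files will do.
    sample_csv = Path(r"Z:\FME\Working Folder\Postcodes_Nov20\data\Part1.csv")

    # Read just the header row -- these are the names the AttributeExposer
    # (or its Import dialog) needs.
    with sample_csv.open(newline="") as f:
        header = next(csv.reader(f))

    print(", ".join(header))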


Awesome, thank you! Had a dig and found what I needed. The field headers were hidden for some reason.

 

Back to progress :)


If your CSV files all have the same schema, you don't need to use the dynamic port in the FeatureReader.

Make sure that in the CSV parameters the Feature Type Name is set to "From Format Name".

You can then set up a CSV output port on your FeatureReader and all the attributes will be exposed.
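
That mirrors how you'd handle it outside FME too - a rough sketch, assuming the example folder from your first post and an identical header in every part file:

    import csv
    from pathlib import Path

    # Example folder from the thread; every Part*.csv is assumed to share one header.
    data_dir = Path(r"Z:\FME\Working Folder\Postcodes_Nov20\data")

    # With an identical schema, one reader definition covers every file -- the
    # equivalent of a single CSV output port instead of the dynamic Generic port.
    for csv_file in sorted(data_dir.glob("*.csv")):
        with csv_file.open(newline="") as f:
            for row in csv.DictReader(f):
                pass  # every row has the same keys, whichever part it came from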


Hi,

I've tried this, but no luck with the extra port - the data still comes through the Generic port. Not sure how else to address this, though yes, all the input data has the same schema.

