Solved

Zip multiple files downloaded with HTTP Caller

  • 27 January 2022
  • 4 replies
  • 33 views


Hi FME'ers,

I am using the HTTPCaller to download multiple CSV files to a folder. I'd like to zip that folder and output the zip file path as a single feature to an Automations writer. Can this be done? The Automations writer would then trigger a second workspace to read all the CSV files and process them. I don't want the Automations writer to trigger that workspace once per file.

 

Thanks,

David


Best answer by takashi 27 January 2022, 12:21


4 replies


Hi @djmcdermott, you can use the File Copy writer to save all the downloaded files into a single zip archive if you set a zip file path (i.e. one ending with the .zip extension) as the Dataset. If you wrap the writer in a FeatureWriter, the feature output from its Summary port contains an attribute called "_dataset" that stores the destination zip file path. You can then use that feature in the subsequent process.

The attached screenshot illustrates the workflow (save-files-into-zip-with-filecopy).
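If you ever need the same behaviour inside a single transformer instead of a writer, a PythonCaller can collect the downloaded files, zip them, and emit one feature carrying the zip path. This is only a rough sketch, not the FileCopy approach above: it assumes each incoming feature stores the downloaded file's path in a hypothetical attribute "_file_path", and it writes a "downloads.zip" next to the first file.

```python
import os
import zipfile

import fme          # available when run inside FME
import fmeobjects


class FeatureProcessor(object):
    """Collect downloaded file paths, zip them, emit one feature with the zip path."""

    def __init__(self):
        self.paths = []

    def input(self, feature):
        # Assumption: each feature carries the downloaded file's path
        # in a hypothetical attribute named "_file_path".
        path = feature.getAttribute('_file_path')
        if path:
            self.paths.append(path)

    def close(self):
        if not self.paths:
            return
        # Write all collected files into one zip next to the first file.
        zip_path = os.path.join(os.path.dirname(self.paths[0]), 'downloads.zip')
        with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
            for p in self.paths:
                zf.write(p, arcname=os.path.basename(p))
        # Emit a single feature with the zip path, analogous to the
        # "_dataset" attribute on the FeatureWriter's Summary feature.
        out = fmeobjects.FMEFeature()
        out.setAttribute('_dataset', zip_path)
        self.pyoutput(out)
```

Connecting that single output feature to the Automations writer would give you one message per zip rather than one per file.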


That's brilliant, @Takashi Iijima, thank you. I've always wondered how to use the FileCopy writer without using the Subfolder Name, so you've solved two problems for me!


@djmcdermott, you mentioned triggering a second workspace to read all the CSV files and process them.

You may also consider writing the zip file to a temporary location (see the TempPathnameCreator) and connecting the Summary port of the FeatureWriter to a FeatureReader, so that all the CSV files in the zip are processed in the same workspace.
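For what it's worth, the "read every CSV inside the zip" step is also easy to prototype outside FME. A minimal Python sketch, assuming the zip path produced by the previous step (the function name and example path are just placeholders):

```python
import csv
import io
import zipfile


def read_csvs_from_zip(zip_path):
    """Yield (member_name, row_dict) for every CSV file inside the zip."""
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if not name.lower().endswith('.csv'):
                continue
            with zf.open(name) as raw:
                # Wrap the binary member stream so csv can read text rows.
                text = io.TextIOWrapper(raw, encoding='utf-8')
                for row in csv.DictReader(text):
                    yield name, row


# Example usage with a hypothetical path:
# for member, row in read_csvs_from_zip(r'C:\temp\downloads.zip'):
#     print(member, row)
```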


Hi @geomancer. Thank you for your response. This particular workspace is designed to be reusable: it has a parameter for the dataset ID, so it can be used to download different datasets. We then plug the relevant workspace in next, as part of an automation, to transform and load the data into a staging area. I'm not sure the temp file path would be retained between the two workspaces. I could set it up with an FMEServerJobSubmitter, but then it eats two engines.
