Hi Community,
After reading this article here, I felt I should tailor my question to my particular challenge and ask @GerhardAtSafe and @stewartharper for help (and if you could point me to the YouTube videos and GitHub repo mentioned there, I'd be so grateful!). My question is about creating a process similar to the "FME Cloud API return ZIP file" one, but with a few variations in the workflow.
What I want to do is read a varying number of CSV files, created as output from another set of FME workspaces executed on FME Cloud/Server, into a workspace that parses media UUIDs from a couple of columns in each CSV into a single line-item list. That list of media UUIDs is then used to pull the media from one S3 bucket and push it to another S3 bucket.
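In FME this parse-and-copy step would probably live in a PythonCaller (or be built from transformers such as the S3Connector), but to make the core logic concrete, here is a minimal plain-Python sketch using boto3. The bucket names, directory path, column names, and object-key layout are all assumptions just to illustrate the idea:

```python
import csv
import glob
import boto3

# Hypothetical names -- substitute your own buckets, directory, and CSV columns.
SOURCE_BUCKET = "my-source-media-bucket"
DEST_BUCKET = "my-dest-media-bucket"
CSV_DIR = "/data/fmeserver/resources/data/mycsvdirectory"  # FME_SHAREDRESOURCE_DATA/mycsvdirectory
UUID_COLUMNS = ("photo_uuid", "video_uuid")  # assumed names of the columns holding media UUIDs

def collect_uuids(csv_dir):
    """Read every CSV in the directory and build one de-duplicated list of media UUIDs."""
    uuids = set()
    for path in glob.glob(f"{csv_dir}/*.csv"):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                for col in UUID_COLUMNS:
                    value = (row.get(col) or "").strip()
                    if value:
                        uuids.add(value)
    return sorted(uuids)

def copy_media(uuids):
    """Copy each media object from the source bucket to the destination bucket."""
    s3 = boto3.client("s3")
    for uuid in uuids:
        key = f"media/{uuid}"  # assumed key layout -- adjust to how your media is actually stored
        s3.copy_object(
            Bucket=DEST_BUCKET,
            Key=key,
            CopySource={"Bucket": SOURCE_BUCKET, "Key": key},
        )

if __name__ == "__main__":
    copy_media(collect_uuids(CSV_DIR))
```

Now, the next step is where it gets tricky in the workflow.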
* How can I zip all the media into one ZIP file, give it a date/timestamp file name such as update_03OCT2018_0830utc.zip (or whatever may come from a system parameter), and then move it into the destination S3 bucket? (See the sketch after this list.) There are a couple of specific variables that may influence how I develop this workflow:
- The number of CSV files written to FME_SHAREDRESOURCE_DATA/mycsvdirectory may vary from 0 to 23, because the workspaces that update the database run on a schedule and may not always have record updates to add.
- These CSVs need to be moved out of this shared directory so the next batch is written to a clean directory and no CSVs are processed twice.
- I currently have a directory watch topic and notification set up on this directory as a trigger: once CSV files are created there, it notifies the next workspace, the one that generates the media parsing list, to run on them.
- All output used in creating the final ZIP file needs to be purged so everything is reset for the next scheduled iteration of updates.
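Staying with the plain-Python view (inside FME this might be a shutdown script or another PythonCaller), here is a rough sketch of the zip, upload, and cleanup steps from the list above. All paths and the bucket name are placeholders, and the timestamp format just mimics the example file name:

```python
import os
import shutil
import zipfile
from datetime import datetime, timezone

import boto3

# Hypothetical paths and bucket -- adjust to your setup.
MEDIA_DIR = "/tmp/media_staging"  # where the pulled media was staged locally
CSV_DIR = "/data/fmeserver/resources/data/mycsvdirectory"
CSV_ARCHIVE_DIR = "/data/fmeserver/resources/data/mycsvdirectory_processed"
DEST_BUCKET = "my-dest-media-bucket"

def build_zip():
    """Zip everything in the staging directory into one timestamped archive."""
    stamp = datetime.now(timezone.utc).strftime("%d%b%Y_%H%M").upper()  # e.g. 03OCT2018_0830
    zip_path = f"/tmp/update_{stamp}utc.zip"
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in os.listdir(MEDIA_DIR):
            zf.write(os.path.join(MEDIA_DIR, name), arcname=name)
    return zip_path

def upload_and_clean(zip_path):
    """Push the ZIP to the destination bucket, then reset for the next run."""
    boto3.client("s3").upload_file(zip_path, DEST_BUCKET, os.path.basename(zip_path))
    # Move processed CSVs out of the watched directory so they aren't picked up again.
    os.makedirs(CSV_ARCHIVE_DIR, exist_ok=True)
    for name in os.listdir(CSV_DIR):
        if name.lower().endswith(".csv"):
            shutil.move(os.path.join(CSV_DIR, name), os.path.join(CSV_ARCHIVE_DIR, name))
    # Purge the staged media and the local ZIP for the next scheduled iteration.
    shutil.rmtree(MEDIA_DIR, ignore_errors=True)
    os.remove(zip_path)

if __name__ == "__main__":
    upload_and_clean(build_zip())
```

Moving the processed CSVs to a sibling archive directory (rather than deleting them outright) keeps the watched directory clean for the directory watch trigger while still preserving an audit trail of what was processed.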
I hope this makes sense. I realize it's a bit lengthy, but some direction on how to get this workflow started would help me immensely.
Thanks, Todd