Regarding a workspace that runs on multiple files: we had to accomplish something like this for one of our workflows. We have a process that reads in anywhere from 1 to 4 files at a given time.
To get around this, our automation uses an S3 watcher, which is the only big difference, but the logic should be mostly the same. We built two custom transformers: one that lists all the files in a given directory (or directories) that were modified or uploaded on a given date, and another that counts the files, groups them by file name, and makes sure each group has a specific number of files. We accomplish this with one automation and one workspace.
An example: John Smith Dental sends us 2 files, with the goal of generating a list of patients with insurance. The automation runs whenever either of those files is dropped. We get JSD_Insurance and JSD_Patients, and the transformers fetch the list of files from the S3 bucket. If both files are there, the workspace continues using the files it gathered from that list; if not, it errors out, though I suppose you could configure it to do something else.
I'll include a screenshot of the file lister and the custom transformer file for the second transformer, which filters based on that list. I can't include the file for the first one because it has some proprietary info in it.
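In the meantime, here's a minimal sketch of what the "count and group" idea could look like inside a PythonCaller. The attribute name ("filename"), the prefix parsing, and the expected counts are placeholders for illustration, not our production code:

```python
# Minimal sketch of a "count and group" check as a PythonCaller class.
# Attribute name ("filename") and EXPECTED_COUNTS are placeholders.
import fmeobjects

# How many files each sender prefix is expected to deliver (illustrative).
EXPECTED_COUNTS = {"JSD": 2}

class FeatureProcessor(object):
    def __init__(self):
        # Buffer incoming file-list features per prefix until close().
        self.groups = {}

    def input(self, feature):
        filename = feature.getAttribute("filename") or ""
        prefix = filename.split("_")[0]  # e.g. "JSD" from "JSD_Insurance.csv"
        self.groups.setdefault(prefix, []).append(feature)

    def close(self):
        for prefix, features in self.groups.items():
            expected = EXPECTED_COUNTS.get(prefix)
            if expected is not None and len(features) != expected:
                # A file is missing (or extra), so fail the translation,
                # which is roughly what our workspace does.
                raise fmeobjects.FMEException(
                    "Expected %d files for '%s', found %d"
                    % (expected, prefix, len(features)))
            # All expected files arrived: release the group downstream.
            for feature in features:
                self.pyoutput(feature)
```

If you'd rather not fail the whole translation, you could instead set a flag attribute on the features and route on it with a Tester downstream.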
Hopefully this helps.
Thank you!
I did find a solution to this.
I used a filter for the first file type and routed the features that failed out to another filter for a second file type.
Makes sense! I'm glad you found a solution :)
Also, just in case someone stumbles across this in the future, you may want to mark your reply as the answer if possible.
Has anyone successfully set up a two-way communication system with SMS in FME? I'm interested in receiving responses and updating my workflow accordingly.