
One of the biggest issues we have with reading data from various web sources (several XML feeds we read are especially prone to this) is that the reads sometimes hang or pull in an incomplete dataset. To mitigate this, we daisy-chain workflows together so that one triggers the next and so on down the line, so that a single bad dataset doesn't bring down the whole job along with the other datasets. Ideally, the server would be able to handle invalid datasets (possibly by auto-completing/closing the read, even if not all the data could be read in, or, better still, by attempting a second read of the dataset) and then proceed to the other datasets.

In general, FME could do with some improvements so that all workspaces can be run in a non-disruptive manner. I believe Safe is working on that already (e.g. by adding <Rejected> ports to all transformers that could otherwise terminate the workflow). However, readers (and in our case, FeatureReader transformers) are also at the top of my list for improved error handling, so you have my vote! :)

Speaking of FeatureReaders: these transformers might offer a more stable approach for you, @runneals, since they *should* not terminate the workspace. Don't forget to set the "Rejected Feature Handling" workspace parameter to "Continue Translation" as described in this blog post.

Would also love to see the actual reader error in the fme_rejection_code attribute on the <Rejected> port of the FeatureReader. Currently it usually returns the "A fatal error occurred. Please check the log file above." message, which makes no sense when it is passed on to the end user in an FME Server setup: they can't access the log file, and the "above" implies the message targets FME Desktop users only. ;)


I agree completely with this idea and @sander above. I'm setting up workbenches that request data from a number of web services that aren't always reliable. It would be great if the readers had some built-in resilience for rerunning failed requests: for instance, when looping over an Esri feature service, if one request fails you lose all the previous effort. It would be great if readers had 'number of retries' and 'retry delay' options.
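Until something like that is built into the readers, the retry behaviour described above can be approximated in a PythonCaller or startup script. This is only a sketch under assumptions: `read_with_retries` and `fetch` are hypothetical names, not FME API calls, and the retry counts/delays mirror the 'number of retries' and 'retry delay' options wished for above.

```python
import time


def read_with_retries(fetch, max_retries=3, retry_delay=1.0):
    """Call fetch() and return its result, retrying on failure.

    fetch       -- a zero-argument callable that performs one read attempt
                   (e.g. one HTTP request to a feature service) and raises
                   on a hung or incomplete read.
    max_retries -- how many extra attempts to make after the first failure.
    retry_delay -- seconds to wait between attempts.
    """
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            return fetch()
        except Exception as exc:  # in practice, catch the specific read error
            last_error = exc
            if attempt < max_retries:
                time.sleep(retry_delay)
    # All attempts failed: re-raise the last error so the caller can
    # reject the feature instead of silently losing the dataset.
    raise last_error
```

The key design point is that only the single failing request is retried, so a transient failure partway through a loop over an Esri feature service no longer throws away the pages already fetched.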

