One of the biggest issues we have with reading data from various web sources (several XML feeds we consume are especially prone to this) is that a request will sometimes hang or return an incomplete data set. To mitigate this, we daisy-chain workflows together so that each one triggers the next; that way a single bad dataset doesn't bring down the whole job and the other datasets with it. Ideally the server would be able to handle invalid datasets itself, possibly by auto-completing/closing the read (even if not all of the data could be read in), or better still by attempting a second read of the dataset, and then proceeding on to the other datasets.
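As a rough illustration of the behavior described above (a read timeout, one retry of the same dataset, and skipping a feed that still fails so the remaining datasets are processed), here is a minimal Python sketch. The function name, the `opener` parameter, and the example URLs are all hypothetical, not part of any existing product or API:

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def read_feed(url, timeout=30, attempts=2, opener=urlopen):
    """Read one XML feed, giving up after `timeout` seconds so a hung
    connection cannot stall the job, and retrying once (a "second read
    at the dataset"). Returns the parsed tree, or None if every attempt
    failed (hang, network error, or truncated/invalid XML)."""
    for _ in range(attempts):
        try:
            with opener(url, timeout=timeout) as resp:
                raw = resp.read()
            # ParseError here catches the incomplete-dataset case:
            # truncated XML is not well-formed.
            return ET.fromstring(raw)
        except (OSError, ET.ParseError):
            continue  # retry, or fall through to None on the last attempt
    return None

# Hypothetical usage: a failed feed yields None instead of
# aborting, so the other datasets still get processed.
feeds = ["https://example.com/a.xml", "https://example.com/b.xml"]
results = {url: read_feed(url) for url in feeds}
```

Catching `OSError` covers both `urllib.error.URLError` and the built-in `TimeoutError`, since both are subclasses of it.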



