It's hard to tell exactly what's going on. From what I can see, most of those warnings can be ignored. The job succeeds for the first two transactions but fails on the third (the transaction size is 1000 features). My best guess is a bad data value somewhere, although I would expect more information in the log.
You could try reducing the features per transaction. If you drop it to 100, for example, and the problem is data related, you should still see somewhere between 2000 and 3000 features written before the failure. You can then take a look at the batch that failed and see whether anything in it looks strange.
If it's not data related, it might instead error again after the second transaction (around 200-300 features), which would suggest something else is going on.
When testing, I recommend using the partial runs feature to run the whole process up to the last AttributeManager. That way you only need to tweak your writer and then have another go.
Thanks for replying. I tried reducing the features per transaction, first from 1000 to 500, which failed, then again from 500 to 100, with the same result. I looked at the 100 features in the last failed run and couldn’t see anything that might be causing the failure. My co-worker did mention that he was told by one of our vendors that AGOL and FME can have issues when the reader and the writer are the same layer. I don’t know what the workaround for that would be though...
Did it fail at the same point (~2000-3000 features)? Or did the number of "Successfully written features" drop below 2000?
In that case, you could try putting a FeatureHolder before the writer. It will wait for all of the features to arrive before writing begins, so the read from the layer finishes before the write to it starts.
If that still doesn't work, you can also run a test where you execute everything up to the FeatureHolder (using the partial runs function) so that all the features are cached locally, and then run just the writer to AGOL. This should rule out any interference from reading and writing the same layer at the same time (it's just a test though, not a solution).
Yes, it was failing at the same point. I'll definitely look into the FeatureHolder. I'm also looking at another workaround: I have a copy of this same layer in a file geodatabase in ArcGIS Pro, with all of the attribute fields that need updating already updated. If the version of ArcGIS Pro I'm currently using didn't have a bug that prevents me from running a field calculation on the 12 attribute fields (because of the size of the layer), I wouldn't be looking for a workaround at all. I could overwrite my AGOL layer with the FGDB layer, but that would break my AGOL dashboard, since I have so many Arcade expressions and other things built throughout it. That's why I really want to find a solution in FME that lets me update just those fields.
Oh, same error?
"ArcGIS Online Feature Service Writer: 3000 features successfully written to 'HII_2020_2022', but the server rejected the xxxx features in the last request due to errors. See warnings above. Ending translation"
Where xxxx is the transaction size? That still points me towards a data issue of some kind. What happens if you reduce the transaction size to 1? It will certainly slow things down, but it might help you identify a specific feature, or the number of features that makes the commits break.
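If eyeballing the batch doesn't turn anything up, a short script can flag values that commonly trip up AGOL commits, such as control characters or strings longer than the field allows. A minimal sketch only, assuming the failing batch has been exported to GeoJSON (the file name and the 255-character threshold are placeholders):

```python
import json

# Scan a GeoJSON export of the failing batch for suspect attribute values.
# File name and length threshold are hypothetical; adjust to your data.
with open("failing_batch.geojson", encoding="utf-8") as f:
    data = json.load(f)

for i, feature in enumerate(data["features"]):
    for key, value in (feature.get("properties") or {}).items():
        if not isinstance(value, str):
            continue
        # Control characters (other than tab/newline) often break web service commits.
        if any(ord(c) < 32 and c not in "\t\n\r" for c in value):
            print(f"feature {i}, field {key}: contains control characters")
        elif len(value) > 255:
            print(f"feature {i}, field {key}: {len(value)} chars, may exceed field length")
```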
If I remember correctly, it failed at 3400 with the transaction size set to 100. I looked at the features but didn't see anything that stood out as a reason for it to fail at that point. I'm going to try running this again in the morning with the FeatureHolder added and see whether I get the same results.
I once had to HTML-encode the values of some fields with the TextEncoder transformer before updating a feature class in ArcGIS Online. They contained some characters that ArcGIS Online wouldn't accept.
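For reference, the same encoding could also be done in a PythonCaller. This is just a sketch of that idea, not the TextEncoder itself, and the attribute names are placeholders for whichever text fields are suspect:

```python
import fme
import fmeobjects
import html

SUSPECT_FIELDS = ["NAME", "COMMENTS"]  # placeholder attribute names

class FeatureProcessor(object):
    def input(self, feature):
        # HTML-encode string attributes so characters like <, >, & and quotes
        # can't be rejected by the ArcGIS Online endpoint.
        for attr in SUSPECT_FIELDS:
            value = feature.getAttribute(attr)
            if isinstance(value, str):
                feature.setAttribute(attr, html.escape(value))
        self.pyoutput(feature)
```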
Hi! I'm curious, I have the same issue; it fails at about 2000 features. Did you ever find a solution?
Hi Jonathan Slope!
No, I never did find a solution. I tried many different things, and at one point got it as far as 4500 features before it failed again. In the end I did my updates through ArcGIS Pro, which took a really long time because I could only update 1,000 features at a time across the 12 attribute fields, on a layer with almost 20,000 features. I was really hoping FME would save me all of that time, but since I'm still relatively new to FME, I couldn't find a working solution.
@Jonathan Slope I'm not sure this is the best solution, but have you considered separating the process into two stages (which could be controlled by a WorkspaceRunner)? It's probably not ideal, but I was trying to think of potential workarounds.
The idea is to have your current workspace process the data and write it out to multiple files in a temporary location, and then have a second workspace read one file at a time and run the AGOL writer. That way the writer is disconnected from the reader and handles less data per run. And if one of those smaller chunks fails to write, you could let the rest keep processing and then only have a small dataset to troubleshoot. A rough sketch of driving the second stage is below.
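For example, the second stage could be kicked off from outside FME by calling fme once per chunk file. This is a sketch under assumptions: the chunk folder, the workspace name write_chunk.fmw, and the published parameter SourceDataset_GEOJSON are all hypothetical, and it assumes the fme executable is on your PATH:

```python
import glob
import subprocess

# Run the writer workspace once per temporary chunk file.
# Paths, workspace name, and parameter name are hypothetical.
failed = []
for chunk in sorted(glob.glob(r"C:\temp\chunks\*.json")):
    result = subprocess.run(
        ["fme", r"C:\workspaces\write_chunk.fmw",
         "--SourceDataset_GEOJSON", chunk],
    )
    if result.returncode != 0:
        # Keep going so one bad chunk doesn't stop the rest.
        failed.append(chunk)

if failed:
    print("Chunks to troubleshoot:", failed)
```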
Well, it looks like I've solved it. Thanks @liamfez! I split the process in two and use a GeoJSON as a temporary file. I run the two workspaces from a batch file, one after the other, and that closes the loop. Thanks for your help!