Hi,
My workspace reads data from Excel and uploads it to a GIS database. It uses data streaming so that an HTML response can be returned to the user.
My implementation works when run locally from the FME desktop application, but when it's uploaded to the server, a duplicate of the same job starts after 5 minutes. Both jobs then run from start to finish, so duplicate records are created.
This only seems to happen when the Excel file has at least 330 rows. I have also verified that the engine does not crash.