Hi! I have a workspace that converts LAS into Recap, using a pointcloudlasclassifier before writing out to Recap. This workspace runs 100% flawlessly on desktop. When published to FME Server, however, the workspace always fails on the first run after logging into a new session in the browser. When running the workspace a second time (and on following runs) without editing any parameter, it runs successfully. Any ideas what could cause this issue? It seems that it is the rcs files (which are written to a subfolder) that are causing the workspace to fail.
Hi @atle_hoidalen
Are you able to upload a job log from a successful and failed run? That way I can have a look to see if there's any clues into what's going on.
Hi, thanks. I've already looked into the job logs and they do not contain any information about why the files are not written; in fact, they say specifically that the files were written without errors. I have also discovered that the issue is not specifically connected to the Recap writer, but also affects other writers in the workspace. (I'm running a generic reader and routing features to different writers with a Tester.)
I will do some more testing and upload the logs even if they do not seem to contain the answer to my issue. :)
Hello again. I'm uploading the job log file for one of the jobs that will not write to a folder under Resources on the Server. Is it possible that writing the resulting files fails the first time and succeeds every following time in each session because FME Server has to "wake up" the network drive before it can write to it? job_504.txt
Hi,
Is job-504 a successful log file? It seems like data was written with that one. I'm wondering if you can upload one where it fails, so I can see the errors you mentioned about writing out the rcs files?
Also would you be able to provide an fmeprocessmonitorengine log file that covers the time where you're finding the workspace fail and then succeed? You can find it here:
Resources > Logs > engine > current > fmeprocessmonitorengine.log
I wouldn't have thought that it needed to 'wake up' a network drive. Is the network drive configured through the FME Server resources page as an accessible location, or is the path written inside the parameters?
Do you have any distributed engines?
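One quick way to test the "wake up" theory would be to probe the output folder with a small scripted write before the real writers run (for example, as a startup Python script in the workspace). This is only a minimal sketch under that assumption; the UNC path in the usage comment is hypothetical and would need to be replaced with your actual output location:

```python
import os
import tempfile
import time

def probe_write(target_dir, retries=2, delay=2.0):
    """Try to create and delete a small temp file in target_dir.

    Returns the 1-based attempt number that succeeded, or re-raises
    the last OSError if every attempt fails. If the drive really does
    need "waking up", the first attempt should fail and a retry succeed.
    """
    last_err = None
    for attempt in range(1, retries + 2):
        try:
            # mkstemp both creates and opens the file, so this exercises
            # the same create-file path the writers use.
            fd, path = tempfile.mkstemp(dir=target_dir, suffix=".probe")
            os.close(fd)
            os.remove(path)
            return attempt
        except OSError as err:
            last_err = err
            time.sleep(delay)
    raise last_err

# Hypothetical example: probe a network output folder before the writer runs.
# probe_write(r"\\fileserver\fme\output")
```

If the probe consistently succeeds on attempt 2 but not attempt 1, that would point at the drive (or its credentials) rather than the workspace itself.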
Hi! Thank you for the reply. Job-504 is successful according to the log, but no data was actually written to the folder. There are no distributed engines in play. Since my initial post I've changed my workspaces to use only the Data Download service, and for some strange reason the scripts now seem to run flawlessly every time. I will get back to you if the problem emerges again.
I'm glad that's working for you with Data Download at least! Because that service writes to a different location, I'm guessing there's something funny going on with the location specified when using the Job Submitter service.