So without having access to logs this would be hard to tackle.
Since the workspace runs fine locally, I'm assuming the data in the database is correct, so it's likely to be an issue with the Cloud environment. The HTML error did seem to indicate something was wrong with Tomcat.
A few things to check:
- What version of FME are you running on FME Cloud?
- What kind of instance is it?
- Check its monitoring page in the FME Cloud portal. What does its memory usage look like? Free disk space? Server load?
- Are there any job logs (for jobs where you output this data)? Any odd messages in there?
- Do other jobs run correctly on that instance? If you have the standard Samples repository, does the AustinDownload workspace run?
This reply and attachments were sent to: community@safe.com
#
Thanks for the reply!
We could not see anything special in the various screens in the dashboard, so we added them as an attachment.
Maybe you notice something.
By the way, the export also creates 450 MicroStation V8 DGN files in which the BGT_WGL objects do exist.
So the PostgreSQL database seems OK.
And how do we get access to the logs to gain better insight into the FME export process?
It appears only one of the attachments made it (maybe that's due to the fact that you emailed your response; @mark2atsafe, can you check whether that option can handle multiple attachments?).
The logs should be accessible through Files & Connections -> Resources, in the Logs folder. Mind you, not every user/group has access to that by default; usually you have to be an admin.
So other than that, a few things to check:
- Job logs are removed after a certain amount of time; check if you can access one from a job with errors. If not, go into the System Cleanup settings and extend that time period.
- Any other information you can share? FME version, type of instance etc?
- The parts where the errors occur, are they geographically close to each other? Is it always the same ones that cause errors?
Hereby the 5 screenshots in 1 zip-file. 🙂
- FME version -> FME 2019.1 (19630)
- instance -> Premium m5.4xlarge, 16 CPUs, 64 GB RAM
- the errors occur all over the spatial dataset (City of Amsterdam). The dataset is BGT (large-scale topography); the affected objects are road parts ("wegdelen" -> BGT_WGL, example: BGT_WGL_parkeervlak.shp/csv)
- it is always the same ones that cause errors.
Okay. It looks like you're running this on a 2-week schedule. I can't see anything odd about the dashboard screenshots either to be honest, except that the primary disk seems to be a bit on the small side (but it's not actually full) and that the instance may actually be a bit overpowered for what you're doing.
Now, since this is on a 2-week schedule that may be an issue when it comes to troubleshooting: job logs are removed after a week by default, so unless that has been changed (System Configuration -> System Cleanup) you're unlikely to see any logs after a scheduled shutdown.
If it's always the same features, can you check them in your source database? See if there's anything odd about them? Things like a multipart feature with a lot of parts (or a donut with a lot of holes) or a single feature with a lot of vertices.
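As a rough sketch of that complexity check (assuming the suspect features can be dumped as GeoJSON; the function names and thresholds here are illustrative, not anything FME provides):

```python
def count_vertices(coordinates):
    """Recursively count coordinate positions in a GeoJSON coordinates array."""
    # A position is a flat list of numbers; anything else nests one level deeper.
    if coordinates and isinstance(coordinates[0], (int, float)):
        return 1
    return sum(count_vertices(part) for part in coordinates)

def flag_complex(feature, max_vertices=10000, max_parts=100):
    """Return True if a GeoJSON feature looks suspiciously heavy
    (many vertices, or a multipart geometry with many parts)."""
    geom = feature["geometry"]
    n = count_vertices(geom["coordinates"])
    parts = len(geom["coordinates"]) if geom["type"].startswith("Multi") else 1
    return n > max_vertices or parts > max_parts

# Example: a small polygon with one hole -- 8 vertices, not flagged.
square = {
    "geometry": {
        "type": "Polygon",
        "coordinates": [
            [[0, 0], [0, 10], [10, 10], [10, 0]],
            [[2, 2], [2, 4], [4, 4], [4, 2]],
        ],
    }
}
print(count_vertices(square["geometry"]["coordinates"]))  # 8
print(flag_complex(square))  # False
```

Running this over the BGT_WGL features that consistently fail would quickly show whether geometry size is the common factor.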
On a side note, I would recommend upgrading your FME Server to FME 2022; support for 2019 is already reduced, and next year will be the last year it's officially supported by Safe.
Hm, the log file (the part we need) was hard to find. And when we did find it, it turned out not to exist...:
Error: The log file for job ID '41201' does not exist.
Forgot the screenshot 😁 ...
We also tried the workspace viewer, but it was too slow (perhaps because of our Wi-Fi?).
Have you looked at the contents of BGT_WGL_parkeervlak.csv inside BGT_shp_csv_Wrong.zip (see attachment)? It's not actually a CSV file but an HTML file, and it looks like an error log.
After renaming the file to .html (which I cannot upload here), part of it reads:
Message java.io.FileNotFoundException: Could not create namespace, unable to create directories for /data/fmeserver/resources/system/temp/tomcat/tempstore/fmerest/CMP
So there appears to be a problem with creating a namespace and directories. Which may explain why the workspace runs locally, but not on the server.
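Since a failed write here leaves an HTML error page behind under a .csv name, a small scan like the following can flag such fake CSVs in bulk (a sketch; the marker strings are just what's visible in this thread, and the folder name is a placeholder):

```python
from pathlib import Path

def looks_like_html_error(path):
    """Return True if a supposed CSV file actually starts like an HTML page."""
    head = Path(path).read_bytes()[:512].lstrip().lower()
    return head.startswith((b"<!doctype html", b"<html")) or b"<body" in head

def scan(folder):
    """List the .csv files in `folder` that are really HTML error pages."""
    return [p.name for p in Path(folder).glob("*.csv") if looks_like_html_error(p)]
```

Running `scan("export/")` after each scheduled job would have caught this long before customers reported unusable files.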
Yes, we noticed; we posted this file (and a screenshot). But how can we change this? Why does this happen? Our scripts have not been changed in the last few years.
Ah yes, you wrote "Attached are some examples of a shape and csv file which look like an error file in html format.". Sorry, missed that one.
Why does this happen? Probably something happened to the location where the workspace tries to write the BGT_WGL_* data.
How can we change this? Look at the location where FME Server tries to write the BGT_WGL_* data. Is it different from the location where the other data is written? Does the folder exist? Does the user have write permissions there? Is there already data present that cannot be overwritten?
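Those existence and permission questions can be checked programmatically; a minimal sketch (the directory to probe is whatever output path the workspace uses, passed in by the caller):

```python
import os
import tempfile

def check_writable(directory):
    """Report whether a directory exists and a file can actually be created in it."""
    if not os.path.isdir(directory):
        return "missing"
    try:
        # os.access can be misleading on some filesystems, so attempt a real write.
        fd, probe = tempfile.mkstemp(dir=directory)
        os.close(fd)
        os.remove(probe)
        return "writable"
    except OSError:
        return "not writable"
```

A probe like this, run as the same user account FME Server runs under, distinguishes a missing folder from a permissions problem.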
What process creates this folder/these folders?
Why did the process work before but not now? We did not change anything, especially not in the folder /data/fmeserver/resources/system/temp/tomcat/tempstore/fmerest/CMP
It is, or looks like, a temp folder. Could a temporary lack of disk space be causing the crash in the last part of the process, which handles the largest files?
FME uses a lot of temporary files before exporting the final CSV and Shape files.
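The disk-space theory can be tested by watching free space on the temp volume while the job runs; a sketch using only the standard library (the default path is taken from the error message above, so adjust it to your instance):

```python
import shutil

def free_gb(path="/data/fmeserver/resources/system/temp"):
    """Return free space in GiB on the filesystem that holds `path`."""
    usage = shutil.disk_usage(path)
    return usage.free / 2**30
```

Polling this before and during the job would show whether the temp volume fills up right before the BGT_WGL writers start failing.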
@mark2atsafe is it possible you look into our instance/workspace? We haven't really gotten any further so far.
Hi @datapunt ,
Sorry to hear you're experiencing this issue. To confirm, your "scripts not being changed in the last few years", are you referring to your workspaces? Were these workspaces authored in the same version as FME Cloud 2019? Are you aware of any changes around the time this issue arose?
It might be best to submit a case at this point for us to look at your logs and troubleshoot this issue.
Thanks,
Kezia
Yes, we did not change our FME workspace scripts, and they are still the same 2019 version. We noticed the problem quite late (😅) because we didn't see any error messages: files were exported, but these turned out to contain some sort of log information. Only after we received reports from customers that the files could not be used did we do some research.
Hello @datapunt ,
Thank you for your reply. Could you please submit a case with the associated logs and workspaces so we can troubleshoot this issue?