
We are seeing about a week and a half of .mdmp files in the Server\repositories\ directory on 2017.1, and they are consuming drive space. Each file is about 2GB. Does anyone know where these come from, what they are used for, and how they can be purged after a set time?
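
For the purge part of the question, one stopgap while the root cause is investigated is a scheduled cleanup job. Below is a minimal Python sketch that removes .mdmp files older than a chosen retention period; the repositories path and the ten-day retention are placeholders/assumptions to adapt to your install, and note that deleting the dumps only clears the disk, it does not fix whatever is crashing.

# Minimal sketch (not an official FME Server feature): delete .mdmp dumps
# older than MAX_AGE_DAYS from the repositories directory.
import os
import time

REPO_DIR = r"D:\FMEServer\repositories"   # placeholder path, point at your Server\repositories\ folder
MAX_AGE_DAYS = 10                         # assumed retention period, adjust as needed
cutoff = time.time() - MAX_AGE_DAYS * 24 * 60 * 60

for root, dirs, files in os.walk(REPO_DIR):
    for name in files:
        if not name.lower().endswith(".mdmp"):
            continue
        path = os.path.join(root, name)
        # Remove the dump only if its last-modified time is older than the cutoff.
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
            print("Removed old dump: " + path)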

Hi @ea,

Could you share a screenshot of the files with us?

Thanks,

Danilo


That is not a good sign. Those are memory dumps, usually the result of one or more serious errors. Better contact support@safe.com for this!


Rather not! The files are around 2GB 😉

Yes.... 🙂 I agree with you


I will submit a ticket with them. We did a new install on the 16th of August and could not figure out late last week why our drive space was being eaten up. We had aggressive log scrubbing going, etc.


Put in a ticket. Here is the header from one of the .log files associated with the MDMP:

#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000000005b4f82fb, pid=12332, tid=0x00000000000042d4
#
# JRE version: (8.0_131-b11) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# C 0x000000005b4f82fb
#
# Core dump written. Default location: xxxxxxxxxxHiddenxxxxxxxxxxx\hs_err_pid12332.mdmp
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#


We'll see what the support experts say in the case, but I did a quick look and found that previous reports of memory dumps like this were usually related to a specific transformer. For example, one user experienced dumps when a giant file upload caused the S3Uploader to fail.

So... can you identify a specific transformer that is common to all the failures? Obviously we'd like it not to crash anyway, but for now if we can identify the transformer then we might be able to suggest an alternative method or workaround.



Checking that now. We have about 14 jobs that run during the time in question. A few of those had not been converted to 2017 yet. Going to update the workbenches now.


@mmccart please post an update


After further testing, we determined that the Shutdown Python Scripts we built to rebuild indexes and re-analyze feature classes in our 10.4.1 SDE Oracle Spatial database were causing the issue. What's even more odd is that if a feature class had no specific field identified to have a unique index built, the script ran fine. If the feature class did have a specific field identified for an index, the script failed, and only on FME Server: it ran successfully in FME Workbench 2017.1 but not on FME Server 2017.1. We did set a custom Python interpreter on FME Server using the command line, as instructed in other posts, and confirmed in the job log that FME Server was using the correct Python 2.7 interpreter. Finally, we have other Shutdown Python scripts that perform other functions on AGOL and in SDE databases and run without issue.

Solution: We removed the shutdown Python scripts from all jobs that had them, and we will instead handle rebuilding indexes and re-analyzing feature classes within Oracle itself or with a separate Python script. Attached is an example of the script we were using: rebuild-indexes-example-python-script.txt
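
For reference, a minimal sketch of what such a rebuild/re-analyze shutdown script typically looks like is below. This is not the attached script; the SDE connection file path and dataset names are hypothetical placeholders, and it assumes the arcpy site package from ArcGIS 10.4.1 is available to the Python 2.7 interpreter running the job.

# Illustrative sketch only (not the attached script): rebuild indexes and
# update statistics on feature classes in an SDE Oracle geodatabase via arcpy.
import arcpy

SDE_CONNECTION = r"C:\connections\oracle_sde.sde"   # hypothetical connection file
DATASETS = ["OWNER.PARCELS", "OWNER.ROADS"]         # hypothetical feature classes

# Rebuild indexes on the listed datasets (base tables, not just the deltas).
arcpy.RebuildIndexes_management(SDE_CONNECTION, "NO_SYSTEM", DATASETS, "ALL")

# Re-analyze (update statistics on) the base, delta, and archive tables.
arcpy.AnalyzeDatasets_management(SDE_CONNECTION, "NO_SYSTEM", DATASETS,
                                 "ANALYZE_BASE", "ANALYZE_DELTA", "ANALYZE_ARCHIVE")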

