I need to make the error messages in the log files that FME Server produces (after running a workspace) more readable, helpful and shorter, and then return the result to an email sender.

As far as I see, there are the following options:

a) I could use the usual log file (the same one FME Workbench produces) and parse it within a shutdown script of the workspace, handing the result over to the emailer (or triggering a notification). I don't know the exact workflow on FME Server yet; a rough sketch of this idea follows after these options.

b) If I don't want to write Python or Tcl, I could probably use a job chain: run a second workspace that interprets the log file and sends out an email with the interpreted text. That should be usable directly by the Emailer (within the workspace), right?

c) I could go for an Automation on FME Server and use the log file options there (I can't find good hints on how to produce useful output that way).
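
As a rough illustration of option a), a shutdown script could look something like the sketch below. This is only a sketch, assuming the `fme` module that FME makes available to shutdown Python scripts (e.g. `fme.logFileName` and `fme.status`); the attribute names, the log-line format check and the hand-over to the emailer are assumptions to verify for your FME version.

```python
# Rough sketch of a shutdown Python script (option a).
# Assumes the `fme` module that FME provides to shutdown scripts
# (fme.logFileName, fme.status) - verify these against your FME version.
import fme

def summarize_log(log_path, max_lines=20):
    """Collect only the ERROR/FATAL lines from the translation log."""
    errors = []
    with open(log_path, "r", encoding="utf-8", errors="replace") as f:
        for line in f:
            # FME log lines are pipe-delimited; the severity column usually
            # contains INFORM, WARN, ERROR or FATAL - adjust if your log differs.
            if "|ERROR" in line or "|FATAL" in line:
                errors.append(line.strip())
    return errors[:max_lines]

error_lines = summarize_log(fme.logFileName)

if not fme.status or error_lines:
    summary = "\n".join(error_lines) or "Translation failed; no ERROR lines found in the log."
    # Hand the summary over to whatever sends the email, for example by writing
    # it to a file that a second workspace or a notification picks up.
    # The path below is just a placeholder.
    with open(fme.logFileName + ".summary.txt", "w", encoding="utf-8") as out:
        out.write(summary)
```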

Any hints on which option to go for? Are there any best-practice examples available?

Thanks for your help, Marion

Option B would probably be the best one to look into (but it depends on what you want to log). The log file has a predictable name and location on both Desktop and Server, so you can use a Text File reader to work through it.

You may be able to use Loggers in the "main" workspace to log specific messages, which you can then pick up in the processing workspace; that could save you a lot of work.
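
If the processing workspace ends up using a PythonCaller, or you just want to prototype the parsing outside FME, filtering for the Logger's message could be as simple as the sketch below (the tag and the log file name are placeholders to replace with your own):

```python
# Rough sketch: keep only the lines a Logger transformer wrote, assuming you
# gave the Logger a distinctive Log Message such as "MY_CHECK" (placeholder).
def logger_lines(log_path, tag="MY_CHECK"):
    with open(log_path, "r", encoding="utf-8", errors="replace") as f:
        return [line.strip() for line in f if tag in line]

# Example usage with a placeholder log file name:
for line in logger_lines("job_12345.log"):
    print(line)
```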


Thanks for your input. I am already producing "nice" output for the files that run through the workspace without crashing it.

Only the "bad" files that crash the process are the concern now. They usually stop the workspace very early because of unsuitable data, without even reaching the first transformer. Any hints for that?


When the process "crashes" on the bad files, is there not still a log produced? Just wondering if you can use the same process that @redgeographics describes for this case as well.
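
A log is normally still written when a translation fails, and shutdown scripts generally still run in that case too, so the failure could be summarised there as well. A minimal sketch, assuming the `fme` module also exposes `fme.status` and `fme.failureMessage` (names to verify for your FME version):

```python
# Rough sketch of a failure-aware shutdown script. Assumes the `fme` module
# exposes fme.status (success flag), fme.failureMessage and fme.logFileName;
# check these names against your FME version. If FME itself terminates
# abnormally, the shutdown script may not run at all.
import fme

if not fme.status:
    reason = getattr(fme, "failureMessage", "") or "Translation failed before any features were processed."
    # Pass this short reason on instead of the full log, e.g. via a file the
    # emailer or notification picks up (placeholder path).
    with open(fme.logFileName + ".failure.txt", "w", encoding="utf-8") as out:
        out.write(reason)
```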

