
When adding a new part (containing 5 FeatureMergers) to an existing workspace, which was already published to FME Server, FME Server runs into a failure without saying why. The workspace consists mainly of FeatureMergers and AttributeManagers.

 

When replicating this problem in FME Workbench 2021.2 with caching enabled, the whole workspace runs smoothly without any problems. If I run the workspace normally (without caching), FME Workbench stops working.

 

Hopefully someone knows how to deal with this problem. Maybe there is a workaround?

Just checking: are Desktop and Server the same version? Because if Desktop is newer than Server, this must be corrected first.

 

Then, do I understand correctly that Desktop does run when using FeatureCaching and crashes without FeatureCaching? If that is the case, I suspect the workbench is reading from and writing to the same file. This can work with FeatureCaching, which forces it to first read the file and only later write it. But when running without FeatureCaching, it is possible the workbench is still reading the file and locking it, causing the write to fail. You can influence the order of operations using FeatureReaders / FeatureWriters / Holders / Samplers, so it is solvable.
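To make the locking idea concrete, here is a minimal plain-Python sketch (outside FME, with a hypothetical `transform` function): read the whole file into memory first, the way FeatureCaching effectively does, so the read handle is closed before the same path is opened for writing.

```python
import os
import tempfile

def update_in_place(path, transform):
    # Read the whole file into memory first (what FeatureCaching
    # effectively forces): the read handle is closed before any write.
    with open(path) as f:
        rows = f.readlines()
    # Only now reopen the same path for writing, so the reader
    # no longer holds (and locks) the file when the writer needs it.
    with open(path, "w") as f:
        f.writelines(transform(row) for row in rows)

# Hypothetical example: uppercase every row of a scratch file.
path = os.path.join(tempfile.mkdtemp(), "data.csv")
with open(path, "w") as f:
    f.write("id,name\n1,alpha\n")
update_in_place(path, str.upper)
```

In a workspace the same ordering is achieved with a Sampler or a blocking transformer that holds all features until reading is finished.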

 

If this is not the case, please try and explain more 🙂


Are you able to switch out the FeatureMergers for FeatureJoiners to see if that helps?

 

Another thing you should try to do is pinpoint more specifically where it's crashing. You can do this by disabling parts of the workspace. If you can narrow it down to a single transformer causing the problem, that will help.
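Disabling parts of the workspace is essentially a binary search over the transformer chain. A rough sketch of the idea, where `runs_ok(prefix)` stands for "run the workspace with only these transformers enabled and see whether it finishes cleanly" (names here are made up for illustration):

```python
def find_failing_transformer(transformers, runs_ok):
    # Invariant: the first `lo` transformers run cleanly,
    # the first `hi` transformers crash.
    lo, hi = 0, len(transformers)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if runs_ok(transformers[:mid]):
            lo = mid  # crash happens after position mid
        else:
            hi = mid  # crash happens at or before position mid
    return transformers[hi - 1]

# Toy run where the third transformer is the culprit:
chain = ["Reader", "AttributeManager", "FeatureMerger_3", "Writer"]
culprit = find_failing_transformer(chain, lambda p: "FeatureMerger_3" not in p)
```

Each iteration halves the suspect region, so even a large workspace narrows down in a handful of runs.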

 

 


FME Server and FME Desktop are the same version. The type of transaction made in this workspace is joining 2 different datasets into 1 new dataset (not working within the same database). This is just table data, without any geometry attached to it. Normally the workspace runs smoothly within 2 minutes, but now it crashes randomly. Even with parts of the workspace disabled, the crash still happens.



 

 

We pinpointed it down to a FeatureMerger where the workspace collapsed. With a Decelerator set to 0.1 seconds before the FeatureMerger, the 4000 features went through without a problem. This seems like a bug in FME Workbench / FME Server.

 

With this "fix" in place, the problem now happens somewhere else in the workspace.


You should be able to set the Decelerator to 0 seconds per feature and get the same result. This then looks like an issue with batch features. You might have better luck using FeatureJoiners instead. But yes, batch features can behave a little unexpectedly sometimes.

The unexpected behaviour is reduced in FME 2022. The Decelerator splits up the batch features and reverts them back to old-style single features; however, as the data gets processed, the batch tables get rebuilt.
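Conceptually the Decelerator just inserts a pause between features, which as a side effect forces them through one at a time instead of as a batch. A rough stand-in sketch in plain Python (not FME's actual implementation):

```python
import time

def decelerate(features, delay=0.1):
    # Yield features one at a time with a pause in between -- a rough
    # stand-in for what a Decelerator does; as a side effect it also
    # turns a bulk/batch stream back into a single-feature stream.
    for feature in features:
        time.sleep(delay)
        yield feature

# Tiny delay so the demo stays fast; the feature order is preserved.
out = list(decelerate(["f1", "f2", "f3"], delay=0.001))
```

This also explains why a 0-second delay can work: the splitting of the batch, not the waiting, is what sidesteps the bug.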

My guess is there is a problem feature (or features) somewhere in the 4000 that causes the batch table to fail hard for some reason. Either way, it's definitely an FME bug.

Good luck with the workaround. Putting Decelerators throughout the workspace is not really a great solution...

 
