
I’ve alluded to my question on a few other posts about FeatureWriters:

When FeatureWriter fails there is no output | Community

Add a Rejected port to FeatureWriter | Community

But I figured I’d ask a more directed question in its own post.

I’m in a situation where I want to do some post-processing after writing data to each of my 40 feature types. I have a FeatureWriter for each feature type, the idea being to skip post-processing for feature types that fail to write but continue with post-processing for feature types that write successfully. I’ve come to realize that any FeatureWriter that fails stops the entire workspace, and setting “Rejected Feature Handling” to “Continue Translation” doesn’t apply to a failed FeatureWriter. The “Rejected port” idea linked above would solve my predicament, at least as I understand it, but it isn’t implemented as of FME 2024.1.1, and I don’t know if there are plans to do so.

I’ve also tested setting “Ignore Failed Feature Types” to Yes on each of my FeatureWriters (Geodatabase SDE), but that ends up including all features, failed or successful, in the summary and the feature type output port, which misrepresents the number of features actually written whenever any of them fail.

I suppose I could use a WorkspaceRunner / FMEFlowJobSubmitter to process each feature type separately, so that each child run succeeds or fails on its own without failing the parent workspace, but that seems like a lot of machinery for 40 feature types.

Ultimately I’m looking to continue the translation for all feature types that write successfully, even if one or more fail, while keeping an accurate record of which features were actually written so it can be used in post-write processing.

Thoughts are greatly appreciated.

I don’t know of a good way to work around this. I think the WorkspaceRunner alternative is worth trying. I would configure the child workspace so it can handle the data dynamically, rather than building a separate child workspace for each feature class. Alternatively, use a single child workspace containing all readers and writers, but activate only one reader/writer path per run, controlled by a TestFilter reacting to a parameter value.
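For what it’s worth, the parent side of that pattern boils down to a loop like the one below, whether it runs as a WorkspaceRunner or as a script calling the FME command line. This is only a rough sketch; the fme.exe path, child workspace, FEATURE_TYPE published parameter, and feature type list are placeholders, not anything from your workspace.

```python
import subprocess

FME_EXE = r"C:\Program Files\FME\fme.exe"          # adjust to the local install
CHILD_WORKSPACE = r"C:\fme\write_one_type.fmw"     # hypothetical child workspace
FEATURE_TYPES = ["Roads", "Parcels", "Hydrants"]   # ...expand to all 40

results = {}
for ft in FEATURE_TYPES:
    # One child run per feature type; FEATURE_TYPE is a hypothetical published
    # parameter that the child's TestFilter (or dynamic reader/writer) reacts to.
    proc = subprocess.run([FME_EXE, CHILD_WORKSPACE, "--FEATURE_TYPE", ft])
    results[ft] = (proc.returncode == 0)

succeeded = [ft for ft, ok in results.items() if ok]
failed = [ft for ft, ok in results.items() if not ok]
print("Post-process:", succeeded)
print("Skip post-processing:", failed)
```

A failed child run only affects that one feature type, so the parent can carry on with post-processing for everything that succeeded.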


What is the reason the features are failing to write?

 



Thanks, I’ll explore the dynamic child workspace approach. There’s a lot going on in each feature class workflow, with FeatureReaders and other transformers specific to each, so I hadn’t built my workspace for dynamic inputs and outputs. That said, it’s probably better to do so anyway and try a common child workspace that can run against all feature classes.



 

As for why the features fail to write: I don’t have a specific reason, since the data comes from a third party. I know it’s best practice to test for data issues that might cause errors before writing, and I do some of that, but I’m trying to avoid throwing away an entire run of the workspace because one feature in one feature class fails to write.
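For reference, the kind of pre-write check I mean looks roughly like this in a PythonCaller (just a sketch; the attribute name and the null-geometry test are illustrative, and a Tester downstream would route flagged features away from the writer):

```python
import fmeobjects

class FeatureProcessor(object):
    """Sketch of a pre-write check: flag features with missing geometry."""

    def input(self, feature):
        geom = feature.getGeometry()
        # Flag features with no usable geometry; a Tester downstream can
        # route flagged features away from the FeatureWriter.
        if geom is None or isinstance(geom, fmeobjects.FMENull):
            feature.setAttribute("_validation_error", "missing geometry")
        self.pyoutput(feature)

    def close(self):
        pass
```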

I’ll work on the dynamic reading/writing and using WorkspaceRunners to see if it helps (and works!).
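For keeping an accurate count of what actually got written in each child run, I’m thinking of something like this in the child workspace’s shutdown Python script (another sketch; I believe fme.status and fme.featuresWritten are available there, and the output path is just a placeholder that would be parameterized per run):

```python
# Child workspace shutdown script (sketch): write a small summary the parent
# run can read to decide which feature types get post-processing.
import fme    # module provided by FME at runtime
import json

summary = {
    "status": fme.status,                           # True if the translation succeeded
    "features_written": dict(fme.featuresWritten),  # feature type -> count written
}

# Placeholder path; in practice this would be parameterized per run.
with open(r"C:\temp\last_run_summary.json", "w") as f:
    json.dump(summary, f, indent=2)
```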

