
We are working with a simple workflow: reading in data with a SQL Spatial Table reader, setting null attributes to missing, and writing out to fGDB/eGDB/spatial table. Our end goal is to remove the fields that are entirely empty, though there are occasions when these fields may have values.

 

When writing out the features after mapping all nulls to missing, the fields that are entirely missing are still written to the fGDB/eGDB/spatial table (they appear as null in the feature class). We are unsure whether there is a way around this or whether it's a limitation of the GDB format.

 

My only idea for bypassing this is writing out to another format that drops the field (like JSON) and then reading it back in using a FeatureWriter and FeatureReader. This is a poor solution, so any other ideas are appreciated.

 

@wgraham​ Looks similar to the discussion here. The Geodb feature class is created before the data is written, so if the attribute is on the schema it'll get created in the file geodb if you're using the 'Automatic' attribute definition. At that point, the FME Geodb writer doesn't know that an entire column of data is going to have a missing value. So you can use something like a StatisticsCalculator (Total Count) to determine whether all the values in a column are <missing>. Then you'll have to adjust the schema, which would entail using Dynamic Workflows to create the revised output schema.
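To make the counting step concrete, here is a minimal PythonCaller sketch of the same idea: buffer the features, count non-missing values per field (the equivalent of the StatisticsCalculator's Total Count), and tag each feature with the names of the fields that turned out to be empty everywhere. The CANDIDATE_FIELDS list and the _empty_fields attribute name are hypothetical; in a real workspace the field list would come from the source schema.

```python
import fme
import fmeobjects

# Hypothetical field list; in a real workspace this would come from the
# source schema (e.g. a FeatureReader's <Schema> feature).
CANDIDATE_FIELDS = ["FIELD_A", "FIELD_B", "FIELD_C"]

class FeatureProcessor(object):
    def __init__(self):
        self.buffer = []
        self.value_count = {name: 0 for name in CANDIDATE_FIELDS}

    def input(self, feature):
        self.buffer.append(feature)
        for name in CANDIDATE_FIELDS:
            # Nulls were already mapped to missing upstream, so a missing
            # attribute simply does not exist and getAttribute() returns None.
            if feature.getAttribute(name) is not None:
                self.value_count[name] += 1

    def close(self):
        # A field whose non-missing count is zero is empty across the
        # whole dataset (the equivalent of Total Count = 0).
        empty = [n for n, c in self.value_count.items() if c == 0]
        for feature in self.buffer:
            # Hypothetical attribute a downstream schema step can read.
            feature.setAttribute("_empty_fields", ",".join(empty))
            self.pyoutput(feature)
```

Note that stripping the attribute from the features alone isn't enough; the dynamic schema that drives the writer still has to be revised, otherwise the column gets created anyway.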



Hi Mark, thanks for the reply. I wanted to see if I could get more detail on the intermediary steps between the StatisticsCalculator and the dynamic writer. There may be times when the source dataset changes, so it would be ideal to come up with an efficient solution for these instances.


I've attached an example workspace with our current solution for reference.
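For reference, the "adjust the schema" step could look something like this PythonCaller sketch, which rewrites a dynamic schema feature's attribute{} list (the attribute{i}.name / attribute{i}.fme_data_type convention used by dynamic workflows) so the empty fields never reach the writer. The EMPTY_FIELDS set is hypothetical and would be populated from the counting step above, e.g. by parsing the _empty_fields attribute.

```python
import fme
import fmeobjects

# Hypothetical set of all-missing fields, e.g. parsed from the
# "_empty_fields" attribute produced by the counting step.
EMPTY_FIELDS = {"FIELD_B"}

class SchemaFilter(object):
    def input(self, feature):
        kept = []
        i = 0
        # Walk the schema feature's attribute{} list and keep only the
        # fields that are not empty everywhere.
        while feature.getAttribute("attribute{%d}.name" % i) is not None:
            name = feature.getAttribute("attribute{%d}.name" % i)
            dtype = feature.getAttribute("attribute{%d}.fme_data_type" % i)
            if name not in EMPTY_FIELDS:
                kept.append((name, dtype))
            # Clear the old entry; the list is rebuilt below.
            feature.removeAttribute("attribute{%d}.name" % i)
            feature.removeAttribute("attribute{%d}.fme_data_type" % i)
            i += 1
        for j, (name, dtype) in enumerate(kept):
            feature.setAttribute("attribute{%d}.name" % j, name)
            feature.setAttribute("attribute{%d}.fme_data_type" % j, dtype)
        self.pyoutput(feature)

    def close(self):
        pass
```

The revised schema feature then feeds the writer's dynamic schema definition, so only the fields that actually hold values get created in the output feature class.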

