Here's some Python code to split an FME binary geometry (e.g. from the GeometryExtractor) into chunks of a pre-defined size (1600 bytes in this small example; in practice you probably want 16777216, or one less):
def chunked(size, source):
    # Yield successive slices of 'source', each at most 'size' bytes long.
    for i in range(0, len(source), size):
        yield source[i:i + size]

def FMEBufferSplitter(feature):
    chunk_size = 1600  # change as needed
    geom = feature.getAttribute('_geometry')
    parts = chunked(chunk_size, geom)
    # Store the chunks as an FME list attribute.
    feature.setAttribute('_geometry_parts{}', list(parts))
And conversely, here's how to join them back together:
def FMEBufferJoiner(feature):
    parts = feature.getAttribute('_geometry_parts{}') or []
    # Concatenate the chunks back into a single binary value.
    joined = b''.join(parts)
    feature.setAttribute('_geometry', joined)
I've also attached a demo workspace (2022.2) with sample data from the FME training dataset.
If possible, though, I would rather write the geometries to something like PostGIS and use a GUID in Snowflake as a foreign key to the table holding the actual geometry.
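Something like this untested sketch, run in a PythonCaller: it assumes a hypothetical PostGIS table named geometries (guid UUID PRIMARY KEY, geom geometry), that the geometry was extracted as OGC Well Known Binary (one of the encodings GeometryExtractor offers), and that psycopg2 is available; the table and attribute names, connection details and SRID 4326 are all placeholders you'd adapt:

import uuid
import psycopg2

def store_geometry_in_postgis(feature):
    guid = str(uuid.uuid4())
    wkb = feature.getAttribute('_geometry')  # OGC WKB bytes from GeometryExtractor

    # Connection parameters are placeholders -- adjust to your environment.
    conn = psycopg2.connect(host='localhost', dbname='gis', user='fme', password='secret')
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO geometries (guid, geom) VALUES (%s, ST_GeomFromWKB(%s, 4326))",
            (guid, psycopg2.Binary(bytes(wkb))),
        )
    conn.close()

    # Carry only the GUID forward; the Snowflake writer stores it as the
    # foreign key pointing at the PostGIS row holding the actual geometry.
    feature.setAttribute('geometry_guid', guid)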
This worked perfectly, thanks!