@adrian_farrell Sorry, Adrian, for the delayed reply here! We hit a little snag on the Iceberg front, but it's still on the roadmap for later this year.
We are back in active planning for the Reader and Writer, and I want to make sure we build what you actually need first. If you are evaluating Iceberg or already using it, I would love concrete details on your setup and the workflows you want FME to cover.
A few specific things that would help us prioritize (there is a short sketch after this list showing the level of detail that helps):
Read, write, or both? If both, which matters more?
Which catalog are you on (or planning to use): REST, AWS Glue, Hive Metastore, Nessie, Snowflake Open Catalog / Polaris, Unity, BigQuery, other?
Which engines also touch these tables: Spark, Trino, Flink, Snowflake, BigQuery, Dremio, DuckDB, Athena, other?
Where does the data live: S3, ADLS, GCS, MinIO, on-prem HDFS / POSIX?
Rough table scale: rows, partitions, typical file sizes, how often you write.
Write patterns you need: bulk load, append, full overwrite, row-level UPDATE / DELETE, MERGE / upsert, streaming?
Geospatial: are you storing geometry in Iceberg today? If so, as WKB in a binary column, via GeoParquet data files, or are you waiting on the native v3 geometry type?
Any must-have features beyond the above: time travel, branches / tags for Write-Audit-Publish, partition or schema evolution, specific transforms?
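To give a sense of the level of detail that helps, here is a rough sketch of a "read from Iceberg" setup in PyIceberg. Everything in it is hypothetical (the REST endpoint, warehouse path, table, and column names are all made up); it is simply the shape of answer that tells us the most:

```python
# Sketch only: the kind of "read from Iceberg" detail that helps us.
# The endpoint, warehouse, table, and columns below are all hypothetical.
from pyiceberg.catalog import load_catalog

# Which catalog you use (REST here), where it lives, where the data is stored
catalog = load_catalog(
    "rest",
    **{
        "type": "rest",
        "uri": "https://catalog.example.com",          # made-up endpoint
        "warehouse": "s3://example-bucket/warehouse",  # made-up warehouse
    },
)

# Which tables FME would read, and which filters/columns you push down
table = catalog.load_table("analytics.events")
batch = table.scan(
    row_filter="event_date >= '2024-01-01'",    # partition-pruned read
    selected_fields=("event_id", "geom_wkb"),   # e.g. geometry stored as WKB
).to_arrow()
print(batch.num_rows)
```

Even a snippet at that level answers half of the list above in one go.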
An example workflow you are trying to build (source to Iceberg, Iceberg to target, or both) is worth more than a checklist answer; even a few lines of pseudo-code, like the sketch below, is plenty. If your setup is sensitive, feel free to DM instead.
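For example, even something this small (again, the endpoint, file, and table names are made up) tells us the source format, the catalog, and the write pattern in one shot:

```python
# Hypothetical "source to Iceberg" shape: load a daily Parquet extract,
# then append it to an existing table (append-only, no row-level updates).
import pyarrow.parquet as pq
from pyiceberg.catalog import load_catalog

catalog = load_catalog("rest", uri="https://catalog.example.com")  # made-up endpoint
table = catalog.load_table("analytics.events")                     # made-up table

new_rows = pq.read_table("daily_extract.parquet")  # hypothetical source extract
table.append(new_rows)  # the Arrow schema must match the Iceberg table schema
```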
Thanks for your patience on this one. The more real use cases we hear about, the better the first release will be.