Hi @johnt I just tested dynamic writing + Upsert with the PostgreSQL Writer and was able to get it working correctly.
In the User Attributes tab of your Dynamic PostgreSQL Writer, you'll need to make sure that you've exposed the primary key column so you can use it in the Match Columns parameter.
In my workflow I used the SchemaScanner to create the schema mid-workspace for the writer to use. You can see how I configured the writer in the screenshot below. Because we're using the schema from the SchemaScanner, we need to set the Schema Source to "Schema From Schema Features", and the Schema Definition Name must be set to fme_feature_type_name (this is the name of the corresponding schema feature created by the SchemaScanner).
I find the SchemaScanner is a good solution if the schema isn't being picked up correctly, but it isn't necessary if the schema coming from your Dynamic CSV Reader is working fine.
If you're interested in the SchemaScanner, you can find more info here: https://community.safe.com/s/article/How-to-SchemaScanner-Transformer
Hope this helps!
Is there a specific technical reason why you can’t use a WHERE clause with Upsert?
Hi @mathiku, great question. This is a limitation of PostgreSQL and of other writers with similar functionality. PostgreSQL does not have a dedicated UPSERT statement; instead, it supports the upsert operation through the INSERT ... ON CONFLICT statement. That statement requires a conflict target: the unique constraint or unique index that may cause a conflict. Because the conflict target is mandatory, the WHERE clause is disabled.
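To make the conflict-target requirement concrete, here is a minimal sketch of the INSERT ... ON CONFLICT pattern. It runs against an in-memory SQLite database purely for convenience, since SQLite shares PostgreSQL's upsert syntax; the table and column names are illustrative, not anything from FME.

```python
import sqlite3

# Illustrative table with a primary key; PostgreSQL uses the same syntax.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO cities VALUES (1, 'Vancouver')")

# The conflict target "(id)" names the unique constraint that may conflict.
# Without it, the database cannot know which row counts as "the same" row.
conn.execute(
    "INSERT INTO cities (id, name) VALUES (?, ?) "
    "ON CONFLICT (id) DO UPDATE SET name = excluded.name",
    (1, "Victoria"),
)
conn.commit()

print(conn.execute("SELECT name FROM cities WHERE id = 1").fetchone()[0])
```

The second INSERT collides on id = 1, so the DO UPDATE branch fires and the existing row is updated in place rather than duplicated.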
If you need to use a WHERE clause, a workaround would be to use separate feature types for Insert, Update, and Delete. Alternatively, use the fme_db_operation attribute from the ChangeDetector in Insert & Update mode.
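The ChangeDetector idea can be sketched in plain Python. This is only an assumption-laden illustration of the change-detection logic, not FME's implementation: features are modeled as dicts, and fme_db_operation is the real FME attribute name, but everything else (function name, sample data) is hypothetical.

```python
def detect_changes(original, revised, key):
    """Tag each feature with an fme_db_operation value, ChangeDetector-style.

    original, revised: lists of dict "features"; key: the match column name.
    Returns features tagged INSERT, UPDATE, or DELETE.
    """
    old = {f[key]: f for f in original}
    new = {f[key]: f for f in revised}
    ops = []
    for k, f in new.items():
        if k not in old:
            ops.append({**f, "fme_db_operation": "INSERT"})   # new row
        elif f != old[k]:
            ops.append({**f, "fme_db_operation": "UPDATE"})   # changed row
    for k, f in old.items():
        if k not in new:
            ops.append({**f, "fme_db_operation": "DELETE"})   # removed row
    return ops

original = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
revised = [{"id": 1, "name": "a2"}, {"id": 3, "name": "c"}]
ops = detect_changes(original, revised, "id")
print(ops)
```

In Insert & Update mode you would simply drop the DELETE branch and write only the INSERT/UPDATE features, letting the writer apply each feature according to its fme_db_operation value.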
Hi danminneyatsaf,
I am trying to replicate your solution. What is not clear to me is that key_id is not the PK column itself but the value of that column, which is why johnt is asking for a dynamic way to do this.
I say this because in my case the PK column could be code, or iso2code, or a combination of both, and I need to set this dynamically for each table; just putting “key_id” rather than @value(key_id) does not feel right to me.
Thanks!
Hi @hectorhonrubia, apologies for the delayed reply.
If you have multiple keys, and the keys that are present depend on the dataset read, then you will need to include logic in your workspace that redirects your data to the correct writer with the key field already specified.
This can be done with an AttributeExposer that exposes the potential attributes that contain the key value. A TestFilter can then be used to redirect the data to the correct writer based on the key fields present.
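In plain Python terms, the TestFilter step amounts to routing each feature on which key attributes it carries. This is a hypothetical sketch of the routing decision only (the function and writer names are made up; the code and iso2code column names come from your example), not FME API code.

```python
def route(feature):
    """Pick a writer feature type based on which key columns are present."""
    has_code = feature.get("code") is not None
    has_iso2 = feature.get("iso2code") is not None
    if has_code and has_iso2:
        return "writer_code_iso2code"  # composite-key writer
    if has_code:
        return "writer_code"           # writer with Match Columns = code
    if has_iso2:
        return "writer_iso2code"       # writer with Match Columns = iso2code
    return "unrouted"                  # no known key exposed

print(route({"code": "CA", "name": "Canada"}))
print(route({"iso2code": "CA"}))
print(route({"code": "CA", "iso2code": "CA"}))
```

Each branch corresponds to one TestFilter output port feeding a writer whose Match Columns parameter is already set to the right key, so no per-feature @value() trick is needed.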
I’ve attached an example workspace to help illustrate the workflow.