I have a custom transformer that caches API call results to a PostgreSQL table. It first tries a DatabaseJoiner on the key; if a matching record is found, the joined feature goes directly to the main Output port. Otherwise, it calls the API and passes the result to a SQLExecutor, which INSERTs it so that subsequent features carrying the same key value can be joined from the cache.
Unfortunately, the DatabaseJoiner cannot find records stored by the SQLExecutor for subsequent features during the same workspace execution. I've tried explicit COMMITs in the SQLExecutor (with the appropriate delimiter declaration), but PostgreSQL reports that no transaction is in progress. I've also tried a 5-second-per-feature Decelerator after the SQLExecutor, in case there is some kind of race condition between FME Workbench and PostgreSQL. No luck: because the DatabaseJoiner fails to find the match, the same key is sent to the API again and the SQLExecutor attempts a second INSERT, so I still get duplicate key violations. I've even tried moving the SQLExecutor to a separate workspace called via a WorkspaceRunner, but the errors persist.
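For reference, the SQLExecutor statement looks roughly like this. The table and column names here are placeholders, not my actual schema, and attribute substitution uses FME's @Value() syntax:

```sql
-- Placeholder table/column names; the real workspace substitutes
-- feature attributes via @Value()
FME_SQL_DELIMITER ;
INSERT INTO api_cache (cache_key, api_result)
VALUES ('@Value(cache_key)', '@Value(api_result)');
COMMIT;
```

This is the statement that raises the duplicate key violation on the primary key of the cache table when the same key comes through a second time.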
There is no parallelism in this workspace, and it appears that the INSERT is committed every time the SQLExecutor runs, so I can't see any race condition that would explain this behavior. Hoping someone can shed some light.