I am reading data from 5 WFS feeds and writing them to the same PostGIS database table. I schedule the FME workspace to run once a week so I get fresh data.

I only want to drop the data from the PostGIS table if the WFS readers complete. At the moment it drops the table and then writes to it as it reads the new data, so if a WFS reader fails, I am left with a partial dataset.

So I have two questions:

1) I would rather that FME didn't write any data until after it has downloaded all of the data from the WFS readers. Is this possible?

2) Is it possible to make FME continue with the other WFS readers in the workspace if one reader fails?

I have found this page; is this the best way to achieve what I want?

http://safe.force.com/articles/How_To/Rolling-back-a-database-load-in-one-transaction
Hi,

In my opinion, the best solution would be to first write the data into a staging schema, including a metadata table that records the number of features read from each input source (WFS reader).

Then use two workspaces: the first reads the WFS data and writes it to the staging schema, including the metadata. The second analyses the staging metadata and copies the data over to the production schema if everything is OK.
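The promote step of this pattern can be sketched in plain Python, using sqlite3 as a stand-in for the PostGIS database (in practice FME's readers and writers would do the loading, and the check could live in a SQLExecutor or the second workspace). All table and column names here are illustrative assumptions, not FME output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Staging schema: the feature data plus a metadata table recording how
# many features each WFS reader delivered on this run.
cur.executescript("""
    CREATE TABLE staging_features (id INTEGER, source TEXT);
    CREATE TABLE staging_meta (source TEXT, features_read INTEGER);
    CREATE TABLE production_features (id INTEGER, source TEXT);
""")

# Simulate a successful load from two WFS feeds.
cur.executemany("INSERT INTO staging_features VALUES (?, ?)",
                [(1, "wfs_a"), (2, "wfs_a"), (3, "wfs_b")])
cur.executemany("INSERT INTO staging_meta VALUES (?, ?)",
                [("wfs_a", 2), ("wfs_b", 1)])
conn.commit()

# Promote step: only replace production data if no source reported
# zero features, and do the delete + copy in a single transaction so
# production is never left partially written.
cur.execute("SELECT COUNT(*) FROM staging_meta WHERE features_read = 0")
failed_sources = cur.fetchone()[0]
if failed_sources == 0:
    cur.execute("DELETE FROM production_features")
    cur.execute("INSERT INTO production_features "
                "SELECT * FROM staging_features")
    conn.commit()
else:
    conn.rollback()  # leave production untouched if any feed failed

cur.execute("SELECT COUNT(*) FROM production_features")
print(cur.fetchone()[0])  # prints 3
```

The key point is that production is only touched inside the final transaction, after the staged counts have been validated, so a failed WFS feed can never leave a half-written production table.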

You could easily chain the two workspaces using a WorkspaceRunner.

David
A FeatureHolder would ensure that the writer did not commence until all readers had finished.

I agree with David, though, that the solution you're looking for is to split the workspace in two.