
I've taken some Workspaces created in 2015/2016 and saved them as 2017.0.1.1 Workspaces. I've removed Readers and Writers and added them in fresh. In some cases I get this report in the log:

Workbench: Workspace integrity check during load found one or more readers/writers with configuration issues.

Workbench: Backed up current workspace as: E:\AddressBase\Plus\FME\LACHS (backup).fmw

Workbench: Fixed reader/writer configuration issues.

I know this means the issues should be fixed, but it raised the question in my mind of the best way to upgrade Workspaces. I know old ones should generally run in a later version of FME, but in these cases I'm changing the Reader(s) and/or Writer(s), so I might as well update the Workspace version.

Some Workspaces have a lot of transformers in them, so is a good upgrade path to create a blank Workspace, add the Reader(s) and Writer(s), then copy and paste the transformers from the old Workspace (then upgrade transformers and edit parameters if necessary)?

What I would personally do is re-add the readers and writers, to make sure they're up to the latest version, and then go through the Updateable Transformers list and, if necessary, update them.

Then you'd still have the issue that a new transformer can replace a bunch of old ones; unfortunately there's no way to do that automatically, so you'd have to keep an eye on those developments.


Have you right-clicked on the readers/writers in the Navigator window and selected "Update Reader/Writer"?


@tim_wood

In my experience it is best to open and save the workspace in each version, if you can afford to do that. Meaning: open it in 2015 and save it, then in 2016 and save it, then in 2017 and save it. This might sound silly, but copying from 2016 and pasting into 2017 might work as well.

@redgeographics' solution should work as well.


As @runneals says, if you can use FME2017.1 then there is a new option there to update readers and writers directly - just right-click on them in the Navigator window. There isn't a version number in the same way as there is for transformers. Instead it's tied to a particular version/build number.

Personally I'd upgrade transformers slowly and with frequent testing, just as if you were just placing/creating them in the first place. Just to make sure. In fact, I think the philosophy of "if it ain't broke, don't fix it" is a good one. If the workspace already produces the correct result, performance is good, and there isn't any new functionality you specifically need - well do you really need to make any updates?



Just right-click on the reader or writer and upgrade to the latest version; no need to re-add them.

But you need to take care with writers that are dynamic: sometimes they revert to default settings, and if you have a published parameter that is no longer valid, it will be removed, as I found out with the SQL Server non-spatial writer.




Just a thought, but would it be possible to benchmark transformers/readers/writers for each release and make a table noting whether performance increased, to show the enhancements (if any) and, if so, by how much? For example, the big thing with 2019 was the Dissolver/Shapefile reader improvements: https://www.safe.com/blog/2019/04/fme-2019/ but what about the others? With how things are moving towards being based in FME Hub, maybe an automated service could do the testing for each version?
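For a do-it-yourself version of this idea, FME ships a command-line runner (fme.exe on Windows) that can execute a workspace outside Workbench, so repeated timed runs can be scripted. The sketch below is a minimal, hedged example: the install paths and workspace name are placeholder assumptions, not details from this thread.

```python
# Sketch: time repeated command-line runs of the same workspace to compare
# FME versions. Paths below are hypothetical examples, not real installs.
import statistics
import subprocess
import time


def benchmark(cmd, runs=3):
    """Run a command several times; return the median wall-clock seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)


if __name__ == "__main__":
    workspace = r"E:\AddressBase\Plus\FME\LACHS.fmw"  # placeholder
    for fme in (r"C:\Program Files\FME2017\fme.exe",   # placeholder
                r"C:\Program Files\FME2019\fme.exe"):  # placeholder
        print(fme, benchmark([fme, workspace]))
```

Using the median rather than the mean reduces the influence of a single slow run (e.g. a cold disk cache), though as noted below, machine specs and data specifics still dominate any such comparison.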


We do actually benchmark all our transformers to see if performance increases (or decreases), using automated tools, and we're improving those processes continuously. But we don't generally release the figures, and I'm very loath to suggest that we do, because performance can vary so much by data specifics, and it's often easy to misinterpret.

For example, in 2019.1 the PointOnAreaOverlayer is potentially 100-200 times faster than before... but I wouldn't promote that number because it applies only for areas with large boundaries (>256 vertices) and/or donuts with many holes. So there are some HUGE benefits, especially in extreme geometries, but not everyone will see them. That makes it really difficult to publish any meaningful figures like "average" performance.

So... I suggest you keep asking this sort of question, to keep us on our toes, but I don't see us exposing all of our performance numbers in the near future. Not to say it won't happen eventually, and you will see more numbers from us as our measurement tools improve, but right now I don't see a big public dashboard of performance numbers coming.

Actually the biggest thing to be aware of now is the bulk mode/feature tables. The biggest gains occur when we convert a transformer to support these, but the entire workspace is only faster when all of its transformers are converted. So the StatisticsCalculator got this in 2019, but it only becomes faster if it receives data in bulk mode, and passes it to a transformer that supports it. So keep an eye on announcements for transformers supporting bulk mode, because the more that do the more likely your entire workspace will run quicker.

Hope this is of interest. In short, our automated testing is improving, and as it does we'll be in a position to release more information. But right now there's no plan I know of to create the tool that you're suggesting.



Totally agree with your points about the different run times; you're right on about the machine specs (which I found out after comparing a job on my desktop to one on FME Server). Also agree with your points about the bulk data options, as writing data in bulk to ArcGIS Online is much faster than writing it using the traditional insert methods. Can't wait to see all the great stuff you guys are working on!


I would go through the list and upgrade them, yes.

But relating transformers, like the FeatureMerger, SpatialRelator, etc., have gained the option to merge attributes under a drop-down menu. You'll find that you have to actively enable those.

Also, if you have nested custom transformers and pass data through parameters, well, you might have your work cut out for you, as the treatment of those has changed significantly.

I had some where, rather than the parameter content, it suddenly showed the parameter names. Some I had to fetch explicitly, using a ParameterFetcher.

Also, I found that depending on the transformer version, creating an attribute in an AttributeCreator and using it in the same AttributeCreator does not necessarily work.
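The pitfall here is evaluation order: whether a newly created attribute is visible to a later expression in the same transformer depends on whether the attributes are evaluated sequentially or against a snapshot of the incoming feature. This is a plain-Python analogy of that difference, not FME API code:

```python
# Analogy only (not FME API): the same attribute definitions behave
# differently under sequential vs. snapshot evaluation.
feature = {"price": 10, "qty": 3}

# Sequential evaluation: "total" exists by the time "total_with_tax" runs.
new_attrs = {}
new_attrs["total"] = feature["price"] * feature["qty"]
new_attrs["total_with_tax"] = new_attrs["total"] * 1.2

# Snapshot evaluation: every expression sees only the original feature,
# so an expression referencing the new "total" would fail (KeyError here).
snapshot = {
    name: expr(feature)
    for name, expr in {
        "total": lambda f: f["price"] * f["qty"],
        # "total_with_tax": lambda f: f["total"] * 1.2,  # would raise KeyError
    }.items()
}
```

If a transformer version switched from one behaviour to the other, a workspace that relied on in-place reuse of a new attribute would silently break, which is why testing after each upgrade matters.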

So yeah: back up, go through the conversion, and test a lot.

