I need to load data from multiple readers into a single writer, and I would like to take a backup of the current data before the process begins. I would like to set the option to truncate the features in the output GDB and reload from all the readers. If the process fails partway through, is there any way to roll it back?

Can anyone help, please? Thanks

Hi,

Two approaches can be considered.

1. Copy the GDB to another location as a backup in a startup script, and restore it in a shutdown script if the translation failed. The built-in global variable FME_Status (exposed as fme.status in a Python script) can be used in the shutdown script to determine whether the translation was successful. See the Python sketch after this list.

2. Create three workspaces:

(a) one that copies a GDB,

(b) the main process,

(c) a master workspace that runs (a) to create a backup, then (b), then (a) again, with three WorkspaceRunners in series (Wait for Job to Complete = Yes). The last (a), which restores the original GDB from the backup, runs only if the (b) main process failed.

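Here is a minimal sketch of approach 1 in Python, assuming the output is a file geodatabase (a .gdb folder, so a plain folder copy is enough) and that a published parameter named DEST_GDB (a hypothetical name) holds its path. fme.macroValues and fme.status are the objects FME exposes to Python startup and shutdown scripts.

Startup script:

```python
# Startup script: back up the output GDB before the translation starts.
# DEST_GDB is a hypothetical published parameter holding the GDB path.
import os
import shutil

import fme  # provided by FME to startup/shutdown scripts

gdb = fme.macroValues['DEST_GDB']
backup = gdb + '.bak'

if os.path.isdir(gdb):
    if os.path.exists(backup):
        shutil.rmtree(backup)      # discard any stale backup
    shutil.copytree(gdb, backup)   # a file GDB is a folder, so copytree works
```

Shutdown script:

```python
# Shutdown script: roll back only if the translation failed.
import os
import shutil

import fme

gdb = fme.macroValues['DEST_GDB']
backup = gdb + '.bak'

if not fme.status and os.path.isdir(backup):   # fme.status is False on failure
    shutil.rmtree(gdb, ignore_errors=True)     # drop the partially written GDB
    shutil.copytree(backup, gdb)               # restore the pre-run state
```
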
Takashi

Hello Takashi,

I am new to FME and I was trying the second option you suggested.

I have one workspace which creates a backup from one file geodatabase to another. In this workspace I have made my source FGDB a published parameter. This is called the backup workspace.

I created another workspace and added a reader that contains the file geodatabase feature class. I added a WorkspaceRunner transformer, pointed it to the backup workspace, and pointed its source parameter to the right GDB.

When I run this workspace it runs successfully, but in an infinite loop. Can you please point out my mistake?

Thanks

My intention is as in this image.

The 1st WorkspaceRunner creates a backup of the original GDB; the 3rd WorkspaceRunner restores it if the 2nd WorkspaceRunner failed.

Although the 1st and 3rd WorkspaceRunners run the same workspace (a), they are completely different processes.

The workspace (a) copies a GDB, so it can be used for both backing up and restoring.

Thanks Takashi. Initially I did something similar. Then I tried to run just the first WorkspaceRunner, which creates the backup. I made the source GDB a published parameter of the backup workspace, then added a reader and connected it to the WorkspaceRunner. When I try to run the workspace it runs in an infinite loop; I set Wait for Job to Complete to both Yes and No, and in either case it runs in an infinite loop.

One possible way.

Assume there is a workspace which creates a copy of a GDB dataset, and that it publishes two parameters to receive the source and destination GDB paths.

If you define parameters called SOURCE_GDB, storing the original GDB path, and BACKUP_GDB, storing the backup GDB path, a WorkspaceRunner with this setting would work to create a backup.

To restore the original GDB, specify BACKUP_GDB as the Source and SOURCE_GDB as the Destination in another WorkspaceRunner.

In this way, you don't need to add any reader. Just use a Creator transformer to create an initiator feature to kick off the 1st WorkspaceRunner. A sketch of the copy workspace follows.
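
As a sketch of what workspace (a) itself could be: since a file geodatabase is just a folder, the whole copy job can live in a startup script driven by the two published parameters. SRC and DEST are hypothetical parameter names here; the parent WorkspaceRunners fill them in (SOURCE_GDB into SRC and BACKUP_GDB into DEST for the backup run, swapped for the restore run).

```python
# A possible startup script for workspace (a): copy one file GDB folder
# to another. SRC and DEST are hypothetical published parameter names
# supplied by the calling WorkspaceRunner.
import os
import shutil

import fme

src = fme.macroValues['SRC']
dest = fme.macroValues['DEST']

if os.path.exists(dest):
    shutil.rmtree(dest)        # overwrite any previous copy
shutil.copytree(src, dest)     # plain folder copy = GDB copy
```
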
Hi Takashi,

It worked for me, thanks! I had another question related to this: is it possible to create dynamic readers that read data from different file geodatabases?

Possibly this thread helps you:

Add data from a table containing file paths
https://safecommunity.force.com/CommunityAnswers?id=906a0000000d29RAAQ
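
The thread has the details; as one rough sketch of the idea (not necessarily the thread's exact method), a PythonCaller could emit one feature per path listed in a text file, and a downstream FeatureReader could then read each geodatabase named by that attribute. PATH_LIST and _gdb_path are hypothetical names.

```python
# PythonCaller sketch: turn a text file of GDB paths into one feature
# per path. PATH_LIST is a hypothetical published parameter pointing at
# a text file with one file geodatabase path per line.
import fme
import fmeobjects

class PathEmitter(object):
    def input(self, feature):
        with open(fme.macroValues['PATH_LIST']) as f:
            for line in f:
                path = line.strip()
                if path:
                    out = fmeobjects.FMEFeature()
                    out.setAttribute('_gdb_path', path)  # read downstream
                    self.pyoutput(out)
```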

 

