
Hi,

I am new to FME. I currently have 286 old gdbs, each consisting of 15 feature classes, that need to be updated to a new gdb. Some of the updates include: changing points to polygons, merging feature classes, renaming fields, and mapping values for subtypes. I have already figured out how to update an individual gdb.

Right now, I am playing around with the WorkspaceRunner and the Directory and File Pathnames reader. I am uncertain about some of the parameters.

In my workspace, does it matter which old gdb I load as the reader dataset and which new gdb I use as the writer dataset?

I have attached below what my current workspace looks like.

Can you please help me out? It would be very much appreciated :-)

Thanks,

Vincent

I have some questions to clarify the situation.

  1. Are the 286 old gdb datasets (*.gdb folders) saved under the same parent folder?
  2. Is the current workspace shown in your screenshot intended to perform the translation from a single old gdb dataset to a single new gdb dataset?
  3. Do you intend to use a WorkspaceRunner in another workspace to run the current workspace 286 times for each old gdb dataset?
  4. How do you determine the 286 new destination gdb dataset names (i.e. *.gdb paths)?

Hi @takashi,

1. The old gdb datasets are stored under the same parent folder (i.e. M:\Parks\HQ\OPERATIONS\CPI\Asset_Management\GNSS Projects\Parks Inventory\MergedParks\*\Old\GDB\*.gdb).

2. The screenshot I've attached above is my current workspace, which translates a single old gdb dataset to a new gdb dataset.

3. I have another workspace with a WorkspaceRunner and the Directory and File Pathnames reader. I want to use it to run the current workspace 286 times, once for each old gdb dataset.

4. Right now I am still figuring out the directory fanout. I want the output for each gdb dataset to go into a separate folder.

Hi @vincentlaw94, you can use the Directory and File Pathnames (PATH) reader to read every *.gdb folder path under a specific folder and pass each one to the FGDB Source Dataset parameter of the current workspace through the WorkspaceRunner, one at a time.

Regarding the destination *.gdb paths, consider using the Fanout Dataset functionality.
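For reference, the same batch pattern can also be driven from a script with FME's Python API in place of the parent WorkspaceRunner workspace. The sketch below is only a rough illustration under several assumptions: the child workspace path, the output layout (a sibling New\GDB folder), and the parameter names SourceDataset_FILEGDB / DestDataset_FILEGDB are placeholders, not values confirmed in this thread, so check the published parameters of your own workspace first.

import glob
import os

import fmeobjects  # available only in FME's own Python environment

# Placeholder path to the single-gdb translation workspace.
WORKSPACE = r"C:\FME\translate_single_gdb.fmw"

# Source pattern taken from the folder layout quoted earlier in the thread.
PATTERN = (r"M:\Parks\HQ\OPERATIONS\CPI\Asset_Management\GNSS Projects"
           r"\Parks Inventory\MergedParks\*\Old\GDB\*.gdb")

runner = fmeobjects.FMEWorkspaceRunner()
for src_gdb in glob.glob(PATTERN):
    # Assumed output layout: ...\MergedParks\<park>\New\GDB\<name>.gdb.
    # Inside Workbench, the Fanout Dataset option on the writer achieves the
    # same per-dataset split without any scripting.
    park_dir = os.path.dirname(os.path.dirname(os.path.dirname(src_gdb)))
    dest_gdb = os.path.join(park_dir, "New", "GDB", os.path.basename(src_gdb))
    runner.runWithParameters(WORKSPACE, {
        "SourceDataset_FILEGDB": src_gdb,
        "DestDataset_FILEGDB": dest_gdb,
    })

Either way, the key idea is the same: one source *.gdb path in, one run of the single-dataset workspace out.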


Hi @takashi, I am still unsure what to put for the source Dataset parameter of the current workspace's reader. Right now I have it set to an individual old gdb directory.

Workbench automatically created a published user parameter called SourceDataset_*** that is linked to the Dataset parameter of the FGDB reader when you added the reader to the workspace.

Then, when you select the current workspace in the WorkspaceRunner, that parameter will appear in the parameters list. Pass it the path_windows (or path_unix) attribute, which stores the *.gdb path read by the PATH reader.
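As a side note, before wiring up the WorkspaceRunner you can test a single run outside Workbench by supplying that same published parameter on the FME command line. A rough sketch via Python's subprocess, where the fme.exe location, workspace path, parameter name, and gdb path are all placeholders rather than values from this thread:

import subprocess

# Hypothetical one-off test: override the auto-created published parameter
# from outside Workbench. Use the exact parameter name shown under User
# Parameters in the Navigator, and adjust the paths to your machine.
subprocess.run([
    r"C:\Program Files\FME\fme.exe",
    r"C:\FME\translate_single_gdb.fmw",
    "--SourceDataset_FILEGDB", r"M:\...\Old\GDB\Example.gdb",  # value normally supplied by path_windows
], check=True)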

Thank you for your clarification @takashi
