We are deploying two FME Servers: one for production and one for testing (staging).

The workflows I've built (7-8 separate workspaces) all refer to one MS SQL database and one external HTTP service endpoint. There's one "connection set" for production and another for testing.

Using the same set of workspaces, what is the best practice for maintaining identical workflows on the two servers, each with its own "connection set"?

I've streamlined all database connections to a single linked database connection, and I know that it can be replaced quite easily in Workbench. But this would require me to publish twice every time, which is not optimal.

And what about the HTTP endpoints: is there a way to store these on each separate FME Server so they automagically work?

Any ideas and insights on making the maintenance as simple as possible?

Cheers

Lars I

We use Python scripted parameters for this, setting the paths etc. based on the FME Server engine name. Will post a sample later, on mobile atm.


It did occur to me after posting that I could at least publish the same database connection to each server and edit its contents on each server to fit the correct usage.

But it still leaves the HTTP endpoint.


Not sure if this helps; however, you can configure connections directly in FME Server.

  1. Publish the first set of workspaces to Staging.
  2. Create a project with everything (including connections).
  3. Import that project into the production environment and reconfigure the connections in the production environment.
  4. Now you can go back to the staging environment and remove the connections from the project.
  5. When you next export and import the project to production the connections will remain unchanged and you can overwrite the old project content.

This also lets you keep the connection names the same in the workspaces; the connections actually used will simply depend on the environment. (A rough scripted sketch of the export/import round trip follows below.)
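For anyone wanting to automate steps 1-5, the same round trip can in principle be scripted against the FME Server REST API. To be clear, the endpoint paths below are assumptions based on the v3 API layout, not verified calls; check the API documentation at /fmerest/apidoc/ on your own server before using this.

# Sketch only: promotes a project package from staging to production.
# The /fmerest/v3/projects/... paths are ASSUMED, not verified -- consult
# your server's /fmerest/apidoc/ for the actual project export/import calls.
import requests

STAGING = 'https://staging.example.com'   # hypothetical hosts and tokens
PROD = 'https://prod.example.com'
AUTH_STG = {'Authorization': 'fmetoken token=<staging-token>'}
AUTH_PRD = {'Authorization': 'fmetoken token=<prod-token>'}

# 1) Export the project package from staging (assumed endpoint).
pkg = requests.get(f'{STAGING}/fmerest/v3/projects/projects/MyProject/export',
                   headers=AUTH_STG)
pkg.raise_for_status()

# 2) Import it into production (assumed endpoint). Because the connections
#    were removed from the staging project, the ones already configured on
#    production are left untouched.
imp = requests.post(f'{PROD}/fmerest/v3/projects/import/upload',
                    headers={**AUTH_PRD,
                             'Content-Type': 'application/octet-stream'},
                    data=pkg.content)
imp.raise_for_status()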

 

I'm not entirely sure what you mean by endpoints here. Are you referring to the FME Server endpoint or the external service? If external, then yes, this could be stashed somewhere as a string in a text file in some data folder. @nielsgerrits' scripted parameter option, however, is probably the cleanest solution (at least for the HTTP external service). It also reduces dependencies.
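To illustrate the text-file idea: keep a one-line file at the same local path on each server, containing that server's endpoint URL, and read it from a Python scripted parameter, written in the same style as the scripted-parameter sample later in this thread. The path and URLs here are hypothetical examples.

# Sketch: each server stores its own endpoint URL in a one-line text file
# at the same local path. Path and fallback URL are hypothetical examples.
import os

ENDPOINT_FILE = r'C:\fmedata\config\http_endpoint.txt'

if os.path.isfile(ENDPOINT_FILE):
    with open(ENDPOINT_FILE) as f:
        return f.read().strip()   # e.g. https://service.example.com/api
else:
    # No file found: probably running on Desktop, so use the test endpoint.
    return 'https://test.example.com/api'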

 

It's also important to consider that testing might be done in FME Desktop, so adding a file into a location on the FME Server might make testing on Desktop tricky.

 

A scripted parameter can help you determine whether you're running on Desktop, or indeed which environment. It's a nice option for sure.
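The detection itself is tiny; here is a minimal sketch in the same style as the scripted-parameter sample later in this thread:

# Minimal sketch: FME_ENGINE is only populated when running on FME Server,
# so an empty value means the workspace is running in Workbench/Desktop.
engine = FME_MacroValues.get('FME_ENGINE', '')
if engine == '':
    return 'Desktop'
else:
    return engine   # engine name, usable e.g. to tell Test from Prod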

 



Yes, the endpoint refers to the external service.

Python scripting in the workspaces is a workaround, not a solution, imho. It's still somewhat hardcoded and needs to be maintained.

It would be awfully nice if one could store server-specific configuration metadata (strings etc.) that could be utilized in a workspace. The Desktop here is just another "server" in that respect, or Workbench could prompt for a value if not defined, just like with published parameters.

I guess I could utilize environment variables and the EnvironmentVariableFetcher, but maintaining these requires desktop/OS access to the servers, not just access to the FME Server web interface. Maybe Safe would consider adding environment variable editing to the web interface?
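For completeness, a scripted parameter can also read an environment variable directly via Python's os module, which sidesteps the EnvironmentVariableFetcher transformer entirely. The variable name below is a hypothetical example.

# Sketch: read a server-specific environment variable. The name
# FME_HTTP_ENDPOINT is hypothetical; it still has to be set on each
# machine at the OS level, which is the maintenance drawback noted above.
import os
return os.environ.get('FME_HTTP_ENDPOINT', '')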


We use Python scripted parameters for this, setting the paths etc. based on the FME Server engine name. Will post a sample later, on mobile atm.

I understand this is not what you are looking for, but for documentation:

For my server workspaces I use an "Environment when on desktop" switch (private parameter of type Choice). This can be set to Test or Prod.

Next, I use a parameter for each source which determines whether the workspace is run on Desktop or on Server, and, if on Desktop, which dataset it needs to use (private parameter of type Python Script).

This way I can build on Desktop, publish to Test, publish to Prod, and eventually debug Prod using Desktop, without changing anything in the workspace.

# UNC paths to the test and production datasets
VarTest = '\\\\testserver\\data\\test.txt'
VarProd = '\\\\prodserver\\data\\prod.txt'

# FME_ENGINE is only set when running on FME Server; on Desktop it is
# empty, so fall back to the "Environment" choice parameter instead.
if FME_MacroValues.get('FME_ENGINE', '') == '':
    if FME_MacroValues.get('Environment', '') == 'Test':
        return VarTest
    elif FME_MacroValues.get('Environment', '') == 'Prod':
        return VarProd
    else:
        return ''
# On Server, pick the dataset based on the engine (host) name.
elif 'TEST' in FME_MacroValues.get('FME_ENGINE'):
    return VarTest
else:
    return VarProd
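One thing to watch with the sample above: the 'TEST' in FME_MacroValues.get('FME_ENGINE') check assumes your test engine names contain TEST, so adjust the match to your own engine naming convention. The idea is that each source reader's dataset then points at this private parameter, so nothing on the canvas changes between environments.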

 

