Solved

Moving Workspaces between FME installs - tips and tricks

  • 7 October 2016
  • 5 replies
  • 7 views

Userlevel 4
Badge +13

I've heard that there are some users out there doing really cool things with Python scripted parameters to help move workspaces between different FME Desktop installs and FME Server as well!!

I'd love to know what FME users have been up to on this front! Please share your solutions.

Best answer by david_r 10 October 2016, 10:48

5 replies

Userlevel 4

I have a pattern that I often use when there are multiple environments, e.g. development, staging, production and fail-over, and the workspace has to connect to the corresponding resources depending on which environment it is running in. The pattern works for both FME Desktop and FME Server.

The reason for this pattern is to avoid having to remember to manually set all the parameters correctly when publishing to each environment, eliminating human errors.

The idea is to have a centralized configuration file for each environment and to have FME read parameters such as database connections, path names, etc. from the configuration file rather than having them hard-coded in the workspaces. This way you can publish / use the exact same workspace in each environment without any changes.

I personally like to use ini-files as they are easy to manipulate, but you can of course use whatever suits you, even XML! Sample config.ini file:

[DBCONNECTION]
dbhost=srv-db-production.mydomain.com
dbinstance=gis-prd-db

[DATA_INPUT]
parcels=//nas/production/data/input/parcels
address=//nas/production/data/static/address

This configuration file can either be placed in a common folder or uploaded as a resource to FME Server.

The workspaces can then access the settings through private scripted Python parameters. This example is for a parameter that returns the directory containing the 'parcels' dataset from the config.ini example above:

from ConfigParser import SafeConfigParser    # Built-in Python module

if FME_MacroValues.has_key('FME_SHAREDRESOURCE_DATA'):
    # The workspace is running on FME Server
    # config.ini is a shared data resource
    ini_file = FME_MacroValues['FME_SHAREDRESOURCE_DATA'] + '/config/config.ini'
else:
    # The workspace is running on FME Desktop
    # config.ini is located relative to the workspace directory
    ini_file = FME_MacroValues['FME_MF_DIR'] + '/config/config.ini'

config = SafeConfigParser()                  # Initialize parser
config.read(ini_file)                        # Read the ini file
return config.get('DATA_INPUT', 'parcels')   # Return value

The private parameter can then be linked wherever necessary throughout the workspace. The private parameter script is only evaluated once when the workspace starts, even if the parameter is referenced in multiple places.

If you use this pattern a lot you might want to create a small Python module that you can import into your workspaces. That way you can streamline it even further, e.g. by avoiding reading and parsing the config.ini for each parameter.
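As a minimal sketch of what such a module could look like (the module name configreader, the get() helper, and the assumption that the module sits somewhere on the engines' Python path are mine for illustration, not part of the original answer), reusing the same Python 2 ConfigParser and file locations as the scripted parameter above:

# configreader.py -- minimal sketch of a shared configuration module (hypothetical name).
from ConfigParser import SafeConfigParser

_config = None  # module-level cache so config.ini is only parsed once per run

def get(fme_macrovalues, section, option):
    # Locate and parse config.ini the same way as the scripted parameter above,
    # then return the requested value.
    global _config
    if _config is None:
        if 'FME_SHAREDRESOURCE_DATA' in fme_macrovalues:
            ini_file = fme_macrovalues['FME_SHAREDRESOURCE_DATA'] + '/config/config.ini'
        else:
            ini_file = fme_macrovalues['FME_MF_DIR'] + '/config/config.ini'
        _config = SafeConfigParser()
        _config.read(ini_file)
    return _config.get(section, option)

Each scripted parameter would then reduce to something like:

import configreader
return configreader.get(FME_MacroValues, 'DATA_INPUT', 'parcels')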

I've attached a small sample workspace that shows the above in action: scriptedparameters.zip

Badge

Like david_r, we are using one configuration file per environment we are deploying to.

We've also added an encryption layer to let customers encrypt any parameter (such as user names and passwords) that they don't want stored in clear text inside the configuration files.
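The encryption mechanism isn't described above, so purely as an illustration, a scripted parameter could decrypt such a value along these lines (the cryptography package, the key file location and the parameter name are all assumptions):

# Illustrative sketch only -- the actual encryption layer is not described in this thread.
# Assumes the 'cryptography' package is installed on every engine and that a key file
# is deployed next to config.ini (both paths are hypothetical).
from ConfigParser import SafeConfigParser
from cryptography.fernet import Fernet

base = FME_MacroValues.get('FME_SHAREDRESOURCE_DATA', FME_MacroValues['FME_MF_DIR'])

config = SafeConfigParser()
config.read(base + '/config/config.ini')

with open(base + '/config/secret.key', 'rb') as f:
    fernet = Fernet(f.read())

# 'dbpassword' is stored in the ini file as a Fernet token rather than clear text
return fernet.decrypt(config.get('DBCONNECTION', 'dbpassword')).decode('utf-8')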

Userlevel 4

Fully agree on encrypting passwords rather than keeping them in plain text in the configuration files.

Badge

Hi all,

To answer this question of having multiple environments to deploy to, I started from david_r's proposal, but I used an XML version of it. I have one configuration file per environment which lists all connection parameters, web services, resource files... (no encryption)

<fme:config>
    <fme:DATASET>ThisIsMyDataset</fme:DATASET>
    <fme:USER>MyUser</fme:USER>
    <fme:PWD>MyPassword</fme:PWD>
</fme:config>

Two simple scripted parameters let me read and access the above XML. The first one, referenced below as XML_CONFIG, builds the path to the configuration file:

# Build the path to the XML configuration file next to the workspace
VAR = str(FME_MacroValues['FME_MF_DIR']) + 'Config.xml'
return VAR

The second one parses the value it needs out of the file:

# XML_CONFIG is the scripted parameter defined above
XMLFile = str(FME_MacroValues['XML_CONFIG'])
from xml.dom import minidom

xmldoc = minidom.parse(XMLFile)                         # Parse the configuration file
itemlist = xmldoc.getElementsByTagName('fme:DATASET')   # Find the element we want
VAR = itemlist[0].firstChild.nodeValue                  # Extract its text value
return VAR

In this situation, each configuration file has to be renamed before publication to FME Server.

We do that with a build server which packages the FME code, renaming the config files and tagging each piece of code with a version control number. Then we can automate the publication of the package to each environment using the FME Server API.
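The build script itself isn't posted; as a rough sketch of the renaming step (the file layout, script name and environment names are illustrative assumptions), it could be as simple as:

# Minimal sketch of the config-renaming step on the build server.
# File names and environment names are assumptions, not from the post above.
import shutil
import sys

def package_config(package_dir, environment):
    # Copy the environment-specific file over the generic Config.xml
    # that the workspaces expect, e.g. Config.prod.xml -> Config.xml
    src = '%s/Config.%s.xml' % (package_dir, environment)
    dst = '%s/Config.xml' % package_dir
    shutil.copyfile(src, dst)

if __name__ == '__main__':
    # e.g. python package_config.py ./build/package prod
    package_config(sys.argv[1], sys.argv[2])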

The Python scripted parameter is used mostly for data streaming services.

When using job submitter services I prefer a controller/worker approach: the controller reads the XML configuration file with an XML reader and passes all the parameters on to the workers. The XML reader replaces the Python script in this case.

Badge +13

We use a solution similar to @david_r's. For every deploy, as the solution travels through dev/test/staging/prod, a different configuration file is created during the installation of the whole solution. The (Python) install script replaces parts of a config template with the values needed for the specific environment, based on input given by the maintainer of that environment.
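The install script isn't included here, but purely to illustrate the idea, the template substitution step could look roughly like this (the template file name, placeholder names and prompts are made up):

# Rough sketch of filling a config template with environment-specific values.
# Template name, placeholders and prompts are illustrative assumptions.
from string import Template

def build_config(template_path, output_path, values):
    # Replace ${...} placeholders in the template with the supplied values
    with open(template_path) as f:
        template = Template(f.read())
    with open(output_path, 'w') as f:
        f.write(template.substitute(values))

if __name__ == '__main__':
    values = {
        'dbhost': raw_input('Database host: '),          # provided by the environment maintainer
        'dbinstance': raw_input('Database instance: '),
    }
    build_config('config.ini.template', 'config.ini', values)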

I won't include my code as @david_r seems to be 'at least on par, ahem..' with me in Python ;-).

The current solution we're building works with job submitter services called through the REST API. The only published parameter, which is set in the REST call, is the location of this configuration file.
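For illustration, such a REST call could look roughly as follows; the server name, token, repository/workspace names and the parameter name are all assumptions, and the endpoint shown is the V3 transformations/submit service, so check it against your FME Server version's REST API documentation:

# Illustrative job submission via the FME Server REST API (V3).
# Server, token, repository, workspace and parameter name are assumptions.
import json
import urllib2

server = 'https://fmeserver.example.com'
token = 'my-fme-token'
url = server + '/fmerest/v3/transformations/submit/MyRepository/controller.fmw'

body = json.dumps({
    'publishedParameters': [
        {'name': 'CONFIG_FILE', 'value': '//nas/production/config/config.xml'}
    ]
})

request = urllib2.Request(url, body, {
    'Authorization': 'fmetoken token=' + token,
    'Content-Type': 'application/json',
    'Accept': 'application/json',
})
print urllib2.urlopen(request).read()   # the response contains the job id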
