
Hopefully this doesn't ramble too much...

 

I've been tasked by my organization with standardizing our development processes and implementing clean, repeatable deployment processes across all of our environments. This means storing all of our integration configurations in version control (git) and being able to use pipelines to auto-deploy to any environment at will.

 

Over the last 6 months, I've built a PowerShell module that:

  • Deploys Workspaces (if they have changed against the server version)
  • Registers Services
  • Deploys/Updates Topics
  • Deploys/Updates Subscribers
  • Deploys/Updates Publishers
  • Deploys/Updates Schedules
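The "deploys if changed against the server version" check can be sketched as a simple content comparison. This is an illustrative sketch in Python (not the actual PowerShell module), assuming both the local and server workspace files are available as raw bytes:

```python
import hashlib
from typing import Optional


def workspace_changed(local_fmw: bytes, server_fmw: Optional[bytes]) -> bool:
    """True when the local workspace differs from the server copy.

    server_fmw is None when the workspace has never been published,
    in which case it always needs a deploy.
    """
    if server_fmw is None:
        return True
    return hashlib.sha256(local_fmw).digest() != hashlib.sha256(server_fmw).digest()
```

Hashing rather than byte-for-byte comparison keeps the check cheap when the server copy is fetched once and its digest cached between pipeline runs.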

 

The process is getting much better, but still has a few dependencies:

  • A "sandbox" FME server environment, where all of our work is deployed manually
  • Developers create a "Project" on the sandbox FME Server and add all the dependencies for their integration
  • A workspace is run that exports that Project into JSON format, along with all settings, parameters and options for each item, using the REST API
  • PowerShell can then be run against this JSON file to dynamically rename entire processes between environments, create copies, and change parameters based on flags.
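The rename-per-environment step above can be pictured roughly like this. It's Python rather than the PowerShell used in practice, and the export schema here (an "items" list with "name" keys, plus a prefix map) is a made-up stand-in for whatever the real export file contains:

```python
import json


def retarget(export_json: str, env: str, prefix_map: dict) -> str:
    """Rewrite item names in an exported deployment file for a target environment.

    Assumes the export is a JSON object with an "items" list whose entries
    have a "name" key -- a hypothetical schema for illustration only.
    """
    doc = json.loads(export_json)
    prefix = prefix_map.get(env, env)
    for item in doc.get("items", []):
        item["name"] = f'{prefix}_{item["name"]}'
    return json.dumps(doc, indent=2)
```

Keeping the transformation as a pure text-in/text-out function makes it easy to run in a pipeline step and to diff the before/after files in version control.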

 

My Actual Questions:

  • My team is really getting excited about Automations and Server Apps. From what I can tell, Automations and Server Apps are not accessible via the REST API, so there's no way to download their schema and republish it to other servers for replication. This limits our ability to really use these features, as we can't source-control and auto-deploy these items.
  • What is the roadmap for Notifications (Topics/Pubs/Subs)? Are they going away?
  • Are there other considerations I should be making as I work through this initiative?

 

I'll leave the official responses to Safe, but here's my take on your questions, as far as I've understood things:

  • Automations: If you need to migrate Automations from one server to another, you may want to consider using the Projects functionality, which was developed specifically for this scenario. You can use the REST API to manipulate Projects, including adding Automations (and practically any other server object).
  • I believe that the underlying technology for Notifications will exist for the foreseeable future, although it might get a bit more "tucked away" in the server GUI to encourage new users to go for Automations instead. Automations are built on top of the Notification architecture, so I don't think they're going anywhere.
  • Considerations (i.e. my opinions)
    • be careful about keeping up with the API; v2, for example, is going to be deprecated soon.
    • rather than using the API specific to each object type (e.g. workspaces, topics, schedules, etc), perhaps consider using Projects for everything. That way you'll only have a single service endpoint to maintain, and you'll leverage the efforts from Safe in keeping things compatible between server versions. This will also allow you to package functionality into logical units and can also be used for backup/restore purposes.

Thanks for the response! I've actually worked hard to use FME functionality and avoid custom development as much as possible. The issue I have run into with Projects is that I have not been able to make them work for our use case... but maybe I'm missing something?

 

What I've been able to figure out about FME Projects:

  • Unable to change published parameters, names, or settings at deployment time (when moving to another environment).
  • Project exports seem to contain binary data, which makes them much harder to keep in version control and to diff/manage changes.
  • The Projects API doesn't contain the settings for each object type, only the names/locations.
  • Projects only allow an object to be a member of one project. We have many processes that may share a generic notification topic, which then causes issues with order-of-operations when we deploy.

 

That said...

The scripts I've built use the UI for Projects as a starting point to create our exports.

When our scripts "export" all of the project items, they produce a JSON file that can be duplicated/modified per environment and saved to version control.

I've been able to demonstrate that the deployment files we've built can be used to effectively rebuild an FME Server from the ground up, EXCEPT for Automations/Server Apps.
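The order-of-operations problem mentioned above (e.g. a shared topic must exist before the subscribers that reference it) is essentially a dependency-ordering problem, which the deploy script can solve with a topological sort. A sketch using Python's standard library (object names are hypothetical):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+


def deploy_order(items: dict) -> list:
    """items maps each server object to the set of objects it depends on.

    Returns an order in which every object appears after its dependencies,
    so shared topics are created before their subscribers/publishers.
    """
    return list(TopologicalSorter(items).static_order())


# Hypothetical dependency graph: two consumers share one notification topic.
order = deploy_order({
    "subscriber_email": {"topic_alerts"},
    "publisher_dirwatch": {"topic_alerts"},
    "topic_alerts": set(),
})
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is a useful sanity check before a pipeline starts mutating a server.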


That's a lot of interesting arguments, I'm sure several of these would be of interest to the developers at Safe, e.g. I wasn't aware that an object could only be a member of a single project.

Concerning the point about published parameters that depend on the environment, I tend to stay away from these. I usually deploy environment-specific configuration files (e.g. JSON, YAML or INI) that live in a subfolder of the server's shared data resources. The location of this configuration file should be identical on all environments. All the workspaces then read from the same configuration file using private scripted parameters, which gives them some introspection into the type of environment they're running on, global paths, etc. I've been using this strategy for many years and it works really well. The downside is that a few lines of Python are needed, but the code is the same everywhere, so it's a matter of copy-paste once it's written.


Thanks again for the thoughts! I've looked at potentially using configuration files and private parameters for settings, but we also have a lot of configurations that have to use passwords - obviously we don't want to store those in plain text. Web/database connections don't work in some instances because of how the APIs function, though we are getting past that. Sticking the passwords in private parameters also doesn't work, because then we'd have to update every workspace when a password changes.

 

For non-sensitive data, the configuration files could work, but I'm still limited in my ability to use source control to manage the Projects and auto-deploy with custom/different names.
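For what it's worth, one common pattern for the password problem is to keep secrets out of the config file entirely and resolve them at run time from the engine's process environment (or a proper secrets store). A hedged sketch, with a purely illustrative variable name:

```python
import os
from typing import Optional


def get_secret(name: str, default: Optional[str] = None) -> str:
    """Fetch a credential from the process environment rather than a
    plain-text config file. Fails loudly if the variable is missing
    and no default is supplied."""
    value = os.environ.get(name, default)
    if value is None:
        raise KeyError(f"secret {name!r} is not set in the environment")
    return value
```

Rotating a password then means updating one environment variable (or one secrets-store entry) on the server, rather than touching every workspace.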


Thanks for sharing your thoughts, it's an important issue for sure.

Security is hard!


@mconway @david_r Does anyone know if the issues with Projects for CI/CD (listed above) have been fixed in 2022 or the upcoming 2023 release?

@mconway did you use Projects in your roadmap/implementation in 2020, or did you just stick to the API?

