I am trying to modify the docker-compose file available here to work on a Docker Swarm, but I end up with an FME Server core that does not see my engines. My modifications to the docker-compose file included updating the version to 3 and removing the networking information. If I run the compose file with the networks attributes, I receive errors about them being out of scope for the swarm. Any hints?
- November 2, 2017
Docker Swarm doesn't like our defined networks because they use the "bridge" driver. Deployments on Docker Swarm should use the "overlay" networking driver, which allows for cross-host communication in your swarm. Instead of removing the network section, change every place that specifies "driver: bridge" to "driver: overlay". This should allow your stack to come up and the engines to connect.
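For illustration, the change in the compose file's networks section might look something like this (the "fmeserver" network name here is just a placeholder; use whatever names the compose file actually defines):
# Before (bridge networking, single-host docker-compose only):
networks:
  fmeserver:
    driver: bridge
# After (overlay networking for cross-host communication in a swarm):
networks:
  fmeserver:
    driver: overlay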
If you are planning on deploying across multiple physical hosts, there is more work you will need to do. Since our engine, core, and web containers all need access to the shared FME Server data, they need to share the Docker volumes. In our compose file we define the volumes with the "local" driver, which means they will not be shared across hosts. I believe swarm will schedule all of your containers on the same host, since they all require access to those shared volumes.
In order to properly deploy across hosts, you will need to properly share the data. Unfortunately, Docker doesn't have a built-in volume driver that works across hosts. We have found the easiest way to handle this currently is to set up an NFS share somewhere for the shared data. If you are deploying in AWS, EFS is the easiest option, but otherwise any NFS server will work. You then need to mount the NFS share at the same path on each host that will be running in your swarm; for example, you could mount the same NFS share at /mnt/fmeserverdata on each physical host. Next, modify the compose file to remove all of the defined volumes except "database". Then, in the "fmeservercore", "fmeserverweb", and "fmeserverengine" services, define a single volume like:
volumes:
  - /mnt/fmeserverdata:/data/fmeserverdata
This will mount your NFS share into the containers in the right place, and the data will be properly shared across hosts, since every host has the same share mounted at the same path.
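As a sketch, mounting the share on each swarm node could look like this (the NFS server address nfs.example.com and export path /export/fmeserverdata are placeholders for your own environment):
# Run on every physical host in the swarm:
sudo mkdir -p /mnt/fmeserverdata
sudo mount -t nfs nfs.example.com:/export/fmeserverdata /mnt/fmeserverdata
# Optionally add an /etc/fstab entry so the mount survives reboots:
# nfs.example.com:/export/fmeserverdata  /mnt/fmeserverdata  nfs  defaults  0  0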
- Author
- January 11, 2018
Any suggestions?
- January 13, 2018
The docker exec command to get into the core container should look something like:
docker exec -it <name of core container> bash
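If you're not sure of the container name, you can look it up first; for example (this assumes the core service's container name contains "core", which may differ in your deployment):
docker ps --filter "name=core" --format "{{.Names}}"
Then plug the returned name into the docker exec command above.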

- January 13, 2018
@grantarnold Related Q&A post: https://knowledge.safe.com/questions/61328/where-are-the-dashboard-workspaces-statusmessage-f.html?