I am trying to modify the docker-compose file available here to work on Docker Swarm, but I end up with an FME Server core that does not see my engines. My modifications to the docker-compose file were updating the version to 3 and removing the networking information. If I run the compose file with the networks attributes, I receive errors about them being out of scope for the swarm. Any hints?
Docker Swarm doesn't like our defined networks because they use the "bridge" driver. Deployments on Docker Swarm should use the "overlay" networking driver, which allows for cross-host communication in your swarm. Instead of removing the networks section, change every place that specifies "driver: bridge" to "driver: overlay". That should allow your stack to come up and the engines to connect.
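For example, the top-level networks section of the compose file would end up looking something like this (the network name here is just illustrative; keep whatever names your file already defines):

networks:
  fmeserver:
    driver: overlay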
If you are planning on deploying across multiple physical hosts, there is more work to do. Since our engine, core, and web containers all need access to the shared FME Server data, they need to share Docker volumes. In our compose file we define Docker volumes with the "local" driver, which means they will not be shared across hosts. I believe Swarm will then schedule all of your containers on the same host, since they all require access to those shared volumes.
In order to properly deploy across hosts, you will need to share that data yourself. Unfortunately, Docker doesn't have a built-in volume driver that works across hosts. We have found the easiest way to do this currently is to set up an NFS share somewhere to hold the shared data. If you are deploying in AWS, EFS is the easiest option, but otherwise any NFS server will work. You then need to mount the NFS share in the same place on each host that will be part of your swarm; for example, you could mount the same NFS share at /mnt/fmeserverdata on each physical host. Next, modify the compose file to remove all of the defined volumes except "database". Then, in the "fmeservercore", "fmeserverweb", and "fmeserverengine" services, define a single volume like:
volumes:
  - /mnt/fmeserverdata:/data/fmeserverdata
This will bind-mount your NFS share into the containers in the right place, and the data will be properly shared across hosts, since every host has the same share mounted in the same directory.
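To get the share mounted consistently, something like the following on every swarm node should work (the NFS server address and export path are placeholders for your own):

sudo mkdir -p /mnt/fmeserverdata
sudo mount -t nfs nfs.example.com:/exports/fmeserverdata /mnt/fmeserverdata

With that in place, each of the three services would carry the same bind mount, roughly like this (a sketch only; keep the rest of each service definition as it is in our compose file):

fmeserverengine:
  volumes:
    - /mnt/fmeserverdata:/data/fmeserverdata

You will also want to add the mount to /etc/fstab on each host so it survives reboots.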
The docker exec command to get into the core container should look something like:
docker exec -it <name of core container> bash
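If you are not sure of the container's name, docker ps on the node running the core will list it. For example (the core container name shown here is only an illustration of the typical generated pattern):

docker ps --format '{{.Names}}'
docker exec -it fmeserver_fmeservercore_1 bash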
@grantarnold Related Q&A post: https://knowledge.safe.com/questions/61328/where-are-the-dashboard-workspaces-statusmessage-f.html