
To give a brief introduction: we have set up a containerized FME stack where the engines reside on a different server.

 

As recommended, and because the external engines are not running in the local Docker network, I am using an explicit "PORTPOOL" range: both in the environment ("PORTPOOL=9000-9200") and in the published ports ("9000-9200:9000-9200").
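In compose syntax, that looks roughly like the following sketch (service name and range taken from our setup; this is an illustrative fragment, not the full file):

```yaml
services:
  fmeservercore:
    environment:
      # The pool of ports the Core uses for engine communication
      - PORTPOOL=9000-9200
    ports:
      # Publish the same range so engines outside the Docker network can reach it
      - "9000-9200:9000-9200"
```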

 

Using this setup, I notice the following message keeps coming back in the FME Core container: "Trying to update FME Server services..." It repeats continuously, with a new message being emitted every second.

 

This only seems to happen when both setting "PORTPOOL" and publishing the ports. Removing them makes the message go away.

 

To gain some insight into this, I was wondering what this message actually indicates, and what is meant by "update" and "services".

Hi @smol​ 

 

Can you share your compose file?

Are the engines running on the same machine that's running Docker, or on a completely separate server?

When the message doesn't come up, does this deployment work?



Indeed, the engines are running on a completely separate machine.

Certainly, here are the docker-compose files. Note that some aspects, like the reverse-proxy setup, have been omitted as they are bound to our particular setup and should not relate to this problem, since we use a subnet to connect the Core and the Engines.

 

> When the message doesn't come up does this deployment work?

Without using the "PORTPOOL", resulting in no message, the setup does seem to work. However, without knowing what this message is referring to, it is hard for me to conclude what does and does not work.

 

Docker-compose for FME Core

version: '3.7'
 
services:
  fmeserverdb:
    image: postgres:9.6.16
    volumes:
      - 'database:/var/lib/postgresql/data'
    restart: unless-stopped
    networks:
      - database
    ports:
      - ${FME_CORE_IP:?}:5432:5432
      - 127.0.0.1:5432:5432
    healthcheck:
      test: pg_isready --host fmeserverdb -U fmeserver || exit 1
      interval: 10s
      timeout: 5s
      retries: 6
  fmeservercore:
    image: 'quay.io/safesoftware/fmeserver-core:2019.2.0-20191106'
    environment:
      - PRIMARY_PROCESS=core
      - PORTPOOL=9000-9200
      - EXTERNALHOSTNAME=${DOMAIN:?}
      - EXTERNALPORT=${EXTERNALPORT:-443}
      - WEBPROTOCOL=${WEBPROTOCOL:-https}
      # Makes sure the dependent folders are created when they do not exist
      - RUNSETUP=true
    volumes:
      - 'fmeserver:/data/fmeserverdata'
    hostname: fmeservercore
    ports:
      # These are the ports that are not established through the Docker network
      # See http://docs.safe.com/fme/html/FME_Server_Documentation/ReferenceManual/architecture.htm
      # Mail Publisher port 
      - 25:7125
      # FME Server Database communications
      - ${FME_CORE_IP:?}:7069:7069
      # Manage FME Engine processes
      - ${FME_CORE_IP:?}:7070:7070
      - ${FME_CORE_IP:?}:7501:7501
      # For the REST API to send requests to the FME Server Core
      - ${FME_CORE_IP:?}:7071:7071
      # Manage Notification Services
      - ${FME_CORE_IP:?}:7072-7076:7072-7076
      # Configuration, Backup & Restore requests and System Cleanup tasks
      - ${FME_CORE_IP:?}:7077:7077
      - ${FME_CORE_IP:?}:7081:7081
      # Handles FME Server Resource requests
      - ${FME_CORE_IP:?}:7079:7079
      # Dynamic ports based on the `PORTPOOL` environment variable
      # TODO: These "ephemeral ports" are causing issues and create the "Trying to update FME Server services..." messages
      - ${FME_CORE_IP:?}:9000-9200:9000-9200
    restart: unless-stopped
    healthcheck:
      test: nc -z fmeservercore 7071 || exit 1
      interval: 10s
      timeout: 1s
      retries: 6
    depends_on:
      - fmeserverdb
      - fmeserverqueue
      - fmeserverdbinit
    networks:
      - database
      - web
      - queue
  fmeserverdbinit:
    image: 'quay.io/safesoftware/fmeserver-core:2019.2.0-20191106'
    networks:
      - database
    restart: "no"
    depends_on:
      - fmeserverdb
    environment:
      - PRIMARY_PROCESS=initpgsql
  fmeserverwebsocket:
    image: 'quay.io/safesoftware/fmeserver-core:2019.2.0-20191106'
    environment:
      - PRIMARY_PROCESS=websocket
    volumes:
      - 'fmeserver:/data/fmeserverdata'
    hostname: fmeserverwebsocket
    ports:
      # handles WebSocket Server requests
      - ${FME_CORE_IP:?}:7078:7078
    restart: unless-stopped
    healthcheck:
      test: nc -z fmeserverwebsocket 7078 || exit 1
      interval: 10s
      timeout: 1s
      retries: 6
    networks:
      - web
  fmeserverqueue:
    image: 'quay.io/safesoftware/fmeserver-queue:2019.2.0-20191106'
    volumes:
      - 'fmeserver:/data/fmeserverdata'
    hostname: fmeserverqueue
    ports:
      - ${FME_CORE_IP:?}:6379:6379
    restart: unless-stopped
    healthcheck:
      test: redis-cli -a sozPpbLfgdI9WJoPejNMpSxGw -h fmeserverqueue ping || exit 1
      interval: 5s
      timeout: 1s
      retries: 5
    networks:
      - queue
  fmeserverweb:
    image: 'quay.io/safesoftware/fmeserver-web:2019.2.0-20191106'
    volumes:
      - 'fmeserver:/data/fmeserverdata'
    environment:
      - EXTERNALHOSTNAME=${DOMAIN:?}
      - EXTERNALPORT=${EXTERNALPORT:-443}
      - WEBPROTOCOL=${WEBPROTOCOL:-https}
    hostname: fmeserverweb
    restart: unless-stopped
    healthcheck:
      test: wget --quiet --tries=1 --spider http://fmeserverweb:8080/ || exit 1
      interval: 10s
      timeout: 5s
      retries: 6
    depends_on:
      - fmeservercore
    networks:
      - web
 
networks:
  database:
    driver: bridge
  web:
    driver: bridge
  queue:
    driver: bridge
 
volumes:
  database:
    driver: local
  fmeserver:
    driver: local

Docker-compose of the FME Engine

 

version: '3.7'
 
services:
  fmeserverengine:
    image: quay.io/safesoftware/fmeserver-engine:2019.2.0-20191106
    restart: unless-stopped
    hostname: "${HOSTNAME:?}"
    ports:
      # manages FME Server Core processes
      - ${FME_ENGINE_IP:?}:7500:7500
    extra_hosts:
      - "fmeserverdb:${FME_CORE_IP:?}"
      - "fmeservercore:${FME_CORE_IP:?}"
    environment:
      # We want FME Core to connect with the engine through the subnet
      - "NODENAME=${FME_ENGINE_IP:?}"
      - "EXTERNALHOSTNAME=${FME_CORE_DOMAIN:?}"
    volumes:
      - 'nfs:/data/fmeserverdata'

 



@jlutherthomas​ Is there anything I can do to make this easier to be reviewed?

 

Maybe it would be a good starting point to identify whether the "Trying to update FME Server services..." message from the Core container is a critical message. Currently, I am unable to assess the risk of this message as I cannot find any documentation about it.

 

Also, how important is it to define a custom "PORTPOOL" range? I have seen it work without one, but I can imagine this causing issues when a lot of Workbenches have to run.
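As a hypothetical back-of-the-envelope check (my own assumption, not documented behavior): if each concurrent core-to-engine connection needs a port from this pool, then the size of the range bounds how many such connections can exist at once, which is why the range might matter under heavy Workbench load.

```shell
# Compute the size of the PORTPOOL range (example value from this thread).
PORTPOOL="9000-9200"
first="${PORTPOOL%-*}"   # lower bound of the range
last="${PORTPOOL#*-}"    # upper bound of the range
pool_size=$((last - first + 1))
echo "PORTPOOL ${PORTPOOL} provides ${pool_size} ports"
```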

 

Another question: would there be a production-ready docker-compose example of running the FME Engine on a separate server? I have only found ready-made docker-compose files for when the engine runs on the same host as the Core. Such an example would be useful to be absolutely certain that all the correct ports are opened, which may be related to the initially reported "Trying to update FME Server services..." message.



Hi @smol​ 

 

I will have to get this up and running for myself (hopefully tomorrow) to be able to confirm that your docker-compose files are set up correctly, but I am confident that you do need the port pool set, because those ephemeral ports are how the FME Server Core and Engine communicate after the initial registration on 7070.

 

With that error or warning you're seeing about FME Server Services... which log file are you seeing this in?

Aside from that message, do you see your external engines in the FME Server web ui, and can successfully run jobs on them? Are your engines logging correctly in the resources folder?

You should also be able to check the log files of the FME Server services in the resources and see if they look ok.

 

Ports, in a recent example I saw someone expose:

6379:6379 (redis / fme server queue)

7500:7500 (fme server process monitor)

7070-7082:7070-7082 (I think you're just missing the websocket port, 7078)

9000-9200:9000-9200
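As a compose-style sketch of that list (a sketch based on the example above, not a verified configuration; note that 7500 belongs on the engine host while the rest belong on the core host):

```yaml
# Core host (sketch)
ports:
  - "6379:6379"            # redis / FME Server queue
  - "7070-7082:7070-7082"  # core communication, including the websocket port 7078
  - "9000-9200:9000-9200"  # PORTPOOL ephemeral range

# Engine host (sketch)
# ports:
#   - "7500:7500"          # FME Server process monitor
```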

 

Regarding a product-ready docker-compose example, we do not have anything publicly available as this isn't a deployment we want to encourage customers to be doing (dev was surprised we'd made the note about external engines public in our doc, and we may be removing it).

For multi-host, containerized deployments of FME Server we would recommend Kubernetes.



@jlutherthomas​ Thanks for the fast and detailed response!

 

> With that error or warning you're seeing about FME Server Services... which log file are you seeing this in?

I am seeing this in the Docker logs of the "fmeservercore" container from my posted docker-compose. I cannot find any problems: the Engines are connected and the logs in the resources folder do not show any issues, so at this point I am not even sure this is a problem, apart from filling up the Docker logs.

 

> Ports, in a recent example I saw someone expose:... I think you're just missing the WebSocket port, 7078

I have used https://docs.safe.com/fme/html/FME_Server_Documentation/ReferenceManual/architecture.htm to find the related ports. I exposed 7078 in the specific WebSocket container ("fmeserverwebsocket").

 

> Regarding a product-ready docker-compose example, we do not have anything publicly available as this isn't a deployment we want to encourage customers to be doing (dev was surprised we'd made the note about external engines public in our doc, and we may be removing it).

> For multi-host, containerized deployments of FME Server we would recommend Kubernetes.

I am somewhat surprised to read this, as I was indeed under the assumption that this was a supported route to set up the FME stack. I totally understand the benefits of a Kubernetes setup, but in our case it is too big of a technological leap. Docker-compose is a nice middle ground until we feel more comfortable using Kubernetes, or any other orchestration for that matter.

 

Thanks for the insight once again. I will be waiting for further feedback once you find the time to test the docker-compose I have sent.



@smol​ I'm sad to report I have not been able to set up my distributed environment like you have, because of the volumes... How did you set yours up? In your core docker-compose file it looks like you've kept the original named volume, but for the engine compose file you're using NFS.

I've tried to use NFS and a bind mount for the FME Server System Share but am not having success when the core/queue/websocket containers try to start.

If you have steps that you could share, that would be much appreciated!

 

I also got some more background about the services message you're seeing. When the FME Server core container first starts (ever), it runs a script to set up the services correctly. In your case, that seems to be failing for whatever reason. Because you say all of the services actually work fine (you can run jobs, etc.), it's hard to say exactly what has failed. But that failure will cause it to retry every 5(?) seconds, which is why you keep seeing the errors.

 

If I can get past my volume hurdle, I'd like to see if I can reproduce it.



@jlutherthomas​ Thanks for taking the time.

 

I see I did not share the full FME Engine docker-compose, so here it is:

version: '3.7'
 
services:
  fmeserverengine:
    image: quay.io/safesoftware/fmeserver-engine:2019.2.0-20191106
    restart: unless-stopped
    hostname: "${HOSTNAME:?}"
    ports:
      # manages FME Server Core processes
      - ${FME_ENGINE_IP:?}:7500:7500
    extra_hosts:
      - "fmeserverdb:${FME_CORE_IP:?}"
      - "fmeservercore:${FME_CORE_IP:?}"
    environment:
      # We want FME Core to connect with the engine through the subnet
      - "NODENAME=${FME_ENGINE_IP:?}"
      - "EXTERNALHOSTNAME=${FME_CORE_DOMAIN:?}"
    volumes:
      - type: volume
        source: nfs-fme-core
        target: /data/fmeserverdata #/tpg/shared/${FME}/data dir
        volume:
            # Used so the container data is not being written to the volume on creation
            nocopy: true
 
# Note if this part is changed, you have to manually delete the volume on the server first
volumes:
  nfs-fme-core:
    driver_opts:
      type: nfs
      # "nolock": ensure multiple clients do not lock each other out
      # "soft": avoid frozen, unresponsive connections; we prioritize responsiveness over data loss
      # "nosuid": prevent remote users from gaining higher privileges by running a 'setuid' program
      o: "addr=${FME_CORE_IP:?},nolock,soft,nosuid,rw"
      device: ":/nfs" #/tpg/shared/${FME}/data dir

And with that, the full docker-compose of FME Core:

 

version: '3.7'
services:
  # The NFS Server that defines which data is accessible through NFS
  # Based on https://github.com/ehough/docker-nfs-server/blob/develop/doc/feature/auto-load-kernel-modules.md
  nfs:
    # Needed because AppArmor is currently blocking some of kernel modules read that are needed by this container
    # See: https://github.com/ehough/docker-nfs-server/blob/develop/doc/feature/apparmor.md
    privileged: true
    image: erichough/nfs-server:2.2.1
    restart: unless-stopped
    environment:
      - NFS_LOG_LEVEL=DEBUG
      # See: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-nfs-server-config-exports
      - "NFS_EXPORT_0=/nfs ${FME_CORE_SUBNET:?}/24(rw,sync)"
    volumes:
      - fmeserver:/nfs
      - /lib/modules:/lib/modules:ro
    ports:
      - ${FME_CORE_IP:?}:111:111
      - ${FME_CORE_IP:?}:2049:2049
      - ${FME_CORE_IP:?}:32765:32765
      - ${FME_CORE_IP:?}:32767:32767
  fmeserverdb:
    image: postgres:9.6.16
    volumes:
      - 'database:/var/lib/postgresql/data'
    restart: unless-stopped
    networks:
      - database
    ports:
      - ${FME_CORE_IP:?}:5432:5432
      - 127.0.0.1:5432:5432
    healthcheck:
      test: pg_isready --host fmeserverdb -U fmeserver || exit 1
      interval: 10s
      timeout: 5s
      retries: 6
  fmeservercore:
    image: 'quay.io/safesoftware/fmeserver-core:2019.2.0-20191106'
    environment:
      - PRIMARY_PROCESS=core
      - PORTPOOL=9000-9200
      - EXTERNALHOSTNAME=${DOMAIN:?}
      - EXTERNALPORT=${EXTERNALPORT:-443}
      - WEBPROTOCOL=${WEBPROTOCOL:-https}
      # Makes sure the dependent folders are created when they do not exist
      - RUNSETUP=true
    volumes:
      - 'fmeserver:/data/fmeserverdata'
    hostname: fmeservercore
    ports:
      # These are the ports that are not established through the Docker network
      # See http://docs.safe.com/fme/html/FME_Server_Documentation/ReferenceManual/architecture.htm
      # Mail Publisher port
      - 25:7125
      # FME Server Database communications
      - ${FME_CORE_IP:?}:7069:7069
      # Manage FME Engine processes
      - ${FME_CORE_IP:?}:7070:7070
      - ${FME_CORE_IP:?}:7501:7501
      # For the REST API to send requests to the FME Server Core
      - ${FME_CORE_IP:?}:7071:7071
      # Manage Notification Services
      - ${FME_CORE_IP:?}:7072-7076:7072-7076
      # Configuration, Backup & Restore requests and System Cleanup tasks
      - ${FME_CORE_IP:?}:7077:7077
      - ${FME_CORE_IP:?}:7081:7081
      # Handles FME Server Resource requests
      - ${FME_CORE_IP:?}:7079:7079
      # Dynamic ports based on the `PORTPOOL` environment variable
      # TODO: These "ephemeral ports" are causing issues and create the "Trying to update FME Server services..." messages
      - ${FME_CORE_IP:?}:9000-9200:9000-9200
    restart: unless-stopped
    healthcheck:
      test: nc -z fmeservercore 7071 || exit 1
      interval: 10s
      timeout: 1s
      retries: 6
    depends_on:
      - fmeserverdb
      - fmeserverqueue
      - fmeserverdbinit
    networks:
      - database
      - web
      - queue
  fmeserverdbinit:
    image: 'quay.io/safesoftware/fmeserver-core:2019.2.0-20191106'
    networks:
      - database
    restart: "no"
    depends_on:
      - fmeserverdb
    environment:
      - PRIMARY_PROCESS=initpgsql
  fmeserverwebsocket:
    image: 'quay.io/safesoftware/fmeserver-core:2019.2.0-20191106'
    environment:
      - PRIMARY_PROCESS=websocket
    volumes:
      - 'fmeserver:/data/fmeserverdata'
    hostname: fmeserverwebsocket
    ports:
      # handles WebSocket Server requests
      - ${FME_CORE_IP:?}:7078:7078
    restart: unless-stopped
    healthcheck:
      test: nc -z fmeserverwebsocket 7078 || exit 1
      interval: 10s
      timeout: 1s
      retries: 6
    networks:
      - web
  fmeserverqueue:
    image: 'quay.io/safesoftware/fmeserver-queue:2019.2.0-20191106'
    volumes:
      - 'fmeserver:/data/fmeserverdata'
    hostname: fmeserverqueue
    ports:
      - ${FME_CORE_IP:?}:6379:6379
    restart: unless-stopped
    healthcheck:
      test: redis-cli -a sozPpbLfgdI9WJoPejNMpSxGw -h fmeserverqueue ping || exit 1
      interval: 5s
      timeout: 1s
      retries: 5
    networks:
      - queue
  fmeserverweb:
    image: 'quay.io/safesoftware/fmeserver-web:2019.2.0-20191106'
    volumes:
      - 'fmeserver:/data/fmeserverdata'
    environment:
      - EXTERNALHOSTNAME=${DOMAIN:?}
      - EXTERNALPORT=${EXTERNALPORT:-443}
      - WEBPROTOCOL=${WEBPROTOCOL:-https}
    hostname: fmeserverweb
    restart: unless-stopped
    healthcheck:
      test: wget --quiet --tries=1 --spider http://fmeserverweb:8080/ || exit 1
      interval: 10s
      timeout: 5s
      retries: 6
    depends_on:
      - fmeservercore
    networks:
      - web
 
networks:
  database:
    driver: bridge
  web:
    driver: bridge
  queue:
    driver: bridge
# TODO: Look how we can move these to our shared volume
volumes:
  database:
    driver: local
  fmeserver:
    driver: local

The only thing missing is the Nginx part, which we do not use.

 

Thanks for the time!



Hi @smol​ 

 

I managed to get this working, and 'out of the box', adapting your compose files with my IP addresses etc., it works totally fine for me. PORTPOOL was exposed in this test.

 

There are 2 situations that I think might cause this error that you're seeing:

 

  1. After setting up FME Server, if you change the FME Server users by creating a new superuser account and deleting the existing one, I can get this critical error to show by manually running the services script that's throwing those errors.
  2. If, after the initial setup of FME Server, you change the EXTERNALHOSTNAME, EXTERNALPORT or WEBPROTOCOL values in your core compose file and then restart (recreating the core or web containers), you may find this error appearing.

 

PORTPOOL should have no impact on the error messages coming from the services script. My guess is that something changed with one of the inputs (mentioned above) after the initial setup, so when the containers are recreated the values no longer match what was set on first install, and the script throws an error. As your services are working correctly (confirmed by the ability to run jobs etc.), I think that script must have run successfully at some point; my assumption is the very first time you started FME Server.

 

I think you have two options:

 

  1. The easy one: ignore the error message. However, as it's logged every 5 seconds, this takes up unnecessary log file space.
  2. Try bringing your deployment down completely, making sure both the database and the FME Server file share are removed. Then re-deploy your FME Server and hopefully everything is OK on the first start. (If you do change the admin user post-install you may still end up with this issue down the line.)
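Option 2 could be sketched as follows (assuming the compose files from this thread; `down --volumes` removes the named volumes declared in the compose file, here `database` and `fmeserver`, which is what forces the first-start services script to run again). The sketch dry-runs by default and only prints the commands; set DRY_RUN=0 to execute them for real.

```shell
# Dry-run by default so the sequence can be inspected before running it.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Stop all containers and delete the named volumes (database, fmeserver),
# i.e. the FME Server database and file share.
run docker-compose down --volumes

# Re-deploy; the services setup script should now run cleanly on first start.
run docker-compose up -d
```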

 

 

One thing to be aware of with this type of deployment (you may have already figured it out) is that you'll be stuck with 1 engine per VM (which maybe you're happy with).

FME Server engines in container deployments are managed by Docker (or Kubernetes), so that is normally how you would scale up engines. In this distributed deployment, you're exposing 7500 for one container, so if you try to scale up more engine containers on the same host you'll get port binding errors:

ERROR: for fmeserverengine  Cannot start service fmeserverengine: driver failed programming external connectivity on endpoint azureuser_fmeserverengine_2 (ceb18e2cf1e380d00088c6744c6a07b0bfdf3bed82534d6e8f0465b2eabd245a): Bind for 10.1.0.5:7500 failed: port is already allocated
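The conflict comes from the fixed host-port publication in the engine compose file; any second replica on the same host would try to bind the same host port:

```yaml
# From the engine compose file in this thread: each replica of this service
# tries to publish host port 7500, so only one engine container can run per host.
ports:
  - ${FME_ENGINE_IP:?}:7500:7500
```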

 



@jlutherthomas​ This has been a great, in-depth answer. I have just confirmed on one of our development environments that restarting the stack from scratch, while keeping the admin superuser account, no longer shows the message. Even with a (non-overwrite) restore, it all seems to work. I would never have guessed this would be an issue.

> One thing to be aware of with this type of deployment (you may have already figured it out) is that you'll be stuck with 1 engine per VM (which maybe you're happy with).

Thanks for the notice. We are indeed aware of the shortcomings and are scaling by adding extra VMs: one engine per server. Kubernetes, or container orchestration in general, is too big of a technical leap for us at this moment. We are, however, taking steps in that direction so that we eventually feel comfortable managing a container cluster.

 

----

 

Thanks for the great support for such a specific problem 😃



@smol​ That's great news!

I wasn't sure if you were aware of the limitation with engines, so thought I'd let you know up front just in case (or for anyone else who stumbles upon this thread).

 

I'll be here if you have any more questions, especially when you start your journey into K8S! :)



@smol​ What do you mean by restarting the stack from scratch? Can you please provide the steps you used? I am seeing the same error in my development instance and am trying to fix it.

 

I have removed the volumes and containers and started the core, but I still keep seeing the same error. I am using AWS RDS for the database, which I didn't recreate; I'm not sure if I need to scrap everything and start over to get rid of this error.

