
The connection to Postgres does not seem to close after query execution against the Postgres database. We are only allowed 5 connections to the Postgres database, and with the SQLExecutor a connection is established for each feature entering the transformer, but it does not seem to close after the query has been executed. After 5 features have entered the SQLExecutor, I receive an error message: Error connecting to PostgreSQL database.....................: 'FATAL: too many connections for role "xxxxxxx"

Is there a way to close the connection after each feature that enters the SQLExecutor? We are using FME 2021.1.
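
For reference, one way to confirm how many connections a role is actually holding is to query the server's statistics views. This is a sketch against standard PostgreSQL catalogs; the role name `xxxxxxx` is a placeholder for your own:

```sql
-- Compare the role's configured connection limit against the number
-- of sessions it currently has open (role name is a placeholder).
SELECT r.rolname,
       r.rolconnlimit,
       (SELECT count(*)
          FROM pg_stat_activity a
         WHERE a.usename = r.rolname) AS current_connections
FROM pg_roles r
WHERE r.rolname = 'xxxxxxx';
```

Running this while the workspace executes would show whether connections really accumulate per feature.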

Hello @wil​, this is interesting! As far as I know, SQLCreators/SQLExecutors make non-persistent connections to the specified target database. These connections are closed as soon as possible after data processing is completed. May I ask how many SQLExecutors are in your workspace?

 

That being said, each SQLExecutor in a FME workspace will create its own independent connection to the target database. Further, depending on the process or SQL statement being executed, it is also possible for a single SQLExecutor to make multiple connections to the target database.

 

There is an issue filed for allowing SQLExecutors/Creators to utilize the same database connection, but it doesn't look like there is a timeline for this as of yet (FMEENGINE-10246).

 

A few ideas come to mind:

  • Segmenting processes out and chaining workspaces (e.g. WorkspaceRunner, or even Flow automations)
  • Running SQLExecutors in succession (e.g. one after another, instead of all at once)
  • FeatureHolding (e.g. to allow one SQLExecutor to terminate the connection before the next one is established)
  • Adding a SQL statement at the end of each SQLExecutor that force-closes all connections
  • Trying JDBC
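
For the force-close idea, a statement along these lines could be appended to each SQLExecutor. This is only a sketch, assuming the role is allowed to terminate its own backends (the default since PostgreSQL 9.6) and that leftover sessions show up as idle; `current_user` and the `state = 'idle'` filter are assumptions about the setup:

```sql
-- Terminate any other idle sessions held open by this role.
-- pid <> pg_backend_pid() keeps the current session alive.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE usename = current_user
  AND pid <> pg_backend_pid()
  AND state = 'idle';
```

Note that terminating a backend mid-transaction rolls it back, so this should only run once the preceding SQLExecutors have finished their work.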

 

Would you be able to share a copy of your workspace here? It may be helpful to see how your workspace is designed and the kinds of statements being executed. If you're unable to share the workspace here, consider creating a support case with Safe. Happy to help, Kailin Opaleychuk


Hello Kailin,

Thank you for your reply.

In the meantime we were trying to solve another issue with the same workbench: it couldn't connect to the database from one specific server. By replacing the 'alias' in the host address used to connect to the database with the 'actual' address, the connection issue from that specific server was solved. I don't know the technical terms, so I hope you understand my explanation, but at the same time the issue of too many connections was solved as well.

I don't know the technical reason why this is the case, but I am happy the issue of too many connections is solved this way.

 

The workbench also had this problem when run on FME Server, but there too the issue was solved by replacing the alias with the actual host address.
