I am using a SQLExecutor transformer to execute a large number of SQL queries, writing to a Sybase IQ database via a JDBC connection (as a workaround for the Sybase Writer, which didn't work for my specific translation). I've found that not all of the features that are routed through the translation, and should be written via the SQLExecutor, are actually getting written. There is an error: "java.sql.SQLException: JZ006: Caught IOException: com.sybase.jdbc4.jdbc.SybConnectionDeadException: JZ0C0: Connection is already closed." I suspect this is related to the "Query Timeout" default of 30 seconds, but I'd like to confirm: when does the commit part of the SQLExecutor query actually happen? For instance, if 30 features get routed to the SQLExecutor, is each one written one by one, or are they all committed in a batch at the end?

Thanks in advance to anyone who has any info on this!

Unless you take active measures to modify this behavior, every single input feature will be a separate transaction.
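The difference between the two commit behaviors can be sketched with a plain DB-API example. This uses SQLite purely as a stand-in for the Sybase IQ/JDBC connection (the table name mirrors the one mentioned later in the thread); it is illustrative of the commit semantics, not of FME's actual internals:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pa_premise (id INTEGER)")

# Per-feature transactions: each INSERT is committed on its own,
# mirroring the SQLExecutor default of one statement per feature.
for i in range(3):
    conn.execute("INSERT INTO pa_premise VALUES (?)", (i,))
    conn.commit()  # this row is durable immediately

# Batched alternative: all INSERTs share one transaction and only
# become durable at the single commit at the end.
for i in range(3, 6):
    conn.execute("INSERT INTO pa_premise VALUES (?)", (i,))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM pa_premise").fetchone()[0]
print(count)  # 6
```

The practical consequence: with per-feature commits, a failure partway through leaves all previously committed rows in the table; with a single batched commit, a failure before the commit leaves none.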


Thanks David.

Maybe this is a difficult question to answer without knowing the workspace specifics, but do you know what could cause those SQLException "Connection is already closed" errors? Features read from a JDBC reader and run through an SDEQuerier are routed to the SQLExecutor as well as to GeodatabaseSDE and Excel writers, and on runs that include the SQLException the number of features routed doesn't match the number actually written to the Sybase database via the SQLExecutor. I've attached an example log from a run where only 202 of the 2689 features routed to the SQLExecutor were actually committed (more precisely, 202 rows were in the Sybase table PA_PREMISE when I checked afterwards; the log stops showing each SQL query after the first 100). I suspect this is related to the "QueryTimeout (Seconds)" JDBC parameter on the SQLExecutor, which is currently set to the default of 30 seconds. I plan to change it to 0 (no timeout), but I'm having some difficulty reproducing the issue to test it, so I thought I'd ask here in case anyone has ideas.

Thanks!
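The 202-of-2689 outcome is consistent with per-feature commits plus a connection that dies mid-run: every feature committed before the failure survives, and everything after it is lost. A small stand-in sketch (SQLite in place of Sybase, with the connection failure simulated by a raised exception) shows the arithmetic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pa_premise (id INTEGER)")

committed = 0
try:
    for i in range(2689):
        if i == 202:
            # simulate the connection dying partway through the run
            raise sqlite3.OperationalError("connection is already closed")
        conn.execute("INSERT INTO pa_premise VALUES (?)", (i,))
        conn.commit()  # per-feature commit: this row is now durable
        committed += 1
except sqlite3.OperationalError:
    pass  # remaining features are lost, but earlier commits survive

count = conn.execute("SELECT COUNT(*) FROM pa_premise").fetchone()[0]
print(count)  # 202
```

If the SQLExecutor instead committed everything in one batch at the end, a mid-run failure would leave zero rows rather than a partial count, so a partial count like 202 points at per-feature transactions.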


Sorry, I've no idea about those exceptions; I've never run into them with Oracle, Postgres, or SQL Server. You may want to get in touch with your FME reseller or Safe support about it.


Thanks David, will do.

