Unless you take active measures to modify this behavior, every single input feature will be a separate transaction.
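By way of illustration, here is what that means at the plain JDBC level (a minimal sketch in java.sql, not FME's internals; the connection URL, credentials, and loop are placeholders): with auto-commit left on, every INSERT is its own transaction, while disabling auto-commit and committing once groups them into a single transaction.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransactionDemo {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL/credentials -- substitute your own JDBC connection.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sybase:Tds:host:5000/mydb", "user", "password")) {

            // JDBC default: auto-commit is on, so each executeUpdate() below
            // would be committed as its own transaction (one per feature).
            // Turning auto-commit off batches them instead:
            conn.setAutoCommit(false);

            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO PA_PREMISE (id) VALUES (?)")) {
                for (int id = 1; id <= 1000; id++) {
                    ps.setInt(1, id);
                    ps.executeUpdate();
                }
            }
            conn.commit(); // one transaction covering all 1000 rows
        }
    }
}
```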
Thanks David.
Maybe this is a difficult question to answer without knowing the workspace specifics, but do you know what might cause those SQLException "Connection is already closed" errors? Features read from a JDBC reader and run through an SDEQuerier are routed to the SQLExecutor as well as to GeodatabaseSDE and Excel writers, and on runs that include the SQLException the number of features routed doesn't match the number actually written to the JDBC (Sybase) database via the SQLExecutor. I've attached an example log where only 202 of the 2,689 features routed to the SQLExecutor were actually committed (more precisely, 202 rows were in the Sybase table PA_PREMISE when I checked afterwards; the log stops showing each SQL query after the first 100).

I suspect this is related to the "QueryTimeout (Seconds)" parameter on the SQLExecutor's JDBC connection, which is currently at the default of 30 seconds. I plan to change it to 0 (no timeout), but I'm having difficulty reproducing the issue to test it, so I thought I'd ask here in case anyone has ideas.
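For reference, here is a sketch of how that QueryTimeout parameter presumably maps onto plain JDBC (this is an assumption about the transformer's behavior, not confirmed FME internals; the connection details are placeholders). Statement.setQueryTimeout(30) cancels a statement after 30 seconds, and with some drivers a cancelled statement can leave the session unusable, which would produce exactly this "Connection is already closed" error on every feature that follows. setQueryTimeout(0) means no limit.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryTimeoutDemo {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:sybase:Tds:host:5000/mydb", "user", "password");
        try (Statement stmt = conn.createStatement()) {
            // Presumably what QueryTimeout (Seconds) = 30 sets under the hood;
            // setQueryTimeout(0) would disable the limit entirely.
            stmt.setQueryTimeout(30);
            stmt.executeUpdate("INSERT INTO PA_PREMISE (id) VALUES (1)");
        } catch (SQLException e) {
            // If the timeout fired and the driver tore down the session,
            // the connection may now be closed; every subsequent statement
            // would then fail with "Connection is already closed".
            System.err.println("SQL error: " + e.getMessage()
                    + " (connection closed: " + conn.isClosed() + ")");
        } finally {
            conn.close();
        }
    }
}
```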
Thanks!
Sorry, I've no idea about those exceptions; I've never run into them with Oracle, Postgres, or SQL Server. You may want to get in touch with your FME reseller or Safe support about it.