Forgot to add: when it fails without the logger and using params, it fails midstream, meaning it fails after some rows have already been acquired.
Can you please attach the log here with all the error messages? Perhaps also enable debug messages in the log (under FME Options) before re-running the workspace.
Very strange.
Could it be something related to batch mode? You could try sticking a Decelerator (0-second delay) after the reader to split the batch into single features.
If you are using a user parameter value directly in your SQLExecutor somewhere, try using a ParameterFetcher first and then referencing an attribute value instead, as in the sketch below.
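For example, where the statement currently references the published parameter directly, a ParameterFetcher can copy it into an attribute that the SQLExecutor then substitutes with @Value(). A minimal sketch, assuming a published parameter named KEY_LIST fetched into an attribute named key_list (both names hypothetical):

-- Before: published parameter referenced directly in the statement
-- SELECT * FROM my_table WHERE id IN ($(KEY_LIST))

-- After: a ParameterFetcher has copied $(KEY_LIST) into the attribute
-- key_list, which the SQLExecutor substitutes at run time
SELECT * FROM my_table WHERE id IN (@Value(key_list))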
Another thing to try (if possible) is to embed the connection parameters in the SQLExecutor rather than using a database connection.
Also worth testing this out on the latest FME to see if it has been fixed. Sounds like a stupid bug.
If you're happy using the Logger, you should be able to configure it so that it doesn't blow up your log file too much.
That's part of the problem. There are no error messages, just the last one. I have every logging option turned on; all rows are 'inform' and 'stats'. I can't really post logs unless I redact a lot of info. I may do that later. I can post the last few lines. This is from one I just ran [with logger disabled]. It returned 92 rows of about 140, then just ended. It always seems to terminate after the same last line in the log. The CSV reader in the log is disabled; I was trying different input methods.
2020-07-10 07:15:38| 2.1| 0.0|INFORM|MULTI_READER(MULTI_READER): Done reading 1 features from 1 readers
2020-07-10 07:15:38| 2.1| 0.0|INFORM|Emptying factory pipeline
2020-07-10 07:15:38| 2.1| 0.0|STATS |Router and Unexpected Input Remover (RoutingFactory): Tested 1 input feature(s), wrote 1 output feature(s): 1 matched merge filters, 1 were routed to output, 0 could not be routed.
2020-07-10 07:15:38| 2.1| 0.0|STATS |Unexpected Input Remover Nuker (TeeFactory): Cloned 0 input feature(s) into 0 output feature(s)
2020-07-10 07:15:38| 2.1| 0.0|STATS |text_line (TEXTLINE_1) Splitter (TeeFactory): Cloned 1 input feature(s) into 1 output feature(s)
2020-07-10 07:15:38| 2.1| 0.0|STATS |text_line_TEXTLINE_1 Feature Counter -1 24 (TeeFactory): Cloned 1 input feature(s) into 1 output feature(s)
2020-07-10 07:15:38| 2.1| 0.0|STATS |CSV (CSV2_1) Splitter (TeeFactory): Cloned 0 input feature(s) into 0 output feature(s)
2020-07-10 07:15:38| 2.1| 0.0|STATS |CSV_CSV2_1 Feature Counter -1 30 (TeeFactory): Cloned 0 input feature(s) into 0 output feature(s)
2020-07-10 07:15:38| 2.1| 0.0|STATS |StatementCreator (Disabled) Nuker (TeeFactory): Cloned 0 input feature(s) into 0 output feature(s)
Error running translation.
At this point there is just one result from the reader. It's basically a parameter file with keys listed for the IN() part of the SQL statement.
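To isolate where the midstream failure happens, something like the statement below could be pasted into a database client with the key list expanded by hand, to see whether the failure reproduces outside FME. A sketch only: target_table, key_col, and the key values are hypothetical, and the attribute name assumes the key list arrives on the text line feature.

-- As run from the SQLExecutor, with the key list coming from the
-- single feature read from the parameter file
SELECT * FROM target_table WHERE key_col IN (@Value(text_line_data))

-- Hand-expanded equivalent for testing directly in a database client
SELECT * FROM target_table WHERE key_col IN (101, 102, 103)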
Not sure how to hard-code the connection in the SQLExecutor, and not sure if you can.
Good idea on modifying logger. I will do that if needed.
@robotix Definitely odd behaviour. You might ask your database administrator to dig out the database event log for when you run the workspace; this could tell you what is causing the failure. You can also check the ODBC trace logs, I believe.