
Hi.

I just had a job on our FME server 2022.2.2 go haywire.

When reading from one or more database tables, it suddenly spews out a _lot_ of error messages, repeating the same two lines from line 319 to line 865997 (on page 1734):

Microsoft SQL Server Spatial Reader: An internal error has occurred. Please visit http://www.safe.com/support
Microsoft SQL Server Spatial Reader: Failed to parse datetime, attribute set as null

What gives? Should I be concerned?

I've occasionally encountered huge log files before, but haven't had any idea why. Next time I'll see if it's the same issue.

The job took several minutes to finish (as opposed to the normal handful of seconds), which is not good.

Cheers.

My guess about the ERROR messages is that there is a feature behind each line. Presumably these features have dates in them which are not supported or expected by FME. Do the numbers make sense there?
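
As a quick sanity check, a Python sketch using the line numbers from the log excerpt above, and assuming the reader emits the two messages once per feature:

# Two repeated error lines per feature, from log line 319 to 865997
# (numbers taken from the post above).
first_line = 319
last_line = 865997

error_lines = last_line - first_line + 1   # 865679 lines in total
features = error_lines / 2                 # roughly 432,840 features implied

print(f"{error_lines} error lines -> roughly {features:,.0f} features")

If that roughly matches the table's row count, the one-feature-per-message-pair theory would hold.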


Thank you for your post @Lars I Nielsen​ !

If you run the same workspace on FME Form, does it run successfully or does it give the same errors? As @virtualcitymatt​ suggested, it's possible that the format of the dates is not a supported format, in which case you can try using the DateFormatter transformer in your workspace in Workbench to put it in a supported format (e.g. 'FME Date (%Y%m%d)' for the 'Date Format' parameter of the DateFormatter) and see if that makes a difference.
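
If the DateFormatter doesn't help, a PythonCaller can do the same reformatting by hand. A rough sketch only - the attribute name 'last_changed' and the input format string are placeholders, not taken from your workspace:

import fme
import fmeobjects
from datetime import datetime

class FeatureProcessor(object):
    # Reformat a datetime attribute into FME date format (%Y%m%d).
    # 'last_changed' and the strptime format are placeholder assumptions.
    def input(self, feature):
        raw = feature.getAttribute('last_changed')
        if raw:  # leave NULL/missing values untouched
            parsed = datetime.strptime(str(raw), '%Y-%m-%d %H:%M:%S.%f')
            feature.setAttribute('last_changed', parsed.strftime('%Y%m%d'))
        self.pyoutput(feature)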

Kate



The problem is this:

  • the workspace is used all the time, on this very dataset and on other similar datasets, without creating this issue.
  • the workspace is just reading the data via a "generic" port. No attributes are being exposed.
  • there is but a single field with datetime information, and all values are valid (some are NULL).
  • it outputs similar error messages for some other tables, but not all, even though all tables have this very datetime field.

It seems like an error message is created for each row, regardless of the encountered datetime value. So it's really not a validation issue.

And it seems like only some tables are peppered with error messages, while others are not, even though they have data and include the offending datetime field.

And it only occurs on some runs of the workspace.

 

As my title says: "curious"

 

I'm now scanning my entire set of relevant log files, to see whether there's a trend of some kind.
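
For reference, a minimal sketch of such a scan, assuming all the job logs sit in one folder (the path is a placeholder):

import pathlib

NEEDLE = "Failed to parse datetime, attribute set as null"
LOG_DIR = pathlib.Path(r"C:\fmeserver\logs\jobs")  # placeholder path

for log in sorted(LOG_DIR.glob("*.log")):
    with open(log, errors="replace") as f:
        hits = sum(1 for line in f if NEEDLE in line)
    if hits:
        print(f"{log.name}: {hits} occurrences")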



Ok, I have a bit of a theory (it's probably wrong) - the reader might be complaining about the null values in the table. Could it be that when this log message isn't there, there are no null values in the table?

 



Out of 197 runs (since July 1st) of the same workspace on the same dataset, only 3 erred in this way.

I'm looking into whether runs with different datasets have a similar pattern.



Hi Matt.

I don't think so. It seems, like I wrote, that when it complains, it complains about every row.

Across all datasets, this particular datetime field holds approximately 30% NULLs and 70% non-NULL values.

And there are NULL values present in most datasets.

I'm still digging through the log files, but it's very tedious and slow work.

Scanning just the 197 log files from one dataset took me 10 minutes. Now I'm scanning 2200 log files across multiple datasets.
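
To speed that up over a couple of thousand files, the same scan sketch from above can be parallelized (again, the path is a placeholder):

from concurrent.futures import ThreadPoolExecutor
import pathlib

NEEDLE = "Failed to parse datetime, attribute set as null"
LOG_DIR = pathlib.Path(r"C:\fmeserver\logs\jobs")  # placeholder path

def count_hits(log):
    # Count occurrences of the error message in one log file.
    with open(log, errors="replace") as f:
        return log.name, sum(1 for line in f if NEEDLE in line)

with ThreadPoolExecutor() as pool:
    for name, hits in pool.map(count_hits, LOG_DIR.glob("*.log")):
        if hits:
            print(f"{name}: {hits} occurrences")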



Right, yep, I did read the note about it being every row - it was what you said about the data coming out of the generic port that made me think about it. To me it definitely seems like a bug in FME rather than a data issue.

 

My guess is (again, probably wrong) that FME will first determine the schema, because it's a dynamic workflow, and that when porting into FME objects (specifically Bulk Mode objects) it has some problem within the batch - my thought here was a null value in the wrong place at the wrong time somehow (maybe the first record?!). This assumes that each run sees different data in the table, though, which from what you're saying is not the case.

 

Another thing could be that FME is wrongly determining the type of the datetime when building the schema - this link hints at potential issues with datetime2 in SQL Server: https://stackoverflow.com/questions/13405572/sql-statement-to-get-column-type

In fact, the datetime2 type lets you specify the length of the fractional seconds in the schema (https://www.tutorialsteacher.com/articles/datetime-vs-datetime2-in-sqlserver). Perhaps, because of rounding, some records have a different number of fractional digits? Or FME is somehow automatically truncating them when the value is something like "43.120000".
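
One way to check the declared fractional-second precision is to query the information schema; a pyodbc sketch (the connection string and table name are placeholders):

import pyodbc

conn = pyodbc.connect("DSN=myserver")  # placeholder connection
cur = conn.cursor()
cur.execute("""
    SELECT COLUMN_NAME, DATA_TYPE, DATETIME_PRECISION
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = ? AND DATA_TYPE IN ('datetime', 'datetime2')
""", "my_table")  # placeholder table name
for name, dtype, precision in cur.fetchall():
    print(name, dtype, precision)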

 

The error states "Failed to parse datetime, attribute set as null" - do you actually see that the values have been set to null? If this indeed happens for each row, you should end up with an attribute that contains all nulls. If not, then the log message is being reported way more often than it should be.
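
To know how many nulls to expect from the database side (so the FME output can be compared against it), a quick count along these lines (table and column names are placeholders):

import pyodbc

conn = pyodbc.connect("DSN=myserver")  # placeholder connection
cur = conn.cursor()
cur.execute("""
    SELECT COUNT(*) AS total,
           SUM(CASE WHEN last_changed IS NULL THEN 1 ELSE 0 END) AS nulls
    FROM my_table
""")
total, nulls = cur.fetchone()
print(f"{nulls} of {total} rows are NULL in the database itself")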


@Lars I Nielsen​ Thank you for the details! It might be worthwhile for our team to take a closer look at this! If you'd like to submit a support case and provide the log files, and if possible the workspace template we can take a look!



So my initial investigation into this issue revealed the following (looking only at log files from July 1st onwards).

2190 log files from running the workspace in question were examined. Three datasets have been affected by the "huge-datetime-warning-log-issue".

  • The first dataset triggered 2 errors within 5 seconds of each other (on July 3rd).
  • Another dataset triggered 3 errors inside an hour on the same day (on July 3rd).
  • A third dataset triggered 2 errors half an hour apart (on July 10th), and another one on July 7th.

--

I'll leave it be for now, as the workspace in question only needs the returned geometries for an overlap analysis, and so is not critically affected by the error.

But I am still concerned whether this error could potentially affect other workspaces that do need to read the affected datetime values correctly.

 

Maybe I should run the same log analysis on the log files from the critical workspace too?

 

Could this be an intermittent database error? It's the same Microsoft SQL Server 2016 in all cases.

If so, what are my options for fixing it?

 

Cheers



Hi @katebrillinger​ 

I'll look into it today, because ..

In the meantime, I ran a scan of log files for the error message for another workspace that actually reads and uses the datetime values. And I found more runs with the problem, including 5 yesterday morning within a few minutes :-(

So I'll escalate the issue from irritating/worrisome to critical.

The dates where it occurs do not correspond 100% with the dates found for the first workspace, but it matches the datasets (three rather large customer datasets that we recently brought into production).

--

As this might be an issue only on our server, and not something you can reproduce easily, what do you need from me? Log files, obviously. Workspaces? A template with caches? Data?



I'm sorry to hear that this is causing problems elsewhere. I would say definitely start with the logs, plus workspaces or workspace templates if possible; if you are able to share a data sample that includes some of the failing values, that would be great as well! Also - are the times listed in the data in the same timezone as your server, or are we dealing with a timezone difference there?



Hi again @katebrillinger​ .

I've been having a closer look at this issue.

I'm still getting lots of errors, but there is no decisive cause that I can see.

It occurs intermittently for a few minutes at a time, leaving some jobs with errors. But it's not the case that all jobs in that period fail; some do and some don't.

And then, no errors for 1-3 hours, and then it happens again in the same manner.

And sometimes even a few days pass without errors.

It's not a single dataset that errs; multiple customers' data (schemas) are affected.

--

I'm beginning to wonder whether it's an untrapped error in the database response that the reader doesn't handle well.

It would be nice if the reader were a little more fault tolerant, e.g. by retrying this particular read/parse 2-3 times before giving up.
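
Something along these lines is what I have in mind - a sketch only, since the retry would of course have to live inside the reader itself:

import time

def read_with_retry(fetch, attempts=3, delay=0.5):
    # Retry a flaky read/parse a few times before giving up.
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except ValueError:  # e.g. a datetime value that failed to parse
            if attempt == attempts:
                raise
            time.sleep(delay)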

I've only seen this error with datetime, at least as far as what's written to the log file.

--

I've created a limited version of the workspace in question, and hard-coded it to run with a dataset/request that has erred previously. I'm running it every three minutes for 18 hours a day (business hours, same as the original), and will see if that fails as well. So far it hasn't.

The error is so frustratingly elusive that every 3 minutes may not even be enough, so fingers crossed 🙂

Cheers.



Success, of sorts. I managed to get a single failure using my limited workspace. At 23:01 (11:01 PM) last night, the workspace issued the dreaded error messages.

I've looked into the database logs for that time, and nothing out of the ordinary shows :-(

I can see the logins, but nothing else.

I'll investigate a little further, and get back.



I've now set up the exact same job, using the exact same dataset and schedule, on our test FME server.

I've just upgraded the engine on this test server from 2022.2.2 to 2022.2.6.

I'll have a look tomorrow to see whether the new version yields errors or not.

If the issue can be solved by an engine upgrade, that is highly preferable.



The preliminary results are in this morning:

2022.2.6 - no errors

2022.2.2 - another error run occurring at the exact same time yesterday (23:01)

I will wait another 24 hours, and look at it again tomorrow morning.

But it does suggest that upgrading the engine to 2022.2.6 may solve the issue.

Fingers crossed.



That would be a nice outcome



Fingers crossed! If it is still causing problems even after the upgrade, I would definitely recommend opening a Support Case at that point with the log files so that we can take a closer look.



Alas.

I upgraded the production server to 2022.2.6 this morning (09:15), and two hours later the first error in the production workspace occurred.

It's still the same error message as always:

124  2023-7-21 11:02:59 | Microsoft SQL Server Spatial Reader: An internal error has occurred. Please visit http://www.safe.com/support
125  2023-7-21 11:02:59 | Microsoft SQL Server Spatial Reader: Failed to parse datetime, attribute set as null

So I'm kinda back to square one :-(

And since this is an intermittent problem, I'm at a loss about how to post an operational support case.



Too bad - I would provide the link to this thread, and submit some log files and an example workspace template with the data stored in the cache.

With the data in the cache they can at least recreate the table.

 

The SQL Server version and probably the OS version will be important too.

 

They will work with you - developers can also get involved, and if need be you can probably get a screen share with one who might want to look at the DB more carefully.



Hi @katebrillinger​ 

Support case C692332 submitted.

I'm on vacation from today until August 15th, so I hope I have sent all the necessary info.

I will be looking at my mail intermittently next week, but after that I'm not available until I'm back.

Cheers, and have a happy summer!



Perfect thank you! We have several specialists who are great with databases, so you should hear from someone soon!



Just before I left for summer vacation, I set up two log file scans to find problematic runs with the issue present.

The "production" shows the issue occurring within our production setup, while "debug" shows the very same dataset being tried again and again every few minutes.

The interesting part is that the occurrence of the issue is not limited to a single dataset (= schema in production), and that the run times seem to cluster for a few minutes at a time across multiple schemas.
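
A rough way to surface that clustering from the scan output (the timestamps below are placeholders, not my real data):

from datetime import datetime, timedelta

# Placeholder failure timestamps, e.g. parsed out of the log scan.
failures = [datetime(2023, 7, 3, 9, 1), datetime(2023, 7, 3, 9, 4),
            datetime(2023, 7, 3, 10, 2), datetime(2023, 7, 10, 23, 1)]

GAP = timedelta(minutes=10)  # failures closer than this form one cluster
ordered = sorted(failures)
clusters, current = [], [ordered[0]]
for ts in ordered[1:]:
    if ts - current[-1] <= GAP:
        current.append(ts)
    else:
        clusters.append(current)
        current = [ts]
clusters.append(current)

for c in clusters:
    print(f"{c[0]:%Y-%m-%d %H:%M} .. {c[-1]:%H:%M} ({len(c)} runs)")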


Was this issue ever resolved? I'm seeing something similar myself in one of my workspaces.

2024-03-27 09:00:35| 368.3|  0.0|WARN  |Microsoft SQL Server Non-Spatial Reader: Failed to parse SQL Server data '2022-11-23 20:39:36.730000' from column 'UPPTSTAMP' for table 'dbo.TB90704'
 

Column is a datetime2(6) and the date seems valid. 
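
For what it's worth, the value itself parses fine in plain Python, which backs up "the date seems valid":

from datetime import datetime

value = "2022-11-23 20:39:36.730000"  # the value from the log line above
parsed = datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f")
print(parsed)  # -> 2022-11-23 20:39:36.730000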

 

