Question

SQL Server CLR error when using SQLExecutor to run STWithin


tim_wood
Contributor

I'm using a SQLExecutor to fire queries like this at a SQL Server Spatial point layer (Geometry data type), except that the real polygon geometries are much more complex:

SELECT Count(MyField) AS CountMyField FROM [db].[owner].[table] WHERE WKB_Geometry.STWithin(geometry::STGeomFromText('MULTIPOLYGON (((527049.5 137150.4,527056.6 137139.7,527063.3 137134.4,527049.5 137150.4)))', 27700)) = 1
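
(For context: the polygon WKT comes from each incoming feature, so the statement in the SQLExecutor is built per feature, roughly along these lines; the attribute name _wkt is just a placeholder for whatever attribute actually holds the geometry text.)

SELECT Count(MyField) AS CountMyField
FROM [db].[owner].[table]
WHERE WKB_Geometry.STWithin(geometry::STGeomFromText('@Value(_wkt)', 27700)) = 1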

However, after about 2400 features have entered the Initiator port, I get this error and the translation fails:

"The app domain with specified version id (%d) was unloaded due to memory pressure and could not be found."

Ultimately I will be processing hundreds of thousands of features, so I need a solution that will work with that volume of data, not just a few thousand input features.

I know I can read the point layer into my Workspace, but the SQLExecutor is SO much faster.

Any suggestions?

5 replies

david_r
Evangelist
  • November 5, 2018

This sounds like a back-end issue that most probably isn't related to FME.

First hit on Google is this ESRI article, maybe it can give you some pointers?

https://community.esri.com/thread/180046


tim_wood
Contributor
  • Author
  • November 5, 2018
david_r wrote:

This sounds like a back-end issue that most probably isn't related to FME.

First hit on Google is this ESRI article, maybe it can give you some pointers?

https://community.esri.com/thread/180046

Thanks. I think I may have come across that one in my own searching. I've asked our DBAs if they've got any ideas.


tim_wood
Contributor
  • Author
  • November 5, 2018

One thing that does seem to help is the Decelerator, at least on the test dataset. I'm not sure what the normal rate of features per second is, but I've gone past 4,000 features with the Decelerator set to 100 per second.

There doesn't seem to be any point in running parallel Workspaces via a WorkspaceRunner or using parallel processing within a Workspace, as this would just increase the load on SQL Server. But I could split the data into chunks and use a WorkspaceRunner with "Wait for job to complete" set to Yes, which would act as a kind of Decelerator as well.

I've tried a few alternatives to SQLExecutor but they are all WAY slower.


david_r
Evangelist
  • November 5, 2018

The way you describe it gives the impression that the database cannot handle the load of all the requests. It could be that the memory garbage collector is unable to keep up with the frequency and number of requests. Maybe look at the memory settings of your SQL Server instance.
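
If it helps, a rough first check would be the instance memory cap, something along these lines (the 8192 MB figure is purely illustrative; size it for your server):

-- Show the current memory configuration (values are in MB)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- Example only: raise the cap to give the CLR host more headroom
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;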


tim_wood
Contributor
  • Author
  • November 5, 2018
david_r wrote:

The way you describe it gives the impression that the database cannot handle the load of all the requests. It could be that the memory garbage collector is unable to keep up with the frequency and number of requests. Maybe look at the memory settings of your SQL Server instance.

I've had to add a Dissolver upstream of the SQLExecutor, which I think is acting as a de facto Decelerator, and therefore I'm not seeing the problem any more. However, if the error occurs again, our DBA team will monitor SQL Server while the translation is running to see what happens.
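
If it does recur, a couple of DMV queries they could run while the translation is going, as a rough sketch (column availability may vary by SQL Server version):

-- Memory held by the SQLCLR host
SELECT type, SUM(pages_kb) / 1024 AS clr_memory_mb
FROM sys.dm_os_memory_clerks
WHERE type LIKE '%CLR%'
GROUP BY type;

-- State and memory of the CLR app domains (the things the error says were unloaded)
SELECT appdomain_name, state, total_allocated_memory_kb
FROM sys.dm_clr_appdomains;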

