Hi!
I have a simple process that queries a SQL Server database, builds a JSON document, then passes each feature to a Python script.
My issue is that the JSON Templater's RAM usage grows and grows: it keeps features in memory even after they have already been passed to the Python script (up to 64 GB for about 300,000 rows).
How can I split my feature stream so I can work in n-sized batches (e.g. take the first 20,000 features, run the JSON Templater and the Python script on them, free the RAM, then process the next 20,000 features the same way)?
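
To make it concrete, here is roughly the pattern I am after, sketched in plain Python outside of FME. Everything here is a placeholder for my real setup: pyodbc, the connection string, the query, and the process_batch function all stand in for the actual workspace steps.

```python
import json

import pyodbc  # assuming an ODBC connection; adjust to your driver

BATCH_SIZE = 20_000


def process_batch(payload: str) -> None:
    """Hypothetical stand-in for the downstream Python script."""
    print(f"processing {len(payload)} bytes of JSON")


conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"  # placeholder connection
)
cursor = conn.cursor()
cursor.execute("SELECT id, name, attr FROM my_table")  # placeholder query
columns = [col[0] for col in cursor.description]

while True:
    rows = cursor.fetchmany(BATCH_SIZE)  # pull only one batch into memory
    if not rows:
        break
    # Build the JSON for this batch only (stand-in for the JSON Templater step)
    payload = json.dumps([dict(zip(columns, row)) for row in rows])
    process_batch(payload)
    # rows and payload are rebound on the next iteration,
    # so memory stays bounded to one batch at a time

conn.close()
```

The key point is that only one batch of features is ever held in memory at once. Is there a transformer setup that achieves this inside the workspace?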