If you know in advance how many requests to make, you can avoid looping completely and just make multiple HTTPCallers.
Sometimes looping is unavoidable; there is some more information in the answers to this question:
https://community.safe.com/s/question/0D54Q0000930eGGSAY/is-there-a-tutorial-on-how-to-page-through-the-api-call
As I understand from the API docs, you can use this link https://catalog.goteborg.se/rowstore/dataset/ce5b8afd-6721-49c7-9ebc-f1ad33fba266/info to get a rowcount attribute (1024135). Divide that by 500 to get the number of calls you have to make.
When making the HTTP calls, make sure to set the _offset parameter (see the Swagger UI at entryscape.com).
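To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. It assumes the page limit of 500 and the rowcount of 1024135 mentioned above; the function name is just for illustration.

```python
import math

# The API's maximum page size, per the docs referenced above.
PAGE_SIZE = 500

def calls_needed(rowcount: int, page_size: int = PAGE_SIZE) -> int:
    """Number of HTTP calls needed to fetch all rows in fixed-size pages.

    Round up, because a final partial page still costs one call.
    """
    return math.ceil(rowcount / page_size)

# The /info endpoint reports rowcount: 1024135 for this dataset.
print(calls_needed(1024135))  # 2049 calls of up to 500 rows each
```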
Thanks for the response! How do I go about making multiple HTTP calls? Just multiple HTTPCaller parts in FME Workbench, or something else?
You use the Cloner as @ebygomm linked to.
After getting your total row count and knowing that the limit is 500, you calculate a new attribute that increments by 500 until you reach the total row count: each clone has this attribute incremented by 500.
Now you have enough clones to cover all the rows for the HTTPCaller.
You use the calculated attribute as the _offset parameter in the HTTPCaller, as the API specifies.
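The offsets the Cloner produces can be sketched in Python like this: one offset per clone, each 500 apart, with each offset ending up as the _offset query parameter of one request. The function name is illustrative, not an FME API.

```python
import math

PAGE_SIZE = 500  # the API's maximum rows per request

def page_offsets(rowcount: int, page_size: int = PAGE_SIZE) -> list[int]:
    """One offset per clone: 0, 500, 1000, ... covering the last partial page."""
    return [i * page_size for i in range(math.ceil(rowcount / page_size))]

# Each value becomes the _offset of one HTTPCaller request,
# e.g. ...?_limit=500&_offset=1500 for the fourth page.
offsets = page_offsets(1024135)
print(offsets[:4], offsets[-1])  # [0, 500, 1000, 1500] 1024000
```

Note that the last offset is 1024000, so the final request returns only the remaining 135 rows.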
I get stuck anyway, I'm afraid. Could you do a simple sketch, or maybe a workbench model, of how the different parts should be linked together? It would really mean a lot.
Here you go!
It is good to know here that the number of features entering an HTTPCaller equals the number of queries the HTTPCaller fires. That's why you use the Cloner: to create the correct number of input features for the caller. If you give these cloned features the correct attribute values, you can use them to query the whole dataset in chunks of 500 rows.
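The one-feature-per-query idea above can be sketched as a plain loop, with a stubbed-out fetch function standing in for a single HTTPCaller request (no real HTTP here; the names are illustrative):

```python
import math

PAGE_SIZE = 500

def fetch_all(fetch_page, rowcount, page_size=PAGE_SIZE):
    """Fire one query per page, mirroring one cloned feature per request.

    fetch_page(offset, limit) stands in for a single HTTPCaller call.
    """
    rows = []
    for i in range(math.ceil(rowcount / page_size)):
        rows.extend(fetch_page(i * page_size, page_size))
    return rows

# Stub that serves a small 1200-row dataset in 500-row pages.
dataset = list(range(1200))
offsets_fired = []

def fake_fetch(offset, limit):
    offsets_fired.append(offset)
    return dataset[offset:offset + limit]

result = fetch_all(fake_fetch, len(dataset))
print(len(result), offsets_fired)  # 1200 [0, 500, 1000]
```

Three input features (offsets 0, 500, 1000) produce three queries, which together return the whole dataset, exactly as with the Cloner feeding the HTTPCaller.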