Solved

How to download job results using the REST API?

  • April 15, 2026
  • 3 replies
  • 55 views

bveldkamp
Participant

Hi,

I am a software developer, but don’t know much about FME yet. Please forgive my ignorance :-)

I was asked to create a simple wrapper around the REST API v4 that lets a user start a job, and download the results as a zip file.

Generating a token, submitting a job, and getting the job’s status is fairly straightforward, but I am at a loss when it comes to downloading the results. I see that there is another API, /fmedatadownload, but the token I use when submitting the job does not seem to work there. I tried passing it in the Authorization header and as a query parameter, and I also tried Basic authentication, but all of these return HTTP 401.

I am sure this must be very simple, but I am probably overlooking the obvious here.


3 replies

nordpil
Enthusiast
  • April 15, 2026

Check out the jobs and log endpoints. Are you looking at the documentation on your server?

https://{your server}/fmeapiv4/docs/index.html#/jobs/getJobLog
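As a sketch, fetching a job’s log through that endpoint might look like the following in Python. The host name, token, and exact URL path are assumptions inferred from the getJobLog operation in the linked docs; verify them against your own server:

```python
import urllib.request

HOST = "fme.example.com"  # hypothetical host name
TOKEN = "my-api-token"    # hypothetical API token

def job_log_url(job_id):
    # Path inferred from the getJobLog operation in the v4 docs; check your server
    return f"https://{HOST}/fmeapiv4/jobs/{job_id}/log"

def fetch_job_log(job_id):
    """Download the plain-text log of a job."""
    req = urllib.request.Request(
        job_log_url(job_id),
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```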


david_r
Celebrity
  • Best Answer
  • April 15, 2026

Somewhat unintuitively, the /fmedatadownload web service is not part of the REST API; you should consider it a legacy endpoint that works quite differently.

Most importantly, it’s tightly bound to the workspace you are running: the FME workspace must have been authored and published to the server with this functionality in mind. FME will then “magically” zip the resulting datasets and stream them back to you when the job completes. This works best for jobs that are fairly short-running, in my experience.

If you would like to use the REST API, you would typically have your FME workspace output the results to a shared resource, then use the endpoints approximately like this:

  1. POST /jobs - submit the job and retrieve its job id
  2. GET /jobs/{id} - poll the status of that job
  3. GET /resources/connections/{connection}/download - caveat: you’ll need to know the path and filename that the job wrote

There would of course need to be some sort of contract between the FME workspace logic and the way you use the REST API, in particular on how to request the output data file for a given job id. An external database linking job ids to output dataset names might be useful here, shared by both FME and the REST API consumer. Alternatively, the FME workspace could simply write all the data to something like “output_{id}.zip”, so you could infer the name from the job id.
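The three steps above, together with the “output_{id}.zip” naming contract, could be sketched in Python roughly like this. The base URL, token, auth header scheme, request body, and status names are all assumptions to be checked against your server’s v4 docs:

```python
import json
import time
import urllib.parse
import urllib.request

BASE = "https://fme.example.com/fmeapiv4"  # hypothetical base URL
TOKEN = "my-api-token"                     # hypothetical API token

def api_request(method, path, body=None):
    """Small helper for authenticated JSON calls against the REST API."""
    req = urllib.request.Request(
        BASE + path,
        method=method,
        data=json.dumps(body).encode() if body is not None else None,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def output_name(job_id):
    # The naming "contract": the workspace writes output_{id}.zip
    return f"output_{job_id}.zip"

def run_and_download(repository, workspace, connection, dest):
    # 1. POST /jobs - submit the job and read back its id
    job = api_request("POST", "/jobs",
                      {"repository": repository, "workspace": workspace})
    job_id = job["id"]

    # 2. GET /jobs/{id} - poll until the job finishes
    #    (adjust the status names to what your server actually returns)
    while api_request("GET", f"/jobs/{job_id}")["status"] in ("QUEUED", "RUNNING"):
        time.sleep(2)

    # 3. GET /resources/connections/{connection}/download - fetch the zip
    query = urllib.parse.urlencode({"path": output_name(job_id)})
    req = urllib.request.Request(
        f"{BASE}/resources/connections/{connection}/download?{query}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as f:
        f.write(resp.read())
```

The only real design decision here is the contract in `output_name()`; everything else is plain polling against the documented endpoints.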


nordpil
Enthusiast
  • April 17, 2026

If the workspace is registered to run with “Data Streaming”, you could start it through that web service, and the endpoint would return the (zipped) results of the translation directly in the HTTP response.
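A minimal sketch of that approach, assuming the legacy /fmedatastreaming URL pattern and the fmetoken authorization scheme commonly used by the older web services (host, token, and parameters are hypothetical; check your server’s documentation):

```python
import urllib.parse
import urllib.request

HOST = "https://fme.example.com"  # hypothetical host
TOKEN = "my-api-token"            # hypothetical API token

def streaming_url(repository, workspace):
    """Build the legacy Data Streaming endpoint URL for a workspace."""
    return f"{HOST}/fmedatastreaming/{repository}/{workspace}"

def run_streaming(repository, workspace, dest, **params):
    # Submit the job; the HTTP response body *is* the (zipped) result,
    # so this call blocks until the translation completes.
    data = urllib.parse.urlencode(params).encode()
    req = urllib.request.Request(
        streaming_url(repository, workspace),
        data=data,
        # Legacy web services typically take the token in an fmetoken header
        headers={"Authorization": f"fmetoken token={TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as f:
        f.write(resp.read())
```

Because the response streams back synchronously, this fits short-running jobs; for long-running ones, the poll-then-download pattern from the accepted answer is safer.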