As long as your workspace isn't doing anything that takes a long time, it can certainly handle it. I'm averaging 30,000 such posts a day and have yet to see anything queued. Just make sure that if you run other long-running jobs on the same server, they are restricted to one of the engines, leaving another engine always free for the short-running ones.
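If it helps, here's a minimal sketch of submitting one of those small jobs over the FME Server REST API and routing it with a job queue tag so long-running jobs never block the short ones. The host, token, repository/workspace names, parameter names, and the "quick" tag are all placeholders, and the TMDirectives syntax should be checked against the REST API docs for your FME Server version.

```python
import requests

# Placeholders - replace with your own server, token, repository and workspace.
FME_HOST = "https://myfmeserver.example.com"
TOKEN = "my-fme-token"
URL = f"{FME_HOST}/fmerest/v3/transformations/submit/MyRepo/my_small_job.fmw"

payload = {
    # Published parameters the workspace expects (hypothetical example).
    "publishedParameters": [
        {"name": "SOURCE_JSON", "value": '{"sensor": 42, "reading": 17.3}'}
    ],
    # Route this job to a queue served by the "fast" engine(s). The tag name is
    # an assumption - it must match a job queue configured on your server.
    "TMDirectives": {"tag": "quick"},
}

resp = requests.post(
    URL,
    json=payload,
    headers={"Authorization": f"fmetoken token={TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Queued job id:", resp.json().get("id"))
```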
In addition to what @gazza said, I would recommend keeping an eye on memory usage. I'm running something similar (an average of about 1,000 such small jobs per day, each taking just a few seconds, alongside some other light usage) on an FME Cloud Starter instance and find that there is a bit of memory creep, so the instance is rebooted once every 24 hours.
The only issue might be if you get a lot of requests in a very short space of time, such as a burst of dozens in a few seconds; then you might end up with jobs waiting in the queue, which could delay things. Depending on what you're actually doing with the data, this may or may not be a problem.
And if you don't mind tying up an engine, you can use FME Server's built-in WebSocket Server to process millions of feeds per day. This is more responsive because it doesn't need to launch a new workspace for each incoming request, but it does tie up the engine running the WebSocketReceiver workspace.
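For the WebSocket route, a client pushes messages straight to FME Server's WebSocket service and the always-running WebSocketReceiver workspace picks them up, so there's no per-request job launch. Below is a rough Python sketch using the websocket-client package; the default port 7078, the /websocket path, and the ws_op/ws_stream_ids message fields are from memory of the FME Server docs, so verify them against your version before relying on this.

```python
import json
from websocket import create_connection  # pip install websocket-client

# Placeholders - point these at your own FME Server instance and stream.
WS_URL = "ws://myfmeserver.example.com:7078/websocket"
STREAM_ID = "sensor_feed"  # must match the stream your WebSocketReceiver listens on

ws = create_connection(WS_URL)

# Open the named stream, then push a message onto it. The message body is
# whatever your receiving workspace expects - JSON here as an example.
ws.send(json.dumps({"ws_op": "open", "ws_stream_ids": [STREAM_ID]}))
ws.send(json.dumps({
    "ws_op": "send",
    "ws_msg": json.dumps({"sensor": 42, "reading": 17.3}),
}))

ws.close()
```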
Yes, I believe this is how The Weather Network handles their lightning detection notifications, and it was tested at up to 50,000 messages per second (see https://www.safe.com/presentations/real-time-lightning-alerts/).