Dear FME friends,
We upgraded our FME 2023.1.1 to 2024.0.3. To test the new version, several tests were done on the new environment, among them a performance test against our PostgreSQL and Oracle databases.
The workspaces that run in an automation do three things (a rough Python equivalent of these steps is sketched after the list):
- Download open data and load it into our own PostgreSQL database
- Read the data from the database
- Delete the data from the database
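For clarity, here is roughly what each run does, written out as a minimal Python sketch. This is purely illustrative and not the actual FME workspace; the connection string, table name, and data URL are placeholders.

```python
# Illustrative sketch of the three benchmark steps; DSN, table name and URL are placeholders.
import psycopg2
import requests

DSN = "dbname=testdb user=fme host=localhost"    # placeholder connection string
DATA_URL = "https://example.com/opendata.json"   # placeholder open-data URL

def run_once() -> int:
    # assumed: the open data comes back as a list of dicts with an "id" field
    records = requests.get(DATA_URL, timeout=60).json()
    conn = psycopg2.connect(DSN)

    # 1. load the downloaded data into our own PostgreSQL database
    with conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO benchmark_data (id, payload) VALUES (%s, %s)",
            [(rec["id"], str(rec)) for rec in records],
        )

    # 2. read the data back from the database
    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, payload FROM benchmark_data")
        rows = cur.fetchall()

    # 3. delete the data from the database
    with conn, conn.cursor() as cur:
        cur.execute("DELETE FROM benchmark_data")

    conn.close()
    return len(rows)
```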
We saw strange numbers in both the PostgreSQL and the Oracle tests. In the PostgreSQL test the CPU time was longer than the elapsed time, and the running time in 2024 was generally longer than in 2021. Please find attached examples of the runs on FME 2021 and FME 2024; something is clearly wrong. Can you explain why in the 2021 runs the elapsed time is longer than the CPU time, while in the 2024 runs it is the other way around? The dataset used is exactly the same in both workspaces published to FME Server/Flow.
We repeated the same experiment five times for datasets of several sizes, ranging from 4,000 to 400,000 records. The pattern repeats itself in every run. For example, in the PostgreSQL test runs:
CPU time (dataset of 400,000 records):

           FME 2021       FME 2024
run 1:     00:00:51:25    00:01:30:20
run 2:     00:00:51:54    00:01:32:17
run 3:     00:00:50:70    00:01:24:03
For Oracle, an overview of the CPU times is attached as well.
The new engine machines (2024) have 12 cores and 64 GB RAM; the old machines (2021) have 4 cores and 24 GB RAM.
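If it helps to frame the question: as far as I understand, CPU time is accumulated across all threads of a process, so on a multi-core machine a parallel workload can report more CPU time than elapsed time. A minimal Python sketch (purely illustrative, nothing to do with our FME workspaces) shows that effect:

```python
# Illustrative only: shows how CPU time (summed over all threads of a process)
# can exceed elapsed (wall-clock) time on a multi-core machine. Not FME code.
import hashlib
import time
from concurrent.futures import ThreadPoolExecutor

def hash_block(data: bytes) -> str:
    # sha256 on large buffers releases the GIL, so the threads really run in parallel
    return hashlib.sha256(data).hexdigest()

if __name__ == "__main__":
    block = b"x" * 50_000_000  # 50 MB of dummy data

    wall_start = time.perf_counter()
    cpu_start = time.process_time()

    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(hash_block, [block] * 8))

    elapsed = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    # On a machine with 4 or more cores, CPU time comes out higher than elapsed time
    print(f"elapsed: {elapsed:.2f} s, CPU: {cpu:.2f} s")
```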
Could you please explain the difference in CPU time between the versions, and why the CPU time is sometimes longer than the elapsed time? If you need more examples or the raw test data, please let me know.
Thanks in advance,
Matthijs Kastelijns