In a number of situations, schema scanning and bulk mode can drastically decrease performance.
One example is reading a few records with big blobs from one database and writing them to another (Postgres to Postgres in this case). Reorganizing the records into feature tables and then reorganizing them back into records all at once caused, in one case, a lot of unnecessary work and I/O.
By editing the metafile for Postgres we could avoid this, but tampering with the installation files is often very inconvenient.
Having the option to configure readers and transformers in this respect would really help.
It would also help in many situations to have control over the process of supplying the schema.
One example here is the SQLExecutor transformer. The extra execution of the SQL statement that takes place in order to resolve the schema can have side effects that are hard to foresee. I'd vote for making this opt-in rather than default behaviour, and for marking it as advanced. Another idea would be to give the developer some support for specifying (and validating?) a "canned" schema instead.
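To illustrate the concern (a minimal sketch, not FME code): if a tool executes a statement once just to discover the result schema and then again for the real run, any side effect in that statement happens twice. The sqlite3 in-memory database here stands in for Postgres.

```python
import sqlite3

# In-memory database standing in for Postgres in this illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit (note TEXT)")

# A statement with a side effect: every execution adds a row.
stmt = "INSERT INTO audit (note) VALUES ('ran')"

# A transformer that executes the statement once up front to resolve
# the schema, and then again for the actual run, doubles the effect.
conn.execute(stmt)  # extra "schema resolution" execution
conn.execute(stmt)  # the real execution

count = conn.execute("SELECT COUNT(*) FROM audit").fetchone()[0]
print(count)  # 2 -- the row was inserted twice
```

With a canned schema supplied by the developer, the first execution would not be needed at all, and the statement would run exactly once.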
To summarize: convenience should, in my opinion, never stand in the way of fine-grained control, and I believe the promotion of bulk mode and dynamic schema has in some situations led to exactly that.