I've also looked in both the Postgres database and the internal server logs and haven't been able to find such a metric. But if you want to go log splunking, you could look at the backgrounder log, note when the query finished (e.g. 2:43pm) and when the extract refresh finished (e.g. 3:03pm), and attribute the data import to the difference (20 minutes). Then, given the number of records in the extract (say 20 million rows), you can work out how many records per second or minute were written.
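To put numbers on that back-of-the-envelope approach, here's a quick sketch. The timestamps and row count are just the example figures above, not real log values:

```python
from datetime import datetime

# Hypothetical timestamps pulled from the backgrounder log
# (the date itself is arbitrary; only the gap matters).
query_finished = datetime(2024, 1, 15, 14, 43)    # query done at 2:43pm
refresh_finished = datetime(2024, 1, 15, 15, 3)   # refresh done at 3:03pm

elapsed_minutes = (refresh_finished - query_finished).total_seconds() / 60
rows = 20_000_000  # record count reported for the extract

print(f"import took {elapsed_minutes:.0f} minutes")
print(f"throughput: {rows / elapsed_minutes:,.0f} rows per minute")
```

With the example numbers that works out to 20 minutes and 1,000,000 rows per minute, which gives you a rough throughput figure to compare across refreshes.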
Whatever the case may be, I've found at least two things that help speed up extracts, and there are likely more:
- Hide columns that aren't being used. Because the extract architecture is columnar, the more columns you have, the longer the extract will take to build.
- Increase CPU speed. See this post for more details: Performance improvement - rendering and extracts - big find
Hi Jeff! Thanks for replying - I am going to give this a try and post back with results!