The Tableau Server db? Why, to what end?
Hi Nobusu. It's best practice not to custom-configure memory for the database because, generally speaking, the Tableau internal database is never the bottleneck. The recommendation of 25% of system memory for hosting Postgres is a very general guideline, and most likely applies more to databases with heavy query loads, such as data warehouses or predictive-mining workloads. In Tableau's case, all queries against the db are quite light since it's more of an app database, and the default memory allocation is enough.
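For context, the "25% of system memory" figure mentioned above is the common generic Postgres tuning heuristic for a dedicated database host, not anything Tableau-specific. A minimal sketch of that rule of thumb (the function name and the 64 GB host are illustrative assumptions, not values from this thread):

```python
# Sketch of the generic "25% of RAM for shared_buffers" heuristic for a
# dedicated Postgres host. Hypothetical helper, not a Tableau recommendation.

def suggested_shared_buffers_mb(total_ram_mb: int, fraction: float = 0.25) -> int:
    """Return a shared_buffers size in MB as a fraction of total RAM."""
    return int(total_ram_mb * fraction)

# Example: a dedicated host with 64 GB of RAM.
print(suggested_shared_buffers_mb(64 * 1024))  # 16384 MB, i.e. 16 GB
```

The point of the post above is that this heuristic targets query-heavy standalone databases, which Tableau's internal repository is not.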
I suppose it depends on the dataset size you're working with. The `shared_buffers` parameter is used for practically everything in Postgres: joins, subqueries, index scans, and so on. I would like to attempt modifying it, and I take full responsibility for whatever bad thing might happen. It's a test server.
The reason I'm interested in this is that we implement filters for datasets. This will undoubtedly hit indexed columns, and with such a small amount of memory allocated to `shared_buffers`, the majority of indexes will not stay in memory. That in turn will be bad for performance.
As I have a server with a lot of memory, much of it sitting free, I would like to investigate the performance impact of bumping up `shared_buffers`.
But Nobu, the data you're working with isn't stored in the Postgres database. That database exists to support Tableau's own needs, not your data.
Any filters you implement on your own data have nothing to do with Tableau's own Postgres instance.
I'm happy to be corrected, but I think that's not the case when Tableau is configured to use the BigQuery connector with the temp-table (spool) approach.
We have a lot of data in Google BigQuery, and we have Tableau configured to save the rows from BigQuery to a temp table. It is my understanding that this temp table lives in the Postgres database on the Tableau server. It's (almost) certainly not in BigQuery as I do not see any extra charges from Google when churning over the data in the temp table.
I'm happy to be corrected by more knowledgeable persons, but my assumption is that the temp table is in the Postgres database. If that is correct, then the 16MB `shared_buffers` setting is going to be inadequate for the datasets I'm working with.
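To put the concern in numbers: here is a small sketch that parses a Postgres memory setting string (the format `SHOW shared_buffers;` prints) and compares it against a dataset size. The 16MB value is the one quoted above; the 10 GB dataset is a made-up example, not a figure from this thread.

```python
# Parse a Postgres memory-setting string like "16MB" or "4GB" into bytes,
# then compare it to a hypothetical dataset size. Illustrative only.

UNITS = {"kB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def setting_to_bytes(value: str) -> int:
    """Convert a suffixed Postgres memory setting (e.g. '16MB') to bytes."""
    for unit, factor in UNITS.items():
        if value.endswith(unit):
            return int(value[: -len(unit)]) * factor
    raise ValueError(f"unrecognized memory setting: {value!r}")

buffers = setting_to_bytes("16MB")          # the value quoted in this thread
dataset = 10 * 1024**3                      # assumed 10 GB spool of rows
print(f"shared_buffers covers {buffers / dataset:.2%} of the dataset")
```

If the temp table really did live in this Postgres instance, a buffer pool covering well under 1% of the data would indeed be a problem; the replies below dispute the premise, not this arithmetic.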
Additionally, I should say that we have a daily Extract scheduled that downloads the data into Tableau, and our workbooks query the extract. This must mean that the data is in the Postgres database. Right?
Well, no, actually. My understanding of the architecture is that the Hyper in-memory engine handles all of the interaction with source data, not the Postgres database.
From the Tableau docs: "If the result set is too large, the BigQuery API returns an error and the Tableau BigQuery connector retries the query by saving the results into a BigQuery temp table. The BigQuery connector then reads from that temp table, which is a spool job that uses the bq-large-fetch-rows setting." - Note the language: it puts the results in a BigQuery temp table, not a Tableau temp table.
Extracts do not use Postgres, only Hyper (or the TDE engine in older versions).
I would be thrilled for a Tableau employee to weigh in on this thread. I see a lot of confusion about what the Postgres database is used for and having a KB article or at least a forum discussion that lays out the facts once and for all would be extremely helpful.
I think it would be best to contact Tableau Support about this. Also, with a fully-running (i.e. production) TS, memory is being used for various caching activities, extract and subscription processing, etc. so there typically isn't a huge chunk being unused -- at least in my environment.
It would be interesting to see if this is possible and how it affects things but, again, Tableau Support is your best bet for getting any real answers.