I would advise you to submit a support case.
In the meantime, there are other tools besides the Tableau performance recorder you can use, such as TabJolt, TabMon, Replayer, Scout, etc. See the page below:
There are many factors that can affect performance in Tableau (and most of the time it is related to dashboard design).
We would recommend using the performance recorder and having a quick check of this list:
And finally, the "Bible" for performance issues:
- Designing Efficient Workbooks (Whitepaper)
I sometimes have the same problem. Could you please share how you fixed it? Was it a network issue?
First off, Marina's suggestions are a model answer for a failing dashboard, although you have categorically explained that this is not due to poor dashboard design. Unfortunately, I have to say Tableau can be very protective of the Server product, insisting that the cause must be badly designed workbooks - I know, I have been there so many times in the last 8 years!
But 3 concurrent users? This is appalling.
The first step, unfortunately, will be to create a basic workbook using Superstore - if possible, load it to a data server to rule out network latency. Then create a plan of test items and actually use the performance recorder to measure how long Tableau takes to process the vis, send the SQL to the server for execution, return the output, and build the vis.
Then rerun the exact same test at around the same time (same server load) using the server performance recorder, activated with the URL parameter :record_performance=yes.
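For reference, the recorder is toggled by appending the parameter to the view URL. A minimal sketch below - the server name and view path are placeholders, not from the original post:

```
http://your-server/#/views/Superstore/Overview?:record_performance=yes
```

After loading the view with this parameter and interacting with it, a Performance option should appear in the view toolbar, which opens the recorded timings as a workbook.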
This will at least allow you to measure the actual timings. The Tableau first-line support guys may ask you to run the clear-cache technique with the URL parameter; this is largely a red herring, as it is not possible to simply clear the cache with a basic command. To properly clear the cache, the server must be taken offline and restarted - a point I made to the Tableau engineers last November.
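For completeness, the URL parameter usually suggested for this is :refresh=yes, appended the same way (placeholder server and view path below, and I am assuming this is the technique being referred to):

```
http://your-server/#/views/Superstore/Overview?:refresh=yes
```

Note that this only forces a fresh query for that single request; it does not empty the server-wide cache, which is the point being made above.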
Given your low user count, I guess you are not running a high-load multi-cluster environment, so your options to tune and move workers between clusters will be unavailable.
Incidentally, where is the datasource that powers said workbook? I ask because one of the items I found last year and reported back to Tableau (as far as I am aware, this is still being fixed) concerns connecting to published datasources: despite being deployed to Tableau Server, TS will treat published datasources as an external source, so it is not as simple as saying "the source is on the same server". To make matters worse, rather than simply executing SQL commands as when running in Desktop, Tableau will convert all interactions with the source to XML,
Which takes time.
And for big queries, a lot of time.
And even more time when calculations and visuals are placed over the top!
For clarity: with Vodafone last November, I had a situation where a workbook needed to run SQL that resulted in a 200-line statement, plus some calcs over the top which pushed the overall execution statement to around 280 lines, fired at a published datasource.
Desktop happily returned the results in 6 seconds; however, when the workbook was published to Server, 6 seconds became a minimum of 81 seconds, because Tableau was converting the 280-line SQL statement into a 1,700-line XML statement!
It was this conversion - reading from the source, returning the results, and converting again - that was slowing the rendering down; problematic when the source was 7GB.
The way around this was to create an empty extract, upload it, and fill it online. Although we didn't see the 6-second Desktop performance once published, 20 seconds was acceptable compared to 81.