1 Reply Latest reply on Dec 20, 2017 7:14 AM by Jeff Strauss

    KPIs for Tableau Server performance?

    Toby Erkson

  I'm looking to best tune our environment and am fiddling with various existing reports.  Honestly, I'm getting overwhelmed and probably way overthinking stuff (no comments from you, Jeff Strauss).  I have 24 cores and currently one [VM] server.  After adding 8 cores to the existing 16 (along with a RAM increase), I'm really not seeing noticeable performance improvements, which seems odd to me.  However, I'm not positive about the vizes I should be looking at -- maybe there has been an improvement but I'm using the wrong vizes to track it.  The plan is to branch into a 2-node environment, but even then, how will I know there's a performance improvement if I don't have the right vizes to see it?  And how can I use the same vizes to check tuning changes (e.g. editing the number of processes)?


      I don't need to know about failing extracts, as those are user issues, but, for example, I DO need to know about delays -- meaning, when does the extract/subscription/whatever actually begin versus when it was scheduled to run?  That could mean additional backgrounders and/or data engines are necessary (right?).  So, stuff beyond the stock admin views on the Tableau Server status page.  What do you use, why is it important to you, and please share the wealth by providing the workbook.
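
      One way to put a number on those delays: Tableau Server's repository (the `workgroup` PostgreSQL database, readable via the `readonly` user if you've enabled repository access) has a `background_jobs` table, and the gap between when a job was queued and when a backgrounder actually picked it up is the queue delay you're describing. The column names below (`created_at`, `started_at`, `job_name`) are from memory of the repository schema, so verify them against your Server version; this is just a sketch of the computation on a few hypothetical exported rows:

```python
from datetime import datetime

# Hypothetical rows as exported from the repository's background_jobs
# table (created_at = when the job was queued, started_at = when a
# backgrounder picked it up).  Column names are assumptions -- check
# them against your Server version's repository schema.
jobs = [
    {"job_name": "Refresh Extracts", "created_at": "2017-12-20 06:00:00", "started_at": "2017-12-20 06:00:05"},
    {"job_name": "Refresh Extracts", "created_at": "2017-12-20 06:00:00", "started_at": "2017-12-20 06:12:40"},
    {"job_name": "Subscription Notifications", "created_at": "2017-12-20 07:00:00", "started_at": "2017-12-20 07:00:02"},
]

FMT = "%Y-%m-%d %H:%M:%S"

def queue_delay_seconds(job):
    """Seconds between when a job was scheduled and when it actually started."""
    created = datetime.strptime(job["created_at"], FMT)
    started = datetime.strptime(job["started_at"], FMT)
    return (started - created).total_seconds()

delays = [queue_delay_seconds(j) for j in jobs]
worst = max(delays)
avg = sum(delays) / len(delays)
print(f"worst queue delay: {worst:.0f}s, average: {avg:.0f}s")
```

      If the worst-case delays cluster at the same times of day, that's your signal to add backgrounder processes (or a second node) rather than more cores.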


      Note:  I am following the Collect Data with Windows Performance Monitor section in the admin guide, but since my production server is running Windows 2008 R2 I cannot get the "Processor Information\%Processor Utility" counter, so it's not a 100% solution.  I also don't see the benefit of the "Users and Actions" viz, at least for my purposes.

        • 1. Re: KPIs for Tableau Server performance?
          Jeff Strauss

          There's no overthinking this one, as there's a lot that goes into optimal performance.  What do render timings look like in your environment currently?  Much of this you can get from the built-in dashboards.
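
          To make "render timings" concrete: once you've exported view-load durations (I believe the built-in admin views draw on the repository's `http_requests` table, where the duration is roughly `completed_at - created_at` for view requests, though those names are assumptions to verify), summarizing them is simple. A sketch with hypothetical durations -- look at a high percentile as well as the average, since a few slow dashboards can hide behind a decent mean:

```python
# Hypothetical render durations (seconds) for one day's dashboard
# loads -- e.g. completed_at - created_at exported from the
# repository's http_requests table (column names are assumptions).
durations = [1.2, 0.8, 4.5, 2.1, 0.9, 12.3, 1.7, 3.4, 0.6, 2.2]

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

avg = sum(durations) / len(durations)
p95 = percentile(durations, 95)
print(f"avg render: {avg:.2f}s, p95: {p95:.1f}s")
```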


          Here's my method to the madness.  Let me know if you want more details on any of these and I'm happy to share any workbooks that may be relevant.  Seriously, I spent an excessive amount of time on tuning and transparency into the metrics during the first half of 2017.  And now we're typically under 5 seconds average rendering time across about 3,000 dashboard renderings daily.

          1. Make sure the infrastructure has the necessary capacity in terms of CPU, BIOS clock speed, RAM, disk, network, and available threads.  Out of all of these, in our environment we ran into BIOS, available-thread, and occasional network constraints, and adjusted how each is handled.

          2. Configure processes, caching, etc.  None of these seemed to have any significant impact (> 1 second) on rendering performance; however, we did make adjustments just to have the confidence for our shop that things were optimal.

          3. Look at the long-running dashboards that are hit most throughout the day, then tune their data access to trim down the average daily rendering time.  This is what Alan Eldridge's best practices for performance focuses on, and it's often the first place to look, assuming #1 and #2 are not running into issues.
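
          The prioritization in #3 can be made mechanical: rank dashboards by total daily render cost (hits × average duration), so tuning effort goes where it saves the most user-seconds. A sketch with made-up numbers -- note how a moderately slow but heavily hit dashboard can outrank a very slow, rarely viewed one:

```python
# Hypothetical per-dashboard daily stats: (name, hits, avg render seconds).
stats = [
    ("Sales Overview", 400, 2.0),
    ("Exec Scorecard", 30, 25.0),
    ("Ops Monitor", 900, 1.5),
]

# Total daily render time = hits * avg duration; tune the dashboards
# at the top of this ranking first.
ranked = sorted(stats, key=lambda s: s[1] * s[2], reverse=True)
for name, hits, avg in ranked:
    print(f"{name}: {hits * avg:.0f}s/day")
```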
