Without seeing more detailed information on your hardware (like the number of cores available, CPU stats, etc.), the best recommendation I can make is to refer to the Minimum Hardware Requirements and Recommendations for Tableau Server guide, which advises that each node in a cluster should have at least 32 GB of RAM. Another general guideline is that for each process instance, Tableau recommends that the machine running the process have at least 1 GB of RAM and 1 logical CPU core.
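If you find your process counts exceed that guideline, they can be adjusted with the TSM CLI. A rough sketch only: "node1" and the counts below are illustrative example values, not recommendations for your environment.

```shell
# Illustrative only: size process counts against the guideline of
# roughly 1 logical core and 1 GB of RAM per process instance.
# "node1" and the counts are example values; adjust for your hardware.
tsm topology set-process -n node1 -pr backgrounder -c 2
tsm topology set-process -n node1 -pr vizqlserver -c 2

# Topology changes are staged until you apply them.
tsm pending-changes apply
```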
I also recommend checking out Performance Tuning Examples for some configuration examples.
I hope this helps!
Thanks, Caitlin, for replying.
I have tried to apply these recommendations to the best of my understanding after going through all the articles, but my workbook view performance is still very slow.
Regarding cores and memory: I have 16 cores in total, and all machines have 500 GB of memory.
My dashboard sizes vary from 85 MB to 4.5 GB. I can see performance lagging in all dashboards!
Any help will be appreciated.
Are all reports slow (both the 85 MB and the 4.5 GB ones)? What do your dashboards actually contain, e.g. filters, parameters, what date range, etc.? What is the traffic on the server?
The 4.5 GB one is always slow. The 85 MB one sometimes behaves slowly (even when only this report is being accessed at the time).
My dashboards contain many filters (10+), and traffic on the server is 50+ users.
We have optimized the extracts, used the required context filters, and also optimized the workbooks before publishing the reports.
10+ filters sounds problematic. I would recommend reducing the number if possible, and perhaps try to optimise calculations if they are based on filters.
I personally don't like context filters; I would recommend data source filters instead, which are much faster.
10+ filters is the requirement. I can't apply data source filters because other dashboards are using the same source.
You can clone the data source, and use data source filters only in one ...