I posted this here just in case people aren't watching this thread, but...
How much hard drive space do the worker nodes need in a cluster setup?
Currently I've spec'ed out just enough to run Tableau and restore a local backup, with a small buffer for growth (~820 GB). Should I match this on the workers? Do the workers need local copies of the backup if the primary has a copy? Should I expect the primary and each worker to require as much hard drive space as our current single-machine Tableau instance?
We have tried syncing around 40K clients (imported from 3 different ADs) and it synced in no time (no more than 5 minutes). It may take a little longer the first time you import clients, but 8 hours is too much. I'm not sure whether the number of groups could cause this (we had about 175), or whether it happens when everything comes from one AD. I suggest checking your configuration (server processes) or your AD authentication; that may help.
We have run tabadmin cleanup, but it did not clean up any abandoned .tde files.
We see 10-15 .tde files for the same data source extract, each as large as 10 GB. These are being refreshed every day.
Something is not deleting old .tde files.
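While waiting on a fix, one stopgap is to scan the extracts directory yourself and report duplicate .tde files for the same extract that are older than the refresh interval. This is only a sketch: the directory layout and the suffix pattern Tableau appends to refreshed extract files are assumptions, so verify against your own dataengine folder (and only report, never delete, until you have confirmed what is safe).

```python
import re
from datetime import datetime, timedelta
from pathlib import Path

def find_stale_extracts(extract_dir, max_age_days=2):
    """Group .tde files by approximate base extract name and return files
    older than max_age_days that have a newer sibling (likely abandoned)."""
    groups = {}
    for f in Path(extract_dir).rglob("*.tde"):
        # Assumption: refreshed extracts get a unique trailing suffix;
        # strip a trailing hex/numeric suffix to approximate the base name.
        base = re.sub(r"[-_][0-9a-f]{4,}$", "", f.stem)
        groups.setdefault(base, []).append(f)

    cutoff = datetime.now() - timedelta(days=max_age_days)
    stale = []
    for base, files in groups.items():
        # Newest first; always keep the most recent copy of each extract.
        files.sort(key=lambda f: f.stat().st_mtime, reverse=True)
        for old in files[1:]:
            if datetime.fromtimestamp(old.stat().st_mtime) < cutoff:
                stale.append(old)
    return stale

if __name__ == "__main__":
    for f in find_stale_extracts(r"D:\Tableau\data\tabsvc\dataengine"):
        print(f, f.stat().st_size)
```

The path in the example is a placeholder; point it at wherever your installation keeps its extracts.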
Thank you to everyone who attended and asked questions during our Dev Office Hours last week! It was a great session and we loved hearing all the feedback and questions from the community. There were a couple of follow-ups that I wanted to post about:
1. There was a question about how to monitor Tableau Server with New Relic. Here's the post on how you can set this up: Set Up Tableau Server to be Monitored by New Relic
2. Also, thanks to Eric Liong for doing a great job summarizing the response to the question about running with less than the full recommended specs on the primary and backup primary. Just to reiterate: you can run the primary/backup primary on lower-spec machines if they are not running any licensed processes. They still need to meet the minimum hardware specifications for server, but can be relaxed from the recommended specs. As noted, you should ensure you have enough capacity to run the administrative tasks, especially taking a backup, which may require significant free hard disk space.
Sr. Product Manager
Thanks for listening.
Our business problem is that our Tableau adoption has slowed because of prompting to authenticate to the data sources. We have workbooks based on Essbase, Oracle, DB2, and Cloudera Hadoop.
We would like to be able to manage a master data source username/password that would be used automatically when loading a workbook that otherwise requires the user to enter credentials.
Can you specify why you can't implement embedded passwords? Or service accounts (a.k.a. machine accounts, faceless accounts)?
Our databases manage the security: row- and column-level security, specifically for things like financial numbers, budget and expense information, etc.
Nick, it looks like this is more of a trust issue. Also, if the datasets are static/small and not used dynamically (real-time), you can publish them as a data source on Tableau Server and refresh daily or as appropriate. Dashboards and reports can then connect directly to the published data source and don't have to worry about authentication permissions.
There needs to be a change in the handling of suspended tasks. We had a database issue which meant that, for 24 hours, extracts that used that data source were failing. After 5 attempts the extracts were suspended. The admins get the emails, and the alert icon at the top right of the server page shows all the failed refresh attempts. However, getting a list of those suspended tasks is nigh on impossible from the default status pages. There is no indication on the tasks pages that some of these tasks will never run again until something is done to them.

As a server admin I find it unacceptable that Tableau has decided to take tasks off the schedule but not identify them. There should be an icon or some other clear indication that a task in the task list has been suspended, plus a clickable link or other action so that you can restart a selection in bulk. If the alert window gets cleared, there is no remaining indication that anything is wrong, and it then relies on users noticing that the data hasn't updated. I have built a dashboard to show me this information from the Postgres repository, but this should be built into the server itself. You cannot have the server decide not to run a task and not make that clearly identifiable to admins.
I've created an idea which you can vote up.
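For anyone who wants to build a similar dashboard, the kind of repository query involved might look like the sketch below. The table and column names follow Tableau's workgroup "data dictionary", but the numeric state codes are an assumption on my part; check them against the data dictionary for your server version before relying on this.

```python
# Sketch of a query listing suspended tasks from Tableau's workgroup
# repository. Assumption: in the `tasks` table, state = 1 means suspended
# (0 = active, 2 = disabled) -- verify against your version's data dictionary.
SUSPENDED_TASKS_SQL = """
SELECT t.id,
       t.type,
       t.state,
       s.name AS schedule_name
FROM   tasks t
JOIN   schedules s ON s.id = t.schedule_id
WHERE  t.state = 1   -- 1 = suspended (assumed state code)
ORDER  BY s.name;
"""

if __name__ == "__main__":
    # Run the query against the 'workgroup' database with the read-only
    # user, e.g.:  psql -h <server> -p 8060 -U readonly workgroup
    print(SUSPENDED_TASKS_SQL)
```

You first need to enable repository access (the `readonly` user) on the server before Postgres connections like this will work.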
Hi Erik Lundberg,
RE: old extract files not getting cleaned up. We have identified an issue in the product, introduced in 10.1, that may be causing this. I would suggest upgrading to the next maintenance release of either 10.1 or 10.2 to see if that fixes the issue. Another possibility is that there are some old sessions still referencing the old extracts; restarting the server may help reset all old connections.
Hope this helps!
Thanks for your response. After waiting for 10 days and not getting a solution from Tableau Support, we made the decision to reboot our primary server (a restart of Tableau Server alone did not release these files), and the 800 GB of disk space was immediately returned on the worker nodes.
Now everything is running perfectly again.
Erik Lundberg | Sr. Object Oriented Programmer Analyst, RM BI | 817-931-7020