
Toby Erkson's Blog


Not much to say

I got my additional secondary nodes -- a total of four more, bringing my environment up to six servers -- so I got those up and going.  Through trial and several errors I found out that Tableau Server is particular about how the Coordination Service ensemble is built.

 

I worked with another Tableau employee, Kevin, who is a Principal Solutions Engineer.  He collected my TS health-check data and gave me a document with recommendations for setting up my new environment to handle high extract and high view usage, which I will use to set it up.  With so many ways to configure a TS, his help was invaluable.  His recommendation is also built to easily accommodate current demands -- the environment will be expandable vertically and horizontally for future demands as needed.

 

I found that if you follow the instructions on the Example: Install and Configure a Three-Node HA Cluster - Tableau page, the process goes smoothly.  I already had a primary and secondary environment going, so this was just installing the additional nodes.  I followed the "Use the TSM web interface" instructions, skipping Step 1 of course since the initial node was already established.  On Step 2 I saved the bootstrap file to a network drive so it was easy to access from the other four nodes.  Briefly, here's what I did on each node:

  1. I performed Step 3 (Step 4 has the same instructions).  You NEED to follow the process as given!
    1. Don't configure the node, just leave it be.  Configuration will happen later.  Be patient.
    2. Close the browser as it won't be needed any more.
  2. Repeat #1 for the next node.

A note on sequencing:  you can install the software (Run As Administrator) on all the additional nodes in parallel (do them all at once), but when you get to the "Node Configuration" screen asking for the bootstrap file and credentials you need to stop!  This is where you now work serially:  perform the node configuration on your first new node and carry each step through to the end.  Then go to the next node and perform the node configuration on that one.  Repeat until all are completed.
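As an aside, the bootstrap file from Step 2 can also be generated from the command line on the initial node.  A minimal sketch, assuming a network share all of your nodes can reach (the path here is hypothetical):

tsm topology nodes get-bootstrap-file --file \\fileserver\tableau\bootstrap.json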

 

Now move on to Step 5 on the primary node and follow the instructions.

Move on to Step 6 and follow the instructions.  Since I have 6 nodes I used the recommendation of deploying a Coordination Service ensemble to 5 nodes.  Below is my screenshot of the steps for deploying and then cleaning up the coordination service:
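If you prefer the command line over the TSM web UI for this step, the commands at the time were along these lines -- a sketch with hypothetical node IDs, so check tsm topology list-nodes for your own:

tsm topology deploy-coordination-service --nodes node1,node2,node3,node4,node5
tsm topology cleanup-coordination-service

(Newer releases clean up the old ensemble automatically, so verify against the docs for your version.)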

 

Step 7 & Step 8 are the same and didn't apply to me, so I skipped them and instead made the configuration changes that were recommended to me.

 

I did run into an issue that stopped me from successfully completing:  my #3 Gateway was failing to work.  After a search I found the problem and fixed it.  See the document here, Tableau Server Gateway Services fail to start or TSM becomes inaccessible on Windows Server 2016 (by Matthew Brimer), and my comment in it (it's the first comment).  Once I fixed the Windows registry value on the four offending servers everything worked perfectly!  I then ran my server setup script (custom settings, banner images, etc.), proxy/gateway setup script, and data source driver installs on all of the new servers since they require them.

 

Here's how my environment looks at the time of this writing:

Here are the steps I had for updating our TEST environment last week:

  1. Began shortly before 8AM.
  2. Add core license TS**-****-****-****-**** back to TEST - DONE
  3. Copy PROD backup to TEST - DONE
  4. Restore repository on TEST - DONE
  5. Remove Guest account from server (all sites) - DONE
  6. Turn off AD synchronization, disable all Schedules, and suspend all Sites - DONE
  7. Create Delete .csv files (using the DEMOTE sheet!) - DONE
  8. Execute on TEST D:\temp\delete-script.bat - DONE - 100 minutes
  9. Refresh extract on Tableau - Tableau License Cleanup.twbx - 12PM -> 12:45PM
  10. Publish wkbk to area51 site (overwrite) - DONE
  11. Make adjustments to config file - DONE
  12. Change directory to D:\temp\Tableau_License_Cleanup\Tableau_License_Cleanup - DONE
  13. Execute on TEST D:\temp\Tableau_License_Cleanup\Tableau_License_Cleanup\Tableau_License_Cleanup.exe - 4.5 minutes
  14. Remove core license & restart - DONE
  15. Ended 1:08PM

 

Notice the entry on line 5?  Well, if Guest Access is not turned off then after the conversion there will be a [non-functional] Guest user account!  So it's best to remove that option from the TS (Tableau Server) before moving on.

Lines 7 & 8 deal with the TLC (Tableau License Conversion) workbook I got from Rafael.  I decided to not keep Unlicensed Users on our TS so anyone demoted to Unlicensed is simply deleted from the Users.

Line 9 is refreshing the TLC connected to the TS to be converted, in our case, the TEST environment.  It is then published to a project on the TS (line 10).

A script provided by Rafael, as part of the conversion process, is executed in line 13 and it uses the packaged TLC workbook to adjust the site role for every person on every site!

 

The longer part of this is then verifying what's in TEST appropriately matches what is in PROD.  This means for each site I need to make sure that the new site role is equivalent to the original site role.  Due to limited licenses I also have to make sure only those that fall in the designated time period are included, with the others being deleted/removed from the TS.

So, do the results from TEST... ...match PROD... ...after the conversion criteria have been applied?  Unlicensed is sort of ignored here, but it also indicates that there are unlicensed users remaining on the TS because they still own content!  However, after almost a day of verification I can confidently say that, yes, the conversion was successful!

 

What's next?  I have to talk with our Tableau technical sales representative about building a new, better-performing hardware environment to handle our high-extract, high-usage workload now that we won't be limited on CPUs.  Also necessary is getting the subscription-based licenses out to our Tableau Desktop license key holders.  Somewhere amongst this I need to convert our PROD environment to role-based licensing -- I've already installed the licenses -- but I need to make sure those that rely upon Guest Access are prepared to lose it and make the necessary changes.  Ugh.

It's been a busy few days!  I have my regular work of supporting end users in Tableau and Alteryx but also doing server duties there and in the other BI environments.


 

Okay, these last few days were spent figuring out what site roles I have for my core-based users and their equivalents in the upcoming role-based environment.  Rafael is my Tableau Professional Services contact and he's been great!  He sent me some TLC (Tableau Licensing Conversion) workbooks that we connected to my TEST Tableau Server (TS) environment and pulled in the data.  We were able to see the distinct counts of site roles by month as well as the total active users.  I also went to our Tableau Customer Portal and downloaded an Excel workbook of all our licenses and joined that to the data set so we could see who currently has a Desktop license.

 

Because the data on TEST was old I performed a restore on it using the previous night's [automated] backup from PROD so our data would be a bit more current.  After that we used his TLC workbook to produce a .csv list of users per site that we ran through a simple tabcmd script to be demoted in their licensing role.  Next we produced a .csv list of users per site who were Unlicensed and ran it through a simple tabcmd script to delete them.  That alone reduced our registered user count by over 26,000!  It should be noted that the workbook needed several hours of tweaking and testing before it was pronounced correct for my demotion and deletion operations.
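For the curious, the delete step was nothing exotic.  A minimal sketch of the kind of tabcmd calls involved, with hypothetical server, site, and file names:

tabcmd login -s https://tableau-test.example.com -t SiteA -u admin
tabcmd deleteusers "D:\temp\delete_SiteA.csv"
tabcmd logout

Repeat per site with that site's .csv file.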

 

Okay, so what is meant by "...demoted in their licensing role"?  The current criteria for demotion to Unlicensed for us is:

AND also filtered by:

This means if they are already Unlicensed they will remain so AND we want to purge everyone EXCEPT proposed admins and Creators.

 

I wrote this blog posting a couple of days ago, but Alteryx Server crashing, a couple of mandatory meetings, and day-to-day catch-up all cut into my time and put me behind.  Oh yeah, I have a family too so there's that.

Let's say there's a workbook and the author/Owner subscribes 5 people to a view (i.e. sheet) in it.  One of those people later becomes Unlicensed.  When the subscription runs the Unlicensed person will, naturally, not get an email, but will the other 4 subscribed Users get an email?  Or will it fail for them as well, thus not giving them an email?

 

I'm looking at the Subscription Wait Time tab in the tabbed admin views (the bottom chart labeled "How long did subscriptions wait before execution?") with a filter change to show me failed subscriptions.

 

This allows me to view the underlying data when I select a point (Opinion:  this should be available without us having to hack; there's no reason for Tableau not to include it)...

...and I see this in the View Data...:

It appears that the item in the first row actually failed but only the second row is displayed on the viz; however, every Note uses the unhelpful word "Some".  Does this mean all 4 subscriptions failed to be sent, or ???

Looking into the public.background_jobs table in the TS db, I'm able to find each entry that failed and succeeded:

SELECT * FROM public.background_jobs WHERE job_name = 'Subscription Notifications' ORDER BY started_at DESC;

Okay.  What this tells me is that the viz doesn't present ALL subscription info, thus leading to this deep-dive confusion.  Here's a snippet of that output:
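If you only want the failures, a slightly narrower query does the trick.  A sketch, assuming the finish_code column in the workgroup schema (0 = success, 1 = error):

SELECT title, started_at, completed_at, notes
FROM public.background_jobs
WHERE job_name = 'Subscription Notifications'
  AND finish_code = 1
ORDER BY started_at DESC;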

The red boxes confirm the errors shown in the details of the viz.  Note that the green box shows that one of the subscriptions was successful, thus the subscription wasn't a total failure.  However, on the viz I could not find "Old Trucks".  Turns out it was under the mark labeled "VET Live":

View Data...

That mystery is solved.  The viz groups by the Wait Time.  My guess is that this keeps the number of marks down so the viz will render more quickly.  I don't agree with this choice as many data points are hidden, meaning if you don't directly see the mark on the viz then you need to View Data... and click the Full Data tab to check whether the item you're looking for is there.

Prologue

I'm going to blog about our journey between the two very different licensing schemes that Tableau has for on-prem environments:  Core and Subscription.  I'm the lead Tableau Server administrator for our company and I'm also responsible for administering servers for Alteryx, Power BI On-Prem, and Alation.  Luckily I have two great coworkers in India who help me greatly, so my time isn't a total flurry of spinning plates and juggling *****; however, I'm still very busy, so these blog entries will not be heavy with finely-grained details.

 

Brief Backstory

It finally happened, to the joy of our ever-persistent and forever-awesome Tableau sales representative*.  Behind the scenes my manager and our sales rep completed a nice deal where we will convert over from our core-based environment -- which is very overworked! -- to a subscription-based environment.  The largest benefit of this conversion is that we will no longer be constrained by hardware.  My customers will benefit the most from this and I hope the ones on the east coast will experience improvements in access and rendering speeds.

 

The one big concern I have is our dependence on the Guest Access feature of a core-based environment.  We are part of a global company (our parent is in Germany) and we do get other geographies coming to our Tableau reports from time to time.  The Guest account is also used at some kiosks and locations that display vizzes to a group of people on a continuous basis.  So moving to a subscription-based environment, where one needs a license in order to view content, is going to be a tough "data is for everyone" pill to swallow.  However, because of our conversion process and the need for Guest Access, we will have a totally separate core-based Tableau Server -- though it will not be self-service because it will be a basic 8-core server.

 

Today

Part of the package we paid for was Tableau Professional Services -- I'm glad my manager wisely did this.  They have plenty of experience and best-practices/recommendations with such exercises and they will be helping me through all of this, from architecture recommendations to making the switch to subscription-based licenses.  We are in a time-crunch and need to get this going quickly, so within a few days I was assigned a Tableau Solutions Architect who will be working with me during my office hours.  We can have a person on-site or teleconference, our choice, so I decided to use teleconferencing.

 

We had our first WebEx meeting this morning and to keep in the spirit of this huge step forward I wore my Tableau Ambassador socks   (see attached image).  Our conversation was just over 1.5 hours and our battle plan is getting formed.  More to come...

 

 

 

*Unfortunately, the day the purchase order was approved and we got the new licensing, Salesforce assigned her to another account, with no reason given to us.  Needless to say, I am very unhappy with that decision.

This year of 2019 has been gloomy for my Tableau world, as I've been stuck on v2018.1.4 for a year.  Why so far behind?  I have to move from our current but "old" VM (Virtual Machine server) running Windows Server 2008 to a new VM with a current Windows Server 2016 OS; the VM team could not upgrade the OS on the existing box, plus doing so would wipe out everything on it (their words).  The new environment was not set up exactly like my current production environment as I had requested, so it's been a total hassle, with me having to work with an IT department that leaves much to be desired.  However, there are a couple of superstars who helped me, and progress is happening.  Another time loss for me was taking on the additional server administration of three new environments:  Alteryx Server 2018 (which is horrible software), Alation, and on-prem Power BI (because we have a couple thousand SQL Server databases so it's "free"...whatever).

 

First, just a recent version upgrade test

I have the VMs set up and after various tests I'm finally ready for a real test drive.  Obliterating the primary and secondary servers, I installed 2019.3.0 raw, meaning no production backup restored, just the basic install and nothing more.  I want to keep it simple.  I installed 2019.3.0 first and there was nothing noteworthy about it.  Next came the first real test (it had originally failed prior to some VM tweak):  the upgrade from 2019.3.0 to 2019.3.1 took just under 20 minutes.  Spinning up TSM was about 5 minutes.  So in less than 30 minutes I had upgraded versions!  That's fast compared to the versions prior to TSM.  Uninstalling 2019.3.0 took under 30 seconds.  Nice.
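For reference, the upgrade itself is kicked off by the upgrade-tsm script that ships with the new version's package.  A sketch assuming default install paths -- the versioned folder name varies by build, so <version_code> is a placeholder:

cd "C:\Program Files\Tableau\Tableau Server\packages\scripts.<version_code>"
upgrade-tsm.cmd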

 

During the upgrade script I refreshed my TSM screen and got this:

Panic!  But I reminded myself that TSM was being upgraded so it would make sense that it would "fail".  I sat on my hands and waited.

 

After several updates in the command window I clicked the refresh button in the TSM browser window and got this:

Okay, that makes sense.  I like that kind of error message.

 

The script popped this up...

...and I thought, great, this is gonna be a long one, but it was only a couple of minutes and it continued on.  Yay!

 

Restore a backup

Next, I restored our production backup.  That took an hour sitting at the 24% mark, 20 minutes at the 27% mark, etc.  Here's an interesting bit:  the necessary services required for indexing were spun up at the 75% mark, and here they are:

 

So we get an idea of the basic processes necessary to perform indexing.
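For the record, the restore itself is a single TSM command.  A sketch with a hypothetical backup file name (the .tsbak needs to sit in the configured backup/restore basefilepath):

tsm maintenance restore -f prod-backup.tsbak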

Okay, from stop to start, looking at it from TSM's viewpoint, the total restore time was less than 100 minutes.  The complete output:

According to the Server Disk Space report my primary is 295GB and secondary is 168GB in size.  I happened to open the Background Tasks for Non Extracts and saw this big surprise!

Over 1.6K pending tasks!  And this is after an hour of the restore!  Scrolling through the list there was one line for One Time Enable Flows Site Settings OnPrem, one line for Enqueue thumbnails upgrade, and all the rest were Upgrade thumbnails.  Interesting.  Another reason for performing the upgrade during off-peak times.

 

Next?

Actually, now I will blow it all away, downgrade to our current version, restore a backup, then upgrade to 2019.3.1 along with all the server config tweaks we use.  I expect things to flow smoothly now so there's no need for me to document it.  If anything surprising pops up I'll mention it.  If you have questions I highly recommend asking them in the Server Administration forum, where you'll have a much better chance of someone answering...and probably someone more knowledgeable!

Ciara Brennan posted a link to an interview with current CEO Adam Selipsky, "Tableau Software - tackling the data mountain" in GigaBit magazine, and focused on the community piece of it.  To recap:

 

From Adam Selipsky in the interview by GigaBit magazine, Tableau Software: Tackling the data mountain | Cloud Computing | GigaBit,

“The company also has this really unusual and unique asset – and that’s the Tableau community. It's an incredibly energizing group to be a part of and frankly, it's also an incredibly important asset for the company. It's not easily matched by spending money on it. It’s something I think has been very carefully nurtured over a great number of years.”

 

This is my commentary on it:

 

Adam is correct in that having a great community is not something that can be created by just throwing money at it and it does take time.  Now, that's not to say money is completely out of the equation!  Funding is necessary for hiring quality personnel for the social spaces and, in our community forums, for decent software.  I'm sure monies are needed in other aspects of creating, building, and maintaining a customer community as well -- Zen and Ambassador programs, Tableau Conferences, swag, etc.

 

For over 20 years I've been in a lot of community forums (email lists and bulletin boards before forums existed) -- both for my hobbies and for work.  I even set up and admin'd a forum on my server for a buddy's automotive repair shop.  While every forum will have its share of experts, it's the sense of positive community and friendships that can pull people together to build something that is greater.  I left a leading, subject-specific enthusiast forum because I finally had enough of the negativity.  Having a thick skin is one thing, but needing to be armored for most conversations is ridiculous!  Establishing and maintaining a friendly environment definitely takes more effort, predominantly on the house side of the equation:  the admin/moderation team must have rules that they enforce and model (i.e. walk the walk).

 

What I find unique about the Tableau community forums is that they are highly successful despite being owned & operated by Tableau.  The majority of forums are started by an enthusiast, and if one is created early enough it tends to be the leader; the one most people will turn to.  The enthusiast-owned forums typically run pre-built forum software that is regularly updated, make use of custom settings, and quickly establish a moderation team.  On the flip side, corporate-run forums are typically sterile:  cheap software, minimal moderation, and minimal answers from staff with little product knowledge, relying on canned scripts and end users to fill the gaps.  They tend to generate more frustration than answers when it comes to questions not readily found in the documentation.  Tableau was smart to use a dedicated forum package, and after a few years they finally implemented forum moderators.

 

While I believe the gamification (points & badges) helps engage and encourage participation, I think there was a small yet active cohort of Tableau Desktop users who were the spark that started the strong community we see today.  Their posts -- gathering details and then explaining their answers, being professional yet personable and caring -- set the tone of the forums.  Setting a consistent example and encouraging others to learn and increase their own skills by participating in answering questions really got the "community" ball rolling!  I think this was the "special sauce" that gave life to the Tableau community and what makes it unique compared to other corporate-run forums.  With the inclusion of additional social media outlets the community grew, and TC attendance -- putting faces to the names along with face-to-face interaction -- accelerated the maturation of the community as we now know it.

 

Tableau's community is a rare asset that not only benefits the company but the users of the products it produces.  It's a synergistic system that other companies could emulate to positively differentiate themselves from competitors and provide a more satisfying user experience.

A question was asked today on Twitter about the handling of production Tableau Server environments.  There were a variety of responses and they varied according to the needs of the particular company, thus there was no -- and never will be! -- one singly correct answer.  However, I did notice that there were those who were not employing their Tableau Server environment in accordance with Tableau's EULA (End User License Agreement).  Tableau only allows the following:

  1. One production environment:  Where content (workbooks, data sources, etc.) is published and consumed in ALL ASPECTS, be it QA/testing, user acceptance, and production usage.
  2. Two non-production environments:  Where software testing can take place.  For example, new software version testing and/or beta testing.  I believe very specific trouble-shooting can happen here if it would normally impact the production environment.

(Like how I used a numbered list that matches the EULA requirements?  )

 

In a nutshell

What this means is that Tableau does NOT allow the common SDLC usage of non-production report testing (QA/DEV) in one server environment and then pushing production-ready reports into another, separate [production] server environment.  All QA/DEV and PROD work must happen in a single Tableau Server environment!

 

Oops and now

Well, we used to follow the common SDLC as that is what is used by our Cognos team and what the prior BI supervisor expected.  It wasn't until some time later, during a move from a single node to a 2-node environment, that we discovered we were not following the EULA.  Our Tableau sales reps were very understanding in allowing us time to make the change.  Luckily, due to the self-service process already in place there was very little consternation and my end users took to the new process quite rapidly.  When our 2-node environment was switched over, becoming our production environment, we implemented our new Tableau BI paradigm, the continuously-improving life cycle, a.k.a. CILC.

 

How CILC came into being was driven by me but decided by my end users.  Yep, I gave all 200+ publishers the option to vote on the method they wanted to employ!  I've attached the two page document I sent to my publishers, SDLC_Proposition.docx.  I briefly explained what was going to happen and gave five options on how to begin the new SDLC.  They were given ample time to respond and provide any feedback.  These were their options:

  1. Have a QA Site and a PROD Site
  2. Have a QA Project and a PROD Project
  3. Use a suffix or prefix on workbook names to designate QA or PROD
  4. No QA on the Tableau Server at all.  Instead, have a "continuously improving", or agile, PROD Server.
  5. A combination of any of the above.

 

The overwhelming vote was for #4 with only one needing to have separate QA and PROD areas (I had them use option #2; this was before child Projects were implemented in TS).  Technically speaking, there's nothing stopping Project Owners from implementing options #2 (as child Projects in our latest version of TS) & #3 if they wanted and this works just fine within the spirit of CILC. 

 

I thus began working on what "continuously improving" meant and how to convey it to my end users.  The below document is what came about after the voting.  Since my end users are notorious for not reading emails and documentation...or at least not reading them comprehensively...I had to keep it short, providing just the necessary info.  This is one of the first documents my users need to read when they are granted the site role of Publisher.  It's also attached if you want to copy and use it in your own organization.

 

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

 

The Tableau Self-Service & CILC Paradigm

By Toby Erkson, Tableau Administrator, February 2018

With our multi-node environment we are truly an organization that functions in the self-service and continuously-improving life cycle methodology.

What does this mean?

Well, by using Tableau you are already involved in self-service BI (business intelligence reporting).  It basically means you don’t have to rely on another team or person to create your reports because you can easily do it yourself.

What does continuously-improving life cycle mean?  This is where we’re making a slight change to what is called the software development life cycle, or SDLC, which is how traditional BI reporting works (think Cognos).  This is basically what it looks like:

Many of you are used to creating a workbook (Design & Develop), publishing it to the QA Server (Develop & Testing), getting end user acceptance (Testing), and then publishing to the PROD Server for production use (Deploy & Maintenance). Well, this is going to change just a little bit…

The QA Server as you know it in the SDLC is gone.  This is 100% driven by the end user license agreement (EULA) from Tableau because they consider how we were using the QA Server as a production environment and the EULA only allows for one production environment[1].

 

So what you will now be doing is what I’ve termed continuously-improving life cycle, or CILC for short (pronounced silk).  You will develop the workbook in Tableau Desktop and when it gets to the point where it’s working for the end user you will publish it directly to the production server (PROD Server).  The end user can use it immediately and if it needs updating/tweaking/maintenance you will make the necessary changes and publish over it, thus creating a revision history.  Rinse and repeat.  This is similar to agile software development and is actually not uncommon in other companies that use Tableau for enterprise reporting.

 

Not too crazy about this? Does it seem too millennial?  Fear not!  You have options to revert back to a more traditional SDLC:

  • Add a suffix or prefix to the workbook name that designates it as a QA report.
  • Set its permissions such that only certain people can see and interact with it while it’s in the UAT phase.
  • Create a sub-project for QA that has greater permission restrictions.  Once the workbook is ready for production usage you simply move it from the QA sub-project to the production project.
  • Or implement a combination of the above options.

It’s up to you or your Project Leader or Project Owner to decide upon the particular development life cycle you wish to implement, SDLC or CILC, as there is no wrong way thanks to Tableau’s fast and flexible reporting environment.

-- End Of Document --

 


[1] We are allowed two non-production environments.  For example, testing Tableau Server software before implementing it into production.

Check this out:

 

This was my production Tableau Server with a 1.99TB hard drive that had only 403GB of free space.  Wow, that is a poo-poo ton of usage!!  I was getting disk space warnings every morning when the automatic backup process was running.  While we are an extract-heavy environment, I couldn't believe that I was actually running out of hard drive space when the biggest extract was 9GB and the majority of them were 1GB or less!  What the heck was going on?

 

I mentioned my situation to our Tableau tech representative and he asked if the drive was getting defragged and if cleanups were being performed.  I knew the drives on all instances were being defragged so that wasn't the issue.  Because I didn't want to lose TS (Tableau Server) history for debugging, I wasn't performing cleanups.  I disliked the idea of doing a cleanup because of the information that is lost:  archiving the files (.zip) is very inefficient and not ideal, and a db that is continuously updated would be a better alternative.  Also, performing a cleanup with the TS stopped isn’t a good option as that, of course, would mean the TS is unable to do any reporting work.  Since it’s a production server running 24x7, any downtime is totally undesirable and will impact someone, somewhere.

 

However, since I was gradually losing disk space I had to do something...

 

So I bit the bullet and ran a cleanup while the TS was running:
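(We were on a pre-TSM version at the time, so the command was simply tabadmin cleanup -- run while the server is up, it clears old log files and the http_requests table.  On TSM-era versions the equivalent is tsm maintenance cleanup.)

tabadmin cleanup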

 

20 minutes later the cleanup was done and now here's what my free space looks like:

 

  WOW!  That was a LOT of log file and HTTP requests cleanup goin' on!!

 

Here's how my disk space usage looks now:

 

Quite an improvement, right?

 

An embarrassing but learning moment for me.  Performing a regular cleanup is a necessary routine for the TS admin.  How often depends on one's particular environment, but once a month seems reasonable and that's what I was previously doing on our QA Server.

 

How to preserve the history in an efficient and easy-to-use format is a whole other subject; search the Server Admin forum for suggestions or leave a comment below on how you do it

Looking over one of my blog posts, Tableau Server licensing & hardware upgrade , I can tell I was frustrated with my lack of server knowledge, both in terms of hardware and Tableau software.  Brandyn Moody did an admirable job of stating facts and providing a friendly & helpful reply -- something that I still need to work on

 

As a followup, our Tableau technical account representative has been getting Tableau Server health checks from us for the last few months and showing how things are trending in our environment.  We've also upgraded our hardware to 24 total cores, moved to a two-node environment with a dedicated 100% Backgrounder worker, and just last month upgraded to 10.5.2.  The server health checks have been helpful in that they show a "before and after" of our whole environment with a focus on how it affects the end user, so we can see what is doing better (or worse), how things are trending, etc.  If you don't have a technical account rep, ask your sales rep about getting one -- and about a Tableau Server health check -- if you are lacking Jedi server skills.

 

Not to go unmentioned, our sales rep stays in contact with me/us to make sure support tickets are getting the appropriate response times & priority, that we're aware of any training and events, and works with our Tableau technical rep to keep me, my users, and my manager all happy and productive.  Thanks, Tableau, for being easy to work with.

 

Some examples from the health check:

Purpose*

This unlocks the postgres database on the Tableau Server, allowing full access to all of the tables in the database and not just the views. This gives the Tableau Server administrator visibility into the database. Please note that this is not supported by Tableau.

 

Get the password for the postgres db

This tip comes from Zen Master Tamas Foldi and you need to use it in order to get the password for the access operation below.

To get the pgsql password use the following command on the Tableau Server you will be accessing:

tabadmin get pgsql.adminpassword

In the blurred-out text above is where you'll get the password for your db.

Access for the postgres db

This hack is VERY DANGEROUS and UNSUPPORTED by Tableau! Unless you are VERY EXPERIENCED you should only access the database tables using Tableau Desktop or other read-only tool! This is not the preferred method of gaining access to the Tableau server database. Use at your OWN risk!

How-To

1. Open a command window (DOS prompt) and go to the PostgreSQL binary directory for your Tableau Server, the …pgsql\bin directory (most installs will look like C:\Program Files (x86)\Tableau\Tableau Server\8.0\pgsql\bin).  The DOS command prompt in the PostgreSQL binary directory:

2. Now ‘open’ the PostgreSQL command prompt by typing in this:

psql -p 8060 workgroup tblwgadmin

Please note that if you use a different port then you need to change the 8060 value to the value of the port you use.

Here’s what my result looked like:

3. Notice that you need to enter a password.  That's why you got the password mentioned in the beginning.  Enter your password and then hit the 'Enter' key.  Note that the cursor won't move and you won't see any text, so type carefully.  I recommend using copy/paste.

 

4. At the workgroup=# prompt you can now execute commands:

For example, enter this command to change the role of the tableau user to have READ and WRITE access:

alter user tableau superuser;

After the above command executes you’ll see “ALTER ROLE” display and then an empty prompt:

Or in the situation I had, delete a custom view from the database that was no longer needed using the DROP command:
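(The view name below is purely hypothetical -- substitute the custom view you need to remove:)

DROP VIEW my_custom_view;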

5. That's it.  When you're done, close the DOS command window.  Either CTRL + C or \q (backslash followed by the letter Q in lower case) will exit you from the workgroup=# prompt and put you back at the cmd prompt.

 

Remember, you’ll now have the ability to write to your database and delete things!  This is VERY DANGEROUS and UNSUPPORTED by Tableau!  Unless you are VERY EXPERIENCED you should only access the database tables using Tableau Desktop or another read-only tool!  Make sure you back up your database regularly.

 

 

 

*Adapted from my Tabwiki document.

I saw it today on Twitter.  Someone who I consider a key player at Tableau, a self-admitted Tableau stalker, left the company.  Just time for them to move on.  We know the drill.

Last week...or at least sometime between now and the first of the year...two other great Tableau employees I knew had left the company.  There was a hint there are some more.

 

I grew up during, and into the end of, the "work for one company until retirement" paradigm.  As my work life has progressed that paradigm is no longer the expected norm.  Now, thanks to social media and job-focused media like LinkedIn, even people who are not looking for a job are still getting offers.  Having worked for a few contracting agencies this happens to me -- I stay in contact with them juuuuust in case, as I learned early in my career that corporate loyalty to employees is an illusion, subject to immediate dismissal at their whim.  I, too, left a great company because after a time there I knew my position was a dead-end, so when a previous manager reached out to me with a professional skills growth opportunity I went for it (and thus was introduced to Tableau), so I get it.

 

It saddens me to see such good people leave because I'll miss them.  It's like growing up with a friend during primary school and in high school their family moves out of town.  It's like having a sibling move out of the house.  Their absence is felt in the knowledge void left behind.  You get used to having these people around and using their knowledge to help others, heck, to help yourself, too!  The loss is magnified by the deeper product and tribal knowledge that goes with them.

 

I wish them well.  I know that the companies that now have them have gained an excellent resource.  Just know that you are missed.

 

 

Note:  I'm not diminishing their replacements.  This has nothing to do with them.  There's no doubt Tableau will do its best to replace them with great folks who could possibly <gasp!> be even better, and I do look forward to meeting and interacting with them.

 

-- Twitter headstones --

You know, being a relative newbie to the server side of IT can be frustrating because there are so many aspects to it, and networking types of issues are my bane.  Firewall, proxy, VIPs, SSL, etc. are all different and have their own teams [at least where I work], but they can seem like the same thing from an inexperienced viewpoint.

 

While testing our up-coming 2-node Tableau Server environment I was having an issue with the "pretty" VIP (Virtual IP) that my end users would use to reach the Tableau Server.  Screen results from using the VIP address were very random.  Sometimes the login page would display, sometimes not.  Sometimes it would show the list of Sites, other times just a partial list.  After maybe logging in I would get some workbooks.  Clicking on one would result in "content not found" or "page could not be accessed" types of errors.  Web page rendering performance was often terrible.  As part of the load-balancer setup I carefully followed the instructions from Add a Load Balancer, several times in fact, but still no joy.  I set up a case with Tableau Support, sent them my log files, and waited.
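For context, the Add a Load Balancer settings in that pre-TSM era boiled down to a handful of tabadmin keys.  A sketch with hypothetical host names and IPs, not my actual config:

tabadmin stop
tabadmin set gateway.public.host "pretty-name.example.com"
tabadmin set gateway.trusted "10.0.0.10"
tabadmin set gateway.trusted_hosts "lb1.example.com"
tabadmin configure
tabadmin start

None of these settings turned out to be the problem, though.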

 

Two weeks went by and no reply.  I pinged them -- because normally they are very quick to respond -- and was told an engineer was looking over my case.  Sigh.

 

My manager, who is much more skilled with networking than I, asked if the F5 was pointing to both the primary server and the worker server.  I said "both".  He suggested that the F5 team point it only to the primary server since there was no need for it to point to the worker server.  So I confirmed with my contact that, yes indeed, the F5 was pointing at both (a load-balancing function), and asked them to have the "pretty" VIP point only to the primary Tableau Server.  They made the switch within minutes and I tested it.

 

BOOM!  It worked!

 

Lesson learned for this newbie:  make sure load-balancers, proxy redirects, etc. point just to the primary, and make sure you emphasize that requirement to whomever your contact is if you aren't doing it yourself.

[Image: a meme contrasting the wisdom of old age with the ignorance of youth]

How...subtle

 

 

A few mornings ago I was going through my Twitter feed and saw a reference to an article that looked good.  I'm not a huge believer in certifications and make no secret of that if you search the forums.  It's mostly because I've seen so many people get certified who don't have any -- or not enough -- real-life experience.  I've been in the business intelligence realm for over 20 years so I do have the experience and wisdom that comes with living it, not simply reading about it.  The author is someone I respect who could give me a look at the other side of the story, so I read their opinion in their blog post.  They did a good job, but I tweeted my viewpoint.  Here's part of our conversation (which Twitter absolutely SUCKS at):

For both of us a real conversation would have been nice to clarify (argue? ) our viewpoints.  I let my subconscious sit on this and decided that I really do disagree with the two rebuttals (which were thought-provoking), so here are my thoughts, and you're free to disagree.

 

Experience vs. Study

Okay, why do I emphasize on-the-job experience and claim it's better than "individual practice and mastery"?  While the whole article is good, it's item #2 that makes the perfect point in this piece by Chris Love, Quality Assurance: A dirty word for Data? 7 tips for getting QA right.  There is only so much one can plan for or practice in a closed environment.

Here's a real-life example

I took Tae Kwon Do years ago and in class we learned our kicks, our blocks, our punches, as well as close-in combat techniques.  We would practice and practice and become very good with them.  But practice is different from the real world, so we would also have the occasional sparring match, and that is where things fell apart for some students!  Actually being in a situation where you don't know exactly what your opponent is going to do is quite different.  You can anticipate, but you may anticipate wrong.  You could even perform the move correctly but do it too slowly, or at the wrong location, or without enough strength, and fail.

 

Fail!?  How could you fail if you can demonstrate a perfectly executed example?  Because reality is different from study, that's why.  As such, in the workplace things come up that you didn't or can't expect.  Which brings us to...

 

A lack of vision

Okay, so the slash in "chaos/random factor" wasn't interpreted as I expected; it's meant to indicate two words for essentially the same thing -- trunk/boot when talking about the rear luggage space in a car, wrench/spanner when talking about a mechanic's tool for working with bolts, etc.  The word "chaos" seems to be defined differently between us, too.  Either way, the randomness of humans is chaos:  the aspects one can't plan for, can't see coming.  When working with Tableau Desktop and Tableau Server I've had users come to me with questions that I couldn't immediately answer.  Does that mean I have a lack of vision?  No, it means I can't plan for every single eventuality.  NOBODY CAN.

 

As a parent -- and I've found pretty much anyone with children, like parents/guardians and our under-appreciated teachers!, can relate to this -- children are a perfect example of chaos, and no amount of planning, book reading, internet searching, etc. will give one the "vision" to counter 100% of the issues they may create.  That is not some shallow statement; it has more depth than most think.  Having been childless for decades and suddenly becoming a parent has given me this experienced knowledge.  Ask a parent and they can confirm it, usually with a reminiscing smile.  Oh, and every child is different, even among siblings, so what works for one child may not work on another.

 

Think of getting your first driver's license and being on the road by yourself for the first time versus how you are as a driver now -- we all have stories of some of the silly and terrible things we've seen (or done) while driving that we could never dream up or simulate.

 

Experiencing subject-related chaos leads to deeper knowledge

Something happens -- a piece of chaos wedged in the teeth of your machine, something your "vision" was unable to foresee despite all your planning and studying of books -- and now you have to ask an expert and/or delve deeper into a subject than you thought necessary, found boring, or forgot due to lack of use (which happens more than you think).  It's the solution you get after those "I don't know but let me get back to you" moments where I believe sticky knowledge is gained.  By "sticky knowledge" I mean information that isn't quickly forgotten.  It's that "oh, that happened to me once and I had to do this..." knowledge.  This knowledge stays with you as it's gained by on-the-job experience, and it gives you extrapolated knowledge:  that leap from A to C without needing step B.

For example, you can see a file on a network drive but you can't open it, so you contact the folder administrator and explain your issue.  They tell you that you don't have permission to read files.  Why weren't you already given access?  Well, you are the bit of chaos that the administrator didn't know would be needing the access (very common in an enterprise environment).  You ask for the permission to be set on the network folder to allow you to read files and they grant your username the permission.  Now you can read files from the network folder.

Weeks later, a Tableau Desktop publisher tells you the Tableau Server is broken because when they refresh their new report extract on the Tableau Server the data does not come through, even though they used the proper UNC (Universal Naming Convention) file path to their Excel workbook in one of the many corporate file directories.  Because of your prior experience with being unable to read data in a network drive folder, you reference your sticky knowledge to surmise that the Tableau Server doesn't have Read permissions on the folder where the Excel file resides.  This experience allows you to troubleshoot more quickly, reducing the downtime of having to post a question in the forums or submit a ticket to Tableau Support.

 

Oh, by the way, the tidbit of knowledge I just shared isn't found in the documentation, so if it were part of a question on a certification test your chances of getting it wrong would be greatly increased.  Just sayin'...

Update

I wrote the original post yesterday but due to delays I didn't publish it until today.  I've played around with this and came up with an improvement.

 

Onward

Seeing that Tableau translated TRUE to 1=1 -- further complicated by Tableau putting logic around it via a CASE statement -- why let a CASE statement slow things down when I'm only interested in one condition of its output?  The only output I care about from the CASE statement is 1, so I changed the left side of the Join Condition from TRUE to just 1:
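Reconstructing from memory, the generated join clause went from something roughly like the first line below to the second -- a sketch, not the verbatim SQL:

Before:  (CASE WHEN (1=1) THEN 1 WHEN NOT (1=1) THEN 0 ELSE NULL END) = "t0"."$temp0"
After:   1 = "t0"."$temp0"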

 

 

When I looked at the SQL using Convert to Custom SQL, that second CASE statement was gone!

...
"ITEM_LOC_VENDOR_DIM_V"."CURR_TS" AS "CURR_TS",
(CASE WHEN ("ITEM_LOC_VENDOR_DIM_V"."PRIM_ALT_VNDR_CD" = 'P') THEN 1 WHEN NOT ("ITEM_LOC_VENDOR_DIM_V"."PRIM_ALT_VNDR_CD" = 'P') THEN 0 ELSE NULL END) AS "$temp0"
  FROM "CUSTOMS"."ITEM_LOC_VENDOR_DIM_V" "ITEM_LOC_VENDOR_DIM_V"
  ) "t0" ON
  (
   ("RVC_TOOL_NAFTA_V"."CO_CD" = "t0"."CO_CD")
   AND ("RVC_TOOL_NAFTA_V"."ORG_CD" = "t0"."ORG_CD")
   AND ("RVC_TOOL_NAFTA_V"."PLNT_CD" = "t0"."PLNT_CD")
   AND ("RVC_TOOL_NAFTA_V"."SUPLR_CD" = "t0"."SUPLR_CD")
   AND ("RVC_TOOL_NAFTA_V"."CHILD_ITEM_NO" = "t0"."ITEM_NO")
   AND (1 = "t0"."$temp0")
  )
  LEFT JOIN "CUSTOMS"."PURCHASING_BUYER" "PURCHASING_BUYER" ON
  ( ("t0"."PPF_BYR_CD" = "PURCHASING_BUYER"."BYR_CD") AND ("t0"."SUPLR_CD" = "PURCHASING_BUYER"."SUPLR_CD") )

 

See that?  I went from a time-consuming CASE statement to the simpler condition (1 = "t0"."$temp0").  I canceled the custom SQL and ran the extract.  It ran almost 30 seconds faster!