Yes, it creates a Tableau backup (.tsbak).
It also uses a completely different methodology, which enables a very fast backup.
I appreciated everyone's awesome ideas and discussions. We are doing a few things mentioned by Matt, but more has to be done. I will still look for an alternative backup approach. More questions:
a. Why is site import/export faster than backup/restore? I thought it would be the same zip/unzip process.
b. Does anyone know how to stop publishers from publishing workbooks that exceed a specific size (like 500 MB)?
Pretty much what Toby said... but if baseball bats are not allowed in your workplace, you can also try querying the Postgres repository for a list of the offending workbooks and authors on a regular basis (there's a rough sketch of that query below), then:
- Send those authors annoying emails periodically until they fix the size problem
- Write a program using the REST API or tabcmd to delete oversize workbooks periodically. This is a bit extreme, so you probably want to send some warning emails and give authors time to remedy the problem before doing this...
Obviously the above doesn't stop them from publishing massive workbooks, but they will get the idea soon enough!
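If it helps, here's a rough Python sketch of that repository query. It assumes you have enabled the built-in readonly Postgres user (via tabadmin dbpass) and that your version's workgroup schema has the usual workbooks / users / system_users tables; the hostname, password, and 500 MB threshold are placeholders to adapt.

```python
# Rough sketch: list workbooks over a size threshold together with their
# owners, so you know who should get the "annoying emails".
# Assumptions: the readonly repository user is enabled (tabadmin dbpass),
# and the workgroup schema matches the usual workbooks/users/system_users
# tables -- verify against your Tableau Server version.
import psycopg2

THRESHOLD_BYTES = 500 * 1024 * 1024  # placeholder: 500 MB limit

conn = psycopg2.connect(
    host="your-tableau-server",         # placeholder hostname
    port=8060,                          # default repository port
    dbname="workgroup",
    user="readonly",
    password="your-readonly-password",  # placeholder
)
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT w.name, su.friendly_name, su.email, w.size
        FROM workbooks w
        JOIN users u         ON u.id  = w.owner_id
        JOIN system_users su ON su.id = u.system_user_id
        WHERE w.size > %s
        ORDER BY w.size DESC
        """,
        (THRESHOLD_BYTES,),
    )
    for name, owner, email, size in cur:
        print(f"{size / 2**20:8.1f} MB  {name}  ({owner}, {email})")
```

The same list could just as easily feed a warning-email script, or (for the extreme option) a REST API call that deletes the workbook.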
It's a bit more complicated than you might think, as when a backup is taken one must ensure consistency across all persistent data in Tableau. So, for example, if the database backup you create refers to some extract file, you want to be sure that the extract file makes it into the backup---rather than being deleted at some point after the backup process begins, because, say, an extract refresh occurs while the backup is going on. Otherwise you could end up missing data: the database says you have a certain workbook using extract-x, while no extract-x was ever included in the backup. Oops!
Still, there are a number of possible improvements. One that would likely not be too hard to implement would be to provide built-in Tableau support for different compression algorithms. A vast range of such algorithms exists, offering various trade-offs between compression speed and output size.
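To make the trade-off concrete, here is a small stand-alone illustration (nothing Tableau-specific, just the Python standard library) that compresses the same sample data with three algorithms and prints speed versus size. The payload is made up, so the absolute numbers are meaningless; the relative trade-off is the point.

```python
# Illustrates the compression speed-vs-size trade-off using only the
# standard library; real extract data will behave differently.
import bz2
import lzma
import time
import zlib

data = b"fairly repetitive extract-like payload " * 250_000  # ~9 MB sample

for label, compress in [
    ("zlib (fast)",        lambda d: zlib.compress(d, 6)),
    ("bz2 (middle)",       lambda d: bz2.compress(d, 9)),
    ("lzma (slow, small)", lambda d: lzma.compress(d, preset=9)),
]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{label:20s} {elapsed:6.2f}s -> {len(out) / len(data):6.2%} of original")
```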
File-system snapshots could also, in theory, be quite useful, if the underlying system supports them.
We agree with you that Enterprise IT needs faster backup and restore, as well as a warm, fully synchronized standby solution. That is why we created Palette Rescue, our real-time disaster recovery and backup solution for Tableau Server.
This coming Wednesday, November 2, Palette will be hosting our next webinar where we will demonstrate how Palette Rescue works.
Palette Rescue Architecture
Palette Rescue incrementally streams your Tableau Repository files and data to your Standby Tableau Server immediately after each change, so if your Active Tableau Server should ever become unavailable for any reason, you will have all the data and files safe and ready to go with no losses. Although your Standby Tableau Server is powered on, its Tableau Server application is stopped while the Standby Repository is receiving updates from your Active Tableau Server, so you will only have one running Tableau Server application at any time. This is good for two reasons. First, you don't want two Tableau Server instances running extracts simultaneously and putting unnecessary load on your databases; second, depending on your situation, you may not need another Tableau license for your Standby Tableau Server, which saves costs.
Additionally, we run accelerated rolling backups continuously to create standard tsbak files that you can use with the TABADMIN RESTORE command on any Tableau Server. We make these tsbak files on the Palette Rescue Server using its own replica of the Tableau Repository, so you have the benefit of not placing any load on your Tableau Server to run backups throughout the day. Palette Rescue provides the comprehensive protection that Enterprise IT needs in order to guarantee the Tableau Server uptime SLAs that their users expect.
One extra benefit of Palette Rescue is that if you currently run reports on the data in your Tableau Server Postgres Repository, you can now do the right thing and point those reports to the Palette Rescue Postgres Repository instead. This ensures that you are never impacting any of the critical real time operational responsibilities of the Tableau Repository to fulfill "view" authorization requests (which require approximately 15 independent SQL statements to grant access). An overtaxed Tableau Repository can create a significant bottleneck during the "Show" stage of the viz load that in turn delays the start of the "Bootstrap" stage to render users' dashboards.
We will explain it all on Wednesday!
For more information, sign up on our webinar registration page:
November 2, 2016 at 11am Pacific Time
CEO+Founder, Palette Software
Just want to share with everyone that running the File Store process on the Primary host reduced backup time by about 75% (e.g., from 4 hours to 1 hour) in our tests. Of course, if you use a core-based license, your Primary node's cores need to be covered by a Server license as soon as the File Store process is added to the Primary node.
The reason File Store on the Primary speeds up backup/restore is that it eliminates the extra zip/unzip work between server nodes during the backup/restore process.
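If you want to verify this in your own environment, a simple before/after timing of the backup is enough. Here is a throwaway sketch; the tabadmin path and backup target are placeholders for your installation.

```python
# Throwaway sketch: time a tabadmin backup so you can compare runs before
# and after moving the File Store process to the Primary node.
import subprocess
import time

TABADMIN = r"C:\Program Files\Tableau\Tableau Server\10.0\bin\tabadmin.bat"  # placeholder path
TARGET = r"D:\backups\tableau"  # placeholder; -d appends the date to the file name

start = time.perf_counter()
subprocess.run([TABADMIN, "backup", TARGET, "-d"], check=True)
minutes = (time.perf_counter() - start) / 60
print(f"Backup completed in {minutes:.1f} minutes")
```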
Thanks, Mark, for sharing. I'm going to be moving from a single-node environment to a two-node environment soon, so little tricks like this are good to know.
Ditto, thank you for sharing that fact!
Also, increasing the process priority of 7z.exe in Task Manager to High (not Realtime) will reduce the time.
Cool Egor, thanks for that!
Also, I don't see it in my list of Processes in the Windows Task Manager.
Heh, I was thinking that... which doesn't really help when backups happen automatically at night, and I don't know how to alter the priority from a Windows cmd batch script.
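For what it's worth, if pure cmd won't cooperate, a small helper scheduled to run while the nightly backup is in progress can do it. Here is a sketch using the third-party psutil package (pip install psutil); it needs to run with enough rights to touch the backup's processes.

```python
# Sketch: raise any running 7z.exe to High priority (Windows-only).
# Requires the third-party psutil package (pip install psutil) and must
# run while the backup's 7z.exe processes actually exist -- e.g. schedule
# it a few minutes after the nightly backup kicks off.
import psutil

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == "7z.exe":
        try:
            proc.nice(psutil.HIGH_PRIORITY_CLASS)  # Windows priority class
            print(f"Raised priority of PID {proc.pid}")
        except psutil.Error as exc:
            print(f"Could not change PID {proc.pid}: {exc}")
```

Alternatively, I believe plain cmd can do it via wmic, something like `wmic process where name="7z.exe" CALL setpriority "high priority"`, but test that on your box first.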