21 Replies — Latest reply on Jun 26, 2019 1:13 PM by Katherine Woods
      • 15. Re: Tableau Server data engine storing old extracts
        lx wei

We had the problem on 2018.1, 2018.2, and 2018.3.

        • 16. Re: Tableau Server data engine storing old extracts
          lx wei

          I restored from a backup on the original server and it didn't delete those failed files.

          • 17. Re: Tableau Server data engine storing old extracts
            Jay Morehart

            We have been working with Tableau support for a couple of weeks now, with no major help from them, but through all this work I have come up with a process for cleaning up nodes whose disks are nearly full due to this issue. NOTE: this is an unofficial workaround, so attempt it at your own risk; that said, from my research and conversations with support the risks do appear to be minimal. It only works for multi-node clusters with more than one File Store instance. The whole process takes around 35 minutes including a vSphere snapshot, and may take longer depending on your backup mechanism. Replace <nodeID> with the ID of the node you are trying to recover; "#" denotes comments. See Move the File Store Process - Tableau for more info on what filestore decommissioning does:


            #Before action taken: T-0 min 16:35 CST

            [node2]$ df -h

            Filesystem                                  Size  Used Avail Use% Mounted on

            /dev/mapper/rhel-root                       613G  507G  107G  83% /



            tsm stop

            #T + 5min take VM snapshot or other backup

            # T+15min

            tsm topology filestore decommission -n <nodeID> --override --request-timeout 900 #request timeout defaults to 5min contrary to documentation

            # T+17min (17:06 CST)

            tsm pending-changes apply #this will actually remove the filestore and restart Tableau Server



            # T+27min: disk space is recovered at this point:

            [node2]$ df -h

            Filesystem                                  Size  Used Avail Use% Mounted on

            /dev/mapper/rhel-root                       613G   65G  548G  11% /



            tsm topology set-process -c 1 -n <nodeID> -pr filestore


            tsm pending-changes apply #add the filestore instance back into the node and restart server

            #complete at T+35min


            What makes this a lower-risk operation is that it only uses tsm commands, so while there is presumably some risk in removing the File Store, it should be minimal. Additionally, while this does require some downtime, it should be less than an hour, so depending on your use case and the rate at which your disks are filling, it could be reasonable to do this even as often as weekly.


            I have done this process on 2 different nodes now. Again, this only works on a multi-node Tableau cluster with more than one File Store, and I have only tested it on our high-availability 3-node cluster, so I cannot speak to how it works in other configurations. I did find out from support that Development is currently looking into this as a "possible software issue" (824539), so hopefully it makes it to a known issue, but they could not give me any information on a timeline for that.


            Hopefully this process can help some of you in the meantime.
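            For convenience, the steps above can be collected into one small script. This is only a sketch of the sequence in this post, not something I have run end-to-end; it defaults to printing the commands (DRY_RUN=1) so you can review them before executing anything, and "nodeX" is just a placeholder for your own node ID. Take your snapshot or backup between "tsm stop" and the decommission step before flipping DRY_RUN off.

```shell
#!/usr/bin/env bash
# Sketch of the filestore decommission/recommission cycle described above.
# Assumptions: multi-node cluster with more than one File Store instance.
# DRY_RUN=1 (the default) only prints the commands instead of running them.
set -u

NODE_ID="${1:-nodeX}"    # nodeX is a placeholder; pass your real node ID
DRY_RUN="${DRY_RUN:-1}"

steps=(
  "tsm stop"
  "tsm topology filestore decommission -n $NODE_ID --override --request-timeout 900"
  "tsm pending-changes apply"
  "tsm topology set-process -c 1 -n $NODE_ID -pr filestore"
  "tsm pending-changes apply"
)

for cmd in "${steps[@]}"; do
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $cmd"
  else
    $cmd || exit 1
  fi
done
```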




            • 18. Re: Tableau Server data engine storing old extracts
              Marco Varela

              Hey Jay,


              Thanks a lot for sharing the workaround! I am having the same issue as well, but I am running a single node for my client.


              I have read that there was a 'data engine configuration reaper' task that cleans up historical hyper extracts. However, I could not find it under Background Tasks for Non Extracts in Tableau Server 2018.3.


              May I know how to trigger this task? Or maybe Jeff Strauss, would you be able to help on this?

              I found your reply on the Extract Cleanup thread, which mentioned the extract reaper task.


              Really appreciate your help if you can give us some advice.


              Thank you!

              Hui Fong

              • 19. Re: Tableau Server data engine storing old extracts
                Roberto Lapuente

                The way we fixed it was by writing a script that deletes what look to be old extracts.


                We refresh our data sources every night, so the script checks whether there is more than one extract for the same data source; if there is, and the older copy is more than 48 hours old, we delete it. It's risky, but it has been running for more than a month now and hasn't caused any issues.
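                The post doesn't include the script itself, but the logic it describes might look something like the Python sketch below. The extract directory layout and the group-extracts-by-filename-prefix heuristic are my assumptions, not details from the post; anything like this should be run in dry-run mode and checked carefully before it deletes real files.

```python
import time
from collections import defaultdict
from pathlib import Path


def find_stale_extracts(extract_dir, max_age_hours=48):
    """Return extract files that look safe to delete: for any datasource
    with more than one .hyper file, every copy older than max_age_hours
    except the newest one. Grouping by filename prefix is an assumption
    about how extracts for the same datasource can be matched up."""
    groups = defaultdict(list)
    for path in Path(extract_dir).rglob("*.hyper"):
        # Assumed naming scheme: <datasource>_<revision>.hyper
        groups[path.name.split("_")[0]].append(path)

    cutoff = time.time() - max_age_hours * 3600
    stale = []
    for paths in groups.values():
        if len(paths) < 2:
            continue  # only one copy of this datasource -- never touch it
        newest = max(paths, key=lambda p: p.stat().st_mtime)
        stale.extend(p for p in paths
                     if p != newest and p.stat().st_mtime < cutoff)
    return stale


def delete_stale_extracts(extract_dir, dry_run=True):
    """Print (and optionally delete) the stale extract copies."""
    for path in find_stale_extracts(extract_dir):
        print(("would delete: " if dry_run else "deleting: ") + str(path))
        if not dry_run:
            path.unlink()
```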

                • 20. Re: Tableau Server data engine storing old extracts
                  Marjorie Lim

                  Hi Sir Jay,


                  I have a screenshot of our topology, and we only have one node. I would like to ask if you have the same setup. I'm encountering the same issue as the others. We're using CentOS 7 with Tableau Server 2018.2. For now, to minimize usage on /var, we are moving the .hyper files to another folder; however, after checking or refreshing data in Tableau, we find that some data is missing. We are using both incremental and full refreshes.


                  Please help us on this.

                  Thank you in advance for your help

                  • 21. Re: Tableau Server data engine storing old extracts
                    Katherine Woods

                    I am having a similar problem, and upgrading to version 2019.1 only made it worse. Files that were 3-4 GB on version 10 are now over 10 GB. Deleting extracts doesn't work either, as they are not removed from the filestore. The only thing I can think to do is overwrite the extracts with small dummy versions by republishing them without user permissions, but what a crappy way to address the problem! I also have an open ticket on this and haven't heard anything back. I am going to the Tableau conference in November. Anyone want to gang up on the experts there with me and see if we can get some face time for this issue, assuming we can hold out that long?
