8 Replies Latest reply on Jan 5, 2015 10:16 AM by Toby Erkson

    Defrag - geek out...

    Jeff Strauss

  Currently we have an 8-core active / 8-core passive setup.  Each of these boxes has 3 TB of hard drive space, but only a fraction is used; that's a story for another day.  Anyway, to my dismay, I looked and the drive was 38% fragmented.  I ran a defrag and it got better, but then it got worse: at one point up to 75% fragmented with an average of 13 fragments per file.  Oy.  What went wrong, and why is the drive getting so fragmented in the first place?  Read on if you want to know.


      What went wrong

      The first time I ran defrag, I defragged the files but not the free space.  One of the stats shown in the analysis is contiguous free space.  This matters because contiguous free space is what new files are going to use.  So even though the files were clean after the defrag, there was still a smattering of small files spread across the free space, leaving the free space itself fragmented.  Lesson learned: defrag both.
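      To see why contiguous free space matters more than total free space, here's a toy illustration (this is just a sketch of the idea, not how NTFS actually tracks clusters):

```python
# Toy illustration: total free space vs. largest contiguous free run.
# 'F' = a free cluster, 'U' = a used cluster. The point: a disk can have
# lots of free space overall while no single run is big enough for a
# large new file, which forces that file to fragment.

def free_space_stats(bitmap: str):
    total_free = bitmap.count("F")
    # Longest run of consecutive free clusters = the biggest file that
    # could be written without fragmenting.
    runs = [len(run) for run in bitmap.replace("U", " ").split()]
    largest_contiguous = max(runs, default=0)
    return total_free, largest_contiguous

# Plenty of free space overall, but small files are sprinkled across it:
disk = "FFFFUFFFUFFUFFFFFFUFFFU"
total, contiguous = free_space_stats(disk)
print(f"free clusters: {total}, largest contiguous run: {contiguous}")
```

      Defragmenting only the files cleans up the used runs but leaves the free space peppered with holes; consolidating free space is what turns those scattered runs of free clusters into one large run.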


      Nowadays we have included the defrag in the nightly backup script, run while the service is stopped for a few minutes; it seems like an opportune time to do it.  Next step: we are growing our environment and datasets in a planned manner, expanding our cores, and moving to a distributed environment.  There are ways to defrag all the drives in the environment via a script on the primary, but this is still under development.
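      A minimal sketch of what that nightly window might look like.  The command names are assumptions on my part: `tabadmin` is the 8.x-era Tableau Server admin CLI, and `/X` is the Windows `defrag` flag for free-space consolidation; verify both against your own environment before using anything like this.

```python
import subprocess

# Hypothetical nightly maintenance window: stop Tableau Server briefly,
# defragment the files AND the free space, then restart. The exact
# commands/flags below are assumptions -- check your version's docs.

def build_maintenance_commands(drive="C:"):
    return [
        ["tabadmin", "stop"],
        ["defrag", drive],           # defragment the files
        ["defrag", drive, "/X"],     # consolidate free space too (lesson learned)
        ["tabadmin", "start"],
    ]

def run_maintenance(drive="C:"):
    for cmd in build_maintenance_commands(drive):
        subprocess.run(cmd, check=True)  # abort the window if any step fails

if __name__ == "__main__":
    # Preview the sequence without running it:
    for cmd in build_maintenance_commands():
        print(" ".join(cmd))
```

      Splitting the command list from the runner makes it easy to preview or log the sequence before letting it loose on a production box.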


      So why did the fragmentation happen in the first place?

      We leverage a lot of extracts (via the data server) that are refreshed daily.  Every day they get slightly bigger because we are holding more history.  Since each day's extract is bigger than the last, it can't reuse the existing space: it takes up new space, and the old space is freed for other files, at which point small files may start to occupy it.



      Example: a 100 GB hard drive


      Day 1 has an extract that is 10 GB in size.  It is replaced each day with a slightly bigger one (10.01, 10.02, 10.03, and so on).  Each day this occurs, the disk looks for contiguous space, and that's fine for roughly the first nine days.  Around the 10th day, fragmentation starts, because 10.10 GB of contiguous space is requested while a hundred 1 MB files have come along and are sprinkled across the open space.  Even though we are using only about 11 GB, the free space is fragmented, so the 10.10 GB extract is saved as two segments, and eventually the problem gets worse.
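      The scenario above can be sketched with a toy first-fit allocator.  This is purely illustrative (real filesystems allocate far more cleverly, and my toy fragments sooner than the ten-day story), but the mechanism is the same: small files landing in and around the freed space split each bigger rewrite into more pieces, even with the disk nearly 90% free.

```python
# Toy first-fit allocator (0.01 GB clusters): a 100 GB disk, a daily
# extract that grows by 0.01 GB per refresh, and a small file trickling
# in each day. Watch the fragment count climb while free space stays huge.

UNIT_GB = 0.01

class ToyDisk:
    def __init__(self, size_gb):
        self.clusters = [None] * round(size_gb / UNIT_GB)  # None = free

    def _free_runs(self):
        """Return (start, length) for each run of consecutive free clusters."""
        runs, start = [], None
        for i, owner in enumerate(self.clusters + ["end"]):
            if owner is None and start is None:
                start = i
            elif owner is not None and start is not None:
                runs.append((start, i - start))
                start = None
        return runs

    def write(self, name, size_gb):
        """First-fit allocation; returns how many fragments the file used."""
        need, fragments = round(size_gb / UNIT_GB), 0
        for start, length in self._free_runs():
            if need == 0:
                break
            take = min(need, length)
            self.clusters[start:start + take] = [name] * take
            need -= take
            fragments += 1
        return fragments

    def delete(self, name):
        self.clusters = [None if c == name else c for c in self.clusters]

disk = ToyDisk(100)
for day in range(1, 11):
    disk.write(f"small-{day}", 0.05)   # small files trickle in first...
    disk.delete("extract")             # ...then the old extract is replaced
    frags = disk.write("extract", 10 + day * UNIT_GB)
    print(f"day {day}: extract stored in {frags} fragment(s)")
```

      On day 1 the extract lands in one piece; by day 10 this toy stores it in ten fragments, with roughly 89 GB still free.  Defragging free space resets that, which is exactly why the nightly consolidation pays off.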

        • 1. Re: Defrag - geek out...
          Toby Erkson

          Good advice!


          We only have a 300 GB hard drive right now, but on day 1 I set a defrag schedule to do a full defrag every Wednesday night on production and QA.  The drives are always at 0%.  We have a strong extract environment, and with 8 cores and 3 backgrounder processes things are going nicely.

          • 2. Re: Defrag - geek out...
            Jeff Strauss

            Yeah, I was once under the impression that my server admin team would take care of Windows Server internals like this.  But apparently Windows gets overlooked: 99% of the environment is Linux, so that's what gets the most attention...

            • 3. Re: Defrag - geek out...
              Matt Coles

              This is great info, Jeffrey, thanks! Once you defragged your drives, did you notice any quantitatively measurable improvements in extract refresh times and/or dashboard load times for extract-based workbooks? I'm curious as to how much of a difference it made.

              • 4. Re: Defrag - geek out...
                Jeff Strauss

                I am still measuring the effects.  I know that the background refreshes have gotten faster, but not sure yet by how much.

                • 5. Re: Defrag - geek out...
                  shawn harvick

                  Hi Jeffrey, thanks for the info.  I'm curious, are these physical machines or VMs?  Have you been able to verify that this has made a significant impact on performance? 

                  • 6. Re: Defrag - geek out...
                    Jeff Strauss

                    Let's get physical.


                    I'm guessing you don't have to worry as much about this on VMs, as hopefully the host handles the defragmentation.  Maybe somebody can confirm.


                    In terms of impact, I've seen some of the extract refreshes speed up by 10 to 20%, but I haven't felt the full benefit due to scattered unmovable files on the drive and its history.  Long story short, I'm now running a defrag each night as part of the backup script.  It eliminates the fragmentation, but fragmentation climbs back up throughout my extract cycle and by the end reaches about 50%.  I have plenty of free space available; however, since we went two years without defragging, my understanding is that the drive's condition has degraded, and some files that can't be moved for one reason or another are peppered across it.  I am in the process of getting fresh hardware, and once I move over, I will probably format the old drives and turn that box into my dev platform.

                    • 7. Re: Defrag - geek out...
                      Toby Erkson

                      Jeffrey Strauss wrote:


                      I'm guessing you don't have to worry as much about this on VM's as hopefully the host handles the defrag component. Maybe somebody can confirm...

                      Hadn't thought of that but I really don't know a whole lot about servers, that's my wife's expertise.


                      I turned off the defrag scheduling on our QA server and I'll watch what happens.  If it stays at 0% like it has been then I'm guessing that our VMs are set up to auto-defrag.

                      • 8. Re: Defrag - geek out...
                        Toby Erkson

                        I checked my QA server and it showed 47% fragmentation, so it appears -- for my servers at least -- that the VM host doesn't automatically defrag the drives.  I turned the defrag scheduling back on.