Currently we have an 8 core active / 8 core passive setup. Each of these boxes has 3 TB of hard drive space, but only a fraction is used; that's a story for another day. Anyway, to my dismay, I looked and the drive was 38% fragmented. I ran a defrag and it got better, but then it got worse, at one point reaching 75% fragmentation with an average of 13 fragments per file. Oy. What went wrong, and why is the drive getting so fragmented in the first place? Read on if you want to know.
What went wrong
The first time I ran defrag, I defragged the files and not the free space. One of the stats shown in the analysis is contiguous free space. This matters because contiguous free space is what new files are going to use. So even though the files were clean after the defrag, a smattering of small files was still spread across the free space, leaving the free space itself fragmented. Lesson learned: defrag both.
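Here's a minimal sketch of what "defrag both" looks like in practice, wrapped in Python. It assumes the built-in Windows defrag.exe, where on recent versions /A analyzes, /U prints progress, and /X consolidates free space; run defrag /? on your box to confirm the switches on your version.

```python
import subprocess

def defrag_both(volume: str = "C:") -> None:
    """Analyze, defrag the files, then consolidate the free space."""
    # /A reports the fragmentation stats without changing anything.
    subprocess.run(["defrag", volume, "/A"], check=True)
    # A plain pass (with /U for progress output) defragments the files...
    subprocess.run(["defrag", volume, "/U"], check=True)
    # ...and /X consolidates the free space, so new and growing files
    # have a contiguous region to land in. Defragging the files alone
    # leaves the free space peppered with small gaps.
    subprocess.run(["defrag", volume, "/X", "/U"], check=True)

if __name__ == "__main__":
    defrag_both("C:")
```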
Nowadays, we have folded the defrag into the nightly backup script, since the service is briefly stopped for a few minutes anyway. Seems like an opportune time to do it. The next step is that we are growing our environment and datasets in a planned manner, expanding our cores, and moving to a distributed environment. There are ways to defrag all the drives in the environment via a script on the primary, but that is still under development.
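The nightly routine looks roughly like the sketch below. The service name and volume are hypothetical placeholders, not our actual names; it assumes Windows' net stop / net start and the same defrag switches as above.

```python
import subprocess

SERVICE = "MyDataService"   # hypothetical service name -- substitute your own
VOLUME = "D:"               # hypothetical data volume

def nightly_maintenance() -> None:
    # Stop the service so no extract files are open during the defrag.
    subprocess.run(["net", "stop", SERVICE], check=True)
    try:
        # Defrag the files, then consolidate the free space (see above).
        subprocess.run(["defrag", VOLUME, "/U"], check=True)
        subprocess.run(["defrag", VOLUME, "/X", "/U"], check=True)
        # ... the backup itself would run here, while the service is down ...
    finally:
        # Always bring the service back up, even if the defrag fails.
        subprocess.run(["net", "start", SERVICE], check=True)

if __name__ == "__main__":
    nightly_maintenance()
```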
So why did the fragmentation happen in the first place?
We rely on a lot of extracts (via the data server) that are refreshed daily. Every day these get slightly bigger as we hold more history. Because each day's extract is bigger than the last, it can't reuse its old space: it takes up new space, the old space is freed for other stuff, and small files start moving into that freed space.
Example: a 100 gig hard drive
Day 1 has an extract that is 10 gig in size. It is replaced each day with a slightly bigger one (10.01, 10.02, 10.03, and so on). Each day this happens, the disk looks for contiguous space, and it's fine for the first nine days or so. Around the tenth day, fragmentation starts: 10.10 gig of contiguous space is requested, but in the meantime 100 one-meg files have come along and are sprinkled across the open space. Even though we are using only about 11 gig of the 100, the free space is fragmented, so the 10.10 gig extract is saved as two segments, and from there the problem compounds, as the simulation below shows.
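To make the mechanism concrete, here is a toy first-fit allocator in Python. It is a sketch under simplifying assumptions, not a model of NTFS: units are MB, the numbers mirror the example above, and the refresh is assumed to write the new extract before deleting the old one (the usual atomic-swap pattern). Even so, the extract's fragment count and the number of free extents creep upward day after day.

```python
# Units are MB. The free list holds (start, length) extents.
DISK = 100_000           # the 100 gig drive
EXTRACT0 = 10_000        # day-1 extract: 10 gig
GROWTH = 10              # grows ~0.01 gig per day
SMALL, N_SMALL = 1, 100  # 100 one-meg files arrive each day

def allocate(free, size):
    """First-fit allocation. Splits the request across extents when no
    single extent is big enough -- that split IS the fragmentation."""
    frags = []
    i = 0
    while size > 0 and i < len(free):
        start, length = free[i]
        take = min(length, size)
        frags.append((start, take))
        size -= take
        if take == length:
            free.pop(i)                      # extent fully consumed
        else:
            free[i] = (start + take, length - take)
            i += 1
    if size > 0:
        raise RuntimeError("disk full")
    return frags

def release(free, frags):
    """Return fragments to the free list and coalesce adjacent extents."""
    free.extend(frags)
    free.sort()
    merged = [free[0]]
    for start, length in free[1:]:
        last_start, last_len = merged[-1]
        if last_start + last_len == start:
            merged[-1] = (last_start, last_len + length)
        else:
            merged.append((start, length))
    free[:] = merged

free = [(0, DISK)]
old = []
for day in range(1, 31):
    for _ in range(N_SMALL):                 # small files sprinkle in first
        allocate(free, SMALL)
    new = allocate(free, EXTRACT0 + GROWTH * (day - 1))
    if old:
        release(free, old)                   # old copy deleted after the swap
    old = new
    print(f"day {day:2}: extract in {len(new):2} fragment(s), "
          f"{len(free):2} free extent(s)")
```

The leapfrogging is the key: each freed hole is slightly too small for the next extract, and the small files chew through the lowest holes first, so every refresh leaves the free space a little more shredded than the day before.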