NTFS System

SAN storage and NTFS: today, using SANs to meet storage requirements has become the norm. SANs typically employ a clustered/SAN file system to pool disk arrays into a virtualized storage volume. This is not NTFS, but rather proprietary software provided by the SAN hardware or software vendor. This file system essentially “runs on top of NTFS”; it does not replace it. Keeping in mind that every file system is a “virtual” disk, stacking one virtual component over another (i.e. one file system on top of another) is entirely doable and increasingly common. What the vendor of a SAN file system does with its own file system is irrelevant to NTFS. It may well be that you do not need to defragment the “SAN file system” itself. The expert on that file system, and the source from which you should get setup tips, best practices, and SAN I/O optimization methodologies, is that manufacturer.

As for NTFS, it still fragments, which causes the Windows OS to “split” I/O requests for files sent to the SAN, creating a performance penalty. Because SANs are only ever block-level storage, they do not know which I/Os relate to which files, and therefore they cannot intelligently spread the fragments of a file across multiple disks. A mass of separate I/O writes/reads for fragmented files (which will almost certainly be interspersed with other simultaneous data writes/reads) will be spread non-optimally across the disks in the SAN storage pool, i.e. more fragments of a given file may be written to one disk rather than the data being spread evenly across all the disks.
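
To illustrate the point, the sketch below (a hypothetical helper, not part of any vendor toolkit) uses the built-in Windows fsutil utility to count how many separate extents (fragments) a file occupies on an NTFS volume; each extent generally translates into a separate I/O request that the SAN sees only as unrelated block reads or writes. It assumes a Windows host, administrative rights, and that fsutil’s output format (one “VCN:” line per extent) matches your Windows version.

    import subprocess
    import sys

    def count_ntfs_extents(path: str) -> int:
        """Return the number of extents (fragments) NTFS reports for a file.

        Relies on the built-in Windows command
            fsutil file queryextents <path>
        which prints one line per extent (VCN / Clusters / LCN).
        Requires administrator privileges; output format may vary
        between Windows versions.
        """
        result = subprocess.run(
            ["fsutil", "file", "queryextents", path],
            capture_output=True,
            text=True,
            check=True,
        )
        # Each extent is reported on its own line containing a "VCN:" field.
        return sum(1 for line in result.stdout.splitlines() if "VCN:" in line)

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: python count_extents.py <file-on-an-NTFS-volume>")
        target = sys.argv[1]
        extents = count_ntfs_extents(target)
        print(f"{target} is stored in {extents} extent(s); "
              f"a heavily fragmented file forces one I/O per extent.")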

SAN file system vendors may offer optimization strategies that move data around the disks as the system learns over time that typical data requests are not properly load-balanced across SAN spindles. Generally speaking, the above holds true for disk striping (RAID) as well. SAN designers and developers agree that NTFS fragmentation IS an issue and that advanced defragmentation is important (“basic” defragmenters can actually cause worse problems).

File fragmentation also takes a serious physical toll on hard drives. Disk head movement is increased by the need to access data contained in fragmented files; the more disk head movement, the lower the mean time between failures (MTBF), shortening the life of the hard drive. The old days of scheduled defragmentation are legacy procedures and will not be effective on today’s systems, due to the sheer size of disks and storage. Running the built-in tool is simply not comprehensive enough to reap the necessary benefits and see the original performance your system once boasted. Reliability is required 24/7, regardless of the backup or storage technology (RAID, SAN) used. System uptime is imperative, and reliability is the key.
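
To make that last point concrete, here is a minimal sketch (assuming a Windows host and an elevated prompt) that invokes the built-in defrag utility in analysis-only mode, so you can at least see the fragmentation level the OS itself reports before deciding whether a more comprehensive defragmentation approach is warranted.

    import subprocess
    import sys

    def analyze_volume(volume: str = "C:") -> str:
        """Run the built-in Windows defragmenter in analysis-only mode.

        Equivalent to running 'defrag C: /A /V' from an elevated prompt;
        it reports fragmentation statistics without moving any data.
        Requires administrator privileges.
        """
        result = subprocess.run(
            ["defrag", volume, "/A", "/V"],
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        vol = sys.argv[1] if len(sys.argv) > 1 else "C:"
        print(analyze_volume(vol))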