Tweaking for Performance (Disks & Volumes)

There are plenty of people who like to tinker with the operating system to “tune” it and hopefully eke out a little more performance (some also venture into the hardware side to fiddle with Front Side Bus speeds, timings and voltages).

One of the most common bottlenecks in a modern system is the disk, especially given the volume of data we tend to use these days. One of the popular configurations for gamers is RAID0 (striped sets), which they believe gives them the best throughput for little cost (given how many motherboards have onboard RAID controllers today).

However, I believe that they may be slightly misunderstanding the way disk I/O works in a multi-tasking OS, and that there is a better configuration for them.

 

The principle itself is sound: pair together 2 identical hard disks and present them as 1 volume to the OS, striping all the data across both disks – then when reading, the 2 disk heads can move independently to service different parts of the I/O requests concurrently, making it much faster than waiting for a sequential read from 1 disk.

However, disk write operations in RAID0 can actually take longer than on single-disk systems if each I/O has to return success before another I/O can be issued to “the” device.

Also, given how data can be scattered over the platters, and that real-world bottlenecks are often multiple I/O requests for different files, both disk heads still end up seeking back and forth for file sections across the volume – so random read times in a non-synthetic environment can end up showing no benefit on simple stripe sets.
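
To make the striping concrete, here is a minimal sketch (Python, with an assumed 2-disk set and 64KB stripe unit – real controllers vary) of how a byte offset on the striped volume maps to a member disk. Sequential reads alternate neatly between the disks, but 2 random reads can just as easily land on the same disk and queue behind each other:

    # Minimal sketch of RAID0 address mapping.
    # Assumptions: 2 member disks, 64KB stripe unit (varies by controller).

    STRIPE_UNIT = 64 * 1024   # bytes per stripe unit
    NUM_DISKS = 2

    def raid0_map(offset):
        """Map a byte offset on the striped volume to (disk, offset on disk)."""
        stripe_no = offset // STRIPE_UNIT    # which stripe unit overall
        disk = stripe_no % NUM_DISKS         # stripe units alternate between disks
        row = stripe_no // NUM_DISKS         # how far down that disk we are
        return disk, row * STRIPE_UNIT + offset % STRIPE_UNIT

    # A 256KB sequential read is split evenly across both disks:
    for off in range(0, 256 * 1024, STRIPE_UNIT):
        print(raid0_map(off))   # disks 0, 1, 0, 1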

 

(For the sake of brevity I will just briefly mention that RAID1 (mirroring) is not a performance configuration, and RAID1+0/RAID10 can get around the write time problems but introduces the extra cost that gamers are typically trying to avoid with these tweaks in the first place.)

 

The other common issue with disks, appearing inevitably over time, is fragmentation. Windows uses idle time to defragment files, and many people swear by their own personal brand of defragger – but rather than argue about which defragger is best (the strongest opinions are often about the UX rather than the functionality), I prefer to avoid fragmentation as much as possible in the first place.

 

To address both the performance issue and fragmentation issue at the same time, I will cover here how I configure my home system as food for thought.

I am not advocating this as “the best and only solution to address these issues” – mainly because no 2 people use a computer in quite the same way, so I do not believe in generalisations quite as bold as that.

 

First, ideally I would have a fast, single-platter disk as the primary disk, for the OS alone.

As it is difficult to find smaller disks these days, I tend to partition the first 100GB for the OS and either leave the rest unpartitioned or use it solely for static data such as backups.

For my current system, the weapon of choice is a 120GB SSD with a single volume – this addresses both issues directly, as there is no moving head to read or write data, and fragmentation is therefore irrelevant. However, this is an expensive option (at least at present), and previously I have happily used a decent-speed SATA2 drive.

 

Additional disks of a decent size are added to hold the applications and/or data – but I do not partition and format them at the time of building the system, as I don’t yet know what they will be used for.

1 gigantic volume becomes a nightmare to (re)format, defragment and scan for errors, while fixing partition sizes up front invariably leads to the problem of 1 partition sitting virtually unused while another is almost full – an inefficient use of the space.

 

A fact about disk performance – the outer tracks (lower track numbers) give better performance, as more sectors pass under the head per revolution and the head has to move in & out less to cover the same amount of data.

Put your most commonly accessed data on partitions closer to the outside (“start” if you like) of the disk.
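
A back-of-envelope illustration of the effect (the per-track capacities below are assumptions for a typical 7200 RPM drive with zoned bit recording, not measurements of any particular model): the platter spins at a constant rate, and the longer outer tracks hold more sectors, so more data passes under the head per revolution.

    # Illustrative only: assumed per-track capacities for a 7200 RPM drive.
    RPM = 7200
    revs_per_sec = RPM / 60                  # 120 revolutions per second

    for zone, bytes_per_track in (("outer", 1_000_000), ("inner", 500_000)):
        mb_per_sec = bytes_per_track * revs_per_sec / 1_000_000
        print(f"{zone} tracks: ~{mb_per_sec:.0f} MB/s sequential")
    # outer tracks: ~120 MB/s sequential
    # inner tracks: ~60 MB/s sequential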

 

Now we come to install a big, disk-intensive application (maybe an FPS game with lots of very high resolution textures – these tend to get very large and incur lots of sequential reads) – start the installer and look for the traditional “custom” or “advanced” option where you can change (or just verify) the installation path.

Let’s say this is C:\Program Files (x86)\Widget Corp\Widget Magic 5000.

At this point we don’t let the installer continue, but we fire up Disk Management (diskmgmt.msc) and prepare the second disk as a GPT disk, then create a simple volume (I tend to use 40GB chunks at a time) and format it.

Don’t give this new volume a drive letter – instead right-click the volume, click “Change Drive Letter and Paths… ”, click the Add button, select “Mount in the following empty NTFS folder”, then click the Browse button.

Now drill down to C:\Program Files (x86) and click the New Folder button – name the new folder Widget Corp.

With Widget Corp selected, click the New Folder button again, and name the new one Widget Magic 5000.

With Widget Magic 5000 selected, click the OK button, then the OK button in the “Add Drive Letter or Path” dialogue box, then again in the “Change Drive Letter and Paths” dialogue box.

Optional: you can right-click the volume and, through Properties, give it a label (Widget Magic 5000 in this example).
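
For the curious, this mount-point step can also be scripted. Below is a minimal sketch (Python via ctypes; it assumes Windows, an elevated prompt, an already-formatted NTFS volume, and that you substitute the real volume GUID path – the built-in mountvol command with no arguments will list these) using SetVolumeMountPointW, the documented Win32 call for associating a volume with an empty NTFS folder:

    # Sketch: create the mount point programmatically instead of via the GUI.
    # Assumptions: Windows, run elevated, volume already formatted as NTFS,
    # real volume GUID path substituted below (see `mountvol` output).
    import ctypes
    import os

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    # Both paths must end with a trailing backslash per the Win32 docs.
    mount_dir = r"C:\Program Files (x86)\Widget Corp\Widget Magic 5000" + "\\"
    volume = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"  # placeholder GUID

    os.makedirs(mount_dir, exist_ok=True)   # the mount folder must exist and be empty

    if not kernel32.SetVolumeMountPointW(mount_dir, volume):
        raise ctypes.WinError(ctypes.get_last_error())

Disk Management is doing the equivalent of this for you – the point is that a mount point is an ordinary, scriptable NTFS feature rather than GUI magic.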

 

Now we go back to the installer and let it resume – it may remark that the destination folder already exists, which we are aware of.

Once the application is installed, it is in the logical path C:\Program Files (x86)\Widget Corp\Widget Magic 5000, and the data has all been written to the first partition on the second disk.
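
If you want to verify the plumbing, a short sketch (same assumptions as above; GetVolumeNameForVolumeMountPointW is the documented Win32 call for resolving a mount point to its volume) will confirm the folder now leads to the second disk:

    # Sketch: resolve the mount folder back to its volume GUID path.
    import ctypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    path = r"C:\Program Files (x86)\Widget Corp\Widget Magic 5000" + "\\"
    buf = ctypes.create_unicode_buffer(50)   # docs require at least 50 characters

    if kernel32.GetVolumeNameForVolumeMountPointW(path, buf, 50):
        print(buf.value)   # the \\?\Volume{...}\ path of the volume on disk 2
    else:
        raise ctypes.WinError(ctypes.get_last_error())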

 

So… what did we gain by jumping through all these hoops?

When you fire up the application it will undoubtedly incur disk I/O for Windows binaries (public APIs in DLLs provided with or added to the OS) – these I/Os will be directed at disk 1 – and it will also trigger large and/or multiple disk I/Os for its own executables, libraries and data – these I/Os will be directed at disk 2 and can thus occur concurrently with the other I/O requests.

 

Typical users only use 1 application at a time (note I used the word use, not “have running”), and gamers especially will tend to have only 1 heavyweight process running at a time (media & voice programs are not large or heavyweight) – so the chance of concurrent disk I/Os against different partitions on disk 2 is slim.

 

And how does this help (avoid) fragmentation?

Extrapolate the concept to multiple games on their own partitions (even on different disks) – there are unused portions of the disk between each game (yes, it’s wasteful, but necessary for performance) and no 2 games interfere with each other or the OS.

Uninstalling a game will not delete the mount point you created – you need to remove it yourself through Disk Management – but it also doesn’t leave behind the scattered gaps that new data gets written into, which are the root of fragmentation; you can happily create a new folder to use as a mount point for the existing volume for a new game without introducing any negative effects.
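
If you prefer to script that clean-up too, the companion Win32 call is DeleteVolumeMountPointW – again a sketch under the same assumptions as before (Windows, elevated prompt, the example path from earlier):

    # Sketch: remove the orphaned mount point after an uninstall.
    # Same assumptions as the earlier examples (Windows, run elevated).
    import ctypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    mount_dir = r"C:\Program Files (x86)\Widget Corp\Widget Magic 5000" + "\\"

    if not kernel32.DeleteVolumeMountPointW(mount_dir):
        raise ctypes.WinError(ctypes.get_last_error())

Only the link from the folder to the volume is removed – the volume itself (and any data still on it) is untouched, ready to be re-mounted under a new folder.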

 

The hidden bonus is that you don’t end up with D: = DATA, E: = APPS, F: = MP3S, G: = GAMES, etc. and the problems they create by being so large – you only have a C: drive letter.

You can happily use the remainder of disk 1 as a mount point such as C:\Media and store MP3s, pictures, ISOs, WMVs, etc. as these will not typically be accessed with masses of I/Os constantly and can be considered “static data” – alternatively use it for beta testing a new OS, or storing backups of data in your profile.

 

As I said, not a solution for all, and not something that would be of great value to a lot of users, but something to consider next time you build a new machine and reach the “how should I set up my partitions” stage.