Performance of Memory vs Disk

There is a comment on this Visual Studio Blog post (about how we made Visual Studio faster):

“Focus on speed, not memory usage. Memory is very cheap, but CPU performance is muuuuuuuuch moooooooore expensive.”

Yes, memory keeps getting cheaper, but reducing memory use is still critical to the performance of any large application like Visual Studio. Likewise, giving the application more memory makes it faster; that’s why using a 64-bit OS automatically gives more memory to VS (see Out of memory? Easy ways to increase the memory available to your program). You can see this by starting Task Manager or Process Explorer and observing the CPU column. It is rarely close to 100% for one processor (25% on a 4-core machine). Even when you initiate an operation, such as starting an application, the CPU rarely stays busy for long. Most of the time the CPU is waiting: for user input, or for a disk or network request to complete. Reducing the disk bottleneck increases performance.

The disk is a physically spinning object, with read/write heads that must be moved precisely into position over the disk surface. There is a delay to actuate the head-positioning motor, a wait for the right disk sector to rotate under the head, and so on. That is a lot of mechanical work compared to reading electrons from solid-state memory (those of you fortunate enough to use an SSD (Solid State Drive) know this well!). Put another way, memory speeds are measured in nanoseconds and disk speeds are typically measured in milliseconds (there are 1 million nanoseconds in a millisecond. How many microphones in a megaphone? Just one :))
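To put rough numbers on that gap, here is a back-of-envelope sketch. The latency figures are illustrative assumptions (on the order of 100 ns for a DRAM access and 10 ms for a spinning-disk seek), not measurements from the article:

```python
# Back-of-envelope latency comparison: DRAM access vs. spinning-disk seek.
# Both figures are illustrative order-of-magnitude assumptions.
MEMORY_ACCESS_NS = 100        # ~100 nanoseconds per memory access
DISK_SEEK_MS = 10             # ~10 milliseconds per disk seek

# Convert the seek time to nanoseconds: 1 ms = 1,000,000 ns.
disk_seek_ns = DISK_SEEK_MS * 1_000_000

ratio = disk_seek_ns / MEMORY_ACCESS_NS
print(f"One disk seek ~ {ratio:,.0f} memory accesses")  # → One disk seek ~ 100,000 memory accesses
```

With these figures, a single seek costs as much as roughly a hundred thousand memory accesses, which is why a cache miss to disk dominates everything else the CPU could be doing.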

To create a short video demonstrating memory versus disk speeds, I chose a scenario that repeats easily and consistently executes the same set of instructions: the startup of Visual Studio 2010. When a new process is created, many files need to be read in from disk. The OS caches a lot of the disk content in memory, so that instead of reading it from slow disk, the process is actually hitting faster memory.

I used my laptop (Dell Core i5 with 64-bit Windows 7 and 8 GB of RAM) and repeatedly started Visual Studio 2010. I used a command prompt that showed a time stamp to make the timing visible. It also made it quick to restart VS with the up-arrow and Enter keys, without touching the mouse. I had Process Explorer displaying the percent CPU use.
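If you want to reproduce this kind of measurement without eyeballing time stamps, a simple wall-clock timer around repeated process launches works. This is a minimal sketch, not the method used in the article; the command below is a harmless placeholder, and on the laptop described above you would substitute the Visual Studio executable (and close it between runs):

```python
# Minimal sketch: time repeated startups of a command to compare warm-cache
# vs. cold-cache behavior. The command here is a placeholder (a trivial
# Python child process); substitute the real executable you want to measure.
import subprocess
import sys
import time

def time_startup(cmd):
    """Run `cmd` to completion once and return wall-clock seconds elapsed."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Placeholder command that starts and exits immediately.
    cmd = [sys.executable, "-c", "pass"]
    for i in range(3):
        print(f"run {i}: {time_startup(cmd):.3f}s")
```

The first run after a cache flush pays the disk cost; subsequent runs hit the OS file cache and should be markedly faster for a disk-heavy program.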

It took about 2 seconds to start each time. The CPU usage hovered close to 25% (of 4 processors, or 100% of 1 processor). In 2 seconds, a CPU can execute about 2 billion instructions. If computers had blinking lights (as my first ones did), then the disk light would be blinking a lot: a sure sign that the CPU is just waiting for the disk. (Relaxen und watchen das blinkenlights. What lights?)

Then I cleared the OS disk cache, which took about a minute while the OS suspended running processes, flushed dirty and copy-on-write pages from memory, and so on. (During that minute I did nothing but wait, so I edited it out of the video.)

When I hit up-arrow again, there was a noticeable delay even for the command prompt to respond. After a few seconds, the VS splash screen showed, and after about 40 seconds, VS completed startup, executing the same 2 billion instructions 20 times slower!
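The arithmetic behind that "20 times slower" claim is worth spelling out, using the figures from the runs above:

```python
# Same work, two speeds: roughly the same ~2 billion instructions are
# executed either way; only the wait for disk changes.
INSTRUCTIONS = 2_000_000_000
WARM_SECONDS = 2      # OS file cache populated: file reads hit memory
COLD_SECONDS = 40     # cache flushed: file reads go to the spinning disk

slowdown = COLD_SECONDS / WARM_SECONDS
print(f"Cold start is {slowdown:.0f}x slower")  # → Cold start is 20x slower

# Effective instruction throughput in each case:
print(f"warm: {INSTRUCTIONS / WARM_SECONDS:,.0f} instructions/s")
print(f"cold: {INSTRUCTIONS / COLD_SECONDS:,.0f} instructions/s")
```

The CPU did not get slower; it simply spent 95% of the cold start stalled waiting on disk I/O.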

So, decreasing memory consumption means more is available to the process and the operating system, which will use it to reduce disk access. The OS memory manager plays a huge role in perceived application performance.

CPU access to memory is cheap and fast. CPU access to disk is expensive and slow. Memory capacity, however, is much smaller than disk capacity: for example, 8 GB of RAM versus a 500 GB disk. So it’s imperative to use the RAM resource efficiently.

The memory content for a process consists of data read from disk as well as dynamically allocated memory for transient objects, such as strings, data, class instances, etc. We reduced Visual Studio’s memory use in all these areas significantly to increase its performance.

You can simulate flushing the OS cache somewhat by hibernating the machine (not sleeping it: sleep mode preserves memory contents by using a small amount of power). However, the hibernation file is read back linearly, rather than in the sequence a normally running OS process would request.

Another way to simulate it is to bring up other large applications that use a lot of memory, such as Outlook or Word, which will page out other apps, like VS. When you reactivate VS, its memory will need to be paged back in.

Indeed, when you compare the amount of memory used in various scenarios by VS 2010 and VS 2012, the latter uses much less memory (sometimes hundreds of megabytes less), even considering that far more features have been added than removed.

See also:

https://blogs.msdn.com/b/visualstudio/archive/2012/03/05/visual-studio-11-beta-performance-part-1.aspx