False Sharing: Why You Can’t Afford To Ignore It

Many of us are aware that caches are our friends, reducing memory latency by exploiting locality. In multithreaded scenarios, however, that same locality can backfire: a problem known as false sharing can quietly impose costly performance degradation.

Igor Ostrovsky, Stephen Toub, and Huseyin Yildiz, all members of the Parallel Computing Platform team, have written this article, which provides an in-depth description of false sharing, how to detect it, and how to avoid it.

To quote from their article:

…True sharing occurs when two threads specifically require access to a given memory location which is properly kept consistent by the cache coherency protocol. However, unbeknownst to the developer, other sharing can occur where two threads access distinct sets of data, but due to the hardware architecture details such as the size of a cache line, those sets of data still look like sharing as far as the cache coherency protocol is concerned. This is known as false sharing…and it can lead to significant performance problems in real-world parallel applications…
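
To make the idea concrete, here is a minimal C++ sketch (not taken from the article, whose samples target .NET): two threads update two logically independent counters. In the unpadded struct the counters typically land on the same cache line, so the cache coherency protocol treats them as shared; the padded variant assumes a 64-byte cache line and gives each counter its own line. The struct names and iteration count are illustrative choices, not the authors'.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// Counters packed together: they usually share a 64-byte cache line,
// so each thread's writes invalidate the other thread's cached copy
// even though the data is logically independent (false sharing).
struct Counters {
    std::atomic<long> a{0};
    std::atomic<long> b{0};
};

// Padded variant: alignas(64) places each counter on its own cache line
// (assuming a 64-byte line size), eliminating the false sharing.
struct PaddedCounters {
    alignas(64) std::atomic<long> a{0};
    alignas(64) std::atomic<long> b{0};
};

// Run two threads that each hammer one counter, and time the result.
template <typename T>
double run(T& c, long iterations) {
    auto start = std::chrono::steady_clock::now();
    std::thread t1([&] { for (long i = 0; i < iterations; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&] { for (long i = 0; i < iterations; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join();
    t2.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main() {
    const long iterations = 50'000'000;
    Counters shared;
    PaddedCounters padded;
    std::printf("same cache line:      %.3f s\n", run(shared, iterations));
    std::printf("separate cache lines: %.3f s\n", run(padded, iterations));
}
```

On typical hardware the padded version runs noticeably faster, because the threads no longer invalidate each other's cache lines on every write.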

If you’re at all worried about the performance of your multithreaded application, I highly encourage you to read this article.

James Rapp – Parallel Computing Platform team