More on Cores: Single Core? Dual Core? Quad Core? What's the Difference?

The core of a processor refers to the components that, along with system memory, facilitate the fetch-execute cycle by which computers read (fetch) and process (execute) the instructions of programs. Although the physical implementation of a chip depends upon its architecture, all CPUs consist of two logical components: the arithmetic/logic unit (ALU) and the control unit (CU) (Englander, 2003).

Simply put, the ALU is the component of the processor that is responsible for temporarily holding data and performing all of its calculations.

The CU is the component of the processor that is responsible for controlling and interpreting the instructions of a program retrieved from memory, moving data values and their memory addresses between the different parts of the processor. Each control unit contains three logical subcomponents: the program counter (PC), which holds the address of the instruction to be executed; the memory management unit (MMU), which manages the fetching of instructions from memory; and the input/output (I/O) interface, which provides the pipework through which data moves to and from main memory.
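
To make the fetch-execute cycle and the division of labor between the CU and the ALU more concrete, the following Python sketch simulates a toy single-core processor. The instruction set, register names, and example program are invented purely for illustration and do not correspond to any real architecture.

    # Toy illustration of the fetch-execute cycle: a program counter (PC)
    # fetches instructions from "memory", control-unit logic decodes them,
    # and ALU-style operations update an accumulator register.
    # The instruction set and program below are invented for illustration.

    memory = [
        ("LOAD", 5),    # put the value 5 into the accumulator
        ("ADD", 3),     # ALU operation: accumulator = accumulator + 3
        ("STORE", 0),   # write the accumulator back to data slot 0
        ("HALT", None), # stop the cycle
    ]
    data = [0]          # a tiny "main memory" area for results
    accumulator = 0     # register holding the value being worked on
    pc = 0              # program counter: address of the next instruction

    while True:
        opcode, operand = memory[pc]   # fetch the instruction at the PC
        pc += 1                        # advance the PC to the next instruction
        if opcode == "LOAD":           # decode and execute
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand     # the ALU performs the arithmetic
        elif opcode == "STORE":
            data[operand] = accumulator
        elif opcode == "HALT":
            break

    print(data[0])  # prints 8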

If a processor contains only one set of the aforementioned components, it is referred to as a single-core processor; if it combines two (dual-core), four (quad-core), or more such cores on a single integrated circuit in one package, it is referred to as a multi-core processor.

Multi-core processors have an advantage over single-core processors: because system resources (e.g., system memory) are shared, the execution of a program can be divided between cores, permitting parallel execution and yielding an overall performance increase relative to systems with single-core processors. However, the performance gained is not directly proportional to the number of additional cores.
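
As a rough illustration of how a multi-core system can divide a program's work between cores, the sketch below uses Python's standard multiprocessing module to split a hypothetical workload into four chunks and process them in parallel; the worker function and chunk sizes are chosen only for demonstration.

    # Rough sketch: dividing a workload across cores with Python's
    # standard-library multiprocessing module. The workload (summing squares
    # over chunks of a range) is hypothetical and chosen only to show the idea.
    from multiprocessing import Pool

    def sum_of_squares(bounds):
        lo, hi = bounds
        return sum(n * n for n in range(lo, hi))

    if __name__ == "__main__":
        chunks = [(0, 2_500_000), (2_500_000, 5_000_000),
                  (5_000_000, 7_500_000), (7_500_000, 10_000_000)]
        with Pool(processes=4) as pool:          # one worker per core (here, 4)
            partial_sums = pool.map(sum_of_squares, chunks)
        print(sum(partial_sums))                 # same result as a single loop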

There is a diminishing return in performance for each core added to a processor. This is due to the overhead required to distribute instructions among the cores and to contention delays that arise from sharing system resources (e.g., memory, I/O, the bus). Additionally, when a calculation performed on one core depends on (i.e., must wait for) the result of a calculation performed on another, the parallel processing capability is not fully leveraged. Nevertheless, multi-core processors do offer a performance advantage over single-core ones.
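
The diminishing return described above is often modeled with Amdahl's law, under which the serial (non-parallelizable) fraction of a program limits the speedup obtainable from additional cores. The sketch below assumes a hypothetical program that is 10% serial; the numbers are illustrative only.

    # Hypothetical illustration of diminishing returns using Amdahl's law:
    # speedup = 1 / (serial_fraction + (1 - serial_fraction) / cores).
    # A program that is 10% serial gains less and less from each extra core.
    serial_fraction = 0.10

    for cores in (1, 2, 4, 8, 16):
        speedup = 1 / (serial_fraction + (1 - serial_fraction) / cores)
        print(f"{cores:2d} cores -> {speedup:.2f}x speedup")

    # Output:
    #  1 cores -> 1.00x speedup
    #  2 cores -> 1.82x speedup
    #  4 cores -> 3.08x speedup
    #  8 cores -> 4.71x speedup
    # 16 cores -> 6.40x speedup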

References:

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software (pp. 150-152, 311-313). Hoboken, NJ: Wiley.

Comments

  • Anonymous
    July 22, 2014
    Simple and explains really well. Thanks a lot.

  • Anonymous
    March 23, 2015
    This is an awesome, easy explanation, and worth remembering. I am very happy now, thanks.