64-bit Windows Part 3: The Itanium Processor

There are two quite different species of 64-bit PC: the Itanium and the x64. The Itanium species first appeared in 2001, whereas the x64 species first appeared in April of last year.

What is the difference between the two species? Well, to understand that, we need to start with the concept of the instruction set.

An instruction set is the set of low-level, machine-language statements that a processor can respond to. The 32-bit x86 processors that we have been familiar with since the Intel386 chip was introduced way back in 1985 all have the same basic instruction set, and processor innovations since then have been about executing that same instruction set more efficiently. In fact, the instruction set for the 80386 processor included the entire 80286 instruction set, so any software that would run on an 80286 processor would also run on an 80386 processor.
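To make that concrete, here is a minimal, hypothetical sketch in C of the relationship between the code a programmer writes and the machine-language statements the processor actually executes (the instructions in the comment are real x86 instructions, but the exact sequence a given compiler emits will vary):

    /* What the programmer writes: */
    int add_numbers(int a, int b)
    {
        return a + b;
        /* A typical 32-bit x86 compiler might lower this body to
           something like:
               mov eax, [esp+4]   ; load a into a register
               add eax, [esp+8]   ; add b to it
               ret                ; return, result left in eax
           Each of those lines is one statement from the x86
           instruction set. */
    }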
The x86 instruction set is described as a complex instruction set, to distinguish it from the sorts of instruction sets supported by a new species of computers that was emerging at the time, which used reduced instruction sets. The idea behind reduced instruction sets was that if a processor supported only a smaller, less elaborate set of instructions, then it could be built in such a way that the average time it took to process each instruction would be lower.
So, at the end of the 20th century, there were x86-compatible complex instruction set processors in a great many machines, and reduced instruction set processors of various kinds in the remainder. Then, in 2001, Intel debuted its Itanium processor.
The Itanium processor has a different instruction set from its x86 predecessors; in fact, it has a new kind of instruction set altogether, one that Intel calls EPIC, which stands for Explicitly Parallel Instruction Computing. An EPIC instruction contains, in addition to the operation to be executed, information about how to execute that operation in parallel with others. Based on that information, EPIC instructions are bundled together, and each bundle is submitted to the processor all at once. The only proviso is that the instructions bundled together must not affect the data that the other instructions in the bundle are using.
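To picture that data-independence proviso, here is a small C sketch (illustrative only; the function and variable names are made up, and a real EPIC compiler works on machine instructions rather than C statements):

    /* These three statements read and write entirely separate data,
       so a compiler for an EPIC processor is free to bundle the
       corresponding instructions and have them execute in parallel. */
    int independent(int a, int b, int c)
    {
        int x = a + 1;
        int y = b * 2;
        int z = c - 3;
        return x + y + z;
    }

    /* Here each statement needs the result of the one before it, so
       the corresponding instructions cannot share a bundle. */
    int dependent(int a)
    {
        int x = a + 1;
        int y = x * 2;   /* must wait for x */
        int z = y - 3;   /* must wait for y */
        return z;
    }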

There had been similar forays beyond complex and reduced instruction sets into what are known as very long instruction words. But a key differentiator between the Itanium, with its EPIC instruction set, and those other types of processors is a feature called predication.

Predication has to do with those if-statements that anyone who has done any programming will be familiar with. Branching statements in code, of which if-statements are just one example, generally say: if a condition exists, then do something, otherwise do something else. If you think about a machine executing those kinds of instructions, you realize that the execution of everything that comes after the condition has to wait on the step of figuring out whether the condition exists. You can envisage the machine saying, “well, let me first go and figure out whether this condition the code specifies exists, and then, based on that, I’ll know what to do next.” Intel figured that having subsequent instructions wait on earlier ones is expensive, so why not execute the whole if-statement at once? Have the processor figure out whether the condition exists at the same time as it does what needs to be done if the condition does exist, and at the same time as it does what needs to be done if the condition doesn’t exist! The theory is that by making the processor wide enough to do all of that work in parallel, and the compiler smart enough to take the branching statements out of the code and rework them into instructions that can be executed in parallel, you get not only a more efficient processor but also a foundation for taking processor technology forward: make the processor progressively wider, and the compiler progressively smarter at reworking branches into parallel instructions.
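As a rough illustration of what the compiler is being asked to do, here is a hypothetical C sketch; the second function is only a conceptual stand-in for predicated machine code, not actual Itanium output:

    /* The branchy form: the machine must resolve the condition
       before it knows which of the two results to produce. */
    int clamp_branchy(int value, int limit)
    {
        if (value > limit)
            return limit;
        else
            return value;
    }

    /* The predicated idea: evaluate the condition and both candidate
       results together, then keep whichever one the condition
       selects. No branch is taken. */
    int clamp_predicated(int value, int limit)
    {
        int over = (value > limit);               /* predicate: 1 or 0 */
        return over * limit + (1 - over) * value; /* keep the selected result */
    }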

So, Itanium processors have this new kind of instruction set, the EPIC instruction set, which is all about bigger instructions that carry information about how they can be bundled together and executed in parallel. And if you are going to have the processor execute bundles of big instructions all at once, then you want a bigger pool of memory for the instructions to draw upon; there is no point in having a whole bunch of instructions going through in parallel if the data that all of those instructions refer to cannot be in memory at once. So, EPIC processors are 64-bit processors, capable of addressing those massive 16TB+ memory spaces.
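As a quick illustration of what 64-bit addressing means in practice (nothing Itanium-specific here; any 64-bit C compiler will do), pointers and sizes become 8 bytes wide, so a single address range can run past the 4GB ceiling of a 32-bit processor:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t four_gb = 4ULL * 1024 * 1024 * 1024;

        /* On a 64-bit build this prints 8; on a 32-bit build, 4. */
        printf("pointer size: %zu bytes\n", sizeof(void *));

        /* A 64-bit length can describe data far larger than 4GB;
           whether such an allocation succeeds depends on the OS. */
        printf("4GB is %llu bytes\n", (unsigned long long)four_gb);
        return 0;
    }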
By putting a new instruction set into the Itanium processor, Intel broke with its fifteen-year tradition of only ever adding to the x86 instruction set with each new processor that it introduced, for the EPIC instruction set is not merely a superset of the x86 instruction set. So, x86 software cannot run on the Itanium processor as-is. The processor incorporates a decoder that translates x86 instructions into EPIC instructions, then assembles them into bundles. That decoding process takes time, and, as a result, x86 applications perform relatively poorly on Itanium processors: they run at the speed they typically would on a 1.5 GHz Xeon processor. Crucially, a machine with an Itanium processor cannot boot an x86 operating system.
One other characteristic of the Itanium processor that is essential to mention is that it has no fewer than 6 built-in floating-point calculators, 2 of which are tuned for 3D applications. Together, those 6 calculators yield a theoretical maximum of 6.4 gigaflops of single-precision floating-point processing power.
