Sunday, April 20, 2014

Why does x86 mean 32-bit?

Technically, x86 simply refers to a family of processors and the instruction set they all use. It doesn't actually say anything specific about data sizes.

x86 started out as a 16-bit instruction set for 16-bit processors (the 8086 and 8088 processors), then was extended to a 32-bit instruction set for 32-bit processors (80386 and 80486), and now has been extended to a 64-bit instruction set for 64-bit processors. It used to be written as 80x86 to reflect the changing value in the middle of the chip model numbers, but somewhere along the line the 80 in the front was dropped, leaving just x86. 
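
As an aside, this layered history is visible in the architecture macros that compilers predefine. Here is a minimal sketch, assuming GCC or Clang (MSVC uses _M_X64 and _M_IX86 instead):

    #include <stdio.h>

    int main(void) {
        /* GCC and Clang predefine these macros for the target architecture. */
    #if defined(__x86_64__)
        puts("64-bit x86 (x86-64)");
    #elif defined(__i386__)
        puts("32-bit x86 (80386 or later)");
    #else
        puts("not an x86 target");
    #endif
        /* Pointer width is the practical difference: 4 vs. 8 bytes. */
        printf("pointer size: %zu bytes\n", sizeof(void *));
        return 0;
    }

Compiling the same file with -m32 versus -m64 flips the output, which is the whole point: one instruction-set family, several data sizes.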

Blame the Pentium and its offspring for changing the way processors were named and marketed, although all newer processors using Intel's x86 instruction set are still referred to as x86, i386, or i686 compatible (which means they all use extensions of the original 8086 instruction set).

x64 is really the odd man out here. The 64-bit extension to the x86 instruction set was first called x86-64, and was later renamed AMD64 (because AMD was the one to come up with the 64-bit extension in the first place). Intel licensed the 64-bit instruction set and named its version EM64T. The shorter x64 caught on afterwards as an abbreviation of x86-64, popularized largely by Microsoft's "x64" editions of Windows. Both instruction sets and the processors that use them are all still considered x86.
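
The old names still leak through on a modern system. Here is a minimal sketch, assuming a POSIX system such as Linux, where uname() reports the machine type:

    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) != 0)
            return 1;
        /* Prints "x86_64" on a 64-bit kernel, "i686" or "i386" on 32-bit. */
        printf("machine: %s\n", u.machine);
        return 0;
    }

The same information is available from the shell by running uname -m.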
