Iris Classon - In Love with Code

Stupid Question 82: What does x86 stand for? And why do we use x86 to represent 32-bit?

[To celebrate my first year of programming I will ask a ‘stupid’ question daily on my blog for a year, to make sure I learn at least 365 new things during my second year as a developer]


I honestly thought everybody knew the answer to this one except me, but secretly I really wanted to know. Because there is no 86-bit OS, right? So what’s up with this? I googled, and I found out.

Turns out ‘x86 is a series of computer microprocessor instruction set architectures based on the Intel 8086 CPU’ (Wikipedia) (the 80 in front was dropped later; we like things simple and short).
As for the confusion, we actually have this a bit backwards: it is not x86 that is confusing us, but x64. Most x86 processors on user machines today are 64-bit. When the x86 series was extended to 64 bits, the name x86-64 was used, which is logical, but then it was unfortunately renamed to AMD64 by AMD, and to x64 by others (Oracle Corporation and Microsoft).
Often x86 is used instead of x86-32, and x64 instead of x86-64. The naming confusion comes from pure laziness :D Programmers, learn - don’t be lazy with names ;) !
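
Fun detail: the names live on inside compilers. Here is a minimal C sketch (assuming GCC/Clang or MSVC, which predefine these architecture macros) that prints which flavor of x86 a program was compiled for - notice how the old names survive in the macros:

#include <stdio.h>

int main(void)
{
    /* 64-bit x86: GCC/Clang predefine __x86_64__ (and __amd64__),
       MSVC predefines _M_X64 - the AMD64/x64 naming lives on here. */
#if defined(__x86_64__) || defined(_M_X64)
    printf("Compiled for x86-64 (a.k.a. x64, AMD64, Intel 64)\n");
    /* 32-bit x86: GCC/Clang predefine __i386__, MSVC predefines _M_IX86. */
#elif defined(__i386__) || defined(_M_IX86)
    printf("Compiled for 32-bit x86 (a.k.a. x86-32, IA-32, i386)\n");
#else
    printf("Compiled for some other architecture\n");
#endif
    return 0;
}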

Great answer here for those who want more details, but not so much that they want to read three Wikipedia articles :)

Comments

Robert MacLean
11/16/2012 1:34:10 AM
I just want to say keep up the good work - I read this question and thought, dumb question. There must be a million articles on this; who would ever need to ask this publicly anymore? I guess this is the issue with more senior people like myself - we have done this and forgotten two things:

1) You do not know what you do not know. If you have never thought of this as a junior, you would never think to look this up and then you would never know.

2) People are scared to ask questions - I see it in my talks: get to the end and ask "Who has questions?", and no hands. Then afterwards I get loads of people coming up to ask questions they were just scared to ask before.

Realising this will help me be a better "senior" as I will understand my audiences a lot more - so thank you!
Scott Barnes
11/16/2012 1:37:30 AM
I remember the day I got home to find my mum had bought me an upgraded 386 DX instead of an SX with a math co-processor. It felt unstoppable :) 
Iris Classon
11/16/2012 1:49:11 AM
Reply to: Robert MacLean
I asked around first, and keep in mind all my friends are nerds (90% of them anyway ;) ) and the youngest is 19 and the oldest, well, she won't tell ;) - that old LOL, but nobody knew.
Iris Classon
11/16/2012 1:51:00 AM
Reply to: Scott Barnes
Scott, you've got to make me stop googling things... the words, the WORDS! the darn words I do not recognize LOL :D Something with processors... something...
Graham
11/16/2012 2:30:31 AM
A slightly more sane (though not by much) convention is that Apple refers to the 32-bit architecture as i386 - the first 32-bit Intel processor in the series was the 80386. The 64-bit architecture is called x86_64.

Just to be confusing, x86_64 is AMD's term...Intel calls it Intel 64 :). But that can be confused with IA-64, which is their name for the Itanium architecture... *sigh*. 
James Murphy
11/16/2012 2:39:22 AM
Another reason for the x86/x64 confusion is that Intel's original 64-bit route was Itanium, which, unlike the AMD stuff, was not x86-based. That gets referred to as IA64 - so instead of IA64 we have x64...
Gareth Bradley
11/16/2012 2:43:53 AM
I did not know this and presumed x86 was just 32-bit and x64 was 64-bit. Thanks Iris. Just goes to show we should never presume things, I suppose! :)
Michal
11/16/2012 5:57:14 AM
Reply to: Graham
What Intel calls IA-64 is much different from the 64-bit extension to x86. IA-64 is a whole different family of processors used in servers (like Itanium).

EDIT: Just realized that you mentioned IA-64 too. Sorry. Anyway, it's all screwed up.
Tom Alderman
11/16/2012 8:31:26 AM
Ahh, I remember having to set jumpers on the motherboard to set the CPU speed. Fun stuff. 
Alexander
11/16/2012 9:30:10 AM
I thought it didn't go back as far as the 8086 itself. Since there were the i286 (well, 80286 really), i386, and i486, the x86 notation seemed pretty natural for this CPU family.
Lance McCarthy
11/16/2012 10:10:21 AM
Reply to: Scott Barnes
Math Coprocessor FTW!!! Oh, and don't forget to grab a SoundBlaster card while you're at it :) 
Jimmy Shimizu
11/19/2012 7:29:34 AM
Reply to: Alexander
This is a more probable reason for the x-prefix, since all those CPUs were called 8080, 80286, 80386 (the full model names).

Since they all share the same instruction set (at least from the 386 onwards), this actually implies support for the x86 architecture (586 and 686 were sometimes used to indicate Pentium and Pentium 2 CPUs, and compatible CPUs from manufacturers other than Intel). The 80- prefix has nothing to do with it AFAIK.
Björn Åhmark
11/22/2012 12:47:30 AM
Well, as with most questions, this one is more complicated than it first seems.

Only the newer x86 processors are 32-bit processors. The 80286, 80186, 8086 and 8088 (which despite the name is an x86 processor) were 16-bit processors, but the instruction set used on those processors also works on newer x86 processors. The 8088 (the processor used in the original IBM PC) and the 80188 were 16-bit internally, but the data bus outside the chip was only 8 bits wide.

To confuse things even further, you could add a math coprocessor (the 80287 or 80387) to the 80286 and 80386 processors for improved floating-point calculation performance. In the i486 Intel integrated the math processor on-chip, and that version of the i486 was called the i486DX. The i486 without a math processor was called the i486SX.

The 8080 mentioned previously is not an x86 processor.

These discussions make me feel a bit old (I have not passed 40 yet). PCs have been part of my life since I was around 10, when my dad came home with an 8088-equipped PC. Since then I have used PCs with most of the processors above.
James Curran
11/27/2012 10:07:13 AM
The chip series started in the early '70s with the 4004, which was designed for pocket calculators but had a limited instruction set. The "4" was the number of bits in some part of it (probably the accumulator) -- 4 being the number of bits needed to hold one decimal digit, which was the important thing for a pocket calculator.

When Intel realized that these were useful for programming beyond calculators, they expanded on it and created the 8008 (which I believe meant an 8-bit accumulator and an 8-bit data path).
Their next generation was the 8086 -- a 16-bit accumulator and a 16-bit data path. These were popular, but the 16-bit data path made the hardware too expensive, so they reworked it as the 8088 (mostly the same instruction set as the 8086 but with an 8-bit data path). This is what IBM chose for their original PCs. The 8086/8088 had this funky address mode which allowed one to address a total of 1MB of memory (an insanely large amount back then) while dealing with it only 64K at a time. With mass production of PCs kicking in, memory prices dropped, and soon everyone was hitting the "640K limit" (the other 384K was reserved for hardware and ROM).
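
To make that funky address mode concrete: a real-mode address is a 16-bit segment and a 16-bit offset, combined as segment * 16 + offset. A little C sketch of the arithmetic (the segment and offset values are just illustrative):

#include <stdio.h>

int main(void)
{
    /* 8086 real-mode addressing: physical = segment * 16 + offset.
       Two 16-bit values combine into a 20-bit physical address,
       which is how 1MB becomes addressable, 64K at a time. */
    unsigned short segment = 0xF000;   /* illustrative value */
    unsigned short offset  = 0xFFFF;   /* illustrative value */
    unsigned long  physical = (unsigned long)segment * 16 + offset;

    printf("%04X:%04X -> physical address %05lX\n",
           segment, offset, physical);
    /* Prints F000:FFFF -> physical address FFFFF, the very top of
       the 1MB space; any single segment still spans only 64K. */
    return 0;
}
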
By this time Intel had already created the advanced version, the 80186 (a souped-up 8086), which as far as I know was never used in any mainstream PC, although it was used in many non-DOS embedded machines.

They followed that up with the 80286, which tried to address the memory problem. It had two modes: "real mode", which was just a fast 8086, and "protected mode", which changed the memory addressing scheme, allowing it to address much more memory (plus did a bunch of other cool things). The problem was that while you could switch from real to protected mode, you couldn't switch back (until a reboot), and DOS couldn't run in protected mode, so to use protected mode you'd need a whole new OS. IBM used this for their PC AT, which became the industry standard at the time, running DOS, in real mode, as a fast PC.

Intel then came out with the 80386, which allowed switching back and forth between real and protected modes, plus added even more cool stuff to protected mode, so that you could build a sophisticated, multi-user, multi-tasking OS on it to rival mainframe computers. At first IBM was resistant to using it, but other PC makers did --- again, just as a fast 8086.

Finally, IBM came out with their PS/2 line, which used the 80386 chip, and with it OS/2, which was to make use of all those cool 386 features.

Hmmmm.... I've gotten a bit off the path here.... Moving on....

The 80386 was followed by the 80486, which as far as I know was just a fast 80386, but -- and here's the important part -- it was duplicated by a different chip maker (I think AMD, but I forget now). The rival chip worked the same way but was even faster, and it had a name like "AMD486". This led Intel to sue them for trademark infringement, to which the trademark office said "That's a number, not a trademark".

So, to avoid the problem in the future, Intel decided to give the next chip, which everyone assumed would be the 80586, an actual name they could trademark -- namely, "Pentium". (About this time, they also started calling the 80386 and 80486 the "i386" and "i486", presumably to give them trademarkable names.)

When it came time for their next chip in the series (i.e., the 80686), people were assuming it would be the "Hexium" or the "Sexium", but the marketing folks felt they had invested too much in the "Pentium" brand to abandon it, so the new chip was called the "Pentium Pro".

To confuse things even more, the next chip after that (i.e., the 80786) was called the "Pentium II" even though it was the third in the Pentium line.

After that the line began to split into many sectors (low-power for laptops, different numbers of cores, etc.), and they completely abandoned numbers for names, so it's hard to say what is the successor to what.


Last modified on 2012-11-14
