Because the Datapoint 2200 used low-cost shift-register memory instead of RAM, it operated serially and needed to be little-endian. The 8008 copied this, and that's why Intel processors are little-endian today.
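To make that concrete: a serial machine has to process the least significant bits first so that carries can propagate, which forces the low-order byte to come first in memory. Here is a minimal C sketch (not from the original post) that prints the in-memory byte order of a 16-bit value on a little-endian x86 machine:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Store a 16-bit value and look at its individual bytes in memory.
     * On a little-endian machine (x86, descended from the 8008), the
     * least significant byte is stored first -- the same order a
     * bit-serial machine like the Datapoint 2200 needed so carries
     * could propagate as data streamed out of shift-register memory. */
    uint16_t value = 0x1234;
    uint8_t *bytes = (uint8_t *)&value;

    printf("byte 0: 0x%02x\n", bytes[0]);  /* 0x34 on x86 */
    printf("byte 1: 0x%02x\n", bytes[1]);  /* 0x12 on x86 */
    return 0;
}

On a big-endian machine the two bytes would print in the opposite order, which is the whole distinction the post is about.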
Intel improved the 8008 to create the 8080 processor, which was popular in embedded systems. The first generation of home computers (Altair, IMSAI) used the 8080. Because of backward compatibility, the 8080 still had the Datapoint instructions and features.

The 8086 was a big improvement over the 8080: a 16-bit processor instead of 8. The 8086's register names originally matched the Datapoint ones: A, B, C, D, E, H, L, as shown in this 8086 patent diagram. But these were renamed AX, BX, CX, and DX just before release. The 8086 was designed to be backward compatible with the 8080 through a conversion program called CONV86, so it inherited the Datapoint features.

The 8086 was extended into the modern x86 architecture used in most laptops and servers today. So that's how the modern x86 architecture developed from an obscure desktop computer called the Datapoint 2200. For lots of details and a close look at the instruction sets, see my blog post: https://www.righto.com/2023/08/datapoint-to-8086.html

Credits: Altair photo by Colin Douglas (CC BY-SA 2.0) https://commons.wikimedia.org/wiki/File:Altair_8800,_Smithsonian_Museum_(white_background).jpg

@kenshirriff I read your earlier (but in substance identical) account of the Datapoint/Intel history a while ago. It is mind-blowing how the design decisions of a rather obscure intelligent terminal in 1970 still shape a large part of computing today, more than 50 years later, and probably for decades to come. No one could have ever imagined that at the time, and I even have a hard time grasping it now. Thanks a lot for sharing this!

@kenshirriff @schotanus I have never worked on it, but in the mid 80's, Neddata, the IT part of Nedlloyd Shipping Company, had one small but important system running on a Datapoint. The rest was working on IBM mainframe(s) and something new called DEC. We had one PC, for 125 IT employees.

@kenshirriff I'd heard that the parity bit was brought forward from the 4004 and its use in operating traffic lights.

@alexr Unfortunately, there are two problems with that theory. The 4004 does not have parity, and the 8008 is unrelated to the 4004.

@kenshirriff The fact that a design decision in the 1970s led to an architecture trait that has endured for over 50 years now is pretty amazing.

@zorinlynx The IBM System/360 architecture from 1964 is similarly amazing, since IBM's mainframes are still compatible with it.

@kenshirriff BTW that looks like the video board, which used 14 shift-register memory ICs (7-bit ASCII, two chips per bit). This is a memory card from the d2200 recreation I'm currently working on. It's not engineered for the power-hungry 1405s, which I'll need to emulate with a daughter board. The decoder board is already done; the processor board is at the layout stage. I'm adding as many blinkenlights as I can fit on.

@kenshirriff That's the first time an explanation of why x86 is LE has made sense to me!
The Datapoint 2200 had a parity flag (very useful for a terminal) and I/O instructions for controlling its hardware. That's why x86 has a parity flag and I/O instructions.
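For concreteness, here is a minimal C sketch (not from the post) of the check the x86 parity flag performs: PF is set when the low byte of a result contains an even number of 1 bits, which is exactly the test a terminal needs to validate a character received with even parity.

#include <stdio.h>
#include <stdint.h>

/* Return the value the x86 parity flag (PF) would take for a byte:
 * 1 when the byte contains an even number of 1 bits, 0 otherwise. */
static int even_parity(uint8_t byte) {
    int ones = 0;
    for (int bit = 0; bit < 8; bit++) {
        ones += (byte >> bit) & 1;
    }
    return (ones % 2) == 0;
}

int main(void) {
    printf("0x41 ('A'): parity flag would be %d\n", even_parity(0x41));  /* two 1 bits -> 1 */
    printf("0x43 ('C'): parity flag would be %d\n", even_parity(0x43));  /* three 1 bits -> 0 */
    return 0;
}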