Here's a silicon wafer for Intel's iAPX 432 processor (1981), a failed "micro-mainframe". Each rectangle on the wafer is one processor chip. But what are those five unusual rectangles? Those are test circuits... 🧵
Intel introduced the iAPX 432 "micromainframe" processor in 1981. Like every good mainframe, it had a separate channel processor to perform input/output (I/O). Here's my die photo of the 43203 Interface Processor chip. 🧵

The 432 was a strange system. Everything was an object, implemented in hardware with access control. You couldn't just move I/O data to memory because that would bypass the object protection. The Interface Processor translated between the object world and the I/O byte world.

@kenshirriff Very intriguing thread, as in "These might be good ideas!", until I saw a memory structure diagram and realized all that walking would be done in hardware, and how slow are they trying to be? And then I asked myself: what makes me think hardware can't walk data structures? I mean, if software does it, the hardware still does it, right? What makes that slow?

Today's die photo: the Intel iAPX 432 processor (1981). This "micro-mainframe" was so complex that the processor needed to be split across two chips: this is the 43202, which executes microinstructions. The 432 was slow, late, and a failure.

The iAPX 432 was a follow-on to the Intel 8008 and 8080, originally called the 8800. It was supposed to be Intel's flagship processor, but that didn't work out. This chip has the label 8802 since it is the second chip.

Before WiFi, Ethernet was the best way to connect your computer to the network. This Fujitsu Ethernet chip from the 1980s was used in network cards. 🧵
@kenshirriff Nice pic; any idea what's going on at the left of the 10th row up, where a few cells seem to be below the line of the others? I admit to not understanding why - even for a gate array - they have all the gaps between the cell rows; I think it might be because they didn't have many metal layers, so all the custom horizontal wires are in the gap?

@kenshirriff Some may say that Ethernet is still the best way to connect a computer to a network.

@kenshirriff I would argue that Ethernet is still the best way to connect your devices to the network.

This Central Air Data Computer (CADC) was introduced in 1955. It computed airspeed, altitude, etc. for fighter planes. But instead of a processor, it was an analog computer that used tiny gears for its computations. Let's look at how one of its modules works. 🧵
"... The missile knows where it is because it knows where it isn't, by subtracting where it isn't from where it isβ¦" Well now it all makes sense... @kenshirriff awesome, thanks :) Also (imho ;)) highly entertaining on the subject, for it's crackly sound and distinct narration: such series of US-Navy training videos as i.e.: In 1981, Intel released the iAPX 432, calling this "micro-mainframe" one of the most important advances in computing since the 1950s. But it was a flop, costing Intel $100 million. An unexpected side-effect, though, was the 8086 processor. 1/n
@kenshirriff I guess if it led to the 8086, it was an important advance in computing even if it itself was a flop.

@kenshirriff Interesting, I didn't know this system before, thanks for the explanations.

The most popular way of wiring computers into a network is Ethernet, invented at Xerox PARC in 1973. I looked inside a Fujitsu Ethernet chip from 1984. The silicon die has rows of transistors and metal wiring. Around the edges, bond wires connect the chip to the external pins.
@kenshirriff Thank you for such an interesting thread. My father worked for a company called General Automation in the 60s to early 70s. He designed motherboards. These illustrations reminded me of the large vellum sheets he would bring home to work on that looked just like these. Of course, as a kid I really didn't get what he did until I started using computers in the 80s.

@kenshirriff It's incredible to me that back in 1984, things were already this small. I think I truly have underestimated how unfathomably tiny things are now 🤯

You'd think every computer should be able to divide two numbers, but early microprocessors didn't have division instructions. The Intel 8086 (1978) was one of the first with division. Let's look at how it implemented division and why division is so hard.

Computers can divide by performing long division, just like grade school except using binary. This needs a subtract-and-shift loop. For early microprocessors, you'd implement the loop in assembly code. The 8086 implemented the loop in microcode, much faster and more convenient.

@kenshirriff I have a vague recollection of the 6809 having an advantage over the 6502 when it came to being able to divide more quickly. Interesting stuff!

Another look inside the Intel 8086 microprocessor from 1978. This chip (and today's x86 computers) has "string" operations to efficiently copy or scan blocks of memory. They are implemented with microcode and a bit of special hardware. Let's see how this works. 🧵

Most people think of machine instructions as the lowest level of software, but many processors contain microcode. An instruction is broken down into micro-instructions. The 8086 has 21-bit micro-instructions; each executes a register move (source→dest) and an action in parallel.

Fascinating! Thanks!! Reminds me horribly of that book The Soul of a New Machine: https://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine That book confirmed all my worst suspicions of working in the USA and permanently killed any ambition to do so!

@kenshirriff https://archive.org/details/bitsavers_decpdp8pdpgitalDesign2ed1987_36977011

Early microprocessors could add and subtract but didn't have multiply or divide instructions. The Intel 8086 processor (1978) fixed this with multiply and divide instructions, making life simpler for programmers. Multiplication used shift-and-add loops written in microcode. 🧵

Binary multiplication is much like long grade-school multiplication, but simpler because each step adds either 0 or the multiplicand. A processor can implement this by shifting the number and adding it in each cycle of a loop. On the 6502, for example, this was done in assembly code.
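As a rough illustration of the loops described in the posts above (not the 8086's actual microcode), here is a minimal Python sketch of shift-and-add multiplication and shift-and-subtract (restoring) division; the function names and the 16-bit width are just assumptions for the example.

    def mul16(a, b):
        """Shift-and-add multiplication: 16-bit x 16-bit -> 32-bit product."""
        product = 0
        for _ in range(16):
            if b & 1:            # low bit of multiplier set?
                product += a     # add the (shifted) multiplicand
            a <<= 1              # shift multiplicand left
            b >>= 1              # move on to the next multiplier bit
        return product & 0xFFFFFFFF

    def div32_16(dividend, divisor):
        """Restoring shift-and-subtract division: 32-bit / 16-bit -> (quotient, remainder)."""
        if divisor == 0:
            raise ZeroDivisionError  # the real 8086 raises a divide interrupt instead
        remainder = 0
        quotient = 0
        for i in reversed(range(32)):
            remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down the next bit
            quotient <<= 1
            if remainder >= divisor:  # does the divisor "go into" the partial remainder?
                remainder -= divisor
                quotient |= 1
        return quotient, remainder

    # Example: 100 * 7 = 700, and 700 / 7 = 100 remainder 0
    assert mul16(100, 7) == 700
    assert div32_16(700, 7) == (100, 0)

The 8086 runs a similar one-bit-per-iteration loop in microcode, which is one reason its multiply and divide instructions take many clock cycles.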
Intel introduced the 8086 microprocessor in 1978 and it still has a huge influence through the modern x86 architecture used today. This 16-bit processor contains a bunch of registers, some of them hidden. I reverse-engineered the 5-bit code that it uses to select registers. 🧵

Instructions for the 8086 processor specify registers through 3 bits in the opcode or following byte. This is expanded to a 5-bit code to support 16-bit registers (red), 8-bit registers (blue), segment registers (green), and special internal registers.

The rotating globe of the Globus INK (1967) showed Soviet cosmonauts their location in orbit. I reverse-engineered this analog gear-driven computer. We tested the "landing position" feature that rapidly spins the globe to show where they would land. Then it spins back to orbit. 🧵

The Globus advances its position every second. That's the annoying clicking sound: solenoids ratcheting the gears forward. For the landing position, a separate motor (upper right) spins the globe, stopping when an arm hits the limit switch. The relay board controls the motor.

Inertial navigation systems, gyroscopes, gimbals, etc.: one of the best examples of military research contemporaneously contributing to science research, and then also to the aerospace industry! There is still a history book to be written about it, with a central chapter for the Draper Laboratory ;)

You should log/vlog this!!

@kenshirriff It's amazing what was done with the newest technology that was available at that time. Probably nobody could imagine then that all of that would be replaced by a generic screen and some super-fast calculations done by billions of tiny transistors controlling the screen. But it's the same today; we just use the latest technology at hand, and nobody knows what the future brings.

The Intel 8086 processor (1978) has a complex instruction set with instructions from 1 to 6 bytes long. How does the processor determine the instruction length? It turns out that there is no explicit length. A ROM says whether it's 1 or 2 bytes; then the microcode fetches bytes until done. 🧵

Instruction processing starts with the Group Decode ROM, which classifies instructions: 1 byte implemented in logic, a prefix, 1+ bytes using microcode, or 2+ bytes (including the ModR/M byte) using microcode. A circuit called the loader gets 1 or 2 bytes from the prefetch queue.
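To make the "no explicit length" idea concrete, here is a hedged Python sketch of that flow. It uses a made-up classification table with just three opcodes (0x90 NOP, the 0xF3 REP prefix, and 0x8B MOV r16,r/m16); the real Group Decode ROM classifies all 256 opcodes and the microcode handles far more cases.

    # Illustrative only: a toy classification table standing in for the Group Decode ROM.
    PREFIX, ONE_BYTE, NEEDS_MODRM = "prefix", "one-byte", "needs ModR/M"

    CLASSIFY = {
        0x90: ONE_BYTE,     # NOP: handled without a second byte
        0xF3: PREFIX,       # REP prefix: another opcode byte follows
        0x8B: NEEDS_MODRM,  # MOV r16, r/m16: a ModR/M byte follows
    }

    def fetch_instruction(prefetch_queue):
        """Consume one instruction's bytes from an iterator of prefetched bytes.

        The length is never computed up front: the loader grabs one or two bytes,
        and any further bytes (displacement, immediate) would be pulled on demand
        as the microcode asks for them."""
        consumed = []
        byte = next(prefetch_queue)
        consumed.append(byte)
        while CLASSIFY.get(byte) == PREFIX:    # prefixes just loop back for another byte
            byte = next(prefetch_queue)
            consumed.append(byte)
        if CLASSIFY.get(byte) == NEEDS_MODRM:  # the loader fetches the ModR/M byte too
            consumed.append(next(prefetch_queue))
            # A real decoder would now examine the ModR/M mode bits and fetch
            # 0, 1, or 2 displacement bytes; that part is omitted in this sketch.
        return consumed

    # Example: REP prefix, then MOV AX, BX (0x8B with ModR/M byte 0xC3); the 0x90 stays queued.
    print([hex(b) for b in fetch_instruction(iter([0xF3, 0x8B, 0xC3, 0x90]))])  # ['0xf3', '0x8b', '0xc3']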
The Intel 8086 microprocessor (1978) led to the x86 architecture that your computer probably runs today. The 8086 provided a complicated set of memory access modes to get values from memory. Let's take a close look at how microcode and hardware work together to implement them. 🧵
@kenshirriff Do you only decode HW? I still have a 255-byte .COM that does stuff you could identify :)

Thinking about this, I'm reminded of the movie "Inside Out." Only instead of people controlling a person, it's a program controlling programs.

How did Russian cosmonauts know where they were? The Globus INK (1967) showed the position of their Soyuz spacecraft on a rotating globe. It is an analog computer built from tiny gears. I reverse-engineered the wiring (which inconveniently had been cut) and we powered it up. 🧵
The iconic Intel 8086 processor (1978) set the path for modern computers. Like most CPUs, it supports interrupts, a mechanism to interrupt the regular flow of processing. I've been reverse-engineering it from the silicon die, and I can explain how its interrupt circuitry works.

The interrupt circuitry is implemented both in microcode and hardware. Microcode is a layer between machine instructions and the hardware, executing low-level 21-bit micro-instructions. These perform moves, combined with several types of actions: ALU, memory, jumps, etc.
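A toy interpreter can show the flavor of such a "move plus action" micro-instruction. The encoding below is invented purely for illustration (the actual 8086 micro-instruction fields differ), though the tmpA/tmpB/SIGMA register names echo the internal registers described in these threads.

    from dataclasses import dataclass

    # Invented encoding for illustration only: each micro-instruction moves a value
    # from a source register to a destination register and, alongside the move,
    # performs an optional action (an ALU operation or a jump within the microcode).
    @dataclass
    class MicroInstruction:
        src: str            # source register name
        dst: str            # destination register name
        action: str = ""    # e.g. "ALU ADD", "JMP 12", or "" for no action

    regs = {"AX": 5, "BX": 7, "tmpA": 0, "tmpB": 0, "SIGMA": 0}

    def step(uinst, upc):
        """Execute one micro-instruction; return the next micro-program counter."""
        regs[uinst.dst] = regs[uinst.src]          # the register-to-register move
        if uinst.action.startswith("ALU ADD"):     # action runs alongside the move
            regs["SIGMA"] = regs["tmpA"] + regs["tmpB"]
        elif uinst.action.startswith("JMP"):
            return int(uinst.action.split()[1])    # jump within the microcode ROM
        return upc + 1

    # A tiny "micro-program": load the temporaries, add them, store the result.
    uprogram = [
        MicroInstruction("AX", "tmpA"),
        MicroInstruction("BX", "tmpB", "ALU ADD"),
        MicroInstruction("SIGMA", "AX"),
    ]
    upc = 0
    while upc < len(uprogram):
        upc = step(uprogram[upc], upc)
    print(regs["AX"])  # 12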
The Bendix Central Air Data Computer (CADC) is an amazing electromechanical device that computed airspeed, altitude, and other "air data" for fighter planes such as the F-104 and F-111. Digital computers weren't good enough in 1955, so the CADC used gears, cams, and synchros. 🧵
@kenshirriff Looks like I could plug my HP LaserJet into the top there.

What's inside the famous 8086 processor from 1978? I opened up a chip, took microscope photos, and I'm reverse-engineering it. One of the 8086's instructions is HLT, which halts the processor. Seems simple, but there's a lot of circuitry to make the halt instruction work... 🧵 (image: https://pbs.twimg.com/media/FnalrlNXEBoU1I_?format=jpg&name=4096x4096)

This diagram shows the main parts of the 8086 chip; dark labels are affected by HALT. The 8086 is partitioned into an Execution Unit, which executes instructions, and a Bus Interface Unit, which performs memory operations and preloads instructions into the prefetch queue.

Amazing. I can only imagine what programming in assembler with this level of understanding would be like.

@kenshirriff The 8086 version of CP/M used the same machine-translated code, but typical 8086/88 machines were interrupt-driven, waking the machine again at the next keystroke and thus rendering the anti-piracy check toothless.

The Globus INK (1967) is a remarkable piece of Soviet spacecraft equipment. Its rotating globe showed cosmonauts the position of their Soyuz spacecraft. An electromechanical analog computer, it used gears, cams, and differentials to compute the position. Let's look inside. 🧵
@kenshirriff Awesome technology. Reminds me a bit of the KSP navball. Is there a way to replicate such tech with modern off-the-shelf thingies?

Everyone says the Intel 8086 processor has 29,000 tiny transistors. But I counted and found 19,618. Why the difference? It turns out that most counts include "potential" transistors as well as real ones. Let's take a look and find out why. 🧵
@kenshirriff I am guessing this got used in the 80186 variant, which included a lot more on-board support circuitry.
Creating chips on a silicon wafer is complicated and lots can go wrong. A few test circuits were placed on the wafer so Intel could check the important characteristics and make sure everything was okay. The white squares are test pads. Tiny probes contact the pads for measurements.
@kenshirriff I asked Google Bard to write a poem about the i432:
The Intel i432
A chip ahead of its time,
An architecture too complex,
Too expensive to produce.
A dream that never came true,
A vision of the future,
That was lost to time.
A relic of the past,
A forgotten footnote,
In the history of computing.
But still, it is remembered,
By those who know its story,
And who appreciate its potential.
The Intel i432,
A forgotten masterpiece,
That could have changed the world.