What If We Made Transistors Smaller? Wouldn't That Help Make Chips Bigger?

Chips have become pretty large in modern times, but there is a penalty for making a chip too large. The first is cost. A silicon wafer of a given size and process costs the same regardless of how many dice you fit on it. So if a wafer can hold 100 dice rather than 50, the physical production cost of each chip drops in half.

But it’s even better than that, thanks to the defect density of the silicon wafer. Every wafer has defects, and those defects cause some of the chips to fail. If 20 defects land across 50 chips, I’m left with only 30 working chips. If those same 20 defects land across 100 chips, at least 80 still work. Put that together with the per-die cost above and the larger chip costs about 2.7x as much per good die (the same wafer money buys 80 good small chips but only 30 good large ones, and 80/30 ≈ 2.7) in just this simple case.
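To make that arithmetic concrete, here’s a quick Python sketch using the numbers above. The wafer cost is a made-up placeholder, and I’m assuming the worst case where every defect lands on a different die:

```python
# Cost per good die: fixed wafer cost, and (worst case) each of the
# 20 defects kills a different die.
WAFER_COST = 1000.0  # dollars, hypothetical placeholder
DEFECTS = 20

for dice_per_wafer in (50, 100):
    good_dice = dice_per_wafer - DEFECTS  # 30 or 80 good dice
    print(f"{dice_per_wafer} dice/wafer -> {good_dice} good, "
          f"${WAFER_COST / good_dice:.2f} per good die")

# 50 dice/wafer  -> 30 good, $33.33 per good die
# 100 dice/wafer -> 80 good, $12.50 per good die
# 33.33 / 12.50 = ~2.7x cost penalty for the big die
```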

Then there are other issues. Smaller geometries offer lower power, faster signal propagation, etc. But shrinking doesn’t always help. For example, flash memory was getting less reliable and shorter-lived as it shrank, so state-of-the-art flash moved back to larger effective geometries (larger L-eff) but went 3D, stacking 40+ layers of flash cells right on top of one another. Still, shrinking has been a crucial aspect of modern CPU, GPU, and memory evolution.

We have dramatically increased the size of the wafer itself over the years, which makes large chips possible. But as a chip grows past a certain size, the price goes way up. Consider one of the largest chips around: your camera sensor. Well, some folks’ camera sensors.
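To put rough numbers on wafer versus die size, here’s a sketch using one common back-of-envelope approximation for gross dice per wafer. The sensor sizes come from this post; the formula is a standard estimate, not any fab’s exact layout math:

```python
import math

def gross_dice_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """A common approximation: wafer area divided by die area, minus an
    edge-loss term for the partial dice lost around the rim."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Sensor sizes from this post, on a standard 300 mm wafer
phone_sensor = 4.98 * 3.74    # ~18.6 mm^2
medium_format = 43.8 * 32.9   # ~1441 mm^2
for area in (phone_sensor, medium_format):
    print(f"{area:7.1f} mm^2 -> ~{gross_dice_per_wafer(300, area)} dice per wafer")

# ~18.6 mm^2 -> ~3640 dice per wafer
# ~1441 mm^2 -> ~31 dice per wafer
```

Thousands of phone sensors fit on one wafer; only a few dozen medium-format sensors do, and each one is exposed to the wafer’s defects.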

We all know that smartphones have cameras. A new Nokia phone has six cameras on the back and one in front. It sells for about $500… because each camera module probably runs $15–$20, most of them using a sensor chip measuring just 4.98 x 3.74mm. Next up, we have the Fujifilm GFX 50R, which I believe is the lowest-priced digital medium format camera on the market. The body runs around $4,000 and includes a 43.8mm x 32.9mm sensor. That’s a big, expensive chip!

While a camera sensor needs to be large, there are lower-cost options for most other chips. Look at the AMD Threadripper, the CPU in my PC. The latest version is based on nine “chiplets”… separate chips, each of which yields at a much higher percentage than one large monolithic chip would. But that’s not all. Because a 16- or 32-core CPU is just 16 or 32 of the same thing, they combine up to eight 8-core chiplets for a full-on 64-core packaged part. For the 16-core part, rather than design a different chip, they just use two of the same 8-core chiplets, and so on. So the overall volume of the chiplets goes up, the cost goes even further down, and that’s why AMD is currently drinking Intel’s milkshake and everyone else in the industry is moving to this approach.
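The yield advantage is easy to see with the simple Poisson yield model, Y = e^(-A·D0), where A is die area and D0 is defect density. The die areas and defect density below are illustrative guesses on my part, not AMD’s actual numbers:

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: probability a die has zero defects."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

D0 = 0.2  # defects per cm^2 (illustrative guess)

monolithic = poisson_yield(6.00, D0)   # one big 600 mm^2 die
chiplet = poisson_yield(0.75, D0)      # one 75 mm^2 chiplet

print(f"600 mm^2 monolithic die yield: {monolithic:.0%}")  # ~30%
print(f" 75 mm^2 chiplet yield:        {chiplet:.0%}")     # ~86%
# A defect costs you one small chiplet, not the whole 600 mm^2 die.
```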

Another thing about MCMs and chiplets: not every chip in the MCM needs to be fabbed in the same process. In Threadripper’s case, the processor chiplets are fabbed in a state-of-the-art 7nm process, but the central I/O and switch chip is fabbed in 14nm. That further lowers costs and improves yields.

This specific implementation is pretty recent for PC CPUs, but the idea’s been around for a while. In the 1980s and 1990s, I worked on the Commodore Amiga personal computers. The Amiga’s three “custom chips” really wanted to be a single chip, but the Agnus chip alone was already close to the largest chip Commodore’s MOS Technology fab could reliably make. So the design shipped as three custom chips connected on the board.

Many things in modern multi-chip modules are done in-module because the signals are too fast, or the buses too wide, for practical use on a PCB (perhaps both). But the idea is the same: building what is logically one chip from multiple dice to improve performance, cost, etc.

