Silicon, Imagination, Implementation
Patricia Highsmith: “Oh time, thou art strange.”
We imagined, in the past, this future of ours in semiconductor device manufacturing, but despite our pooled imagination, and despite our best efforts, we just couldn’t make that envisioned future a here-and-now reality with the machines and processes we had at the time.
However, in a fortuitous and virtuous cycle, advances in transistor architectures and advances in our ability to build those architectures at scale have been one long continuous improvement arc, an arc spanning the course of at least four decades. To the point today where smart machines, looking over our shoulders as assistants, or being let loose to toil away on thorny problems without too much supervision, design the machines that manufacture the machines, silicon chips, that are the basis for our world.
I am talking about the span that runs from the 29,000 transistors comprising an Intel 8088 microprocessor, which is where I entered the semiconductor industry, to the Cerebras Wafer Scale Engine of today, said to contain 2.6 trillion transistors, 850,000 AI-optimized cores, and 40 gigabytes of high performance on-wafer memory. Big Iron. I am talking about System-On-a-Chip (SoC) processors, built on 5-nanometer manufacturing technology, with 33.7 billion transistors, 10-core CPUs, and 16-core GPUs. In your laptop. And you may ask yourself, “Well, how did we get here?” Computational lithography is one part of the answer.
According to ASML, “Without computational lithography, it would be impossible for chipmakers to manufacture the latest technology nodes.” At and below the 130nm process node, the (compute-intense) algorithmic models on which computational lithography depends optimize the photomask design “by intentionally deforming the patterns to compensate for the physical and chemical effects that occur during lithography and patterning. The net result: we end up with an accurate replica of the desired chip patterns on the wafer.” (Although the mask pattern itself looks nothing like what prints on the wafer, which took a great leap of imagination to embrace.)
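The idea of intentionally deforming the mask so the wafer comes out right can be made concrete with a toy model. The sketch below is a deliberately simplified, hypothetical illustration, not any vendor’s algorithm: it treats the optics as a one-dimensional Gaussian blur and the resist as a simple threshold, then shows that a mask drawn exactly like the target prints a narrow gap shut (the proximity effect), while a deformed mask, which no longer looks like the target, prints it correctly.

```python
import numpy as np

# Toy 1-D lithography model: the "aerial image" on the wafer is the mask
# convolved with a Gaussian blur (standing in for the optics), and the
# resist prints wherever that image clears a threshold. All numbers here
# are illustrative, chosen only to make the proximity effect visible.
x = np.arange(-5, 6)
kernel = np.exp(-x**2 / 4.0)
kernel /= kernel.sum()

def printed(mask):
    aerial = np.convolve(mask, kernel, mode="same")
    return (aerial >= 0.5).astype(float)

# Target: two 7-pixel lines separated by a 1-pixel gap.
target = np.zeros(55)
target[20:27] = 1.0
target[28:35] = 1.0

# Naive mask drawn exactly like the target: blur from the two facing
# line ends overlaps, and the gap prints shut -- the proximity effect.
assert printed(target)[27] == 1.0

# "Correction" by hand: deform the mask by dimming the pixels flanking
# the gap. The mask no longer looks like the target, but the wafer does.
mask = target.copy()
mask[26] = mask[28] = 0.4
assert np.array_equal(printed(mask), target)
```

Production computational lithography solves this as a large-scale inverse problem across full chip layouts; the point of the sketch is only why the optimized mask ends up looking nothing like the printed pattern.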
Another part of the how-did-we-get-here answer, of particular interest to me, involves the (compute-intense) design and modeling tools we now use in the semiconductor equipment business, both for sustaining innovations and for disruptive ones.
Consider plasma etching. At one time in living memory, plasma etch reactor design and reactor optimization were often done seat-of-the-pants. (I exaggerate slightly.) Subject matter experts observed and measured the behavior of a given plasma reactor, and the etch process results obtained from it, and then iterated: on the original reactor design, by physically varying the spacing between the wafer being etched and the showerhead above it; on reactant distribution, by varying the hole pattern in the showerhead; on ion-dependent etch reactions, by varying the RF power characteristics (RF frequency, RF peak-to-peak voltage) delivered to the plasma reactor; and so on.
Experiments were almost always one-factor-at-a-time experiments, because multifactor experiments, for example experiments where RF power and process pressure and reactant mixtures were all deliberately varied according to Design of Experiment principles, produced tangles of data which were difficult to separate into main effects, interactions, and confounding. Difficult because, as the SAS Institute reports, “Generating and analyzing these designs relied primarily on hand calculation in the past; until recently practitioners started using computer-generated designs for a more effective and efficient DOE.”
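What those hand calculations untangle is easy to show in miniature. The sketch below, with made-up factor names and a synthetic etch-rate response invented purely for illustration, runs a 2^3 full-factorial design and separates main effects from an interaction, exactly the bookkeeping that defeats one-factor-at-a-time experimentation:

```python
import itertools
import numpy as np

# A 2^3 full-factorial design: every combination of low (-1) and high (+1)
# settings for three illustrative etch factors. Eight balanced runs let us
# estimate each main effect and each interaction independently.
factors = ["rf_power", "pressure", "gas_mix"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 runs x 3

# Synthetic etch-rate "measurements" (nm/min) for the sketch: a baseline,
# real effects for power and pressure, and a power-pressure interaction.
rf, p, g = design.T
etch_rate = 300 + 40*rf - 25*p + 10*rf*p  # gas_mix deliberately inert

# Main effect of a factor: mean response at its high setting minus mean
# response at its low setting. Balance makes these estimates untangle.
def main_effect(col):
    return etch_rate[col == 1].mean() - etch_rate[col == -1].mean()

effects = {name: main_effect(design[:, i]) for i, name in enumerate(factors)}
interaction_rf_p = etch_rate[rf*p == 1].mean() - etch_rate[rf*p == -1].mean()

print(effects)           # rf_power and pressure stand out; gas_mix is null
print(interaction_rf_p)  # the power-pressure interaction, cleanly isolated
```

A one-factor-at-a-time sweep would have recovered the main effects only at whichever fixed settings it happened to hold the other factors, and would have missed the interaction entirely; the balanced design recovers everything from the same eight runs.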
How did we get here? Cheap silicon, and abundant transistors, lots of them, to solve the compute-intense / compute-expense design and modeling bottlenecks that once prevented all but those fortunate enough to have access to university-level or National Laboratory-level computer centers from doing anything except imagining how we would do the work.
It’s now routine for engineers and designers, for example my colleagues at Ichor Systems, Inc., working in the realm of Computational Fluid Dynamics on both continuous improvement projects and innovative configuration projects for our semiconductor processing gas delivery and chemical delivery products, to employ a host of modeling and design tools well before we cut metal or mold plastic. For DOE and data visualization, the JMP product does on a desktop or laptop computer what we once did (or tried to do) by hand. For modeling plasma etch reactors and processes, or deposition reactors and processes, the tools from COMSOL, from Ansys, and from others support our efforts. As does SolidWorks for 3D CAD.
What do we imagine comes next? AI-backed machines, looking over our shoulders as assistants, or being let loose to toil away on thorny problems without too much supervision, will perhaps step into many roles in semiconductor capital equipment engineering and process development. But as I wrote elsewhere, together with colleagues from a SEMI ASMC Smart Manufacturing panel a few years ago, “There’s nothing artificial about pairing human intelligence with machine-based smart manufacturing. Implementing an ever-smarter tomorrow in semiconductor manufacturing requires smart people just as much as it requires smart machines.”
We can’t advance without both.