The Origin, Nature, and Implications of
The Benchmark of Progress in Semiconductor Electronics
by Bob Schaller
September 26, 1996
This study will examine the development and evolution of semiconductor
electronics, and in particular attempt to more completely explain "Moore's
Law," a phenomenon unique to the rapid innovation cycles of this technology and
thus the semiconductor industry as a whole. Gordon E. Moore's simple
observation more than three decades ago that circuit densities of
semiconductors had and would continue to double on a regular basis has not only
been validated, but has since been dubbed, "Moore's Law" and now carries with
it enormous influence. It is increasingly referred to as a controlling
variable -- some have referred to it as a "self-fulfilling prophecy." The
historical regularity and predictability of "Moore's Law" produce organizing
and coordinating effects throughout the semiconductor industry that not only
set the pace of innovation, but define the rules and very nature of
competition. And since semiconductors increasingly comprise a larger portion
of electronics components and systems, either used directly by consumers or
incorporated into end-use items purchased by consumers, the impact of "Moore's
Law" has led users and consumers to come to expect a continuous stream of
faster, better, and cheaper high-technology products. The policy implications
of "Moore's Law" are significant as evidenced by its use as the baseline
assumption in the industry's strategic "roadmap" for the next decade and a half.
Organization of the Paper
This paper attempts to describe the origin, nature, and implications of "Moore's Law" in a
comprehensive fashion. It begins with an historical overview of the major developments
in semiconductor electronics that led up to Gordon Moore's 1965 observation. The next
section examines the "Moore's Law" concept in detail along with some of its broader
implications. This is followed by a review of the critical input side of the industry --
semiconductor manufacturing equipment makers. The paper then briefly examines
"Moore's Law" analogues along with more general interpretations and policy
considerations. Finally, preliminary conclusions are offered.
Genesis: Bell Labs and the Transistor
The invention of the transfer resistor, or "transistor," in 1947 by Bell
Laboratories researchers ushered in a new era of solid-state electronics. The
concept was based on the fact that it is possible to selectively control the
flow of electricity through a material such as silicon, a solid material --
thus "solid-state" -- with unique conducting properties, designating some areas
as conductors of current and adjacent areas as insulators -- thus the term
"semiconductor." Compared with the vacuum tube (known also as the thermionic
valve), which was the dominant technology for this task at the time, the
transistor proved significantly more reliable, required much less power, and
most importantly, could be miniaturized to almost infinitesimal levels. This
paper examines this last point in particular, as it is the basis for "Moore's Law."
The 1950s saw significant progress in solid-state research along with the
creation of an entire new industry that would design and manufacture
semiconductor devices. Although AT&T's Bell Labs is credited with the birth
and early development of this new industry, a 1956 Consent Decree ending an
anti-trust case prohibited AT&T from marketing commercial solid-state devices
and required them to disseminate their patents and technology throughout the
industry. Ironically, that very same year three AT&T scientists at Bell Labs
won the Nobel prize for their discovery of the transistor. Indeed, AT&T's
Western Electric would initially become the largest semiconductor producer to
satisfy the device requirements for its own telecommunications systems. Firms
with this internal demand became known as "captive" users. Although other
systems houses -- most notably IBM -- would do the same, AT&T's pure and
applied scientific contributions were vital to the launching of the industry.
Dosi (1984) appropriately refers to this critical role as that of the
industry's "bridging institution."
From Science to Technology of Production
There is almost universal acceptance that the discovery of the transistor is
the modern era's example of the economic fruits of science and a testament to
Vannevar Bush's "Endless Frontier" assertion for continued post-WWII emphasis
on basic research. But commercial production of the new device proved very
difficult and would take most of the 1950s to iron out. Early devices were
hand-made under extremely crude conditions compared with today's "clean rooms."
Thus, initial yields (the percentage of good devices manufactured) were very
low -- 20% to 30% was common, and even lower on some sophisticated devices -- while
operating characteristics of working devices varied considerably. From the
start, improving yields became one of the industry's primary production
challenges. It would take advances in technology, specifically process
technology, to improve production methods and, in turn, develop a viable
semiconductor industry. As Braun states, "It was process technology that
determined the winners in the semiconductor race." (Forester 1982) Gordon Moore
himself recalls the important and unique role of technology in the early
stages, "Indeed, the technology led the science in a sort of inverse linear
model." (Moore 1996)
Throughout the 1950s the industry continued to learn the art of semiconductor
production, continually refining ad hoc, trial-and-error methods. Improved
production methods enabled additional advances in products and processes. The
development of the integrated circuit in 1958 represents a major product
milestone, made possible by overcoming technological barriers.
Jack Kilby, inventor of the integrated circuit (IC), commented on this new
device as a prime example of the transition from a science-based enterprise to
one increasingly based on technology: "In contrast to the invention of
the transistor, this [integrated circuit] was an invention with relatively few
scientific implications. . . Certainly in those years, by and large, you could
say it contributed very little to scientific thought." (Braun & Macdonald 1982)
A second major breakthrough of the 1950s is better described as a series of
incremental process innovations in the manufacturing of semiconductor devices.
Work at Bell Labs and General Electric produced most of these innovations.
Bell Labs' sharing of these methods in formal symposia made possible rapid
process technology diffusion throughout the industry. The two most noteworthy
innovations were the diffusion and oxide masking process and the planar
process, both of which have remained the basis of production ever since. The diffusion
process allowed the producer to diffuse impurities (dopants) directly into the
semiconductor surface, eliminating the tedious practice of adding conducting
and insulating material layers on top of the substrate.
The addition of sophisticated photographic techniques permitted the laying of
intricate mask patterns on the semiconductor so that diffusion took place only
in designated areas. This greatly increased the accuracy of production while
improving the reliability of devices. With diffusion, production moved from a
craft process of individual assembly to batch processing.
The planar process was a logical outgrowth of the diffusion and oxide masking
process. Planarization was the creation of physicist Jean Hoerni of
newly-formed Fairchild Semiconductor. Hoerni observed the production
limitations of conventional 3-dimensional transistor designs (e.g., the "mesa"
transistor). Hoerni reasoned that a design based on a "plane" would be
superior. Thus, the planar transistor, as the name implies, was flat.
Flattening the mesa enabled electrical connections to be made, not laboriously
by hand, but by depositing an evaporated metal film on appropriate portions of
the semiconductor wafer. Using a lithographic process of a series of etched
and plated regions on a thin, flat surface or wafer of silicon, the "chip" was
born out of the planar transistor. Like the printing process itself, the
planar process allowed for significantly greater rates of production output at
even higher yields.
More importantly, the planar process enabled the integration of circuits on a
single substrate since electrical connections between circuits could be
accomplished internal to the chip. Robert Noyce of Fairchild quickly
recognized this. As Gordon Moore recalls: "When we were patenting this
[planar transistor] we recognized it was a significant change, and the patent
attorney asked us if we really thought through all the ramifications of it.
And we hadn't, so Noyce got a group together to see what they could come up
with and right away he saw that this gave us a reason now you could run the
metal up over the top without shorting out the junctions, so you could actually
connect this one to the next-door neighbor or some other thing."
Fairchild introduced the first planar transistor in 1959 and the first planar
IC in 1961. As will be discussed later, Moore views the 1959 innovation of the
planar transistor as the origin of "Moore's Law." Perhaps more than any other
single process innovation, planarization set the industry on its historical
exponential pace of progress. As one early industrial technologist noted, "The
planar process is the key to the whole of semiconductor work."
George Gilder's account in his 1989 treatise, Microcosm, is more
eloquent: "Known as the planar integrated circuit, Fairchild's concept
comprised the essential device and process that dominates the industry today
. . . Ultimately it moved the industry deep into the microcosm, and put
America on . . ."
With time and experience, ad hoc production methods were replaced with more
formalized technology-based processes. To underscore the importance of process
innovations, Braun and Macdonald (1982) state that much of the early growth in
semiconductor electronics "was not only permitted by new processes, but
actually precipitated by them, for batch production in general, and planar in
particular, prompted both a rapid increase in the numbers of components
produced and an even more rapid decline in their price."
Amazingly, the industry has not veered from this course since then. With time,
chip manufacturers improved the lithographic process with more precise
photographic methods and "photolithography" thus became the standardized
production method for the industry. More pertinent to "Moore's Law,"
photolithography enabled manufacturers to continue to reduce feature sizes of
devices. Commenting on the significance of photolithography within the planar
process Malone states, "Thus were planted the seeds of Moore's Law, the very
principle that governs the information age." The research focus of the 1950s,
moving from laboratory to the production floor, gradually shifted its emphasis
from understanding why to learning how.
Creating and mastering the art of photolithography is an excellent example of
this transition from science to technology.
"The use of photolithography is yet another example of interdependence of
technologies and cross-fertilisation. The method had been developed for printing
purposes and had been in use in this area for some time. It is but one outstanding
example of the adoption and adaptation of extraneous technologies to improve the
manufacture and design of electronic devices." (Braun and Macdonald 1982)
A New Industry from a New Technology
The secondary literature on the development of the semiconductor industry --
including the phenomenon called "Silicon Valley" -- is extensive and need not
be reviewed here. One common theme worth noting is that this industry is
qualitatively different, characterized by a base technology that seems to
provide a limitless source of performance advancement. From the beginning this
was recognized primarily by new firms, not existing electronics device firms.
Harvard's Theodore Levitt's "Marketing Myopia" (1960) noted that the
once-dominant railroad industry had completely missed the opportunities brought
about by technological advances in other modes of transportation. The railroad
industry's narrow definition of its market as the "railroad" business, as
opposed to the broader "transportation" business excluded its participation in
whole new automobile, truck, and airplane/airline industries. A similar
parallel can be drawn regarding the creation of the semiconductor industry --
none of the major semiconductor players today bears the name of dominant
electronics firms of the 1950s (e.g., General Electric, RCA, Raytheon,
Sylvania, Philco-Ford, and Westinghouse).
These firms, all heavily engaged in the production of vacuum tubes, did make
substantial early investments in semiconductor electronics. But the
semiconductor industry that emerged by 1960 is represented by a whole new breed
of firms, some from other seemingly unrelated industries, some entirely new.
Texas Instruments, Shockley Laboratories, and Fairchild Semiconductor are three
of the many new firms that had emerged. Each had a traceable connection to Bell Labs.
Texas Instruments, a geophysical company that provided oil well services, was
one of the first to purchase a license from AT&T and begin semiconductor design
and manufacturing operations. Texas Instruments' Gordon Teal, a former Bell
Labs researcher, successfully produced the first silicon transistor that would
prove significantly easier to manufacture while possessing much improved
operating characteristics over the germanium transistor in use at the time.
William Shockley, also formerly of Bell Labs and a Nobel laureate for the
co-discovery of the transistor, established Shockley Transistor Laboratories,
gathering together some of the best minds at the time, including a young
engineer named Gordon Moore.
Within a few years, Moore and others at Shockley Labs convinced Fairchild
Camera and Instrument, an aerial survey company, to finance a new enterprise,
so they left Shockley and formed Fairchild Semiconductor. Moore would head up
the research department at Fairchild, where he would later make his circuit
density doubling observation. The innovative breakthrough of the IC in the
late-1950s as previously discussed actually involved both firms. Jack Kilby at
Texas Instruments produced the first germanium IC while Robert Noyce at
Fairchild quickly made the concept technically and economically feasible --
thus commercially viable -- by developing the planar process. Moore recalls
the significance of the planar process at Fairchild, "In the planar structure,
Fairchild struck a rich vein of technology." (Moore 1996)
The story of Fairchild Semiconductor is a fascinating one and is illustrative
of the dynamic nature of this industry, especially in its early days.
Fairchild is the subject of much industry lore. The young founders (including
Moore), seeking to make good on commercial success in semiconductor production,
left Shockley Laboratories in 1957 calling themselves the "Fairchild Eight" and
founded Fairchild Semiconductor. Fairchild is thought to have spawned no fewer
than 150 companies, including Moore's and Noyce's Intel in 1968 -- these
spin-offs have come to be referred to as "Fairchildren." It was at Fairchild
that Gordon Moore, Director of Research, made his profound density-doubling
observation and extrapolation.
Gordon Moore's Observation
The April 19, 1965 Electronics magazine was the 35th anniversary issue of the
publication. Located obscurely between an article on the future of consumer
electronics by an executive at Motorola, and one on advances in space
technologies by a NASA official is an article of less than four pages (with
graphics) entitled "Cramming more components onto integrated circuits," by
Gordon E. Moore, Director, Research and Development Laboratories, Fairchild
Semiconductor. Moore had been asked by Electronics to predict what was going
to happen in the semiconductor components industry over the next 10 years -- to
1975. He speculated that by 1975 it would be possible to squeeze as many as
65,000 components onto a single silicon chip occupying an area of only about
one-fourth of a square inch.
His reasoning was a log-linear relationship between device complexity (higher
circuit density at reduced cost) and time: "The complexity for minimum
component costs has increased at a rate of roughly a factor of two per year.
Certainly over the short term this rate can be expected to continue, if not to
increase. Over the longer term, the rate of increase is a bit more uncertain,
although there is no reason to believe it will not remain nearly constant for
at least 10 years." (Moore 1965)
This was an empirical assertion, although surprisingly it was based on only
three data points.
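The arithmetic behind that 65,000-component figure is easy to reproduce. A minimal sketch in Python, assuming for illustration (the baseline figure is mine, not from Moore's article) a chip of roughly 64 components in 1965 and one doubling per year:

```python
def project_components(baseline: int, start_year: int, end_year: int) -> int:
    """Project component count forward, doubling once per year."""
    return baseline * 2 ** (end_year - start_year)

# Ten annual doublings from an assumed 64-component chip in 1965:
print(project_components(64, 1965, 1975))  # 65536 -- "as many as 65,000 components"
```

Ten doublings multiply the count by 2^10 = 1024, which is why the projection lands just above 65,000.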
Ten years later, Moore delivered a paper at the 1975 IEEE International
Electron Devices Meeting in which he reexamined the annual rate of
density-doubling. Amazingly, the plot had held through a scatter of different
complex bipolar and MOS device types (see Product and Technology Overview)
introduced over the 1969-1974 period. A new device to be introduced in 1975, a
16k charge-coupled-device (CCD) memory, indeed contained almost 65,000
components. In this paper, Moore also offered his analysis of the major
contributions or causes of the exponential behavior. He cited three reasons.
First, die sizes were increasing at an exponential rate -- chip dice were
getting bigger. As defect densities decreased, chip manufacturers could work
with larger areas without sacrificing yields. Many process
changes contributed to this, not the least of which was moving to optical
projection rather than contact printing of the patterns on the wafers.
The second reason was a simultaneous evolution to finer minimum dimensions
(i.e., feature sizes or line widths). This variable also approximated an
exponential rate. Combining the contributions of larger die sizes and finer
dimensions clearly helped explain increased chip complexity, but when plotted
against Moore's original curve these two factors left roughly one-third of the
exponential growth unexplained. Moore attributed this remaining third to what he calls "circuit and device
cleverness." He notes that several features had been added. Newer approaches
for device isolation, for example, had squeezed out much of the unused area.
The advent of metal oxide semiconductor (MOS) technology in the late-1960s and
early-1970s had allowed even tighter packing of components per chip.
Interestingly, he also concluded that the end of "cleverness" had arrived with
the CCD memory device: "There is no room left to squeeze anything out
by being clever. Going forward from here we have to depend on the two size
factors - bigger dice and finer dimensions."
So Moore revised his rate of circuit density-doubling: a doubling every
eighteen months now seemed a reasonable rate, and was supported by his
analysis. He redrew the plot from 1975 forward with a less steep slope,
reflecting a slowdown in the rate but still behaving in a log-linear fashion.
Shortly thereafter someone (not Moore) dubbed this curve, "Moore's Law."
Officially, Moore's Law states that circuit density or capacity of
semiconductors doubles every eighteen months or quadruples every three years.
It even appears in mathematical form:
Circuits per chip = 2^((year - 1975) / 1.5)
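Read this way, the formula gives chip capacity relative to a 1975 baseline: the exponent counts how many 18-month doubling periods have elapsed. A minimal sketch in Python (the function name is mine, not the paper's):

```python
def circuits_per_chip(year: float) -> float:
    """Moore's Law in its revised 1975 form: density doubles every
    18 months (1.5 years), measured relative to a 1975 baseline."""
    return 2 ** ((year - 1975) / 1.5)

# Two doublings by 1978 (quadrupling every three years), six by 1984:
print(circuits_per_chip(1978))  # 4.0
print(circuits_per_chip(1984))  # 64.0
```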
In 1995 Moore compared the actual performance of two device categories (DRAMs
and microprocessors) against his revised projection of 1975. Amazingly, both
device types tracked the slope of the exponential curve fairly closely, with
DRAMs consistently achieving higher densities than microprocessors over the
25-year period since the early 1970s. Die sizes had continued to increase while
line widths had continued to decrease at exponential rates consistent with his
1975 analysis.
Moore's early prediction was based on the shared observations by many in the
fledgling semiconductor industry. The invention of the transistor had started
a miniaturization trajectory in semiconductors which had produced the
integrated circuit in the late-1950s, soon followed by medium scale integration
(MSI) of the mid-1960s, then large scale integration (LSI) of the early-1970s,
very large scale integration (VLSI) of the 1980s, and ULSI (ultra) of the
1990s. Today's Intel Pentium™ microprocessor contains more than three million
transistors, the Motorola PowerPC™ microprocessor contains almost seven million
transistors, and Digital's 64-bit Alpha™ microprocessor contains almost 10
million transistors on a thin wafer "chip" barely the size of a fingernail. In
early-1996 IBM claimed that a gigabit (billion bits) memory chip was actively
under development and would be commercially available within a few years.
Papers presented at a 1995 IEEE International Solid-State Circuits Conference
contend that terachips (capable of handling a trillion bits or instructions)
will arrive by the end of the next decade. (Stix 1995)
Implications: Technological Barometer?
The implications of Moore's Law are quite obvious and profound. It is
increasingly referred to as a ruler, gauge, benchmark (see subtitle),
barometer, or some other form of definitive measurement of innovation and
progress within the semiconductor industry.
As one industry watcher has recently put it: "Moore's Law is important
because it is the only stable ruler we have today... It's a sort of
technological barometer. It very clearly tells you that if you take the
information processing power you have today and multiply by two, that will be
what your competition will be doing 18 months from now. And that is where you
too will have to be." (Malone 1996)
Perpetuum Mobile, Self-Fulfilling Prophecy, or Both?
Perhaps the broadest implication of Moore's Law is that it has become an almost
universal guide for an entire industry that has not broken stride in
exponential growth rates for almost four decades now. The repeated
predictability and regularity of Moore's Law are characteristics of the elusive
perpetuum mobile for this industry. Some have referred to Moore's Law as
self-reinforcing or a "self-fulfilling prophecy."
Moore himself recently stated:
"More than anything, once something like this gets established, it becomes more
or less a self-fulfilling prophecy. The Semiconductor Industry Association
puts out a technology road map, which continues this generation [turnover]
every three years. Everyone in the industry recognizes that if you don't stay
on essentially that curve they will fall behind. So it sort of drives itself."
There is intuitive merit to this view. The inherent characteristics of the
technology contribute significantly to this "drive itself" tendency. Chip
makers have long recognized the combined benefits of miniaturization. As Moore explains:
"By making things smaller, everything gets better simultaneously. There is
little need for tradeoffs. The speed of our products goes up, the power
consumption goes down, system reliability, as we put more of the system on a
chip, improves by leaps and bounds, but especially the cost of doing things
electronically drops as a result of the technology." (Moore 1995)
Braun and Macdonald (1982) also refer to the "self-sustaining" nature of
miniaturization in semiconductors, as "tradeoffs," in Moore's words, don't
really enter into the equation. In economic parlance, this is the proverbial "free lunch."
From a different angle, George Gilder (1989) argues that the technology itself
possesses an almost natural "microcosmic" force toward integration in smaller
and smaller spaces. He refers to this as the "law of the microcosm" and
suggests that users and other institutions affected by the technology
understand and follow its direction: "Rather than pushing decisions up
through the hierarchy, the power of microelectronics pulls them remorselessly
down to the individual. This is the law of the microcosm. . . The very
physics of computing dictates that complexity and interconnections -- and thus
computational power -- be pushed down from the system into single chips . . .
Above all, the law of the microcosm means the computer will remain chiefly a
personal appliance . . . Integration will be downward onto the chip, not
upward from the chip."
He draws some fairly broad implications by stating that the evolution of
chip-related industries "will remorselessly imitate the evolution of the chip."
That is, smaller, thus cheaper, yet more powerful chip capabilities will
redefine entire industries away from larger, oligopolistic structures toward
smaller ones more conducive to an entrepreneurial environment. He makes
a convincing case with telecommunications, referencing Peter Huber's post-AT&T
break-up analysis of the "geodesic" or horizontal network that evolved within
the telephone system. Huber asserts that the traditional pyramidal network
model, where all switching was done in the central office, had been made
obsolete by decentralized switching systems such as private branch exchanges,
local area networks, and related systems. Thus it was only natural to accord
the industry a more horizontal competitive landscape consistent with its
redefined geodesic network structure.
Yet another dimension, involving non-technical or non-physical variables such
as user expectations, contributes to the dynamic of fulfilling this law. In this
view, Moore's Law is not based on the physics and chemical properties of
semiconductors and their respective production processes, but on other
non-technical factors. One hypothesis is that a more complete explanation of
Moore's Law has to do with the confluence and aggregation of individuals'
expectations manifested in organizational and social systems which serve to
self-reinforce the fulfillment of Moore's prediction.
A brief examination of the interplay among only three components of the
personal computer (PC) (i.e., microprocessor chip, semiconductor memory, and
system software) helps reveal this point. A very common scenario using the
IBM-compatible PC equipped with an Intel microprocessor and running Microsoft's
Windows™ software goes something like this. As the Intel microprocessor has
evolved from the 8086/88 chip in 1979 to the 286 in 1982, to the 386 in 1985,
to the 486 in 1989, to the Pentium™ in 1993, and the Pentium Pro™ in 1996, each
incremental product has been markedly faster, more powerful, and less costly as
a direct result of Moore's Law. At the same time, dynamic random access memory
(DRAM) and derivative forms of semiconductor memory have followed a more
regular Moore's Law pattern to the present where a new PC comes standard with
8 megabytes to 16 megabytes of memory, as compared with the 480-kilobyte
standard of a decade ago. Both of these cases reflect the physical or
technical aspects of Moore's Law.
However, system software, the third piece of this puzzle, begins to reveal the
non-technical dimension of Moore's Law. In the early days of computing when
internal memory was costly and scarce, system software practices had to fit
this limitation -- limited memory meant efficient use of it or "tight" code.
With the advent of semiconductor memory -- especially with metal oxide
semiconductor (MOS) technology -- internal memory now obeyed Moore's Law and
average PC memory sizes grew at an exponential rate. Thus, system software was
no longer constrained to "tight spaces," and programs of thousands, then many
thousands, and now millions of lines of code have become the norm
for complex system software.
Nathan Myhrvold, Director of Microsoft's Advanced Technology Group, conducted a
study of a variety of Microsoft products by counting the lines of code for
successive releases of the same software package. (Brand 1995) Basic had 4,000
lines of code in 1975 -- 20 years later it had roughly half a million.
Microsoft Word consisted of 27,000 lines of code in its first version in 1982
-- since then it has grown to about 2 million.
Myhrvold draws a parallel with Moore's Law: "So we have increased the
size and complexity of software even faster than Moore's Law. In fact, this is
why there is a market for faster processors -- software people have always
consumed new capability as fast or faster than the chip people could make it
available."
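Taking the cited line counts at face value, the implied doubling period of code size can be checked with a few lines of Python (a rough back-of-the-envelope computation, not part of Myhrvold's study):

```python
import math

def doubling_time(initial: float, final: float, years: float) -> float:
    """Years per doubling implied by exponential growth from
    initial to final over the given number of years."""
    doublings = math.log2(final / initial)
    return years / doublings

# Basic: 4,000 lines in 1975 to roughly 500,000 twenty years later.
print(round(doubling_time(4_000, 500_000, 20), 1))  # about 2.9 years per doubling
```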
As the marginal cost of additional semiconductor processing power and memory
literally approaches zero, system software has exponentially evolved to a much
larger part of the "system." More complex software requires yet even more
memory and more processing capacity, and presumably software designers and
programmers have come to expect that this will indeed be the case. Within this
scenario a kind of reinforcement multiplier effect is at work.
A "Slipstream" to Software Development?
This network reinforcement multiplier effect is most noticeable in computers
and related products. This point is further emphasized since computers
represent the single largest user category of semiconductor devices at 60% of
the entire industry demand, primarily for microprocessors and DRAMs. A very
distant second is telephones at 10%. After that, nothing else comes close.
(Hutcheson and Hutcheson 1996, Economist 1996) Arguably in computer software --
as in semiconductors -- complexity has also been rising exponentially. As just
discussed, though, the rate of increase in software complexity appears to be
outpacing that of the chips that comprise the hardware that drives the software.
One noted software programmer has propounded two new Parkinson's Laws for
software: "Software expands to fill the available memory," and "Software is
getting slower more rapidly than hardware is getting faster." (Gilder 1995)
Indeed, newer programs seem to run more slowly on most systems than their
previous releases (e.g., compare WordPerfect 6.0 for Windows with WordPerfect
5.1 for DOS).
Microsoft, especially with its Windows development and emergent "Wintel"
(Windows-Intel) de facto standard, owes much of its success to shrewdly
exploiting the advances of microcosmic hardware. (Gilder 1995, 1989)
"[Bill] Gates travels in the slipstream behind Moore's Law, following a key
rule of the microcosm: Waste transistors. . . 'Every time Andy [Grove] makes a
faster chip, Bill uses all of it.' Wasting transistors is the law of thrift in
the microcosm, and Gates has been its most brilliant and resourceful exponent."
When asked recently about his view of "Wintel," Moore, Chairman of the Board at
Intel, quips, "Our legal department doesn't like it at all." He then expands on
the strategic importance of the hardware/software technological alliance, but
also acknowledges the independence of architectures made possible by an
ever-changing industry. "We certainly will try to keep it [Wintel]
going that way. We have a tremendous asset in all the compatible software
that's out there, so any new processor we introduce has to be able to run that
stuff and as long as we keep a very large fraction of the processors, I think
Microsoft will be sure that they write things that run well on our processors.
And, we both have ideas of being somewhat independent. We're happy to have
Java applications, and then Netscape, UNIX and everything else, and also
Microsoft ports NT to [Digital's] Alpha, but in fact there is a tremendous
advantage to the volume centers business."
It is clear that a type of lock-in (Arthur 1994) has occurred with respect to
PC system hardware and software architecture. The history of this particular
alliance beginning with IBM's early selection of both Intel and Microsoft as
critical component suppliers (microprocessor and operating system,
respectively) of their revolutionary PC has ultimately contributed to a decade
and a half of mutual learning by the two firms, ironically now without IBM.
Had the choice been different, who knows what system architecture would have
evolved? What's important is that one has, involving -- and producing -- two
of the most important players in the PC industry today in Intel and Microsoft.
And there is no doubt that this alliance affects development and innovation
cycles of both firms, thus the industry at large. Whether Moore's Law is the
slipstream to software development as George Gilder asserts, or the other way
around, may be a kind of "chicken and egg" question. There is little doubt
that a significant expectations feedback loop involving Moore's Law is at play.
This feedback mechanism is illustrated in Figure 1.
Scaling from J-Shaped to S-Shaped Curves
As stated earlier, the exponential pace of innovation was generally understood
within the semiconductor community by the mid-1960s. Erich Bloch credits
Gordon Moore as being "the most articulate" of the early group of technologists
in communicating this phenomenon. Carver Mead, noted computer scientist at
Caltech, did a series of early calculations to determine the precise scaling
effects of the technology. This work intensified with the introduction of MOS
technology in the late-1960s and by 1972 the first comprehensive scaling
analysis was published. Mead's analysis confirmed that Moore's extrapolation
was not only possible, but probable, and added academic credence to the phenomenon.
In a more recent study, Mead (1994) reexamined his earlier scaling estimates
and then looked ahead: "Over the ensuing 22 years, feature sizes have
evolved from 6 µm to 0.6 µm, and the trend shows no sign of abating. . . I shall
conclude that we can safely count on at least one more order of magnitude of
scaling, with a concomitant increase in both density and performance."
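Mead's figures can be checked with simple scaling arithmetic. The sketch below assumes the classical relationship that device density varies inversely with the square of the minimum feature size; the function name is illustrative, not from the paper.

```python
# Classical scaling arithmetic: density improves as the inverse square of
# the minimum feature size (a linear shrink gains area in two dimensions).
def density_gain(old_feature_um: float, new_feature_um: float) -> float:
    """Relative device-density improvement from a linear feature shrink."""
    return (old_feature_um / new_feature_um) ** 2

# Mead's 1994 figures: features evolved from 6 um to 0.6 um over 22 years,
# a 10x linear shrink and therefore roughly a 100x density gain.
print(density_gain(6.0, 0.6))
# "One more order of magnitude of scaling" (0.6 um -> 0.06 um) would yield
# another factor of about 100 in density.
print(density_gain(0.6, 0.06))
```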
This form of analysis is consistent with technology development along
exponential "J-shaped" or "S-shaped" curves. (Rothschild 1990, Foster 1986,
and Klein 1984, 1977) In the field of economics, particularly its evolutionary
strain in the field of Complexity Science, this phenomenon is known as
increasing returns. (Arthur 1994, Waldrop 1992) Whatever label is applied,
there is little question that there is still considerable "learning" occurring
in exploiting the potential physical properties of semiconductors along with
the associated production processes. At some point this technology -- like all
technologies -- will reach its limit of exponential growth and begin to
experience diminishing marginal returns (at the top hump of the "S").
So When Will Moore's Law End? Is This The Right Question?
A 1995 article in the Economist is titled, "The End of the Line" and discusses
the impending fate of Moore's Law. A similar Forbes article is titled,
"Whither Moore's Law?" while a recent editorial's headline in a Unix Users
Group's Internet home page (CUUG 1996) reads, "The End of Moore's Law: Thank
God!" Numerous other forecasts have come to similar conclusions. But as
discussed earlier, Moore's Law started out as a simple observation and
extrapolation. Actual performance and experience have validated Moore's
original plot, proving him quite prophetic. An intriguing point about Moore's
Law is that throughout its existence, forecasts of its demise have consistently
proven premature.
For example, in an exhaustive study on the history and impact of the
semiconductor, Braun and Macdonald (1982) came to a similar conclusion as
Gordon Moore had in his original 1965 article: "Unlike the consequences
arising from the future use of semiconductor electronics, the technical future
of the technology, though still uncertain, can be forecast with a degree of
confidence over the very short term. For the time being, trends of increased
circuit densities will continue, although no-one expects Moore's Law to hold
for very much longer."
The authors go on to say that the microprocessor would probably reach its
zenith with a 32-bit architecture and questioned whether the 256k DRAM would
become the "ultimate single chip memor[y]." In the decade and a half since,
Digital's 64-bit Alpha 21164 microprocessor chip contains almost 10 million
transistors operating at more than 300 MHz, and 16-megabit DRAMs are now the
contemporary state-of-the-art chips, with advanced designs soon to eclipse even
these.
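The Alpha 21164 figure quoted in the text can be checked against the doubling trend with back-of-the-envelope arithmetic. The 1971 starting point (Intel's 4004 at roughly 2,300 transistors) is a commonly cited figure, not taken from this paper.

```python
# Project a transistor count forward under a fixed doubling period.
def projected_transistors(start_count: float, start_year: int,
                          end_year: int, years_per_doubling: float) -> float:
    doublings = (end_year - start_year) / years_per_doubling
    return start_count * 2.0 ** doublings

# Doubling every two years from ~2,300 transistors in 1971 lands close to
# the ~10 million transistors of the 1995-era Alpha 21164.
print(projected_transistors(2300, 1971, 1995, 2.0))  # ~9.4 million
```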
In late-1994, the Semiconductor Industry Association (SIA) published the
National Technology Roadmap for Semiconductors. The Roadmap is a consensus
view of the industry's technical vision and forecast over the next decade and a
half -- through 2010. The second paragraph of the document contains the
statement, "A central assumption of the Roadmap is an extension of industry
history according to Moore's law." (SIA 1994)
A recent survey of some of the best thinkers in the high-tech industry revealed
a wide range of responses to the question, "How many more years will Moore's
Law play out?" including: "With conventional lithography, another three
to five [years], max. . . 10 to 15 years max. . . At least another 20 years or
more. . . Moore's Law has worked in the past 25 years or so. There's no doubt
that it will continue. . . We'll all be dead when Moore's Law is played out."
A very revealing follow-up question, "What will stop it [Moore's Law] -- design
limits, manufacturing limits or fabrication costs?" has several predictable
answers such as, "The fundamental physics of silicon will become a limiting
factor." However, one respondent, Dan Lynch, President and CEO of CyberCash,
offers a starkly different view by answering, "Moore's Law is about human
ingenuity progress, not physics." (Malone 1996)
Along similar lines, Carver Mead (now Gordon and Betty Moore Professor of
Engineering and Applied Science at Caltech) states that Moore's Law "is really
about people's belief system, it's not a law of physics, it's about human
belief, and when people believe in something, they'll put energy behind it to
make it come to pass."
Mead offers a retrospective, yet philosophical explanation of how Moore's Law
has been reinforced within the semiconductor community through "living it":
"After it's [Moore's Law] happened long enough, people begin to talk
about it in retrospect, and in retrospect it's really a curve that goes through
some points and so it looks like a physical law and people talk about it that
way. But actually if you're living it, which I am, then it doesn't feel like a
physical law. It's really a thing about human activity, it's about vision,
it's about what you're allowed to believe. Because people are really limited
by their beliefs, they limit themselves by what they allow themselves to
believe what is possible. So here's an example where Gordon [Moore], when he
made this observation early on, he really gave us permission to believe that it
would keep going. And so some of us went off and did some calculations about
it and said, 'Yes, it can keep going'. And that then gave other people
permission to believe it could keep going. And [after believing it] for the
last two or three generations, 'maybe I can believe it for a couple more, even
though I can't see how to get there'. . . The wonderful thing about [Moore's
Law] is that it is not a static law, it forces everyone to live in a dynamic,
evolving world." (UVC 1992)
The historical literature reveals a pattern -- beginning with Moore's original
1965 prediction -- that longer-term predictions (greater than 10 years) of
diminishing growth in circuit complexity simply have not yet come to pass. In
fact, the latest set of "predictions" in 1996, although collectively more
optimistic than previous samples, still peg a future longer-term limit at less
than 15 years. (Malone 1996)
In a very recent interview, Moore himself seems to stick to the "about another
decade" prediction he originally made in 1965: "I think much of the
rate of progress can be expected to continue for at least a few more
generations. Three generations of technology at three years per generation is
about a decade. So I can see us staying on roughly the same curve that long."
At the same time, Moore recognizes that history has proven him and mostly
everyone else wrong about past predictions. His closing remarks at a
Microlithography Symposium in February 1995 challenged the participants to
continue to "think smaller": "Semiconductor technology made its great
strides as a result of ever increasing complexity of the products exploiting
higher and higher density, to a considerable extent the result of progress in
lithography. As you leave this meeting I want to encourage each of you to
think smaller. The barriers to staying on our exponential are really
formidable, but I continue to be amazed that we can either design or build the
products we [are] producing today. I expect you to continue to amaze me for several
years to come." (Moore 1995)
Internal and External Sources of Innovation
The transistor and its extensive lineage of semiconductor products are arguably
the result of much technology push, intrinsic to the nature of these devices.
Arguing against the conventional wisdom that product innovations are typically
developed solely by product manufacturers, von Hippel (1986) used the title,
The Sources of Innovation to explain that: "...the sources of
innovation vary greatly. In some fields, innovation users develop most
innovations. In others, suppliers of innovation-related components and
materials are the typical sources. In still other fields, conventional wisdom
holds and product manufacturers are indeed the typical innovators."
In an exhaustive case study of the semiconductor industry, Dosi (1984)
concluded that U.S. public (military and space) policies initially performed an
essential external role of selection and guidance of the directions of
technical progress, but noted that this role has since decreased. Moore (1996)
agrees with this view, noting that defense R&D and particularly the space
program of the 1960s had a "negligible impact on the semiconductor industry."
Dosi then poses the important question, "What are the factors which shape the
directions of the innovative activity when powerful external factors cease to
exert their 'pulling' and 'pushing' influence?" He goes on to argue three major
factors. First, 'normal' technical progress maintains a momentum of its own
which defines the broad orientation of the innovative activities.
He refers to this "in-built heuristic" in so many words as Moore's Law:
"Take, for example, the fundamental trend in the industry towards increasing
density of the circuits: the doubling of the number of components per chip
every year (in the late 1970s every two-three years) is almost a 'natural law'
of the industry. After 1K memories one progressed to 4K, 16K, 64K and further
increases in integration are expected. The same applies to microprocessors,
from 4 to 8, 16, 32 bit devices. This cumulative process has an important role
in the competitive process of the industry, by continuously creating
asymmetries between firms and countries in their relative technological
capabilities."
The second factor stems from the mutual relationship between innovation in
semiconductors and end-use applications. Technical change in semiconductors
defines one of the boundary conditions of possible technical advances in
'downstream' sectors. At the same time, both technological problems and
commercial opportunities in these downstream sectors focus and lead the
direction of technological advances in semiconductors. As previously
discussed, the interplay of the "Wintel" architecture is most evident here.
Moore acknowledges this, but continues to emphasize the "pushing" force of
semiconductor electronics: "There's still a lot of push [going on], we
work it on both ends. You look at what Intel does, for example, our thrust in
video conferencing. That is driven principally as an application that requires
higher performance processing to support, so our business depends on continuing
that model where everybody needs more computing power every year, so we're
trying to drive as much push as we can."
A third factor Dosi cites is the more traditional economic "market-pull"
influence from changes in relative prices and distributive shares. Dosi
emphasizes that market factors operate particularly with respect to 'normal'
technical progress, and second, that technical progress occurs within the
boundaries defined by the basic technological trajectory. This suggests that
user feedback can be self-reinforcing within the parameters of the
technological trajectory of semiconductors.
Finally, Hutcheson and Hutcheson (1996) offer a more critical view of the
regularity typically associated with Moore's Law. They point out that
underlying production limitations are becoming increasingly difficult to
overcome. "In stark contrast to what would seem to be implied by the
dependable doubling of transistor densities, the route that led to today's
chips was anything but smooth. It was more like a harrowing obstacle course
that repeatedly required chipmakers to overcome significant limitations in
their equipment and production processes. None of these problems turned out to
be the dreaded showstopper whose solution would be so costly that it would slow
or even halt the pace of advances in semiconductors and, therefore, the growth
of the industry. Successive roadblocks, however, have become increasingly
imposing, for reasons tied to the underlying technologies of semiconductor
manufacturing."
The physics underlying semiconductor manufacturing steps suggests several
potential obstacles to continued technical progress and density doubling. For
example, the gigabit chip generation may finally force technologists up against
the limits of optical lithography. Lithographers confront the formidable task
of building structures smaller than the wavelength of light (see Figure 2).
"Think of it as trying to paint a line that is smaller than the width of the
paintbrush," says a researcher at Bell Labs. (Stix 1995) He goes on to say that
there are ways of doing it, but the cost involved may be prohibitive.
Economics may constrain Moore's Law before the limits of physics. The reality
is that both are closely intertwined which brings us to "Moore's Second Law."
Moore's Second Law: Economics
In 1977, Robert Noyce, then Chairman of the Board at Intel, wrote:
"Today, with circuits containing 2^18 (262,144) elements available, we have not
yet seen any significant departure from Moore's law. Nor are there any signs
that the process is slowing down, although a deviation from exponential growth
is ultimately inevitable. The technology is still far from the fundamental
laws of physics: further miniaturization is less likely to be limited by the
laws of physics than by the laws of economics."
Almost two decades later, Noyce's foresight of economic limitations has brought
about what has been referred to as Moore's Second Law. (Ross 1995) "What has come
to worry me most recently is the increasing cost. . . This is another
exponential," writes Moore (Economist 1995). In today's dollars, the cost of a
new "fab" (fabrication plant) has risen from $14M in 1966 to $1.5B in 1995. By
1998 work will begin on the first $3B fabrication plant. Between 1984 and
1990, the cost of a fab doubled, but chip makers were able to triple the
performance of a chip. In contrast, the next generation of fabs will see cost
double again by 1998, but this is likely to produce only a 50% improvement in
performance. The economic law of diminishing marginal returns appears to be
setting in. If this exponential trend continues, by 2005 the cost of a single
fab will pass the $10B mark (in 1995 dollars), or 80% of Intel's current net
worth.
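The "Second Law" exponential can be fitted to the fab-cost figures quoted in the text ($14M in 1966, $1.5B in 1995) and extrapolated; this is a rough sketch, and the function names are illustrative.

```python
import math

# Infer the doubling period implied by two (cost, year) observations.
def implied_doubling_period(c0: float, y0: int, c1: float, y1: int) -> float:
    return (y1 - y0) / math.log2(c1 / c0)

# Extrapolate a cost forward under that doubling period.
def project(c1: float, y1: int, target_year: int, period: float) -> float:
    return c1 * 2.0 ** ((target_year - y1) / period)

# $14M in 1966 to $1.5B in 1995 implies fab costs doubled roughly every
# 4.3 years; projecting that trend to 2005 gives several billion dollars,
# the same order of magnitude as the $10B figure cited in the text.
period = implied_doubling_period(14e6, 1966, 1.5e9, 1995)
print(round(period, 1))                    # ~4.3 years per doubling
print(project(1.5e9, 1995, 2005, period))
```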
According to Dan Hutcheson, President of VLSI Research, Moore's Law will fall
victim to economics before it reaches whatever limitations exist in physics:
"The price per transistor will bottom out sometime between 2003 and
2005. From that point on, there will be no economic point to making
transistors smaller. So Moore's Law ends in seven years." (Ross 1995)
State-of-the-art fabs become obsolete in three to five years; staying ahead in
such a business requires a large chip maker to spend vast sums simply to keep
up with technology. In 1995 the industry spent $30B in new fab capacity;
Intel's share alone was $3B. To recoup its investment, a semiconductor firm
will want to run the plant as near to full capacity as possible. When demand
shifts while supply remains fixed, prices swing widely -- much as in farming,
another commodity business -- because supply adjusts only stickily to demand.
The result is an historical pattern of volatile market cycles
producing mass surpluses and shortages. These industry-unique cycles are
further aggravated by normal business cycle behavior at the macroeconomic
level. The U.S. industry crisis of the early-mid 1980s is a vivid reminder of
this economic impact.
So what are firms to do? Hutcheson and Hutcheson (1996) suggest that firms
collaborate -- team up. They cite that increasingly, chip makers are sharing
the costs of a new fab with customers, competitors, even countries. IBM and
Toshiba are building a plant together, as are Motorola and Siemens. Also
state-organized consortia appear to be on the rise in the newer participant
countries in semiconductor manufacturing such as Korea and Singapore. Another
point is that the role of the suppliers of materials and especially,
manufacturing equipment, has become even more vital to the overall success of
the industry. The next section describes the evolving role of the
semiconductor manufacturing equipment (SME) industry.
The Semiconductor Manufacturing Equipment (SME) Industry
The invention of the point contact transistor by Bell Laboratories signaled the
emergence of a new industry in the early-1950s -- semiconductor manufacturing.
During this industry's infancy (and prior to the introduction of ICs), the
companies producing semiconductors generally developed the equipment required
to manufacture these new electronic components. The semiconductor equipment
manufacturing industry, in fact, evolved out of the chip manufacturing firms.
Von Hippel (1988) provides a detailed historical account of fifteen major
innovations in silicon semiconductors (among other technologies) and strongly
argues the dominant role of the user (chip-maker) in the innovation process.
In many cases (e.g., mask alignment using split field optics), Fairchild or
another user firm initially developed the technique in-house which was later
offered commercially by an equipment manufacturer.
Gordon Moore recalls one experience of his early days at Fairchild involving a
technician who was paid for work on nights and weekends at home to make
capillary tubes used in a critical gold-bonding process: "Pretty soon
that business got so big that he quit and set up Electroglass, which was the
first one of these equipment companies that I know of. . . and he'd also been
helping me build furnaces -- we had to build our own furnaces in those days.
So he took the basic furnace design and started building furnaces, first for
us, then for the industry."
Volume production of ICs, starting in the 1960s, intensified this pattern. The
increased complexity of ICs necessitated the development of new production
equipment capable of much finer resolution, narrower line widths, and more
exacting alignment specifications than those used to produce discrete
electronic components. Printing complex circuit designs onto highly polished
wafers called for the development of sophisticated photolithographic and other
wafer processing techniques such as ion implanting and etching, in addition to
appropriate testing and assembly operations.
Since the early days of the industry, the production process has become
increasingly -- now almost exclusively -- automated. Today, semiconductor
manufacturing technologies can go no farther than the equipment necessary for
their manufacture will allow. For example, the minimum line widths and,
therefore, the maximum integration levels attainable in pursuit of Moore's Law,
are directly influenced by the manufacturing equipment's lithographic
capabilities.
The relationship between chip-makers and equipment-makers continues to be very
close due to their mutual interdependence. This is in contrast to the
traditional American view of "arm's length and cautious" behavior between buyer
and seller. Equipment makers have long recognized their role as contributors
to, and participants in, this technology-driven industry. Partnership
arrangements, both formal and informal, along with technical seminars and
publications, trade association meetings, industry conventions, and normal
vendor/user relationships reinforce cooperative efforts between equipment
producers and device manufacturers.
The next section takes a broader look at various interpretations and uses of
Moore's Law, and its implications including those important to public policy.
Other Interpretations and Uses
Moore's Law is increasingly used as a metaphor or label for anticipated rates
of rapid change -- not only in semiconductors, but in broader contexts. The
source of this change is technological, but the effects of it are economic and
social. In this very complex arrangement, Moore acknowledges that Moore's Law
"gives us a short-hand to talk about things."
Recently, a software representative was quoted in the New York Times as saying,
"The length of eternity is 18 months, the length of a product cycle." In some
sense, Moore's Law has taken on a life of its own as it finds its way into the
broader community of users and other institutions impacted by the technology.
To assess this impact, an Internet keyword search on "Moore's Law" was recently
conducted. Out of well over 100 pertinent references, more than two dozen
quality references were obtained. Some came from direct industry application,
including the front-end component of the SME industry, but the majority were
from downstream user communities including software, PC users, and network and
Internet applications. It is interesting
to note that Moore's Law now has many "spin-offs" such as "Metcalfe's Law."
Surprisingly, the fields of education and even marketing have referred to
Moore's Law. The following is a sample of the wide range of uses,
interpretations, and applications found.
Note that processing power, not circuit density, is increasingly becoming the
new basis of Moore's Law. "Management is not telling a researcher, 'You
are the best we could find, here are the tools, please go off and find
something that will let us leapfrog the competition.' Instead, the attitude is,
'Either you and your 999 colleagues double the performance of our
microprocessors in the next 18 months, to keep up with the competition, or you
are fired.'" (Odlyzko 1995)
"'Moore's Law' may one day be as important to marketing as the Four Ps:
product, price, place, and promotion. . . If it is borne out in the future the
way it has in the past, the powerful Pentium on your desktop will seem as
archaic as a 286 PC in a few years." (Koprowski 1996)
"We have become addicted to speed. Gordon Moore is our pusher. Moore's law,
which states that processing power will double every year and a half, has thus
far held true. CPU designers, always in search of a better fix, drain every
possible ounce of fat from processor cores, squeeze clock cycles, and cram
components into smaller and smaller dies." (Joch 1996)
"So holding 'Moore's Law' as the constant, the technology in place in
classrooms today will not be anything like the classroom of five years from
now!" (Wimauna Elementary School 1996)
"The End of Moore's Law: Thank God!. . . The End of Moore's Law will mean the
end to certain kinds of daydreaming about amazing possibilities for the Next
Big Thing; but it will also be the end of a lot of stress, grief, and unwanted
work." (CUUG 1996)
"Computer-related gifts must be the only Christmas presents that follow Moore's
Law." (Sydney Morning Herald 1995)
"Moore's Law is why . . . smart people start saving for the next computer the
day after they buy the one they have. . . Things are changing so fast that
everyone's knowledge gets retreaded almost yearly. Thank you, Mr. Moore. . .
[for] the internet, a creature of Moore's Law. . ." (Hettinga 1996)
Are There Any Good Analogues?
The examination of Moore's Law would not be complete without drawing analogues
to other technologies. This has been done often for various reasons. For
example, in arguing the uniqueness of the million-fold cost reductions and
performance improvements in semiconductors, Gordon Moore jokingly cites that if
similar progress were made in transportation technologies such as air travel, a
modern day commercial aircraft would cost $500, circle the earth in 20 minutes,
and only use five gallons of fuel. However, it may only be the size of a
shoebox. Stephen Kline of Stanford has suggested a bit more appropriate use of
the aircraft analogy, suggesting that the earlier era of rapid advances in
aircraft speed and performance may offer additional insight.
Carver Mead suggests that magnetic storage, specifically disk drive technology,
has followed a similar scaling path as semiconductors. He cites that PC hard
drives in particular have evolved from megabyte (million bytes) to gigabyte
(billion bytes) capacity in roughly a decade. This thousand-fold capacity
improvement approaches Moore's original extrapolation. Mead has done some
scaling calculations and continues to be amazed with the phenomenon. He
acknowledges, "I still don't understand that."
Mead and Erich Bloch have also suggested the field of biotechnology beginning
with Watson's and Crick's discovery of DNA. While there are others that could
be examined, some that have been used really miss the point. Take, for
example, the following Moore's Law analogy to railways recently offered by the
Economist (1996). "Consider the development of America's railways as an
example. In 1830, the industry boasted a mere 37 kilometers (23 miles) of
track. Ten years later it had twice as much. Then twice that, and twice again
-- every decade for 60 years. At that rate 19th-century train buffs might have
predicted that the country would have millions of kilometers of track by 1995.
In fact there are fewer than 400,000 km. Laying rails [was] too expensive to
justify connecting smaller towns; people simply did not need track everywhere.
Exponential growth gave way to something more usual -- a leveling off around a
stable value at which economic pressures were balanced. . . Americans stopped
building railways, but they did not stop becoming more mobile. As rail's
S-curve tailed off, Americans took to driving cars and built roads."
Used as an analogue to describe the limitations of Moore's Law, the railroad
analogy is limited in its application. Increasing railroad track mileage (or
roads, sea routes, bandwidth, etc.) really deals with implementation or
diffusion of technology -- transportation infrastructure in this case -- not
technological innovation. Moore's Law is about the pace of innovation (i.e.,
invention) itself.
The next section attempts to summarize and draw together the major findings of
this examination. In doing so, implications for future research are discussed.
Moore's Law Reconsidered
Beginning as a simple observation of trends in semiconductor device complexity,
Moore's Law has become many things. It is an explanatory variable for the
qualitative uniqueness of the semiconductor as a base technology. It is now
recognized as a benchmark of progress for the entire semiconductor industry.
And increasingly it is becoming a metaphor for technological progress on a
broader scale. As to explaining the real "causes" of Moore's Law, this
examination has just begun. For example, the hypothesis that semiconductor
device users' expectations feed back and self-reinforce the attainment of
Moore's Law (see Figure 1) is still far from being validated or disproved.
There does appear to be support for this notion primarily in the software
industry (e.g., "Wintel" de facto architecture). Further research, including
survey research and additional interviews, is required to address this possible
feedback effect.
What has been learned from this early investigation is the critical role that
process innovations in general, and manufacturing equipment innovations in
particular play in providing the technological capability to fabricate smaller
and smaller semiconductor devices. The most notable of process innovations was
the planar diffusion process in 1959 -- the origin of Moore's Law. Consistent
with Thomas Kuhn's (1962) paradigm-shifting view of "scientific revolution,"
many have described the semiconductor era as a "microelectronics revolution."
(Forester 1982, Braun and Macdonald 1982, Gilder 1989, Malone 1996, and others)
Indeed, the broad applications and pervasive technological, economic, and
social impacts that continue to come forth from "that astonishing microchip"
(Economist 1996) seem almost endless.
However, this phenomenon has also been aptly described by Bessant and Dickson
(1982) as evolutionary, albeit at an exponential rate. "In a definite
technical sense there has been no revolution (save, perhaps, for the invention
of the transistor in 1947) but rather a steady evolution since the first. . ."
Moore's Law is one measure of the pace of this "steady evolution." Its
regularity is daunting. The invention of the transistor, and to a lesser
degree the integrated circuit a decade later, represented significant
scientific and technological breakthroughs, and are both classic examples of
the Schumpeterian view of "creative destruction" effects of innovation. This
is evidenced by the literal creation of an entire new semiconductor industry at
the expense of the large electronics firms that dominated the preceding vacuum
tube technological era. This period of transition from old technology to new
technology is characterized by instability and by the factors that underpin
very rapid, discontinuous change.
This would be considered a shift in the economic and technological paradigm
(Dosi 1984, 1988) similar to Constant's (1980) account of the "Turbojet
Revolution" where the invention of the turbojet, along with co-evolutionary
developments including advancements in airframe design and materials, enabled
significant performance improvements in air speed and altitude. The turbojet
produced a whole new "jet engine" industry and helped redefine both military
and commercial aircraft industries and their users (e.g., airlines). Following
the early experimental years of the turbojet, these industries settled in on a
new technological trajectory (Dosi 1984, 1988) toward the frontier of the "jet
age."
Innovations within the boundary limits of this new frontier occurred at a
rapid, but more regular rate. The role of accumulated knowledge -- both tacit
and explicit (Freeman 1994) -- and standards (e.g., the role of the Prony
brake as the benchmark for performance measurement and testing) are emphasized.
Similarly, semiconductor development since the planar process has followed
Klein's (1977) description of "fast history," but is more in line with Pavitt's
(1986) application of "creative accumulation" (i.e., the new technology builds
on the old). The "new" technology in this case is the accumulated incremental
-- particularly process-oriented -- advancements indicative of the Moore's Law
semiconductor "era." As for standards, indeed Moore's Law itself is used
throughout the industry as the benchmark of progress, evidenced most strikingly
by the kilo- to mega- to giga-bit density DRAM chips. Increasingly, regular
advances in microprocessor performance measures such as MIPS (millions of
instructions per second) and MHz processing speeds follow -- and become part of
-- Moore's Law.
Preliminary Conclusions and Future Research
Based on a review of the literature (academic, popular business, and computer trade), an
Internet keyword search, and a few personal interviews with major semiconductor players
including Gordon Moore and Carver Mead, much has been learned. But no firm
conclusions can yet be drawn about what "causes" or what is "caused by" Moore's Law.
This examination has revealed that there are two major lines of pursuit from this point.
The first is based on the user or "downstream" point of view. This analysis would address
the "Wintel" and other "demand-pull" innovation arguments including the expectations
feedback hypothesis, but requires more extensive and direct research methods. The
second avenue is from the supplier or "upstream" perspective. Since much of the
literature is concerned with process limitations (e.g., is it possible to reach 'Point One') --
reflecting the reality of the industry's everyday challenge -- there is a tendency to
examine the "physics" limitations of photolithography and other essential fabrication
aspects. At this point it is not clear whether this is just another example of the endless
technological pursuit of increasing capabilities and performance similar to earlier
advances in turbojet technology. Or is Moore's Law, in Carver Mead's terms, "permission
to believe that it will keep going," reinforced by human belief systems? (UVC 1992) Or
is it some altogether different variable, yet to be determined?
The answers to these questions are probably all "yes, to some degree." Future
research will attempt to better answer these and related questions with more
extensive and direct research methods.
Product and Technology Overview
This appendix provides a brief explanation of some of the terminology
associated with semiconductor electronic devices, products, and related
technologies. A semiconductor is a solid-state electronic device which can be
switched to conduct or block electric currents (thus, the term semiconductor).
A defining characteristic of the semiconductor since the invention of the solid
state transistor almost half a century ago has been the continuous
miniaturization of these devices. Because of this, the popular term
"microelectronics" has become synonymous with "semiconductors." Most
semiconductors are made with silicon, which exhibits the requisite
"semiconducting" properties, although other materials, such as gallium arsenide, can also
be used. Semiconductors can be divided into two main groups: integrated
circuits and discrete devices. Integrated circuits (ICs) consist of many
active and passive elements that are fabricated and connected together in a
single chip. Discrete devices, by contrast, consist of a single switching
element such as a diode, rectifier, or transistor.
ICs can be further broken down into digital or linear (analog) devices.
Digital ICs store and process information expressed in binary numbers or "bits"
(i.e., "1"s or "0"s). They perform arithmetical operations or logical
functions by manipulating binary signals (on-off switches of constant
voltages). Digital devices form the basis of modern computing and
telecommunications technologies. In contrast, linear or analog devices deal
with continuous scales in which each point merges imperceptibly into the next.
Real-world phenomena, such as heat and pressure, are analog in nature. These
devices measure input analog signals, amplify input to output analog signals,
or convert analog signals to digital data or vice versa. The largest
subcategory of analog devices is special consumer circuits for specific
consumer applications such as radio or television receivers. ICs are, by far,
the major and fastest growing semiconductor product. The two most important IC
product categories are the microprocessor (and related "micro" products) and memory.
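The analog-to-digital conversion performed by the analog devices described above can be sketched as a toy model. The 8-bit resolution and 0-5 volt input range here are illustrative assumptions, not parameters from the text.

```python
# Toy model of analog-to-digital conversion: quantize a continuous voltage
# into an n-bit binary code. Resolution (8 bits) and input range (0-5 V)
# are illustrative assumptions only.

def adc(voltage, v_min=0.0, v_max=5.0, bits=8):
    """Map a voltage in [v_min, v_max] to an integer code 0 .. 2**bits - 1."""
    levels = 2 ** bits
    frac = (voltage - v_min) / (v_max - v_min)
    code = int(frac * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp out-of-range inputs

print(adc(0.0))  # lowest code, 0
print(adc(5.0))  # highest code, 255 for 8 bits
print(adc(2.5))  # mid-scale
```

The point of the sketch is the loss of the "imperceptible merging" the text describes: each continuous input collapses onto one of a finite number of discrete levels.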
A microprocessor is an IC which provides on one chip functions equivalent to
those contained in the processing unit or "brains" of a computer. The popular
Intel Pentium(tm), used in most IBM-compatible PC systems, is the most recent
in a successful lineage of Intel microprocessors dating back to 1971
(Adams, Kash, and Rycroft 1996). A microcontroller is a microprocessor and
memory integrated on the same chip -- a computer on a chip -- and is used in
dedicated applications such as traffic signals, laser printers, or antilock
brakes in automobiles. A Digital Signal Processor (DSP) is a type of
microcontroller. Microperipheral devices accompany microprocessors to handle a
computer's related functions, such as graphics. Logic chips handle the
mathematical treatment of formal logic by translating AND, OR, and NOT
functions into a switching circuit, or gate. The basic logic functions
obtained from gate circuits form the foundation of computing machines. This
category includes application-specific ICs (ASICs) including gate arrays,
standard cells, and programmable logic devices (PLDs). ASICs are semi-custom
devices, with a customer connecting standard elements in a customized fashion.
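The translation of AND, OR, and NOT functions into switching circuits described above can be mimicked in software. This is a minimal sketch: real gates are transistor networks switching voltages, not Python functions.

```python
# Minimal sketch of the basic logic functions that gate circuits implement,
# modeled on binary signals (1 = constant high voltage, 0 = low).

def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

# Gates compose into larger switching circuits, just as they combine on a
# chip. Example: exclusive-OR (XOR) built only from the three basics above.
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Truth table for the composed XOR gate
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {XOR(a, b)}")
```

Composition is the essential idea: the basic functions obtained from gate circuits are the foundation from which arbitrarily complex computing machinery is built.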
Semiconductor memory devices store information in the form of electrical
charges. In terms of sales volume, memories represent the largest single
product category. Memory devices can be subdivided into volatile and
non-volatile families. The Random-Access Memory (RAM) is a volatile memory
product, which means it will lose stored information (charges) once power is
turned off. This product sub-category includes the popular DRAM (Dynamic RAM)
and, to a lesser degree, the SRAM (Static RAM). In contrast, non-volatile
memory products retain information even after power is turned off. They are
used in applications requiring repeatedly used information. Non-volatile
memory includes the ROM (Read Only Memory) and its "erasable" derivatives, the
EPROM (Erasable Programmable ROM) and EEPROM (Electrically EPROM). In an
EPROM, stored information is erased by exposure to ultraviolet light, whereas
the EEPROM has the convenience of selective erasure of information through
electrical impulses rather than exposure to ultraviolet light. "Flash Memory"
is an IC which has the ability to bulk erase its entire contents
simultaneously. It shares the advantage of other non-volatile memory in that
it retains information when power is turned off. Its ability to repeatedly
erase and re-program information makes it competitive with DRAMs or disk drives
for storing data, although flash is presently more expensive. Flash memory is
a rapidly growing market.
Finally, semiconductor devices can also be classified according to the
technology used in the fabrication process. Digital devices can be
manufactured by two different process variations: metal-oxide semiconductor (MOS) or
digital bipolar. Traditionally, digital bipolar devices are faster, but
require more power and generate more heat, while MOS products consume less
power. Modern MOS processes are making this distinction obsolete, however.
Both MOS and digital bipolar manufacturing processes can be used to create
logic parts to perform arithmetical operations, as well as memory devices for
the storage of data.
Chips are made by creating and interconnecting transistors to form complex
electronic systems on a sliver of silicon. The fabrication process is based on
a series of steps, called mask layers, in which films of various materials --
some sensitive to light -- are placed on the silicon and exposed to light.
After these deposition and lithographic procedures, the layers are processed to
"etch" the patterns that, when precisely aligned and combined with those on
successive layers, produce the transistors and connections. Typically, 200 or
more chips are fabricated simultaneously on a thin disk, or wafer, of silicon.
In the first set of mask layers, insulating oxide films are deposited to make
the transistors. Then a photosensitive coating, called the photoresist, is
spun over these films. The photoresist is exposed with a step and repeat
device (i.e., stepper), which is similar to an enlarger used to make
photographic prints. Instead of a negative, however, the stepper uses a
reticle, or mask, to project a pattern onto the photoresist. After being
exposed, the photoresist is developed, which delineates the spaces, known as
contact windows, where the different conducting layers interconnect. An etcher
then cuts through the oxide film so that electrical contacts to transistors can
be made, and the photoresist is removed. More sets of mask layers, based on
much the same deposition, lithography and etching steps, create the conducting
films of metal or polysilicon needed to link transistors. All told, about 19
mask layers are required to make a chip.
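The repeated mask-layer cycle described above can be summarized as a loop. The step names follow the text; the six-step breakdown per layer is a simplification for illustration, not a real fabrication recipe.

```python
# Sketch of the repeated mask-layer cycle of chip fabrication. The step
# names follow the text; the per-layer breakdown is a simplification.

LAYER_STEPS = [
    "deposit film (e.g., insulating oxide or conducting metal/polysilicon)",
    "spin on photoresist",
    "expose photoresist through reticle with stepper",
    "develop photoresist (delineates contact windows)",
    "etch exposed film",
    "strip photoresist",
]

def fabricate(num_mask_layers=19):
    """Yield every processing step for a chip with the given mask count."""
    for layer in range(1, num_mask_layers + 1):
        for step in LAYER_STEPS:
            yield f"layer {layer}: {step}"

steps = list(fabricate())
print(len(steps), "processing steps for a 19-mask-layer chip")
```

Even this simplified model makes the scale of the process visible: roughly 19 mask layers, each repeating the same deposition-lithography-etch cycle, yields over a hundred precisely aligned processing steps per chip.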
In present-day chips, the physical separation between semiconducting regions is
less than 1 micron, and the entire transistor is invisible to the naked eye.
Furthermore, the number of impurities must be controlled to within a few parts
per billion in some regions of the device. All of these processing steps must
be carried out in an environment completely free of particles -- clean-room
facilities. Thus, the trend in chip fabrication has consistently been towards
greater automation of the processes, which reduces the chances of contamination
or human error (OTA, 1986).
Scale of Approximate Sizes
References
Adams, Richard C., Don E. Kash, and Robert W. Rycroft. 1996.
Innovation of the Intel Microprocessor (GMU TIPP Working Paper no.
Agres, Ted. 1996. "IC Density Growth Is Key Issue for Industry," R&D Magazine,
June, pp. 29-32.
Allen, Ian. 1996. "Chip Power," Online.
Angel, David. 1994. Restructuring for Innovation: The Remaking of the U.S.
Semiconductor Industry (New York: The Guilford Press).
Arthur, W. Brian. 1994. Increasing Returns and Path Dependency in the
Economy (Ann Arbor, MI: University of Michigan Press).
Bajarin, Tim. 1996. "Technology For the 21st Century: Roundtable in
Multimedia," Online. http://fantasia.brel.com./acw/news/na049611.html
Barlow, John Perry. 1996. "It's a Poor Man Who Blames His Tools: What
Does Technology Threaten? What Is Human?" Wired Online.
Bell, C. Gordon. 1991. High-Tech Ventures (Reading, MA: Addison-Wesley).
Bessant, John, and Keith Dickson. 1982. Issues in the Adoption of
Microelectronics (London: Frances Pinter).
Bessant, J.R., J.A.E. Bowen, K.E. Dickson, and J. Marsh. 1981. The
Impact of Microelectronics: A Review of the Literature (New York: Pica
Brand, Stewart. 1995. "The Physicist" (interview with Nathan Myhrvold),
Online. http://www.hotwired.net/wired/3.09/features/myhrvold.html
Braun, Ernest, and Stuart Macdonald. 1982. Revolution in Miniature: The
History and Impact of Semiconductor Electronics (Cambridge: Cambridge University Press).
Constant, II, Edward W. 1980. The Origins of the Turbojet Revolution
(Baltimore: Johns Hopkins University Press).
Conway, Lynn. 1981. The MPC Adventures: Experiences with the
Generation of VLSI Design and Implementation Design and
Implementation Methodologies (Palo Alto: Xerox PARC).
Calgary Unix User's Group. 1996. "The End of Moore's Law: Thank
God!" Online. http://www.cuug.ab.ca:8001/CUUGer/9605/editorial/html
Davey, Tom. 1996. "Industry Buzzes About 1,400 Mhz, 64-bit Chip," PC
Week On Line, July 24.
Davis, Jim. 1996. "Researchers Compete to Speed Up Chips," Online.
Dorfman, Nancy S. 1987. Innovation and Market Structure: Lessons from
the Computer and Semiconductor Industries (Cambridge: Ballinger).
Dosi, Giovanni. 1988. "Sources, Procedures, and Microeconomic Effects
of Innovation," Journal of Economic Literature, XXVI:3, September, pp.
__________. 1984. Technical Change And Industrial Transformation: The
Theory and an Application to the Semiconductor Industry (London: The Macmillan Press).
The Economist. 1996. "That Astonishing Microchip," and "When the
Chips Are Down," March 23, pp. 13-14 and pp. 19-21.
__________. 1995. "The End of the Line," July 15, pp. 61-62.
Edwards, Owen. 1995. "ASAP Legends: Gordon Moore," Forbes ASAP,
Electronics Industries Association. 1995. The U.S. Consumer Electronics
Industry: In Review '95 Edition.
Fallows, James. 1993. "Looking at the Sun," The Atlantic Monthly,
November, pp. 69-100.
Foster, Richard N. 1986. Innovation: The Attacker's Advantage (New
York: Summit Books).
Freeman, Christopher. 1994. "The Economics of Technical Change,"
Cambridge Journal of Economics, 18, pp. 463-514.
Gemini C4 Lab. 1996. "Context: Convergence, Multimedia and the
Information Highway." Online.
Gilder, George. 1996. "Feasting on the Giant Peach," Forbes ASAP,
August 26, 19 pp.
__________. 1995. "The Coming Software Shift: Telecosm," Forbes
ASAP, August 28, pp. 147-162.
__________. 1994. "The Bandwidth Tidal Wave," Forbes ASAP,
December 5, 1994, 16 pp.
__________. 1993. "Metcalfe's Law and Legacy," Forbes ASAP,
September 13, pp. 158-166.
__________. 1989. Microcosm: The Quantum Revolution in Economics
and Technology (New York: Simon and Schuster).
__________. 1988. "You Ain't Seen Nothing Yet." Forbes, April 4, pp.
Hanson, Dirk. 1982. The New Alchemists: Silicon Valley and the
Microelectronics Revolution (Boston: Little, Brown and Co.).
Hazewindus, Nico, with John Tooker. 1982. The U.S. Microelectronics
Industry: Technical Change, Industry Growth and Social Impact (New
York: Pergamon Press).
Hettinga, Robert. 1996. "The Geodesic Network, OpenDoc, and
Hoeneisen, B., and C.A. Mead. 1972. "Fundamental Limitations in
Microelectronics I: MOS Technology," Solid-State Electronics Vol. 15,
Howell, Thomas R., William A. Noellert, Janet H. MacLaughlin, and Alan
Wm. Wolff. 1988. Microelectronics Race: The Impact of Government
Policy on International Competition (Boulder: Westview Press).
Huber, Peter W., and U.S. Department of Justice Antitrust Division. 1987.
The Geodesic Network: 1987 Report on Competition in the Telephone
Industry (Washington D.C.: U.S. Government Printing Office, January).
Hutcheson, G. Dan and Jerry D. Hutcheson. 1996. "Technology and
Economics in the Semiconductor Industry," Scientific American, January,
Institute of Electrical and Electronics Engineers. 1995. 1995
University/Government/Industry Microelectronics Symposium
Joch, Alan. 1995. "Heart Throbs: CPU Designers Beg, Borrow, and Steal
for Every Ounce of Performance They Can Get," Byte, Online.
Johnston, Stuart J., and David Needle. 1996. "The PC Still Matters,"
Information Week, June 3, pp. 48-50.
Karlgaard, Karl. 1995. "Waste Is Good," Forbes ASAP, October 9, p. 9.
Kash, Don E., and Robert Rycroft. 1996. Technology Policy in a Complex
World (unpublished manuscript).
Kerridge, Charles. 1983. Microchip Technology: The Past and the Future
(Chichester: John Wiley & Sons).
Kilpatrick, Jr., Henry E. 1995. Competition Policy, Trade Policy and
Learning By Doing in the Japanese Semiconductor Industry (unpublished manuscript).
Klein, Burton H. 1984. Prices, Wages and Business Cycles (New York:
__________. 1977. Dynamic Economics (Cambridge: Harvard University Press).
Kline, Stephen J., and Nathan Rosenberg. 1986. "An Overview of
Innovation," in Ralph Landau and Nathan Rosenberg, eds., The Positive
Sum Strategy (Washington, DC: National Academy Press), pp. 275-305.
Koprowski, Gene. 1996. "The Next Big Wave: Marketing Tools," Online.
Kuhn, Thomas S. 1962. The Structure of Scientific Revolutions (Chicago:
University of Chicago Press).
Lenzner, Robert. 1995. "The Reluctant Entrepreneur," and "Whither
Moore's Law?" Forbes, September 11, pp. 162-168.
Lazowska, Edward. 1996. "20th Annual Faculty Lecture." University of
Washington, Department of Computer Science and Engineering, Online.
Levitt, Theodore. 1960. "Marketing Myopia," Harvard Business Review,
July-August, pp. 45-56.
Malone, Michael S. 1996. "Chips Triumphant," Forbes ASAP, February
26, pp. 53-82.
McCormick, Jim. 1995. "A Brief History of Silicon Valley." Online.
Mead, Carver A. 1994. "Scaling of MOS Technology to Submicrometer
Feature Sizes," Journal of VLSI Signal Processing, 8, pp. 9-25.
__________. 1985. Mead's Four Laws of the Economics of Innovation
Moore, Gordon E. 1996. "Some Personal Perspectives on Research in the
Semiconductor Industry," in Rosenbloom, Richard S., and William J.
Spencer (Eds.). Engines of Innovation (Boston: Harvard Business School
Press), pp. 165-174.
__________. 1995. "Lithography and the Future of Moore's Law." Paper
presented to the Microlithography Symposium, February 20.
__________. 1965. "Cramming More Components Onto Integrated
Circuits," Electronics (Volume 38, Number 8), April 19, pp. 114-117.
Murphy, Michael. 1996. "Betting on Moore's Law," Wired, June, p. 88.
Norton, R.D. 1996. The Westward Rebirth of American Computing
Noyce, Robert N. 1977. Microelectronics (San Francisco: W.H. Freeman
and Scientific American).
Odlyzko, Andrew. 1995. "The Decline of Unfettered Research." Online.
Organization for Economic Co-operation and Development (OECD).
1985. The Semiconductor Industry: Trade Related Issues (Paris: OECD).
Parkinson, C. Northcote. 1957. Parkinson's Law and Other Studies in
Administration (Boston: Houghton Mifflin).
Pavitt, Keith. 1986. "'Chips' and 'Trajectories': How Does the
Semiconductor Influence the Sources and Directions of Technical
Change?" in Roy M. MacLeod, ed., Technology and the Human Prospect
(London: Frances Pinter).
Pennings, Johannes M. and Arend Buitendam (Eds.). 1987. New
Technology As Organizational Innovation: The Development and
Diffusion of Microelectronics (Cambridge: Ballinger).
Popper, Karl R. 1986. Objective Knowledge: An Evolutionary Approach
(Oxford: Clarendon Press).
Rayner, Bruce E. 1996. "Can Moore's Law Continue Indefinitely?"
(from a lecture by and interview with Gordon E. Moore), 4 pp.
Robertson Stephens & Co. 1996. "Highlights of the Robertson Stephens
Semiconductor Conference." August 1, 16 pp.
Ross, Philip E. 1995. "Moore's Second Law," Forbes, March 25, pp. 116-
Rothschild, Michael. 1990. Bionomics: The Inevitability of Capitalism
(New York: Henry Holt).
Semiconductor Industry Association. 1996. Academics Information
__________. 1994. The National Technology Roadmap for Semiconductors.
__________ (Thomas R. Howell, Brent L. Bartlett, and Warren Davis).
1992. Creating Advantage: Semiconductors and Government Industrial
Policy in the 1990s (San Jose: Dewey Ballantine).
Siewiorek, Daniel P., C. Gordon Bell, and Allen Newell. 1982. Computer
Structures: Principles and Examples (New York: McGraw-Hill).
SpinDoctor (a). 1996. "The Coming Software Economy." Online.
__________ (b). 1996. "Daily Dose -- The Next Big Thing 3: Fry Pilots."
Online. http://www.spindoczine.com/dose/3.3.html
Steinmueller, William E. 1987. Microeconomics and Microelectronics:
Economic Studies of Integrated Circuit Technology (doctoral dissertation).
Stix, Gary. 1995. "Toward 'Point One'," Scientific American, February, pp.
Sydney Morning Herald. 1995. "Light Years Ahead," Online.
Time-Life Books. 1990. The Chipmakers (Alexandria, VA: Time-Life Books).
Toda, Bobby. 1996. "Papken S. Der Torossian Speaks at the May
Meeting," Online. http://www.3wc.com.aama.papkenspeaks.html (Asian
American Manufacturers Association).
Turton, Richard. 1995. The Quantum Dot (New York: Oxford University Press).
Tyson, Laura D'Andrea. 1992. Who's Bashing Whom? Trade Conflict in
High-Technology Industries (Washington, DC: Institute for International Economics).
University Video Communications. 1996. Gordon Moore: Nanometers
and Gigabucks [videotape] (Stanford, CA: UVC).
__________. 1993. Tracking the Teraflop [video recording] (Stanford, CA: UVC).
__________. 1992. How Things Really Work: Two Inventors on
Innovation, Gordon Bell and Carver Mead [video recording] (Stanford, CA: UVC).
U.S. Congress, Office of Technology Assessment. 1986. Microelectronics
Research and Development -- A Background Paper, OTA-BP-CIT-40
(Washington, DC: U.S. Government Printing Office, March).
U.S. Department of Commerce, International Trade Administration. 1985.
A Competitive Assessment of the U.S. Semiconductor Manufacturing
Equipment Industry (Washington DC: U.S. Government Printing Office).
Vvedensky, Dimitri. 1996. "Size Is Everything." Online.
von Hippel, Eric. 1988. The Sources of Innovation (New York: Oxford University Press).
Waldrop, M. Mitchell. 1992. Complexity: The Emerging Science at the
Edge of Order and Chaos (New York: Simon and Schuster).