The Economic Impact of Moore’s Law

“Computing performance doubles every couple of years” is the popular rephrasing of Moore’s Law, which describes the 500,000-fold increase in the number of transistors on modern computer chips. But what impact has this 50-year expansion of the technological frontier of computing had on the productivity of firms?
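The opening framing can be sanity-checked with simple compound-growth arithmetic. A minimal sketch (the function name is illustrative, not from the paper): a cumulative factor-of-F increase over T years implies a doubling period of T / log2(F).

```python
import math

def implied_doubling_period(total_factor: float, years: float) -> float:
    """Years per doubling implied by a cumulative growth factor.

    Solves 2**(years / d) == total_factor for d.
    """
    return years / math.log2(total_factor)

# A ~500,000-fold increase over ~50 years implies a doubling roughly
# every 2.6 years, consistent with "every couple of years".
period = implied_doubling_period(500_000, 50)
```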

This paper focuses on the surprise change in chip design in the mid-2000s, when Moore’s Law faltered: no longer could it provide ever-faster processors; instead, it provided multicore processors with stagnant speeds.

Using the asymmetric impact of the changeover to multicore, this paper shows that firms whose software made them ill-suited to this change benefited much less from later improvements in Moore’s Law. Each standard deviation of mismatch between firm software and multicore chips cost firms 0.5–0.7 percentage points of yearly total factor productivity growth. These losses are permanent and, without adaptation, imply a lower long-term growth rate for these firms. These findings may help explain larger observed declines in the productivity growth of users of information technology.



There’s Plenty of Room at the Top: What will drive growth in computer performance after Moore’s Law ends?

The end of Moore’s Law will not mean an end to faster computer applications. But it will mean that performance gains will need to come from the “top” of the computing stack (software, algorithms and hardware) rather than from the traditional sources at the “bottom” of the stack (semiconductor physics and silicon-fabrication technology). In the post-Moore era, performance engineering — restructuring computations to make them run faster — will provide substantial performance gains, but in ways that are qualitatively different from gains due to Moore’s Law. Improvements will no longer be broad-based and come on a predictable schedule; instead they will be opportunistic, uneven, and sporadic, and they will suffer from the law of diminishing returns. Performance engineering will also have challenges: software bloat will need to be excised, algorithms will need to be rethought, and hardware will need to embrace specialized circuitry. Big system components are a promising way to tackle these challenges, for both technical and economic reasons. This article summarizes the conclusions of a three-year study by a team of MIT researchers on the prospects for enhancing computer performance after Moore’s Law ends.
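A toy illustration of the performance engineering the abstract describes — restructuring a computation so it does the same work with far fewer operations (this example is mine, not from the article; the function names are hypothetical). Here an O(n·m) double loop collapses algebraically to an O(n + m) form, since the sum over all pairs a[i]*b[j] factors into sum(a) * sum(b).

```python
def pairwise_product_sum_naive(a, b):
    """Sum of x * y over all pairs (x, y): O(len(a) * len(b)) operations."""
    total = 0
    for x in a:
        for y in b:
            total += x * y
    return total

def pairwise_product_sum_fast(a, b):
    """Same result, restructured to O(len(a) + len(b)) operations,
    using sum_i sum_j a[i]*b[j] == sum(a) * sum(b)."""
    return sum(a) * sum(b)
```

Gains like this are exactly "opportunistic and uneven": they depend on spotting structure in a particular computation rather than arriving on a predictable schedule.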



The Decline of Computers as a General Purpose Technology: Why Deep Learning and the End of Moore’s Law are Fragmenting Computing

SSRN Top 5 Paper of the Week

Deep Learning, Hardware Specialization, and the Fragmentation of Computing

It is a triumph of technology and of economics that our computer chips are so universal – the staggering variety of calculations they can compute makes countless applications possible. But this was not always the case. Computers used to be specialized, doing only narrow sets of calculations. Their rise as a ‘general purpose technology (GPT)’ only happened because of ground-breaking technical advancements by computer scientists like von Neumann and Turing, and the virtuous economics common to general purpose technologies, where product improvement and market growth fuel each other in a mutually reinforcing cycle.
This paper argues that technological and economic forces are now pushing computing in the opposite direction, making computer processors less general-purpose and more specialized. This process has already begun, driven by the slowdown in Moore’s Law and the algorithmic success of Deep Learning. This threatens to fragment computing into ‘fast lane’ applications that get more powerful customized chips and ‘slow lane’ applications that get stuck using general-purpose chips whose progress fades.
The rise of general purpose computer chips has been remarkable. So, too, could be its fall. This paper outlines the forces already starting to fragment this general purpose technology.



How far can Deep Learning take us? A look at Performance and Economic Limits

Co-authors: Kristjan Greenewald and Keeheon Lee

Since 2012 Deep Learning has revolutionized Machine Learning / Artificial Intelligence research, winning countless competitions and reshaping commercial applications. But what were the drivers of this success and how long can they continue? The paper looks at the computer science and economic limits facing Deep Learning.


Algorithm designers are the unsung heroes of computing performance

Co-author: Yash Sherry

It is often said that algorithms have improved as fast as computer hardware. But tests of this hypothesis have only ever involved one or two cherry-picked algorithms. This project aims to present the first broad view of the topic by assembling a comprehensive database of algorithms and how they have improved over time, using a combination of targeted research and wiki-based collaboration.
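To give a rough sense of what "algorithms improving as fast as hardware" means, here is a back-of-the-envelope sketch (my illustration, not the project's methodology): replacing an O(n²) algorithm with an O(n log n) one yields a speedup that itself grows with input size, ignoring constant factors.

```python
import math

def speedup_quadratic_to_nlogn(n: int) -> float:
    """Operation-count ratio n^2 / (n * log2(n)): the asymptotic speedup
    from replacing a quadratic algorithm with an n log n one
    (constant factors ignored)."""
    return (n * n) / (n * math.log2(n))

# For a million-element input, the algorithmic gain alone is roughly
# n / log2(n), on the order of 50,000x.
gain = speedup_quadratic_to_nlogn(10**6)
```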