The Productivity Impact of Moore’s Law

“Computing performance doubles every couple of years” is the popular re-phrasing of Moore’s Law, which describes the 500,000-fold increase in the number of transistors on modern computer chips. But what impact has this 50-year expansion of the technological frontier of computing had on the productivity of firms?

This paper focuses on the surprise change in chip design in the mid-2000s, when Moore’s Law faltered: no longer could it deliver ever-faster processors; instead, it delivered multicore chips with stagnant clock speeds.

Using the asymmetric impacts of the changeover to multicore, this paper shows that firms whose software usage made them ill-suited to this change benefited much less from later improvements from Moore’s Law. Each standard deviation of mismatch between a firm’s software and multicore chips cost it 0.5–0.7 percentage points in yearly total factor productivity growth. These losses are permanent and, without adaptation, imply a lower long-term growth rate for these firms. These findings may help explain larger observed declines in the productivity growth of users of information technology.
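To make the magnitude concrete, here is a back-of-the-envelope compounding calculation (an illustrative sketch added here, assuming a hypothetical 2% baseline TFP growth rate, which is not a figure from the paper):

```python
# Illustrative sketch: compound a 0.5-0.7 percentage-point drag on annual
# TFP growth over a decade, against an assumed (hypothetical) 2% baseline.
BASELINE_GROWTH = 0.02   # assumed baseline annual TFP growth (hypothetical)
YEARS = 10

for drag in (0.005, 0.007):          # the paper's 0.5pp and 0.7pp estimates
    well_suited = (1 + BASELINE_GROWTH) ** YEARS
    ill_suited = (1 + BASELINE_GROWTH - drag) ** YEARS
    gap_pct = (well_suited / ill_suited - 1) * 100
    print(f"{drag * 100:.1f}pp drag -> {gap_pct:.1f}% cumulative gap after {YEARS} years")
```

Because the drag compounds, even a half-point difference in annual growth widens into a multi-point productivity gap within a decade.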

What comes after Moore’s Law?

Co-authors: Charles E. Leiserson, Joel S. Emer, Bradley C. Kuszmaul, Butler W. Lampson, Daniel Sanchez and Tao B. Schardl

The end of Moore’s Law will not mean an end to faster computer applications. But it will mean that performance gains will need to come from the top of the computing stack (software, algorithms, and hardware) rather than from the traditional sources at the bottom of the stack (semiconductor physics and silicon-fabrication technology). In the post-Moore era, performance engineering — restructuring computations to make them run faster — will provide substantial performance gains, but in ways that are qualitatively different from gains due to Moore’s Law. Instead of broad-based improvements on a predictable schedule, gains from performance engineering will be opportunistic, uneven, and sporadic. Tackling performance from the top will produce challenges: software bloat will need to be excised, algorithms will need to be rethought, and hardware will become less general-purpose. Big system components will provide a promising venue for these changes for both technical and economic reasons.
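As a concrete illustration of performance engineering (a minimal sketch added here, not an example taken from the paper), the same matrix multiplication can run orders of magnitude faster when restructured to exploit vectorized, cache-aware library code:

```python
# A minimal sketch of performance engineering: the identical computation,
# restructured. A naive triple-loop matrix multiply in pure Python is
# compared against NumPy's BLAS-backed @ operator, which is vectorized,
# cache-aware, and parallelized. n is kept small so the naive loop finishes.
import time
import numpy as np

n = 200
A = np.random.rand(n, n)
B = np.random.rand(n, n)

def naive_matmul(A, B):
    """Textbook O(n^3) triple loop over plain Python lists."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

t0 = time.perf_counter()
naive_matmul(A.tolist(), B.tolist())
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
A @ B
t_fast = time.perf_counter() - t0

print(f"naive: {t_naive:.3f}s  numpy: {t_fast:.5f}s  speedup: {t_naive / t_fast:,.0f}x")
```

The gains here come from the top of the stack (better-engineered software) rather than from faster transistors, which is exactly the kind of opportunistic improvement the paper describes.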

The Rise and Fall of a General Purpose Technology: The boom in deep learning is actually a symptom of computing becoming less general

Co-author: Svenja Spanuth

Computer chips are a classic example of a general-purpose technology: innovation and investment, fueled by those who value chips the most, provide widespread benefits across society. But as Moore’s Law winds down, IT users are increasingly producing specialized chips that provide great benefits to them, but far fewer benefits for others. Indeed, the rise of deep learning would not have been possible without such specialized chips. This paper looks at how this trend has developed and what economic consequences it will have.

How Better Information (and Fast Computers) Affect Stock Market Performance: Evidence from a Randomized Control Trial

Co-authors: Daniel Rock, Guillaume St. Jacques, Nick Fazzari, and Andrew Lo

In this randomized control trial, run with a stock exchange, we examine how stock market outcomes change for stocks in the treatment group, where market participants receive additional performance signals, compared to a control group that does not.
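A minimal sketch of the estimation such a design supports (the data and variable names below are hypothetical stand-ins, not from the study): because assignment is randomized, the average treatment effect can be estimated as a simple difference in group means.

```python
# Hypothetical sketch: with random assignment, a difference in means is an
# unbiased estimator of the average treatment effect (ATE). The outcomes
# below are simulated stand-ins, not data from the experiment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.00, scale=1.0, size=500)  # stocks without extra signals
treated = rng.normal(loc=0.15, scale=1.0, size=500)  # stocks with extra signals

ate = treated.mean() - control.mean()                # difference in means
t_stat, p_value = stats.ttest_ind(treated, control)  # two-sample t-test
print(f"ATE estimate: {ate:.3f} (t = {t_stat:.2f}, p = {p_value:.4f})")
```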

Understanding How Software Affects Productivity

Co-authors: Karim Lakhani, Dan Sichel

Despite repeated efforts, current estimates of how IT affects national productivity ignore software improvements because they are so difficult to measure, even though these effects are widely understood to be substantial. In this project we bring a new panel dataset to this question, allowing us to understand how programmer speed and effectiveness have changed over time.

Algorithm designers are the unsung heroes of computing performance

It is often said that algorithms have improved as fast as computer hardware. But tests of this hypothesis have only ever involved one or two cherry-picked algorithms. This project aims to present the first broad view of this topic by assembling a comprehensive database of algorithms and how they have developed over time, using a combination of targeted research and wiki-based collaboration.
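As a toy illustration of why algorithmic progress can rival hardware progress (an example added here, not a result from the project): replacing an O(n²) algorithm with an O(n log n) one yields a speedup that grows with problem size, eventually dwarfing any fixed hardware gain.

```python
# Toy illustration: the speedup from swapping an O(n^2) algorithm for an
# O(n log n) one grows with input size, while hardware offers a fixed factor
# (the 1000x below is a hypothetical stand-in, not a measured number).
import math

HARDWARE_SPEEDUP = 1_000  # hypothetical fixed gain from faster chips

for n in (10**3, 10**6, 10**9):
    algo_speedup = n**2 / (n * math.log2(n))  # ratio of operation counts
    winner = "algorithm" if algo_speedup > HARDWARE_SPEEDUP else "hardware"
    print(f"n = {n:>13,}: algorithmic speedup ~ {algo_speedup:,.0f}x ({winner} wins)")
```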