
The Computational Limits of Deep Learning

Co-authors: Kristjan Greenewald, Keeheon Lee, and Gabriel Manso

Paper

Press coverage: Wired, VentureBeat, Discover, The Next Web, Interesting Engineering, Tech Gig

Deep learning’s recent history has been one of achievement: from triumphing over humans in the game of Go to world-leading performance in image recognition, voice recognition, translation, and other tasks. But this progress has come with a voracious appetite for computing power. This article reports on the computational demands of deep learning applications in five prominent areas and shows that progress in all five is strongly reliant on increases in computing power. Extrapolating this reliance forward reveals that progress along current lines is rapidly becoming economically, technically, and environmentally unsustainable. Continued progress in these applications will therefore require dramatically more computationally efficient methods, which will have to come either from changes to deep learning or from moving to other machine learning methods.
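
To make the extrapolation concrete, here is a minimal sketch (not the paper's fitted model) that assumes, purely for illustration, that error falls as a power law in compute, error ~ compute^(-alpha); the exponents below are hypothetical:

```python
# Illustrative sketch only: if error ~ compute**(-alpha), then each halving
# of the error rate multiplies the required compute by 2**(1/alpha).
# The alpha values below are hypothetical, not fitted values from the paper.

def compute_multiplier_per_halving(alpha: float) -> float:
    """Factor by which compute must grow to halve the error rate."""
    return 2.0 ** (1.0 / alpha)

for alpha in (0.5, 0.2, 0.1):
    print(f"alpha={alpha}: halving error requires "
          f"{compute_multiplier_per_halving(alpha):,.0f}x more compute")
```

Even modest exponents imply that each successive gain costs orders of magnitude more compute, which is the sense in which progress along current lines becomes unsustainable.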


How Fast Do Algorithms Improve?

Co-author: Yash Sherry

Paper, currently in review

Algorithms are one of the fundamental building blocks of computing. But current evidence about how fast algorithms improve is anecdotal, extrapolating from a small number of case studies.

In this work, we gather data from 57 textbooks and more than 1,137 research papers to present the first systematic view of algorithm progress ever assembled. There is enormous variation. Around half of all algorithm families experience little or no improvement. At the other extreme, 13% experience transformative improvements, radically changing how and where they can be used. Overall, we find that, for moderate-sized problems, 30% to 45% of algorithm families had improvements comparable to or greater than those that users experienced from Moore’s Law and other hardware advances.
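
As a rough illustration of what "comparable to Moore's Law" can mean (a sketch under assumed parameters, not the paper's methodology), one can compare the speedup from a hypothetical asymptotic improvement against cumulative hardware gains at a moderate problem size:

```python
import math

# Hypothetical example: an algorithm family improves from O(n^2) to
# O(n log n), versus hardware that doubles in speed every ~2 years.

def algorithmic_speedup(n: int) -> float:
    """Operation-count ratio of an O(n^2) algorithm to an O(n log n) one."""
    return (n * n) / (n * math.log2(n))

def hardware_speedup(years: float, doubling_period_years: float = 2.0) -> float:
    """Cumulative speedup from Moore's-Law-style doubling."""
    return 2.0 ** (years / doubling_period_years)

n = 10**6  # a moderate-sized input
print(f"Algorithmic speedup at n = {n:,}: {algorithmic_speedup(n):,.0f}x")
print(f"Hardware speedup over 40 years: {hardware_speedup(40):,.0f}x")
```

Under these assumed numbers, the single algorithmic jump (roughly 50,000x at n = 10^6) matches about three decades of hardware doubling, and unlike hardware gains it grows with problem size.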


Building the Algorithmic Commons: Who discovered the algorithms that underpin modern computing?

Co-authors: Shuning Ge and Yash Sherry

Paper, currently in review

Recent work has revealed rapid improvement in the algorithms that underpin modern computing. For many computations, these algorithmic innovations have been more important than those in computer hardware (including Moore’s Law, which is known to have substantially improved firm productivity). In this article, we analyze who built the “Algorithmic Commons.” We find that the United States has been the largest contributor of these public goods, with universities and large private labs (e.g., IBM) playing the biggest role. More broadly, we find a historical pattern of contributions consistent with world geopolitics: the United States took algorithmic leadership in the post-war period, but that lead has faded in recent decades as Europe recovered and Asia grew.