
Sutskever's List: Foundational AI Research

When legendary game developer John Carmack asked how to catch up on modern artificial intelligence, Ilya Sutskever, OpenAI's co-founder and former Chief Scientist, handed him a curated list of research papers. Sutskever famously remarked that these papers contained "90% of what matters today" in AI.

The list is not merely a bibliography; it is a worldview. It traces the arc of deep learning from the empirical breakthroughs of AlexNet to the scaling laws that define modern LLMs, while grounding these advances in the theoretical bedrock of compression and complexity theory.

"I plowed through all those things, and it all started sorting out [AI] in my head... not extreme black-magic mathematical wizardries, but simple techniques that make perfect sense." β€” John Carmack

Ilya's Worldview

01. Don't Bet Against Deep Learning

A rejection of hybrid approaches. The list focuses almost exclusively on supervised and self-supervised deep learning, omitting symbolic AI and classical planning.

02. Engineering Pragmatism

Genuine progress arises from refining ideas at scale rather than from pure theoretical novelty. Scaling simple architectures often beats complex, handcrafted solutions.

03. Do More with Less at Scale

Complexity is a liability. The most enduring architectures (ResNet, Transformers) are often the simplest. Leverage brute-force scale to push the frontier.
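
To make "simple" concrete, here is a minimal sketch of the residual connection that ResNet is built from: the block computes y = x + F(x), and that one-line skip carries most of the idea. This is plain NumPy with toy shapes and random weights of my own choosing, not code from any paper on the list.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """y = x + F(x): a two-layer transform plus a skip connection.

    The skip lets the identity (and gradients) pass through untouched,
    which is what makes very deep stacks of these blocks trainable.
    """
    h = relu(x @ w1)       # first linear layer + nonlinearity
    return x + h @ w2      # add the input back: the residual connection

# Toy usage with hypothetical sizes: a 4-dim feature vector.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w1 = rng.standard_normal((4, 8)) * 0.1
w2 = rng.standard_normal((8, 4)) * 0.1
print(residual_block(x, w1, w2))
```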

04. Emergence via Compression

Intelligence is viewed as a compression process. Better compression of data leads to better generalization and emergent reasoning.
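
One way to see the prediction-compression link: a model's average negative log-likelihood, measured in bits, is exactly the code length an ideal arithmetic coder would need to encode the data. The toy sketch below (plain Python, an invented example string, two deliberately crude character-level models) shows the better predictor needing fewer bits per character.

```python
import math
from collections import Counter

# Invented toy corpus; any string works.
text = "the cat sat on the mat and the cat sat again"
counts = Counter(text)
total = len(text)

def bits_per_char(prob):
    """Average bits/char an ideal arithmetic coder needs to encode `text`
    when each character is predicted with probability prob(c)."""
    return -sum(math.log2(prob(c)) for c in text) / total

def uniform(c):
    # Knows nothing beyond the alphabet: every seen character is equally likely.
    return 1.0 / len(counts)

def unigram(c):
    # "Learns" the character frequencies of the text itself.
    return counts[c] / total

print(f"uniform model: {bits_per_char(uniform):.3f} bits/char")
print(f"unigram model: {bits_per_char(unigram):.3f} bits/char")
# The better predictor needs fewer bits: prediction and compression are the
# same problem, which is the sense in which better compression of the data
# tracks better modelling of it.
```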

I. Foundational Architectures

Breakthroughs that proved neural networks could scale and outperform handcrafted features.

II. The Transformer Era & Scaling

The shift to pure attention and the realization that scale is the primary driver of performance.
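
The operation at the centre of that shift is scaled dot-product attention, softmax(QKᵀ/√d_k)V, from "Attention Is All You Need". Below is a minimal NumPy sketch of a single attention head; the dimensions and inputs are arbitrary toy values, not tied to any particular paper's code.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V,
    the single operation the Transformer stacks in place of recurrence."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                            # weighted average of values

# Toy usage: 3 query positions attending over 5 key/value positions, d_k = 8.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))
print(attention(Q, K, V).shape)  # (3, 8)
```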

III. Memory & Reasoning

Augmenting networks with external memory and explicit mechanisms for multi-step reasoning.

IV. Complexity & Compression

The philosophical bedrock: intelligence as the discovery of patterns through compression.

V. Additional Key Readings