The Saga of the Closure Compiler, and Why TypeScript Won

Here's something that makes me feel old: in just six months, Gmail will celebrate its 20th anniversary. If you weren't actively developing web sites at the time, it's hard to capture just how revolutionary it was. This was a time when JavaScript was held in almost universally low regard. The idea that you could build a sophisticated web app using it was mind-boggling. But it clearly worked, and it heralded the dawn of the single-page web app (SPA).

Behind this application was an exciting new tool that Google had built for creating large JavaScript applications: the Closure Tools (that's Closure with an 's', not Clojure with a 'j', which is a different thing). This included the Closure Compiler (CC), a JavaScript source-to-source compiler that did type checking. Sound familiar?

Unless you've worked on frontend at Google at some point in the past 20 years, it's unlikely that you've ever encountered the Closure Compiler. It occupied a similar niche to TypeScript, but TypeScript has absolutely, definitively won.

Still, it's interesting to revisit CC for a few reasons:

  1. By looking at a system that made different high-level design decisions than TypeScript, we can gain a deeper appreciation of TypeScript's design.
  2. It shows us features that TypeScript lacks, ones it might never have occurred to us to want.
  3. It's an interesting case study in the history of JavaScript.

In other words, the saga of the Closure Compiler gives us some perspective. TypeScript has become so ubiquitous that it's sometimes hard to imagine any other way of adding a type checker to JavaScript. The Closure Compiler shows us that the design space was larger than it looks in retrospect.

I wrote Closure-style JavaScript at Google most heavily from 2012–14. This post reflects Closure as it existed at that point. I'm much less familiar with how it's evolved since then.

What is the Closure Compiler?

TypeScript's motto is that it's "a typed superset of JavaScript". Closure code, on the other hand, is JavaScript. It doesn't add any new syntax to the language.

If you've ever used TypeScript with --checkJs, it's a similar idea. Rather than adding types to JavaScript through new syntax, you add them via JSDoc-style comments.

Compare this TypeScript:

function max(a: number, b: number): number {
  return a > b ? a : b;
}

to the equivalent Closurized JavaScript:

/**
 * @param {number} a
 * @param {number} b
 * @return {number}
 */
function max(a, b) {
  return a > b ? a : b;
}

An invalid invocation of max results in a type error (which CC reports as a warning):

> google-closure-compiler "--warning_level" "VERBOSE" "max.js"

max.js:12:16: WARNING - [JSC_TYPE_MISMATCH] actual parameter 1 of max does not match formal parameter
found   : string
required: number
  12| console.log(max('foo', 'bar'));
                      ^^^^^

max.js:12:23: WARNING - [JSC_TYPE_MISMATCH] actual parameter 2 of max does not match formal parameter
found   : string
required: number
  12| console.log(max('foo', 'bar'));
                             ^^^^^

0 error(s), 2 warning(s), 100.0% typed
function max(a,b){return a>b?a:b}console.log(max("foo","bar"));

This is similar to what tsc does in some ways but different in others. Just like tsc, it reports type errors in your code. And just like tsc, it outputs JavaScript (the last line). At a high level, type checking and JS emit are also the two things that TypeScript does.

There are some interesting differences, too. The Closure Compiler reports that our code is "100.0% typed". In TypeScript terms, this is a measure of how prevalent any types are in your code. (Effective TypeScript discusses using the type-coverage tool to get this information in Item 44: Track Your Type Coverage to Prevent Regressions in Type Safety.)
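
For example, a parameter with no JSDoc annotation is unknown to the compiler, so expressions involving it should drag the percentage down (a hypothetical snippet):

// No @param annotation: the compiler knows nothing about `data`,
// so the expressions below count against the "% typed" figure.
function logValue(data) {
  console.log(data.value);
}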

The other interesting difference is that the output is minified. This gets us to the fundamental design goal of the Closure Compiler: producing the smallest JavaScript possible.

Minification as Design Goal

When Gmail came out back in 2004, network speeds were much, much slower than they are today. The Gmail team found that runtime JavaScript performance was almost irrelevant compared to download times (Update: this isn't quite right, see below). If you wanted to make your page load faster, you needed to make your JavaScript bundle smaller. So this is the central goal of the Closure Compiler and its "advanced optimizations" mode.

To see how this works, let's look at some code to fetch and process data from the network.

Here's an "externs" file (the CC equivalent of a type declarations file) that defines a type and declares a function:

// api-externs.js
/**
 * @typedef {{
 *   foo: string,
 *   bar: number,
 * }}
 */
let APIResponse;

/** @return {APIResponse} */
function fetchData() {}

Some interesting things to note here:

  • Types are introduced via @typedef in a JSDoc comment. The APIResponse symbol exists at runtime but is not particularly useful. Just because Closure code is JavaScript doesn't mean that the JavaScript always makes sense.
  • The declaration of fetchData includes an empty implementation. TypeScript would use declare function here, but that's not JS syntax, so CC uses an empty function body instead (see the TypeScript sketch below).
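
For comparison, here's roughly what the same declarations could look like in TypeScript (a hypothetical api-externs.d.ts):

// api-externs.d.ts (hypothetical TypeScript equivalent of the externs file)
interface APIResponse {
  foo: string;
  bar: number;
}

// `declare` says "this exists at runtime; its implementation lives elsewhere."
declare function fetchData(): APIResponse;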

Here's some more code that fetches data and processes it:

// api.js
/**
 * @typedef {{
 *   longPropertyName: string,
 *   anotherLongName: number
 * }}
 */
let ProcessedData;

/**
 * @param {APIResponse} data
 * @return {ProcessedData}
 */
function processData(data) {
  return {
    longPropertyName: data.foo,
    anotherLongName: data.bar,
  };
}

const apiData = fetchData();
const processedData = processData(apiData);
console.log(processedData.longPropertyName, processedData.anotherLongName);

Because it's just JavaScript, this code can be executed directly, presumably via a <script> tag (CC predates Node.js). No build step is required and your iteration cycle is very tight.

Let's look at what happens when you compile this:

> google-closure-compiler "--warning_level" "VERBOSE" "--externs" "api-externs.js" "api.js"

let ProcessedData;function processData(a){return{longPropertyName:a.foo,anotherLongName:a.bar}}const apiData=fetchData(),processedData=processData(apiData);console.log(processedData.longPropertyName,processedData.anotherLongName);

Here's what that looks like when we unminify it:

let ProcessedData;

function processData(a) {
  return {
    longPropertyName: a.foo,
    anotherLongName: a.bar
  };
}

const apiData = fetchData(), processedData = processData(apiData);

console.log(processedData.longPropertyName, processedData.anotherLongName);

Just like TypeScript, compilation here mostly consists of stripping out type information (in this case JSDoc comments).

Now look at what happens when we turn on "advanced optimizations":

> google-closure-compiler "--compilation_level" "ADVANCED" "--warning_level" "VERBOSE" "--externs" "api-externs.js" "api.js"

var a,b=fetchData();a={h:b.foo,g:b.bar};console.log(a.h,a.g);

The output is much shorter. Here's what it looks like unminified:

var a, b = fetchData();

a = {
  h: b.foo,
  g: b.bar
};

console.log(a.h, a.g);

This is a radical transformation of our original code. In addition to mangling our variable names (apiData became b, processedData became a), the Closure Compiler has mangled property names on ProcessedData (longPropertyName → h, anotherLongName → g) and inlined the call to processData, which let it remove that function entirely.

The results are dramatic. Whereas the minified code with simple optimizations was 231 bytes, the code with advanced optimizations is only 62 bytes!

Notice that CC has preserved some symbols: the fetchData function and the foo and bar property names. The rule is that symbols in an "externs" file are externally visible and cannot be changed, whereas the symbols elsewhere are internal and can be mangled or inlined as CC sees fit.

This is fundamentally unlike anything that TypeScript does. TypeScript does not rename symbols when it emits JavaScript, nor does it attempt to minify your code. Even if you run your generated JavaScript through a minifier, it won't do anything nearly this radical. It's hard (or impossible) for a minifier to know which symbols or property names are part of an external API. So mangling property names is generally unsafe. You're unlikely to get anything smaller than the 231-byte "simple optimizations" output with TypeScript.

These results generally hold up well after gzip compression, and in larger projects as well. I ported a JavaScript library to Closure in 2013 and shrank my bundle by 40% vs. uglifyjs.

This is great stuff! So why didn't the Closure Compiler take off?

The Problems with Minification as a Design Goal

The externs file was critical to correct minification. Without it, CC would have mangled the fetchData function name and the foo and bar properties, too, which would have resulted in runtime errors. Omitting a symbol from an externs file would result in incorrect runtime behavior that could be extremely difficult to track down. In other words, this was a really bad developer experience (DX).

CC introduced some extralinguistic conventions to deal with this. For example, in JS (and TS) there's no distinction between using dot notation and square brackets to access a property on an object:

const a = obj.property;
const b = obj['property'];
console.log(a, b); // exact same

This is not true with the Closure Compiler. Its convention is that quoted property access is preserved whereas dotted can be mangled. Here's how that code comes through the minifier with advanced optimizations:

console.log(obj.g,obj.property);

Note how the property names have diverged. In other words, while Closurized JavaScript is just JavaScript, it also kind of isn't.
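
In practice, this meant using quoted access for any property name that has to survive minification, such as data from a JSON API that the compiler can't see into (a hypothetical sketch):

/** @param {!Object} response Parsed JSON from a server. */
function readCount(response) {
  // The server controls this property name, so it must not be mangled:
  // quoted access protects it.
  return response['count'];
}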

There's another big problem with advanced optimizations: in order to consistently mangle a property name, CC needs to have access to all the source code that might use it. For this to be maximally effective, all the code you import must also be written with the Closure Compiler in mind, as must all the code that that code imports, etc.

In the context of npm in 2023, this would be impossible. In most projects, 90% or more of the lines of code are third-party. For this style of minification to be effective, all of that code would have to be written with the Closure Compiler in mind and compiled by it as a unit.

On the other hand at Google in 2004, or 2012, or perhaps even today, that is quite realistic. At huge companies, the first- to third-party code ratio tends to be flipped. Using third-party code is more painful because there are legal and security concerns that come with it, as well as a loss of control. TypeScript's zero runtime dependencies are a good example of this.

All of Google's JavaScript was written with the Closure Compiler in mind, and the vast majority of it is first-party. So the advanced optimizations mode works beautifully. But the rest of the JS world doesn't operate that way. As soon as you pull in any dependencies like React or Lodash that aren't written with the Closure Compiler in mind, it starts to lose its value.

Contrast this with TypeScript. It only needs to know about the types of existing libraries. This is all that's needed for type checking. The DefinitelyTyped project has been a monumental undertaking but it does mean that, generally speaking, you can get TypeScript types for almost any JS library. (There's a similar, though much smaller, set of externs to get type checking for popular JS libraries for the Closure Compiler.)

Stating it more directly: advanced optimizations requires that the compiler understand a library's implementation, not just its types, and that's simply infeasible given the enormous diversity of the JavaScript ecosystem.

Timing Is Everything

Google developed Closure c. 2004 but it wasn't open sourced until late 2009. An O'Reilly book on it, Closure: The Definitive Guide, came out in 2010.

In retrospect this timing was terrible. In 2010, JavaScript was just entering its period of maximum churn. JavaScript: The Good Parts came out in 2008 and ES5 codified many of its recommendations in a new "strict" mode in 2009. Node.js was first released in 2009 and npm followed hot on its heels in 2010, creating the ecosystem of JavaScript packages we know today. npm grew significantly more powerful and useful when browserify made it applicable to client-side code starting in 2011.

And finally, CoffeeScript was released in 2010. It normalized the idea of compiling an "improved" JavaScript down to regular JavaScript, as well as the idea of having a build step. All of these influenced the direction of JavaScript, with ES2015 bringing some of the best elements of CoffeeScript into the language itself.

The Closure Compiler was developed in the era when JavaScript was a "bad" language that was to be avoided. CC itself is implemented in Java, which made it harder to integrate into an all-JS toolchain. And it attempted to add missing parts to JavaScript. Since it couldn't add new syntax, it used special functions: goog.provide and goog.require provided a module system and goog.inherits smoothed out the process of creating class hierarchies. These were real JavaScript functions that did something at runtime. If memory serves, goog.require might inject a <script> tag!
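
From memory, pre-ES2015 Closure-style code looked something like this (a sketch; details varied with Closure Library versions):

goog.provide('myapp.Dog');
goog.require('myapp.Animal');

/**
 * @param {string} name
 * @constructor
 * @extends {myapp.Animal}
 */
myapp.Dog = function(name) {
  // Explicit super call; goog.inherits wires up the prototype chain below.
  myapp.Animal.call(this, name);
};
goog.inherits(myapp.Dog, myapp.Animal);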

There were a few problems with this. One was that all the goog functions reinforced the idea that this was a tool primarily built for Google. Putting company names in your packages is common in Java, so presumably it felt natural for the Closure developers. But it's not in JavaScript. We just import 'react', not "facebook/react".

Second, it made it awkward when JavaScript itself gained a module system and class keyword. TypeScript faced some of these problems in its early days, too. It used to have its own module system and class system, but in the interests of ecosystem coherence it deprecated them in favor of the native solutions. TypeScript now lets JavaScript be JavaScript and innovates only in the type system.
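
For instance, early TypeScript's "internal modules" (module Foo { ... }, later renamed namespace) are now discouraged in favor of standard ES2015 modules:

// Early TypeScript: an internal module (now spelled `namespace`)
namespace MathUtils {
  export function max(a: number, b: number): number {
    return a > b ? a : b;
  }
}

// Modern TypeScript: a plain ES2015 module export
export function max(a: number, b: number): number {
  return a > b ? a : b;
}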

This transition happened early in TypeScript's history, but late in the Closure Compiler's. Presumably adaptation was harder.

Why TypeScript won

TypeScript came along at a better time and has been able to adapt to the changes in JavaScript and its ecosystem over the past decade. It's self-hosted (tsc is written in TypeScript) and distributed with npm.

TypeScript also won by focusing more on developer tooling. The Closure Compiler is an offline system: you run a command, it checks your program for errors, then you edit and repeat. I'm not aware of any standard Closure language service. There's no equivalent of inspecting a symbol in your editor to see what CC thinks its type is. TypeScript, on the other hand, places as much emphasis on tsserver as tsc. Especially with Visual Studio Code, which is written in TypeScript and came out in 2015, TypeScript is a joy to use. TypeScript uses types to make you more productive whereas Closure used them to point out your mistakes. No wonder developers preferred TypeScript!

(Google engineers are no exception to this. In the past decade they've adopted TypeScript and migrated to it en masse. You can read about one team's experience porting Chrome DevTools from Closure to TypeScript.)

TypeScript did a better job of engaging the JavaScript community. TypeScript is developed and planned in the open on GitHub. They respond to bug reports from anyone and treat non-Microsoft users as important customers. The Closure Tools, on the other hand, were very much an open source release of an internal Google tool. Google was always the primary consumer and external users were mostly on their own. The goog namespacing reinforced this.

Closure's idea of "it's just JavaScript" was appealing because it let you avoid a build step. This remains appealing in 2023: some TypeScript users still prefer to use JSDoc-style type annotations and --checkJs. But using JSDoc for all types is awkward and noisy. Ergonomics do matter and TypeScript's are undeniably better.

Finally, TypeScript's central idea of "JavaScript + Types" has held up better than the Closure Tools' idea of "minification" and "it's just JavaScript". While shaving bytes off your bundle was all the rage in 2008, our connections are much faster now and, while bundle size still matters, it is not as critical as it was back then. Closure forced a uniform system on you and all your dependencies in order to achieve extreme minification. We've given up that goal in exchange for more flexibility.

There's a general principle here. I'm reminded of Michael Feathers's 2009 blog post 10 Papers Every Developer Should Read at Least Twice which discusses D.L. Parnas's classic 1972 paper "On the criteria to be used in decomposing systems into modules":

Another thing I really like in the paper is his comment on the KWIC system which he used as an example. He mentioned that it would take a good programmer a week or two to code. Today, it would take practically no time at all. Thumbs up for improved skills and better tools. We have made progress.

The KWIC system basically sorts a text file. So are we correct to laud our progress as software developers? This would be a one-liner today:

const fs = require('fs');

// Note: without an encoding argument, fs.readFileSync returns a Buffer,
// and Array.prototype.toSorted requires Node 20+.
console.log(
  fs.readFileSync('input.txt', 'utf-8')
    .split('\n')
    .toSorted((a, b) => a.localeCompare(b))
    .join('\n')
);

But think about what makes this possible:

  • We're assuming that the entire file fits in memory, which almost certainly would not have been true in 1972.
  • We're using a garbage collected language, which would have been a rarity back then.
  • We have an enormous library at our fingertips via node built-ins and npm.
  • We have great text editors and operating systems.
  • We have the web and StackOverflow: no need to consult a reference manual!

All of these things are thanks to advances in hardware. The hardware people give us extra transistors, and we software people take most of them for ourselves to get a nicer development process. So it is with faster network speeds and the Closure Compiler. We've taken back some of that bandwidth in exchange for a more flexible development process and ecosystem.

Conclusions

There were discussions of adding minification to TypeScript in the early days but now optimized output is an explicit non-goal for the language. If you've ever thought that type-driven minification would be a beautiful thing, the Closure Compiler is a fascinating data point. It can be tremendously effective, but it also comes at an enormous cost to the ecosystem.

The Closure Compiler as a standalone external tool seems mostly dead (the Closure playground is badly broken and says "Copyright 2009"!). But it still lives on at Google. Since they've adopted TypeScript, they can use the Closure Compiler for just what it does best: minification. To make this work, Google has built a tool, tsickle, that makes TypeScript produce Closurized JavaScript. True to form, this tool is open source but pretty inscrutable to an outsider. It may be used by Angular, but I couldn't tell.

Hopefully this was an interesting lesson in JavaScript history! The Closure Compiler represents an alternative path that the JavaScript ecosystem could have taken, with different principles and different tradeoffs.

There's a lively discussion of this article on Hacker News. In particular Paul Buchheit (the creator of Gmail!) points out that runtime performance was very much a goal of the Closure Compiler and inlining/dead code removal was a way to achieve this. It's hard to get back in the pre-JIT IE6 mindset where every getter comes with a cost! I don't think this changes the conclusions of the article. Also, the Closure Compiler is not the Google Web Toolkit (GWT).
