How We Learned to Draw Letters
The sentence you’re reading right now has a rendering pipeline behind it. A font file stored somewhere on your device describes the outlines of each glyph as mathematical curves. A shaping engine determines which glyphs to use and how to position them. A rasterizer converts those outlines into pixels. An anti-aliasing algorithm smooths the edges. A compositor places the result on your screen.
Every step in that pipeline has a history. Most of it is a history of companies trying to control something and eventually losing control of it. Here’s the whole arc — from the first characters on a cathode ray tube to the Rust rewrites happening right now.
The oscilloscope era (1951–1964)
The first digital text appeared on screens that weren’t designed for text. The MIT Whirlwind (1951) was the first computer to display real-time text on a CRT — a large oscilloscope where the electron beam traced angular letterforms from straight-line segments. Curves required processing power these machines didn’t have, so early vector fonts were all right angles.
The SAGE system (1956–1958), derived from Whirlwind, deployed approximately 150 CRT consoles with Typotron character-display tubes capable of 25,000 characters per second. This was text rendering at industrial scale for the first time — operators reading alphanumeric data from screens to coordinate air defense.
Parallel to this, the Digiset (1961), designed by German engineer Rudolf Hell, became the first typesetting machine to assemble fonts digitally. It projected light through a CRT onto photo paper, distributing it into a bitmap grid — arguably the first true digital bitmap font system, though used for typesetting, not interactive display.
The IBM 2260 Display Station (1964) was one of the first text terminals. Each character was rendered on a 9×14 pixel grid, stored in the terminal’s character ROM. This is the beginning of the model that dominated for the next three decades: a fixed set of characters, stored as bitmaps, addressed by code point, rendered identically every time.
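That model is simple enough to sketch in a few lines: a fixed bitmap per character, addressed by code point, rendered identically every time. The glyph data below is hypothetical (an 8-wide, 9-row 'A', not the real 2260 ROM), but the mechanism is the same.

```python
# A minimal sketch of the character-ROM model: each glyph is a fixed
# bitmap, addressed by code point, rendered identically every time.
# The glyph data is hypothetical, not the actual IBM 2260 ROM contents.

CHAR_ROM = {
    # 'A' on an 8-wide grid, one byte per row (9 rows for brevity)
    ord("A"): [
        0b00011000,
        0b00100100,
        0b01000010,
        0b01000010,
        0b01111110,
        0b01000010,
        0b01000010,
        0b01000010,
        0b00000000,
    ],
}

def render(code_point: int) -> str:
    """Turn a ROM bitmap into rows of '#' (on) and '.' (off) pixels."""
    rows = CHAR_ROM[code_point]
    return "\n".join(
        "".join("#" if row & (1 << (7 - col)) else "." for col in range(8))
        for row in rows
    )

print(render(ord("A")))
```

Everything about this model is rigid — one size, one style, one grid — which is exactly why the outline revolution that follows was such a break.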
The outline revolution (1977–1992)
Two things happened in the late 1970s that changed everything: bitmap displays became addressable at the pixel level, and someone got annoyed at the typography in his own textbook.
The Xerox Alto (1973) had a 606×808 bitmapped display at 72 pixels per inch — every pixel individually addressable. This enabled proportional fonts, multiple typefaces, and the first WYSIWYG text editing. As Ken Shirriff has documented, the Alto’s influence on Apple’s Lisa and Macintosh was direct.
Then came Donald Knuth. Between 1977 and 1979, frustrated with the typographic quality of The Art of Computer Programming, Knuth created both TeX (a typesetting system) and Metafont (a character description language). Metafont described glyphs using geometrical equations rather than storing bitmaps — the first outline font system designed for mathematical precision. Knuth rewrote Metafont entirely in 1984; both TeX and Metafont were in the public domain from the start.
Metafont was elegant but academic. The commercial world went a different direction.
PostScript and the secret specification
In 1984, Adobe introduced PostScript — a page description language — along with Type 1 and Type 3 font formats. Type 1 fonts used cubic Bézier curves with “hints” that improved rendering at low resolutions. Adobe kept the Type 1 hinting specification secret and encrypted, publishing only the less capable Type 3 specification for general use.
When the Apple LaserWriter shipped in March 1985 with 13 built-in PostScript fonts, desktop publishing was born. But the economics were revealing: Adobe controlled the font specification, charged licensing fees for PostScript, and kept the best rendering technology proprietary. Every desktop publisher depended on Adobe’s fonts, and Adobe intended to keep it that way.
TrueType: the font wars begin
Apple didn’t like paying Adobe. In the late 1980s, Apple developed TrueType (codenamed “Bass,” then “Royal”) as a direct competitor to PostScript fonts. Apple announced TrueType at the Seybold Desktop Publishing Conference in September 1989.
The key technical difference: TrueType uses quadratic B-splines (simpler mathematically than Type 1’s cubic Bézier curves) and includes a bytecode instruction set for pixel-level hinting control. The bytecode gives font designers precise control over which pixels light up at each size — important when screens were 72–96 DPI and every pixel mattered.
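The mathematical relationship between the two curve families is worth seeing concretely: every quadratic curve can be exactly "degree-elevated" to a cubic (the reverse is not true), which is one reason a single container format could later hold both. A sketch, with hypothetical control points:

```python
# Sketch: evaluating TrueType-style quadratic and Type 1-style cubic
# Bézier curves, and showing that any quadratic is exactly representable
# as a cubic via degree elevation.

def quadratic(p0, p1, p2, t):
    """Evaluate a quadratic Bézier at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u*u*a + 2*u*t*b + t*t*c for a, b, c in zip(p0, p1, p2))

def cubic(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(
        u**3*a + 3*u*u*t*b + 3*u*t*t*c + t**3*d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def elevate(p0, p1, p2):
    """Rewrite a quadratic as the exactly-equivalent cubic."""
    c1 = tuple(a + 2.0/3.0*(b - a) for a, b in zip(p0, p1))
    c2 = tuple(c + 2.0/3.0*(b - c) for b, c in zip(p1, p2))
    return p0, c1, c2, p2

q = ((0, 0), (50, 100), (100, 0))   # hypothetical control points
c = elevate(*q)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    qx, qy = quadratic(*q, t)
    cx, cy = cubic(*c, t)
    assert abs(qx - cx) < 1e-9 and abs(qy - cy) < 1e-9
```

Quadratics are cheaper to evaluate and rasterize; cubics express the same shape with fewer segments. Neither side was simply wrong, which is part of why the war lasted so long.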
Apple licensed TrueType to Microsoft for free. In exchange, Apple received Microsoft’s PostScript-compatible page description language. Windows 3.1 shipped TrueType in April 1992 with high-quality fonts from Monotype: Times New Roman, Arial, and Courier New.
Under competitive pressure from TrueType, Adobe published the Type 1 specification in 1990 and released Adobe Type Manager for smooth on-screen rendering. The secrecy that had given Adobe its market advantage was abandoned in under six years.
This is the first pattern in this story: a company keeps a specification proprietary, a competitor creates an alternative, and the proprietary specification gets published under pressure. It repeats.
Unicode (1987–1991)
While the font wars were about how letters look, a parallel problem was which letters exist.
In 1987, Xerox employee Joe Becker, alongside Apple employees Lee Collins and Mark Davis, began investigating a universal character encoding. Mark Davis had realized the need in 1985 while developing a Japanese-capable Macintosh. Becker’s 1988 draft proposal described “an international/multilingual text character encoding system, tentatively called Unicode” — the name intended to suggest “a unique, unified, universal encoding.”
The Unicode Consortium was incorporated on January 3, 1991, with board members from Apple, IBM, Microsoft, NeXT, Novell, and Sun Microsystems. Unicode 1.0 was published in October 1991.
Unicode didn’t solve the rendering problem — it solved the representation problem. But it created a new rendering problem: complex text layout. Arabic runs right-to-left and changes glyph shape based on position. Devanagari reorders characters. Thai has no word boundaries. A universal character set required a universal shaping engine, and that engine didn’t exist yet.
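The gap between representation and rendering is visible even from Python's standard library: the Unicode database records *properties* a layout engine must consult — such as each character's bidirectional category — but says nothing about how to draw the result.

```python
# Unicode solves representation, not rendering. A sketch using the
# stdlib Unicode database: the bidirectional category is one of the
# properties a layout engine must consult before shaping even begins.

import unicodedata

samples = {
    "A": "Latin capital A",
    "\u05d0": "Hebrew alef",
    "\u0627": "Arabic alef",
}

for ch, name in samples.items():
    print(f"{name}: bidi category {unicodedata.bidirectional(ch)!r}")

# 'L' = left-to-right, 'R' = right-to-left, 'AL' = Arabic letter —
# three different layout behaviors for what is, to the character
# encoding, just three code points.
```

Knowing that U+0627 is category `AL` still tells you nothing about which of its contextual glyph shapes to draw — that is the shaping engine's job, and in 1991 no universal one existed.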
The open source text stack (1996–2012)
FreeType: rendering under patent constraints
In 1996, David Turner wrote FreeType in Pascal to render TrueType fonts, including a TrueType bytecode interpreter. Robert Wilhelm ported it to C in 1997, and Werner Lemberg joined the team.
In 1999, Apple informed FreeType that its bytecode interpreter infringed Apple’s TrueType hinting patents. FreeType disabled the interpreter by default and developed an auto-hinter as an alternative.
This is worth pausing on. For over a decade — from 1999 to 2010 — every Linux user experienced visibly worse font rendering than macOS or Windows users, because Apple’s patents prevented the open source stack from using the same hinting technology. The auto-hinter worked, but it wasn’t as good. The difference was visible to anyone who used both platforms. Linux’s reputation for poor font rendering during this era wasn’t a failure of engineering. It was a consequence of patent law.
The patents expired in May 2010. FreeType 2.4 enabled the bytecode interpreter by default. The rendering quality gap closed overnight.
HarfBuzz: one shaping engine to rule them all
The problem: FreeType, Pango, and Qt each had their own OpenType shaping implementations. The same characters in the same fonts could render differently across applications.
In 2006, HarfBuzz started by importing FreeType’s OpenType layout code into Pango. In 2007, Qt contributed its shaping code. The three implementations merged under an MIT license.
Under Behdad Esfahbod’s lead (Red Hat, then Google from 2010), development accelerated dramatically — from 25 commits in 2008 to over 400 in 2009. In 2012, Esfahbod completed a full rewrite (“New HarfBuzz”) targeting multiple font technologies: OpenType, AAT, and Graphite.
The adoption list is the argument: Android, Chrome, Firefox, GNOME, KDE, LibreOffice, OpenJDK, XeTeX, Adobe Photoshop, Adobe Illustrator, Adobe InDesign, Microsoft Edge, Figma, Godot Engine, Unreal Engine, PlayStation. One open source shaping engine, everywhere.
The supporting cast
- Pango (1999–2000): Text layout for GTK/GNOME. Name from Greek pan (“all”) + Japanese go (“language”). Created by Owen Taylor and Raph Levien, later maintained by Esfahbod. Uses HarfBuzz for shaping.
- Fontconfig (2000–2002): Keith Packard replaced the X11 bitmap font system with scalable font discovery and matching. Now maintained by Esfahbod under freedesktop.org.
- Cairo (2002–2003): 2D vector graphics library by Keith Packard and Carl Worth. Originally named “Xr,” renamed to emphasize its cross-platform nature. Rendering backend for GTK, Pango, and Firefox (historically).
- ICU (1999): International Components for Unicode, originated at Taligent (IBM). Open-sourced in 1999. Now under the Unicode Consortium.
The proprietary stack (1998–2009)
While the open source stack assembled itself from patches and patent workarounds, the platform vendors built integrated systems.
ClearType
Bill Gates announced ClearType at COMDEX on November 15, 1998. Invented by Bert Keely and Greg Hitchcock on Microsoft’s e-Books team, ClearType exploits the red, green, and blue subpixels of LCD screens to triple horizontal resolution for text.
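The core trick can be sketched in a few lines: render glyph coverage at 3× horizontal resolution, then map each triple of samples onto one pixel's R, G, B stripes. This is a toy model only — real ClearType adds the color-balancing filters discussed below, which are omitted here.

```python
# A sketch of the core ClearType idea: treat each LCD pixel's R, G, B
# stripes as three separately addressable horizontal samples. (The real
# implementation applies color-balancing filters, omitted here.)

def subpixel_row(coverage_3x):
    """Map 3x-resolution coverage samples (0.0-1.0) to RGB pixels.

    Text is assumed black on white: full coverage turns a stripe off.
    """
    assert len(coverage_3x) % 3 == 0
    pixels = []
    for i in range(0, len(coverage_3x), 3):
        r, g, b = (1.0 - c for c in coverage_3x[i:i + 3])
        pixels.append((r, g, b))
    return pixels

# A vertical stem one *subpixel* wide — unrepresentable at whole-pixel
# resolution, but expressible here as a tinted edge on the first pixel.
row = subpixel_row([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
print(row)
```

The unfiltered version shown here produces the color fringing that the patented filters existed to suppress — which is why the patents covered the filters, not the addressing trick itself.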
Microsoft filed nine ClearType patents between 1998 and 1999. An interesting wrinkle: Steve Gibson argued that the Apple II (1977) used a form of subpixel rendering, and Wozniak’s patent (U.S. Patent 4,136,359) is listed in the citations of Microsoft’s ClearType patents. Both Microsoft and the patent examiners were aware of this prior art. The FreeType team noted that the ClearType patents specifically covered color-balancing filters, not the general concept of subpixel addressing.
All ClearType color filtering patents expired by August 2019. By then, high-DPI displays had made subpixel rendering largely unnecessary — Apple removed it from macOS Mojave in 2018, and iOS never used it at all.
Core Text and DirectWrite
Apple introduced Core Text publicly in Mac OS X 10.5 Leopard (2007), replacing the deprecated QuickDraw and ATSUI frameworks. Core Text handles character-to-glyph mapping, font metrics, and OpenType/AAT features, mediating between high-level layout and low-level Quartz rendering.
Microsoft shipped DirectWrite with Windows 7 (2009), replacing GDI/GDI+ and Uniscribe for screen text. DirectWrite provides hardware-accelerated text rendering via Direct2D and added color font support in Windows 8.1.
Skia
Mike Reed and Cary Clark founded Skia Inc. in 2004 to build 2D graphics software. Google acquired Skia in 2005 and open-sourced it in 2008 alongside Chrome’s launch. Skia is now the graphics engine for Chrome, ChromeOS, Android, and Flutter. For text, Skia uses HarfBuzz for shaping and FreeType (increasingly Skrifa) for rasterization — the proprietary graphics engine wrapping the open source text stack.
OpenType: the truce (1996–2016)
The font format wars ended not with a winner but with a merge.
In 1994, Microsoft developed “TrueType Open” after failing to license Apple’s GX Typography. In 1996, Adobe joined Microsoft to create OpenType — a format that could contain either TrueType (quadratic) or PostScript/CFF (cubic) outlines. The 1.0 specification was released in 1997. First fonts shipped in 2000.
OpenType added extensive layout tables for ligatures, small caps, contextual alternates, and complex script shaping. By housing both PostScript and TrueType outlines in one container, it removed the need for users to choose sides.
In September 2016, Microsoft, Adobe, Apple, and Google jointly announced OpenType 1.8 — Variable Fonts — at the ATypI conference in Warsaw. A single font file could now contain an entire design space: weight, width, slant, optical size, custom axes, all through continuous interpolation. The technology echoed Apple’s TrueType GX from 1994 and Adobe’s Multiple Master fonts from 1991, but this time all four companies collaborated instead of competing.
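The underlying mechanism is interpolation: outline points are stored at "master" positions, and any in-between instance is computed on demand. A minimal one-axis sketch with hypothetical coordinates (real variable fonts work in normalized axis space with per-point deltas):

```python
# A minimal sketch of variable-font interpolation along a single weight
# axis. Coordinates are hypothetical; real fonts store normalized axis
# values and delta sets rather than two full masters.

def instance(light_pts, bold_pts, weight, w_min=100, w_max=900):
    """Linearly interpolate outline points for an arbitrary weight."""
    t = (weight - w_min) / (w_max - w_min)   # normalize axis to 0..1
    return [
        (lx + t * (bx - lx), ly + t * (by - ly))
        for (lx, ly), (bx, by) in zip(light_pts, bold_pts)
    ]

# Hypothetical stem cross-section: 60 units wide at Light, 180 at Black.
light = [(100, 0), (160, 0)]
black = [(40, 0), (220, 0)]

for w in (100, 400, 900):
    pts = instance(light, black, w)
    stem = pts[1][0] - pts[0][0]
    print(f"weight {w}: stem width {stem:.0f}")
```

Every weight between the masters exists implicitly — the design space is continuous, which is exactly what GX Typography and Multiple Master had each promised decades earlier.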
How they shaped each other
The proprietary-open dynamic in text rendering follows a consistent pattern:
1. Secrecy triggers competition. Adobe kept Type 1 secret → Apple built TrueType. Apple patented TrueType hinting → FreeType built the auto-hinter. Microsoft patented ClearType → open source worked around the patents until they expired.
2. Open source converges. Three separate shaping implementations (FreeType, Pango, Qt) merged into HarfBuzz. The fragmented Linux font stack (xfs, xft, X11 core fonts) was replaced by fontconfig + FreeType + HarfBuzz + Cairo. Convergence takes longer than invention but produces more durable results.
3. The proprietary vendors adopt the open source stack. Adobe replaced its proprietary shaping engine with HarfBuzz — the same company that started the font wars by encrypting Type 1. Google’s Skia uses HarfBuzz and FreeType. Microsoft’s Universal Shaping Engine and HarfBuzz share implementations. The companies that built walled gardens now depend on the commons.
4. Google acted as a bridge. Google hired Behdad Esfahbod in 2010, funding HarfBuzz development while using it in Android and Chrome. Google launched Google Fonts in 2010 (now nearly 1,700 families, viewed over 15 billion times per day). Google funded FreeType development, contributed to OpenType specifications, developed WOFF2 compression using Brotli (achieving 30%+ better compression than WOFF 1.0), and co-designed COLRv1 for color fonts. Google’s role wasn’t altruistic — they needed good text rendering in Chrome and Android — but the effect was to accelerate the open source stack past the proprietary alternatives.
5. Standards ended wars. OpenType merged PostScript and TrueType. Variable fonts merged four companies’ competing approaches. COLRv1 is converging four competing color font formats (Apple’s sbix, Google’s CBDT, Microsoft’s COLR, Adobe/Mozilla’s SVG) into one.
Where it is now
Behdad Esfahbod’s “State of Text Rendering 2024” is the most authoritative survey. The key facts:
The open source text stack won. HarfBuzz and FreeType are used in virtually every major browser, operating system, and creative application. The era of each platform maintaining its own shaping engine is over.
High-DPI displays simplified rendering. With subpixel rendering abandoned by Apple and its patents expired everywhere, the dominant approach is grayscale anti-aliasing at high pixel densities. Hinting, once critical for legibility at 72–96 DPI, matters less on 200+ DPI screens.
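Grayscale anti-aliasing reduces to one idea: a pixel's gray level is the fraction of it covered by the outline. A sketch estimating that coverage by supersampling an ideal vertical edge (FreeType's actual rasterizer computes exact scanline coverage, not samples):

```python
# Grayscale anti-aliasing in miniature: each pixel's gray level is the
# fraction of the pixel covered by the glyph, estimated here by
# supersampling a vertical edge at x = 2.3. A sketch, not FreeType's
# exact-coverage scanline algorithm.

def coverage(px, edge_x, samples=4):
    """Fraction of pixel [px, px+1) lying left of a vertical edge."""
    inside = 0
    for i in range(samples):
        x = px + (i + 0.5) / samples     # sample-point x position
        if x < edge_x:
            inside += 1
    return inside / samples

row = [coverage(px, edge_x=2.3) for px in range(4)]
print(row)   # interior pixels read 1.0, the boundary pixel is partial
```

At 200+ DPI the boundary pixel is physically tiny, so this simple scheme looks smooth without hinting or subpixel tricks — which is why the industry converged on it.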
The Rust migration is underway. Google is replacing FreeType with Skrifa/Fontations in Chrome — enabled for all web fonts on Linux, Android, and ChromeOS as of Chrome 133 (February 2025). Rustybuzz (a Rust port of HarfBuzz) and cosmic-text (System76’s pure-Rust text layout library) continue the trend.
Programmable fonts via WebAssembly represent the speculative frontier — fonts that embed their own shaping logic for scripts that OpenType’s declarative model can’t handle.
Incremental Font Transfer is in development at the W3C, allowing browsers to download only the portions of a font actually needed — critical for CJK fonts with tens of thousands of glyphs.
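The client-side intuition fits in a few lines: compute the code points the page actually uses, fetch a subset covering them, and request only the difference when new text appears. A toy model of the bookkeeping only — the real protocol negotiates binary font patches, not code-point sets.

```python
# The intuition behind Incremental Font Transfer: a page needs only the
# glyphs for the code points it actually uses, and later requests fetch
# only the difference. A toy model of the client-side bookkeeping; the
# real W3C protocol exchanges binary font patches.

def needed_codepoints(text):
    """Code points a page actually uses — the subset worth downloading."""
    return {ord(ch) for ch in text}

full_font_glyphs = 30_000               # typical scale for a CJK font
page = "字体渲染的历史"                   # sample page text
subset = needed_codepoints(page)
print(f"{len(subset)} of ~{full_font_glyphs} glyphs needed")

# Later, more text appears; only the *difference* must be fetched.
more = needed_codepoints("增量传输")
patch = more - subset
print(f"{len(patch)} additional glyphs to fetch")
```

Seven glyphs out of thirty thousand for the first request: the waste that IFT eliminates is not subtle.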
What I think
This is where the post stops being a timeline and starts being mine.
What went well
The convergence on HarfBuzz is the best outcome in this story. Three separate shaping engines — each with different bugs, different rendering of the same text — merged into one library that now runs everywhere from PlayStation to Photoshop. This is what good open source looks like: not a replacement for proprietary software, but a shared substrate that makes proprietary competition happen at higher levels (UI, features, design tools) instead of at the level of basic text rendering. The fact that Adobe uses HarfBuzz is not a defeat for Adobe. It’s Adobe choosing to compete on creative tools instead of on shaping engines.
OpenType was a genuine achievement of standards engineering. Merging two competing outline formats (quadratic and cubic) into one container that preserved both, while adding extensible layout tables, is hard technical work done under commercial pressure. The four-company collaboration on variable fonts in 2016 — twenty-two years after Apple tried and failed alone with GX Typography — shows that the right answer sometimes requires waiting for the right conditions.
Knuth’s decision to place TeX and Metafont in the public domain mattered. Not because Metafont won the market — it didn’t — but because it established the precedent that typographic software could be free. The FreeType project, the HarfBuzz project, and the entire Linux font stack exist in a lineage that traces back to the idea that typographic tools belong to everyone.
What could have been done differently
The patent era was a decade of artificial degradation. From 1999 to 2010, Linux users had worse font rendering because of Apple’s hinting patents. From 1998 to 2019, subpixel rendering was patent-encumbered. These patents didn’t protect genuine innovation — TrueType bytecode hinting was a solution to a problem (low-DPI screens) that high-DPI screens eventually made irrelevant, and ClearType’s subpixel approach was preceded by the Apple II in 1977. The patents protected temporary competitive advantages while degrading the experience of millions of users on open source systems. Without them, the history of text rendering would have been simpler and the quality gap smaller.
Adobe’s six years of Type 1 secrecy (1984–1990) delayed the entire field. If the Type 1 hinting specification had been published from the start, the font wars might not have happened. TrueType was built specifically because Apple couldn’t license what Adobe wouldn’t publish. The secrecy created the competition that eventually forced Adobe to publish anyway — plus a permanently fragmented landscape of two incompatible outline formats that OpenType had to paper over.
Apple’s TrueType GX (1994) was the right idea twenty-two years early. Variable fonts in 2016 are essentially what GX offered in 1994 — continuous interpolation along design axes. GX failed because Apple went alone, tools didn’t exist to create GX fonts, and the web hadn’t created demand for responsive typography. If GX had been a cross-vendor effort in 1994 the way variable fonts were in 2016, the timeline could have been compressed by a decade.
Predictions
The Rust migration will complete. Skrifa replacing FreeType in Chrome is the leading edge. Within five years, the default text rendering stack on most platforms will be memory-safe Rust code. The C/C++ implementations won’t disappear — legacy systems will run them for decades — but new deployments will default to Rust. This is driven by security (memory safety), not performance.
Hinting will become irrelevant. It’s already mostly irrelevant on devices with 200+ DPI screens. As low-DPI desktop monitors are replaced, the complex TrueType bytecode interpreter that caused so much patent trouble will become dead code — maintained for compatibility, never executed on modern hardware. The irony: the thing Apple patented hardest will be the thing nobody needs.
Variable fonts will change how we think about font families. Instead of shipping twelve files (regular, bold, italic, bold italic, light, …), designers will ship one file with a continuous design space. This is technically possible now but tooling and design conventions haven’t caught up. When they do, the distinction between “fonts” and “typefaces” will shift — a typeface will be a region in a multidimensional space, not a discrete set of files.
Incremental Font Transfer will make CJK web fonts practical. A Chinese font can contain 30,000+ glyphs. Downloading the full file for a web page that uses 200 of them is wasteful. IFT solves this, and it’s the last major barrier to truly universal web typography.
WebAssembly shapers are the wildcard. Fonts that contain their own shaping logic could handle scripts that OpenType’s fixed model can’t — but they also introduce an execution model inside font files, with all the security implications that entails. I expect this will develop slowly and cautiously, with adoption limited to specialized use cases for at least another decade.
The broad prediction: text rendering is entering a mature phase. The format wars are over. The patent wars are over. The open source stack won. What remains is incremental improvement — memory safety, performance optimization, broader script coverage — rather than architectural upheaval. The next seventy-five years of text rendering will be less dramatic than the first seventy-five. That’s a good thing. The best infrastructure is the kind you stop thinking about.
— Cael