Seeing Like a State - How Certain Schemes to Improve the Human Condition Have Failed

This review was written in June of 2019

You can find a basic TLDR of SLAS in various places, since Scott concisely reiterates the intended takeaways several times. The short version: measurement and optimization are problematic, and history is littered with lessons about this. One of the first concise summaries, fairly early in the book, reads: "Certain forms of knowledge and control require a narrowing of vision. The great advantage of such tunnel vision is that it brings into sharp focus certain limited aspects of an otherwise far more complex and unwieldy reality." Through anecdotes spanning hundreds of years and a dozen geographical regions, we observe the disadvantages of such tunnel vision.

One thing should be commonsensical no matter what your domain is: if the only reason we're optimizing X is that X happened to be the measurable thing, should we really be optimizing it? Scott paints a picture of statecraft in which planners are perfectly oblivious to this fallacy, and warns about the side effects of applying the iron fist of legibility too zealously.

So why 445 pages? Reiteration, anecdotal evidence, and glimpses at alternatives. Why did I read it? Mostly curiosity about precisely how useful the extra detail could be. I come from nearly a decade in the tumblr cluster, which steals, both wittingly and unwittingly, many ideas from SLAS, so my experience of the book was primarily déjà vu. Was it really a good use of my time, then? I'm agnostic.

Many reviews of this book have explained legibility better than I could, and I shan't pretend to add to them.

Maybe someone is really skeptical of the thesis and really needs it to be 445 pages. I don't know what it's like to be that person. I'd like a shorter book, but a different one-- written by someone who really cares about computational complexity. If measurement is releasing information from a phenomenon, and a phenomenon might have a "critical mass" past which information about it becomes useful, then instead of asking "when is or isn't legibility unwise?" we ask "when is or isn't legibility infeasible?". On top of that, we'd ask whether planning around that phenomenon is computationally feasible (where planning is something like sequential computation and the absence of planning is something like parallel computation).
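To make the analogy concrete, here is a toy sketch of my own (not anything from the book): a central planner solves a load-balancing problem in one pass, but only because the entire state is legible to it, while decentralized agents reach the same answer through repeated local averaging, each seeing only its neighbors. The problem, the variable names, and the averaging rule are all invented for illustration.

```python
# Toy illustration (mine, not Scott's): central planning needs full
# legibility of the state; local agents need only neighbors, but pay
# in rounds of communication.

loads = [9, 1, 5, 3, 7, 2, 8, 1]  # arbitrary workloads on 8 nodes

def central_plan(loads):
    # The planner reads the WHOLE state vector -- total legibility --
    # and computes the even allocation directly, in one pass.
    target = sum(loads) / len(loads)
    return [target] * len(loads)

def local_rounds(loads, rounds=200):
    # Each node repeatedly averages with its two ring neighbors,
    # using only local information; nobody ever sees the global state.
    state = [float(x) for x in loads]
    n = len(state)
    for _ in range(rounds):
        state = [(state[(i - 1) % n] + state[i] + state[(i + 1) % n]) / 3
                 for i in range(n)]
    return state

print(central_plan(loads))   # even allocation, computed instantly
print(local_rounds(loads))   # same allocation, reached by diffusion
```

Both procedures converge on the same even allocation; the difference is what each one must be able to *see*, which is roughly the question I wish the book had asked quantitatively.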

"Ah.", you would point out to me, sagely, "You have not yet accepted the folly of your pursuit. Did you even read the book? Hubris!", which is fine-- I don't think the problem with legibility and planning is that it does injustice to some innocent essences, I think the problem is literal complexity/feasibility. And I think understanding exactly where these inflection points are is important.