Excerpts from Aaron Swartz's blog, Raw Thought. The blog is still available online (including as an EPUB), and there is a useful search engine for it.
Where it gets creepy is when this natural enthusiasm is co-opted and channeled into a structured, top-down sort of system. Now I’m not just expressing my opinions, I’m following orders so I can get goodies. That fundamentally changes things.
There’s no need to get rid of the promotion system, just scale it back a little. Provide a list of suggested actions, a forum where people can talk about what they’re doing, and then offer to mail a t-shirt or something to people who work hard. You see, contrary to popular opinion — even in the free culture community, oddly enough — rewards are incredibly destructive. Study after study shows they actually demotivate people, encourage people to cheat and lie, and cause them to make stupid decisions about trade-offs. For an excellent book on the subject, see Alfie Kohn’s Punished By Rewards.
The boyfriend was desperately penitent, insisting it was just an accident, not a pattern, and that he loved her.
(See how I self-consciously point out the clichés I’m in? Let it never be said that this blog is not post-modern!)
AARON: But it’s about me! I waive my confidentiality rights.
Dan Connolly is very dismissive of my swearing off of competitive games. “Oh, isn’t pool a competitive game?” he sneered to me in email. (To be honest, playing was sort of a breach.) And when I described the application process he grumbled about it sounding competitive. (It wasn’t a game, though!) I assume it’s because Dan’s a social conservative.
Surely this is the only technology in history to have such philosophical problems.
The 1960s, as is well-known, had a major civilizing effect on all areas of American life. Less well-known, however, was the immediate pushback from the powerful centers of society. The process involved a great number of things, notably the network of right-wing think tanks I’ve written about elsewhere, but in the field of education it led to a crackdown on “those institutions which have played the major role in the indoctrination of the young”, as a contemporary report (The Crisis of Democracy) put it.
The indoctrination centers (notably schools) weren’t doing their job properly and so a back-to-basics approach with more rote memorization of meaningless facts and less critical thinking and intellectual development was needed. This was mainly done under the guise of “accountability”, for both students and teachers. Standardized tests, you see, would see how well students had memorized certain pointless facts and students would not be allowed to deviate from their assigned numbers. Teachers too would have their jobs depend on the test scores their students got. Teachers who decided to buck the system and actually have their students learn something worthwhile would get demoted or even fired. Not surprisingly, as always happens when you make people’s lives depend on an artificial test, teachers began cheating. And it is here that Professor Levitt enters the story.
The problem is that the villains know they’re evil. And people really grow up thinking things work this way: evil people intentionally do evil things. But this just doesn’t happen. Nobody thinks they’re doing evil — maybe because it’s just impossible to be intentionally evil, maybe because it’s easier and more effective to convince yourself you’re good — but every major villain had some justification to explain why what they were doing was good. Everybody thinks they’re good.
Eichmann thought he was just doing his job. Eichmann, of course, is the right example because it was Hannah Arendt’s book Eichmann in Jerusalem: A Report on the Banality of Evil that is famously cited for this thesis. Eichmann, like almost all terrorists and killers, was by our standards a perfectly normal and healthy guy doing what he thought were perfectly reasonable things. And if that normal guy could do it, so could we.
A final note: Why is all this uninteresting? Because the Democrats aren’t paid to win elections. They’re paid to win policy for their corporate donors. Policy that hurts those companies, however popular with the public, simply will not be funded.
This is not to say that there is anything intrinsically wrong with using math or jargon or making grand claims. But to adopt these habits reflexively is to put the means before the ends. Scientists do not use math because it is complicated but because, for what they are doing, it is effective. Their grand pronouncements become accepted because (sometimes, at least) they are true.
Social behavior, they argue, is simply too complex for us to ever make real progress in the field. The topic is studied by idiots and charlatans because the intelligent and honest can immediately see its impossibility. I do not agree with such a view. In fact, I think it is only possible to maintain it through abject ignorance of what science really knows.
Scientists — even the hardest of scientists — fabricate data, fabricate studies, fall prey to fads, and otherwise get things wrong. But more relevantly, they just don’t know that much. We have very little idea of how the body works; the pills we take are made through the bluntest of means. We don’t know how to calculate very simple things, like the dispersion of milk in a cup of coffee. The illusion that social science is ineffective can only be sustained by ignorance of such ineffectiveness of hard science. The upside of all this is that there is hope for social science.
I would always be the little kid in the corner, the guy who definitely stands out. But then I was looking at some old photos of me receiving the ArsDigita prize and I realized I really do look rather different. Then I was ugly, short, fat, awkward, and poorly-dressed. Now I’m taller and definitely much thinner (although that seems to change), I wear better-looking clothes, and my face has become rather handsome-looking, especially on the occasions when I shave it.
I’ve always been afraid of sharing space with other people. I almost thought of dropping out of college if I had to share a room.
Simon likes asking questions and I like explaining things.
‘Planning,’ I say, ‘makes sense when you’re solving a predefined problem or working for somebody who is defining the problem for you. But it doesn’t make sense when you’re hacking. When you’re hacking, as Paul Graham likes to note, you’re actually figuring out what the problem is, exploring the problem space. It’s through writing the code that you figure out what the code should do. You can’t plan that.’
I, meanwhile, have been struggling with my own demons — mainly procrastination. Programming is an odd task in that it requires so much mental discipline that your mind is often afraid of doing it. Worse still is the fact that it happens at a computer, usually one with an Internet connection, so there’s hardly any visible difference between actually doing your work and running off to check your email or read the news.
Life seems so incredibly overworked and overcomplicated that you pare it down to the bare essentials: eat and code. Surely you should be able to handle this without distraction. Unfortunately, it’s not so easy.
typical discussion except this time Cory Doctorow spoke up: ‘are you sure you’re not a supertaster?’ he asked. I had heard the They Might Be Giants song but never considered the possibility. I thought about it as the conversation continued and it seemed to make sense to me. [At this point I imagine a crane shot lifting up and up over the conversation at the restaurant. Fade to:] I did some research on the Internet and did the test (which formally consists of putting blue food coloring on your tongue, taking a piece of paper with a hole from a three-hole punch, placing it over the tongue, and counting the number of taste buds in it) and indeed, I am a supertaster. This hasn’t eliminated the discussions about my eating habits, but it does shift the blame.
‘Wouldn’t it be ironic if I died of pneumonoultramicroscopicsilicovolcanoconiosis?’ I asked Simon. (I chose pneumonoultramicroscopicsilicovolcanoconiosis as a spelling word in 6th grade.)
The last time I was fighting procrastination I was watching a bunch of good television shows. And as part of this, I would read Tim Goodman, the Roger Ebert of television critics. I was struck to learn one day that even Tim Goodman, whose job was to literally sit down and watch TV, could not bring himself to accomplish this task. I mean, I knew all about Structured Procrastination but surely it had its limits. How could someone procrastinate sitting down and watching TV?
The lesson I drew from this is that the human mind is such that whatever you do, it will try to avoid it. So you might as well aim high. Now the question is: what do you do with the rest of the time?
ASTW: In the article you write that, “Future archaeologists trying to understand what the Shuttle was for are going to have a mess on their hands.” What do you think future archaeologists will think when they search the text of this era for comments about the problems of future archaeologists? Do you think they’ll find such results genuinely helpful or more of a rhetorical concern? MC: I think future archaeologists will be digging through 30 meters of ice with a sharpened stick, looking for canned goods.
I generously gave Ada a 20% discount off my normal consulting fees, in exchange for 10% of the company. I hope that other entrepreneurs will follow my lead in generously giving back to the community.
Joss Whedon’s film Serenity comes out tomorrow, the first new episode of his series Firefly in years. Go see it; you won’t be disappointed. It’s almost certainly Joss’s best work yet — mixing storyline tricks with biting humor with amazing battles and beautiful surroundings. The cinematography, special effects, and music are all grand. But most importantly, it’s just really, really fun. I received a “press pass” as part of the “Serenity Blogger Bonanza” — they invited bloggers to come see a free preview if they promised to write about it.
I was recently offered a free ticket to see David Lynch give a talk at Emerson College. The talk was titled “Consciousness, Creativity, and the Brain”.
Instead, he proposed prospective founders simply look around for things that are broken with the world
I have written before about the failures of experiments to provide evidence in favor of our concepts of personality or intelligence and how despite this many continue to believe in them.
The history of the IQ test — along with a number of other supposed ways of measuring “intelligence” — is detailed in Stephen Jay Gould’s classic The Mismeasure of Man. It was originally created by Alfred Binet to find children in French schools who might need special tutoring. Binet thought that by locating and helping these students, one could make sure that everyone learned all the material. Binet composed the test by throwing together whatever questions came to mind: things about shapes and numbers and words. He just wanted to see if some kids were having trouble; he made no attempt to make sure the result was a balanced measure of “intelligence”.
This is but one example — and one chapter in Paul’s book — but all the others have similar stories. An absurd test, concocted through absurd means, completely untested, ends up becoming a powerful societal force. All the more reason for us to speak out about them.
The idea that there is something better than Lisp is apparently inconceivable to some, judging from comments on the reddit blog. The Lispers instead quickly set about trying to find the real reason behind the switch.
The more sane argued that Lisp’s value lies in being able to create new linguistic constructs, and that for something like a simple web app this isn’t necessary, since the constructs have already been built. But even this isn’t true. web.py was built pretty much from scratch and uses all sorts of “new linguistic constructs” and — even better — these constructs have syntax that goes along with them and makes them reasonably readable.
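To make the point concrete, here is a minimal sketch of the kind of “linguistic construct with syntax” Python allows — a decorator-based URL router in the spirit of web.py’s URL mapping. This is an illustrative toy, not web.py’s actual API:

```python
import re

# Registry of (compiled pattern, handler) pairs, filled in by the decorator.
routes = []

def route(pattern):
    """Decorator that maps a URL regex to the function it decorates."""
    def register(fn):
        routes.append((re.compile('^%s$' % pattern), fn))
        return fn
    return register

@route('/hello/(.*)')
def hello(name):
    return 'Hello, %s!' % (name or 'world')

def dispatch(path):
    """Find the first route whose pattern matches and call its handler."""
    for regex, fn in routes:
        m = regex.match(path)
        if m:
            return fn(*m.groups())
    return '404 Not Found'
```

The decorator is exactly the sort of readable, syntax-bearing construct the paragraph describes: the routing table is built as a side effect of ordinary function definitions, with no macro system in sight.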
The framework that seems most promising is Django and indeed we initially attempted to rewrite Reddit in it. As the most experienced Python programmer, I tried my best to help the others out. Django seemed great from the outside: a nice-looking website, intelligent and talented developers, and a seeming surplus of nice features. The developers and community are extremely helpful and responsive to patches and suggestions. And all the right goals are espoused in their philosophy documents and FAQs. Unfortunately, however, they seem completely incapable of living up to them. While Django claims that it’s “loosely coupled”, using it pretty much requires fitting your code into Django’s worldview. Django insists on executing your code itself, either through its command-line utility or a specialized server handler called with the appropriate environment variables and Python path. When you start a project, by default Django creates folders nested four levels deep for your code and while you can move around some files, I had trouble figuring out which ones and how.
The way I wrote web.py was simple: I imagined how things should work and then I made that happen.
Until then, it looks like I’m forced to do that horrible thing I’d rather not do: release one more Python web application framework into the world.
imply that time is “fungible” — that time spent watching TV can just as easily be spent writing a novel. And sadly, that’s just not the case. Time has various levels of quality.
There’s also a mental component: sometimes I feel happy and motivated and ready to work on something, but other times I feel so sad and tired I can only watch TV.
First, you have to make the best of each kind of time. And second, you have to try to make your time higher-quality.
Is there something more important you can work on? Why don’t you do that instead? Such questions are hard to face up to (eventually, if you follow this rule, you’ll have to ask yourself why you’re not working on the most important problem in the world) but each little step makes you more productive.
Having a lot of different projects gives you work for different qualities of time. Plus, you’ll have other things to work on if you get stuck or bored (and that can give your mind time to unstick yourself). It also makes you more creative. Creativity comes from applying things you learn in other fields to the field you work in.
But if you try to keep it all in your head it quickly gets overwhelming. The psychic pressure of having to remember all of it can make you crazy. The solution is again simple: write it down. Once you have a list of all the things you want to do, you can organize it by kind. For example, my list is programming, writing, thinking, errands, reading, listening, and watching (in that order).
Once you have this list, the problem becomes remembering to look at it. And the best way to remember to look at it is to make looking at it what you would do anyway. For example, I keep a stack of books on my desk, with the ones I’m currently reading on top. When I need a book to read, I just grab the top one off the stack. I do the same thing with TV/movies. Whenever I hear about a movie I should watch, I put it in a special folder on my computer. Now whenever I feel like watching TV, I just open up that folder. I’ve also thought about some more intrusive ways of doing this. For example, a web page that pops up with a list of articles in my “to read” folder whenever I try to check some weblogs.
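The folder-as-queue trick is simple enough to sketch in code. The function and folder layout here are hypothetical illustrations of the idea, not any tool the author describes using:

```python
import os

def next_in_queue(folder):
    """Return the oldest file in a 'to watch' folder, treating the folder
    as a queue: whatever has been waiting longest comes off the top.
    Returns None if the folder is empty."""
    entries = [os.path.join(folder, name) for name in os.listdir(folder)]
    files = [path for path in entries if os.path.isfile(path)]
    if not files:
        return None
    # Oldest modification time = been in the queue the longest.
    return min(files, key=os.path.getmtime)
```

The point of the design is the same as the stack of books: the decision of what to watch or read next is made once, when the item enters the queue, so there is nothing to remember later.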
Pen and paper is immediately useful in all kinds of circumstances — if you need to write something down for somebody, take notes on something, scratch down an idea, and so on. I’ve even written whole articles in the subway.1 (I used to do this, but now I just carry my computerphone everywhere. It doesn’t let me give people information physically, but it makes up for it by giving me something to read all the time (email) and pushing my notes straight into my email inbox, where I’m forced to deal with them right away.)
avoid getting interrupted. One simple way is to go somewhere interrupters can’t find you.
Time when you’re hungry or tired or twitchy is low-quality time. Improving it is simple: eat, sleep, and exercise. Yet I somehow manage to screw up even this. I don’t like going to get food, so I’ll often work right through being hungry and end up so tired out that I can’t bring myself to go get food.2
Easing mental constraints is much harder. One thing that helps is having friends who are cheerful. For example, I always find myself much more inclined to work after talking to Paul Graham or Dan Connolly — they just radiate energy.
But the real question is: what’s going on inside your head? I’ve spent a bunch of time trying to explore this and the best way I can describe it is that your brain puts up a sort of mental force field around a task. Ever play with two magnets? If you orient the magnets properly and try to push them towards each other, they’ll repel fiercely. As you move them around, you can sort of feel out the edges of the magnetic field. And as you try to bring the magnets together, the field will push you back or off in another direction. The mental block seems to work in the same way.
whether the task is hard and whether it’s assigned.

Hard problems

Break it down

The first kind of hard problem is the problem that’s too big. Say you want to build a recipe organizing program. Nobody can really just sit down and build a recipe organizer. That’s a goal, not a task. A task is a specific concrete step you can take towards your goal. A good first task might be something like “draw a mockup of the screen that displays a recipe”. Now that’s something you can do.4
The important thing is to have something done right away. Once you have something, you can judge it more accurately and understand the problem better. It’s also much easier to improve something that already exists than to work from a blank page. If your paragraph goes well, then maybe it can grow into an essay and then into a book, little by little, a perfectly reasonable piece of writing all the way through.

Think about it

Often the key to solving a hard problem will be getting some piece of inspiration. If you don’t know much about the field, you should obviously start by researching it — see how other people did things, get a sense of the terrain.
Assigned problems are problems you’re told to work on. Numerous psychology experiments have found that when you try to “incentivize” people to do something, they’re less likely to do it and do a worse job. External incentives, like rewards and punishments, kill what psychologists call your “intrinsic motivation” — your natural interest in the problem. (This is one of the most thoroughly replicated findings of social psychology — over 70 studies have found that rewards undermine interest in the task.)5 People’s heads seem to have a deep avoidance of being told what to do.6
This presents a rather obvious solution: if you want to work on X, tell yourself to do Y. Unfortunately, it’s sort of difficult to trick yourself intentionally, because you know you’re doing it.7 So you’ve got to be sneaky about it. One way is to get someone else to assign something to you. The most famous instance of this is grad students who are required to write a dissertation, a monumentally difficult task that they need to do to graduate. And so, to avoid doing this, grad students end up doing all sorts of other hard stuff. The task has to both seem important (you have to do this to graduate!) and big (hundreds of pages of your best work!) but not actually be so important that putting it off is going to be a disaster.
So the secret to getting yourself to do something is not to convince yourself you have to do it, but to convince yourself that it’s fun. And if it isn’t, then you need to make it fun. I first got serious about this when I had to write essays for college. Writing essays isn’t a particularly hard task, but it sure is assigned. Who would voluntarily write a couple pages connecting the observations of two random books? So I started making the essays into my own little jokes. For one, I decided to write each paragraph in its own little style, trying my best to imitate various forms of speech.
solve the meta-problem. Instead of building a web application, try building a web application framework with this as the example app. Not only will the task be more enjoyable, but the result will probably be more useful.
There are a lot of myths about productivity — that time is fungible, that focusing is good, that bribing yourself is effective, that hard work is unpleasant, that procrastinating is unnatural — but they all have a common theme: a conception of real work as something that goes against your natural inclinations.
to listen to your body.
To eat when you’re hungry, to sleep when you’re tired, to take a break when you’re bored, to work on projects that seem fun and interesting. It seems all too simple.
If you want to learn more about the psychology of motivation, there is nothing better than Alfie Kohn. He’s written many articles on the subject and an entire book, Punished by Rewards, which I highly recommend.
While the terminology I use here (“next concrete step”) is derived from David Allen’s Getting Things Done, a lot of the principles here are (perhaps even unconsciously) applied in Extreme Programming (XP). Extreme Programming is presented as this system for keeping programs organized, but I find that a lot of it is actually good advice for avoiding procrastination. For example, pair programming automatically spreads the mental weight of the task across two people as well as giving people something useful to do during lower-quality time. Breaking a project down into concrete steps is another key part of XP, as is getting something that works done right away and improving on it (“Simplify it” infra). And these are just the things that aren’t programming-specific.
When I woke up later, there was no brass rod, nor was the back of my head soft. Somehow … my brain had invented false reasons as to why I shouldn’t [observe my dreams] any more. (Surely You’re Joking, Mr. Feynman!, 50) Your brain is a lot more powerful than you are.
My new Python web application library, web.py, is out. An essay of mine about productivity has made the rounds. I’m not sure it’s finished, but lots of people have already read it and loved it, so I thought I might as well tell you about it. On a similar note, over the winter break I wrote a program called arcget to download web sites from the Internet Archive’s Wayback Machine. There are still a few more changes I’d like to make, but I’m unlikely to get to them anytime soon.
I’ve decided to stop being embarrassed. I’m saying goodbye to the whole thing: that growing suspicion as the moment approaches, that sense of realization when it comes, that rush of blood reddening your cheeks, that brief but powerful desire to jump out of your own skin, and then finally that attempted big fake smile trying to cover it all. Sure, it was fun for a while, but I think it’s outlived its usefulness. It’s time for embarrassment to go.
Regret
But actually, I think it’s going to be frustration. It’s not discussed much, but frustration is really quite distracting. You’re trying to solve some difficult problem but it’s just not working. Instead of taking a moment to try and think of the solution, you just keep getting more and more frustrated until you start jumping up and down and smashing various things. So not only do you waste time jumping, but you also have to pay to replace the stuff you smashed. It’s really a net loss.
the white moderate, who is more devoted to “order” than to justice; who prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice; who constantly says: “I agree with you in the goal you seek, but I cannot agree with your methods of direct action”; who paternalistically believes he can set the timetable for another man’s freedom; who lives by a mythical concept of time and who constantly advises the Negro to wait for a “more convenient season.”
This weekend I built a little site called (for now, at least) Simple Amazon. I was annoyed with how slow it was to search Amazon — first you have to load their complicated front page, type in your search in the tiny hidden box, then click the link to filter it just for books and so on. With Simple Amazon, you just go to a URL like: http://books.theinfo.org/lisp and you get a listing of lisp books superfast with covers and links to price checks. You can also use links like: http://books.theinfo.org/go/0262011530 to provide shorter links to Amazon.
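The URL scheme described above is easy to sketch. The `/dp/` product path is Amazon’s real one; the search-URL format below is an assumption for illustration, not Simple Amazon’s actual code:

```python
import urllib.parse

def simple_amazon(path):
    """Map a Simple Amazon path to a full Amazon URL.

    /go/<ISBN>  -> a short link straight to the product page
    /<query>    -> a book search for that term (URL format assumed)
    """
    if path.startswith('/go/'):
        isbn = path[len('/go/'):]
        return 'http://www.amazon.com/dp/' + isbn
    query = urllib.parse.quote(path.lstrip('/'))
    return 'http://www.amazon.com/s?k=%s&i=stripbooks' % query
```

The appeal of the design is that the entire interface is the URL bar: no front page to load, no search box to find, just type and go.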
There’s just one problem: I enjoy deep discussions of punctuation and other trivialities. I could try to justify this taste — some argument that we should think about everything we do so that we don’t do everything we think about — but why bother? Do I have to justify enjoying certain television shows as well? At some point, isn’t pure enjoyment just enough? After all, time isn’t fungible. But of course, the same drive that leads me to question punctuation leads me to question the drive itself, and thus this essay. What is “this drive”? It’s the tendency to not simply accept things as they are but to want to think about them, to understand them. To not be content to simply feel sad but to ask what sadness means. To not just get a bus pass but to think about the economic reasons getting a bus pass makes sense. I call this tendency the intellectual.
They don’t just love thinking, they love language. They love its tricks and intricacies, its games, the way it gets written down, the books it gets written into, the libraries those books are in, and the typography those books use.
What good is thinking if you can’t share?
They only seem pretentious because discussing such things is so bizarre.
A 1979 study of people in caves suggested that contact with other people affected when we fell asleep and a 1985 survey of daily activities in 12 countries led to another clue: Americans were much more often awake around midnight than people in any other country and the only distinguishing factor seemed to be late-night television. Perhaps, Roberts thought, watching television could influence sleeping rhythm? The most popular late-night television show at the time the study was done was The Tonight Show, with its person-heavy monologue. So one morning Roberts decided to watch Jay Leno and David Letterman’s monologues. It seemed to have no impact; an otherwise normal day. But the next morning he woke up feeling great.
So he tried adjusting the show and television set, finding that, despite his love for The Simpsons, life-size human faces at about a meter away for 30 minutes worked best. I have to concede, at this point, that the results sound fairly absurd and unbelievable. But reading Roberts’s papers on the subject, what’s striking is how careful he is about the subject. An actual psychologist, publishing in psychology journals, he’s taken into account every objection. The results cannot be, as one would first expect, simply self-induced by his own wishes. For one, Roberts took quantitative notes, so his memory couldn’t be playing tricks on him.
So what’s going on? If you look at faces in the morning, you feel worse 12 hours later but better 24 hours later. But the effect is muted if you see faces in the evening. Roberts theorizes that your body is using the faces to set its inner mood clock, which works similarly to its inner tiredness clock. You want to be happy during the day (as opposed to the night), but how do you tell when the day starts? The body assumes that you gab with people when you wake up, so it uses seeing other faces as a way to synchronize the clock. Of course, you want to make sure you’ve got the timing right on the nighttime side as well, so if you see faces late in the evening it tries to tweak the clock then as well. This is consistent with what we know from other sources about depression. Depression is highly correlated with insomnia as well as social isolation and is often treated by disturbing sleep. The Amish, who eat breakfast communally and go to bed very early, have 1/100th the rate of depression as other Americans. And depression rates increased by 10 times in the 1900s, around the same time radio/TV, electric lighting, and other such things became common. I’m hoping to get a chance to test this myself, but it sure appears that one easy way to improve mood is to look at faces in the morning.
Our body’s weight, he says, is regulated by a “set point”, like the setting on a thermostat. If our weight is lower than our internal set point, we feel hungry; higher, we feel full. So if you want to weigh less, all you need to do is lower your body’s set point. Your body will stop being hungry, you’ll burn the fat you already have, and your weight will go down.
On the other hand, if we eat new foods or foods with little taste, our brain assumes we’re eating them because there’s nothing else around and the set point is lowered.
And thus, the way to lower your set point: eat foods with no taste. Of course, they have to have calories as well, so Roberts’s preferred suggestion is extra-light olive oil (ELOO), which is basically just oil with absolutely no taste. Your body gets the calories but it doesn’t get the taste, so the set point goes lower every time you eat it. It all seems crazy, but Roberts is sort of a crazy guy, so he decided to test it. He started taking a couple hundred tasteless calories every day. Almost immediately, he began feeling less hungry.
Of course, there’s no reason my particular anecdotes should be more convincing than any others, but they are convincing to me, so I’d like to move from discussing the diet to discussing its implications. Weight and trying to lose it is a huge part of American culture and a system that makes doing it trivially easy will have far-reaching effects.
Two years ago this summer I read a book that changed the entire way I see the world. I had been researching various topics — law, politics, the media — and become more and more convinced that something was seriously wrong.
Then, one night, I watched the film Manufacturing Consent: Noam Chomsky and the Media
It’s taken me two years to write about this experience, not without reason. One terrifying side effect of learning the world isn’t the way you think is that it leaves you all alone. And when you try to describe your new worldview to people, it either comes out sounding unsurprising (“yeah, sure, everyone knows the media’s got problems”) or like pure lunacy and people slowly back away. Ever since then, I’ve realized that I need to spend my life working to fix the shocking brokenness I’d discovered. And the best way to do that, I concluded, was to try to share what I’d discovered with others.
But even worse than these policy defeats are the conceptual defeats that underlie them. As cognitive scientist George Lakoff has argued, people think about politics through conceptual moral frames, and the conservatives have been masterful at creating frames for their policies. If the left wants to fight back, they’re going to have to create frames of their own.
The Conservative Nanny State: How the Wealthy Use the Government to Stay Rich and Get Richer, which takes decades of conservative frames and stands them on their head. (Disclosure: I liked the book so much I converted it to HTML for them and was sent a free paperback copy in return.) His most fundamental point is that conservatives are not generally in favor of market outcomes. For far too long, he argues, the left has been content with the notion that conservatives want the market to do what it pleases while liberals want some government intervention to protect people from its excesses.
But when it comes to the professional class — doctors, lawyers, economists, journalists, and the like — oh no!, the conservative nanny state does everything it can (through licensing and immigration policy) to keep foreign workers out. This doesn’t just help the doctors; it hurts all of us, because it means we have to pay more for health care.
In his classic A Mathematician’s Apology, published 65 years ago, the great mathematician G. H. Hardy wrote that “A man who sets out to justify his existence and his activities” has only one real defense, namely that “I do what I do because it is the one and only thing that I can do at all well.” “I am not suggesting,” he added, “that this is a defence which can be made by most people, since most people can do nothing at all well. But it is impregnable when it can be made without absurdity … If a man has any genuine talent he should be ready to make almost any sacrifice in order to cultivate it to the full.”
But now I find myself faced with this dilemma: it is those other areas I would much prefer to work in.
Instead, it seems plausible that talent is made through practice, that those who are good batters are that way after spending enormous quantities of time batting as a kid.3 Mozart, for example, was the son of “one of Europe’s leading musical teachers”4 and said teacher began music instruction at age three. While I am plainly no Mozart, several similarities do seem apparent. My father had a computer programming company and he began showing me how to use the computer as far back as I can remember. The extreme conclusion from the theory that there is no innate talent is that there is no difference between people and thus, as much as possible, we should get people to do the most important tasks (writing, as opposed to cricket, let’s say). But in fact this does not follow. Learning is like compound interest. A little bit of knowledge makes it easier to pick up more. Knowing what addition is and how to do it, you can then read a wide variety of things that use addition, thus knowing even more and being able to use that knowledge in a similar manner.5
I’ve always thought that this was the reason kids (or maybe just me) especially disliked history. Every other field — biology, math, art — had at least some connection to the present and thus kids had some foundational knowledge to build on. But history? We simply weren’t there and thus knew absolutely nothing of it.
Many people, of course, are uninterested in such things precisely because they aren’t very good at them. There’s nothing like repeated failures to turn you away from an activity. Perhaps this is another reason to start young — young children might be less stung by failure, as little is expected from them.
Ambitious people want to leave legacies, but what sort of legacies do they want to leave? The traditional criterion is that your importance is measured by the effect of what you do.
The real question is not what effect your work had, but what things would be like had you never done it.
The idea being that major discoveries were sure to follow soon and that if I picked that field I could be the one to make them. By my test, such a thing would leave a poor legacy. (For what it’s worth, I don’t think either person’s works fall into this category; that is to say, their reputation is still deserved even by these standards.) Even worse, you’d know it. Presumably Darwin and Newton didn’t begin their investigations because they thought the field was “hot”.
“Not a single number in this graph,” he says, “is in dispute.” This is the inconvenient truth: unless we change, we will destroy the environment that sustains our species.
discussion focused on whether there is a problem in the first place, they have effectively silenced the debate over what to do about it.”† So is it any wonder that conservatives want to do the same thing again? And again? And again?
This wasn’t just a media debate about the existence of global warming or the merits of internment, this was a full-on media endorsement of racism, which the American Heritage Dictionary defines as “The belief that race accounts for differences in human character or ability and that a particular race is superior to others.”
“Why can a publisher sell this book? Because a huge number of well-meaning whites fear that they are closet racists, and this book tells them they are not. It’s going to make them feel better about things they already think but do not know how to say.”† That’s certainly what The Bell Curve did, replacing a debate over how to improve black achievement with one about whether such improvement was even possible.
“I believe this book is a fraud, that its authors must have known it was a fraud when they were writing it, and that Charles Murray must still know it’s a fraud as he goes around defending it. … After careful reading, I cannot believe its authors were not acutely aware of … how they were distorting the material they did include.” (WLM?, 100)
Lott’s book More Guns, Less Crime claimed that his scientific studies had found that passing laws to allow people to carry concealed weapons actually lowered crime rates. As usual, the evidence melted away upon investigation, but Lott’s errors were more serious than most. Not content to simply distort the data, Lott fabricated an entire study which he claimed showed that in 97% of cases, simply brandishing a gun would cause an attacker to flee.
And what happened to Lott? Nothing. Lott remains a “resident scholar” at the American Enterprise Institute, his book continues to sell well, his op-ed pieces are still published in major papers, and he gives talks around the country.† For the right-wing scholar, even outright fraud is no serious obstacle.
In the nights and weekends I read and think and write. I’m working on a large book project, which I expect to take years, and which I don’t discuss much on the Web.
I recently had to sit through a performance of Bach’s Well-Tempered Clavier at the Chicago Symphony Orchestra (it was the conductor’s farewell concert). At first it was simply boring, but as I listened more carefully, it grew increasingly painful, until it became excruciatingly so. I literally began tearing my hair out and trying to cut my skin with my nails (there were large red marks when the performance was finally over). The pianist, I was certain, kept flubbing the notes and getting the timing off. But few around me seemed to agree. “Well, he certainly plays it differently from Gould,” was the most they could say.
When I listen to good modern music, it takes my heart in its hands and plays with it as it pleases — makes me soar, makes me sad, excited, and mad. But when I listen to classical music, at most it simply occupies my brain for a while. Is this simply a flaw in my perception or has music really improved? I think it’s possible to argue that music is actually getting better. As humans, we clearly share a number of genetically-encoded similarities, perhaps with some variation. For example, we almost all have two eyes, although in different shapes, sizes, and colors. Imagine that we are similarly endowed with some shared sense of musical appreciation (or, put another way, emotional susceptibility). We all fall for the same musical things, again with some variation. If this is the case (and while I can’t really prove it, it seems at least plausible to me that it is), then there would indeed be objective standards for measuring music: better music would be more appreciated by the “average person” or the majority of people or some such. And if there are objective standards for measuring music, then music can get better.
George Lakoff is a prominent cognitive scientist whose central insight (which is not to say that the idea originates with him) is that we can learn about the structure of our thoughts by looking carefully at the words we use to express them. For example, we think of time as a line, as you can see through phrases like “time line”, “looking forward”, “further in the past”, etc. Similarly, thinking is thought of as a kind of seeing: “do you see what I mean?”, “pulled the wool over your eyes”, “as you can see from the book”, “his talk was unclear”, “that sentence is opaque”, etc.
Where Mathematics Comes From,
After the election of Bush2, Lakoff began talking about how Republicans were better at “framing”, or using language to get people to agree with them, than Democrats. Lakoff argues that the process goes both ways: language causes your mind to think of certain concepts which create certain pathways in your brain. Thus Republicans, he said, through massive repetition of certain phrases, were literally changing the brains of the electorate to be more favorable to them. (“If this sounds a bit scary,” he writes, “it should. This is a scary time.”)
He rushed out the slender book Don’t Think of an Elephant, a cobbled-together guide on his basic ideas and how progressives could use them. The book stayed on the New York Times bestseller list for weeks. Now Lakoff is back with a more studied work, Whose Freedom?, which tries to focus in more detail on the differing views of one particular concept: freedom. Lakoff starts the book by noting that in his 2004 speech at the Republican convention, Bush used “freedom”, “free”, or “liberty” once every forty-three words. Most progressives think of this simply as a stunt — using feel-good symbols like flag and words like freedom to distract from the real issues. But Lakoff argues something much deeper is going on: Bush is trying to change the meaning of freedom itself.
When you look at something you’re working on, no matter what it is, you can’t help but see past the actual thing to the ideas that inspired it, your plans for extending it, the emotions you’ve tied to it. But when others look at it, all they see is a piece of junk.
But when you release late, after everything has been carefully polished, you can share something of genuine quality. Apple, for example, sometimes releases stupid stuff, but it always looks good.
This is why “release early, release often” works in “open source”: you’re releasing to a community of insiders. Programmers know what it’s like to write programs and they don’t mind using things that are unpolished. They can see what you’re going to do next and maybe help you get there. The public isn’t like that. Don’t treat them like they are.
“soft” sciences are, in fact, harder. Humans are far more complicated than atoms; trying to figure out how they work is a great deal more difficult than coming up with the rules of mechanics. As a result, the social sciences are less well developed, which means there’s less to study, which means the fields are easier to learn.
“Centrism” is the tendency to see two different beliefs and attempt to split the difference between them. The reason why it’s a bad idea should be obvious: truth is independent of our beliefs. No less than any other partisans, centrists ignore evidence in favor of their predetermined ideology.
Together, these reasons combine to make centrism an especially attractive place to be in American politics. But the disease is far from limited to politics. Journalists frequently suggest the truth lies between the two opposing sources they’ve quoted. Academics try to distance themselves from policy positions proposed by either party. And, perhaps worst of all, scientists try to split the difference between two competing theories. Unfortunately for them, neither the truth nor the public necessarily lies somewhere in the middle. Fortunately for them, more valuable rewards do.
Exercise for the reader: What’s the attraction of “contrarianism”, the ideology subscribed to by online magazines like Slate?
“I don’t even know who you are anymore,” she sobs. “You’re starting to scare me.” What’s frightening, it would seem, is that people aren’t the way we expected.
Having the world be off is frightening.
The fabric of my reality was being torn — something clearly impossible had happened: a time had disappeared off a train schedule. Things weren’t working the way I expected.
But my larger point is that tears in the way we think things work are scary. If things are this bad when a piece of paper doesn’t say what we expect, it’s not surprising that it’s worse when people we know don’t behave the way we thought.
I say we here, but as I mentioned at the start, I seem to be a bit alone on this. One possibility is that I’m hypersensitive to such emotions. Other people might simply feel a prick at such a scheduling anomaly, but I feel it as full-blown fear. Another (more flattering) possibility is that I’m more perceptive about people than others; since others don’t notice the duplicity, they don’t feel the associated fear. The latter makes some sense to me, as the things that make me scared of people are often very subtle, and others don’t seem to recognize them at all, even when I ask them about it specifically.
I often think that the world needs to be a lot more organized.
The cumulative knowledge of science is one of our most valuable cultural products, yet it can only be found scattered across thousands of short articles in hundreds of different journals.
One can, of course, make the reverse argument: since there is so much need for such organization projects, they must be pretty impossible.
The Internet is the first medium to make such projects of mass collaboration possible. Certainly numerous people send quotes to Oxford for compilation in the Oxford English Dictionary, but a full-time staff is necessary to sort and edit these notes to build the actual book (not to mention all the other work that must be done). On the Internet, however, the entire job — collection, summarization, organization, and editing — can be done in spare time by mutual strangers.
Napster. Within only months, almost as a by-product, the world created the most complete library of music and music catalog data ever seen.
I’m not the first to suggest that the Internet could be used for bringing users together to build grand databases. The most famous example is the Semantic Web project (where, in full disclosure, I worked for several years). The project, spearheaded by Tim Berners-Lee, inventor of the Web, proposed to extend the working model of the Web to more structured data, so that instead of simply publishing text web pages, users could publish their own databases, which could be aggregated by search engines like Google into major resources.
The confrontation symbolizes the (at least imagined) standard debate on the subject, which Mark Pilgrim termed million dollar markup versus million dollar code. Berners-Lee’s W3C, the supposed proponent of million dollar markup, argues that users should publish documents that state in special languages that computers can process exactly what they want to say. Meanwhile Google, the supposed proponent of million dollar code, thinks this is an impractical fantasy, and that the only way forward is to write more advanced software to try to extract the meaning from the messes that users will inevitably create.^1
But yesterday I suggested what might be thought of as a third way out; one Pilgrim might call million dollar users. Both the code and the markup positions make the assumption that users will be publishing their own work on their own websites and thus we’ll need some way of reconciling it. But Wikipedia points to a different model, where all the users come to one website, where the interface for inputting data in the proper format is clear and unambiguous, and the users can work together to resolve any conflicts that may come up.
and MusicBrainz followed this model and were Semantic Web case studies. (Full disclosure: I worked on the Semantic Web portions of MusicBrainz.) Perhaps the reason is simply that both sides — W3C and Google — have the existing Web as the foundation for their work, so it’s not surprising that they assume future work will follow from the same basic model. One possible criticism of the million dollar users proposal is that it’s somehow less free than the individualist approach. One site will end up being in charge of all the data and thus will be able to control its formation. This is not ideal, certainly, but if the data is made available under a free license it’s no worse than things are now with free software.
By contrast, Wikipedia has seen explosive growth, Amazon.com has become the premier site for product information, and when people these days talk about user-generated content, they don’t even consider the individualized sense that the W3C and Google assume. Perhaps it’s time to try the third way out.
Shockingly, losing weight has to be one of the easiest things I’ve ever done. I simply don’t eat unless I’m really hungry and then I eat as little as possible (a couple crackers, for example).
(Furthermore, there’s some evidence that not eating significantly prolongs lifespan.) The one thing that really did surprise me is that while I predicted there would be strong social pressures to lose weight, in reality all the pressure seemed to go the other way. Friends and acquaintances urge me to eat more, doctors think I’m sick, family members suggest I have an eating disorder.
But in my darker moments, I wonder if part of it is selfish. The extraordinarily thin people encouraging me to eat more, I darkly wonder, don’t want me to be like them. The people who need to lose some weight don’t like the example of my success. I don’t like thinking this way, and I have no evidence for it, but it’s hard to resist.
The first thing I noticed was the burping. When losing weight, it seems you burp quite a bit. But even worse is the feeling of wanting to burp. The olive oil, it seems, has inflated my stomach with gas, making me desperately want to burp, but I can’t. In fact, it was so painful that I decided to stop taking the olive oil. I still ate less — it seems like once the olive oil lowered my set point, it was easy to keep things off from there.
She [lost so much weight that] was unrecognizable to anyone who had known her before, and even to herself. “I went to bars to see if I could get picked up—and I did,” she said. “I always said no,” she quickly added, laughing. “But I did it anyway.”
The changes weren’t just physical, though. She had slowly found herself to have a profound and unfamiliar sense of willpower over food.
The newfound willpower allowed me to be more conscious about my diet. I started thinking about what foods I wanted to eat and researching the topic of nutrition. I read Walter Willett’s book about the results of his epidemiological nutrition studies and began looking at the labels of boxes I ate. I began ordering different things at restaurants when I did eat and buying different things at the supermarket. But most of all I found myself eating less. When this proved not to be enough, I found myself exercising.
Perhaps it’s natural, when doing something so greedy and practical as a startup, to pine for the idealized world of academia. Its image as a place in an idyllic location filled with smart people has always been attractive; even more so with the sense that by being there one can get smarter simply through osmosis. People describe a place of continual geekiness, of throwing chemicals into the river and building robots in free time.
It’s not that I don’t enjoy my work; it’s just that I feel like I’m getting dumber doing it. Or, at least, that I’m not getting as smart as I should. This academaphilia isn’t new. It’s clearly what drove magazines like Lingua Franca and makes saying obscure names and words so impressive.
And yet, it’s hardly paradise at all. When I was actually there I was turned off by the conformism, the lack of interest in real work, the politics, the pointless assignments. My lunch date is a grad student and he tells me of the internecine squabbles, the overspecialization, the abandonment, the insecurity. I go back to the W3C’s offices and stand at the balcony. Down below, Tim Berners-Lee discusses details of a project with a group of kids who presumably took this on as summer job. I was once one of those kids, working there, and I think about why I left and why I miss it. I marvel at the pointlessness, the impracticality, the
But even just visiting it, the facts are plain. It doesn’t exist, it never has. I’m nostalgic for a place that never existed.
Calories are the basis of eating; they’re a measure of the amount of energy a food provides. Your body gets calories from the food you eat and spends them to keep you moving. If you get more calories than you spend, your body stores the excess as fat. If you spend more than you get, your body burns some of the fat it’s stored up (for just such an occasion).
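The bookkeeping described above is simple enough to sketch in a few lines of code. This is only an illustration of the arithmetic, not part of the original text; the 3,500-calories-per-pound-of-fat figure is a common rule of thumb I'm assuming here, and real bodies (as the surrounding posts note) adjust hunger and energy to resist it.

```python
# Rough sketch of the calorie-balance arithmetic: calories in minus
# calories out, accumulated over time, converted to pounds of fat.
# The 3500 kcal/lb figure is an assumed rule of thumb, not exact.

KCAL_PER_POUND_OF_FAT = 3500

def weight_change(calories_in: float, calories_out: float, days: int) -> float:
    """Estimated pounds gained (+) or lost (-) over `days`,
    given average daily calories consumed and burned."""
    surplus = (calories_in - calories_out) * days
    return surplus / KCAL_PER_POUND_OF_FAT

# A 500 kcal daily deficit for a week works out to about one pound lost:
print(weight_change(2000, 2500, 7))  # -> -1.0
```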
Eat less, exercise more.
So just as your body seems to make you hungry when you’re losing weight and full when you’re gaining it, it seems to make you tired when you’ve burned too many calories and antsy when you haven’t burned enough.
Fats have gotten a bad rap, most likely because they share a name with body fat but also, some argue, because they seem lower-class.
Fats also have effects on cholesterol, a key building block for your body’s cells. There are two types of cholesterol — known informally as good and bad cholesterol. Good cholesterol consists of tightly-packed proteins of cholesterol in your blood stream, allowing cholesterol to be efficiently transported where it needs to go. Bad cholesterol is less densely packed and its cholesterol ends up sticking in the walls of arteries, clogging them and leading to heart disease. Fats have varying effects on cholesterol. Saturated fats should be avoided: they increase levels of bad cholesterol (although they also increase good cholesterol). Unsaturated fats, however, whether monounsaturated or polyunsaturated, are good: they lower bad cholesterol and raise good cholesterol. Trans fats are just the reverse: they increase bad cholesterol levels and decrease good ones; it’s recommended they be avoided as much as possible.
The goal, remember, is to avoid trans fats whenever possible, avoid saturated fats, and go for unsaturated fats. Carbohydrates are another source of calories, the kind found in white wheat products, like bread and pasta. Sugars are a form of carbohydrate and, in fact, the body breaks down other carbohydrates into simple sugars. The problem with sugars is that they go directly into the bloodstream, spiking your blood sugar level. This in itself is unhealthy, but it’s even worse when the level inevitably crashes and you begin to feel hungry again and eat even more. The exception is with fiber, which the body can’t break down. Foods made from whole wheat are high in fiber, so your body takes longer to digest them and the sugar intake is spread out over a longer period of time. Thus while refined carbohydrates should generally be avoided, whole wheat products (along with fruits and vegetables) include additional nutrients, have a gentler impact on blood sugar, and are the foundation of a healthy diet.
Protein is a similar essential nutrient, allowing the body to make essential components of muscle and hair and so on. If you don’t get enough (about 9 grams of protein a day for every 20 pounds of body weight), the body begins breaking down its tissues. (Eating far too much protein, however, as people on low-carb diets do, can be unhealthy, as it leaches calcium from your bones.)
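To make the rule of thumb above concrete — roughly 9 grams of protein per day for every 20 pounds of body weight — here is a one-function sketch (mine, not the author's):

```python
# Willett-style rule of thumb quoted above: ~9 g of protein per day
# for every 20 lbs of body weight. Illustration only, not dietary advice.

def daily_protein_grams(weight_lbs: float) -> float:
    """Rough daily protein requirement in grams for a given body weight."""
    return weight_lbs / 20 * 9

# A 160-lb person would need roughly 72 g of protein a day:
print(daily_protein_grams(160))  # -> 72.0
```

At five grams per slice, that 160-lb person's requirement would be about fourteen slices of whole wheat bread — which is why a variety of protein sources is the practical answer.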
While protein can be found in animal products, whole wheat bread is also an excellent source — a single slice contains five grams of protein. Unfortunately, the proteins found in grains and vegetables are incomplete, so you either need to get some (complete) animal protein or eat a variety of them. Calcium is necessary for building bones and teeth, maintaining the heart’s rhythm, and more.
Many other foods are fortified with calcium and some vegetables (kale and collard greens, dried beans, and legumes) are also a good source. Vitamins do all sorts of good things, as well as warding off diseases like scurvy and rickets. They’re often added to juices and cereals and can be taken by themselves in a daily multivitamin as well.
By serving as a check on repetitious work, your blog also enables you to conserve your energy. It also encourages you to capture ‘fringe-thoughts’: various ideas which may be byproducts of everyday life, snatches of conversation overheard in the street, or, for that matter, dreams. Once noted, these may lead to more systematic thinking, as well as lend intellectual relevance to more directed experience.
Actually, he called it a “file” instead of a blog, but the point remains the same: becoming a scientific thinker requires practice and writing is a powerful aid to reflection. So that’s what this blog is. I write here about thoughts I have, things I’m working on, stuff I’ve read, experiences I’ve had, and so on. Whenever a thought crystalizes in my head, I type it up and post it here. I don’t read over it, I don’t show it to anyone, and I don’t edit it — I just post it. I don’t consider this writing, I consider this thinking. I like sharing my thoughts and I like hearing yours and I like practicing expressing ideas, but fundamentally this blog is not for you, it’s for me. I hope that you enjoy it.
Then you simply punch the information into a computer and figure out what foods kill people. The results, as described in the associated book Eat, Drink, and Be Healthy, provide some simple tips for living longer.
Replace white bread with whole wheat. White bread is simply whole wheat bread shorn of all its nutritional value. Whole wheat has more nutrients, more protein, etc. Plus, white bread is metabolized quickly by your body, so it leads to huge spikes in blood sugar, which have unhealthy effects on your body and make you hungry after you crash; whole wheat bread is digested more evenly. Replace burgers with chicken. Red meat, when grilled, can produce potential carcinogens, whereas white meat is overall healthier. Chicken contains less saturated (bad) fat, while red meat may give you too much iron.
I’m about the fussiest eater I know and even I can handle these changes. Whole wheat bread even tastes better than white. Disclaimer: I know nothing about nutrition; this is simply what I took away from reading one book.
Finally, on the hypotenuse, were the cyclists, just standing and chatting. By the time I’d made my way over to them, I noticed they were biking around in a circle together, so I joined in, smiling at the sight. We all biked around for a while until someone shouted “Mike, go right!” Mike did, exiting onto the street, and the crowd followed. For the most part, I tried to stay in the middle of the pack. Bikes rode on all sides of me, while cars were stopped in their tracks. Frustrated, many of them pounded on their horns. In response, the crowd imitated their honk, except shouting “woohoo” as the noise, as if the cars were cheering them on.
There certainly were a lot of bikes, and together we formed quite an imposing swarm, but despite engaging in a coordinated activity, we related mostly as individuals, most people just talking to the friends they’d come with. For the most part, there just wasn’t much to talk about. I was riding my weirdo bike, so a lot of people said something like “sweet bike”, but the conversation didn’t go far beyond “did you build it?” Bikes were, perhaps, the only thing we had in common. And even casual smalltalk seems exceedingly awkward in such a situation.
American education, it has been said, is about learning how to be alone in a crowd. Perhaps it’s not surprising, then, that it also teaches how to be alone at a mass protest.
The group, however, did make some attempt at community. When some began falling behind, they’d shout “mass up!” encouraging the others to slow down and catch up with the group.
The massers, for the most part, embraced the rain, throwing their hands in the air and cheering. And, since I had nothing to worry about getting wet (except the bike, which had already been left out in the rain one too many times), I did the same. It was liberating.
“What book have you read recently?” will cause the majority of Americans who don’t read to flail, while at best only getting an off-the-cuff garbled summary of a random book. “What’s something cool you’ve learned recently?” puts the person on the spot and inevitably leads to hemming and hawing and then something not all that cool. I propose instead that one ask “What have you been thinking about lately?” First, the question is extremely open-ended.
Spending time with all these people was amazing fun — they’re all incredibly bright, enthusiastic and, most shockingly, completely dedicated to a cause greater than themselves. At most “technology” conferences I’ve been to, the participants generally talk about technology for its own sake. If use ever gets discussed, it’s only about using it to make vast sums of money. But at Wikimania, the primary concern was doing the most good for the world, with technology as the tool to help us get there. It was an incredible gust of fresh air, one that knocked me off my feet.
Finally, the Wikimedia Foundation Board seems to have devolved into inaction and infighting. Just four people have been actually hired by the Foundation, and even they seem unsure of their role in a largely-volunteer community. Little about this group — which, quite literally, controls Wikipedia — is known by the public. Even when they were talking to dedicated Wikipedians at the conference, they put a public face on things, saying little more than “don’t you folks worry, we’ll straighten everything out”.
Organizational structures are far from neutral: whose input gets included decides what actions get taken, the positions that get filled decide what things get focused on, the vision at the top sets the path that will be followed. I worry that Wikipedia, as we know it, might not last. That its feisty democracy might ossify into staid bureaucracy, that its innovation might stagnate into conservatism, that its growth might slow to stasis. Were such things to happen, I know I could not just stand by and watch the tragedy. Wikipedia is just too important — both as a resource and as a model — to see fail.
I was heartened to discover research by Seth Anthony which, independently and more formally, came to largely the same conclusions. As he explained on Reddit: “Only about 10% of all edits on Wikipedia actually add substantive content.
Larry Sanger famously suggested that Wikipedia must jettison its anti-elitism so that experts could feel more comfortable contributing. I think the real solution is the opposite: Wikipedians must jettison their elitism and welcome the newbie masses as genuine contributors to the project, as people to respect, not filter out.
But, once again, investigation shows the picture to be far more interesting: translation, reorganization, and plagiarism. Exciting stuff!
No, the reason Wikipedia works is because of the community, a group of people that took the project as their own and threw themselves into making it succeed.
Why does anyone do such a thing? It’s not particularly fascinating work, they’re not being paid to do it, and nobody in charge asked them to volunteer. They do it because they care about the site enough to feel responsible. They get upset when someone tries to mess it up. It’s hard to imagine anyone feeling this way about Britannica. There are people who love that encyclopedia, but have any of them shown up at their offices offering to help out? It’s hard even to imagine. Average people just don’t feel responsible for Britannica; there are professionals to do that.
But what’s less well-known is that it’s also the site that anyone can run. The vandals aren’t stopped because someone is in charge of stopping them; it’s simply something people started doing.
This is so unusual, we don’t even have a word for it. It’s tempting to say “democracy”, but that’s woefully inadequate.
This is so radically different that it’s tempting to see it as a mistake: Sure, perhaps things have worked so far on this model, but when the real problems hit, things are going to have to change: certain people must have clear authority, important tasks must be carefully assigned, everyone else must understand that they are simply volunteers. But Wikipedia’s openness isn’t a mistake; it’s the source of its success. A dedicated community solves problems that official leaders wouldn’t even know were there. Meanwhile, their volunteerism largely eliminates infighting about who gets to be what. Instead, tasks get done by the people who genuinely want to do them, who just happen to be the people who care enough to do them right.
Of course, that’s not the only reason this mistake is made, it’s just the most polite. The more frightening problem is that people love to get power and hate to give it up. Especially with a project as big and important as Wikipedia, with the constant swarm of praise and attention, it takes tremendous strength to turn down the opportunity to be its official X, to say instead “it’s a community project, I’m just another community member”. Indeed, the opposite is far more common. People who have poured vast amounts of time into the project begin to feel they should be getting something in return. They insist that, with all their work, they deserve an official job or a special title. After all, won’t clearly assigning tasks be better for everyone? And so, the trend is clear: more power, more people, more problems. It’s not just a series of mistakes, it’s the tendency of the system.
A systemic tendency like this is not going to be solved by electing the right person to the right place and then going back to sleep while they solve the problem. If the community wants to remain in charge, it’s going to have to fight for it.
Wikipedia’s users come from all over society: different cultures, different countries, different places, different fields of study. The physics grad students who contribute heavily to physics articles are in a much better position to promote it to physicists than a promotional flack from the head office. The Pokemon fan maintaining the Pokemon articles probably knows better how to reach other Pokemaniacs than any marketing expert.
of complaint-tracking system for articles, like the discussion system of talk pages. Instead of simply complaining about an article in public, Stallman could follow a link from it to file a complaint. The complaint would be tracked and stored with the article. More dedicated Wikipedians would go through the list of complaints, trying to address them and letting the submitter know when they were done. Things like POV allegations could be handled in a similar way: a notice saying neutrality was disputed could appear on the top of the page until the complaint was properly closed.
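To make the idea concrete, here is a minimal sketch of how such a complaint tracker might be modeled. Every name here (Complaint, Article, the "pov" kind, and so on) is a hypothetical illustration of the workflow described above, not anything in MediaWiki’s actual software:

```python
# A toy model of the complaint-tracking idea: complaints are filed against
# an article, tracked until closed, and a POV complaint keeps a
# "neutrality disputed" notice up until it is properly resolved.
# All names are hypothetical, invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Complaint:
    submitter: str
    text: str
    kind: str = "general"   # e.g. "general" or "pov" (neutrality dispute)
    closed: bool = False

@dataclass
class Article:
    title: str
    complaints: list = field(default_factory=list)

    def file_complaint(self, submitter, text, kind="general"):
        # Stallman follows a link from the article and files this.
        c = Complaint(submitter, text, kind)
        self.complaints.append(c)
        return c

    def close(self, complaint):
        # A dedicated Wikipedian addresses the complaint; in a real
        # system this would also notify the submitter.
        complaint.closed = True

    def open_complaints(self):
        return [c for c in self.complaints if not c.closed]

    def neutrality_disputed(self):
        # The banner stays on top of the page while any POV
        # complaint remains open.
        return any(c.kind == "pov" for c in self.open_complaints())

article = Article("GNU")
c = article.file_complaint("rms", "History section is misleading", kind="pov")
print(article.neutrality_disputed())  # True: complaint is still open
article.close(c)
print(article.neutrality_disputed())  # False: complaint properly closed
```

The point of the sketch is just the state machine: a complaint is a tracked object attached to the article, not a one-off public grumble, and page-level notices derive automatically from which complaints are still open.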
For the most part, people have simply assumed that Wikipedia is as simple as the name suggests: install some wiki software, say that it’s for writing an encyclopedia, and voila! — problem solved. But as pretty much everyone who has tried has discovered, it isn’t as simple as that. Technology industry people tend to reduce web sites down to their technology: Wikipedia is simply an instance of wiki software, DailyKos just blog software, and Reddit just voting software. But these sites aren’t just installations of software, they’re also communities of people. Building a community is pretty tough; it requires just the right combination of technology and rules and people. And while it’s been clear that communities are at the core of many of the most interesting things on the Internet, we’re still at the very early stages of understanding what it is that makes them work.
the focus is always something “out there”, something outside the discussion itself. But with Wikipedia, the goal is building Wikipedia. It’s not a community set up to make some other thing, it’s a community set up to make itself. And since Wikipedia was one of the first sites to do it, we know hardly anything about building communities like that. Indeed, we know hardly anything about building software for that. Wiki software has been around for years — the first wiki was launched in 1995; Wikipedia wasn’t started until 2001 — but it was always used like any other community, for discussing something else. It wasn’t generally used for building wikis in themselves; indeed, it wasn’t very good at doing that.
Wikipedia’s real innovation was the idea of radical collaboration. Instead of having a small group of people work together, it invited the entire world to take part. Instead of assigning tasks, it let anyone work on whatever they wanted, whenever they felt like it. Instead of having someone be in charge, it let people sort things out for themselves. And yet it did all this towards creating a very specific product.
Reddit, for example, is radical collaboration to build a news site: anyone can add or edit, nobody is in charge, and yet an interesting news site results. Freed from the notion that Wikipedia is simply about wiki software, one can even imagine new kinds of sites. What about a “debate wiki”, where people argue about a question, but the outcome is a carefully-constructed discussion for others to read later, rather than a morass of bickering messages.
figuring out the key principles that make radical collaboration work. What kinds of projects is it good for? How do you get them started? How do you keep them growing? What rules do you put in place? What software do you use?
Code is law, Lawrence Lessig famously said years ago, and time has not robbed the idea of any of its force. The point, so eloquently defended in his book Code, and Other Laws of Cyberspace, is that in the worlds created by software, the design of the software regulates behavior just as strongly as any formal law does; more effectively, in fact.
For one thing, the software decides who gets to be part of the community.
For another, the software decides how the community operates.
As a programmer, I have a great deal of respect for the members of my trade. But with all due respect, are these really decisions that the programmers should be making?
The Wikipedia community is enormously vibrant and I have no doubt that the site will manage to survive many software changes. But if we’re concerned about more than mere survival, about how to make Wikipedia the best that it can be, we need to start thinking about software design as much as we think about the rest of our policy choices.
I started trying to read my book, but found I wasn’t really capable of reading anything at all.
Henry Bakwin, pediatric director of New York’s Bellevue Hospital, saw this and thought that perhaps they were going about things exactly wrong. The infants weren’t dying of infection, he believed, they were dying of loneliness — a loneliness that made it easier for them to succumb to infection. He took down the signs about washing hands and put up signs requiring everyone to pick up and fondle a baby. And infection rates went down. Harold Skeels and a team at the Iowa Child Welfare Research Station decided to try an experiment. They took thirteen girls out of institutionalized care and had them “adopted” by older girls in “a home for the feeble-minded”. Within nineteen months, the average IQs of the adopted kids jumped from 64 to 92.
John Bowlby came at it from a different perspective. Interviewing severely disturbed kids, he discovered they all shared a traumatic separation from their parents when they were young. He concluded the mother-infant relationship was essential to development and issued recommendations much like Skeels and Spitz.
He would pick a name at random from the list of babies, then always film them at a specific hour, clock in the background, so you could tell he wasn’t cheating. The name he picked was Laura. When he went to find her he was devastated: Laura was the one girl in a hundred who wasn’t crying; her parents had reared her so strictly that she quietly restrained all her emotions. “I saw immediately [she] was going to be the one child in a hundred who was not going to demonstrate what I had been shouting my head off [about],” Robertson said. But he couldn’t pick another child — that would be cheating. The project continued.
The first day, Laura jumps out of her bath and runs to the door in an attempt to escape. Her smiles disappear and sometimes she quietly sobs while clutching her teddy. “Where’s my mummy?” she asks repeatedly, while trying hard to hold back tears. Each day she grows grayer, until on the fifth she appears unsmiling and resentful. Her mother comes to visit (thanks to a special exception to the rules Robertson negotiated), but she wipes away her mother’s kiss. When her mother waves goodbye, she looks away. When her mother finally comes to take her home on the eighth day, she begins shaking with sobs. She gathers up all of her stuff, but refuses to take her mother’s hand as they walk out.
Reviews in the medical journals, however, were all positive. And younger nurses and doctors began telling Robertson how they agreed with him and would do things differently, if only they were in charge. And a few higher-ups quietly sent some votes of support. So Robertson kept going.
And nobody was willing to take the obvious step of letting mothers stay with their children on the wards.
Again, the doctors who are supposed to help the patients seem less concerned with the patients as people than as bodies, things to be measured and operated upon, puzzles to solve, problems to fix. They do not tell the patient what is being done to them, do not reap the benefits that could be gained by engaging them in the search for a solution, but instead only share knowledge when forced by law and precedent, preferring to keep the real details private among the priesthood of doctors and nurses. Barbara Ehrenreich and Deirdre English note how well-off women of the pre-feminist era suffered from mysterious debilitating symptoms, a condition they diagnose as the psychological result of their inactivity and powerlessness; society entrusted them with no responsibility and so their minds collapsed from lack of active use.
While women have made great strides in the years since, for many the problem is still quite real. And laid up in a hospital, with domestic and childrearing tasks undoable, they may find the responsibilities they had fade away, their condition stripped back to that of their afflicted forebears. And so patriarchal society and patriarchal medicine combine to strip all vestiges of humanity away. No freedom, no responsibility; no movements, no tasks; no privacy, no thought. The person becomes the body that the doctors treat them as.
A magazine, we may imagine, is like a one-way web site. It doesn’t really allow the readers to talk back (with the small exception of the letters page), it doesn’t even have any sort of interactivity. But I still think communities are the key for magazines; the difference is that magazines export communities. In other words, instead of providing a place for a group of like-minded people to come together, magazines provide a sampling of what a group of like-minded people might say in such an instance so that you can pretend you’re part of them. Go down the list and you’ll see.
The late, great Lingua Franca exported the university. Academephiles, sitting at home, probably taking care of the kids, read it so they could imagine themselves part of the life of the mind. Similarly, the new SEED magazine is trying to export the culture of science, so people who aren’t themselves scientists can get a piece of the lab coat life.
Edward Tufte teaches us to always ask about the information density of a method of communication.
(The other day someone asked me why more people don’t watch the recordings of MIT lectures made available for free online. This is why.)
The other week I saw Scott McCloud give a presentation at a local college. Although he is not a professor himself, McCloud is a theorist of comics. Edward Tufte (among countless others) calls his guide Understanding Comics the best book on the medium. McCloud breaks comics down to its essence: the use of sequential art to tell a story — we see one thing, we see another, we imagine what happens in between. And watching McCloud speak, I realized that his talk was a vivid form of comics. The images weren’t just illustrations, they drove the story along, with McCloud simply filling in the words to connect them together.
Tufte himself is professor emeritus at Yale. These days he goes on tour, rock-star style, teaching classes on presenting information. Tufte is a brilliant presenter — his energy keeps the audience spell-bound for an entire day. At one point, as I recall, he jumped up on a table and asked us to imagine the information density of various media as charted from one side of the room to the other.
Then there’s Lawrence Lessig, whose presentations are so powerful and influential that an entire style of presentation has been named after him. At his peak, I saw him give a talk at the O’Reilly Open Source Conference that had the audience, as Wes Felter put it, looking to start a riot afterwards. Lessig’s rhythmic, almost hypnotic presentation invariably blows people away.
It’s not because of the medium’s informational density, it’s because of its emotional density. The same is true of these presentations. Reading Lessig’s books, you’ll probably learn more about the history of copyright law and the other things he discusses in his talk. But you won’t feel his righteous indignation against those “extremists on the right and left” who are trying to distort its intentions and, in the process, hurt our culture.
They show you how the speakers think about problems, how they feel about them, and, in doing so, provide a more fleshed-out notion than writing ever could.
Michael Bérubé’s bestseller, What’s Liberal About the Liberal Arts? (Copies had been checked out of all the nearby libraries, so I spent the morning sitting in the Harvard Coop reading the entire book there. Yeah, I’m such a cheapskate.)
The “I see where you’re coming from, here’s someone else who makes that argument” trick — while eminently reasonable — is incredibly effective at robbing a young person of their moral indignation. It suggests that the question isn’t one to be resolved, but simply one to be accepted as a fact of life: some people believe the Vietnam War was to fight Communism, others disagree. But when we’re talking about the lives of millions, such ambivalence is more frustrating than outright disagreement.
No, the real reason I want to go to a University — and the reason, when you get right down to it, everybody else seems to be interested in as well — is the people. I want to go to a place filled with people like me, but smarter; a place where you can’t help but learn. The key phrase there is “people like me”. What I want to know is what the culture is like. To unfairly overgeneralize, people at Harvard are snobbish, people at Stanford are lazy, and people at MIT are nerds.
Some of our most formative years are spent in schools, odd places whose ostensible goal is adult-directed education but in reality are controlled by student-culture peer groups of which adults have little actual understanding. Adults run examinations and programs, try to be “hip” to teen culture, but ultimately, we must admit, we have little idea what really goes on, making it easy for rumors to run wild. Jeremy Iversen and Rebekah Nathan decided to see for themselves what school life was really like, by going undercover and experiencing it themselves. While they went to different places, in different guises, in entirely different situations (Iversen was a senior in high school, while Nathan was a freshman in college), the pictures they draw are startlingly similar: a world where genuine education is absolutely the last thing on everyone’s mind.
It’s all too easy to lament this sad state of student affairs, perhaps complain about the laziness of modern students. But Nathan goes one step further: she shows why it is happening. For even she, a professor with a Ph.D., finds herself doing the exact same things. “We don’t need to study those things, they won’t be on the test”, she tells her study partner Rob. It takes Rob, a fellow student, to ask her whether she cares about learning for its own sake. The culture of Undergraduate Cynical, you see, is not created by student laziness or a lack of concern for intellectual life. It’s created by the necessities of the schedule. Students simply don’t have time to care. They take three to five classes, each with separate sections and lab assignments, each with its own schedule of papers and readings and adults to suck up to.
So students instead focus on doing what’s required of them: just scraping by. Anything that won’t impact their grade much is tossed, and a desire to learn becomes a desire to pass. It’s hard to imagine any sincere desire to learn surviving such a harried schedule. As soon as you get engrossed in a book or topic, you have to dash off to your next meeting. Again, this is all something completely invisible to the professors. They spend their days worrying about tomorrow’s lecture and are shocked when students don’t do the same.
the Milgram experiments on obedience to authority and Zimbardo’s Stanford Prison Experiment — were easy fodder for armchair ethicists. But while people may have their feelings ruffled, in all of these experiments there was little lasting hurt to the participants, while the educational consequences of the studies themselves have been immense.
The real ethical question is how we can justify forcing our children into such institutions of anti-intellectualism. Iversen found that high school students were quite conservative politically, even more so than their parents, and perhaps it’s not surprising that Bush’s anti-intellectual charm appeals to kids who daily experience education as a form of torture.
But, as experiments in cognitive dissonance have shown us, if one continues saying what one doesn’t feel, one begins feeling it before too long. It’s easy to see how this is effective training for professionalism, which actually means doing what you’re told, despite what you believe. But it’s hard to see how this system is going to generate students who will buck a trend.
People hate failing, so much so that they’re afraid to try. Which is a problem, because failing is most of what we do, most of the time.
Anyone who wants to build a decent educational environment is going to need to solve this problem. And there seem to be two ways of doing it: try to fix the people so that they don’t feel embarrassed at failing or try to fix the environment so that people don’t fail. Which option to pick sometimes gets people into philopolitical debates (trying to improve kids’ self-esteem means they won’t be able to handle the real world! preventing kids from experiencing failure is just childish coddling!), but for now let’s just be concerned with what works. Getting people to be OK with being wrong seems tough, if only because everybody I know has this problem to a greater or lesser degree. There are occasional exceptions — mavericks like Richard Feynman (why do you care what other people think?) often seem fearless, although it’s hard to gauge how much of that was staged — but these just seem random, with no patterns suggesting why.
self-esteem is like a cushion: it prevents the fall from being too damaging, but it doesn’t prevent the fall. The real piece, it would seem, is finding some way to detach a student’s actions from their worth. The reason failing hurts is because we think it reflects badly on us. I failed, therefore I’m a failure. But if that’s not the case, then there’s nothing to feel hurt about. Detaching a self from your actions might seem like a silly thing, but lots of different pieces of psychology point to it. Richard Layard, in his survey Happiness: Lessons from a New Science, notes that studies consistently find that people who are detached from their surroundings — whether through Buddhist meditation, Christian belief in God, or cognitive therapy — are happier people. “All feelings of joy and even physical pain are observed to fluctuate, and we see ourselves as like a wave of the sea—where the sea is eternal and the wave is just its present form.”
Similarly Alfie Kohn, who looks more specifically at the studies about children, finds that it’s essential for a child’s mental health that parents communicate that they love their child for who they are, no matter what it is they do. This concept can lead to some nasty philosophical debates — what are people, if not collections of things done? — but the practical implications are clear. Children, indeed all people, need unconditional love and support to be able to survive in this world. Attachment parenting studies find that even infants are afraid to explore a room unless their mother is close by to support them, and the same pattern has been found in monkeys.
While I’m loath to introduce more individualism into American schools, it seems clear that one solution is to have people do work on their own. Kids are embarrassed in front of the class, shy people get bullied in small groups, so all that really leaves is to do it on your own. And this does seem effective. People seem more likely to ask “stupid” questions if they get to write them down on anonymous cards.
Apparently when nobody knows you’re getting it wrong, it’s a lot easier to handle it. Maybe because you know it can’t affect the way people see you. Schools can also work to discourage this kind of conditional regard by making failure completely unimportant.
Clearly we could teach everybody Buddhist meditation or something (which, studies apparently show, is effective), but even better would be if there was something in the structure of the school that encouraged this way of thinking. Removing deadlines and requirements should help students live more fully in the moment. Providing basic care to every student should help them feel valued as people. Creating a safe and trusting environment should free them from having to keep track of how much they can trust everyone else. And, of course, all the same things would be positive in the larger society. Too often, people think of schools as systems for building good people. Perhaps it’s time to think of them as places to let people be good.
When I first heard about this experiment, I just assumed it was because they were good kids. But now I think there’s a different explanation. It’s because doing this is fun. Working on something that’s too easy for you isn’t enjoyable, it’s just mindless. (There’s a reason few people play 50K Racewalker.) But doing something that’s too hard for you isn’t fun either.
The islanders don’t kiss, he explains. Instead, they scratch. The girls scratch the guys so hard that they draw blood and, if the guys can withstand the pain, then they move forward to having sex. The ethnographer (as Malinowski calls himself) verified this by noting that just about everyone on the island had noticeable scratches. And while everybody is having sex whenever they want, premarital meal-sharing is a big no-no. You’re not supposed to go out for dinner together until after you get married. But the most fascinating and strange part about the islanders is their beliefs on the subject of pregnancy, also described in Malinowski’s classic article “Baloma: The Spirits of the Dead in the Trobriand Islands”. When people die, you see, their spirit takes a canoe to the island of Tuma, which works very much like the normal island except everybody is a spirit of the dead. When the spirit gets old and wrinkled it shrugs off its skin and turns back into an embryo, which a spirit then takes back to the island and inserts into a woman. This, you see, is how women get pregnant. That’s right. The islanders do not believe that sex causes pregnancy. They don’t believe in physiological fatherhood.
It is speculated that the yams that form the basis of the island diet have a contraceptive agent in them (The Pill was originally made by looking at chemicals in wild yams), which conveniently explains quite a bit, including the low birthrate despite the high level of sexual activity. Indeed, the whole idea lends quite a bit of support to the idea that material factors shape culture — after all, our own sexual revolution didn’t happen until we got the yam’s chemicals in pill form in 1960.
The notion has some other interesting consequences. For example, the society is necessarily matrilineal, since fathers have no recognized biological relation to their children. Yet sociological fathers (the mother’s husband), Malinowski notes, show more love and care for their children than most he’s seen in Europe. Furthermore, the islanders believe the same rules apply to the rest of the animal kingdom. This is what clinches it for Malinowski — despite all the effort they put into raising pigs, they insist that pigs also reproduce asexually.
Larry Wall once noted that the scientificness of a field is inversely correlated to how much the word “science” appears in its name. Physics, of course, doesn’t have science in the name and is the most scientific of all sciences.
You can criticize this view for just being silly or wrong, and many have, but there’s another problem with it: it’s completely ahistorical. As Robert McChesney describes in The Problem of the Media, objectivity is a fairly recent invention — the republic was actually founded on partisan squabblers. When our country was founded, newspapers were not neutral, non-partisan outlets, but the products of particular political parties.
You often hear the media quote Jefferson’s comment that “were it left to me to decide whether we should have a government without newspapers, or newspapers without government, I should not hesitate a moment to prefer the latter.” However, they hesitate to print the following sentence: “But I should mean that every man should receive those papers, and be capable of reading them.” In particular, Jefferson was referring to the post office subsidy the government provided to the partisan press.
I think following the news is a waste of time.
Most people’s major life changes don’t come from reading an article in the newspaper; they come from reading longer-form essays or thoughtful books, which are much more convincing and detailed.
With the time people waste reading a newspaper every day, they could have read an entire book about most subjects covered and thereby learned about it with far more detail and far more impact than the daily doses they get dribbled out by the paper. But people, of course, wouldn’t read a book about most subjects covered in the paper, because most of them are simply irrelevant.
Its obsession with the criminal and the deviant makes us less trusting people. Its obsession with the hurry of the day-to-day makes us less reflective thinkers. Its obsession with surfaces makes us shallow.
A friend who’s a prominent free software developer says that every community member who’s joined Google has stopped contributing to public projects. It’s so bad, he says, that they’re thinking of banning Google from buying a booth at their next conference. They can’t afford to lose any more developers.
The right thing is to build not a bubble, with its binary in-or-out choice, but a gradient, with shades of resources you make available as people achieve success.
So you have this organization dedicated to building cool web apps. The first thing you do is you start giving away free food in the middle of San Francisco. You have a nice cozy area with tables and bathrooms and Wi-Fi and anyone interested in starting a web site is encouraged to drop by and hang out. There they can eat, chat, hack, get feedback, get suggestions, get help. Then you give them free hosting. Servers and bandwidth are cheap, good projects are invaluable. But not only will you host their app for free, throwing in servers to scale it as necessary, but you’ll pay them for the privilege of hosting. Indeed, you’ll pay them proportionately to the amount of traffic they get, in exchange for the right to run ads on it someday. So now you’ve got all the bright, smart young things who want to start companies starting them on your servers, with clear and unambiguous incentives: get traffic, get paid. They don’t need to worry about impressing anyone with their idea; anyone can use the hosting. And they don’t need to sell out to investors anymore; as their traffic grows, you’ll already be giving them the cash to grow the business.
And — this is where the gradient comes in — as they become more successful, you give them more resources. Let them move into the apartment building above the food/hangout space, so they can get more facetime with fellow successful hackers. Give them free offices to work in. Provide free massages and exercise equipment. Have your PR team set up interviews with the major media. Integrate their site with your other sites. Plus, of course, they’re getting paid more for more traffic the whole time. Some of the sites will be huge hits, another YouTube or Facebook.
(Bonus for the truly adventurous: run the whole thing as a non-profit and have all the applications involved be open source.)
Even as they get big, they betray facets of the founder’s personality. The most obvious is in who is respected. My friend Emmett Shear has a theory that in each company only one class of people can be in charge and it’s going to be the class of people the founders are in. At Apple, for example, the UI designers are in charge, because Jobs obsesses over UI design. At Google, it’s the programmers, because Larry and Sergey used to code. Even though the founders aren’t directly involved in every project, their surrogates still win the day. Now the problem comes when the organization wants to grow beyond its founder. This is most common in non-profits, where they even have a name for it: founder’s syndrome.
Kragen begins by defining “elite”: An “elite” is a small group of people who are distinguished from the majority in one of two ways: either they are better in some way, or they have more power. These are distinct meanings, although apologists for established orders like to conflate them, and sometimes one leads to the other.
Elitism, then (according to Kragen) is the ideology that insists that the elite and non-elite reached their positions through intrinsic merit. (This belief might also be called meritocracy, but that is perhaps a less pejorative term.)
Despite the fact that most long-term relationships fall apart, nobody suggests it’s the idea of a long-term relationship that’s the problem. Instead we’re told that lasting relationships require “hard work”. So you go to work and put in some hard work and then come home and put in some more. Or maybe you spend more time at the office to avoid having to come home. Or maybe you channel your unfulfilled passion into shopping. Either way, capitalism wins.
Careful studies find that people have a suspiciously accurate habit of pairing up with people with a similar level of attractiveness and wealth (although sometimes an excess of one can compensate for a lack of the other). And then they promise to live with each other forever, to work to make it work. Sure, perhaps you get a little more latitude in picking your partner, but is this system really that different? Most people don’t see love as a system to be criticized, but they do feel the social straitjacket even if they assume it’s just the “natural” way relationships work out. And they begin looking for an escape. Kipnis points to two: murder (“lethal intimate violence” — killing your lover — is so popular that there’s an entire category in the crime statistics for it) and adultery (its popularity needs no statistics).
She notes self-action generally comes before self-knowledge in these things; workers had walk-outs before they had unions. In the same way, the prevalence of adultery is people’s individualized attempt at escaping from love, each one sneaking out for an affair, stealing time from work at work and work at home to embrace their feelings.
I remember how when reddit started, the whole thing seemed so childish. The cartoony alien, the barebones design, the fresh-faced programmers, the rented house. And none of that has really changed. It’s just that with success behind it, it’s harder to dismiss. A scribbled drawing a kid hands to you is “cute”, the same thing on the wall of a museum is “art”. You assume there must be something there, even if you can’t see it. It’s hard to notice this when you’re in the middle of it. During the days, I mostly saw my co-workers, who lived and breathed the site. At night, I hung out with my friends, who all knew what I did. On weekends, we’d go to parties for local startups, who all wanted to emulate reddit’s success. Everyone we talked to treated us like it was serious. But whenever I stepped outside the bubble, things were very different.
In reality, Borat is about the existence and enforcement of cultural norms. In place after place, Borat goes somewhere and does exactly what you’re not supposed to do. By doing so, he demonstrates exactly what our cultural assumptions are, makes us laugh uncomfortably at their violation while we start to question their legitimacy, and then documents the punishment inflicted for violating them. There are scenes where he questions feminist dogma, provides a brilliant critique of nationalist rhetoric, violates norms about racial integration, takes superstar-worship culture to its logical conclusion, and, in my favorite scene, deconstructs the fake niceties of the television interview (something I’ve always dreamed of doing). This is an incredibly tough kind of humor to do, because watching people violate cultural norms is so challenging. We’re ingrained from birth with an injunction to follow the rules of behavior in such situations and violating them does not come naturally. (A Japanese friend said that watching the film was actually painful in parts.)
I was recently dragged to the Boston Vegetarian Festival to see Singer speak about his new book, The Way We Eat, and was deeply impressed by his thoughtfulness and clarity of mind. An aging fellow with thoughtful glasses, he looks like Noam Chomsky, another plainspoken professor. He is not a passionate activist who has taken on the cause of animals, but simply what he appears: a moral philosopher who started thinking about the issue one day and drew the logical conclusions.
“I don’t believe in veganism as a religion. I simply believe that refraining from eating animal products is the most effective way of putting pressure on producers to stop abusing and killing animals. Sometimes, if a host misunderstands my request and makes non-vegan food, instead of throwing it away, I will eat it. I don’t think this is a problem, because I don’t think this does any moral harm.”
“We would if we knew how to do so without making things worse and disturbing the ecosystems and so on.”
I had to get that off my chest, because it was the one thing bugging me about Singer. Somehow later in life Singer had become a sociobiologist, one of that vulgar group of pseudoscientists who insist — despite all evidence — that humans are genetically programmed to do everything a right-wing politician could imagine. (Sociobiology having gotten a bad name, they now call themselves evolutionary psychologists.)
The first day I showed up here, I simply couldn’t take it. By lunch time I had literally locked myself in a bathroom stall and started crying. I can’t imagine staying sane with someone buzzing in my ear all day, let alone getting any actual work done.
This has clear implications for one’s personal behavior: when you want to do something, don’t follow the rules but look for a friend. (Sure, you could stay in a New York hotel — but you’d be much better off sleeping on a friend’s floor.) And build up lots of friends so that you’ll be able to call on them when you need them. (Find a book you really like? Send the author a friendly note.)
Marx wrote incisively about commodity fetishism—the tendency of people to see only the results of production (commodities), ignoring the hours of human labor that actually created them. The humanities seem to suffer from something of the reverse problem: a tendency to be absorbed by the names of big people without seeing beyond to the ideas they espouse.
The goal of science is to discover the truth about the world. Truths remain true no matter who says them and it’s unlikely that one person will discover the whole truth. Thus the pattern of letting multiple people develop a theory and try to find evidence for it to convince the others. So why don’t the softer sciences follow the same model? The problem gets worse the softer you get, which suggests the problem lies in the softness itself. The problem is that without identities, one has to judge the ideas themselves, which, in a soft science, is somewhat difficult to do. It’s easy in science to run an experiment and see if it proves a theory true or false; it’s much harder to get consensus about a reasonable theory of morality in philosophy.
If there’s one thing good UI designers know, it’s that the best UI is not to have one at all. Applications should just save, security should just work, and computers should just backup.
As best as we can tell, the human brain works by mastering a specific thing and then “giving it a name”, wrapping the whole thing up into a bundle and pushing it down a level, so that things can then be built with it as a component.
So what’s going on here? As we noted at the beginning, the brain works by mastering the details and then giving them a name. But the business guys took the easy way out: they just mastered the names. If you asked them exactly how a content delivery system worked, they wouldn’t be able to tell you. They know only the high-level thing, with none of the details. And it’s the details that make it so interesting — and so powerful.
I lie down on the sand and close my eyes. When you’re trying to relax, everyone always says to imagine you’re on the beach by the ocean with the sun streaming down upon your face. But here I actually am and I still can’t relax. Instead, I wonder about the fractal nature of coastlines — does their length grow without bound or simply converge as you measure them more finely? (It grows without bound.)
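To make the coastline puzzle concrete, here’s a tiny sketch of my own (using the Koch curve as an idealized stand-in for a coastline): each refinement replaces every straight segment with four segments a third as long, so the measured length multiplies by 4/3 at every step and never converges.

```python
# Illustrative only: the Koch curve as a model coastline.
# Each refinement step multiplies the measured length by 4/3,
# so the length diverges as the "ruler" gets finer.

def koch_length(iterations, base=1.0):
    """Total length of a Koch curve after `iterations` refinements."""
    return base * (4 / 3) ** iterations

for n in (0, 1, 5, 10, 50):
    print(n, round(koch_length(n), 3))
```

After fifty refinements the “coastline” is already over a million times its starting length, and it keeps growing.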
It should be clear to anyone who has studied the topic that the way to drive innovation forward is to have lots of small groups of people each trying different things to succeed. In Guns, Germs, and Steel, for example, we see that certain societies succeed because geography breaks them up into chunks and prevents any one person with bad ideas from getting control of too much, while other societies fail because their whole territory can too easily be captured by an idiot. It might at first seem more efficient to let the whole territory be captured by a genius, but a moment’s reflection will show that there are few geniuses whose brainpower can match the combined results of many independent experiments.
And yet we also know that competition is a terrible way to get people do well. In No Contest: The Case Against Competition (now out in an elegant 20th anniversary edition) we see dozens of studies that show that, by all sorts of metrics, people’s performance (and enjoyment) goes down when they are forced to compete. Even worse, it goes down most notably for creative tasks — precisely the kind of thing involved in innovation. How do we resolve the contradiction? The key is to notice that competition, especially market competition, isn’t the only way to encourage experimentation.
Instead of providing a prize for the winner, we could provide rewards to everyone who tries. And that actually makes sense — not only because prizes also decrease productivity and creativity — but also because, when it comes to experimentation, it’s not really your fault if the experiment doesn’t work. In fact, we want to encourage people to try crazy things that might not work, which is exactly why rewards are so counterproductive.
Moral Mazes (one of my very favorite books) tells the story of a company, chosen essentially at random, and through careful investigation from top to bottom explains precisely how it operates, with the end result of explaining how so many well-intentioned people can end up committing so much evil.
There’s no ownership over text. If you write something, as soon as you post it to Wikipedia, it’s no longer “yours” in any real sense.
With the single exception of Flickr, all these websites are hideous.
Clear goal. Wikipedia is an encyclopedia. It’s an understandable task with a clear end result. When you want to know something, you know whether it’s the kind of thing that might be in Wikipedia or not.
Worth doing. Collecting the sum of human knowledge in one place is just the kind of grand goal that inspires their people to sink their time into a big, collective effort.
Objective standards. It’s pretty clear what an encyclopedia article should be. It needs to contain an explanation of what it is and why it’s important, the history, the uses (or actions), criticism, and pointers to more information.
Made from small pieces. Encyclopedias are huge projects, but they’re made up of manageably-sized articles. If an article ever grows too long, it can be split into parts (see Al Gore Controversies). When a page is small enough that the whole thing can fit comfortably in your head, it’s much easier to work with: you can write it in one sitting, you can read it relatively quickly, and you can remember the whole thing.
Each piece is useful. Each article in an encyclopedia is useful in its own right.
Segmented subjects. Few people are passionate about learning all human knowledge. But many more people are passionate about some subset of that. Encyclopedias allow the people who really care about French social theorists to spend all their time on that, without ever caring about the rest of the site. And the same is true of readers.
Personally useful. The best way to understand something is to write about it and the best thing to write is a layman’s explanation. An encyclopedia provides an opportunity to do just that. At the same time, it captures what you’ve learned in case you forget it later and gives the concept more form so that you’re more likely to remember.
Enjoyable work. An encyclopedia mostly consists of people trying to explain things and explaining things can be quite fun. At parties, if you ask someone about the problem they’ve dedicated their life to, they’ll gladly talk your ear off about it for hours. Wikipedia capitalizes on this tendency while also magnifying it — now it’s not just one partygoer, it’s the whole world listening. Contrast this with a project like categorizing all the pages on the Internet, which most people would find quite boring.
“It’s about infantilizing people,” she explained. “Give them free food, do their laundry, let them sit on bouncy brightly-colored balls. Do everything so that they never have to grow up and learn how to live life on their own.”
“It’s always frightening when you see how the sausage actually gets made,” explains a product manager. And that’s exactly what the secrecy is supposed to prevent.
Our world is full of forces pushing us towards specificity. Open a newspaper and it’s divided into topic sections. Go to the bookstore and it’s divided into subject categories. Go to school and the classes are all in separate fields. Get a degree and you have to study in a particular major. Get a job and you have to work at a particular task. The world needs specificists, of course, but it also needs generalists. And we see precious few of those.
Don’t listen to them. People are afraid of grandeur; it challenges the status quo. But you shouldn’t be. “Look up more” should be your motto; “Think bigger” your mantra. The first step is to recognize your place in things.
Good thinking is that which better helps us approximate reality — avoiding fallacies, missteps of judgment, faulty assumptions, misunderstandings, and needless fillips and loops.
The closest I can think of is Chomsky’s review of B.F. Skinner (an unfair match-up if there ever was one — a bit like using a blow torch to clear off a dust mite). But Chomsky’s attacking Skinner’s ideas rather specifically (and, more generally, exposing the political implications behind bogus science); the essay is certainly not one in a series of examples of how to think better.
And then there’s the future where everything just sort of keeps going on the way it has, with incremental changes, and technology is no longer the deciding factor in things. You don’t need high tech to change the world; you need Semtex and guns that were designed by a Russian soldier fifty-odd years ago.
The upshot of all of this is that the Future gets divided; the cute, insulated future that Joi Ito and Cory Doctorow and you and I inhabit, and the grim meathook future that most of the world is facing, in which they watch their squats and under-developed fields get turned into a giant game of Counterstrike between crazy faith-ridden jihadist motherfuckers and crazy faith-ridden American redneck motherfuckers, each doing their best to turn the entire world into one type of fascist nightmare or another.
So everybody pretends they don’t know what the future holds, when the unfortunate fact is that — unless we start paying very serious attention — it holds what the past holds: a great deal of extreme boredom punctuated by occasional horror and the odd moment of grace.
The medium stupid idea has much wider applicability. Most specifically, it explains the general state that the mainstream media tries to inculcate in the public. The uneducated American has a general idea that invading other countries is probably a bad idea. The overeducated American can point to dozens of examples of why this is going to be a bad idea. But the “medium stupid” American, the kind that gullibly reads the New York Times and watches the CBS Evening News, is convinced that Iraq is full of weapons of mass destruction that could blow our country to bits at any minute. A little education can be a dangerous thing.
split people up into groups, have them try to solve real problems, encourage them to sit and engage with something over time instead of flitting from exhibit to exhibit, make it just as rewarding for adults as well as kids.
The exhibits could use what Tufte calls “small multiples” to give kids a physical intuition about a phenomenon by letting them change the relevant variables, rather than just showing them one case. The descriptions could give the force vectors and equations for each example instead of just the name. Some of it might go over kids’ heads, but even just getting them accustomed to such things is a valuable skill.
Sometimes people ask me what the difference is between sociology and anthropology. There are the surface ones, of course — sociology typically studies first-world societies, whereas anthropology has a rep for studying so-called “primitive” cultures. But the fundamental difference is a philosophical one: sociologists study society, while anthropologists study culture.
I saw a paper by an anthropologist on this fact; their argument was that these textbooks were a result of the sexism of American culture, a culture which sees men as competing for access to women, and those notions are naturally transported onto our writing about conception. Sexist culture, sexist output. A sociologist would dig a little deeper. They’d see who writes the textbooks, perhaps notice a disproportionate number of males. They’d look into why it was that males got these jobs, find the sexism inherent in the relevant institutions. They’d argue it was the structures of society that end up with sexist textbooks, not some magical force known as “American culture”.
But if there’s one thing we’ve learned from psychology, it’s that — for the most part — people are people, wherever you go. As Zimbardo’s Stanford Prison Experiment showed, put normal people into the wrong situation and they turn into devious enforcement machines. And put the same people into a different society and they’ll change just as fast. It isn’t culture — whatever that is — that causes these things; it’s institutions. Institutions create environments which force a course of action. And that’s why I’m a sociologist.
Institutions require people to do their bidding. A tobacco company must find people willing to get kids addicted to cigarettes, a school must find teachers willing to repeat the same things that they were taught, a government must find public servants willing to enforce the law. Part of this is simply necessity. To survive, people need money; to get money, people need a job; to get a job, people need to find an existing institution. But the people in these positions don’t usually see themselves as mercenaries, doing the smallest amount to avoid getting fired while retaining their own value system. Instead, they adopt the value system of the institution, pushing it even when it’s not necessary for their own survival. What explains this pattern of conformance?
Studies on punishment and rewards show that dealing them out lessens the victim’s identification with the enforcer. Hitting me every time I don’t do my job right may teach me how to do my job, but it’s not going to make me particularly excited about it. Indeed, punishments and rewards interfere with a much more significant effect: cognitive dissonance. Cognitive dissonance studies have found that simply by getting you to do something, you can be persuaded to agree with it. In a classic study, students asked to write an essay in favor of a certain position were found to agree more with the position than students who could write for either position. Similarly, people who pay more to eat a certain food claim to like it more than people who pay less.
Quitting your job for the government is tough and painful; and who knows if you’ll soon find another? So it’s much easier to simply persuade yourself that you agree with the government, that you’re doing the right and noble thing, that your work to earn a paycheck is really a service to mankind.
Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past. The tradition of all dead generations weighs like a nightmare on the brains of the living.
The Thomas theorem says “If men define situations as real, they are real in their consequences.”
Copyright is still a live issue here in Sweden, perhaps best illustrated by the now world-famous The Pirate Bay website. Despite the site’s arrogant name and attitude (“I’m running out of toilet paper, so please send lots of legal documents to our ISP,” they replied to one legal complaint) what it provides is apparently quite legal in Sweden. The site is a BitTorrent tracker, helping your computer get in touch with others who are sharing (usually copyrighted) files. Because it only assists in copyright infringement and doesn’t do any copyright violations itself, the Swedish government has had a hard time shutting the site down.
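To see why that legal distinction holds, here’s a toy sketch of what a tracker does (purely illustrative — nothing to do with The Pirate Bay’s actual code, and all names are mine): it never touches the files themselves, it only tells peers who share a torrent about one another.

```python
# Toy BitTorrent-style tracker: it stores no file data at all,
# just a mapping from a torrent's info-hash to the peers in its swarm.
from collections import defaultdict

class Tracker:
    def __init__(self):
        # info_hash -> set of (ip, port) pairs currently in the swarm
        self.swarms = defaultdict(set)

    def announce(self, info_hash, peer):
        """Register a peer and return the other peers sharing that torrent."""
        others = sorted(self.swarms[info_hash] - {peer})
        self.swarms[info_hash].add(peer)
        return others

tracker = Tracker()
tracker.announce("abc123", ("10.0.0.1", 6881))          # first peer, learns of no one
peers = tracker.announce("abc123", ("10.0.0.2", 6881))  # second peer learns of the first
print(peers)
```

The actual copying happens directly between the two peers; the tracker only performed the introduction — which is the whole legal argument in miniature.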
This, I suppose, is the actual problem: I feel my existence is an imposition on the planet. Not a huge one, perhaps, not a huge one at all, but an imposition nonetheless. When I go to a library and I see the librarian at her desk reading, I’m afraid to interrupt her, even though she sits there specifically so that she may be interrupted, even though being interrupted for reasons like this by people like me is her very job. At the fast food restaurant, I feel embarrassed taking time to pick through my pocket for appropriate change, so I always give them whole bills, then feel embarrassed when they have to take the time to count me out change. When someone asks me what high school I’m going to, I feel awkward explaining to them that I’ve gone to high school and to college and then started a company and sold it, so I just stutter a bit and then tell them that my high school is outside of Chicago.
I am a good person, I mean. I work hard, I happily pay my taxes, I think of ways to make the world a better place, I always look the woman behind the counter in the eye and say “thank you, thank you,” as if I really mean it, as if I really do appreciate the effort it took for her to punch my order into her cash register and withdraw the right amount of change. And I do! So I deserve this, I deserve my glass of water, my can of soda. I am not like one of those people who goes around robbing banks and mugging old ladies and then stands in front of me in the supermarket line, throwing a tantrum about how dare the clerk not accept my credit card. No, I am perfectly justified in asking for a glass of water. I know all this, and yet, somehow, I still feel like an imposition.
Normally, I just sit in my quiet little room and do the small things that bring me pleasure. I read my books, I answer email, I write a little bit. I’m not such a nuisance to the world, and the kick I get out of living can, I suppose, justify the impositions I make on it.
The left is allied with Islamic extremists because the extremists hate the left. Just like Dinesh D’Souza. “[W]hen it comes to core beliefs,” he writes, “I’d have to confess that I’m closer to the dignified fellow in the long robe and prayer beads than to the slovenly fellow with the baseball cap [Michael Moore].”
Nearly 80 percent of all college courses are simply lectures by professors, a stunningly ineffective form of teaching. By the end of a lecture, a student remembers less than half of what was taught. Only a week later, that number is down to 20%. At such rates, it’s hard to imagine much is left after a month, let alone by the time the student gets out of college.
Bok expects to only have the job for a year and no doubt his hands are tied in many ways — but rumor around campus is that he wants to make his year count. Yet Bok’s biggest changes have been a recommendation for more hands-on activities and the elimination of early admissions. Not bad moves, by any means, but hardly anything like the deep rethinking Bok’s book suggests is necessary. But if Bok — a thoughtful and intelligent figure who has written eloquently about these problems — can’t use his position — the most prominent spot in the entire field, with the deadline already on his head freeing him from any accountability — to do anything about these problems, what hope do we possibly have? Opportunities like this come around once a century and it appears that Bok is going to blow it.
Gödel was a hacker. He attended the meetings of the Vienna Circle, one of the most important philosophical groups in history, and sat quietly, convinced they were all wrong. Others might try to argue them out of it. Not Gödel. Perhaps (rightly) he thought that rational argument would get him nowhere with people so detached from reality. So he decided to prove them wrong.
(The Vienna Circle insisted the world was simply language-games, with rules and structures. Gödel, by showing something that was clearly true to us but not provable from within the game, thought he had proved that such things, like numbers, have an independent reality.)
This is a very bad book. It rambles and prattles and occasionally repeats itself practically verbatim. (It is the result of a project to improve science writing simply by paying famous authors to write about scientific topics. Perhaps the payment should be contingent on some measure of quality in the result.) But it is a compelling story and Gödel’s proof is so brilliantly beautiful that it should be learned by all educated people. I had never seen it presented in any real detail before and once I got to its key principle I exclaimed and tossed the book down and paced, admiring its brilliance. But there are many sources that will explain the fairly simple idea. I’d love to tell you about it myself.
After months of trying to make my way through desultory conversations with other Stanford students (“What are you studying?” “Oh, I don’t know yet. You?” “I don’t know yet either.”) I finally gave up and spent the rest of my time there with my nose in a book. And part, perhaps, is that average people just have a really bad rap. Books complain about the anti-intellectualism of American culture. Professors complain about the students who don’t care about the work. Newspapers complain about the ignorant red state-filling wingnuts. The average person seems like a dangerous entity, like the fellow at the lecture with the long and rambling not-quite-a-question that makes you think that maybe elitism isn’t such a bad idea after all. And who knows, maybe average people are actually like that. Maybe college towns are funny.
Seth was much braver than I and took most of the brunt of the conversation, leaving me to observe and reflect quietly. But even just watching random conversations with random people was a thoroughly rewarding experience. I felt as if it was some essential task of humanity that I had heretofore neglected. Everyone we interviewed stands out, but perhaps a few examples will give you the flavor of people’s situations. There was the young man at Stanford who had started a web-based business with his friends from high school. He soon realized that they were in an overcompetitive market and during his first year at Stanford had reconfigured their business plan to target a new space. Meanwhile, he planned to take business classes to get a more academic grounding in his profession. There was the African-American girl at UC Berkeley who loved her courses in African-American Studies, how they revealed a secret history of her country that she’d never before heard told. She couldn’t see making a living in the subject, of course, but she wanted to study it while she could, before she had to find a real job. A job like, if she had to pick right now, probably teaching. There was the Berkeley City College student who had done three years of college in Chile, spent a couple years in Germany, and now worked in Berkeley as a professional dancer, hoping to get college credit so she could get a job doing Latin American human rights work, ideally for the UN.
I am not sure what to say to these people. I understand the paths their lives are on and I see the flaws in the institutions in which they reside, but I have trouble imagining how I can fix things for them, as people. The problem with psychologists, sociologists complain, is that they’re too obsessed with individual people. And the problem with focus groups, even though they were invented by a sociologist, is that people are too obsessed with themselves. They see their problems, they see the context, and they reflect on them, but it’s not clear to either of us how they can particularly be remedied. The situations are just too big.
made by whoever was around — Nancy Reagan, his wife; Michael Deaver, his aide in charge of public relations; etc. Reagan’s top people, such as his cabinet officials, frightened that they were actually making policy without any supervision, kept this fact secret from their staffs and the public until they all published their kiss-and-tell memoirs after Reagan had left office. Even more shocking, Reagan didn’t seem to mind when the members of this group changed. One day Reagan’s inner circle informed him that they were leaving and bringing the Treasury Secretary in to take their place. Reagan simply thanked them for their service. There was one thing Reagan did seem to care about (aside from politely answering his fan mail): speeches. Reagan would rewrite his own speeches, removing abstract verbiage and adding homespun stories. And it was out of this concern that he stumbled into launching the Star Wars initiative.
A similar problem gives the book its title. Imagine you get some utility from having a vibrant downtown of independent shops. Then a Wal-Mart opens up on the outskirts of town. You begin shopping at the Wal-Mart because the prices are cheaper and you can still walk through the vibrant downtown when you like. But with everyone buying things at Wal-Mart, the downtown stores can no longer afford to stay open and the center of your city turns into an empty husk. You’d prefer to have the vibrant downtown to the Wal-Mart, but nobody ever gave you that choice.
“Abundance, like growth itself, is a force that is changing our world in ways that we experience every day, whether we have an equation to describe it or not.” (p.
I think John Searle might be my favorite living philosopher.
Nonetheless, I will defend the Chinese Room Argument. The basic idea, for those who aren’t familiar with it, is this: imagine yourself being placed in a room and given instructions on how to convert one set of Chinese symbols to another. To outsiders, if the instructions are good enough, it will seem as if you understand Chinese. But you do not consciously understand Chinese — you are simply following instructions. Thus, no computer can ever consciously understand Chinese, because no computer does more than what you’re doing — it’s simply following a set of instructions. (Indeed, being unconscious, it’s doing far less.) The Chinese Room Argument works mainly as a forcing maneuver. There are only two ways out of it: you can either claim that no one is conscious or that everything is conscious. If you claim that no one is conscious, then there is no problem. Sure, the man doesn’t consciously understand Chinese, but he doesn’t consciously understand English either. However, I don’t think anyone can take this position with a straight face. (Even Daniel Dennett is embarrassed to admit it in public.)
The alternative is to say that while perhaps the man doesn’t consciously understand Chinese, the room does. (This is functionalism.) I think it’s pretty patently absurd, but Searle provides a convincing refutation. Functionalists argue that information processes lead to consciousness. Running a certain computer program, whether on a PC or by a man with a book or by beer cups and ping pong balls, will cause that program to be conscious. Searle points out that this is impossible; information processes can’t cause consciousness because they’re not brute facts. We (conscious humans) look at something and decide to interpret it as an information process; but such processes don’t exist in the world and thus can’t have causal powers.
Despite the obvious weakness of the arguments, why do so many of my friends continue to believe in functionalism? The first thing to notice is that most of my friends are computer programmers. There’s something about computer programming that gets you thinking that the brain is nothing more than a special kind of program. But once you do that, you’re stuck. Because one property of computer programs is that they can run on any sort of hardware. The same program can run on your Mac or a PC or a series of gears and pulleys. Which means it must be the program that’s important; the hardware can’t be relevant. Which is patently absurd. I used to think that part of the reason my friends believed this was because they had no good alternatives. But I’ve since explained to them Searle’s alternative — consciousness is a natural phenomenon which developed through evolutionary processes and is caused by the actions of the brain in the same way solidity is caused by the actions of atoms — and it hasn’t caused them to abandon their position one bit.
if a computer program acted conscious, if it plaintively insisted that it was conscious, if it acted in all respects like the conscious people we know in the real world, then it must be conscious. How could we possibly tell if it was not? In short, they believe in the Turing Test as a test for consciousness — anything that acts smart enough to make us think it’s conscious must be conscious. This was the position Ned Block was trying to refute when he postulated a computer program known as Blockhead. Blockhead is a very simple (although very large) computer program. It simply contains a list of all possible thirty minute conversations. When you say something, Blockhead looks it up in the list, and says whatever the list says it’s supposed to say next. (Obviously such a list would be unreasonably long in practice, perhaps even when heavily compressed, but let us play along theoretically.)
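A toy version of Blockhead (my own sketch — an absurdly small table standing in for Block’s exhaustive one) makes the point plain: the “conversation” is pure lookup, with no understanding anywhere in it.

```python
# Illustrative Blockhead: every reply is keyed on the exact transcript so far.
# Block's thought experiment imagines this table completed for every possible
# thirty-minute conversation; the mechanism stays the same -- table lookup.

REPLIES = {
    (): "Hello! How are you?",
    ("Hello! How are you?", "Fine, thanks."): "Glad to hear it.",
}

def blockhead(conversation_so_far):
    """Return the scripted reply for this transcript, if the table has one."""
    return REPLIES.get(tuple(conversation_so_far), "I have nothing to say.")

print(blockhead([]))
print(blockhead(["Hello! How are you?", "Fine, thanks."]))
```

Make the table complete and Blockhead passes any thirty-minute Turing Test, which is exactly Block’s challenge to the behavioral criterion.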
Some people argue that because the evidence for determinism is so overwhelming, free will must simply be an illusion. But if so, it is a very odd kind of illusion. Most illusions result from a naive interpretation of our senses. For example, in a classic illusion, two drawings of equal size appear to be of different size. But when we are told this is an illusion, we can correct for it, and behave under the new (more accurate) impression that the drawings are in fact of equal size. This simply isn’t possible with free will. If someone tells you that you do not actually have free will but have actually been acting under an illusion, you cannot sit back and let determinism take over. When the waiter asks you whether you like soup or salad, you cannot say “Oh, well I’ve just learned that free will is an illusion and all my actions are completely determined by the previous state of the world, so I’ll just let them play themselves out.” I mean, you can say that, but the waiter will look at you like you’re crazy and you will get neither soup nor salad.
A couple years ago the Showtime network called Ira Glass, the head and host of This American Life, and asked him if he wanted to make a television version of his show.
I’ve collected thousands of actual facts from real scientists and the verdict is in: people don’t matter, except for a couple of rare exceptions, and you’re not one of them. Sorry.
“It’s like watching someone shovel Mars Bars into his gob while telling you how much he hates chocolate,” Doctorow complains. Doctorow’s conclusion? Blogs are just better. But I think Mars Bars are just the right analogy. Everyone in America knows that it’s easy to accidentally find yourself stuffing your face with junk food when you’re not paying attention. But no one would seriously maintain that junk food is better than fine cuisine. It’s just easier.
Looking at photos of sunsets or reading one-liners takes no cognitive effort. It’s the mental equivalent of snack food. You start eating one and before you know it you’ve gone through two cans of Pringles and become a world expert on Evan Williams’ travel habits. We need to stop pretending that this is automatically a good thing. Perhaps Procter & Gamble doesn’t care if they’re making us into a nation of fat slobs, but there’s no reason why programmers and the rest of the startup world need to be so amoral. And no doubt, as pictures of cats with poor spelling on them become all the rage, people are beginning to wonder where all this idiocy is leaving us. Which is where apologists like Doctorow and Steven Johnson step in, assuring us that Everything Bad is Good For You.
Nobody prefers farting to thought. It’s just that, as David Foster Wallace noted about television, “people tend to be extremely similar in their vulgar and prurient and dumb interests and wildly different in their refined and aesthetic and noble interests.” Similarly, no one (Doctorow included, I suspect) actually prefers blog posts to novels; it’s just that people tend to have more short chunks of time to read blog posts than they do long chunks of time to read novels. Technology was supposed to let us solve these problems. But technology never solves things by itself. At bottom, it requires people to sit down and build tools that solve them. Which, as long as programmers are all competing to create the world’s most popular timewaster, no one seems likely to do.
funding treatment programs for drug addicts would reduce drug use by 1% at a cost of only $34 million. In other words, for every dollar spent on trying to stop drugs through source-country control, we could get the equivalent of twenty dollars benefit by spending the same money on treatment. This isn’t a bunch of hippy liberals saying this. This is a government think tank, sponsored by the US Army.
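The quoted ratio can be made concrete with a back-of-the-envelope check (a sketch that assumes the 20:1 cost-effectiveness ratio from the passage is exact and scales linearly, which the study itself may qualify):

```python
# Back-of-the-envelope check of the 20:1 claim quoted above.
# Treatment: roughly a 1% reduction in drug use for $34 million.
treatment_cost = 34e6   # dollars, for a 1% reduction
benefit_ratio = 20      # treatment said to be ~20x more cost-effective

# Implied cost of the same 1% reduction via source-country control:
control_cost = treatment_cost * benefit_ratio
print(f"${control_cost / 1e6:.0f} million")  # $680 million
```

That is, getting the same 1% reduction through source-country control would run on the order of $680 million rather than $34 million.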
Second, you have to show the genetic differences are relevant.
Third, you need to prove that the differences are unavoidable. The brain has amazing levels of neuroplasticity.
Fourth, you need to show a causal link from the genetic difference to the tenure disparity. Why is it that doing worse at math causes you to do worse at tenure? Are speed-math-tests used as a relevant factor in tenure decisions? If so, maybe you guys should really cut that out, because that’s a pretty stupid test. Fifth, you need to show that it’s the only cause of discrimination. Even if genetic differences cause some of the disparity, it’s still morally required for us to remove the rest.
I read over a hundred and twenty books in 2006. Some of them were OK. Some were good. Some were very good. Here are the handful that I can recommend you read without any reservations. This isn’t a top ten list; the books aren’t in any particularly meaningful order. These are just the books that I can honestly say that, as a human being, I think you will enjoy reading (and you’ll be a better person for having done so).
Thomas Geoghegan, Which Side Are You On?: Trying to Be for Labor When It’s Flat on Its Back One would think a book on labor history would be dreadfully dull and, more to the point, depressing. And yet, in the first chapter of this book, I found something that made me laugh or smile widely on practically every page.
Robert Karen, Becoming Attached: Unfolding the Mystery of the Infant-Mother Bond and Its Impact on Later Life At the beginning of the last century, doctors thought parental love was unimportant: parents weren’t allowed to even visit their kids in the hospital, psychology experts encouraged moms not to hug or kiss their children, the US government handed out pamphlets on how to be firm with your children. This tour de force book tells the amazing story of how all that was overturned by a group of dedicated scientists whose research into the subject of parental love brought some of the most stunningly strong results in the entire field of psychology. Thrillingly good story, textbook on the science, and self-help guide all in one — I can’t recommend this book enough.
I realized what I really wanted was books that were compulsively readable, the kind you slurped down like wet noodles, where once you started in on them you just couldn’t stop. I’ve read a couple books like that, but not many. For example, David Boies’ autobiography isn’t the kind of thing I would normally pick up. Self-aggrandizing autobiographies aren’t exactly my thing and Boies, while interesting, isn’t exactly a topic of fascination for me. But when I found myself holding it with a couple minutes to kill, I started reading it and before I noticed it I was most of the way through the 500-page book. Or take James Wolcott’s Attack Poodles. Now I love a good media-bashing as much as the next guy, but this one I just couldn’t put down. I had to sneak away from dinner to finish reading it. And when it was done, I found myself wanting more.
Early this year, when I left my job at Wired Digital, I thought I could look forward to months of lounging around San Francisco, reading books on the beach and drinking fine champagne and eating foie gras. Then I got a phone call. Brewster Kahle of the Internet Archive was thinking of pursuing a project that I’d been trying to do literally for years. I thought long and hard about it and realized I couldn’t pass this opportunity up. So I put aside my dreams of lavish living and once again threw myself into my work. Just as well, I suppose, since San Francisco’s beaches are freezing cold, champagne has a disgusting taste, and foie gras is even worse.
So today I’m extraordinarily proud to announce the Open Library project. Our goal is to build the world’s greatest library, then put it up on the Internet free for all to use and edit. Books are the place you go when you have something you want to share with the world — our planet’s cultural legacy. And never has there been a bigger attempt to bring them all together.
You ever notice how when you learn a new word you begin seeing it used everywhere? Lately I’ve been feeling that way about consciousness. I knew the word before, obviously, but lately I’ve clarified my thoughts about what it is and sloppy usage of the term sticks out like a sore thumb. “Consciousness”, the dictionary kindly explains, is “the state or condition of being conscious.”
Now we don’t know for sure what causes consciousness (it’s an ongoing research project) but whatever the answer is, it must be caused by something. Yet this obvious fact is continually missed by laypeople who make bizarre comments like “as soon as computers become self-aware, they might become conscious”.1 This is as absurd as saying that as soon as computers are told about food, they might start digesting things. Consciousness isn’t some vague property of things that look smart to us. It has a real, physical meaning: feeling things. I suppose it’s logically possible that a talking robot might start feeling things, but the chances seem awfully remote.
After what seemed like years working in the Reddit isolation chamber, I began saying yes to all the interesting projects that came my way as soon as I got out. And there were a lot of interesting projects. I signed up to build a comprehensive catalog of every book, write three books of my own (since largely abandoned), consult on a not-for-profit project, help build an encyclopedia of jobs, get a new weblog off the ground, found a startup, mentor two ambitious Google Summer of Code projects (stay tuned), build a Gmail clone, write a new online bookreader, start a career in journalism, appear in a documentary, and research and co-author a paper. And that’s not including all the stuff I normally do. I’ve actually been spending most of my time catching up on my 2000 email backlog, reading a book a week, following a bunch of weblogs, and falling in love. (Falling in love takes a shockingly huge amount of time!) Yes, it has been pointed out to me that I’m insane.
So how is it that I’m able to do so many tasks that even I, upon reflection, can’t see how they’re all getting done? The secret is to be interrupt-driven. Previously, if I wanted to do something, I’d immerse myself in that thing. I’d wake up in the morning thinking about the problem, spend all day either working on it, reading background materials for it, talking to friends about it, thinking about it in bed before I went to sleep and then dreaming about it. I’m sure I did much better work this way: all that thinking and dreaming led to lots of ideas I wouldn’t have had otherwise. And it was fun, too. Immersing yourself in a problem can be very enjoyable. (See Mihaly Csikszentmihalyi.)
In the new system, for every task I have a partner. And they’re the one responsible for thinking about it. Perhaps they don’t immerse themselves in it totally like I sometimes did, but it takes up a substantial portion of their mental energy. And so, after we hash out the big plan together at the beginning, they work on it and think about it and worry about it and when they get stuck or finish a piece or just want to talk about it, they shoot me an email. Meanwhile, I sit at home and just deal with these emails, answering questions and solving problems as they come up. Now while I think giving people a partner on these projects is really valuable (I wish I had one on more of my immersive projects) I’ll readily admit that I’m lousy at it. I’ve been spending too much time ill and traveling and with people in my life to respond promptly or in detail. (I’m sorry guys; I’m going to work hard to pare things down and do better.) But I think the larger principle is valuable. This is clearly how the real big-shots “get things done”.
Paul eventually became convinced that we had written lots of good code but wouldn’t release it because we were perfectionists. Knock it off, he would tell us. It’s more important to get it up than to get it right. Paul had become convinced that users love seeing new features, it gave them the impression of an exciting vibrant site. There is something to this, of course. But I have a contrary proposal: users love perfectionism. Creating something brilliant is a process of continual refinement: adding bits where they’re needed, chipping off others that aren’t, and sanding everything smooth once it’s in place, then polishing it until it gleams.
When I decided to get into political journalism, Extra! was the first magazine I turned to. Every other month they issue a brilliant magazine full of articles which collect and dissect the standard media narratives on a particular issue and then lay out the real story for you. It’s invaluable. I think of their work as a good digest of the news: you get the same misinformation you’d get everywhere else but you also get how and why it’s misinformation.
Cooling out marks is how institutions persuade people to accept things they think are wrong. The con-man convinces you getting stolen from is OK. Your job convinces you it’s OK that they’re corrupt. The restaurant persuades you it’s OK that they’re incompetent.
I have a lot of illnesses. I don’t talk about it much, for a variety of reasons. I feel ashamed to have an illness. (It sounds absurd, but there still is an enormous stigma around being sick.) I don’t want to use being ill as an excuse. (Although I sometimes wonder how much more productive I’d be if I wasn’t so sick.) And, to a large extent, I just don’t find it an interesting subject. (My friends are amazed by this; why is such a curious person so uncurious about the things so directly affecting his life?) One of my goals for this blog is to describe what it’s like to be in various situations and it struck me that I’ve never said much about what it’s like to be sick. So I figured I’d try to remedy that. (Unfortunately, being sick has made this slightly more difficult. I started this post on Thanksgiving and now it’s almost four days later.)
Everything you think about seems bleak — the things you’ve done, the things you hope to do, the people around you. You want to lie in bed and keep the lights off. Depressed mood is like that, only it doesn’t come for any reason and it doesn’t go for any either. Go outside and get some fresh air or cuddle with a loved one and you don’t feel any better, only more upset at being unable to feel the joy that everyone else seems to feel. Everything gets colored by the sadness.
When I was a kid, I used to take Saturdays to read, really read, devouring five or six books in one sitting. I haven’t read like that in years, but now I’m doing it again — checking out stacks of books from the library and setting upon them one by one. It’s fantastic.
I feel like the books are bringing me back — back not only to health, but to the world of thought and action, the world of accomplishment, the world of doing something grand with oneself. It’s fantastic.
Open Library: Moderate success. Open Library has a team of six or so people working on it. It’s not progressing as quickly as I would like, but it is progressing. The demo site has launched and there are no big hurdles in our path. Personally, I’ve learned an enormous amount about managing projects.

Memoir: Failed. At the end of last year, I planned to take a week this year to write a memoir of life at Reddit. The writing proceeded on schedule, but toward the middle of the week I fell terribly ill, realized I would not be happy with the resulting book, and reluctantly abandoned it. Why did it fail? There were lots of mid-level reasons (I’m not that interesting a subject, my memory of the period was poor, I was doing it with impure motives) but the fundamental one was that I just did not honestly believe a memoir of my time at Reddit was a book worth reading. I only lost a couple days, so this was not a devastating failure.

Psychology book: On hold. After the Reddit book, I began researching and outlining a book about the highlights of psychology. I am still interested in this book and think about it a fair amount, but I put the project on hold after I started work on Open Library.

Another book: Unknown. In the other post I said I was working on three books. I can’t remember what the third one is now. I think about different books a lot but I don’t think I’ve pursued any others very seriously.

Consult on Berkeley Big Ideas: Failed. I had no big ideas for helping this project and I did not follow through on finding any. I had made no strong commitments, so I do not consider this a serious failure.

JobBook: Intermediate. An initial site is launched but I have not spent a lot of time on it, nor is it progressing rapidly. Other people are pursuing it, though, so I feel less pressure.

Science That Matters: Intermediate. This site has launched and a fair number of posts have been written, but it hasn’t been updated in a while and nobody else has really joined seriously. But the site hasn’t been officially abandoned, so it’s not a total failure.

Jottit: Moderate success. The site was finished and launched and got incredibly positive reviews but hasn’t really caught fire with traffic yet. Perhaps this is because it’s a great product that no one wants to use or perhaps we simply haven’t figured out how to market it yet. (If you want to try to market it, contact email@example.com.)

Statful: Intermediate. This was a project I mentored for Summer of Code to allow for better web stats visualization. I was a fairly bad mentor, in retrospect. A fair amount of the coding got done, but I did not have a clear design in place ahead of time, so the software is not especially usable. If any readers want to help work on the design for a free software web stats analyzer (you know, something that will tell you who’s visiting your site) please email firstname.lastname@example.org. I still think the project is finishable, but it has certainly languished for a while.

Seddit: Failure. This was the other Summer of Code project. The code got written but not tested or launched and has since languished. The schedule budgeted time for launching the project, but apparently not enough. I think the lesson here is that in such projects launching should be done first and features added later, so that whatever results is usable. Also, I should be a better mentor.

Gmail clone: Failure. I got a couple of people started working on this project but I didn’t follow through because I felt too overloaded. However, people have since pointed me to Posterity and Sup. Again, the consequences weren’t too bad. I wasted some people’s time, but not an enormous amount.

Book reader: Intermediate. I finished 70% of this but never launched it. Just never got around to it. The same was true of a number of other web projects.

Journalism: Success. I had two articles published for money and a third is on its way to the editor.

Paper: Intermediate. Plagued by difficulties, the paper has been teetering on the brink of failure for another year. I think a big problem was not having a partner with enough time to push me about it more.

Novel: Intermediate. I had to stop working on the novel when I got sick, but I mostly did it on time when I was OK, and a shocking number of people seemed to actually read it. I will try to finish it early this year.

All in all, I think it was a fairly mixed year project-wise. A couple minor failures, a couple minor successes, and a couple big projects where time will tell. In truth, it may be too soon to evaluate this year. If Open Library becomes a big success, this year will have been well worthwhile. Otherwise, its legacy will be more mixed.
John Searle, Mind, Language, and Society This book combines, in abbreviated form, the arguments in all of Searle’s work up to the time of its publication (which is basically everything except Rationality in Action). In doing so, it basically has all the hallmarks of Searle’s work, only more so: a clear exposition of complex philosophical ideas that demonstrates how to think better, useful tools to help you understand your world better, and genuine philosophical solutions that let you resolve confusions you may have had before. And all of it is done with Searle’s customary wit and style — almost an anti-style, like that of D. J. Bernstein, in which he simply explains things clearly and concisely. A model of public philosophy.
So that’s why I’ve started a new community site for people who work with large data sets. It’s called theinfo.org and I’d really appreciate it if you joined the mailing lists and spread the word. http://theinfo.org/
I don’t know enough about the subject to vouch for it, but this article claims that neurons are small enough that we could see quantum effects in their high-level behavior:
So here’s the proposal: a series of entangled quantum particles at the synaptic level allow for coordinated firing patterns which occur in response to choices by our conscious free will. Just as my previous post reconciled free will with statistical randomness, this would seem to reconcile free will with the neuroanatomy.
There’s a lot of talk, here and elsewhere, about how Internet collaboration is going to revolutionize business and politics. Just add some Internet collaboration, they say, and your business will suddenly start working better and smarter—and cheaper, as well. But the Internet is not this magic pixie dust you can sprinkle on anything. In the States, the back of every ketchup bottle now has a notice explaining that you can now create your own advertisements for the ketchup company. In return, well, in return they might use your ad. This is magic pixie dust thinking at work: people are not going to suddenly start designing your ad campaigns for you just because you asked them to. We have to remember that these things are done by real people, not magical abstractions. The rhetoric often suggests that some magical force of “peer production” or “mass collaboration” has written an encyclopedia or created a video library. Such forces do not exist; instead there are only individual people, the same kind of people who drive everything else. The power is that these people are collaborating. But they are collaborating because they have come together to form a community. And a community works because it has shared values. But here’s the thing: these shared values are profoundly anti-business. [Laughs from the audience.] I mean, look at Wikipedia. This is a group who wakes up every day and tries to put the encyclopedia publishers out of business by providing a collection of world knowledge they can give away to everyone for free. If you want someone to do your company’s work for you, finding a well-organized online community with strong anti-business values seems like a bad idea. [Laughs.] So what do you do? I have a friend who is even more brash than I am and when anyone asks her for business advice she tells them simply: Well, in the future, your servants are going to rise up and eat you. So, invest in toothpicks.
gender gap in technology (which, I assure them, is worse than in any other field and a result of the most disgusting discrimination and misogyny) to the future of news (freelancers and aggregators, not institutions).
Michael Geist and his family. Geist is invariably referred to as “the Lawrence Lessig of Canada”.
That night there was a talk by Rory Stewart, the other person I got to know and enjoy at the conference.
He somehow got it in his head to walk across Afghanistan, and his talk consisted of photos and descriptions of this incredible journey. He began in a major city and walked for years, depending almost entirely on the hospitality of strangers in each town to keep him alive and moving. He walked every day, through deserts and snowstorms, with company and without. And what he found was an incredibly kind people, living in terribly poor conditions in autonomous villages, with a passionate faith in their religion (including such rules as keeping women out of sight).
As someone who wants to make a difference in the world, I’ve long wondered whether there was an effective way for a programmer to get involved in politics, but I’ve never been able to quite figure it out. Well, recent events and Larry Lessig got me thinking about it again and I’ve spent the past few months working with and talking to some amazing people about the problem. I’ve learned a lot and must have gone through a dozen different project ideas, but I finally think I’ve found something. It’s not so much a finished solution as a direction, where I hope to figure more of it out along the way.

So the site is called watchdog.net and the plan has three parts. First, pull in data sources from all over — district demographics, votes, lobbying records, campaign finance reports, etc. — and let people explore them in one elegant, unified interface. I want this to be one of the most powerful, compelling interfaces for exploring a large data set out there.

But just giving people information isn’t enough; unless you give them an opportunity to do something about it, it will just make them more apathetic. So the second part of the site is building tools to let people take action: write or call your representative, send a note to local papers, post a story about something interesting you’ve found, generate a scorecard for the next election.

And tying these two pieces together will be a collaborative database of political causes. So on the page about global warming, you’ll be able to learn more about the problem and proposed solutions, research the donors and votes on the issue, and see or start a letter-writing campaign.

All of it, of course, is free software and free data. And it’s all got a dozen different APIs to make it easy for others to build on what we’ve done in their own work. The goal is to be a hub, connecting citizens, activists, organizations, politicians, programmers, and everybody else who’s interested in politics.
No, I think the most important thing a person in charge of a large company can work on is sociology — designing the social structure of the company. It’s the sociology that determines who gets hired, what their life is like, how much freedom they have, what sorts of things they work on, etc. Clearly these structures determine an enormous amount about the corporation. And yet, strikingly, I’ve never heard of a single corporation that has a high-level group devoted to studying and improving them. “Practical men,” Keynes famously wrote at the end of his General Theory, “who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist.”
Instead, the real innovation hasn’t come from companies, but the online peer-production projects, like GNU/Linux, that take contributions from a distributed set of volunteer contributors. But such groups solve the problem largely through eliminating it — they don’t have to worry about who to hire and how to treat them because they don’t hire anyone. Instead, most of the people who work on GNU/Linux are hired by other companies where they must contend with the antiquated social structures that those companies provide. And since those are the brutal facts that most humans must contend with, it would be nice if more people were thinking about alternatives.
The streets of San Francisco are lined with poor people looking for a little spare change. Many different strategies are tried — some just shake a jar, others call for help, some make specific small requests, and a fellow I saw today just kept sunnily repeating “a nickel and a smile will last a long while” in an endearing tone. Others, however, try to earn their keep — playing music, doing tricks, selling special papers like Spare Change. I have a strong urge to help out the first group, those who simply ask, but helping the second has always struck me as odd. People tell me that it’s better if the poor receive their money by doing work, because it lets them retain some dignity, but I’ve never quite bought that. After all, how much dignity do you get when your income comes from people patronizingly pretending to buy a newspaper specially created for this ruse? But there’s a much more serious problem with only giving the poor money for doing things. It encourages them to think their worth as a person is defined by their success in the capitalist economy. Now there is a grain of truth to this delusion.
There are many useful jobs that society doesn’t compensate well. There are many useful people who can’t do any of those jobs because society never trained them or gave them the opportunities required. And even if, perchance, there existed someone who cannot and even with training and opportunity could not do anything useful, it seems clear to me that their simple existence as a human being endows them with some inalienable value. (If human beings didn’t have value, then we would have no one to do useful things for.) People on the street don’t deserve our money because they can pretend to do certain menial jobs. Nor should their sense of dignity be bound up in doing them. Instead they, like everyone else, deserve our money because they are people and if we cannot care for other people, then we have precious little else.
“I never give money to those people,” she said. “They’re only going to spend it on drugs, anyway.” And what’s so wrong with that?, I wondered. I can see why one might want to discourage Harvard students from spending all their time getting stoned (although, I have to say, I don’t see anyone doing that), but if your life is spent sitting outside, hungry, cold, and miserable, drugs seem like a pretty decent use of the money. But, more importantly, since when is that your call to make? That you live in a nice house with a bulging wallet and he lives on the street is due to an enormous number of random factors that could just as easily have been reversed. And even if you’re arrogant enough to believe you’re a better person in some way — smarter, harder-working, more ambitious — since when does being better give you the right to tell other people how to live their lives?
It is a sad fact of reality that you have money and he has none and that, as a result, he needs the money to buy material goods. But no moral consequences can be derived from this. Just because history has given you the power to choose whether this person can acquire certain material goods doesn’t give you the right to make that call. Now it’s true, you don’t have to give him money at all. Most don’t. But if you feel that other people deserve to live a life without privation, at least let them choose how to live that life.
As he reaches his late sixties, it is understandable if he begins to think of his legacy. That certainly would help explain his latest book, Explaining Social Behavior: More Nuts and Bolts for the Social Sciences (Cambridge University Press, 2007), a 500-page masterpiece that I expect will be seen as the summation of a brilliant career. It’s a book unlike any other and, as a result, unless read from start to finish can seem bizarre, if only because one has little sense of what the book is trying to do. It is not a guidebook, or a textbook, or a piece of social science in itself. In short, it is nothing less than an attempt to summarize an idealized vision of the whole of social science in simple language. The book’s foundational assumption (as implied by its title) is that the goal of social science is to discover explanations for social phenomena.
His explanations are peppered with examples from an amazing variety of sources: ancient history, recent history, personal experience, the classics of social science (e.g. Tocqueville), the great philosophers (Montaigne, Pascal, Mill), and classic novelists (e.g. Proust). The result is a book which not just introduces readers to the discoveries of the social sciences but to the intellectual world as a whole. Bibliographical notes following each chapter as well as the conclusion provide a rich guide for further exploration. And yet it’s not simply a compendium of interesting results in the social sciences, but attempts to defend a particular conception of what the social sciences should be. In the conclusion, Elster defends his notion of social science as the attempt to discover particular explanations for particular phenomena against the “soft obscurantism” of the literary theorists and the “hard obscurantism” of the economists. As part of this, he turns his back on the notion of rational-choice models being an explanation in themselves, noting that their many assumptions are in desperate need of empirical defense.
It believed in the intelligence of its audience. It didn’t try to pander with sex or disasters or quick cuts. It took a serious news story and investigated it thoroughly for a full hour, with only one break. And it didn’t try to dumb any of it down — it explained the whole thing, from top to bottom. It didn’t assume you already knew the subject. Most news stories on important topics are incomprehensible to the average person who doesn’t know much about their topic.
It was done in an entertaining and conversational tone. It didn’t treat the news as some important series of facts that had to be seriously conveyed to you.
Together, these three points seem like the recipe for a genuine news show: intelligent, comprehensive, and entertaining.
Like many who want to duplicate success they do not understand, the social sciences have been obsessed with duplicating the form of the natural sciences and not their motivations. Just as rival music player manufacturers have tried to copy the look of the iPod without understanding why it takes that look, the social sciences have copied the structure of the natural sciences without understanding why they take that structure.
The greatest success of the natural sciences is undoubtedly the laws of physics. Here, a handful of simple equations can accurately predict the motion of a vast variety of everyday objects under common actions. Seeing this, social scientists have aspired to derive similar laws that predict the behavior of whole societies. (Others, meanwhile, insist the entire project is impossible because society will respond to the creation of the law, making the law invalid — reflexivity.)
various kinds of motion (like falling objects) were described, rules for their behavior deduced, and commonalities in those rules discovered. Eventually the commonalities were so great and the rules so few that a handful of laws could explain most of the phenomena, but this was not assumed a priori. Jon Elster argues that the social sciences should proceed in a similar way: various social phenomena should be described, the mechanisms that give rise to them explained, and the commonalities among mechanisms discovered.
Analytical philosophers do not take as their task grand law-like explanations for the world. Instead, they settle upon a particular concept — language, free will, ethics — and try to discover its logical structure. In doing so they often develop tools shared in common with other philosophical projects. This similarity can perhaps be best seen in the work of the man who is Jon Elster’s closest equivalent in the world of analytical philosophy, John Searle. In his career, Searle has addressed a number of topics: language, intentionality, consciousness, social reality, and rationality. Throughout, he has taken as his task providing a clear description of each phenomenon and explaining the pieces it consists of. And in explaining those pieces, he frequently develops tools that he reuses in his other explanations.
sciences. Paul Krugman recently noted that while Larry Bartels (in his new book Unequal Democracy) provides solid, convincing evidence that Republican presidents systematically preside over slower growth and increasing inequality, most social scientists don’t believe him because we haven’t yet identified the mechanisms. Krugman: Now, I’m a big Bartels fan; I’ve known about this result for quite a while. But I’ve never written it up. Why? Because I can’t figure out a plausible mechanism. Even though I believe that politics has a big effect on income distribution, this is just too strong — and too immediate — for me to see how it can be done. Sure, Republicans want an oligarchic society — but how can they do that? Bartels, for his part, argues that providing the mechanisms isn’t his job — his goal is to highlight the phenomena and encourage many others to research the mechanisms:
But having been through a startup myself, I think there’s much more you can do in the other direction: decreasing economic inequality. People love starting companies. You get to be your own boss, work on something you love, do something new and exciting, and get lots of attention. As Daniel Brook points out in The Trap, 28% of Americans have considered starting their own business. And yet only 7% actually do. What holds them back? The lack of a social safety net.
imagine if the government provided a basic minimum income, like Richard Nixon once proposed. Instead of having to save up (increasingly difficult in a world in which the only way to survive is on credit card debt) or borrow money to stay afloat, you could live off the government-provided income as you got things started. Suddenly having to quit your job would no longer be such a huge leap — there’d be a real social safety net to catch you. (Not to mention if those labor laws some people want to loosen required your old job to take you back if things didn’t work out.)
It’s been great seeing everyone, but like most locals, they’re all puzzled as to why I’m leaving. I’ve been struggling to explain why. When I say the weather, everyone just laughs. When I say San Francisco is too loud, they start arguing. When I say it’s the people, they tell me to find a better group of friends. And the thing is, they’re right. It’s none of these. I’ve been spectacularly unable to articulate it, but the real answer is simpler and more prosaic. And now, after great thought and struggle, I realize the answer is simply this: Cambridge is the only place that’s ever felt like home. It’s that simple.
If we work to save 50% on everything, big or small, that’s the equivalent of saving 50% of our money altogether. Whereas if we only try to save fixed amounts on every purchase, how much we save is dependent on how many things we buy.
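The arithmetic behind this claim is easy to check. Here is a minimal sketch; the purchase amounts are made-up illustrative numbers, not figures from the post:

```python
# Hypothetical purchases, big and small (illustrative amounts only).
purchases = [1200, 45, 8, 300, 2.50]

# Saving 50% on every purchase halves total spending no matter the mix:
percent_saved = sum(p * 0.50 for p in purchases)
print(percent_saved / sum(purchases))  # -> 0.5, regardless of the list

# Saving a fixed $2 per purchase depends only on how many things you buy:
fixed_saved = 2 * len(purchases)
print(fixed_saved / sum(purchases))  # a small fraction for these amounts
```

The proportional rule scales with the size of each purchase, so the total saved is always the same share of total spending; the fixed rule does not.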
James K. Galbraith’s The Predator State is undoubtedly one of the most important books on the economics of our era. Galbraith sets himself the task, not only of exposing the discredited economic orthodoxies of our generation, but also documenting the economy as it really exists, and setting an agenda for the future. It is a book that desperately needs to be listened to. And, even better than all that, it’s a fun read.
2: Friedman and friends said that markets would lead to democracy — that “economic freedom” begets political freedom. But economic freedom isn’t what it sounds like; it’s not freedom from economic want but instead, as Friedman put it, “the freedom to choose” or, in other words, “the freedom to shop”.
changes in interest rates dwarf changes in tax rates; furthermore, real investment is encouraged by high personal taxes, since this forces people to keep their money in corporations,
The basic idea behind the Hollywood Launch is simple: you release a few hints about your product to build buzz, slowly revealing more and more until the big day, when you throw open the doors and people flood your site, sent there by all the blog coverage and email alerts.
I’ll call this technique the Gmail Launch, since it’s based on what Gmail did. Gmail is probably one of the biggest Web 2.0 success stories, so there’s an argument in its favor right there. Here’s how it works:

1. Have users from day one. Obviously at the very beginning it’ll just be yourself and your co-workers, but as soon as you have something that you don’t cringe while using, you give it to your friends and family. Keep improving it based on their feedback and once you have something that’s tolerable, let them invite their friends to use it too. Try to get lots of feedback from these new invitees, figuring out what doesn’t make sense, what needs to be fixed, and what things don’t work on their bizarre use case combination. Once these are all straightened out, and they’re using it happily, you let them invite their friends. Repeat until things get big enough that you need to…

2. Automate the process, giving everyone some invite codes to share. By requiring codes, you protect against a premature slashdotting and force your users to think carefully about who actually would want to use it (getting them to do your marketing for you). Plus, you make everyone feel special for using your product. (You can also start (slowly!) sending invite codes to any email lists you might have.)

3. Iterate: give out invite codes, fix bugs, make sure things are stable. Stay in this phase until the number of users you’re willing to invite is about the same as the number you expect will initially sign up if you make the site public. For Gmail, this was a long time, since a lot of people wanted invites. You can probably safely do it sooner.

4. Take off the invite code requirement, so that people can use the product just by visiting its front page. Soon enough, random people will come across it from Google or various blogs and become real users.

5. If all this works — if random people are actually happy with your product and you’re ready to grow even larger — then you can start building buzz and getting press and blog attention. The best way to do this is to have some kind of news hook — some gimmick or controversial thing that everyone will want to talk about. (With reddit, the big thing was that we switched from Lisp to Python, which was discussed endlessly in the Lisp and Python communities and gave us our first big userbase.)

6. Start marketing. Once you start using up all the growth you can get by word-of-mouth (and this can take a while — Google is only getting to this stage now), you can start doing advertising and other marketing-type things to provide the next big boost in growth.

The result will be a graph that just keeps accelerating and climbing up. That’s the graph that everyone loves to see: solid growth, not a one-day wonder. Good luck.
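The invite-code mechanics described above can be sketched in a few lines. This is a toy model of the general idea, not Gmail’s actual implementation; every name here (InviteSystem, codes_per_user, and so on) is my own assumption:

```python
import secrets

class InviteSystem:
    """Toy sketch of invite-gated signup: each user gets a few
    single-use codes to hand out, which throttles growth."""

    def __init__(self, codes_per_user=3):
        self.codes_per_user = codes_per_user
        self.codes = {}    # outstanding code -> user who issued it
        self.users = set()

    def add_founder(self, name):
        # Day-one users (you and your co-workers) need no code.
        self.users.add(name)
        return self._issue(name)

    def _issue(self, name):
        # Hand the new user their own batch of shareable codes.
        new = [secrets.token_hex(4) for _ in range(self.codes_per_user)]
        for code in new:
            self.codes[code] = name
        return new

    def redeem(self, code, name):
        # Joining requires a valid, unused code; joiners get codes too.
        if code not in self.codes:
            raise ValueError("invalid or already-used invite code")
        del self.codes[code]   # each code is single-use
        self.users.add(name)
        return self._issue(name)

    def open_signup(self, name):
        # Final step of the launch: drop the code requirement entirely.
        self.users.add(name)
```

Because each generation of users can invite only a bounded number of friends, signups grow geometrically but controllably, which is exactly the throttling effect the post attributes to invite codes.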
I find this to be excellent advice. This is exactly the approach we took at GitHub, almost down to the letter. It took about two months until the site was good enough to use to host the GitHub source, another month until we started the private beta with invites, and three more months until public launch. Artificial scarcity is a great technique to generate excitement for a product while also limiting growth to a rate that won’t melt your servers. We worked through a huge number of problems, and early users gave us some of the ideas that have defined GitHub. Had we done a Hollywood launch, things would have been very different and, I am convinced, very much worse. Do not, I repeat, DO NOT underestimate how much your users will help you to define your product. If you launch without significant user-feedback time, you’ve essentially thrown away a massive (and free) focus group study. Let me also say that when we finally did our public launch, there was plenty of buzz, and all of it was the RIGHT kind of buzz. The buzz that attracts real, lasting customers (and no, we weren’t on TechCrunch; that traffic is garbage).
The last time my path crossed with TimBL is when I was applying for Stanford. My Dad happened to be in Cambridge at the time and insisted on asking TimBL for a letter of recommendation, by going over to TimBL’s office. Apparently he had better luck than I, since I’m told TimBL agreed and I was later accepted to Stanford. I guess a letter of recommendation from the creator of the Web counts for something, even if he is an Englishman.
In the nonprofit world, such a plan is called a Theory of Change. And the reason they’re so rare is because they’re dreadfully hard to come by. The world has no shortage of big problems, but it’s hard to think of ways we might realistically solve them. Instead, the same few things — vote, preach, march — get trotted out again and again. For over a year now, I’ve been looking for theories of change for politics. And I’ve found a few that I think just might work. But I can’t pull them off by myself. So here they are, in case someone out there wants to help. The Netroots Congress Here’s how you get elected to Congress today: First, you make friends with a bunch of wealthy people, being sure to agree with them on all the important issues. Then you take their money and hire a well-connected Washington, D.C. campaign manager. The campaign manager shows you how to ask for more money and then gives it to his partner, who makes some TV and radio ads and runs them in your district. They keep doing this until your money runs out and then, if you’re lucky, you get more votes than the other guy. Because of the netroots, it’s now possible to change the first part of this story. Instead of raising your money from conservative or centrist rich people, you can now raise money from progressive people over the Internet. So instead of candidates who all agree that telephone companies shouldn’t be punished for spying on Americans, you can have candidates who think every American should have free health care. Concretely, you’d ask people who want to do this to sign up to pay $X a month. Then you’d go around looking for candidates (or potential candidates) who genuinely believe in progressive principles. When you find them, you give them the money, and now they actually have a chance of getting elected. Bonus: Get more money by fiercely promoting how bad the incumbent is or how good the challenger is.
— his argument is that investing in early childhood education pays off in the long run by making future workers much more productive, and thus wealthy.
OCLC is running scared. My comments on their attempt to monopolize library records have been Slashdotted, our petition has received hundreds of signatures, and they’re starting to feel the heat. At a talk I gave this morning to area librarians, an OCLC rep stood up and attempted to assure the crowd that what I was saying “wasn’t entirely true”. “What wasn’t true?” I asked. “I’d love to correct things.” She declined to say, insisting she “didn’t want to get into an argument.”
Karen insists that “OCLC welcomes collaboration with Open Library”, which seems a funny way of putting it. As I said last time, they’ve played hardball: trying to cut off our funding, hurting our reputation, and pressuring libraries not to cooperate.
Blogs I Would Like to Read November 25, 2008 Original link (in no particular order) The Wonk Wing: Thoughtful exploration of important policy issues by decent writers who are clearly fascinated by their subject. Not only would you get a first-class education in the relevant issues around health care, global warming, urban sprawl, zoning, traffic, sewage, etc. but you’d have fun while doing it. Think Ezra Klein for more than just health care. Think The Wonk Room but more Sorkin and less Pennebaker. (Sorry, Wonk Room!) Perfect Devices: Coverage of things which are simply the best-in-the-world at what they do, and the stories of how they got there. I want stories from the people who calibrate bathroom-mirror lighting to be the perfect combination of brightness and diffusion “so that it’s diagnostically acute without being brutal” (ASFTINDA, 302). I want stories about the kitchen at French Laundry and Alinea. I want the start-to-finish story of H&FJ designing a typeface. (Yes, I’m eagerly awaiting Objectified.) 17th and Pennsylvania: This is the address of the Starbucks outside the White House, where apparently executive branch officials regularly grab coffee, chat, and meet with a wide variety of famous-for-DC types. Why doesn’t an enterprising Gawker Stalker simply sit there and write down what happens? This Academic Life: Stories of new papers and research results — not just a summary of the work itself, but the story of how it fits into the field’s debates, the personal intrigues of the players, the implications for the wider world. Basically, Lingua Franca returning as a blog. Evisceration Quarterly: A daily selection of the finest in insults, takedowns, and general argumentative evisceration. The motto: teaching you how to think by showing you how not to. And, to not be entirely negative, the occasional model of clarity. With special blogging consultant, Brad DeLong.
Maurer, The Big Con: The Story of the Confidence Man Luc Sante’s intro alone is worth the price of the book, but the rest of the book is fantastic as well. Everyone should know about con men. (The BBC’s Hustle is obviously a television adaptation of the
Bearman et al., Doormen. I read this because I now have a doorman and am uncomfortable about it. This helped.
DFW, Everything and More. This book is an interesting, but, I think, ultimately unsuccessful experiment. DFW tries to teach math by channelling his favorite math teacher — writing in the style of an excitable lecturer, complete with verbal tics and backtracking (which, in printed form, becomes kind of a running gag). It’s certainly not a bad book by any means, but I don’t think it’s really a successful model for how books can teach math.
DFW, Consider the Lobster DFW’s suicide hit me very hard. I ended up coping by reading every piece of nonfiction he’d ever published. He was a brilliant, tortured man and I see so much of myself in him. His nonfiction was fantastic and I will consider my life a success if I can do half of what he did. If you want to get started, I recommend (best work first): Federer as Religious Experience [B/W PDF] “David Lynch Keeps His Head”, in A Supposedly Fun Thing I’ll Never Do Again (there’s a severely abbreviated version printed in Premiere; read the real thing instead) “A Supposedly Fun Thing I’ll Never Do Again”, in A Supposedly Fun Thing I’ll Never Do Again (the Harper’s version [PDF] preserves most of the good stuff but is shorter)
Keynes, Economic Consequences of the Peace (full text online). Wow, Keynes knows how to write. The first section is a must-read for any diplomat. Chapters 4 and 5 (which unfortunately are the bulk of the book) are only worth skipping or skimming for modern readers.
Kaufman, Synecdoche, New York (scripts). What a movie! There were a lot of script reviews that said things along the lines of “I don’t know if movies can capture a script this complex.” Reading the script now, you see the exact opposite is the case. The script is a pale imitation of the film, missing most of what made the film magical. Which just underscores what a great movie it was.
everybody should agree that Bell stole the telephone from Gray after this book.
Alexander Graham Bell (or Aleck Bell, as he was then called) was the son of Alexander Melville Bell, the inventor of a system of phonic notation called Visible Speech. The elder Bell would use Aleck as an assistant in his demonstrations: After sending Aleck to wait in another room, Mr. Bell would ask the audience for a word or strange noise then write it in Visible Speech. Aleck would return and reproduce the sound from the writing alone. Voila. As a child growing up like this, he played at inventing machines that could talk and telegraphs that could listen. But he found his career in tutoring the deaf — by teaching them to pronounce the phonemes of Visible Speech, he eventually succeeded in teaching them to talk and read lips.
At the time, telegraph wires blanketed the skies of Boston, hanging in a dense web above the buildings. Many desperately wished for someone to develop a telegraph that could send multiple messages over the same wire, so that many wires could be replaced with just one. The theory was that if one could transmit the messages using different tones, they would “harmonize” instead of interfere, leading the idea to be called the “harmonic telegraph”. Naturally, Alexander Graham Bell turned his tinkering to this problem and persuaded Hubbard (as well as Thomas Sanders, another father of a Bell student) to finance his research in exchange for a share of any future US profits. Further complicating matters, Bell had fallen in love with his student, Mabel Hubbard. Mr. Hubbard made it clear he did not approve of such a marriage unless Bell made a profitable discovery. But Bell was simply a hobbyist; the real research was being done by a man named Elisha Gray.
Bell’s biographers have gone to heroic lengths to explain away all the evidence. Refusing credit for the telephone just showed Bell’s humility; not being involved in the corporation showed his dedication to pure research. The fact that both patents were filed on the same day is a grand historic coincidence — or perhaps Gray stole the idea from Bell. As a result, Gray is forgotten and Bell is remembered as one of history’s great inventors — not as he should be: a hobbyist and a fraud, forced by love into stealing one of the greatest inventions of all time.
On Capitol Hill sit many powerful people — Congressmen, Senators, Justices — but also numerous others who do the daily work of keeping government running. And, like anyone with such a weighty responsibility, they sometimes want a break: a chance to see a movie or eat out with their spouse. Kids always make these things difficult, so in the late 1950s someone thought of starting a Capitol Hill Babysitting Coop. The idea was simple: a bunch of families would get together and dole out scrip — little fake money — amongst themselves. Anytime you wanted to go out, you could just hire another family in the coop to watch your kids: one piece of scrip per hour. Later, of course, you’d earn the money back by watching someone else’s kids. It was a brilliant system and much beloved, until sometime in the 1970s.
Keynes’ genius came in seeing that the Depression wasn’t a moral problem. We’re not being punished for our exuberance or our stinginess, just as the folks on Capitol Hill weren’t at fault for not wanting to go out. In both cases, the problem wasn’t legislative, but merely technical: there just wasn’t enough money to go around. And the technical problem has a technical solution: print more money. The moralists insist it’s irresponsible for us to just print more money. After all, they say, debt got us into this mess; is more debt really going to get us out? This is what they told FDR, causing him to hit the brake on a recovery that was pulling us out of the Great Depression. This is what they told Japan, ending their recovery and plunging the country into a “lost decade” of unemployment. It’s not irresponsible to spend money; it’s irresponsible not to. Factories are lying idle, people are sitting at home unemployed, and our economy is slowing.
Aaron Swartz used a free trial of the government’s Pacer system to download 19,856,160 pages of documents in a campaign to place the information free online. Attention attractive people: Are you looking for someone respectable enough that they’ve been personally vetted by the New York Times, but has enough of a bad-boy streak that the vetting was because they ‘liberated’ millions of dollars of government documents? If so, look no further than page A14 of today’s New York Times: Aaron Swartz, a 22-year-old Stanford dropout and entrepreneur who read Mr. Malamud’s appeal, managed to download an estimated 20 percent of the entire database: 19,856,160 pages of text. Then on Sept. 29, all of the free servers stopped serving. The government, it turns out, was not pleased. A notice went out from the Government Printing Office that the free Pacer pilot program was suspended, pending an evaluation. A couple of weeks later, a Government Printing Office official, Richard G. Davis, told librarians that the security of the Pacer service was compromised. The F.B.I. is conducting an investigation.
[O]ver the course of six weeks, Mr. Swartz was able to download 780 gigabytes of data — 19,856,160 pages of text — from Pacer. The caper grabbed an estimated 20 percent of the entire PACER network, with a focus on the most recent cases from almost every circuit.
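A quick back-of-envelope check of the figures quoted above (my own arithmetic, not from the article): 780 GB across 19,856,160 pages works out to roughly 41 KB per page, and if that haul was about 20 percent of PACER, the whole database was on the order of 100 million pages at the time.

```python
# Sanity-check the quoted Pacer figures.
pages = 19_856_160
gigabytes = 780

# GB -> KB, then divide by page count.
kb_per_page = gigabytes * 1024 * 1024 / pages
print(round(kb_per_page))            # -> 41 KB per page, roughly

# If the download was ~20% of the full database:
total_pages_est = pages / 0.20
print(round(total_pages_est / 1e6))  # -> 99, i.e. ~100 million pages
```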
In order for any team to succeed, they need someone helping them all stay on track — someone who we will call a “manager”. The word manager makes many people uncomfortable. It calls up the image of a bossman telling you what to do and forcing you to slave away at doing it. That is not effective management. A better way to think of a manager is as a servant, like an editor or a personal assistant. Everyone wants to be effective; a manager’s job is to do everything they can to make that happen. The ideal manager is someone everyone would want to have. Instead of the standard “org chart” with a CEO at the top and employees growing down like roots, turn the whole thing upside down. Employees are at the top — they’re the ones who actually get stuff done — and managers are underneath them, helping them to be more effective. (The CEO, who really does nothing, is of course at the bottom.)
Together, you and your team can achieve amazing things. As a manager, your task is to serve the team — to make it as effective as it can possibly be, even if that means stepping on the toes of a few individuals. One incredibly popular misconception is that managers are just there to provide “leadership” — you set everyone up, get them pointed in the right direction, and then let them go while you go back to the “real” stuff, whether it’s building things yourself, meeting with funders, or going on the road and talking up your organization. Those are all perfectly valid jobs, but they are not management. You have to pick one. You cannot do both.
One of the nice things about having a system is it actually makes you less stressed out. Most people just keep their todo list somewhere in the back of their head. As things pile up, they become harder and harder to keep track of, and you become more stressed out about getting them all done or forgetting about them. Simply writing them down on a list makes everything seem more manageable. You can see the things you have to do — really, there’s not quite as many important ones as you thought — and you can put them in order and get that nice burst of satisfaction that comes from crossing them off. Yes, it all sounds like silly, basic stuff, but it’s important. Just having a list with all the stuff you need to do — and taking it seriously, actually going down it and checking stuff off every single day — is the difference between being a black hole of action items and being someone who actually Gets Stuff Done.
Point 2: Know your team. As a servant, it’s crucial you know your masters well. You need to know what they’re good at and what gives them trouble. You need to be able to tell when they’re feeling good and when they’re in a rut. And you need to have a safe enough relationship with them that they can be honest with you and come to you when they’re in trouble. This is not easy. (You have to be willing to hear bad news about yourself.) The most important piece is understanding what people are good at and what they like doing. A good first step is to just ask them, but often people are wrong or don’t know. So you try giving them different things, seeing how they do at them, and adjusting accordingly. But in addition to your team’s professional skills, it’s important to understand their personal goals. However much you may care about the work, at bottom it’s still a job. You need to understand why your team members took it. Was it because it seemed interesting? Because it seemed worthwhile? Because it would give them valuable experience and help them get a better job down the road? It’s important that you know, so you can make sure your tasks and expectations are in line with their goals.
Point 2a: Hire people smarter than you. You want the best working for you. People who aren’t just good at their job, but people who are also good at your job. People you can trust to not just do something right but tell you that the way you suggested doing it was wrong. People you can rely on to get things done if you just stay out of their way. At least, that’s the ideal. In practice, it’s hard to find people like that and even when you do, they still need help. I have never found the traditional methods of hiring — resumés, interviews, quizzes — to be helpful at all. Instead, I look at two things: what someone has done and whether I enjoy spending time with them. The first shows not just their talent but also their ability to execute. If they haven’t made something interesting, whether as a side project or at a previous job, then they’re probably not worth hiring. It’s not that hard to sit down and accomplish something; be wary of people who haven’t.
chat. Point 2b: Be careful when hiring friends. Everyone wants to work with their friends. After all, you have so much fun hanging out after work, why not hang out during work too? So they recruit their friends to work with them. (Or, even worse, they recruit their lovers.) But being friends is very different from being colleagues. All friends learn ways to adjust themselves to each other — which tones to use, which subjects to avoid, when to give each other space. These go out the window when you’re working together. You can’t just not say things because they’ll get your friend upset. So you say them, and they get upset, and you realize you have no way of dealing with each other when you’re like this. It makes working together difficult, to say the least. The situation is the same, but vastly worse, with couples. Plus, you’re really screwed when your relationship falls apart under the stress. If you do decide to work with people you’re close with, you need to find a way to put your other relationship “on hold” while you work together. Which means you both need to be strong enough to be able to blow up at each other at work and then go out for drinks like nothing ever happened. If you can’t do this (and few can), then either give up on the relationship or give up on the job.
Point 2c: Set boundaries. Conversely, don’t become close friends with the people you work with. You have to set some personal boundaries: you’re their manager, not their friend. Naturally, part of being a manager means that you have to talk to people about their personal problems and possibly even offer advice. After all, it’s your job to make your team effective and if personal problems are distracting from that, you are going to have to face someone’s personal problems.
Point 3: Go over the goals together. Your first job as a manager is to make sure everyone’s on the same page. The team needs to understand what they’re expected to do, why they’re doing it, and who else is involved (funding it, using it, counting on it). If you picked a good team (point 2a), they’ll hear this and find holes in your plan and catch things you hadn’t thought of. (Which is good! Together, you can fix it.) But real work can’t begin until everyone’s on board with the plan. Point 3a: Build a community. You’re not managing a bunch of individual employees; you’re managing a team.
(First law of friendship drift: Just because you like two people doesn’t mean they’ll automatically like each other.)
I have a “no asshole rule” which is really simple: I really don’t want to work with assholes. So if you’re an asshole and you work on my team, I’m going to fire you. Now, if the whole team says “gosh, that’s awful. We want to work with as many assholes as we can!” then we have a simple solution. I’ll fire me! (FYI: The “No Asshole Rule” is a book. I thought it was actually a pretty good book as far as business books go. As far as I’m concerned, anybody could stand to read 100 pages giving them the MBA-book cover they need to say to their boss: let’s get the assholes out of here.)
Point 4: Assign responsibility. First, break the plan up into parts. Make sure everybody understands the parts. Second, find a team member who wants to do each part. The key word here is wants — some things just have to get done, it’s true, but things will get done much better by people who want to do them. One of the weird facts of life is that for just about everything you hate doing, there is someone out there who loves doing it. (There are even people who get a real kick out of cleaning toilets.) You may not currently employ them and you may not be able to hire them, but that is the goal worth striving toward.
Point 4a: Vary responsibilities. Another thing to keep in mind is that most people like variety in their work. It’s very tempting to think of someone as “the finance guy” and just give them all the finance-related tasks. But in any organization there are lots of different kinds of things to do and a wide mix of people to do them. Many people will appreciate the opportunity to switch up the kinds of things they do.
Point 4b: Delegate responsibility. As the manager, it’s a continual temptation to keep important jobs for yourself. After all, they’re usually fun to do and doggone-it they’re important, you can’t risk them on somebody else! Resist the temptation. For one thing, taking jobs for yourself is one way of distracting yourself from having to do actual management (point 1). But more importantly, you’ll never be able to develop your team if you keep all the real responsibility for yourself.
Point 5: Clear obstacles. This is the bulk of what non-hierarchical management is about. You’ve got good people, they’ve got good responsibilities. Now it’s your job to do everything in your power to help them get them done. A good way to start is just by asking people what they need. Is their office too noisy? Did they get confused about something you said? Are they stuck on a particular problem? Are they overwhelmed with work? It’s your job to help them out: get them a quieter office, clarify things, find them advice or answers, shift some stuff off their plate. They shouldn’t be wasting time with things that annoy them; that’s your job. But you have to be proactive as well. People tend to suffer quietly, both because they don’t want to come whining to you and just because when you’re stuck in a rut all your attention is focused on the rut. A key part of being a manager is checking in with people, pointing out that they’re stuck in a rut, and gently helping them out.
Procrastination is the crop blight of the office-work world. It affects just about everyone and it’s very hard to fight alone. The single best way to stop procrastination is to sit down with someone and come up with the next concrete step they have to take and then start doing it together. There’s something magical about having another person sit down with you and do something that can overcome procrastination’s natural resistance. And once you get someone started, momentum can often carry them through the rest of the day. Even if all you do is help people overcome procrastination, you will be well worth it.
Point 6: Give feedback. White-collar work is lonely. You sit at a desk, staring at a screen, poking at buttons. It’s easy to get lost and off-track and depressed. That’s why it’s important to check in and see how things are doing.
Studies consistently show that people are much happier and more productive when they have control over the way they work. Never take that away.
Point 7: Don’t make decisions (unless you really have to). As manager, people will often come to you to make decisions or resolve disputes. It’s very tempting, with people looking up at you for guidance, to want to give your sage advice. But the fact is, even if (or especially if) as a manager you’re held up on a pedestal, you probably know less about the question than anyone else on the team.
Point 8: Fire ineffective people. Firing people is hard. It’s probably the hardest thing you’ll ever do. People go to absurd lengths to try and make it easier (“we’ll just try him out for a month and see how it goes” is a common one) but they never really help. You just have to bite the bullet and let people go. It’s your job. If you can’t do it, find someone else. Firing people isn’t just about saving money, or petty things like that. It’s the difference between a great organization and a failure.
Ineffective people drag everyone else down to their level. They make it so that you can’t take pride in what you’re doing, so that you dread going into work in the morning, so that you can’t rely on the other pieces of the project getting done. And assholes, no matter how talented they may be, are even worse. Conversely, there are few things more fun than working hard with a really nice, talented group of people.
Point 9: Give away the credit. As the team’s manager, there will be many opportunities where people will want to give you credit. And getting credit is nice, it makes you feel good. So you start coming up with excuses for why you deserve it, even though you didn’t do any of the work. “Well, it was my vision,” you will say. “I was the one who made it all happen.” But think of all those talented people slaving away at desks. They were the ones who actually made it happen. Make sure they get the credit.
Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas. Not, indeed, immediately, but after a certain interval; for in the field of economic and political philosophy there are not many who are influenced by new theories after they are twenty-five or thirty years of age, so that the ideas which civil servants and politicians and even agitators apply to current events are not likely to be the newest. But, soon or late, it is ideas, not vested interests, which are dangerous for good or evil. (John Maynard Keynes, General Theory, last page)
Traditional left-wing thought treats elections as epiphenomenal: build a strong enough social movement and politicians will be forced to do what you want. In this view, it doesn’t really matter who gets elected since they’re ultimately all subject to the same structural forces. Working to get someone “good” elected is really just a waste of time, since they’ll turn out to be as bad as all the others once they get into office. (Think Noam Chomsky’s comments about the unimportance of electoral politics, or the Alinskyite theory that one should try to cultivate an attitude of “fear and loathing” among politicians.) There’s clearly a great deal of truth to this — structural forces are ultimately very powerful. But I think it misses a great deal as well. This model assumes politicians are this separate class of rational actors who respond purely to electoral incentives; if your grassroots movement gets them votes, they’ll help you out, but they’re just as happy to sell you out to a higher bidder. But what if the politicians involved are actually activists themselves? What if the choice isn’t between joining an electoral campaign and joining an issue campaign, but between starting an electoral campaign and starting an issue campaign? Here I think the calculus changes wildly.
Take this story from Matt Taibbi about Bernie Sanders, the socialist Senator from Vermont: [He] kept coming back to a story about his very first meeting with the Health, Education, Labor, and Pensions Committee. At the meeting, the subject of the Head Start program had come up. Ted Kennedy, who runs the committee, had proposed a modest increase. Sanders wanted more—so he went and had a word with Kennedy after the meeting. “The end result is that we got a 6 percent increase, instead of a 4 percent increase,” he said. “Over a three-year period, that’s five hundred million dollars more. What I’m finding out is it’s just a different world. Not saying it’s better, it’s just different. If you want something you just go talk to someone in the hall. […]” He tried to sound like it was a good thing, and it might very well have been, in terms of getting more money for a worthy-enough program. But the subtext of this story was Sanders expressing amazement that he could get $500 million just by talking to someone. As any human being would, he looked blown away by the reality of his situation. (The Great Derangement, 127) Obviously there are few offices as powerful as United States Senator, but every job has opportunities for simple victories like these, if at a much smaller scale.
Who Really Rules?, by G. William Domhoff, is one of my very favorite books.
He gives an engrossing case study of how powerful businessmen get things like this done, based on extensive archival research and contemporaneous notes. And he tells an entire alternative history of American urban renewal, showing how big business turned a plan to build housing for the poor into an excuse to expel them to make room for upscale businesses. The result is a tour de force: a complete demolition of one of the most influential books of political science, an engrossing case study of how power really operates, and an example of how to do research into the people who, after all, really rule.
Nor has she sacrificed her family, taking her children with her to the office and reserving time to spend with them at home. A standout in so many ways, it seems the one struggle left is finding a new struggle. “Now that the pressure’s off,” she told a reporter, “I’ve started to ask myself: What’s my next goal? I won my black belt in karate a year ago. I’ve got tenure, a wonderful family, and a thriving business. It’s time to figure out what’s next.”
It probably doesn’t make sense to invest in just one startup, even if the returns on startups are huge. That’s why VCs invest in large numbers of startups; the returns from the wins balance out the flops.
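A quick simulation makes the point concrete. The numbers below are purely hypothetical (a toy model I chose, in which 90% of startups return nothing and 10% return 30x), but they show why spreading capital across many bets tames the risk of total loss even though each individual bet is wildly risky:

```python
import random

random.seed(0)  # reproducible runs

def startup_return():
    # Toy model (my numbers, not the post's): 90% of startups return
    # nothing; 10% return 30x. Expected value: 3x the investment.
    return 30.0 if random.random() < 0.10 else 0.0

def portfolio_return(n):
    # One unit of capital spread evenly across n startups.
    return sum(startup_return() for _ in range(n)) / n

trials = 10_000
solo = [portfolio_return(1) for _ in range(trials)]
spread = [portfolio_return(30) for _ in range(trials)]

print("P(total loss), 1 startup: ", sum(r == 0 for r in solo) / trials)
print("P(total loss), 30 startups:", sum(r == 0 for r in spread) / trials)
```

With one startup you lose everything roughly nine times out of ten; across thirty, the chance that every bet fails drops to about 0.9^30, around 4%, while the expected return stays the same. That is the VC's arithmetic.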
Take the St. Petersburg paradox. Imagine this game: A dollar is placed on the table and a coin is flipped. If the coin comes up heads, the money is doubled and the coin is flipped again. Tails, the game ends and you take the money. How much would you pay to play? The paradox comes about because the naive expected value here is infinite. There’s a 50% chance you get a dollar (=fifty cents), a 25% chance you get 2 (another fifty cents), a 12.5% chance you get 4 (again), and so on infinitely. But, naturally, it seems insane to pay a fortune to play this game. Thus the paradox.
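The arithmetic behind the paradox, plus one classic resolution (no real bank has infinite money), can be sketched in a few lines. The particular cap below is my own illustrative choice:

```python
def branch_value(k):
    # The game ends on flip k with probability (1/2)^k,
    # paying out 2^(k-1) dollars: every branch is worth 50 cents.
    return (0.5 ** k) * (2 ** (k - 1))

assert all(branch_value(k) == 0.5 for k in range(1, 20))  # so the sum diverges

def capped_value(cap):
    # If the bank can pay out at most `cap` dollars, the sum is finite.
    total, k = 0.0, 1
    while 2 ** (k - 1) < cap:
        total += branch_value(k)
        k += 1
    # Every remaining branch pays the capped amount.
    return total + (0.5 ** (k - 1)) * cap

print(capped_value(2 ** 30))  # a ~$1 billion bank: prints 16.0
```

Capping the payout collapses the infinite sum to a modest figure, which is one classic explanation of why nobody would pay a fortune to play. (Bernoulli’s other classic resolution appeals to the diminishing utility of money.)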
the explosion of information beginning in the sixties rendered them all-but-extinct and the electronic transformation of the past few decades threatens to finish the job. Still, we can’t but admire them and their milieu. This certainly seems to be George Scialabba’s position. The greatest working book reviewer — when the National Book Critics Circle inaugurated their Excellence in Criticism award, he was their first recipient — collects his reviews of these grand men’s work and a sampling of his own in his new collection, What Are Intellectuals Good For? The result is a delightful introduction to this world of ideas.
A dedicated follower of the left-rationalist-progressive tradition, I had to continually catch myself from nodding along in agreement. Recommended for anyone who’s a fan of the Intellectual Scene and the men and women who inhabit it.
The first point — that tech startups collect in Silicon Valley — is certainly true, just like car companies all tend to cluster in Detroit. This is because of a feedback effect set off by some random initial condition: Shockley Semiconductor was started in Silicon Valley, so when its employees left to start their own companies they did so there, and so on. Now everyone in the industry moves to Silicon Valley because that’s where everyone else is.
Industries tend to cluster together. The second — that startups represent a new economic phase — may also be true. It’s a rather more extreme claim, but it would be pretty cool. But I don’t think it combines with the first to create a local revolution. It’s true, tech startups have generated a lot of wealth, but they’re far from the only kind of startup to do so. The amazing thing about the Internet is that it makes all sorts of startups possible.
Silicon Valley may have had the first wave, but the next one belongs to the world.
I’ve also become increasingly skeptical of the transparency project in general, at least as it’s carried out in the US. The way a typical US transparency project works is pretty simple. You find a government database, work hard to get or parse a copy, and then put it online with some nice visualizations. The problem is that reality doesn’t live in the databases. Instead, the databases that are made available, even if grudgingly, form a kind of official cover story, a veil of lies over the real workings of government. If you visit a site like GovTrack, which publishes information on what Congresspeople are up to, you find that all of Congress’s votes are on inane items like declaring holidays and naming post offices. The real action is buried in obscure subchapters of innocuous-sounding bills and voted on under emergency provisions that let everything happen without public disclosure. So government transparency sites end up having three possible effects. The vast majority of them simply promote these official cover stories, misleading the public about what’s really going on. The unusually cutting ones simply make plain the mindnumbing universality of waste and corruption, and thus promote apathy. And on very rare occasions you have a “success”: an extreme case is located through your work, brought to justice, and then everyone goes home thinking the problem has been solved, as the real corruption continues on as before.
Ken Silverstein regularly writes brilliant pieces about the influence of money in politics. And he uses these sorts of databases to do so. But the databases are always a small part of a larger picture, supplemented with interviews, documents, and even undercover investigation — he recently did a piece where he posed as a representative of the government of Turkmenistan and described how he was wined and dined by lobbyists eager to build support for that noxious regime. The story, and much more, is told in his book Turkmeniscam. (His book Washington Babylon is similarly indispensable.) Matt Taibbi, in his book The Great Derangement, describes how Congress really works. He goes to the capitol and lays out the whole scene: the Congressmen naming post offices on the House floor, the journalists typing in the press releases they’re handed, the key actions going on behind the scenes and out of the public eye, the continual use of emergency procedures to evade disclosure laws. And Robert Caro, in his incredible book The Power Broker (one of the very best books ever published, I’m convinced) takes on this fundamental political question of “Who’s actually responsible for what my government is doing?” For forty years, everyone in New York thought they knew the answer: power was held by the city council, the mayor, the state legislature, and the governor. After all, they run the government, right? And for forty years, they were all wrong. Power was held — held, for the most part, absolutely, without any checks or outside influence — by one man: Parks Commissioner Robert Moses. All that time, everyone (especially the press) treated Robert Moses as merely the Parks Commissioner, a mere public servant serving his elected officials. In reality, he pulled the strings of all those elected officials.
Well, we’ve built it. And they haven’t come. The only success story its proponents can point to is that transparency projects have bred even more transparency projects. I’m done working on watchdog.net; I’m done hurting America. It’s time to give old-fashioned narrative journalism a try.
Journalists get mad at bloggers: “Without real reporting, they’d have nothing to comment on!” Bloggers get mad at journalists: “There’s a reason nobody reads newspapers anymore. They’re dry and dull and wrong.” But the gap is shrinking: bloggers are doing more real reporting, journalists are getting more humanized (with all the digressions, opinions, and biases that entails). So what if you paired an investigative reporter with a blogger? Reporters didn’t used to write their own stories. (Why would a good investigator be a good writer?) The reporter would be out in the field, knocking on doors and taking notes, which they’d hand to a writer at a desk, who would turn them into a coherent, vivid story. (Newsweek still operates this way.) Replace the writer with a blogger. They’d post the story as it unfolded, capturing the excitement of discovery: the big breaks, the wrong turns, the moment when it all comes together. Like any talented blogger, they’d keep people coming back: What happens next? I want to know more! They’d keep up a conversation with readers and other bloggers, sharing new leads with the reporter. It’d be a powerful duo. But blogging isn’t everything. You also want to recap the story so far: for those just tuning in, here are the characters, here’s what’s happened, here’s why it’s important.
You’ll also want a tech person around to help out. Many stories involve databases; you need someone to work with the reporter to parse and process the data, then work with the blogger to put the results online.
And you’ll need a lawyer on staff.
Instead, a good investigative team needs a political organizer. They can build an email list of people who get outraged by their reporting and use it, along with blogs and the lists of other political groups, to put pressure on the bad guys, fundraise for further journalism, and collect a team of volunteers. The volunteers can help with aspects of the reporting — a modern investigation can get much further by crowdsourcing certain tricky aspects and depending on talented volunteers for particular tasks. A good political organizer knows how to get and manage volunteers.
But to make your organizing maximally effective, you’ll need (gasp!) a lobbyist.
investigative strike team: uncovering corruption, exposing it, and effecting change. They can watch the whole process unfold from a reporter’s suspicion to a writer’s story to a legislative fix. And they can get better at it. It’d be a powerful combination. That’s the kind of future-of-news that I want to see.
There are two kinds of nonfiction: science writing and journalism. Science writing is when you’re trying to explain an idea. You have a concept in your head and you try to get it across. There are lots of tools you can use to do this: you can give an example, you can tell the story of how you thought of it, you can draw a picture. But the concept is the important thing. In journalism, you’re telling a story. Someone did one thing, which led to something else, which led to this other thing. Occasionally you pause to take a step back and make some larger point: the story might have some moral or illustrate some larger principle or lead you to a conclusion. But the important thing is always the story.
Of course, this is how science advances. Something weird happened over here, so we measured it carefully and took detailed notes. (These are the experimentalists.) When you put all these weird things together, they kind of fit a larger pattern. (These are the theorists.) The theory then leads to more experiments and the new experiments lead to more theory. You inch forward, bouncing between experiment and theory, journalism and science writing, to a larger understanding of the world. But, of course, just as science requires both, the best science writing requires both. This is what makes This American Life’s show “The Giant Pool of Money” still so unsurpassedly brilliant. It took a question everyone wanted to know the answer to — why did the economy melt down? — and explained it not by just illustrating the concepts, as many science writers did, or just telling stories of the people involved, as journalists did, but by doing both, moving between the two modes so you could understand not just the theory but how it worked. It seems like an obvious idea, especially when you lay it out this way, but I really can’t think of any other good examples. Take three of my very favorite books: Robert Jackall’s Moral Mazes, Robert Karen’s Becoming Attached, and William Foote Whyte’s Street Corner Society1. All are absolutely brilliant, among the best examples of the genre while conveying facts of incredible importance.
Malcolm Gladwell probably comes closest to a genuine mixture of the two, but his work is marred by the fact that he kind of makes up all his science. His stories are never illustrating some established scientific principle or even a new one he has that he wants to stand up to scrutiny, but instead his principles are always invented ad hoc to serve his stories, with the same fidelity a typical This American Life episode has to its theme. As Ira Glass comments on “Six Degrees of Lois Weisberg”: “the article could be half the length and still hit all its big ideas, and it’s only longer because Gladwell has found so many things that interest and amuse him, and that’s the engine that drives the whole enterprise. … pretty much everything in the story after section five is, to my way of thinking, just there for fun.”
I want to be human again. Even if that means isolating myself from the rest of you humans. What if there’s an emergency? Has there ever been an emergency? The biggest urgent things seem to be that my servers go down. Which sucks, but I need to be able to walk away from that. If you have things hosted on one of my machines, contact me now and I’ll try to get you enough privileges that you can fix things if they break. If something’s really an emergency, I’m sure you’ll find me.
It was a huge, incredible, transformative experience. Those 30 days felt like six months. My habits changed, my relationships changed, my identity changed, my personality changed — hell, the physical shape of my body changed dramatically. I went through four legal pads trying to describe what it was like. I’m still not sure I really know. One thing is clear, though: my normal life style isn’t healthy. This doesn’t seem like the kind of thing that requires a break to learn. I imagine people with unhealthy lifestyles know they’re unhealthy. They come home after work and say “I can’t go on like this,” they cry randomly in elevators. But I didn’t know. Life online is practically the only life I know. Sure, I guess things were different when I was very young — I remember, after getting my first email account, wishing someone would email me so I’d have an email to answer (even then I knew I’d soon be missing those empty-inbox days) — but for most of my life, this has been it: a jumble of interruptions and requests and jobs and people, largely carried out alone. It never let up, so I never saw anything different. How was I to know there was anything wrong? But the last few weeks have made it clear there was — is. These weeks haven’t felt that different from my other weeks online, really — same jumble of work and people and interruptions as always. The usual sense that I’m never really here, I’m always worried about the million things around the corner: a todo list that goes for pages, a thousand emails to respond to, hundreds of blog posts to read, twenty open tabs, a dozen IM windows, a text message to answer, a Twitter stream to catch up on. I never used to think about these things as a benefit or a distraction — I didn’t think about them at all; they were just how life online was. This was the era of multitasking and I was its child.
I am not happy. I used to think of myself as just an unhappy person: a misanthrope, prone to mood swings and eating binges, who spends his days moping around the house in his pajamas, too shy and sad to step outside. But that’s not how I was offline. I loved people — everyone from the counter clerk to the old friends I bumped into on the street. And I loved to go for walks and exercise in the gym and — even though there was no one around to see me — groom. Yes, groom: shower and shave and put on nice clothes and comb my hair and clean up my nails and so on, all things a month ago I would have said went against my very nature, things I never did before voluntarily. But most of all, I felt not just happy, but firmly happy — solid, is the best way I can put it. I felt like I was in control of my life instead of the other way around, like its challenges just bounced off me as I kept doing what I wanted. Normally I feel buffeted by events, a thousand tiny distractions nagging at the back of my head at all times. Offline, I felt in control of my own destiny. I felt, yes, serene.
a book called Flow. It argued that people good at their jobs went into a sort of flow state — they were “in the zone” — where the normal stress of the world faded away and all their concentration was focused on the task at hand. It wasn’t “fun” the way ice cream or sex is fun — it didn’t make you smile, just look grimly determined — but it was somehow more than that. It was fulfilling. And that was even better than a smile. I go into such states when programming or writing and they are indeed fantastic, but also weirdly hollow.
A friend asked me if I knew I was privileged to be able to take such a break. It seemed a silly question: I feel privileged every day. As I write, my best friend is broke and homeless, much of the world struggles just to stay alive. I feel privileged to own a mattress, let alone take a break. I realize everyone’s lives are filled with work and people and distractions — the situation brewing at the office, the sump pump breaking down at the house, the family member who’s fallen ill.
I realize it must seem like the greatest arrogance to think one could escape life’s mundane concerns, like asking to live on a cloud, floating above the mere mortals. But it was that arrogance that made me think I could contribute to adult mailing lists when I was still in elementary school, that arrogance that made me think someone might want to read my website when I was still just a teen, that arrogance that had me start a company as a college freshman. That sort of arrogance — not bragging, but simply inwardly thinking I could do more than was expected of me — is the only thing that’s gotten me anywhere in life. I see no reason to stop now.
So I’m writing a book. In some sense, this is nothing new. I’ve wanted to write a book since I was probably five and since then I must have started seriously writing drafts of half a dozen, before abandoning them. But this one feels different somehow. I really think I’m going to finish it.
My goals for it are ambitious too: I want it to be popular (how hard is it to be a ‘national bestseller’?), I want it to be great writing (accurate, nuanced, and hard-to-put-down), and I want it to make a difference (get people organized, change government policy). Oh, come on. Now you’re giving me that look.
It started with an email. I’d written a blog post on management that had gotten some attention, including a link from the famed Jason Kottke. Apparently the New York literati all read Jason’s blog, because an editor at a publishing house followed the link and read my piece and thought it might make a decent book. He worked for the business book imprint of a major-name publisher and invited me to give him a call and discuss the idea further. Normally when I come up with book ideas, I don’t tell more than a couple people about them. I’ve certainly never talked to anyone at a major-name publisher before. So getting this email was thrilling. I’d always imagined I’d have to pitch my book to publishers someday, but now publishers were coming to me, and asking for a book! It gave the whole thing a seriousness those other book projects lacked. I told him I was heading to New York soon and he invited me to lunch at the Knickerbocker. It was the kind of place you imagine New York businesspeople meet for lunch: guys in suits, wood-paneled walls, I think I might have even spotted a cigar. The editor was very excited and encouraging, but as we talked I grew increasingly discouraged. I began to remember how much I hate business books with a passion, how ridiculously dumb and faddish they are. For his part, the editor complained about how the rest of the world didn’t take business books seriously. They sold ten times better than normal books, he said, but the New York Times refuses to list any of them on their prestigious nonfiction bestseller list (there’s a special section just for business bestsellers that’s only published monthly and buried away).
I think I first realized this when I visited a well-known author. He’d written several highly-regarded books which received apparently unanimous praise. If someone’s ever criticized him for something, I’ve never seen it. Yet, when I saw him, he told me he’d been feeling down for nearly a week. Why? Because a reader from Australia sent him a nasty email. The endless praise hadn’t made him more resilient; it had made him unusually vulnerable.
I think this explains why the pick-up artist’s technique of the “neg” — a minor offhand insult intended to dent a girl’s self-esteem — is so particularly effective, especially on unusually attractive women. For people who aren’t used to being insulted, even a minor insult carries a powerful sting. (A major insult would probably be too strong, though. They’d be too hurt to want to even associate with you.)
A dollar auction is an auction where both the highest bidder and the second-highest bidder have to pay (even though only the highest bidder gets the prize). Rational behavior in a dollar auction isn’t particularly clear — if you’re the second-highest bidder, it always seems to make sense to bid a little more, since you’ll lose the same amount of money but at least get to take home the prize. But if you keep doing that, you soon find yourself paying ridiculously large amounts for something you might not even get.
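A tiny simulation shows how the escalation plays out. The bidding rule and the walk-away limits below are my own hypothetical assumptions, not part of the formal game: each player keeps topping the other by a nickel until the next bid would exceed a personal limit.

```python
def dollar_auction(limit_a, limit_b, prize=100, step=5):
    """Two players bid for a `prize` (all amounts in cents).

    Hypothetical strategy: each player outbids the other by `step`
    until the next bid would exceed their personal walk-away limit.
    """
    bids, limits, turn = [0, 0], [limit_a, limit_b], 0
    while bids[1 - turn] + step <= limits[turn]:
        bids[turn] = bids[1 - turn] + step
        turn = 1 - turn
    # The twist: the winner AND the runner-up both pay their bids.
    auctioneer_profit = bids[0] + bids[1] - prize
    return bids, auctioneer_profit

bids, profit = dollar_auction(limit_a=200, limit_b=180)
print(bids, profit)  # prints [185, 180] 265
```

Here the auctioneer collects $3.65 in bids for a $1 prize: both players escalated past the point where either could possibly come out ahead, each step looking locally rational.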
Perhaps the safest way to win a dollar auction is not to play at all. And, indeed, this was Richard Feynman’s surprising finding with women as well. He takes the advice of the bar’s MC to refuse to buy girls anything until “you’ve asked her if she’ll sleep with you, and you’re convinced that she will, and that she’s not lying.” (Feynman is taken aback by the suggestion: “Uh… you mean… you don’t… uh… you just ask them?”)
I used to think I was a pretty good person. I certainly didn’t kill people, for example. But then Peter Singer pointed out that animals were conscious and that eating them led them to be killed and that wasn’t all that morally different from killing people after all. So I became a vegetarian. Again I thought I was a pretty good person. But then Arianna Huffington told me that by driving in a car I was pouring toxic fumes into the air and sending money to foreign dictatorships. So I got a bike instead. But then I realized that my bike seat was sewn by children in foreign sweatshops while its tubing was made by mining metals through ripping up the earth. Indeed, any money I spent was likely to go to oppressing people or destroying the planet in one way or another. And if I happen to make money some of it goes to the government which spends it blowing people up in Afghanistan or Iraq. I thought about just living off of stuff I found in dumpsters, like some friends. That way I wouldn’t be responsible for encouraging its production. But then I realized that some people buy the things they can’t find in dumpsters; if I got to the dumpster and took something before they did, they might buy it instead. The solution seemed clear: I’d have to go off-the-grid and live in a cave, gathering nuts and berries. I’d still probably be exhaling CO2 and using some of the products in the Earth, but probably only in levels that were sustainable.
Off in the cave, I thought I was safe. But then I read Peter Singer’s latest book. He points out that for as little as a quarter, you can save a child’s life. (E.g. for 27 cents you can buy the oral rehydration salts that will save a child from fatal diarrhea.) Perhaps I was killing people after all. I couldn’t morally make money, for the reasons described above. (Although maybe it’s worth helping fund the bombing of children in Afghanistan in order to help save children in Mozambique.) But instead of living in a cave, I could go to Africa and volunteer my time. Of course, if I do that there are a thousand other things I’m not doing. How can I decide which action I take will save the most lives? Even if I take the time to figure it out, that’s time I’m spending on myself instead of saving lives. It seems impossible to be moral. Not only does everything I do cause great harm, but so does everything I don’t do. Standard accounts of morality assume that it’s difficult, but attainable: don’t lie, don’t cheat, don’t steal. But it seems like living a moral life isn’t even possible. But if morality is unattainable, surely I should simply do the best I can.
Peter Singer is a good utilitarian, so perhaps I should try to maximize the good I do for the world. But even this seems like an incredibly onerous standard. I should not just stop eating meat, but animal products altogether. I shouldn’t just stop buying factory-farmed food, I should stop buying altogether. I should take things out of dumpsters other people are unlikely to be searching. I should live someplace where others won’t be disturbed. Of course all this worrying and stress is preventing me from doing any good in the world. I can hardly take a step without thinking about who it hurts. So I decide not to worry about the bad I might be doing and just focus on doing good — screw the rules. But this doesn’t just apply to the rules inspired by Peter Singer.
Glass argues that humans are somehow hardwired for stories. Once they get started, we have to hear what happened next. “One thing happens, and then another thing, and then another thing,” he says. “It’s got its own momentum — it’s like a train leaving the station.” I think Ira is being too modest. He’s an incredibly gifted storyteller; obviously one of the era’s greatest (his show is the #1 podcast on iTunes).
That’s the first rule of storytelling: a story needs to be interesting in its own right. But the second is equally important: the audience needs a reason to care. I think part of the reason we’re wired to follow stories is because a story carries an implied promise: there’s going to be something good at the end of this.
The way Ira puts it is that there are two tools in storytelling: action, and reflection. Ira’s shows (like Gladwell’s articles) open with action. But the action leads immediately to reflection: look at the story above, there’s not even a pause between the ending of the story and Ira asking why it is that we cringe. Ira’s not just telling the story because it’s entertaining, he’s telling it because it makes us cringe and he wants us to think about that feeling.
Your head is full of evidence, examples, models, implications — all of which your readers aren’t just going to magically intuit. You need to tell them.
The problem is that even the longest magazine articles don’t make for more than 30 pages and you can’t really publish a book that short. So you write a couple hundred pages of filler. And, in most cases, that only detracts from your great article by watering it down.
The conversation eventually turned to the fact that Palanpur farmers sow their winter crops several weeks after the date at which yields would be maximized. The farmers do not doubt that earlier planting would give them larger harvests, but no one, the farmer explained, is willing to be the first to plant, as the seeds on any lone plot would be quickly eaten by birds. I asked if a large group of farmers, perhaps relatives, had ever agreed to sow earlier, all planting on the same day to minimize losses. “If we knew how to do that,” he said, looking up from his hoe at me, “we would not be poor.”
There was plenty of food to feed them. They starved because they were too poor to afford it. Poor people die because they can’t get food, because they can’t get shelter, because they can’t get health care, because they can’t get homes in places that aren’t polluted, because they can’t get food without toxins, because they can’t get time off to supervise their kids, because they can’t spend money on safety, because they can’t spend money on education, because they can’t get a vacation from the stress that’s literally eating away at their brain. We don’t even know all the reasons poor people die. But we do know that they do. It’s not polite to talk about that. We talk about the poverty rate or the poverty level or the poverty gap, not kids catching on fire and adults wasting away. We talk about economic development and markets and education, not the millions who die each year coughing blood as tuberculosis takes over their body. (They don’t die from tuberculosis. They die because they can’t afford the vaccine.) Eben Kenah wants to change that. His thesis, “Poverty and the English Language” [PDF], is the best thing I’ve read in a very long while. Quite literally, it’s changed the way I look at what’s important in the world. He argues that the right way to think about poverty isn’t in terms of GDP or income or education or literacy, but in terms of death. That we should measure poverty by measuring who it kills. “It is easier to believe that poverty causes people to wear old clothes, live in small houses, or forego owning a television than it is to admit that people on the bottom of the socioeconomic ladder often die early as a result,” he writes.
A black man in Harlem is 4.11 times as likely to die in a given year as the average American male. A poor white in Detroit is twice as likely. Poor people are more likely to die than rich people, lower-class people than upper-class people, unemployed than employed, blacks than whites. Mortality resolves a number of long-standing technical debates about the right way to measure poverty. In the US we calculate poverty by having experts at the Department of Agriculture figure out the cheapest products on sale in America that could meet minimal nutritional requirements. They add up how much they cost and multiply by three. People with less than that are defined as poor. Can the poor really follow that minimal diet in practice? How do you even decide what minimal nutritional requirements are? Why three? The answer is simple: just count deaths instead.
But asking that broader question — why do people die? — provides a framework for all these other issues: health care, education, poverty, disease, crime, public services, stress, shelter, climate, mobility, famine, pesticides, fatty foods, toxins, pollution, daycare, quality of life. It all comes in corpses: clear, countable, comparable facts. We’re all dead bodies in the end.
In 1972, the philosopher Peter Singer proposed a simple thought experiment: Imagine you’re on your way to work and you come across a child drowning in a shallow pond. You’re tall enough that you can run in and rescue him, but if you do so you’ll ruin your new suit. Should you save the child? Almost everyone says yes: the value of saving a child’s life far outweighs the cost of losing your new suit. Indeed, someone who would let a child die to save their clothes seems like a monster. But aha, Singer says. You — yes, you, the reader — probably spent several hundred dollars on new clothes recently, clothes you didn’t really need. (Or if not clothes, perhaps a dinner out, or music, or books you could’ve gotten from the library.) And instead of spending that money on luxuries, you could have sent it to Partners in Health, and they could have used it to save a child’s life in the developing world. (GiveWell estimates that you can save a life for between $150-$750.) How are you not a monster?
In his recent book, The Life You Can Save, Singer sets about systematically debunking these arguments. In the process, he complicates his original thought experiment. Imagine now that instead of just you walking by the pond, five people are. And imagine that five children are also drowning. Still, he argues, most people would say you should rush in to save a child — even if the other people passing by don’t. But there’s one detail Singer leaves out — one that I think dramatically affects his conclusions: the children didn’t just wander into this pond on their own; they were pushed.
Imagine an evil man stands above the pond, grabbing children and throwing them in. People passing by see the children and rush in to try to save them, but as soon as one is saved or drowns, in goes another, and another, and another. You can rush in to try to save another child — or you can try to stop the man.
The man, of course, is economics. People in the developing world are poor because they live in poor countries — countries without schools or good jobs or welfare programs or even running water. And their countries are poor in large part because of us. It’s often said that visiting a developing country is like traveling back in time — the conditions seem little changed from those of medieval Europe. But how did medieval Europe stop being medieval Europe? The answer is through protectionism: Britain became the reigning world power by being one of the most protectionist countries on earth, expending enormous amounts of government money to promote local industries. Eventually these industries grew strong enough to compete on the world stage and Britain withdrew the barriers.
But they don’t want others to follow in their footsteps. Instead of letting developing countries grow and compete in their own right, they’d prefer to use them as a source of cheap labor and raw materials. So enormous effort has been expended on building international institutions to prevent their economic growth. The World Bank and the IMF issue loans to countries, but only on the condition they dismantle all forms of protectionism. The WTO requires countries to agree to principles of “free trade”. Academic “experts” come up with reasons why protectionism really hurts everyone and rewrite the history of economic growth. As a result, poor countries are forced to stay poor and children keep dying in shallow ponds. Stopping this is hard. I can give you a phone number to call to donate to Oxfam and buy a child life-saving treatment.
Such conversation clearly does not perform an objective information-sharing function — the relevant facts about the dog can be laid out in a paragraph (if that). It serves a social function — a function with a deep evolutionary history. Primates get to know each other through grooming each other’s fur. But that’s time-consuming; as a result, primates rarely form groups larger than 25. One of the big breakthroughs for humans was moving from grooming to gossip. Instead of 25 people, the average human knows 150. And so we talk, and as we talk we reveal our personalities to each other: the things we care about, the way we think, the subjects we understand. We make friends through this process of conversation and personality reveal, even though objectively the conversation is about matters that seem trivial. When it comes to our friends, we know a lot of trivia.
What Twitter1 does is automate this process. Instead of telling your bit of gossip or joke or humdrum story or minor complaint to each of your friends as you see them, you tell it once to Twitter, and then all your friends can see it. And just like the transition from grooming to gossip, Twitter allows for an explosion in the number of people we know. Where, in the past, it was only practical to have these kinds of close, chatty friendships with a handful of people (even using a technology like IM), now — using the power of the Web to bridge time and space — you can have them with hundreds.
But the relationships need not be symmetrical. One of the things that’s clear about celebrities in the age of television is that they take advantage of this innate social sense. (Fahrenheit 451 is caustic on this subject.) We see these people all the time, we listen to them, we watch them — and we come to feel as if we know them. And so, naturally, our innate social sense kicks in and we want to hear their gossip — a need tabloids try their best to fill.
Oprah, of course, has been a pioneer of this: with a daily long-form television show, she’s been able to cultivate (and monetize) a friendship with millions. But most celebrities don’t have that kind of access to their “followers.” They do on Twitter. The catch, of course, is that it’s all somewhat fake. What you see on Oprah’s show isn’t the real Oprah; it’s a hyperreal Oprah, a carefully-crafted simulation of a gregarious friend chatting with you in your living room — makeup, lighting, sets, and script are all carefully planned to seem “natural.” And most Twitter feeds are the same — humorists spend days polishing the one-liner they seem to carelessly toss off, politicians have speechwriters thinking up soundbites that they can tweet. But it’s not just fake, it’s empty. The reason such apparently boring conversation is interesting is because the act of conversation itself reveals your personality.
In the 1990s, a group of psychologists began studying what made experts expert. Their first task was to see whether experts really were expert — whether they were particularly good at their jobs. What they found was that some were and some weren’t. Champion chess players, obviously, are much better at playing chess than you and I. But political pundits, it turns out, aren’t that much better at making predictions than a random guy off the street. What distinguishes people who are great at what they do from those who are just mediocre? The answer, it seems, is feedback. If you lose a chess game, it’s pretty obvious you lost. You know right away, you feel bad, and you start thinking about what you did wrong and how you can improve. Making a bad prediction isn’t like that. First, it’s months or years before your prediction is proven wrong. And then, you make yourself feel better by coming up with some explanation for why you were wrong: well, nobody expected that to happen; it threw everything else off! And so you keep on making predictions in the same way — which means you never get good at it. The difference between chess and predictions is a lot like the difference between companies and nonprofits. If your company is losing money, it’s pretty obvious. You know right away, you feel bad, and you start thinking about how to fix it. (And if you don’t fix it, you go bankrupt.) But if your nonprofit isn’t accomplishing its goals, it’s much less obvious. You can point to various measurable signs of success (look at all the members we have, look at all the articles we’ve been quoted in) and come up with all sorts of explanations for why it’s not your fault.
I expect many nonprofits are not accomplishing their goals at all. Even if they made a little bit of progress, their improvement would be mathematically infinite. (It’s also quite possible that many nonprofits are actually being counter-productive. After all, before we started measuring the effects of medical treatment, we were bleeding people with leeches.) What can be done about this? I think that everyone who donates to a nonprofit should demand an accounting of results — not just the number of times they’ve been cited in the media or the number of policy discussions they’ve held, but an actual attempt to measure how much they’re improving people’s lives. For most nonprofits, I expect these numbers will be depressingly small. But that’s much better than having no numbers at all. For feeling bad about failing is the first step to doing better next time.
Until recently, men having sex with men was disapproved of in American culture. Actually, “disapproved of” isn’t really the right word — it was immoral, illegal, disgusting. People who did it lived in secrecy, under the constant threat of blackmail for their actions. In the tumult of the 1960s, various out-groups — blacks, Chicanos, Native Americans — began organizing themselves and demanding to be respected and given their due. And men-who-had-sex-with-men decided that they were an out-group — they were gay — and they deserved rights too. In doing so, they transformed an action (having relationships with someone of the same gender) into an identity (“being gay”). And, using the normal human mechanisms for distinguishing between people in your club and those not in it, they closed ranks. Gay men didn’t have sex with women. Those who did weren’t gay, they were “bi” (which became a whole new identity in itself) — or probably just lying to themselves. And straight men had to be on constant guard against being attracted to other men — if they were, it meant that deep down, they were actually gay. This new gay identity was projected back through history — famous historical figures were “outed” as gay, because they’d once taken lovers of their own gender. They truly were gay underneath, it was said — it was just a homophobic society that forced them to appear to like the opposite sex. Along with the identity went an attempt at justification. Being gay wasn’t “a choice,” they argued — it was innate. Some people were just born gay and others weren’t. To a culture that tried to “correct” gay people into being straight, they insisted that correction was impossible — they just weren’t wired this way.
If you ask someone to justify a rule, they usually do it by listing its consequences: if we don’t steal, God will reward us; everyone will be happier if we stop killing. In the end, it seems like everything boils down to consequences: good acts are those which accomplish good things. So how do we decide what good things are? Doesn’t everyone have their own idea of what’s good? Instead of trying to promote one particular person’s notion of what’s good, it seems like we should balance everyone’s good. In most cases, it’s impossible for us to know what’s actually good for a person, so this usually means taking their word for it and trying to give them what they want.
Here’s another way to look at this. Imagine that before we were born, we all sat up in the heavens and talked about how to design the world. None of us yet knew which bodies we would be born into or which parents we’d have, so none of us could possibly be biased. Aren’t we all going to want to promote the greatest good overall? We’ll make sure the worst-off aren’t particularly worse-off in case we’re one of them, and we’ll make sure the rest aren’t especially handicapped in case we’re one of them.1
So we have our simple moral principle: when faced with a question, pick the answer that will accomplish the most overall good.
I am convinced that the account here is largely correct, but I certainly don’t live up to its demanding standards. And that’s OK. One of the conclusions of this argument is that it’s impossible to be perfectly moral. By accepting that, and keeping it in the back of my mind, I do a little better each day. For a long time, people told me eating meat was wrong and I refused to believe them, because I thought it would be impossible for me not to eat meat. Then one day, I accepted that they were right and I was doing the wrong thing and I decided I could live with that. I wasn’t perfect. But shortly after I decided that, meat started seeming less and less attractive, and I started eating less and less, and now I don’t eat it at all anymore. Accepting you’re immoral is the first step to being a more moral person. This thought experiment comes from philosopher John Rawls, although its conclusion has been modified by Peter Singer. ↩
And while it’s true that taking MIT food and drink probably does increase the university’s costs slightly, this concern doesn’t seem too consistently applied. Do you think it’s wrong to take one of the free refreshments at an MIT event? The consequences seem about the same.
A more serious complaint is that this “erodes the social contract.” Peter Singer (no contract theorist he!) puts this more clearly in his book Democracy and Disobedience: In any society people are going to have disputes. Everyone’s better off if these disputes are resolved without resorting to force. Thus in most societies there are governments to help resolve disputes peacefully. Resorting to force when you don’t like their resolution could tip things back to the bad state of people resolving things through force in general. I don’t think this is a particularly plausible concern. My friends (understandably) keep quiet about their lifestyle. If anyone, I am the one undermining the social contract by publicizing it. But let’s keep me out of this analysis for a second. It’s hard to see how sleeping on MIT couches will lead to violent revolution.1
Indeed, the philosopher David Hume argued that we could never know whether causation was at work. “Solidity, extension, motion; these qualities are all complete in themselves, and never point out any other event which may result from them,” he wrote. But not causation: “One event follows another; but we never can observe any tie between them. They seem conjoined, but never connected.”
Which is why, centuries later, Karl Pearson, the founder of mathematical statistics, banned the notion of causality from the discipline, calling it “a fetish amidst the inscrutable arcana of modern science” and insisting that just by understanding simple correlation one “grasped the essence of the conception of association between cause and effect.” His followers have kept it banished ever since. “Considerations of causality should be treated as they have always been in statistics: preferably not at all,” wrote a former president of the Biometric Society. “It would be very healthy if more researchers abandon thinking of and using terms such as cause and effect,” insisted another prominent social scientist. And there the matter has stayed. Causality is a concept as meaningless as “the soul” and just as inappropriate for modern mathematical science. And yet, somehow, this doesn’t seem quite right. If causation is nothing but a meaningless word that laypeople have layered over correlation, then why the ceaseless insistence that “correlation does not imply causation”? Why are our thoughts filled with causal comments (he made me do it!) and never correlational ones? The result is exceptionally strange. Statistics has no mathematical way to express the notion “mud does not cause rain”. It can say mud is correlated with rain (i.e. that there’s a high probability of seeing mud if you see rain), no problem, but expressing the simple causal concept — the kind of thing any five-year-old would know — is impossible.
Statisticians may have never had to confront this problem but, luckily for us, Artificial Intelligence researchers have. It turns out if you’re making a robot, having a notion of causality is essential — not just because it’s the only way to understand the humans, but because it’s the only way to get anything done! How are you supposed to turn the lights on if you don’t know that it’s the light-switch and not the clicking noise that causes it? The result is that in recent years several teams of AI researchers have turned their focus from building robots to building mathematical tools for dealing with causality. At the forefront is Judea Pearl (author of the book Causality, Cambridge University Press) and his group at UCLA and Clark Glymour (author of The Mind’s Arrows, MIT Press), Peter Spirtes, and their colleagues at Carnegie Mellon. The result is a quiet revolution in the field of statistics — one most practicing statisticians are still unaware of.
Next, they created a new mathematical function to formalize our notion of causality: do(…). do expresses the notion of intervening and actually trying something. Thus, to mathematically express the notion that mud does not cause rain, we can say P(rain | do(mud=true)) = P(rain) — in other words, the chance of rain given that you made it muddy is the same as the chance of rain in general. But causes rarely come in pairs like these — more often they come in complicated chains: clouds cause rain, which causes both mud and wet clothing, and the latter causes people to find a change of clothes. And so the researchers express these as networks, usually called causal Bayes nets or graphical causal models, which show each thing (clouds, rain, mud) as a node and the causal relationships as arrows between them:

    (clouds)
        |
        v
     (rain)
      /    \
     v      v
  (mud)   (wet)
             |
             v
         (change)

And all this was just the warm-up act. Their real breakthrough was this: just as kids can discover causes by observation, computers can discern causes from data.
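The graph-surgery idea behind do(…) can be sketched in a few lines of code. This is a toy illustration of my own, not Pearl’s: the probabilities are invented, and the intervention is implemented by replacing mud’s conditional distribution with a constant (cutting the arrow from rain into mud), which is what do(mud=true) means under the truncated-factorization rule.

```python
# Toy sketch of do(...) via "graph surgery" on the chain clouds -> rain -> mud.
# All numbers are invented for illustration; this is not Pearl's code.
from itertools import product

P_clouds = {True: 0.4, False: 0.6}   # P(clouds)
P_rain = {True: 0.8, False: 0.1}     # P(rain=True | clouds)
P_mud = {True: 0.9, False: 0.05}     # P(mud=True | rain)

def joint(clouds, rain, mud, do_mud=None):
    """Joint probability of one world. If do_mud is set, mud's factor is
    replaced by a constant: the arrow from rain into mud is cut."""
    p = P_clouds[clouds]
    p *= P_rain[clouds] if rain else 1 - P_rain[clouds]
    if do_mud is None:
        p *= P_mud[rain] if mud else 1 - P_mud[rain]
    else:
        p *= 1.0 if mud == do_mud else 0.0
    return p

def prob_rain(condition_mud=None, do_mud=None):
    """P(rain), optionally conditioned on seeing mud or intervening on mud."""
    num = den = 0.0
    for clouds, rain, mud in product([True, False], repeat=3):
        if condition_mud is not None and mud != condition_mud:
            continue
        p = joint(clouds, rain, mud, do_mud=do_mud)
        den += p
        if rain:
            num += p
    return num / den

print(prob_rain())                    # ≈ 0.38: the base rate of rain
print(prob_rain(do_mud=True))         # ≈ 0.38: making it muddy changes nothing
print(prob_rain(condition_mud=True))  # ≈ 0.92: but *seeing* mud is evidence of rain
```

Conditioning on mud raises the probability of rain (seeing mud is evidence of rain), while intervening on mud leaves it untouched; that asymmetry is precisely what plain correlation cannot express.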
Or, in another example in his book Causality, he analyzes data from a study on a cholesterol-reducing drug. Since whether people got the placebo or not is unassociated with any other variables (because it was randomly assigned), if we merely assume that receiving the real drug has some influence on whether people take it, we can calculate the effectiveness of the drug even with imperfect compliance. Indeed, we can even estimate how effective the drug would have been for people who were assigned it but didn’t take it! And that’s not all — Peter Spirtes and Clark Glymour have developed an algorithm (known as PC, for Peter-Clark) that, given just the data, will do its best to calculate the causal network behind it. You can download the software implementing it, called TETRAD IV, for free from their department’s website — it even has a nice graphical interface for drawing and displaying the networks. As an experiment, I fed it some data from the IRS about 2005 income tax returns. It informed me that the percentage people donate to charity is correlated with the number of dependents they have, which in turn correlates with how much people receive from EITC. That amount, along with average income, causes how many people are on EITC. Average income is correlated with the tax burden which is correlated with inequality. All interesting and reasonable — and the result of just a few minutes’ work.
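The imperfect-compliance calculation above can be illustrated with its simplest instrumental-variables form, the Wald estimator: random assignment shifts both take-up and outcomes, and dividing one shift by the other recovers the drug’s effect on those who actually complied. The numbers below are invented for illustration; Pearl’s own analysis derives tighter bounds than this simple ratio.

```python
# Hedged sketch: the Wald (instrumental-variable) estimator for a randomized
# trial with imperfect compliance. All the input probabilities are made up.

def wald_estimate(p_outcome_assigned, p_outcome_control,
                  p_took_assigned, p_took_control):
    """Effect of actually taking the drug on compliers:
    (assignment's effect on the outcome) / (assignment's effect on take-up)."""
    itt_outcome = p_outcome_assigned - p_outcome_control
    itt_takeup = p_took_assigned - p_took_control
    return itt_outcome / itt_takeup

# Suppose assignment lowers the rate of cholesterol events from 30% to 24%,
# but only 80% of the assigned group (and 5% of controls) actually took the drug:
effect = wald_estimate(0.24, 0.30, 0.80, 0.05)
print(effect)  # ≈ -0.08: an 8-point reduction among those who took the drug
```

The intuition: assignment only moved outcomes by 6 points, but it only moved take-up by 75 points, so the drug itself must be responsible for 0.06 / 0.75 = 8 points among compliers.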
I think it’s time to remind people that D. J. Bernstein is the greatest programmer in the history of the world. First, look only at the objective facts. djb has written two major pieces of system software: a mail server and a DNS server. Both are run by millions of Internet domains. They accomplish all sorts of complicated functions, work under incredibly high loads, and confront no end of unusual situations. And they both run pretty much exactly as Bernstein first wrote them. One bug — one bug! — was found in qmail. A second bug was recently found in djbdns, but you can get a sense of how important it is by the fact that it took people nearly a decade to find it. No other programmer has this kind of track record. Donald Knuth probably comes closest, but his diary about writing TeX (printed in Literate Programming) shows how he kept finding bugs for years and never expected to be finished, only to get closer and closer (thus the odd version numbering scheme). Not only does no one else have djb’s track record, no one else even comes close. But far more important are the subjective factors. djb’s programs are some of the greatest works of beauty to be comprehended by the human mind. As with great art, the outline of the code is somehow visually pleasing — there is balance and rhythm and meter that rivals even the best typography. As with great poetry, every character counts — every single one is there because it needs to be. But these programs are not just for being seen or read — like a graceful dancer, they move! And not just as a single dancer either, but a whole choreographed number — processes splitting and moving and recombining at great speeds, around and around again. But, unlike a dance, this movement has a purpose. They accomplish things that need accomplishing — they find your websites, they ferry your email from place to place.
In the most fantastic movies, the routing and sorting of the post office is imagined as a giant endless choreographed dance number. (Imagine, perhaps, “The Office” from Brazil.) But this is no one-time fantasy, this is how your email gets sorted every day. And the dance is not just there to please human eyes — it is a dance with a purpose. Each of its inner mechanisms is perfectly crafted, using the fewest number of moving parts, accomplishing its task with the most minimal energy. The way jobs are divided and assigned is nothing short of brilliant. The brilliance is not merely linguistic, although it is that too, but contains a kind of elegant mathematical effectiveness, backed by a stream of numbers and equations that show, through pure reason alone, that the movements are provably perfect, a better solution is guaranteed not to exist. But even all this does not capture his software’s incredible beauty. For djb’s programs are not great machines to be admired from a distance, vast powerhouses of elegant accomplishment. They are also tools meant to be used by man, perfectly fitted to one’s hand. Like a great piece of industrial design, they bring joy to the user every time they are used. What other field combines all these arts? Language, math, art, design, function. Programming is clearly in a class of its own.
The academy is often thought of as the ideal for developing knowledge: select the brightest minds in the country, guarantee them jobs, allow them all the resources they need to research anything, don’t interfere with any of their conclusions. On some issues, these independent-minded academics form a consensus and we tend to give their consensus very heavy weight. They can’t all be wrong, can they? And yet, in my empirical research, I find they very often are. A short blog post is no place to do a careful study, but I can mention some examples. The classic works in industrial relations turn out to be complete hoaxes, yet they’ve dominated the teaching of the field for over half a century. (See Alex Carey’s book for details.) In political science, the most respected practitioner’s most famous work shades and distorts his own findings to support a theory wildly at odds with the facts. (See Who Really Rules?) The whole field of fMRI studies is so flat-out ridiculous that journal articles are even making jokes about them. And, maybe most blatantly today, economics was dominated by a paradigm that believed substantive unemployment was impossible, despite that notion having been famously and thoroughly debunked by Keynes and, of course, reality.
How is this possible? I think the key, as in most institutional studies, is that of the filter. To become a professor of X, one must first spend several years receiving an undergraduate major in X, then several more years going to graduate school in X, then perhaps work as a postdoc or adjunct for a bit, before getting a tenure-track position and working like mad to make enough of a dent in the field of X to be seen as deserving of a prominent permanent position. When your time is called, a panel of existing professors of X passes judgment on your work to decide if it passes muster. Can you imagine a better procedure for forcing impressionable young minds to believe crazy things?
There are three questions you have when you’re hiring a programmer (or anyone, for that matter): Are they smart? Can they get stuff done? Can you work with them? Someone who’s smart but doesn’t get stuff done should be your friend, not your employee. You can talk your problems over with them while they procrastinate on their actual job. Someone who gets stuff done but isn’t smart is inefficient: non-smart people get stuff done by doing it the hard way and working with them is slow and frustrating. Someone you can’t work with, you can’t work with. The traditional programmer hiring process consists of: a) reading a resume, b) asking some hard questions on the phone, and c) giving them a programming problem in person. I think this is a terrible system for hiring people. You learn very little from a resume and people get real nervous when you ask them tough questions in an interview. Programming isn’t typically a job done under pressure, so seeing how people perform when nervous is pretty useless. And the interview questions usually asked seem chosen just to be cruel. I think I’m a pretty good programmer, but I’ve never passed one of these interviews and I doubt I ever could. So when I hire people, I just try to answer the three questions. To find out if they can get stuff done, I just ask what they’ve done. If someone can actually get stuff done they should have done so by now. It’s hard to be a good programmer without some previous experience and these days anyone can get some experience by starting or contributing to a free software project. So I just request a code sample and a demo and see whether it looks good. You learn an enormous amount really quickly, because you’re not watching them answer a contrived interview question, you’re seeing their actual production code. Is it concise? clear? elegant? usable? Is it something you’d want in your product? To find out whether someone’s smart, I just have a casual conversation with them. 
I do everything I can to take any pressure off: I meet at a cafe, I make it clear it’s not an interview, I do my best to be casual and friendly. Under no circumstances do I ask them any standard “interview questions” — I just chat with them like I would with someone I met at a party.
But if I had to write down what it is that makes someone seem smart, I’d emphasize three things. First, do they know stuff? Ask them what they’ve been thinking about and probe them about it. Do they seem to understand it in detail? Can they explain it clearly? (Clear explanations are a sign of genuine understanding.) Do they know stuff about the subject that you don’t? Second, are they curious? Do they reciprocate by asking questions about you? Are they genuinely interested or just being polite? Do they ask follow-up questions about what you’re saying? Do their questions make you think? Third, do they learn? At some point in the conversation, you’ll probably be explaining something to them. Do they actually understand it or do they just nod and smile? There are people who know stuff about some small area but aren’t curious about others. And there are people who are curious but don’t learn, they ask lots of questions but don’t really listen. You want someone who does all three.
Finally, I figure out whether I can work with someone just by hanging out with them for a bit. Many brilliant people can seem delightful in a one-hour conversation, but their eccentricities become grating after a couple hours. So after you’re done chatting, invite them along for a meal with the rest of the team or a game at the office. Again, keep things as casual as possible. The point is just to see whether they get on your nerves. If all that looks good and I’m ready to hire someone, there’s a final sanity check to make sure I haven’t been fooled somehow: I ask them to do part of the job. Usually this means picking some fairly separable piece we need and asking them to write it. (If you really insist on seeing someone working under pressure, give them a deadline.) If necessary, you can offer to pay them for the work, but I find most programmers don’t mind being given a small task like this as long as they can open source the work when they’re done. This test doesn’t work on its own, but if someone’s passed the first three parts, it should be enough to prove that they didn’t trick you and that they can actually do the work.
One of the best things about capitalism is the way it handles sociopaths. Major executives look up to Alexander the Great and apparently try to follow in his footsteps. But instead of leading a murderous campaign across Asia, they decide to make something people want: newspapers and movies and television shows. True, they’re far from perfect, but you have to admit it’s a lot better than mass slaughter.
Google gets a lot of criticism (often deserved), but it’s worth taking a moment to think of all the things they haven’t done. If Microsoft had Google’s market share in search, is there any doubt that they’d be systematically demoting or even banning their competitors in the search results? Demoting someone in Google is a virtual death sentence, and yet not only has Google never been accused of using this vast power, the idea itself is almost unimaginable. Hearing things from the sociopaths’ perspective, it’s easy to get fooled. “Yeah!” you think. “Why should these Google guys get to control everything?” But for average people, this shift has been great: much more stuff is available, faster and freer than ever before, and the people making all the money off of it are actually decent human beings who feel some responsibility for the planet they inhabit. Sure, I don’t agree with them on everything and there’s a lot more they can do, but let’s not lose sight of the basic point: at least they’re not sociopaths.
We don’t kill native Americans much these days and we don’t keep slaves, but it’s hard to believe that our era must be morally perfect. Surely if people back then could make such huge moral blunders, we could be making similar ones right now. And ethical philosophy is useless if it can’t help us avoid such huge mistakes. Some people suggest that the way to do ethical philosophy is to listen to our intuitions. “I do not think our intuitions about cases are less reliable than those about principles,” Frances Kamm argues. But of course our intuitions about cases are less reliable! If we could simply trust our intuitions, we wouldn’t need ethical philosophy at all. If something was wrong, we would just know it was wrong. There would be nothing philosophy could tell us. Obviously this is absurd. Lots of people do things that seem clearly unethical while thinking they’re in the right. Perhaps Kamm thinks these mistakes are merely the result of temporary passions and that from her desk at Harvard she can consider such questions with a more objective eye. But, as I have shown, people’s intuitions about cases are systematically distorted. Sitting at a desk wasn’t enough to persuade George Washington to stop killing native Americans. His mistake wasn’t the result of some momentary passion, but of an entire culture that had normalized mass murder and a society that depended on it. To think that he would just suddenly sit down and go “Hmm, murdering Indians feels wrong to me” is ridiculous. The only way he would possibly conclude that is by taking seriously his principles. I grew up eating animals. I saw nothing wrong with this. My parents ate them, my siblings ate them, my friends ate them, people on TV ate them, the President ate them. I doubt I stopped to think about the morality of eating animals any more than I stopped to think about the morality of brushing my teeth. If you asked me for my intuition, I would have said eating animals was just fine.
It was only when I stopped eating animals that my intuitions began to change.
The Power Broker (5) I cannot possibly say enough good things about this book. Go read it. Right now. Yes, I know it’s long, but trust me, you’ll wish it was longer. I think it may simply be the best nonfiction book.
American Apartheid This book is criminally under-publicized. Everyone has their own crazy theories about why it is that blacks are disadvantaged in our society. Massey and Denton show it’s much more obvious than any of that: they’re victims of extreme segregation, with all the negative effects that entails. An absolutely brilliant book.
We procrastinate because we are afraid. We’re afraid it’s too much work and that it will drain us. We’re afraid we’ll screw it up and get in trouble. We’re afraid we don’t know how to do it. We’re afraid because, well, we’ve been putting it off forever and every time we put it off it seems a little more fearsome in our minds. That’s why not putting things off is so liberating. We’re forced to confront our fears, not let them grow bigger by repeatedly running away. And when we confront them, we find they’re not so scary after all. This doesn’t just apply to email, of course — it works for any todo list. But only if you say no to reordering, prioritizing, estimating deadlines, and doing the most important things first. Forget all that. Do it now.
In a classic piece of psychology, Kahneman and Tversky ask people what to do about a fatal disease that 600 people have caught. One group is asked whether they would administer a treatment that would definitely save 200 people’s lives or one with a 33% chance of saving 600 people. The other group is asked whether they would administer a treatment under which 400 people would definitely die or one where there’s a 33% chance that no one will die. The two questions are the same: saving 600 people means no one will die, saving just 200 means the other 400 will die. But people’s responses were radically different. The vast majority of people chose to save 200 people for sure. But an equally large majority chose to take the chance that no one will die. In other words, just changing how you describe the option — saying that it saves lives rather than saying it leaves people to die — changes which option most people will pick. In the same way that Festinger et al. showed that our intuitions are biased by our social situation, Kahneman and Tversky demonstrated that humans suffer from consistent cognitive biases as well. In a whole host of examples, they showed people behaving in a way we wouldn’t hesitate to think was irrational — like changing their position on whether to administer a treatment based on what it was called. (I think a similar problem affects our intuitions about killing versus letting die.)
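The equivalence of the two framings is easy to verify mechanically. A minimal Python sketch (mine, not the post’s — the function name and decision rule are just for illustration):

```python
# The two framings of the Kahneman/Tversky disease problem describe
# identical outcomes: "200 saved for sure" is "400 die for sure", and
# the two gambles are one and the same.

TOTAL = 600

def expected_deaths(certain_deaths=None, p_nobody_dies=None):
    """Expected deaths for a sure outcome or an all-or-nothing gamble."""
    if certain_deaths is not None:
        return certain_deaths
    # with probability p nobody dies; otherwise all 600 do
    return (1 - p_nobody_dies) * TOTAL

a = expected_deaths(certain_deaths=TOTAL - 200)  # "200 saved for sure"
b = expected_deaths(p_nobody_dies=1 / 3)         # "1/3 chance all are saved"
c = expected_deaths(certain_deaths=400)          # "400 die for sure"
d = expected_deaths(p_nobody_dies=1 / 3)         # "1/3 chance nobody dies"

assert a == c == 400          # the sure options are the same option
assert b == d                 # and so are the gambles
print(a, b, c, d)             # every choice has ~400 expected deaths
```

Yet most subjects pick the sure thing in the first framing and the gamble in the second.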
This is a major problem for people like Frances Kamm, who think our moral philosophy must rely on our intuitions. If people consistently and repeatedly treat things differently based on what they’re called, are we forced to give that moral weight? Is it OK to administer a treatment when it’s described as saving people, but not when it’s described as not saving enough? Surely moral rules should meet some minimal standard of rationality. This problem affects a question close to Kamm’s work: what she calls the Problem of Distance in Morality (PDM). Kamm says that her intuition consistently finds that moral obligations attach to things that are close to us, but not to things that are far away. According to her, if we see a child drowning in a pond and there’s a machine nearby which, for a dollar, will scoop him out, we’re morally obligated to give the machine a dollar. But if the machine is here but the scoop and child are on the other side of the globe, we don’t have to put a dollar in the machine. But, just as with how things are called, our intuitions about distance suffer from cognitive biases. Numerous studies have shown that the way we think about things nearby is radically different from the way we think about things far away. In one study, Indiana University students did better on a creativity test when they were told the test was devised by IU students studying in Greece than when they were told it was devised by IU students studying in Indiana. It’s a silly example, but it makes the point. If our creativity depends on whether someone mentions Greece or Purdue, it’s no surprise our answers to moral dilemmas depend on whether they take place in the US or China. But surely these differences have no more moral validity than the ones that result from Tversky’s experiment — they’re just an unfortunate quirk of how we’re wired. Rational reflection — not faulty intuitions — should be the test of a moral theory.
Transparency is a slippery word; the kind of word that, like reform, sounds good and so ends up getting attached to any random political thing that someone wants to promote. But just as it’s silly to talk about whether “reform” is useful (it depends on the reform), talking about transparency in general won’t get us very far.
The goo-goo reformers moved elections to off-years. They claimed this was to keep city politics distinct from national politics, but the real effect was just to reduce turnout. They stopped paying politicians a salary. This was supposed to reduce corruption, but it just made sure that only the wealthy could run for office. They made the elections nonpartisan. Supposedly this was because city elections were about local issues, not national politics, but the effect was to increase the power of name recognition and make it harder for voters to tell which candidate was on their side. And they replaced mayors with unelected city managers, so winning elections was no longer enough to effect change.1 Of course, the modern transparency movement is very different from the Good Government movement of old. But the story illustrates that we should be wary of kind nonprofits promising to help. I want to focus on one particular strain of transparency thinking and show how it can go awry. It starts with something that’s hard to disagree with.

Sharing Documents with the Public

Modern society is made of bureaucracies and modern bureaucracies run on paper: memos, reports, forms, filings. Sharing these internal documents with the public seems obviously good, and indeed, much good has come out of publishing these documents, whether it’s the National Security Archive, whose Freedom of Information Act (FOIA) requests have revealed decades of government wrongdoing around the globe, or the indefatigable Carl Malamud and his scanning, which has put terabytes of useful government documents, from laws to movies, online for everyone to access freely.
I find such practices ridiculous. When you create a regulatory agency, you put together a group of people whose job is to solve some problem. They’re given the power to investigate who’s breaking the law and the authority to punish them. Transparency, on the other hand, simply shifts the work from the government to the average citizen, who has neither the time nor the ability to investigate these questions in any detail, let alone do anything about it. It’s a farce: a way for Congress to look like it has done something on some pressing issue without actually endangering its corporate sponsors.
There’s just one problem: if you can’t trust the regulators, what makes you think you can trust the data? The problem with generating databases isn’t that they’re too hard to read; it’s the lack of investigation and enforcement power, and websites do nothing to help with that. Since no one’s in charge of verifying them, most of the things reported in transparency databases are simply lies. Sometimes they’re blatant lies, like how some factories keep two sets of books on workplace injuries: one accurate one, reporting every injury, and one to show the government, reporting just 10% of them.2 But they can easily be subtler: forms are misfiled or filled with typos, or the malfeasance is changed in such a way that it no longer appears on the form.
Congress’s operations are supposedly open to the public, but if you visit the House floor (or if you follow what they’re up to on one of these transparency sites) you find that they appear to spend all their time naming post offices. All the real work is passed using emergency provisions and is tucked into subsections of innocuous bills. (The bank bailouts were put in the Paul Wellstone Mental Health Act.) Matt Taibbi’s The Great Derangement tells the story. Many of these sites tell you who your elected official is, but what impact does your elected official really have? For 40 years, people in New York thought they were governed by their elected officials—their city council, their mayor, their governor. But as Robert Caro revealed in The Power Broker, they were all wrong. Power in New York was controlled by one man, a man who had consistently lost every time he’d tried to run for office, a man nobody thought of as being in charge at all: Parks Commissioner Robert Moses. Plenty of sites on the Internet will tell you who your representative receives money from, but disclosed contributions are just the tip of the iceberg. As Ken Silverstein points out in his series of pieces for Harper’s (some of which he covers in his book Turkmeniscam), being a member of Congress provides for endless ways to get perks and cash while hiding where it comes from.
What’s ironic is that the Internet does provide something you can do. It has made it vastly easier, easier than ever before, to form groups with people and work together on common tasks. And it’s through people coming together—not websites analyzing data—that real political progress can be made.
Wikis seem to work well, so you build a political wiki. Everyone loves social networks, so you build a political social network. But these tools worked in their original setting because they were trying to solve particular problems, not because they’re magic. To make progress in politics, we need to think about how best to solve its problems, not simply copy technologies that have worked in other fields.
You can have technologists poring through safety records, investigative reporters making phone calls and sneaking into buildings, lawyers subpoenaing documents and filing lawsuits, political organizers building support for the project and coordinating volunteers, members of Congress pushing for hearings on your issues and passing laws to address the problems you uncover, and, of course, bloggers and writers to tell your stories as they unfold. Imagine it: an investigative strike team, taking on an issue, uncovering the truth, and pushing for reform. They’d use technology, of course, but also politics and the law.
This is where data analysis can be really useful. Not in providing definitive answers over the Web to random surfers, but in finding anomalies and patterns and questions that can be seized upon and investigated by others. Not in building finished products, but by engaging in a process of discovery. But this can be done only when members of this investigative strike team work in association with others. They would do what it takes to accomplish their goals, not be hamstrung by arbitrary divisions between “technology” and “journalism” and “politics.”
Change doesn’t come from thousands of people, all going their separate ways. Change requires bringing people together to work on a common goal. That’s hard for technologists to do by themselves. But if they do take that as their goal, they can apply all their talent and ingenuity to the problem. They can measure their success by the number of lives that have been improved by the changes they fought for, rather than the number of people who have visited their website. They can learn which technologies actually make a difference and which ones are merely indulgences. And they can iterate, improve, and scale. Transparency can be a powerful thing, but not in isolation.
Drink more water. There are lots of reasons to drink more water, but it’s also a great way to lose weight. A lot of what feels like hunger is actually thirst, while having water in your stomach seems to counteract certain feelings of hunger. Furthermore, burning fat requires extra water.
The average person spends 1704 hours a year watching TV. If the average reading rate is 250 words per minute and the average book is 180,000 words, then that’s 142 books a year. To my surprise, I wasn’t reading nearly enough books.
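The arithmetic checks out. A trivial sketch using the post’s own numbers:

```python
# Back-of-the-envelope: how many books fit into the average person's
# annual TV time? (All figures are the ones quoted in the post.)
tv_hours_per_year = 1704
words_per_minute = 250
words_per_book = 180_000

words_readable = tv_hours_per_year * 60 * words_per_minute
books_per_year = words_readable / words_per_book
print(round(books_per_year))  # 142
```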
Now I either focus on the problem at hand or think enough about it to take a break and go for a walk, eat something, drink some water, read a book, or take a
Order lots of books at the library. Most people think the way you read more books is by spending more time reading. But I’ve found that, like exercise, this is an effect and not a cause. I spend time reading when I have a great book to read. When I don’t, I feel no urge to read and when I do start reading something, I put it down quickly. But if I’m reading a great book, I spontaneously come up with times and places to read it. But figuring out which books are great in advance is hard.
I begin reading them and finish the ones that are exciting enough to finish and return the ones that are unpromising enough to give up on. Then I return them all and get some more. I also find that the due dates and the growing pile of books provides additional impetus to read them. And the habit doesn’t cost me any money this way, so I don’t feel guilty about it.
By keeping yourself away from other people (living alone is a good start), you free up an enormous amount of time for reading. I find this is particularly useful in reading books, since books can usually substitute for human company: you can take them with you on the train and to meals and curl up with them at night and so on. Getting rid of other hobbies no doubt also helps.
One cannot accidentally promise, which is why the promise didn’t exist in the foreign country, but you knew full well that ordering at the restaurant in the States was a promise to pay full price at the end of the meal. And by promising, you have created a desire-independent reason for action, or, as I put it, a DIRFA. DIRFAs are surely the most amazing and confounding of all of Searle’s discoveries. It seems crazy to think that there can be some magical realm, independent of any individual human desire, to which we can be called to account. And yet, there it is. We pay at the restaurant not because we want to, or because we want to help certain others, but because we have committed to doing so and that commitment somehow binds us.
In his new book, Making the Social World, Searle shows that, contrary to appearances, DIRFAs (like all social institutions) are merely an outgrowth of language. This is an incredible claim, but Searle makes a convincing case.
Kaczynski argued that the left was the result of oversocialization. Leftists took the social constraints they were taught — don’t discriminate on the basis of race, for example — so strongly that they began applying them much more widely than the others around them. But Searle shows how this is a necessary outgrowth of empathy, the left’s defining trait: social institutions are grounded in a form of collective intentionality, where others count upon you to obey the institutional rules. Someone who can better imagine others’ minds must feel this network of expectations on them to be especially strong.
What if the animal preferred to have a short, pleasant existence before being consumed as food rather than having no existence at all? Wouldn’t that mean we should breed the animal, give it a nice life, then kill and eat it? Response: This is a ridiculous hypothetical — you’re suggesting an animal that doesn’t exist yet has a preference about existing. I don’t respect hypothetical creatures’ hypothetical desires to not be hypothetical. If I did, you could get me to do all sorts of absurd things just by hypothesizing them. You could, say, simply hypothesize a utility monster’s very strong desire to exist and I would be morally bound to try to create one. Or perhaps my hypothetical children really want to exist, so I have to hurry to procreate. That’s ridiculous. I think we should maximize the actual interests of actual people.
Let’s take a concrete example. Imagine you want to decrease the size of the defense budget. The typical way you might approach this is to look around at the things you know how to do and do them on the issue of decreasing the defense budget. So, if you have a blog, you might write a blog post about why the defense budget should be decreased and tell your friends about it on Facebook and Twitter. If you’re a professional writer, you might write a book on the subject. If you’re an academic, you might publish some papers. Let’s call this strategy a “theory of action”: you work forwards from what you know how to do to try to find things you can do that will accomplish your goal. A theory of change is the opposite of a theory of action — it works backwards from the goal, in concrete steps, to figure out what you can do to achieve it. To develop a theory of change, you need to start at the end and repeatedly ask yourself, “Concretely, how does one achieve that?” A decrease in the defense budget: how does one achieve that? Yes, you. AUDIENCE MEMBER: Congress passes a new budget with a smaller authorization for defense next year. Yes, that’s true — but let’s get more concrete. How does that happen? AUDIENCE: Uh, you get a majority of the House and Senate to vote for it and the President to sign it. Great, great — so how do you get them to do that? Now we have to think about what motivates politicians to support something. This is a really tricky question, but it’s totally crucial if we want to be effective. After all, if we don’t eventually motivate the politicians, then what we’ve done is useless for achieving our goal. (Unless we can think of some other way to shrink the defense budget.) But this is also not an insoluble problem. Put yourself in the shoes of a politician for a moment. What would motivate you? Well, on the one hand, there’s what you think is right. Then there’s what will help you get reelected. 
And finally there’s peer pressure and other sort of psychological motivations that get people to do things that don’t meet their own goals. So the first would suggest a strategy of persuading politicians that cutting the defense budget was a good idea. The second would suggest organizing a constituency in their districts that would demand they cut the defense budget. And maybe one of you can figure out how to use the third—that’s a little trickier. But let’s stick with the first, since that’s the most standard. What convinces politicians that something is the right thing to do? AUDIENCE: Their beliefs? In a sense, I suppose. But those are going to be pretty hard to change. I’m thinking more, if you have a politician with a given set of beliefs, how do you convince them that cutting the defense budget advances those beliefs? AUDIENCE: You outline why to them. Well, OK, let’s think about that. Do you think if you ran into Nancy Pelosi in the hallway here and you tried to explain to her why cutting the defense budget would accomplish her beliefs, that you’d convince her? AUDIENCE: Probably not. Why not? AUDIENCE: Because she wouldn’t really listen to me — she’d just smile and nod. Yeah. Nancy Pelosi doesn’t trust you. She’s never met you. You’re not particularly credible. So you need to find people the politicians trust and get them to convince the politicians. Alright, well, we can continue down this road for a while — figuring out who politicians trust, figuring out how to persuade them, figuring out how to get them to, in turn, persuade the politicians, etc. Then, when the politicians are persuaded, there’s the task of developing something they can vote for, getting it introduced so they can vote on it, then getting them to vote on the specific measure even when they agree with the overall idea. You can see that this can take quite a while.
I don’t like wearing suits. In part, this is simply a question of personal taste — I find them uncomfortable and overpriced, and I don’t like the way they look. But it’s also a question of principle. Suits — and the other trappings of “respect” that go with them, like titles and sir’s and the rest — are the physical evidence of power distance, the entrenchment of a particular form of inequality.
In retrospect, the answer is rather obvious. Children learn a language because they are surrounded by it. It’s unavoidable. Their world is full of people speaking it and the pattern matchers in their brains go to town, figuring out the structures underlying its grammar and associating its vocabulary with the other things they see around them. It’s impossible for there to be anything similar with written words. Sure, some words appear in fairly regular positions (MEN on bathroom doors, perhaps) and children may learn to recognize them, but for the most part words are rather avoidable and their patterns hard to spot. How are children to draw a connection between the words in the newspaper and any sentences that they can understand? The only clues are the pictures and anyone who’s read picture books to a kid knows that kids make valiant use of those few clues, but it’s simply not enough to let them learn to read. What’s needed is a way to give children the additional clues they require, but at their own pace. An adult can read books but only reads linearly and soon gets bored of reading the same thing over and over again. (I’ve often thought that children were being stupid by reading the same things over and over and over again. Now I realize I’m the stupid one; it’s the kids who are being smart. Only through repetition can your brain see the patterns!) It’s very difficult for children to pick up a pattern under such conditions.
But devices never get tired, so I would propose a device. Here is what I imagine: Give the child an iPad with a special program for reading books. The program provides a selection of nice picture books with words in large type underneath. Switching pages can be done the usual way; kids seem pretty good at figuring out gestural interfaces. But the big innovation is simply this: when you touch a word, it turns red while the speakers say it out loud. In this way, the child can have the machine read the book to them. Tap the words in sequence and the book pronounces them. If a word is somehow unclear, just tap it again.
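The core mechanic is simple enough to sketch as pure logic. This is my own hypothetical illustration, not any real implementation: `handle_tap`, the `<red>` markup, and the returned speech string are all placeholders standing in for a touch screen and a text-to-speech engine.

```python
# Hypothetical sketch of the tap-to-read interaction: given the words on
# a page and the index of the tapped word, return what to display (the
# tapped word highlighted) and what the speakers should say.

def handle_tap(page_words, tapped_index):
    """Return (rendered_page, word_to_speak) for a tap on one word."""
    rendered = [
        f"<red>{w}</red>" if i == tapped_index else w
        for i, w in enumerate(page_words)
    ]
    return " ".join(rendered), page_words[tapped_index]

page = ["The", "cat", "sat", "on", "the", "mat"]
display, speak = handle_tap(page, 1)
print(display)  # The <red>cat</red> sat on the mat
print(speak)    # cat
```

Tapping the same word again just calls the same handler, which is exactly the repetition the previous paragraph argues children need.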
Like a lot of people, I grew up feeling frustrated with the world — extremes of wealth and poverty, insane and bloody wars, outdated intellectual monopoly laws, big corporations run amok. But I had no idea what to do about it. Writing just felt like preaching to the choir, marching in the streets felt like the protest of the powerless, working with people on the ground just didn’t seem to scale.
Smart people actually say things that are very simple and easy to understand. And the smarter they are, the more clear what they say is. It’s stupid people who say things that are hard to understand.
This means the tradeoff between being expert and being popular doesn’t actually exist. People who truly understand their subject should have no trouble writing for a popular audience. And, in fact, their writing will probably be better than that of the professional popularizers.
Management is the art of getting people who work for you to accomplish things. It’s a subtle and fascinating art, the applied version of my great intellectual love, sociology. It’s usually practiced badly, but even when done badly it can accomplish incredible things. One person can only do so much on their own—their time, their powers, their creativity are all limited. But even an incompetent manager, who uses only a fraction of the powers of her employees, is capable of accomplishing tasks far beyond the range of any single person.
Organizing is the art of getting people who don’t work for you to accomplish things. Many of the underlying concepts are the same but the execution is vastly more difficult. You don’t really get to pick your people. The people you get don’t simply follow instructions, they must be persuaded and cajoled and made to understand your vision. But when it works, they accomplish great things you never would have allowed them to try.
Mobilizing can be done thru any medium. The folks who knock on your door to ask for your vote (or donation) are face-to-face mobilizers. You can do the same by telephone or television (call now to contribute!). It is, however, a one-way relationship. You are simply a number on a list. But this doesn’t mean online organizing is impossible, just that it isn’t often done. Obviously it’s much harder than mere mobilization—and much more complicated—but it’s much more rewarding as well. It is what makes for a successful open source project, or a thriving online community. The problem is one of scale — and that’s true when it’s done through any medium.
The standard explanation is hyperbolic discounting: humans tend to weigh immediate effects much more strongly than distant ones. But I think the actual psychological effect at work here is just the percentage fallacy. If I ask for the money now, I may have to wait 60 seconds. But if I get it tomorrow I have to wait 143,900% more.
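The figure checks out: a day is 86,400 seconds, so waiting until tomorrow instead of for 60 seconds is 1,439 times the extra wait. A quick check (mine, not the post’s):

```python
# Verifying the "143,900% more" claim from the percentage fallacy example.
now_wait = 60                  # seconds until I get the money now
tomorrow_wait = 24 * 60 * 60   # 86,400 seconds until tomorrow

extra = (tomorrow_wait - now_wait) / now_wait
print(f"{extra:.0%}")  # 143900%
```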
Liberals don’t like talking about crime. The classic answer—fixing the root causes of crime—now seems hopelessly ambitious. And our natural sympathy for the millions ground down by an out-of-control prison system and a pointless war on drugs doesn’t play well with voters, especially when most criminals can’t vote. The general belief seems to be that the problem of crime has been solved—after all, crime levels have dropped dramatically since the law-and-order 80s—and that the real problem now is not too much crime, but too much punishment.
So there’s the question: How can we have less crime with less punishment? The first thing to notice is that low-crime is an equilibrium state: if nobody is committing any crimes, all anti-crime resources can be focused on anyone who decides to break the law, making it irrational for them to even try. But high-crime is also an equilibrium (assuming reasonable levels of punishment): if everyone is breaking the law, the police can’t possibly stop all of them, so it’s not so risky to keep on breaking the law.
Dynamic concentration isn’t a panacea. Obviously it only works where the costs of monitoring are much less than the costs of enforcement. But this still leaves lots of opportunities, and clever selection of the population to concentrate on can significantly decrease the cost of monitoring.
On Writing by Stephen King
Nothing earth-shattering, but it turns out Stephen King is actually a good writer. I honestly had no idea.
Becoming Attached (reread) by Robert Karen
One of my favorite books of all time. Probably the best work of science writing I’ve ever read.
How to Win Friends and Influence People (reread) by Dale Carnegie
There’s a reason this is a classic. It articulates a way of dealing with people, founded on concern and empathy, and convincingly argues that this kind of style is actually the more productive one for getting things done. Instead of yelling at people to do things, you make them want to help you. And the book itself is a genius exemplar of this practice. Instead of berating you for being a jerk, like most people would, it persuades you to want to change.
The Immortal Life of Henrietta Lacks by Rebecca Skloot
Everyone has praised this book, and for good reason — it deftly interweaves an incredible story of science with the heartbreaking tragedy of the people science studies. Nothing earthshattering, but a great piece of writing.
Scott Pilgrim’s Precious Little Life (1 of 6) by Bryan Lee O’Malley
Scott Pilgrim vs. The World (2 of 6) by Bryan Lee O’Malley
Scott Pilgrim and the Infinite Sadness (3 of 6) by Bryan Lee O’Malley
Scott Pilgrim Gets It Together (4 of 6) by Bryan Lee O’Malley
Scott Pilgrim vs. The Universe (5 of 6) by Bryan Lee O’Malley
Scott Pilgrim’s Finest Hour (6 of 6) by Bryan Lee O’Malley
You should definitely see the movie and then, if you do, it’s worth reading the books. The books are much deeper and darker than the movie lets on. You realize that the film you saw as an example of joy and exuberance is actually incredibly depressing. By contrast, we will just forget that someone made a movie of Bonfire of the Vanities. Yeek.
Meta Math! by Gregory Chaitin
Chaitin makes an obscure field you’ve never heard of like Algorithmic Information Theory sound interesting and fun, even if you don’t know any math.
Part of why running a nonprofit is so hard is that pretty much all nonprofits are delegation. Donors aren’t buying a particular thing they know they want, they’re buying a chance to help others, without knowing exactly what it is they want. And that’s why randomized controlled trials have been transformational for the nonprofit sector — they’ve converted a delegation into a service. Great nonprofits don’t have to guess at what will help people the most; they just need to look up the most helpful service and then purchase more of it. Poor Economics is a remarkable book if only because it shows how crucial this is. It’s full of tales of small-scale experiments where well-intentioned do-gooders try hard to help some people and fail catastrophically. But they only notice because there are academics there collecting data; in the typical nonprofit, where the decisionmakers are far removed from the evidence on the ground, they’d probably never know that much was going wrong (assuming that they even cared).
Jon Elster has a four-phase theory of revolutions:1
1. A hard core of committed activists get together to do something completely crazy.
2. The regime cracks down, attracting people who are sympathetic to the cause to rally to the support of the crazy ones.
3. As the protests grow, it seems like they might have a reasonable chance of succeeding and it seems worth it even for just normal reasonable people to start joining in.
4. The protests become so overwhelmingly large that even their opponents pretend to be part of them, so as not to be on the wrong side of history.
It seems pretty clear that the Internet helps with 1 — after all, it’s brought together groups of crazy committed people about every other topic, from Smallville slash fiction to high-energy astrophysics. It’d be very surprising if it didn’t bring committed activists together too. It’s clearly helped with 2 — YouTube videos of protestors being mistreated by police have been a staple of the #occupy movement, even though they haven’t gotten much coverage on traditional TV; We are all Khaled Said presumably reached some people in Egypt. 3 and 4 are when the cable news and satellite television stations start joining in and when people support the protest just because it’s such a huge physical presence in their lives. Here, I agree, the Internet probably has less effect. The problem is that you never get to 3 and 4 without 1 and 2 — I don’t think it’s a total accident that all of these protests are happening now. I think they’re happening because 1 and 2 have been made much easier thanks to the Internet. It’s just that most people don’t hear about them until steps 3 and 4, which are carried much more by traditional media. They suffer from the understandable fallacy that just because they heard about it on TV, that must be how everyone else did.
And that’s why I like Jony Ive. He too clearly feels that pain (he once insisted they hold up an entire product launch because he didn’t like the polish on the screws) but he doesn’t lash out at people about it. Instead, he sits down with the people involved and works to fix the problem until they get it just right. Asked about his viciousness, Jobs insisted it was all for the best: “I’ve learned over the years that when you have really good people you don’t have to baby them…A-plus players like to work together, and they don’t like it if you tolerate B work. Ask any member of that Mac team. They will tell you it was worth the pain.” Even Debi Coleman agreed: “I consider myself the absolute luckiest person in the world to have worked with him.”
But does it require so much pain? My hope is that I can be just as exacting, demand work just as good, without emotionally destroying people in the process. I want to be a perfectionist and a nice guy. I want to be Jony Ive.
Dishonesty has two parts: 1) saying something that is untrue, and 2) saying it with the intent to mislead the other person. You can have each without the other: you can be genuinely mistaken and thereby say something false without intending to mislead, and you can intentionally mislead someone without ever saying anything that’s untrue. (The second is generally considered deceit, but not dishonesty.)
I know of two exceptions. In Boston, there is a company called 5 Wits. The experience is something like this: you enter an unassuming rug shop and when the salesman asks if he can help you, you tell him the secret pass code. He gets a funny look on his face, locks the door and pulls down the blinds. He pulls back the rug to reveal a television screen that briefs you on your secret mission.
Poor Economics by Abhijit Banerjee and Esther Duflo
God, what a book! Poor Economics is a series of tales of foreigners trying to save the far-flung poor, while failing to realize not only that their developed-country ideas are terrible disasters in practice, but also that everything they’ve learned to think of as solid — even something as simple as measuring distance — is far more fraught, and complex, and political than they ever could have imagined. It’s a stunning feeling to have the basic building blocks of your world questioned and crumbled before you — and a powerful lesson in the value of self-skepticism for everyone who’s trying to do something.
Harry Potter and the Methods of Rationality by Eliezer Yudkowsky
This is a book whose title still makes me laugh and yet it may just turn out to be one of the greatest books ever written. The writing is shockingly good, the plotting is some of the best in all of literature, and the stories are simply pure genius. I fear this book may never get the accolades it deserves, because it’s too hard to look past the silly name and publishing model, but I hope you, dear reader, are wiser than that! A must-read. As it says at the beginning, you really need to give it a couple chapters to get started before passing judgment — the first bunch are quite silly and it may not seem worth sticking with until you’ve gotten past them.
The Lean Startup by Eric Ries
Ries presents a translation of the Toyota Production System to startups — and it’s so clearly the right way to run a startup that it’s hard to imagine how we got along before it. Unfortunately, the book has become so trendy that I find many people claiming to swear allegiance to it who clearly missed the point entirely. Read it with an open mind and let it challenge you, so you can start to understand how transformative it really is.
CODE: The Hidden Language of Computer Hardware and Software by Charles Petzold
A magnificent achievement. Charles Petzold starts with the story of two kids across the street who wish to communicate with each other and, from this simple beginning, builds up an entire computer without ever making it seem like something that should be over your head. I never really felt I understood the computer until I read this book.
The difference between these two fates — between people’s imperfections canceling each other out versus amplifying each other — is institutions, the social structures that guide people in their actions. Hollister seems gifted with an amazing set of institutions. I don’t know the details, but we can imagine them: Everyone must show up for their shift an hour early. If you don’t show, a manager calls in a replacement. The managers keep an eye on your performance and if you don’t do a good enough job folding shirts, you’re reprimanded or replaced. Perhaps a roving “brand protection squad” goes around ensuring local managers are upholding the high national standards. And on and on. Every failsafe has a failsafe.
Our nation’s institutions have crumbled, Hayes argues. From 2000–2010 (the “Fail Decade”), every major societal institution failed. Big businesses collapsed with Enron and Worldcom, their auditors failed to catch it, the Supreme Court got partisan in Bush v. Gore, our intelligence apparatus failed to catch 9/11, the media lied us into wars, the military failed to win them, professional sports was all on steroids, the church engaged in and covered up sex abuse, the government compounded disaster upon disaster in Katrina, and the banks crashed our economy. How did it all go so wrong? Hayes pins the blame on an unlikely suspect: meritocracy. We thought we would simply pick out the best and raise them to the top, but once they got there they inevitably used their privilege to entrench themselves and their kids (inequality is, Hayes says, “autocatalytic”). Opening up the elite to more efficient competition didn’t make things more fair, it just legitimated a more intense scramble. The result was an arms race among the elite, pushing all of them to embrace the most unscrupulous forms of cheating and fraud to secure their coveted positions.
And their distance from the way the rest of the country really lives makes it impossible for them to do their jobs justly — they just don’t get the necessary feedback. The only cure is to reduce economic inequality, a view that has surprisingly broad support among the population (clear majorities want to close the deficit by raising taxes on the rich, which is more than can be said for any other plan).
reducing economic inequality was somehow always the appropriate solution to each of the many social ills the group identified.
Meritocracy says “there must be one who rules, so let it be the best”; egalitarianism responds “why must there?” It’s the power imbalance, rather than inequality itself, that’s the problem. Imagine a sci-fi world in which productivity has reached such impressive heights that everyone can have every good they desire just from the work young kids do for fun.
Why put all your eggs in one basket, even if it’s the best basket? Surely you’d get better results by giving more baskets a try. You can argue that this is exactly where technology is bringing us — popular kids on YouTube get made into huge pop sensations, right? — and the genius of Hayes’ book is to show us why this is not enough. The egalitarian demand shouldn’t be that we need more black pop stars or female pop stars or YouTube sensation pop stars, but to question why we need elite superstars at all. I hope Hayes’ next book shows us what the world without them is like.
I’ve put together a new guide for developing software, from idea to architectural details. The idea was to combine all the good ideas I’d heard from various areas of software development into a single, concise document. I’ve also added some new stuff — I think my need/idea model is actually a pretty good way to generate startup (and other) ideas.
The Pokayoke Guide to Developing Software
I’m eager for suggestions and
But it seems to me like, if you’re on the side of labor, your preferred solution should be more social insurance and progressive taxation instead.
No, I think the thing startup founders want is importance. Importance is a bit like power, but heavily diluted.
Importance is different from impact. Tim Berners-Lee (inventor of the Web) had a huge impact in the world, but he’s not particularly important. He decided long ago that the Semantic Web was the next big thing, but few people cared, because, practically speaking, there was very little he could actually do about it. Dick Costolo (CEO of Twitter), by contrast, is pretty important. If he decides that Twitter needs a “consistent user experience”, he can shut down apps millions of people use each day, destroying the companies that build them. We all know the dangers of wanting money or power. But the dangers of wanting importance are little-discussed. Importance tends to require centralizing things, which means restraining innovation and leaving yourself open to the demands of actual power.
But what’s weird about this mania for science is how unscientific it all is. As far as I know, no studies have shown that evidence-based medicine leads to better patient outcomes or that companies which practice comprehensive A/B testing are more profitable than those which follow their intuition. And the evidence that science is responsible for stuff like increased life expectancy is surprisingly weak. But there’s such a mania for science that even asking these questions seems absurd. How could there possibly be evidence against evidence-based medicine? The whole idea seems like a contradiction in terms. But it is not. Recent decades have seen science encroach on the kitchen, with scientific approaches to cooking and cuisine. Where other chefs might simply follow instructions they found on a yellowing scrap of paper, the new modernists seek to understand the physics behind their actions. This approach has led to some interesting new techniques, but it’s also led us to understand that some of those silly traditions aren’t so silly after all.
Eggs, for example, were often beaten in copper bowls. Why copper bowls? Chefs might have been able to give you some kind of reason, but it would have sounded silly to scientific ears. But the modernists discovered that the ions in the copper ended up forming complex bonds with the conalbumin in the eggs. This was not something that chefs had ever established as scientific knowledge—no aproned Isaac Newton ever discovered this was the right way to cook the eggs—but it was knowledge chefs had nonetheless. It was, in Polanyi’s phrase, tacit knowledge, part of the things society genuinely knew but was never able to write down or clearly prove.
Scientism systematically destroys tacit knowledge. If chefs were forced to follow “evidence-based cooking”, not using anything special like a copper bowl until there was a peer-reviewed double-blind randomized controlled trial proving its effectiveness, the result surely would be worse food. So why is it crazy to believe the same attitude leads to worse
If you’re struggling with a decision, we’re taught to approach it more “scientifically”, by systematically enumerating pros and cons and trying to weight and balance them. That’s what Richard Feynman would do, right? Well, studies have shown that this sort of explicit approach repeatedly leads to worse decisions than just going with your gut. Why? Presumably for the same reason: your gut is full of tacit knowledge that it’s tough to articulate and write down. Just focusing on the stuff you can make explicit means throwing away everything else you know—destroying your tacit knowledge.
I’d always just assumed this was true—that tradition and intuition had nothing to contribute, unless carefully coached by scientific practice. That science was the only way to get knowledge, rather than just another way of codifying it. Now, instead of throwing it all away, I’m thinking I ought to spend more time finding ways to harness all that tacit knowledge.
The successful kids didn’t just live with failure, they loved it! When the going got tough, they didn’t start blaming themselves; they licked their lips and said “I love a challenge.” They’d say stuff like “The harder it gets the harder I need to try.” Instead of complaining it wasn’t fun when the puzzles got harder, they’d psych themselves up, saying “I’ve almost got it now” or “I did it before, I can do it again.” One kid, upon being given a really hard puzzle, one that was supposed to be obviously impossible to solve, just looked up at the experimenter with a smile and said, “You know, I was hoping this would be informative.”3 What was wrong with them?
The successful kids believed precisely the opposite: that everything came through effort and that the world was full of interesting challenges that could help you learn and grow. (Dweck called this the “growth mindset.”) That’s why they were so thrilled by the harder puzzles — the easier ones weren’t any sort of challenge, there was nothing you could learn from them. But the really tough ones? Those were fascinating — a new skill to develop, a new problem to conquer. In later experiments, kids even asked to take puzzles home so they could work on them some more.4 It took a seventh-grader to explain it to her: “I think intelligence is something you have to work for…it isn’t just given to you… Most kids, if they’re not sure of an answer, will not raise their hand… But what I usually do is raise my hand, because if I’m wrong, then my mistake will be corrected. Or I will raise my hand and say… ‘I don’t get this. Can you help me?’ Just by doing that I’m increasing my intelligence.”5 In the fixed mindset, success comes from proving how great you are. Effort is a bad thing — if you have to try hard and ask questions, you obviously can’t be very good. When you find something you can do well, you want to do it over and over, to show how good you are at it.
The first step to getting better is believing you can get better. In her book, Mindset, Dweck explains how to start talking back to your fixed mindset. The fixed mindset says, “What if you fail? You’ll be a failure.” The growth mindset replies, “Most successful people had failures along the way.”7
Growth mindset has become a kind of safe word for my partner and me. Whenever we feel the other person getting defensive or refusing to try something because “I’m not any good at it”, we say “Growth mindset!” and try to approach the problem as a chance to grow, rather than a test of our abilities. It’s no longer scary, it’s just another project to work on. Just like life itself.
Raw Nerve
August 18, 2012. Original link.
This is a series of pieces on getting better at life:
1. Take a step back
2. Believe you can change
3. Look at yourself objectively
4. Lean into the pain
5. Confront reality
6. Cherish mistakes
7. Fix the machine, not the person
The best posts are probably 2 and 4.
Bonus pieces: What are the optimal biases to overcome?
Related reading: The Flinch (for part 4), Everything is Obvious, Ray Dalio’s
Why did doctors so stubbornly reject Ignaz Semmelweis? Well, imagine being told you were responsible for the deaths of thousands of your patients. That you had been killing the people you were supposed to be protecting. That you were so bad at your job that you were actually worse than just giving birth in the street. We all know people don’t like to hear bad news about themselves. Indeed, we go out of our way to avoid it — and when we do confront it, we try to downplay it or explain it away. Cognitive dissonance psychologists have proven it in dozens of experiments: Force students through an embarrassing initiation to take a class, and they’ll insist the class is much more interesting. Make them do a favor for someone they hate, and they start insisting they actually like them. Have them make a small ethical compromise and they’ll feel comfortable making bigger and bigger ones. Instead of just accepting we made a mistake, and shouldn’t have compromised or done the favor or joined the class, we start telling ourselves that compromising isn’t so bad — and when the next compromise comes along, we believe the lies we tell ourselves, and leap at making another mistake. We hate hearing bad news about ourselves so much that we’d rather change our behavior than just admit we screwed up.2
This is what we’re taught: make five compliments for every criticism, sandwich negative feedback with positive feedback on each side, the most important thing is to keep up someone’s self-esteem. But, as Semmelweis showed, this is a dangerous habit. Sure, it’s awful to hear you’re killing people—but it’s way worse to keep on killing people! It may not be fun to get told you’re lazy, but it’s better to hear it now than to find out when you’re fired. If you want to work on getting better, you need to start by knowing where you are.
Looking at ourselves objectively isn’t easy. But it’s essential if we ever want to get better. And if we don’t do it, we leave ourselves open to con artists and ethical compromisers who prey on our desire to believe we’re perfect. There’s no one solution, but here are some tricks I use to get a more accurate sense of myself:
Embrace your failings. Be willing to believe the worst about yourself. Remember: it’s much better to accept that you’re a selfish, racist moron and try to improve, than to continue sleepwalking through life that way as the only one who doesn’t know it.
Studiously avoid euphemism. People try and sugarcoat the tough facts about themselves by putting them in the best light possible. They say “Well, I was going to get to it, but then there was that big news story today” and not “Yeah, I was procrastinating on it and started reading the news instead.” Stating things plainly makes it easier to confront the truth.
Reverse your projections. Every time you see yourself complaining about other groups or other people, stop yourself and think: “is it possible, is there any way, that someone out there might be making the same complaints about me?”
Look up, not down. It’s always easy to make yourself look good by finding people even worse than you. Yes, we agree, you’re not the worst person in the world. That’s not the question. The question is whether you can get better — and to do that you need to look at the people who are even better than you.
Criticize yourself. The main reason people don’t tell you what they really think of you is they’re afraid of your reaction. (If they’re right to be afraid, then you need to start by working on that.) But people will feel more comfortable telling you the truth if you start by criticizing yourself, showing them that it’s OK.
Find honest friends. There are some people who are just congenitally honest. For others, it’s possible to build a relationship of honesty over time.
Either way, it’s important to find friends who you can trust to tell you the harsh truths about yourself. This is really hard — most people don’t like telling harsh truths. Some people have had success providing an anonymous feedback form for people to submit their candid reactions.
Listen to the criticism. Since it’s so rare to find friends who will honestly criticize you, you need to listen extra carefully when they do. It’s tempting to check what they say against your other friends. For example, if one friend says the short story you wrote isn’t very good, you might show it to some other friends and ask them what they think. Wow, they all think it’s great! Guess that one friend was just an outlier. But the fact is that most of your friends are going to say it’s great because they’re your friend; by just taking their word for it, you end up ignoring the one person who’s actually being honest with you.
Take the outside view. As I said before, we’re always locked in our own heads, where everything we do makes sense. So try seeing what you look like from the outside for a bit, assuming you don’t know any of those details. Sure, your big money-making plan sounds like a great idea when you explain it, but if you throw that away, is there any external evidence that it will work?
The problem is that the topics that are most painful also tend to be the topics that are most important for us: they’re the projects we most want to do, the relationships we care most about, the decisions that have the biggest consequences for our future, the most dangerous risks that we run. We’re scared of them because we know the stakes are so high. But if we never think about them, then we can never do anything about them. Ray Dalio writes: It is a fundamental law of nature that to evolve one has to push one’s limits, which is painful, in order to gain strength—whether it’s in the form of lifting weights, facing problems head-on, or in any other way. Nature gave us pain as a messaging device to tell us that we are approaching, or that we have exceeded, our limits in some way. At the same time, nature made the process of getting stronger require us to push our limits. Gaining strength is the adaptation process of the body and the mind to encountering one’s limits, which is painful. In other words, both pain and strength typically result from encountering one’s barriers. When we encounter pain, we are at an important juncture in our decision-making process.1
The agile approach, however, is to do the opposite: merging hurts, so we’ll do it more often. Instead of merging every couple weeks, or every couple months, we’ll merge every single day, or every couple hours.
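The logic of "it hurts, so do it more often" can be sketched with a toy cost model (the quadratic cost function and the specific numbers are my assumptions for illustration, not anything from the post): merge pain tends to grow faster than linearly with how much unmerged work has piled up, so many small merges cost less in total than one big one.

```python
# Toy model: assume the pain of a merge grows roughly quadratically
# with the number of changes that have accumulated since the last merge.
def merge_cost(changes):
    return changes ** 2  # assumed superlinear conflict cost

daily_changes = 5

# Merge once every two weeks: 14 days of changes collide at once.
big_merge = merge_cost(14 * daily_changes)

# Merge every day: 14 small merges, each covering one day's changes.
small_merges = 14 * merge_cost(daily_changes)

print(big_merge, small_merges)  # prints: 4900 350
```

Under any superlinear cost curve the daily-merge strategy wins, which is the intuition behind continuous integration: frequent small doses of pain instead of rare enormous ones.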
Now I realize this is a bogus argument: it’s not that the pain is so bad that it makes me flee, it’s that the importance of the topic triggers a fight-or-flight reaction deep in my reptile brain. If instead of thinking of it as a scary subject to avoid, I think of it as an exciting opportunity to get better, then it’s no longer a cost-benefit tradeoff at all: both sides are a benefit — I get the benefits of being good at selling and the fun of getting better at something. Do this enough times and your whole outlook on life begins to change. It’s no longer a scary world, hemming you in, but an exciting one full of adventures to pursue.3
Tackling something big like this is terrifying; it’s far too much to start with. It’s always better to start small. What’s something you’ve been avoiding thinking about? It can be anything — a relationship difficulty, a problem at work, something on your todo list you’ve been avoiding. Call it to mind — despite the pain it brings — and just sort of let it sit there. Acknowledge that thinking about it is painful and feel good about yourself for being able to do it anyway. Feel it becoming less painful as you force yourself to keep thinking about it. See, you’re getting stronger!
OK, take a break. But when you’re ready, come back to it, and start thinking of concrete things you can do about it. See how it’s not as scary as you thought? See how good it feels to actually do something about it? Next time you start feeling that feeling, that sense of pain from deep in your head that tells you to avoid a subject — ignore it. Lean into the pain instead. You’ll be glad you did.
Synthesizing hundreds of these studies, K. Anders Ericsson concluded that what distinguishes experts from non-experts is engaging in what he calls deliberate practice.5 Mere practice isn’t enough — you can sit and make predictions all day without getting any better at it — it needs to be a kind of practice where you receive “immediate informative feedback and knowledge of results.”6
There are some things writing is really good at, but forcing people to get up and do something isn’t one of them. The irony, of course, is that the books are totally useless unless you take their advice. If you just keep reading them, thinking “that’s so insightful! that changes everything,” but never actually doing anything different, then pretty quickly the feeling will wear off and you’ll start searching for another book to fill the void. Chris Macleod calls this “epiphany addiction”: “Each time they feel like they’ve stumbled on some life changing discovery, feel energized for a bit without going on to achieve any real world changes, and then return to their default of feeling lonely and unsatisfied with their life. They always end up back at the drawing board of trying to think their way out of their problem, and it’s not long before they come up with the latest pseudo earth shattering insight.”7 Don’t let that happen to you.
Mistakes are our friend. They can be an exasperating friend sometimes, the kind whose antics embarrass and annoy, but their heart is in the right place: they want to help. It’s a bad idea to ignore our friends. That’s a hard attitude to take toward mistakes — they’re so embarrassing, our natural instinct is to want to hide them and cover them up. But that’s the wrong way to think about them. They’re actually giving us a gift, because they’re pointing the way toward getting better.
Sakichi Toyoda, the founder of the Toyota car company, developed a technique called the “Five Whys” for handling this. For example, sometimes a car would come off the Toyota production line and not start. Why? Well, imagine it was because the alternator belt had come loose. Most car companies would stop here and just fix the alternator belt. But Toyoda understood that was dodging the mistake — it would just come back again and again. So he insisted they keep asking “Why?”. Why was the alternator belt loose? Because it hadn’t been put on correctly. Why? Because the person putting it on didn’t double-check to see if it had fit in correctly. Why? Because he was in too much of a hurry. Why? Because he had to walk all the way to the other side of the line to get the belts and by the time he got back he didn’t have enough time to double-check. Aha! There, on the fifth why, we find the real cause of the mistake. And the solution is easy: move the box of alternator belts closer.
By forcing yourself to write it down, to keep a log of the problems you’ve run into, you begin to see patterns.
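As a minimal sketch of the payoff (the log entries below are invented, purely for illustration): once problems are written down, even a trivial tally starts to surface the patterns you would otherwise never notice.

```python
# Tally a written log of mistakes to see which ones keep recurring.
from collections import Counter

mistake_log = [
    "shipped without tests",
    "misread the spec",
    "shipped without tests",
    "forgot to tell the team",
    "shipped without tests",
]

# The most frequent entry is the pattern worth fixing at the system level.
for mistake, count in Counter(mistake_log).most_common(1):
    print(f"{count}x: {mistake}")  # prints: 3x: shipped without tests
```

The individual entries each looked like one-off slips; the count is what reveals a systemic problem, which is exactly where a Five-Whys-style fix pays off.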
It wasn’t the workers who were the problem; it was the system.1
Again, the same striking results: students were persuaded Jim believed the arguments he made, even when they knew he had no choice in making them.3 This was an extreme case, but we make the same mistake all the time. We see a sloppily-parked car and we think “what a terrible driver,” not “he must have been in a real hurry.” Someone keeps bumping into you at a concert and you think “what a jerk,” not “poor guy, people must keep bumping into him.” A policeman beats up a protestor and we think “what an awful person,” not “what terrible training.” The mistake is so common that in 1977 Lee Ross decided to name it the “fundamental attribution error”: we attribute people’s behavior to their personality, not their situation.4