Archive for the 'Books' Category

Book review: “Programmers at Work”

Wednesday, April 30th, 2008

I just finished reading a fascinating book: “Programmers at Work”, published in 1986 by Microsoft Press. It’s a series of interviews with notable programmers of the day; it’s out of print right now, sadly (I have a borrowed copy). The concept was to ask the era’s biggest programmers what it’s like to be a programmer, and where they thought things were headed. The interview subjects are a real treasure trove, featuring a mixture of those still dominating the software market today (Bill Gates, Ray Ozzie (currently “Chief Software Architect” at Microsoft), Adobe’s John Warnock), those lauded for their former contributions (Apple Macintosh innovators Andy Hertzfeld and the late Jef Raskin, Pac-Man creator Toru Iwatani), and those whose contributions now live in obscurity, often after having been crushed by the Microsoft juggernaut (PFS:FILE’s John Page, Framework’s Bob Carr).

It’s a very interesting artifact because, for all of the contributions of computer programming, the craft of it rarely gets examined by the larger culture. For various reasons - but mostly the fact that programming looks like just about one of the most mundane tasks imaginable, basically involving sitting at a computer all day - there’s no great desire to hear the individual stories of programmers. (As a digression, it’s interesting to examine the handling of programmers in film: it’s too big a subject to ignore entirely, but given that there’s nothing interesting about watching someone typing at a screen, Hollywood seems to have settled on a curious standard template, which is programmers-as-victims. Think about what we’ve seen so far: forced into a computer game in Tron, eaten by dinosaurs in Jurassic Park, hunted by a shadowy corporation in The Net, laid off and then nearly imprisoned in Office Space, stalked by a sinister CEO in Antitrust, attacked by aliens in Independence Day, hunted by ruthless virtual agents in The Matrix. For better or worse, apparently programmers are only compelling when someone’s after them. (By the way, if you think this is an incomplete list - I’m not counting the movies where the characters are just system hackers, like “Swordfish”).)

Anyway, as it stands “Programmers at Work” is nearly unique: a view of the software industry from programmers themselves. And it’s definitely a product of its time, when the state of the software industry could somewhat reliably be summed up by interviews with around 20 people. Today the field of software development is so huge, and so distributed, that I doubt you could find a set of interview subjects that could truly summarize the current state of programming. Not to say that it wouldn’t be an interesting attempt…

The book is a nice overview of what programming looked like during the PC revolution, when computers became a household fixture, and began to take over tasks like document creation, financial processing and mathematical calculations. It was a Wild-West era of software development, during which all manner of interface questions were still up in the air - is having a mouse useful, or is it just a fad? What’s the best way to do a copy-and-paste of text? - and developers had to make all sorts of decisions we don’t think about any more, from small interface questions to giant ones like which brand of computer they should do their development for. The stakes were higher, too, because one or a handful of programmers could create an application that would take over the industry, as Jonathan Sachs and a few others did with Lotus 1-2-3.

The interviewees discuss the nature of programming, and some of the language might sound familiar to today’s programmers, despite the huge technology gap - the need to keep the entire structure of a system in your head while you’re working on it; the euphoria of finding what seems like the ideal solution to a problem; the advantages of creating “well-balanced code”.

Every interview contains a section asking what the subject thinks will be the future of computing. This makes for some interesting reading, obviously, because we know how the story turns out. Sadly, most of these sections are notable for what isn’t mentioned. None of the programmers, in 1986, anticipate the dominance that object-oriented programming would have over the programming world by the end of that decade; a few mention Smalltalk, which, as I understand it, is object-oriented, but none mention the object-oriented C++, already out at the time, which, by the early 90’s, would become (I think) the dominant language of application development. Gates thinks that rule-based programming will be the wave of the future, which was not the case. There’s also no mention of relational databases, or of Oracle, which was already a fairly large company at the time, and would go on to become huge. There’s also no mention, either positive or negative, of open-source development, which had already had some great successes at the time with UNIX and the C programming language, would soon go on to more successes with Linux and Perl, and which, 22 years later, is set to surpass paid software as a business model. All of which is an interesting demonstration of William Gibson’s quote that “the future is already here; it’s just unevenly distributed.”

So what do the programmers predict? There are a variety of general comments that computers will continue getting smaller and faster, and, beyond that, testimonials as to how unknowable the future is. In general, the more specific the predictions, the more off-the-mark they are: a 25-year-old Jaron Lanier (now known as the “father of virtual reality”) talks about a secret project he’s working on. Reading between the lines of his description, it sounds like a virtual-reality application that lets people create software; he predicts that it “will really change the way people think about programming”. I don’t know what happened with that application, but suffice it to say that people still program goggle-less. Bill Gates and Gary Kildall (founder of Digital Research) both predict that CD-ROMs are the future of how people will use their computers - and perhaps they might have been, had the web not eclipsed their usefulness.

A few glimpses of the future that did occur pop up in throwaway comments among the interviews. Gates talks about the email system at Microsoft, which helps him “keep up” with information. Michael Hawley mentions a graphics application he’s used that’s “spinning off to form a new company called Pixar.” Most tantalizingly, Page predicts that computers will become important as a source of information, and mentions as an example how he logs on to the National Weather Service to “get briefings through my computer.” I don’t know what type of service it was; undoubtedly some early internet protocol. Unfortunately the interviewer changes the subject, so this is the one mention in the book of using the internet for information retrieval, a simple concept that, within 15 or 20 years, would redefine the use of computers and, it’s no great stretch to say, change the world.

Page, in general, comes off as something of the book’s hero: besides anticipating the use of networked computers for information retrieval, he talks about elegance in software interface design, by “making a program more functional without increasing its complexity,” echoing the greater focus today on ease-of-use in applications. He laments the bloated state of applications, “full of controls nobody ever touches or wants.” So how come no one today has heard of him or his software? Perhaps the software industry in the 80’s was no place for an aesthete: ordinary users valued simplicity of purchasing decision over simplicity of interface - a single operating system and suite of applications (Microsoft’s) would suffice, and whether or not it was well-designed was no great concern, as long as everything worked. Maybe that’s still true today, at least for desktop applications - most people still use Internet Explorer, after all, because that’s what comes installed on their PCs, even though the general consensus is that it’s the worst of the major web browsers currently out.

Black swans, and the problem with prediction markets

Thursday, July 12th, 2007

“Who knows what’s going to happen?/Lottery or car crash/Or you join a cult.” - Bjork, “Possibly Maybe”

I’m reading “The Black Swan”, the new book by Nassim Nicholas Taleb, whose “Fooled by Randomness” I read a few months ago and really liked. I didn’t think there was much more for him to say on the topic of uncertainty, but this book proves that wrong: in fact, there’s quite a bit more to say. Whereas the first one focused on human psychology and all the various ways we fool ourselves into thinking we can predict the future, this one takes a more mathematical tone, explaining why the future is inherently unpredictable. This is a very big statement: after all, maybe the only reason that we can’t predict the future very well is that each of us is cursed with our inherent biases, and limited information. If that were true, then if you could aggregate everyone’s thoughts, using, say, a prediction market, you’d have a good chance of getting at the truth.

Prediction markets: 2004-era buzzword, and the inspiration for my own Betocracy site. It’s far from a dead concept: Media Predict, for example, launched two months ago, is designed to help media companies figure out how well their movie, music and book properties will sell. And yes, Betocracy is still operational, though in all honesty I’ve lost interest in it; and so, apparently, has the world. (No need for sympathy, please! It was an important learning experience, I think.)

Anyway, the “holy grail”, to anyone who’s been interested in prediction markets, is James Surowiecki’s 2004 book “The Wisdom of Crowds”, the book which directly inspired me, and which I still have a high opinion of (though I may have to rethink some of my praise). Surowiecki captured many people’s imaginations with his examples of large groups making uncanny predictions. There was the first such demonstration, in which a crowd at a 1906 country fair guessed the weight of an enormous ox to within a pound or two. There are horse-race crowds, who collectively have odds-making abilities that are nearly unbeatable. And more recently, there are election-prediction markets, that have consistently beaten the polls in predicting election results. So, to extrapolate, asks the book (and many people), why can’t we use prediction markets as an all-around forecasting tool? For movie grosses, say, or flu outbreaks, or terrorist strikes?

Taleb doesn’t directly talk about prediction markets, though he does talk about capital markets, which are just a more established version of the same thing. But his logic can be easily applied. All of these things have something in common: the weight of an ox (okay, that’s really an observation and not a prediction, but you could phrase it as some sort of prediction), sporting events, political elections. Taleb says that they all fall within the world of what he calls “Mediocristan”, which is not a comment on their quality but rather on the nature of their probabilities. If you plotted the possible outcomes of any of these, they’d all end up in a nice bell curve graph, where, once you get outside of a rather narrow range of possibilities near the center, the probability of an outcome declines dramatically. The chance of a U.S. presidential candidate winning anything more than 65% of the vote, for instance, is rather small; more than 80%, nearly impossible. Similarly, if you ran the same set of horses against one another over and over, the times for each horse would be fairly similar from one run to the next - for a horse to suddenly double or halve its usual racing speed is unheard of.

Most of real life, on the other hand, according to Taleb, takes place in what he calls “Extremistan”. There, there’s no nice trailing-off around the center. Things like personal income, product success, and the severity of wars all fall into this category. For every person who makes a certain amount of money, for instance, there’s a very real chance that someone else will be making twice that much, and someone else ten times as much, regardless of what that original number was. Things that happen in Extremistan are much more unpredictable for just that reason. That’s why the prediction market Hollywood Stock Exchange, which gets headlines for predicting Oscar winners, fails spectacularly when it comes to guessing box office revenues (and there’s a link I wish I had read before starting Betocracy; though who knows if it would have had any effect on me at the time.)

There’s a mathematical explanation for the difference between the two “worlds” of Mediocristan and Extremistan, and it has to do with conditional vs. independent probabilities. In the Mediocristan world of sports, elections, etc., all the factors going into the final outcome are fairly independent of one another: the number of points a team scores in the first half of a game doesn’t really affect the number of points they score in the second; whether a person votes for a certain candidate doesn’t affect whether their neighbor will vote for that candidate. Thus, for a result to be significantly different from expectations, many things would have to go right (or wrong) independently - enough to make such a result all but impossible. On the other hand, in Extremistan, every event affects every subsequent event. If a book sells a million copies, bookstores begin displaying it prominently; the author gets invited on talk shows to plug it, etc: selling the next million becomes a much easier proposition. Similarly with the price of a stock, or the success of a website, or really most of the other interesting questions in life. On the negative side, events like wars can easily snowball as well. Taleb notes that before World War I, which is a classic case of a small event mushrooming completely out of control, stock markets in Europe were doing good business - no one had any inkling of the grand tragedy that was just about to befall them.
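You can see the difference in a quick simulation. This is my own sketch, not anything from Taleb’s book: I’m using a 1,000-voter election as a stand-in for Mediocristan, and a “rich get richer” book-sales process (a Polya urn, where each sale makes the next one more likely) as a stand-in for Extremistan.

```python
import random

random.seed(42)  # fixed seed so the runs are repeatable

def mediocristan_sample():
    # Election-style outcome: 1,000 independent voters, each voting
    # for candidate A with probability 0.5. Because the votes are
    # independent, the totals cluster tightly around 50%.
    votes = sum(random.random() < 0.5 for _ in range(1000))
    return votes / 1000

def extremistan_sample():
    # Book-sales-style outcome: 1,000 buyers arrive in sequence, and
    # each picks our book with probability proportional to its sales
    # so far - every sale makes the next sale more likely.
    sales = [1, 1]  # [our book, the rest of the market]
    for _ in range(1000):
        if random.random() < sales[0] / (sales[0] + sales[1]):
            sales[0] += 1
        else:
            sales[1] += 1
    return sales[0] / (sales[0] + sales[1])

med = [mediocristan_sample() for _ in range(200)]
ext = [extremistan_sample() for _ in range(200)]

# Spread of outcomes in each world: narrow for the election,
# enormous (from flop to bestseller) for the path-dependent sales.
print("Mediocristan spread:", max(med) - min(med))
print("Extremistan spread:", max(ext) - min(ext))
```

In 200 simulated elections, no candidate ever strays far from 50%; in 200 simulated book launches, the same rules produce everything from total obscurity to market dominance.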

So there’s a mathematical basis for explaining why the systems that do so well in predicting certain outcomes will fail at all the rest. And why we’ll have to remain in the dark about the really important issues, like maybe the most pressing unknown of the day: whether Iran will “push the button”, to quote a contemporary Israeli song. And it goes without saying that, in retrospect, that might not even be the thing we need to worry about the most.

UPDATE: Sorry I was too harsh about the Hollywood Stock Exchange - “fails spectacularly” was sort of a spur-of-the-moment phrase on my part, and probably unwarranted.

UPDATE 2: Oh, damn, Taleb linked to this post! I wouldn’t have predicted that.

“Everything is Miscellaneous”

Wednesday, June 6th, 2007

My Amazon review of David Weinberger’s “Everything is Miscellaneous” is up. It was nice to read a book about meta-data, one that even mentioned the Semantic Web, though I thought it should have stuck more to its central topic.

Karl Popper, the pragmatic philosopher

Friday, April 6th, 2007

I’d never heard of Karl Popper until recently, but curiously, the last two books I read mention him at length (Wittgenstein’s Poker and Fooled by Randomness). I figure it’s a sign, unless of course I’ve been “fooled by randomness”.

Yeah, I’ve been reading a lot recently; I have more “subway time” these days.

Anyway, Popper was a mid-20th-century philosopher, one of the many notable figures who escaped from Austria and Germany to the west during the 30’s and 40’s. His basic idea was the concept of “falsifiability”. He started with the premise that theories about the world are at a natural disadvantage: it’s extremely easy to disprove a theory, since all you have to do is find one counterexample. On the other hand, it’s impossible to prove a theory. From this, Popper concluded that there are only two kinds of theories: those that have been proven to be false, and those that have not yet been proven to be false. In other words, any attempt to describe the world is bound to be false to some extent. Whatever set of theories holds sway at any given time is the one that is most useful, and that best fits the set of facts known at the time. Theories are useful, Popper said, both in describing the world and in decision-making, but they should never be mistaken for absolute “truth”. It’s a familiar concept for those of us who are involved in the world of wikis, where the current version of the truth tends to change in an orderly fashion from day to day.

This view of the progress of ideas as just moving from one fallacy to the next might come off as bleak, but there are certain advantages Popper saw in it. For one thing, he felt that it makes it easy to distinguish science from pseudo-science: true science sets itself up to be disproven, by making specific statements and predictions. Pseudo-science and dogmatic religious beliefs (Popper was especially preoccupied with Marxism) are careful to make no claim that can be disproven. (That’s not to say that Popper was anti-religious, just that he felt that science and belief should always be kept distinct.) And as with science, so with politics: Popper defined a good government as simply one that allows for its own removal without violence. In both cases, there’s an emphasis on pragmatism above all.

Babbage and the engines

Friday, March 23rd, 2007

I just finished reading “The Difference Engine” by Doron Swade. It’s a strange sort of hybrid book: the first half is a biography of Charles Babbage, who spent much of his life obsessed by the idea of making first a calculator, then what we now know as a computer (with built-in memory and even a printer), using the technology of his day: gears, axles and the like. The second half is the story of how the author, as a curator at the Science Museum in London, spent the last half of the 1980’s shepherding a project to try to get a working version of Babbage’s Difference Engine built and displayed at the museum by December 1991, which was the 200th anniversary of Babbage’s birth (anniversaries, I think, used to be a more notable thing than they are now). The Difference Engine had in fact never been fully built before, and some people throughout history had doubted whether the design would even work.
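Swade’s book is about getting the thing built in metal, but the algorithm the engine embodies fits in a few lines. The “difference” in the name refers to the method of finite differences: any polynomial can be tabulated using nothing but repeated addition, which is the one operation gears and carry levers do well. Here’s my own sketch of the idea (the code is mine, not from the book):

```python
def tabulate(coeffs, start, count):
    """Tabulate the polynomial with the given coefficients (lowest
    degree first) at start, start+1, ..., using only additions once
    the initial column of differences has been set up."""
    degree = len(coeffs) - 1
    f = lambda x: sum(c * x**i for i, c in enumerate(coeffs))
    # Seed the machine: the value at the starting point, plus the
    # successive finite differences (for a polynomial of degree n,
    # the n-th difference is a constant).
    column = [f(start + i) for i in range(degree + 1)]
    diffs = []
    while column:
        diffs.append(column[0])
        column = [b - a for a, b in zip(column, column[1:])]
    # Crank the engine: each cycle adds every difference register into
    # the one above it; the top register reads out successive values.
    out = []
    for _ in range(count):
        out.append(diffs[0])
        for i in range(degree):
            diffs[i] += diffs[i + 1]
    return out

# Euler's prime-generating polynomial x^2 + x + 41, reportedly a
# favorite of Babbage's for demonstrations:
print(tabulate([41, 1, 1], 0, 8))  # → [41, 43, 47, 53, 61, 71, 83, 97]
```

Once the difference registers are loaded, every subsequent value comes out of pure addition - no multiplication anywhere - which is exactly why the design could even be contemplated in brass and iron.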

It’s a very interesting book, impressive because its author is gifted both as a biographer and a technical manager. There are a lot of interesting, humanizing details: in his time, Babbage was nearly as well known for hosting high-society parties as for his inventions. And far from being a noble pursuer of truth, he’d every once in a while write a book or pamphlet attacking people he disliked, which went on to hurt him when he tried to get additional grants from the government (which was often). Ada Lovelace was a young woman who co-authored a paper on computing with Babbage, and whom some have regarded as “the first programmer”; Swade portrays her as an unhelpful hanger-on, who got involved with the project only so she could prove that she was the equal of her famous father (the poet Lord Byron).

It’s interesting to imagine what would have happened if Babbage’s difference engine and analytical engine (the computer-like one) had been successfully built 150 years ago. Could computing have gotten a massive head start? There’s a whole genre of science-fiction known as “steampunk” that posits that scenario (its most famous novel is also called “The Difference Engine”). But Swade’s experiences in building a machine from even one of Babbage’s simpler designs make that scenario seem far-fetched. The project took years and went through a lot of dead-ends, even with the vast improvements in machining technology that had occurred in the over-100-year gap. The problems inherent in the idea seem visible in the name itself - “analytical engine”. An engine is intended for brute-force applications, like moving things a large distance in one direction. Navigating through math and information is just too subtle a task for metal and springs.

Business, numbers, money, people

Tuesday, November 7th, 2006

My reading material on the trip was Niall Ferguson’s The Cash Nexus: Money and Power in the Modern World, 1700-2000. I really recommend it: the book is basically a solid overview of macroeconomics, which is especially important if you’ve never taken a macroeconomics course (I never have). It covers the intersection of politics, economic policy, military strength, and cultural values, and their history in different countries over the last 300 (actually, even a little more than that) years; and how different countries have reacted differently to similar crises (bankruptcy, war, etc.) as a result of these four factors.

Ferguson writes from a liberal (that’s in the original sense of the term, the one they still use in Europe) perspective, so his conclusions aren’t for everyone, even though he defends them convincingly. He comes out as a vocal defender of the British Empire and argues that the world would be well-served if America behaved more like an empire; portrays the current social safety net as on the verge of collapse; and argues that having too many voters who aren’t paying taxes can be harmful to a democracy; and there’s more where that came from. But the historical facts and graphs are pretty non-controversial. Holiday gift idea, maybe? Hey, it’s a thought.