Uses This

Dorian Taylor

Entropy schlepper, maker of toys and diversions

Who are you, and what do you do?

My name is Dorian Taylor, and my job is to clean up very specific kinds of messes. I've tried all sorts of florid and important-sounding job descriptions and concomitant titles, and at the end of the day they're not only pompous and alienating, but just plain inaccurate. I don't "design experiences"; I certainly don't "build products". I don't even really "architect information". I schlep entropy. I take the disorder I find in organizations and do my best to clean it up. In that way I'm more like a janitor than a designer, engineer, architect, strategist, management consultant, whatever.

Right now, I am particularly interested in the mess around cleaning up messes: meta-mess if you will. It has to do with the ostensible fact that the second a process touches a computer, it becomes an IT problem. In other words, we become preoccupied with whether we can solve the problem rather than what the problem even is. It becomes attractive, in organizations, to enter into arrangements that look like they clean up messes, but instead create bigger messes.

Of course, if you bleach the "technology" out of "information technology", you're left with "information", which exhibits properties that a reasonably shrewd person of any era can understand:

  • Is your model of the world both comprehensible and accurate?
  • Are you getting accurate and timely data that enables you to make decisions which lead to favourable outcomes (and avoid unfavourable ones)?
  • Do your business relationships reconcile with your model, and facilitate the collection and interpretation of information (and subsequently your ability to act on that information), or do they conflict with and/or inhibit these activities?
  • If the latter, how hard is it to get out of those relationships? (And presumably into more favourable ones?)

Knowledge is (leverage over) power, so disparities in knowledge mark disparities in power. The "tech industry" is really all about making people dependent on your own little vision of reality. I don't believe it has to be that way, though.

What hardware do you use?

I do a lot of work on ordinary photocopier paper with a BIC mechanical pencil. If I have to travel, I use a Moleskine. I have a thing I made called a "cell calendar", which is just a piece of Bristol board that represents a week's worth of cells - contiguous four-hour units of time in which the real thinking (and subsequent entropy-schlepping) gets done. When you subtract the irreducible maintenance time of sleep, food, hygiene and chores, I find you can max out at about three of these in a day. Emphasis on max out. When you're flying solo, the thing you consider to be your "actual job" only takes up a sliver of your waking life. With everything else going on, I'm lucky to get one of these in a day, and I might do a three-cell day only a handful of times a year.

Or by "hardware" did you mean "computers"?

I suppose it befits a self-described "entropy janitor" to use reclaimed e-waste. I have a hand-me-down MacBook of ambiguous vintage which I use as a front end to an old Dell I picked up from Free Geek for fifty bucks. That lives in a closet and runs all my work stuff, and I can connect to it from wherever, with whatever device I have on hand. My phone is some Android thing I bought off my little brother that's probably approaching three years old. About the only object containing a CPU I've bought new in the last eight years is the second-crappiest possible tablet I could buy, and that was only because I wanted a multi-touch control surface for a tool I was working on.

Computers have been "fast enough" to be serviceable for most contemporary needs for almost two decades. Three decades if you count pre-Internet uses. What's more (Moore?) is that the laws of physics have finally put a hard upper bound on megahertz, so now chip manufacturers are just stacking on cores. Okay, so let's say you've got the hulkingest monster twelve-core 3-gigahertz Xeon Mac Pro money can buy. Guaranteed eleven of those things are going to be sitting idle, and the twelfth is going to spike here and there when you apply a Photoshop filter or watch a llama video on YouTube. That's the CPU going NOP, NOP, NOP, times eleven and a half, three billion times a second.
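
For what it's worth, you can watch this happen on your own machine. Here's a minimal sketch, assuming the third-party psutil package is installed, that samples per-core utilization for a few seconds; on a machine doing ordinary desktop work, most cores will hover near zero:

    # Rough check of how idle a multi-core machine typically is.
    # Assumes the third-party psutil package (pip install psutil).
    import psutil

    for _ in range(5):
        # Percent utilization per logical core over a one-second window.
        per_core = psutil.cpu_percent(interval=1, percpu=True)
        idle = sum(1 for pct in per_core if pct < 5.0)
        print(f"{idle}/{len(per_core)} cores effectively idle: {per_core}")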

If you're doing 3D or video compositing (and I suppose now, VR and/or AI) then I can see caring about hardware. The last time I did 3D was as a teenager on a 486, and I remember drooling over those $50,000 SGI Octanes that could render mesh in real time. Now that kind of thing is an OS effect - on phones no less - which is really what most of those CPU cycles are needed for these days. Oh, and games, I suppose, but I haven't played one of those in a while.

All that said, I suppose one of these days soon I'll get off my ass and buy some slick little MacBook Air in case I have to pull it out in front of a client - for the exact same reason a real estate agent drives a Mercedes.

And what software?

I suppose I should start with the OS: The computer I actually type into is always a Mac because it has the finish quality you can only get with commercial software, but is still properly POSIX and won't fuss over open-source stuff. The development server is always some Debian variant, because their stringent policy tends to produce a sane working environment, including a vastly superior packaging system. In my case the variant is Ubuntu, which is slightly less crunchy-granola than the original. Any router or firewall-like thing is always OpenBSD, because reasons. I've been running this hardware/OS configuration for about ten years now, and before then I would dual-boot Windows and Debian.

On top of that, there aren't really many "apps" I use. There was a period in my career in which I spent a lot of time with the Photoshop/Illustrator/InDesign trifecta. I barely ever touch those these days. Most apps I use are commodity front-ends to standard protocols and data formats: Mail, iCal, etc. Consider my browser: I use Firefox, because I put the effort into tarting it up with add-ons, but in a pinch just about any other browser will do. I'm intentionally cultivating a non-committal stance toward apps. That's not to say the function of the app isn't important - it's often essential. It's the vendor I have no commitment to. This is deliberate.

There's probably one exception, and that's Emacs, the venerable text editor, wherein I actually do the work that pays me. Switching away from that would be a nightmare.

Beyond my menagerie of operating systems and relative paucity of apps, I actually write a lot of my own tools. I could list the whole stack but that isn't very interesting, so I'll just summarize the languages: Perl is still my daily driver; I've been writing in it since 1997. That said, I am not averse to Python or Ruby for doing the same kind of, let's call it, "utility coding" work. I am also finding myself spending more and more time with R. For more organized "systems", I've recently been looking at Clojure. It's slick as hell. I expect to be fully weaponized in it by the end of the year.

Like most programming polyglots, I have a workable proficiency in about eight other languages, each waiting for its opportunity to be put to meaningful use. I also have preferences: I generally eschew anything Microsoft (like Visual Basic), anything that relies too heavily on JavaScript, and you could not pay me enough to touch PHP.

What would be your dream setup?

My "dream setup" doesn't exist. It could have existed - it did exist, in experimental form, almost fifty years ago. There's no reason in principle why it couldn't exist, but it seems to chafe against prevailing cultural values.

There's a philosophical debate that's been going on almost since the beginning, about what the role of the computer in society ought to be. It boils down to a question like, "Are computers supposed to do our thinking for us, or are they supposed to be dumb tools that help us think?"

It shouldn't really surprise anybody which is the majority position and which is the minority. There's something romantic - nay, eschatological - about artificial intelligence. The first thing people tried to get computers to do, after aiming nuclear missiles, was think. They're still trying. And they're sort of getting somewhere, and everybody oohs and aahs at the latest self-driving car or face-recognizing camera or sassy chatbot, but if you think about it, these represent only the most rudimentary understanding of "intelligence", artificial or otherwise.

Meanwhile, the property of computers that people have been harnessing to construct such baroque artifices for conducting elementary cognitive tasks has been available for direct use in the augmentation of complex human cognitive tasks, almost ever since Alan Turing cooked up the idea for his ticker tape machine.

What I mean is this: We human beings reason over conceptual entities, and the relations that bind them. When these structures get too big to hold in our heads all at once, we outsource them to a representational medium, such as paper. Then we can take our time to comprehend them. However, a two-dimensional plane such as a piece of paper is still extremely limited in its capacity for coherently representing a complex conceptual structure, unless you resort to more and more esoteric mathematical representations. Even then, you're still screwed if you have a lot of data.
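
As a toy illustration of what I mean by entities and relations - the example and its labels are invented, not anything from a real project - here's a sketch of a conceptual structure held as something a computer can traverse long after a sheet of paper would have given out:

    # Toy concept graph: entities and the relations that bind them.
    # Purely illustrative; the entities and relations are made up.
    from collections import defaultdict, deque

    relations = defaultdict(list)

    def relate(subject, predicate, obj):
        relations[subject].append((predicate, obj))

    relate("supplier", "ships-to", "warehouse")
    relate("warehouse", "feeds", "storefront")
    relate("storefront", "reports-to", "ledger")
    relate("ledger", "informs", "decision")

    def reachable(start):
        """Every entity a given entity ultimately touches, by any chain of relations."""
        seen, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            for _, nxt in relations[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    print(reachable("supplier"))  # {'warehouse', 'storefront', 'ledger', 'decision'}

Scale the same structure up to a few hundred thousand entities and the paper version is hopeless; the computer barely notices.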

Now: we can think of a Turing machine as a sort of mutant cousin to the film projector, and both as logical successors to the zoetrope. The zoetrope, of course, is the toy where you draw little pictures at set intervals along a strip of paper, put the strip inside the zoetrope, which is shaped like a large ring, then spin it on its axis and peek through the slits near the top edge to see the pictures move. A zoetrope spins faster than our eyes can keep up, and can thus effectively translate multiple slices of (two-dimensional) space into the dimension of time.

So this is what a computer does that's truly novel: A Turing machine (and by extension any computer based on the design, which currently is all of them) is doing the exact same thing, save for the fact that it uses symbols instead of images. It also has the feature that every "frame" has an address. This leads to a trick: you can set the meaning of one frame to be the address of another. The net effect is, unlike a zoetrope which just runs in a loop, or a film projector which runs front to back, you have something which can jump around from frame to frame in either direction, reuse segments of "film", and even rewrite the contents in situ. This is all while leveraging the same persistence-of-vision effect, essentially amortizing complexity over very small slices of time.
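
A crude sketch of that trick follows, with an invented instruction set rather than any real machine's: a strip of addressable "frames", one of which gets rewritten in situ while another tells the machine where to jump:

    # Toy addressable "film strip": every frame has an address, and a frame's
    # contents can name another frame to jump to. The instruction set is
    # invented, purely to illustrate jumping around and rewriting in situ.
    tape = [
        ("set", 5, 0),             # frame 0: write 0 into frame 5
        ("add", 5, 1),             # frame 1: add 1 to frame 5
        ("jump_if_lt", 5, 3, 1),   # frame 2: if frame 5 < 3, jump back to frame 1
        ("halt",),                 # frame 3: stop
        ("unused",),               # frame 4: never visited
        None,                      # frame 5: scratch data, rewritten in situ
    ]

    pc = 0  # the frame we're currently "looking at"
    while True:
        frame = tape[pc]
        op = frame[0]
        if op == "set":
            tape[frame[1]] = frame[2]
            pc += 1
        elif op == "add":
            tape[frame[1]] += frame[2]
            pc += 1
        elif op == "jump_if_lt":
            _, addr, limit, target = frame
            pc = target if tape[addr] < limit else pc + 1
        elif op == "halt":
            break

    print(tape[5])  # 3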

What that means is that you can represent conceptual structures which are much, much more complex than you ever could on a piece of paper, and you can manipulate those structures in milliseconds in ways that would take months or even years otherwise. And what that means is you could solve really complex problems - even ones that are too wacky to fob off to AI. We don't see people taking direct advantage of this capability very often though, unless it's for the purpose of making software, which, ironically, is either contorted one way, to pretend to be some technology that existed before computers, or another way, to pretend to be an intelligent agent. This, to me, has the air of obscurantist, dissimulative hocus-pocus.

I suppose that's really the issue for me: sovereignty. Every artifact embodies, in some way, the values of its creator. A shrinkwrapped app is basically a recording of its author saying "I want you to think about X the way I do. I want you to work the way I think you should." Even the developer-grade frameworks and languages I use to make my own software are opinionated, but at least I have the final say on which ones I use, and how the overall system behaves. Go a teeny bit farther in the AI direction, however, and the message is something like "we're not exactly sure how it works, but you should do what it says anyway." If you're going to do things that way, you could just as easily look for messages in chicken guts or something.

I understand that we live in an increasingly interdependent world. I'm okay with interdependence. What I'm not okay with is one-way dependence, on particular people, business entities, robots, whatever. I'm not espousing some form of digital survivalism, I just want to be able to pick who I deal with, and if it doesn't work out, I want to be able to pick somebody else - all the way up and down the stack. Proximately what that means is that I can get my data out, and if I can't find a replacement for some particular operation, I can make one. Ultimately what it means, then, is that I understand my "dream system" as well as I need to in order to be sovereign over it.

App/platform vendors don't want sovereigns, of course. Their entire business models are designed around creating dependents, and then it's wall-to-wall ads and behavioural data sold out the back alley, all day long. I don't view that as a conspiracy though, it's more like "econophysics". There just hasn't been a strong enough alternative yet.