What is a computer?

When we think about computers, we think of laptops and smartphones, devices of all sizes that we use to process information, to search the web, to read documents like the one you’re reading now. But what is it that makes a computer a computer? What is it that computers do, deep down? What unites them? How do they differ from other machines, like flashlights, refrigerators, or calculators?

To answer these questions, I’m going to share with you a very unusual computer that I found recently hidden in plain sight. It doesn’t look anything like the computers we’re used to. There’s no mouse or screen. It’s not connected to the internet. It requires a lot of people to operate.

We’ll get into all that—that weird computer, the nature of computation. But first, we need to take a short diversion a couple of centuries into the past to answer a seemingly unrelated question about Pride and Prejudice, Jane Austen’s great book.

Pride Nand Prejudice

Q: What’s the best Pride and Prejudice adaptation?

A: The one from 1995 with Jennifer Ehle and Colin Firth. Obviously.

Just watch a few seconds of this elegant dance scene.

The candlelight, the music, the general gaiety of a country ball… it’s magical! I love how the dance enables Ehle’s Lizzie to literally see Firth’s Darcy from all angles as her words probe his personality. I love the courtship of the dance, the moments of physical intimacy punctuated by brief separations. And stepping back, something struck me about the visual structure, the logical procession of dancers up and down the line.

For my partner, Anne, Pride and Prejudice is a religion. As with all religious zealots, appreciating scripture is only part of her faith. It wasn’t enough to appreciate the dance. She needed to learn it. It was therefore a moral obligation to attend an eighteenth-century baroque dance class held a few weekends ago in Brooklyn. With twenty other curious and playful students, we learned the basics of the contra dance, which you often see performed in Jane Austen film adaptations and other period dramas.

Here’s how it works. First, the dancers divide into couples. Each half of a couple joins one of two inward-facing parallel lines, standing across from their partner. In the video, you can see one line consists of men and the other of women. The dance requires a binary distinction, but let’s leave behind those outdated notions of gender. For our purposes, participants can be represented by any pair: by a suit or a dress, by a square or a circle, by a one or a zero.

Here’s a diagram of what that looks like with three couples, represented as squares and circles. [A symbolic representation of a contra dance formation: one vertical line of 3 blue circles on the left, and one vertical line of 3 red squares on the right.]

Throughout the dance, the dancers engage in movements that either leave their original positions intact or alter them in the following two ways:

One, the partners in a pair can change position. [In the circles-and-squares representation of dancers, one of the square/circle couples switches sides.]

Two, a pair can switch positions with an adjacent pair. [In the circles-and-squares representation of dancers, one of the pairs switches positions with another pair.]

Some clever behavior emerges from these simple mechanisms. Notice how, in the video, the couple on the far left of the screen does not engage in the dance. Rather, they’re turned, watching the rest of the dancers. The choreography specifies two kinds of pairs. The first kind knows the dance, and so leads the movements. The second kind follows suit. As the dance progresses, the seconds move up the line, learning the dance by following the firsts. Eventually, they reach the top of the line, where they sit out a round to watch the other dancers and learn how to lead. In the next dance, they rejoin, but take the first role this time.

Having gotten first-hand experience in the class, I came away wondering: what other clever behaviors might be lurking in the dance? What else was it capable of?

Turing Machines

In 1936, the 24-year-old Alan Turing wouldn’t have known he’d become the father of modern computing. At the time, he was a graduate student in mathematics, and he was preoccupied with an open question about the nature of his field of study. Namely, was there a formal procedure that could tell you whether or not there was a proof for a given mathematical statement? This was such an open question that no one had a clear idea of what “formal procedure” actually meant.

A mathematical statement, on the other hand, is easy enough to define. That can be a statement as simple as “2+2=4” or as complex as “every even number greater than two can be expressed as the sum of two prime numbers”. We can also include statements that are false, like “2+2=5”. For such false statements, we would expect this elusive “formal procedure” in Turing’s problem to tell us that there was no proof.

But before Turing could approach the problem of whether such a formal procedure existed, he first had to define what that even meant. It was through discovering that definition that he developed the idea we now call a computer.

In his seminal paper, On Computable Numbers, Turing considers what it means for a person to perform a calculation. Let’s say we just finished eating at a restaurant. The bill comes, and it’s time to determine the tip. What do we do? To get a 20% tip, I usually take the initial cost, move the decimal point one place to the left, then double the result. I write the tip’s value below the food’s, then add them together, place by place, carrying the ones when necessary.

With the decimal-moving and the one-carrying, there’s a set of deeper rules at work. But to calculate the tip, I don’t necessarily need to know them. I just need to know how to get from a bill to setting down socially acceptable pen marks on a receipt.

Turing’s insight was that these rules were so automatic, so mindless that you didn’t actually need a person, a brain, or any organic matter to carry them out. You could mimic these mental patterns in the operations of a hypothetical machine.

These machines, which we now call “Turing Machines”, are very simple. Deliberately so. They’re meant to imitate each atomic mental step we take when following a calculation. Accordingly, they consist of just a few parts: a long one-dimensional tape, divided into single-character cells, along which a scanner unit moves back and forth, reading and periodically writing ones and zeros. The machine writes to and moves along the tape according to preset rules. When it reaches a situation for which it has no rule, the machine stops. What’s left on the tape is the result of the calculation.

[A drawing of a Turing Machine.]

This might feel a little abstract, so let’s describe a Turing Machine that can perform a single, simple task. When I figure out the tip at a restaurant, at one point in the process I double a number. How can we make a number-doubling Turing Machine? First, we need a way to put our input onto the tape. If we’re trying to double the number 14, we might be tempted to input a one and a four in adjacent cells, then feed it to the machine. But remember that our Turing Machine only knows how to work with ones and zeros. We need to “encode” our numbers into a format that uses only these two symbols. Binary, or base 2, is the natural choice, since it writes every number with just ones and zeros.

Now, there’s a neat trick where you can double a number in binary by adding a zero to the end of it. When you add a zero to 14’s binary representation, 1110, you get 11100, or 28 in base ten. You might notice you can use the same trick in base ten when you multiply a number by ten. The principle at work is the same. You can even see that two in binary is written down as 10.
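The trick is easy to verify with a couple of lines of Python, using the built-in base conversions:

```python
# Double a number in binary by appending a zero to its representation.
n = 14
binary = format(n, "b")            # "1110"
doubled = binary + "0"             # "11100"
print(binary, "->", doubled, "=", int(doubled, 2))  # 1110 -> 11100 = 28
```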

To create a Turing Machine that doubles a number, we just need to make a machine that adds a zero to the end of the binary number inputted on the tape. Its rules would be simple: move the tape until you get to the end of the input, write a zero, then stop.
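Here’s a minimal sketch of that machine in Python. The underscore standing in for a blank tape cell is my own convention, a small addition to the pure ones-and-zeros alphabet:

```python
# A number-doubling "Turing Machine": scan right to the end of the input,
# write a single 0, then stop. "_" marks a blank (unwritten) cell.
def double_on_tape(tape):
    tape, head = list(tape), 0
    while tape[head] != "_":    # rule: keep moving right over the input
        head += 1
    tape[head] = "0"            # rule: at the first blank, write a 0 and stop
    return "".join(tape).rstrip("_")

print(double_on_tape("1110___"))  # -> 11100, i.e. 28 in binary
```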

This Turing Machine isn’t all that useful. It can only perform one function. If I’m trying to figure out what I’m supposed to pay at the end of a meal, it can only help me with one step in that process. You might be able to create Turing Machines for other parts of the tip calculation. You might even be able to make one compound machine to compute all the steps. But that device would still only have a single function: to calculate a tip. There’s still a limit to what it can do. What if I wanted to calculate an 18% tip? What if I wanted to split the check? Or what if I wanted to do something totally unrelated to restaurant math, or to math at all, like tell if a word is spelled correctly? Would we keep needing new machines for each new task?

Empirically, no. I can use my iPhone to do almost anything, to calculate my check, to write an email, to scroll mindlessly through photos of cute dogs. And, more than that, I can always download new software to it and run that software. Most machines—flashlights, refrigerators, handheld calculators—perform a predefined set of functions. The number of things a computer can do is bounded only by the imagination of its programmers.

That jump, from specific to unbounded functionality, lives at the heart of Turing’s second great insight. He describes a special Turing Machine—a universal machine—that can simulate any other possible Turing Machine. The trick is to encode a description of the machine you want to emulate into the universal machine’s tape. The universal machine would then follow those instructions on the input you would have otherwise given to the specific machine.

If doubling a number can be reduced to a mechanical, mindless task that a machine can carry out, then so can the task of following instructions for how to double a number. Or following instructions for following any procedure. Turing had his definition. A formal procedure is anything you could do with one of his machines. He didn’t know that he had simultaneously defined what we now call computers. However elegant their design, however imaginative their software, however intricate their hardware, modern computers are at their core no more capable than my clumsy cartoon of a scanner and a long tape.
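To make that jump concrete, here’s a toy version of universality in Python. The single fixed function run() plays the role of the universal machine; the rule-table format, my own invention for this sketch, plays the role of the machine description encoded on the tape:

```python
# One fixed simulator that can run ANY machine supplied as data.
# Rules map (state, symbol read) -> (symbol to write, move, next state).
def run(rules, tape, state="start"):
    cells, head = dict(enumerate(tape)), 0
    while (state, cells.get(head, "_")) in rules:
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# The number-doubler from before, now expressed as data rather than a machine:
doubler = {
    ("start", "1"): ("1", "R", "start"),  # skip over the input...
    ("start", "0"): ("0", "R", "start"),
    ("start", "_"): ("0", "R", "done"),   # ...then append a 0 and halt
}
print(run(doubler, "1110"))  # -> 11100
```

Swap in a different rule table and the same run() function computes something else entirely—that, in miniature, is Turing’s universal machine.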

The morning after our dance lesson, my legs happily sore, my mind dancing with baroque violins, I suddenly saw that the essential components of a Turing Machine—a tape and the means to alter ones and zeros written on it—were present in the contra dance. You can model a Turing Machine in the contra dance, and that means it’s theoretically as powerful as any computer that has been or ever could be invented.

The Turing Machine Contra Dance

Here’s a sketch of how it works:

  • The Turing Machine’s “tape” is represented by the line of dancers.
  • Our language has two symbols, “One” and “Zero”. “One” is represented by a pair in which the suit-wearing dancer is on the right side; “Zero” is when the dress-wearing dancer is on the right side.
  • The “scanner” is represented by one pair of dancers that moves up and down the line.
  • The “scanner” couple can “write” a one or a zero by dancing with an adjacent couple and leaving them in the opposite configuration from the one they started in. Pairs not engaging with the “scanner” couple dance figures that preserve their positions.

Let’s go back to our previous problem of multiplying a number by two. Recall that we encode the number we want to double in binary. There’s one more wrinkle we need to iron out. Right now, since our tape only consists of ones and zeros, we don’t have a way of telling when an inputted number ends, so we can’t tell the difference between 1, 10, or 10000000. To get around that, we’ll change the way we encode numbers slightly. After we translate a number into binary, we’ll then add a zero before each digit. Then, we can add two consecutive ones to indicate when we’re done with a number. 14, or 1110, would then become 0 1 0 1 0 1 0 0 1 1. When the scanner sees two ones in a row, it will know it has come to the end of the number.
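Here’s that encoding scheme as a short helper function (the name encode is mine, for illustration):

```python
# Encode a number for the dance: convert to binary, put a 0 before each
# digit, then append two 1s to mark the end of the number.
def encode(n):
    digits = format(n, "b")                     # 14 -> "1110"
    return " ".join("0 " + d for d in digits) + " 1 1"

print(encode(14))  # -> 0 1 0 1 0 1 0 0 1 1
```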

With our new encoding scheme, we represent the ones and zeros with suits on the right or left side of the line. At the top of the column of dancers, we’ll place the “scanner” pair.

We can ask the scanner to follow this algorithm to multiply the input by 2:

  • So long as a 0 appears in an even position (counting positions from zero), engage in a dance to move two positions down the line.
  • When the scanner finds a 1 in an even position (indicating the start of the two “end of input” ones), engage in a dance to change that “1” to a “0”. Repeat for the next “1”.
  • Finally, switch the next two dancing pairs to “1”s to indicate an “end of output”.
  • Courteously bow to your partner, since the dance—and calculation—is complete.

Starting from our input of 14 (0 1 0 1 0 1 0 0 1 1), if we follow this procedure, we end up with a line of suits and dresses that we can interpret as 0 1 0 1 0 1 0 0 0 0 1 1, which we can translate to 28!
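The procedure above can be sketched in Python, with the line of dancers as a list of bits. One assumption in this sketch: the two pairs below the end-of-input marker are already standing in the line, in the “zero” configuration, waiting to be flipped:

```python
# The scanner couple's doubling algorithm. Positions count from zero, so
# even positions hold the padding 0s and the digits sit between them.
def contra_double(tape):
    tape, pos = list(tape), 0
    while tape[pos] == 0:              # 0 in an even position: two steps down
        pos += 2
    tape[pos] = tape[pos + 1] = 0      # flip the end-of-input "1 1" to "0 0"
    tape[pos + 2] = tape[pos + 3] = 1  # flip the next two pairs to "1 1"
    return tape

line = [0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0]   # 14, plus two spare pairs
print(contra_double(line))  # -> [0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1]
```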

Now, this is a very specific algorithm for one kind of calculation. But it turns out that, if we’re clever and have a LOT of dancers, we can create a rule-based dance for following whatever rules are encoded in the tape’s input! That is to say, we could choreograph a Universal Turing Machine.

With enough time, dancers, stamina, and willingness to buck the aesthetic standards of a centuries-old art form, you can compute anything with a contra dance that you can with a computer: directions between two cities, interactive games (you’d need a lot of patience), even the latest advancements in artificial intelligence. You just couldn’t tell if any given dance would ever end—but that’s another story.

The iPhone X’s base model has 64GB of storage. That’s 549,755,813,888 ones and zeros stored in a small, metallic rectangle. To hold that much information in a contra dance computer, we would need twice that number of dancers, or over a trillion people. There are only 7.5 billion humans alive right now. Only 108 billion human beings have ever even been born. To implement a contra dance as capacious as an iPhone, we would need a multigenerational initiative of unparalleled international coordination. The dance line alone, assuming we’d need 5 feet for each pair, would span over 520,602,096 miles, or about the distance from the Sun to Jupiter. Not exactly feasible. But the same thing, in principle, as the small computers we carry around in our pockets.
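A quick back-of-the-envelope check of those numbers, assuming two dancers per stored bit and five feet of line per pair:

```python
bits = 64 * 2**30 * 8        # 64 GB expressed in bits
dancers = bits * 2           # two dancers per bit
miles = bits * 5 // 5280     # five feet of line per pair
print(f"{bits:,} bits, {dancers:,} dancers, {miles:,} miles")
# -> 549,755,813,888 bits, 1,099,511,627,776 dancers, 520,602,096 miles
```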


Many thanks to Alasdair Wilkins for guiding me through several drafts of this post.