You Are Already Living Inside a Computer

Suddenly, everything is a computer. Phones, of course, and televisions. Also toasters and door locks, baby monitors and juicers, doorbells and gas grills. Even faucets. Even garden hoses. Even fidget spinners. Supposedly “smart” gadgets are everywhere, spreading the gospel of computation to everyday objects.

It’s enough to make the mundane seem new—for a time anyway. But quickly, doubts arise. Nobody really needs smartphone-operated bike locks or propane tanks. And they certainly don’t need gadgets that are less trustworthy than the “dumb” ones they replace, a sin many smart devices commit. But people do seem to want them—and in increasing numbers. There are now billions of connected devices, representing a market that might reach $250 billion in value by 2020.

Why? One answer is that consumers buy what is on offer, and manufacturers are eager to turn their dumb devices smart. Doing so allows them more revenue, more control, and more opportunity for planned obsolescence. It also creates a secondary market for data collected by means of these devices. Roomba, for example, hopes to deduce floor plans from the movement of its robotic home vacuums so that it can sell them as business intelligence.

But market coercion isn’t a sufficient explanation. Increasingly, the computational aspects of ordinary things have become goals unto themselves rather than just a means to an end. As computing spreads from desktops and back offices to pockets, cameras, cars, and door locks, the affection people have for computers transfers onto other, even more ordinary objects. And the more people love using computers for everything, the more life feels incomplete unless it takes place inside them.

* * *

A while back, I wrote about a device called GasWatch, a propane-tank scale that connects to a smartphone app. It promises to avert the threat of cookouts ruined by depleted gas tanks.

Devices like this one used to strike me as ridiculous, and I marveled at how little their creators and customers appeared to notice, or care. Why use a computer to keep tabs on propane levels when a cheap gauge would suffice?

But now that internet-connected devices and services are increasingly the norm, ridicule seems toothless. Connected toasters promise to help people “toast smarter.” Smartphone-connected bike locks vow to “Eliminate the hassle and frustration of lost keys and forgotten combinations,” at the low price of just $149.99. There’s Nest, the smart thermostat made by the former designer of the iPod and later bought by Google for $3.2 billion. The company also makes home security cameras, which connect to the network to transmit video to their owners’ smartphones. Once self-contained, gizmos like baby monitors now boast internet access as an essential benefit.

The trend has spread faster than I expected. Several years ago, a stylish hotel I stayed at boasted that its keycards would soon be made obsolete by smartphones. Today, even the most humdrum Hampton Inn room can be opened with Hilton’s app. Home versions are available, too. One even keeps analytics on how long doors have been locked—data I didn’t realize I might ever need.

These devices pose numerous problems. Cost is one. Like a cheap propane gauge, a traditional bike lock is a commodity. It can be had for $10 to $15, a tenth of the price of Nokē’s connected version. Security and privacy are others. The CIA was rumored to have a back door into Samsung TVs for spying. Disturbed people have been caught speaking to children over hacked baby monitors. A botnet commandeered thousands of poorly secured internet-of-things devices to launch a massive distributed denial-of-service attack against the domain-name system.

Reliability plagues internet-connected gadgets, too. When the network is down, or the app’s service isn’t reachable, or some other software behavior gets in the way, the products often cease to function properly—or at all.

Take doorbells. An ordinary doorbell closes a circuit that activates an electromagnet, which moves a piston to sound a bell. A smart doorbell called Ring replaces the button with a box containing a motion sensor and camera. Nice idea. But according to some users, Ring sometimes fails to sound the bell, or does so after a substantial delay, or even absent any visitor, like a poltergeist. This sort of thing is so common that there’s a popular Twitter account, Internet of Shit, which catalogs connected gadgets’ shortcomings.

As the technology critic Nicholas Carr recently wisecracked, these are not the robots we were promised. Flying cars, robotic homes, and faster-than-light travel still haven’t arrived. Meanwhile, newer dreams of what’s to come predict that humans and machines might meld, either through biohacking or simulated consciousness. That future also feels very far away—and perhaps impossible. Its remoteness might lessen the fear of an AI apocalypse, but it also obscures a certain truth about machines’ role in humankind’s destiny: Computers already are predominant, human life already plays out mostly within them, and people are satisfied with the results.

* * *

The chasm between the ordinary and extraordinary uses of computers started almost 70 years ago, when Alan Turing proposed a gimmick that accidentally helped found the field of artificial intelligence. Turing guessed that machines would become most compelling when they became convincing companions, which is essentially what today’s smartphones (and smart toasters) do. But computer scientists missed the point by contorting Turing’s thought experiment into a challenge to simulate or replace the human mind.

In his 1950 paper, Turing described a party game, which he called the imitation game. Two people, a man and a woman, would go behind closed doors, and another person outside would ask questions in an attempt to guess which was which. Turing then imagined a version in which one of the players behind the doors is a human and the other a machine, like a computer. The computer passes the test if the human interlocutor can’t tell which is which. As it became institutionalized, the Turing test, as it is known, has come to focus on computer characters, the precursors of the chatbots now popular on Twitter and Facebook Messenger. There’s even an annual competition for them. Some still cite the test as a legitimate way to validate machine intelligence.

But Turing never claimed that machines could think, let alone that they might equal the human mind. Rather, he surmised that machines might be able to exhibit convincing behavior. For Turing, that meant a machine’s ability to pass as something else. As computer science progressed, “passing” the Turing test came to imply success, as if on a licensure exam, rather than accurately portraying a role.

That misinterpretation might have marked the end of Turing’s vision of computers as convincing machines. But he also baked persuasion into the design of computer hardware itself. In 1936, Turing proposed a conceptual machine that manipulates symbols on a strip of tape according to a finite series of rules. The machine positions a head that can read and write symbols on discrete cells of the tape. Each symbol, together with the machine’s current state, determines an instruction, like writing or erasing, which the machine executes before moving to another cell on the tape.
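To make the mechanism concrete, here is a minimal sketch of such a machine in Python. The rule table, state names, and the bit-flipping task are invented for illustration, not anything Turing specified; only the mechanism is his: read the symbol under the head, consult a rule, write, move, repeat.

```python
# A minimal Turing-machine sketch. The rule table maps
# (state, symbol) -> (symbol to write, head move, next state).

def run(tape, rules, state="start", head=0, blank="_"):
    cells = dict(enumerate(tape))         # sparse tape: cell index -> symbol
    while state != "halt":
        symbol = cells.get(head, blank)   # read the cell under the head
        write, move, state = rules[(state, symbol)]
        cells[head] = write               # write a symbol in place
        head += 1 if move == "R" else -1  # move the head one cell
    return "".join(cells[i] for i in sorted(cells))

# An invented example program: flip every bit, halting at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110", flip))  # prints 01001_
```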

The design, known as the universal Turing machine, became an influential model for computer processing. After a series of revisions by John von Neumann and others, it evolved into the stored-program computer, one that keeps its program instructions as well as its data in memory.

In the history of computing, the Turing machine is usually considered an innovation independent from the Turing test. But they’re connected. General computation entails a machine’s ability to simulate any Turing machine (computer scientists call this feat Turing completeness). A Turing machine, and therefore a computer, is a machine that pretends to be another machine.
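The toy sketch above already hints at this. The `run` function is one fixed machine, but its program is just data: hand it a different rule table and it behaves as a different machine entirely. The erasing table below is, again, an invented example rather than a full universal machine, but it shows the same pretense at work.

```python
# The same fixed machine, given a different program-as-data:
# now it erases the tape instead of flipping it.
erase = {
    ("start", "0"): ("_", "R", "start"),
    ("start", "1"): ("_", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110", erase))  # prints ______
```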
