Centaur Style: Clive Thompson on Smarter Than You Think

Many of us are certain we know what the advent of smartphones, social media and ubiquitous computing has done to us.  The corrosive effect of new technology on society is a perennial crisis, one that generates op-eds and outrage over everything from digitally spawned breaches of etiquette to broad-based interrogations of the extent to which computing is rewiring our brains away from deep thought.  The only thing that generates more reliable Internet traffic than a video of a baby panda and a manatee playing together is a fresh article calling attention to a shocking new low in tech-oriented behavior.

 

Clive Thompson, a contributing editor at Wired magazine and longtime monitor of the digital world's reach into our lives, started out from much the same presumption -- that growing dependence on technology must be having a straightforwardly bad effect on our abilities to think for ourselves. But two decades of reporting on how people actually use technology --  through conversations with scientists, chess players, activists, designers, writers and ordinary people  -- have led him to a very different and more hopeful conclusion.  His new book, Smarter Than You Think, documents our entanglement with new technology and finds that as high-powered computing has permeated much of the developed world, a kind of co-evolution is taking place, in which both humans and the tools we use to communicate and calculate are changing, each in response to the other.  The result, he argues, is an explosion of written language, new opportunities for creativity and deep analysis of problems small and large.

 

I spoke with Thompson recently via the old-fashioned method of email about his book, the questions his findings raise, and what -- even given his book's optimism -- we ought to be cautious about as the Age of the "Digital Centaur" dawns.  An edited transcript of our conversation follows. -- Bill Tipper

 

The Barnes & Noble Review: You begin Smarter Than You Think with the concept of the dawn of the "centaur" -- your term for the computer-assisted human brain -- and tell the story of how, rather than turning chess over to ever-more-competent computers, the chess world is excited by the notion of two computer-aided grandmasters playing one another. Later, you talk about the notion that IBM's Jeopardy-champion computer, Watson, could become a sort of genius physician's assistant. Will we have to undergo some sort of great retraining to take up the centaur life?

 

Clive Thompson: Yes, I do think training is necessary to really use our new cognitive tools well. That's already true for the ones we use every day! Search engines are basically massive A.I. machines using statistical analysis to help us find documents. They're very powerful -- particularly when wedded, "centaur" style, to our unique human ability to make meaning of things. But they have a whole suite of assumptions built into their A.I. For example, they rank each web site partly on how many other sites link to it, which is in one sense a useful social signal (if other people have found this useful, you might too), but it can also descend into a popularity contest -- or get hijacked by search-engine-optimization strategies. The more you know about how a search engine works, the more you're able to use it intelligently -- and that means training.
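[A purely illustrative aside: the link-counting signal Thompson describes can be sketched in a few lines of Python. The link graph and domain names below are made up, and this is not how any real search engine is implemented -- the sketch only shows why a ranking built on inbound links rewards popularity and can be gamed by manufacturing links.]

# Toy illustration: rank pages by how many other pages link to them.
# Hypothetical data; not any real search engine's ranking algorithm.
from collections import Counter

links = {                                # each page -> the pages it links to
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example"],
    "d.example": ["c.example", "b.example"],
}

# Count inbound links for every page that appears as a link target.
inbound = Counter(target for targets in links.values() for target in targets)

# Pages with more inbound links float to the top -- which is exactly why
# the signal can be hijacked by anyone willing to manufacture links.
for page, count in inbound.most_common():
    print(page, count)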

 

Where's that training going to happen? Adults are, at the moment, on their own -- we have to read up on this stuff if we want to be intelligent and critical users of these tools. For kids, you can see the shift happening in schools, which is heartening. Smart teachers -- and most of all librarians -- are already working on making sure kids learn these skills.

 

This sort of training has always been necessary, frankly. A research library is a powerful tool for thinking -- but only if you've been taught how to use it. The same goes for a slide rule. And historically, only a small number of folks ever bothered to acquire these skills. I went to the University of Toronto, which has a fabulous research library -- but out of thousands of incoming students every year, barely a handful ever attended the training sessions on how to use the library. Professors would complain endlessly about students not using secondary sources, never consulting journals, not even being aware journals existed.

 

So some of these challenges we face today aren't particularly new.

 

BNR:  One person who's become a "centaur" rather sooner than most is "wearable computing" pioneer Thad Starner, the computer science professor who has helped Google develop its famous Google Glass specs, and who has for years now carried a specially built computer that connects to the Internet and projects the products of his searches on a screen in front of one eye. He suggests that in order to use such devices safely and politely we need to "multiplex" -- use technology to supplement what we're doing in the moment -- not "multitask." Is that a distinction people are ready to take in?

 

CT: Sure -- in fact, I think many people are already aware that they shouldn't let their devices come between them and the people they're with.

 

There's been a flood of handwringing op-eds lately about how glassy-eyed mobile-phone zombies are ignoring each other at the restaurant instead of talking to one another. I think these pundits are somewhat overstating the frequency of this behavior, frankly. Very similar alarms were raised about the wave of supposedly society-ending isolation that would be wreaked by previous newfangled media -- like the telephone in the late 19th century, and the Walkman in the '80s. We didn't suffer a social apocalypse then, and I don't think we're going to suffer one now.

 

That said, I actually think the op-ed handwringing is useful in its own way. It's part of how a society creates social codes around new technologies. When mobile phones inched into the mainstream in the '90s, people who bought them used to answer them, every single time they rang, whenever and wherever they rang: At the dinner table, at the funeral, while having sex. It took about a decade for that behavior to peak before society collectively began to realize it was kind of terrible, and we started poking fun at it -- you saw lots of jokes about it, like that "inconsiderate cell phone man" ad that used to run before movies. And eventually we moved away from the behavior. We're probably in the middle of this curve with social media.

 

BNR:  Part of what creates expertise in a practice or subject is the immersion in the mundane aspects of a given discipline -- its knowledge base at the most detailed level. Is there a risk that cybernetic dependence might affect our ability to spend time within a set of basic tasks and "absorb" them?

 

CT: Absolutely. For example, research into the effect of calculators on the acquisition of math suggests that if you give a child a calculator too early in their math-learning curve, they can rely on it too much, and it can interfere with their ability to internalize math. However, if you give it to them after the point where they've acquired a good grounding in basic math, the calculator can be quite beneficial: It lets them muck around with numbers and run fun experiments that reinforce, and extend, their existing number sense. Interestingly, even mental algorithms can shortchange thought. Math teachers now know that the "carry the 1" technique for adding large numbers together actually inhibits people from really understanding the way numbers work; kids who use the carry-the-one approach for adding make weirder and bigger errors than kids who don't, because they're not really thinking of the numbers as quantities…they're just executing an algorithm to "solve the problem." So yes, using algorithms and tools for our mental work affects how we do our mental work.

 

That said, our thinking processes are already extremely reliant on tools outside of our heads. They've been that way for thousands of years. We have, as the philosopher Andy Clark puts it, "extended minds." We use paper documents to store knowledge so we can consult and reconsult it, giving us a type of recall impossible with our unaided minds; we use pencils to scratch down material so we can manipulate it in a fashion impossible in our unaided minds. I can't do long division without a pen to write the numbers down; does that mean I'm "stupider" when I don't have a pen and paper around? I can't organize and produce a 5,000-word magazine article without being able to store information on paper, on my computer; does that mean I'm stupider without those tools around?

 

The only reason we don't notice how absolutely interwoven our thinking processes have become with older technologies -- pencils, paper, electric light, penicillin, fire -- is that they're old, so we've ceased to notice their effects. But take them away, and the caliber and fabric of our thinking change dramatically -- and, I'd argue, for the worse.

 

In Western culture, we tend to believe that "thinking" takes place only in the pose of Rodin's Thinker: When you're isolated, alone, and unaided. But that's not at all what thinking looks like in the real world. When we think, muse, wonder, and cogitate, our minds are scaffolded by tools, by our environment, and by each other. We're actually very social thinkers.

 

So the long answer to your question is that yes, we can indeed get out of the habit of doing some forms of mental activity when we use a "tool for thought" (to borrow Howard Rheingold's lovely phrase). And we do have to be attentive to the tools we use, to make sure that we're not over-relying on them. But at the same time, "overreliance" is a hard thing to separate out from "use."

 

BNR: You make the point that we've been "outsourcing" parts of our thinking -- especially memory -- for as long as writing has been widely used, and you suggest that one of the underappreciated effects of the technological changes of the last few decades is the explosion in the amount of writing that many people do, not just in emails and texts but in Tweets, short status updates, and comment threads. Are we in a golden age of conversational writing?

 

CT:  I think so, yes. The amount of writing that people do online is astonishing, and historically unprecedented. This is something that's often hard for journalists and academics to grasp, but as scholars of rhetoric will tell you, before the Internet, most people graduated high school or college and did very little writing for the rest of their lives. When they did write, it was usually just some memos for work or the like. Only a rare few wrote tons of letters or diary entries; most of us wrote nothing. So we're in the middle of this crazy social shift where everyday people regularly wonder about something, or have an idea about something, or have a thought about a movie they've seen or a song they've listened to, and they sit down and write a few sentences about it. When the last episode of the second season of the BBC's Sherlock aired, I was a fan, and I immediately went to a discussion board to see what other fans thought about it … and the thread was, barely an hour or two after the episode aired, already 10,000 words long, and filled with clever, smart theories about the big mystery. People were alternately offering evidence for their ideas, rebutting each other, flattering each other, disagreeing. It was, as you say, a particularly conversational form of writing -- part bar-room debate, part exchange of letters out of the 18th century.

 

Indeed, the 18th-century literary culture in England fascinates me, because it reminds me a lot of today's online culture. It was the early days of journals and newspapers, and the literate folks in London at the time were constantly writing missives to one another publicly, ridiculing each other, arguing and tossing thoughts back and forth. When you go back and read The Rambler and magazines like that, it feels awfully reminiscent of today's online conversational culture. This is something that Tom Standage points out in his terrific new book, Writing on the Wall: Social Media -- the First 2,000 Years. For most of human history, Standage argues, our media were very conversational and participatory. It was only for a relatively short period of a few decades in the 20th century that media became massive, centralized, and something the average person couldn't participate in.

 

There were clear benefits in the rise of huge, well-funded media organizations -- but something was lost, too, when so many other voices got crowded out. The same balance obtains today, in reverse: We have a lot more voices, but arguably a less centralized social discourse, and plenty of cranks and flat-out abusive speech that wouldn't have been allowed in the big centralized media.

 

BNR: You talk about the change in society that comes with an "always on" world -- and suggest that sooner or later we may collectively rethink that posture; as the networked life becomes more and more normal, the anxious and fascinated need to post dinner plates to Instagram or interrupt a conversation to check Twitter may wane. But won't device makers and new platforms always be looking for ways to seduce us back into the pursuit of the new?  Is it just the fascination with technology we have to be mindful of, or those who want to sell it to us?

 

CT: The answer here is yes and yes.

 

I think many people are already realizing the limits of their appetite for contact. That's why participation in Facebook looks like it's tailing off. Facebook has sort of overdesigned itself: By trying to be the key to all human interactions, it's shoved so much socializing into one place that it is becoming really ungainly. In my three years of research for my book, I talked to tons of people who find Facebook extremely useful -- but few who said they had fun using it. Most people, by contrast, told me they enjoyed simple tools that offered one relatively clear mode of contact: Photos on Instagram, evanescent messages on Snapchat, or a completely public Twitter account. But it's also true that we are, most of us, pretty social people, and so a lot of us -- certainly not everyone -- will probably keep up a reasonable level of ambient contact with each other. The pleasures of it are as real as the annoyances.

 

Here's a comparison point: Our adjustment online is similar to how European and British societies adjusted to urbanization in the 18th or 19th century. If you moved from a rural area to a city, the new environment was simultaneously stimulating -- it's why cities are such creative and productive hubs -- and enervating. If you wanted to survive in the long run, you had to figure out how to carve out moments and spaces of peace and quiet. Living online is pretty similar in a lot of ways.

 

But yes, even as people try to adjust to social media -- and figure out its proper role in their daily lives -- high-tech companies will be constantly offering new gewgaws, trying to lure us in so they can sell us ads.

 

Personally, I'd love to see more social media firms develop business models that aren't reliant on advertising. If you're a social media firm selling ads, your goal is to get people to interrupt what they're doing all day long so they come and stare at your service as much as possible. It's in your economic self-interest to interrupt your "users" -- a telling word, that -- as much as possible. But social media services that actually charge people for their products no longer have this misalignment with the cognitive needs of their customers. (And they're no longer "users" -- they're "customers.")

 

I'd be interested to see if pay-for-our-service social networks could survive. Facebook's average revenue is a measly $5 per person, annually. So they could charge you a little over 40 cents per month on your mobile phone bill -- a pretty modest sum in developed countries, though less so globally -- and make all the money they're currently making, while removing all ads from their service and redesigning it to suit the needs of the paying customers, not the advertisers.

 

BNR: One of the things I enjoyed most in reading Smarter Than You Think was its inclusion of ideas that turn current assumptions about how to use our new computer-and-Internet tools on their heads.  I'm thinking particularly of Drop.io, the now-defunct file-sharing service that had a built-in delete function as a default: users were asked to specify how long their uploads would remain before self-destructing.  What were the most surprising ideas you encountered as you researched?

 

CT: I loved Drop.io's model! I was so sad when it was phagocytosed by Facebook. I think the "artificial forgetting" model is one clever and surprising concept that I encountered and continue to encounter. People are realizing that there can be enormous value in not saving copies of their utterances -- that the evanescence of the utterance is part of what makes it valuable. This is precisely what I hear from people who use Snapchat. They like the idea that they're just tossing pictures and witticisms back and forth with friends -- stuff that isn't for the archives. Sure, they know it's possible for your friend to sneakily save a picture you intended to be transient…but the whole point of the app is that it establishes a new social contract, and if it's violated, you'll know who violated it. (Whereas with Facebook, people regularly tell me they're baffled by who, precisely, can see what.)

 

Another set of ideas that surprised and delighted me came from "memory engineering" -- i.e., tools for making the vast archives of our lives useful again. If you've used a social network for years, you've effectively done a sort of diarying, but there's usually no way to go back in time easily. So I loved the idea behind "Timehop", an app that you can let into your online archives -- your photos, your updates, your check-ins, your texts -- and every day it sends you a little gazette summarizing what you were doing a year ago. It becomes, as the people who used it told me, a sort of daily Proustian cookie, a way to reflect on the shape of their lives. They were, in classic "centaur" fashion, using the unique abilities of computers -- their perfect and relentless timekeeping, their capacious storage -- to help tickle and tease their human memories.

 

BNR: Recently, at the 2013 National Book Awards ceremony, the novelist E. L. Doctorow delivered a speech telegraphing his concern, as a writer, over the digitally mediated future, specifically with regard to the sense that both powerful states and large corporations have a natural tendency to want to control free expression.  You cite a number of powerful examples of autocracies -- like Azerbaijan's government -- learning to use the Internet in ways that belie the "netroots" utopia suggested by early champions.  Are we in danger of underestimating the power of regimes to use the networked world to more efficiently hold onto power?

 

CT: New media have often produced giddily utopian ideas about politics. The telegraph was supposed to make humanity so linked that war would end ("It is impossible that old prejudices and hostilities should longer exist," as Charles F. Briggs and Augustus Maverick wrote in 1858). They said the same thing about radio, and even the airplane. ("Aerial man" would transcend such tribal urges, argued Charlotte Perkins Gilman.) It was the same in the early days of the Internet -- plenty of naïve predictions that governments would be unable to contain speech and activism online.

 

I think most of that naivety is gone, though. When I talk to political dissidents in other countries, local activists, and political thinkers, they all have a pretty robust understanding of the civic challenges of online life: The fact that governments and spy agencies hoover up utterances, that they quite adeptly filter and block speech they don't like. You don't see much hyperbole about "Twitter revolutions" or "Facebook revolutions" either, thankfully.

 

In contrast -- and quite hopefully -- I'm seeing more conversation about "what to do about it" -- i.e., what sort of changes are necessary to make online speech less spied-upon, scooped-up, and surveyed ("surveilled"? I've never been clear on that usage). A lot of the big solutions aren't technological. They're political. In the U.S., activists of all stripes are realizing that the NSA's addiction to dragnet spying was created by bad laws -- like the PATRIOT Act -- and is only going to be curtailed with better laws that rein it in. It's an open question as to whether the sclerotic U.S. political system will really take action on this; I'm kind of doubtful. Then again, as Ann Cavoukian, the privacy commissioner for Ontario, pointed out to me this summer, even the mood in the heavily gerrymandered House is changing. This summer, it nearly voted to defund the NSA program that collects Americans' phone call info: 205 to 217, with 94 Republicans voting to defund the program. That's kind of amazing. If the Snowden leaks keep coming, and I gather they will, this sort of political headwind will only increase. I hope, anyway!

 

BNR:  And as more Internet conversation moves to branded corporate spaces, should we be doing more to resist the privatization of the sphere in which political and creative exchanges are now taking place? Do you believe we are headed for a place where a so-called Digital Magna Carta is going to be necessary?

 

CT: I totally agree. I think we all need to be more actively trying to pull our social lives out of these few, highly centralized, highly commercialized spaces -- and put them back in the hands of the people. In addition to the political changes I've talked about here (reining in governmental spying in the West), we also ought to start running more of our own DIY online services.

 

We don't need huge companies like Twitter and Facebook and Pinterest to broker all of our online expression. Publishing stuff online isn't rocket science any more. There are oodles of free software packages for running one's own discussion forums, like phpBB. There's software like Pogoplug or Tonido that lets you run a "cloud" service for sharing your photos, videos, or whatever from your own laptop at home. There are paid microblogging services, like App.net, that don't rely on advertising. And there are public-spirited geeks, like the ones at the Freedom Box Foundation, who are working on creating social-networking software that, again, lives at home, under your control.

 

I'm not naïve enough to think people are going to abandon all the big commercial services for these bespoke, roll-your-own ones. Nor do I think they should. There are some big advantages to the huge, commercial spaces. They create very large audiences, which can be great for everyone from artists to activists. They're harder for governments to shut down, as per Ethan Zuckerman's "Cute Cat Theory" (Google it!). But I think everyone should be encouraged to learn how to use DIY tools for networking. Frankly, it should be taught at the high school level, the way we teach things like Home Ec and music. It's a modern part of civics and culture.

 

The more we carve out noncommercial spaces for online talk, the better!

 

BNR: Of the technology-based changes in our thinking that you chronicle -- social engagement across networks, collaborative problem-solving, "new literacies" like video, or human-computer cognitive partnerships -- is there one that you think is poised to make the biggest change in our world?  Is anything sneaking up on us?

 

CT: There's nothing I can predict, no.

 

But that's because I'm not a futurist. I'm a reporter! I don't try to predict the future. I just report on what I see happening in the world and the people around me. The downside is that I can't see any farther into the future than the average bear. The upside is that, hopefully, what I'm describing is more accurate than futuristic predictions, because it's based on actual interviews and the stories of real people.

 

BNR: This is a book that tries to capture a rapidly changing terrain.  What's come up since you completed the book?  Has anything emerged that made you wish you'd had a chance to get in just one more chapter?

 

CT: There were about 25 chapters I wish I'd had time to write. I'd hoped to write about the idea of creating more noncorporate spaces online. I'd hoped to write more about how books and the act of long-form reading are evolving. I'd hoped to write about the literary style of short-form utterances, and the existential weirdness of the so-called "Internet of Things" (objects that start talking to each other online), and how online maps are becoming sort of like word processors that we use to think geographically.

 

So I'm shaving all that stuff off the book and into my future journalism.

 

BNR: In your column for Wired, you've championed the defter-than-usual portrayal of innovative technology on the CBS drama The Good Wife. A recent episode featured rolling "telepresence avatars" like those made by Double Robotics -- a screen showing your face is mounted on a camera-equipped, remote-controlled mini-Segway.  Can we expect to see these trundling into our conference rooms any time soon?

 

CT: Heh -- I doubt it. They're pretty silly. Robots that roll around have serious trouble navigating the human world, which was engineered primarily for people with full use of both legs. I think [series creators] Robert and Michelle King know this, though, which is precisely why they put them in the show: To make fun of them.

 

On the other hand, flying drones equipped with cameras: Those are real, and they're coming to a corner of the sky near you. They're incredibly cheap, currently pretty unregulated, and for amateurs just looking to horse around, awfully fun to fly. (I've built and flown a DIY drone myself, albeit one that doesn't have a camera.) So everyday drones are going to cause all manner of crazy privacy issues as they start peeking over rooftops, fences, corporate headquarters, major protests, and everywhere else. This is something I'm aiming to write a lot more about.
