The original (paywalled) is here.
Google Should Not Choose Right and Wrong
By Evgeny Morozov
Oscar Wilde once wrote that “on mechanical slavery, on the slavery of the machine, the future of the world depends”. The Irishman would have been a fan of Google Now, the web company’s rival to Siri, Apple’s snarky voice-activated virtual personal assistant.
Google Now is less chatty than Siri. It pre-empts your questions by analysing what Google already knows about you. It then selects the snippets of information you need at the moment, presenting them as beautiful index cards. Available on Android phones, Google Now might soon also be coming to our browsers.
It is the archetypal anti-hassle machine. Catching a flight later? Since your reservation is in your Google-run inbox or your Google-run calendar, Google Now would show a reminder, tell you what weather to expect on arrival and map the best route to the airport, checking traffic conditions beforehand.
This is all possible because Google can anticipate your information needs. Its predictions rest on some theory about who you are, what you want and where you want to be. For many activities such a theory doesn’t have to be deep to produce accurate predictions. Google doesn’t need to guess your politics to notice that you drive to the airport on Mondays.
But the latest version of Google Now also generates a very different reminder. At the end of each month, Google happily reports – without you ever asking for it! – how many miles you’ve walked or cycled. This intervention is no simple weather trivia. Here Google assumes that walking is more important – perhaps, even more moral – than, say, driving. It explicitly “bakes” morality into its app, engaging in what one might term “algorithmic nudging”.
Had governments advocated such surveillance-powered interventions, many would have found them intrusive, not least because their terms would have to be subject to public debate. Are we measuring the right things? Are we unfairly blaming individuals for failures of institutions? Walking is undoubtedly easier in Manhattan than in the suburbs of Los Angeles.
With Google at the helm, however, resistance is minimal. We don’t mind our phones spying on us – at least not when Google needs this data to tell us about flight delays. Likewise, we have been persuaded by Google’s efforts to recast the information it collects as objective and simply existing “out there” – in nature – unaffected by its recording devices or systems of measurement.
Google’s power and temptation to do good are only poised to increase. As its services are integrated under one umbrella – maps, emails, calendars, videos, books – it knows even more about our moral failings. And as Google begins to mediate our interactions with the built environment – through its self-driving cars, smart glasses, smartphones – the scope for “algorithmic nudging” also expands.
Google doesn’t hide its aims. As Eric Schmidt, its executive chairman, put it: “Technology is not really about hardware and software any more. It’s really about the mining and use of this enormous data to make the world a better place.”
Policy makers might also be interested, since the company can take “nudging” – the application of behavioural economics to policy – a step further. (An example of a “nudge” is a change in a default option for a pension contribution.) In a recent paper, Cass Sunstein, a former official in the Obama administration, celebrates the new, highly personalised defaults possible in an information-rich world. Google could be his natural ally.
Imagine dining out wearing the company’s latest innovation – its smart glasses. The moment you walk in, Google already knows if you’ve met your calorie target for the day. It also knows that fatty foods are bad for you. Should it play the benevolent Big Brother and tell you what menu items to avoid? Should it make those items invisible? Why let you battle the temptation at all?
In a Google-run world, there’s no need to ban outsized soda drinks either, as the smart glasses can make you believe that the soda cup in your hand is actually larger than it is. It isn’t science fiction: Japanese researchers have recently unveiled a head-mounted optical system that does exactly that with food portions.
Such technologies endorse a rather impoverished view of their human masters. Humans, no longer seen as citizens capable of deliberation, are treated as cogs in a system preoccupied with self-optimisation, as if the very composition of that system were uncontroversial.
But will Google, with its rhetoric of personal empowerment, do a better job at fighting obesity or climate change than government-led reforms? Corporate-led “algorithmic nudging” promotes the illusion that problems can be solved through individual action alone. It is an Oprah Winfrey-style model of social change – a Silicon Valley fantasy.
As befits a corporation, Google treats us as utility-maximising consumers and not as citizens, who might care about other members of the community. But shouldn’t we, as citizens, also reflect on those who can’t afford to eat healthy food or live within walking distance of a farmers’ market? Or should we just give them an Android phone?
We must set clear limits on corporate do-goodism and protect our political process from Google’s well-meaning engineers. The latter can, perhaps, build a card to remind our politicians why we elect them.
The writer is author of the forthcoming ‘To Save Everything, Click Here: The Folly of Technological Solutionism’