A TEXT POST

My FT oped on privacy, subjectivity and Facebook

It’s here — and below

/// 

Facebook invades your personality, not your privacy

By Evgeny Morozov

Facebook’s quarterly earnings, released last month, have surpassed most market expectations, sending its stock price to an all-time high. They have also confirmed the company’s Teflon credentials: no public criticism ever seems to stick.

Wall Street has already forgiven Facebook’s experiment on its users, in which some had more negative posts removed from their feeds while another group had more positive ones removed. This revealed that those exposed to positive posts feel happier and write more positive posts as a result. This, in turn, results in more clicks, which result in more advertising revenue.
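Mechanically, this was a simple A/B manipulation of the news feed. Here is a toy sketch, in Python, of the design as described above; the field names, sentiment labels and omission rate are illustrative assumptions, not the study’s actual code:

```python
import random

# Toy sketch of the feed manipulation described above; the real study's
# sampling, sentiment scoring and omission rates were more involved.
def filtered_feed(posts, condition, omit_rate=0.3):
    """Withhold a share of posts whose sentiment matches the experimental arm."""
    shown = []
    for post in posts:
        if post["sentiment"] == condition and random.random() < omit_rate:
            continue  # silently drop this post from the user's feed
        shown.append(post)
    return shown

posts = [{"text": "great day!", "sentiment": "positive"},
         {"text": "awful news", "sentiment": "negative"}]
happier_feed = filtered_feed(posts, condition="negative")  # arm that sees fewer negative posts
sadder_feed = filtered_feed(posts, condition="positive")   # arm that sees fewer positive posts
```

Measure the sentiment of what each group subsequently writes, and you have the experiment’s headline result.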

Troubling ethics notwithstanding, the experiment has revealed a deeper shift in Facebook’s business model: the company can make money even when it deigns to allow its users a modicum of privacy. It no longer needs to celebrate ubiquitous sharing – only ubiquitous clicking.

At the earnings call, chief executive Mark Zuckerberg acknowledged that the company now aims to create “private spaces for people to share things and have interactions that they couldn’t have had elsewhere”. So Facebook has recently allowed users to see how they are being tracked, and even to fine-tune such tracking in order to receive only those adverts they feel are relevant. The company, once a cheerleader for sharing, has even launched a nifty tool warning users against “oversharing”.

As usual with Facebook, this is not the whole story. For one, it has begun tracking users’ browsing history to identify their interests better. Its latest mobile app can identify songs and films playing nearby, nudging users to write about them. It has acquired the Moves app, which does something similar with physical activity, using sensors to recognise whether users are walking, driving or cycling.

Still, if Facebook is so quick to embrace – and profit from – the language of privacy, should privacy advocates not fear they are the latest group to be “disrupted”? Yes, they should: as Facebook’s modus operandi mutates, their vocabulary ceases to match the magnitude of the task at hand. Fortunately, the “happiness” experiment also shows us where the true dangers lie.

For example, many commentators have attacked Facebook’s experiment for making some users feel sadder; yet the company’s happiness fetish is just as troubling. Facebook’s “obligation to be happy” is the converse of the “right to be forgotten” that Google was accused of trampling over. Both rely on filters. But, while Google has begun to hide negative results because it has been told to do so by European authorities, Facebook hides negative results because it is good for business. Yet since unhappy people make the best dissidents in most dystopian novels, should we not also be concerned with all those happy, all too happy, users?

The happiness experiment confirms that Facebook does not hesitate to tinker with its algorithms if it suits its business or social agenda. Consider how on May 1 2012 it altered its settings to allow users to express their organ donor status, complete with a link to their state’s donor registry. A later study found this led to more than 13,000 registrations on the first day of the initiative alone. Whatever the public benefits, discoveries of this kind could clearly be useful both for companies and politicians. Alas, few nudging initiatives are as ethically unambiguous as organ donation.

The reason to fear Facebook and its ilk is not that they violate our privacy. It is that they define the parameters of the grey and mostly invisible technological infrastructure that shapes our identity. They do not yet have the power to make us happy or sad but they will readily make us happier or sadder if it helps their earnings.

The privacy debate, incapacitated by misplaced pragmatism, defines privacy as individual control over information flows. This treats users as if they exist in a world free of data-hungry insurance companies, banks, advertisers or government nudgers. Can we continue feigning such innocence?

A robust privacy debate should ask who needs our data and why, while proposing institutional arrangements for resisting the path offered by Silicon Valley. Instead of bickering over interpretations of Facebook’s privacy policy as if it were the US constitution, why not ask how our sense of who we are is shaped by algorithms, databases and apps, which extend political, commercial and state efforts to make us – as the dystopian Radiohead song has it – “fitter, happier, more productive”?

This question stands outside the privacy debate, which, in the hands of legal academics, is disconnected from broader political and economic issues. The intellectual ping pong over privacy between corporate counsels and legal academics moonlighting as radicals always avoids the most basic question: why build the “private spaces” celebrated by Mr Zuckerberg if our freedom to behave there as we wish – and not as companies or states nudge us to – is so limited?

The writer is the author of ‘To Save Everything, Click Here’

A TEXT POST

my oped in tomorrow’s FT

Silicon Valley is turning our lives into an asset class

By Evgeny Morozov

Tech titans with better data and engineers will disrupt Wall Street, writes Evgeny Morozov

In the past few decades, Wall Street has made finance a central feature of both the global economy and of our everyday lives – a process often described as “financialisation”. Silicon Valley, almost contemporaneously, has done the same for digital media technologies. That process, too, has a fancy name: “mediatisation”.

With reports that Facebook is seeking to buy a drone-manufacturing company, ostensibly to connect the most remote corners of the globe, the days of blessed disconnection seem firmly behind us.

Understandably, many social critics find this troublesome, blaming technology for invading our lives. But it is a false target: mediatisation is actually financialisation in disguise. Having disrupted Madison Avenue, the likes of Google and Facebook – armed with better data, better engineers and better databases – will disrupt Wall Street next.

Silicon Valley companies sit on a trove of data about our most banal daily pursuits. And the kind of data that they gather will only grow more diverse, as the Faustian bargain that we first accepted in our browsers – letting strangers monitor what we do online in exchange for nominally free services – will be accepted in many other domains, especially as the rise of the “internet of things” makes daily interaction with sensors, screens and other data-capturing devices unavoidable.

There is much to like here. A fridge that not only knows that you are running out of milk but can do something about it sounds empowering. Yet in the longer term there is a more consequential side: sensors and internet connectivity are also turning “dumb” gadgets into powerful vehicles of prediction and speculation. The data they capture can be integrated with data from other gadgets and databases to create new information commodities whose value might eclipse the value of the gadgets used to generate the underlying data. Soon, the devices might even be given away for free.

Consider your toothbrush. Armed with a sensor that knows when you are using it, it can detect behaviour patterns – how often you use it (or don’t, as the case may be) – that help determine when you should see the dentist. That prediction would be more accurate if some other sensor-equipped gadget – say, a smart fork – knew how much sugar you consumed. The more data-tracking devices are hooked to the network, the more accurate the predictions.
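To make those mechanics concrete, here is a minimal sketch of such an inference; the 1.5-brushes-a-day cut-off and the smart-fork sugar feed are invented for illustration, not taken from any real product:

```python
from datetime import date, timedelta

# Minimal sketch of sensor-based inference; the thresholds below are
# illustrative assumptions, not any real device's logic.
def should_see_dentist(brush_events, daily_sugar_grams):
    """Flag a dental visit when brushing is sparse or sugar intake is high."""
    cutoff = date.today() - timedelta(days=30)
    recent = [d for d in brush_events if d >= cutoff]
    brushes_per_day = len(recent) / 30
    return brushes_per_day < 1.5 or daily_sugar_grams > 50

# Simulated log: one brushing event every other day for a month.
events = [date.today() - timedelta(days=i) for i in range(0, 30, 2)]
print(should_see_dentist(events, daily_sugar_grams=60))  # True: both signals fire
```

Crude as it is, the sketch shows why every additional connected device means more accurate, and more saleable, predictions.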

Needless to say, there is always someone eager to pay for this – and ubiquitous connectivity will also mean ubiquitous and instantaneous data markets. Perhaps there would even be several bidders for such data, so that an ad hoc online auction, along the lines of those Google uses to sell its adverts, is called for.
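Sketched in code, such an auction might look like the toy second-price mechanism below; the bidders, the prices and the very idea of a spot price per sensor reading are all assumptions for illustration:

```python
# Toy second-price auction, loosely modelled on the mechanism Google
# popularised for selling adverts; bidders and prices are invented.
def second_price_auction(bids):
    """The highest bidder wins but pays the runner-up's bid (or its own if unopposed)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

bids = {"insurer": 0.42, "bank": 0.35, "advertiser": 0.28}  # $ per sensor reading
winner, price = second_price_auction(bids)
print(f"{winner} buys the reading for ${price:.2f}")  # insurer buys it for $0.35
```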

If Amazon can already study your history, predict your purchases and ship them before you even place an order – the online retailer’s “anticipatory shipping” technique – imagine what predictions other data-heavy companies could make.

For example, in a year or two, Google will be present in your car (thanks to its self-driving vehicles but also to the Android operating system that powers other models); in your bedroom (thanks to its acquisition of Nest, which manufactures smart thermostats and smoke detectors); in your pocket (through Android-powered smartphones); and in your entire visual field (via Google Glass, the wearable camera and screen).

In knowing your routes, your daily patterns and your contacts, Google has a far better picture of risk – for example, the odds that you will have an accident or default on a mortgage – than any insurance company or bank. And, in having unmediated access to you via your phone, Google can also sell you insurance or make you an offer for your personal data on the go, using a price point that you are most likely to accept.

That the financial value of your personal data is unstable, fluctuating based on your location, health and social status, means the spirit of speculation will not just invade our everyday life but will also make self-surveillance of our “data portfolios” highly appealing. We will resemble the confused analysts of the US National Security Agency: unsure of the future value of the data we generate, we will opt to store them for posterity. And, unsure of how to maximise that value, we will keep adding data streams in the vain hope that the value of our data portfolio (the sum total of our life) will rise.

The hope that such precarious data entrepreneurship can mitigate the problems of automation or ease our growing reliance on debt is the utopian conceit of the digital elites. Just because the World Economic Forum argues that personal data are emerging as a new asset class, that does not make it a natural or irreversible development. Nor is this development driven solely by technological innovation: like financialisation, mediatisation is primarily a failure of regulation.

Silicon Valley might, indeed, succeed in disrupting Wall Street. Alas, it has shown no real interest in disrupting its long-term agenda of making our lives tick in sync with the speculative logic of finance.

The writer is the author of ‘To Save Everything, Click Here’

///

original is here 

A TEXT POST

My FT oped: “The Snowden saga heralds a radical shift in capitalism”

The Snowden saga heralds a radical shift in capitalism

By Evgeny Morozov

The benefits of personal data to consumers are obvious; the costs are not, writes Evgeny Morozov

Following his revelations this year about Washington’s spying excesses, Edward Snowden now faces a growing wave of surveillance fatigue among the public – and the reason is that the National Security Agency contractor turned whistleblower has revealed too many uncomfortable truths about how today’s world works.

Technical infrastructure and geopolitical power; rampant consumerism and ubiquitous surveillance; the lofty rhetoric of “internet freedom” and the sober reality of ever-increasing internet control – all these are interconnected in ways most of us would rather not acknowledge or think about. Instead, we have focused on just one element in this long chain – state spying – but have mostly ignored all others.

But the spying debate has quickly turned narrow and unbearably technical; issues such as the soundness of US foreign policy, the ambivalent future of digital capitalism, and the relocation of power from Washington and Brussels to Silicon Valley have not received due attention. Yet it is not just the NSA that is broken: the way we do – and pay for – our communicating today is broken as well. And it is broken for political and economic reasons, not just legal and technological ones: too many governments, strapped for cash and low on infrastructural imagination, have surrendered their communications networks to technology companies a tad too soon.

Mr Snowden created an opening for a much-needed global debate that could have highlighted many of these issues. Alas, it has never arrived. The revelations of the US’s surveillance addiction were met with a rather lacklustre, one-dimensional response. Much of this overheated rhetoric – tinged with anti-Americanism and channelled into unproductive forms of reform – has been useless. Many foreign leaders still cling to the fantasy that, if only the US would promise them a no-spy agreement, or at least stop monitoring their gadgets, the perversions revealed by Mr Snowden would disappear.

Here the politicians are making the same mistake as Mr Snowden himself, who, in his rare but thoughtful public remarks, attributes those misdeeds to the over-reach of the intelligence agencies. Ironically, even he might not be fully aware of what he has uncovered. These are not isolated instances of power abuse that can be corrected by updating laws, introducing tighter checks on spying, building more privacy tools, or making state demands to tech companies more transparent.

Of course, all those things must be done: they are the low-hanging policy fruit that we know how to reach and harvest. At the very least, such measures can create the impression that something is being done. But what good are these steps to counter the much more disturbing trend whereby our personal information – rather than money – becomes the chief way in which we pay for services – and soon, perhaps, everyday objects – that we use?

No laws and tools will protect citizens who, inspired by the empowerment fairy tales of Silicon Valley, are rushing to become data entrepreneurs, always on the lookout for new, quicker, more profitable ways to monetise their own data – be it information about their shopping or copies of their genome. These citizens want tools for disclosing their data, not guarding it. Now that every piece of data, no matter how trivial, is also an asset in disguise, they just need to find the right buyer. Or the buyer might find them, offering to create a convenient service paid for by their data – which seems to be Google’s model with Gmail, its email service.

What eludes Mr Snowden – along with most of his detractors and supporters – is that we might be living through a transformation in how capitalism works, with personal data emerging as an alternative payment regime. The benefits to consumers are already obvious; the potential costs to citizens are not. As markets in personal information proliferate, so do the externalities – with democracy the main victim.

This ongoing transition from money to data is unlikely to weaken the clout of the NSA; on the contrary, it might create more and stronger intermediaries that can indulge its data obsession. So to remain relevant and have some political teeth, the surveillance debate must be linked to debates about capitalism – or risk obscurity in the highly legalistic ghetto of the privacy debate.

Other overlooked dimensions are as crucial. Should we not be more critical of the rationale, advanced by the NSA and other agencies, that they need this data to engage in pre-emptive problem-solving? We should not allow the falling costs of pre-emption to crowd out more systemic attempts to pinpoint the origins of the problems that we are trying to solve. Just because US intelligence agencies hope to one day rank all Yemeni kids based on their propensity to blow up aircraft does not obviate the need to address the sources of their discontent – one of which might be the excessive use of drones to target their fathers.

Unfortunately, these issues are not on today’s agenda, in part because many of us have bought into the simplistic narrative – convenient to both Washington and Silicon Valley – that we just need more laws, more tools, more transparency. What Mr Snowden has revealed is the new tension at the very foundations of modern-day capitalism and democratic life. A bit more imagination is needed to resolve it.

The writer is author of ‘To Save Everything, Click Here’

A TEXT POST

The ‘sharing economy’ undermines workers’ rights - my FT oped

The “sharing economy” has many fans but Eric Schneiderman, New York State’s attorney-general, is not one of them. He has demanded that Airbnb, a company that allows anyone to rent their property to strangers, hand over records of its 15,000 hosts in New York City to verify that they pay taxes levied on hotels. But the company, a pioneer of the sharing economy, is fighting the order.

Why fear the sharing economy? Why not let people share apartments, cars, drills and washing machines and make some money on the side? Won’t this promote efficiency, create markets and help with problems such as congestion? It might. But as we celebrate the disruption of old industries, we also must inquire into the structural effects of the sharing economy on equality and basic working conditions.

To some, this might seem an odd concern. Has not the sharing economy already helped the middle classes in despair, the unemployed and the uninsured, those on the brink of bankruptcy? Start-ups such as Airbnb flaunt their credentials as latter-day Franklin Roosevelts, highlighting users whose livelihoods were transformed by the service.

But notice how Silicon Valley moguls disrupt with one hand – only to comfort with another. Lost your job as Amazon forced your local bookstore to close? Do not worry: you can rent out your apartment via Airbnb. Jeff Bezos, Amazon’s chief executive, wins either way: he is an investor in Airbnb.

The advocates of the sharing economy invite us to imagine it as a feel-good utopia that, while fully compliant with market logic, is driven by the altruistic spirit of Wikipedia and open-source software. Such parallels are tricky, as many contributors to open-source projects have full-time jobs at for-profit software companies that subsidise their extracurricular activities.

And how altruistic is all this sharing? Is it true that we no longer value profits over human relationships, that we do not pay taxi drivers for getting us from point A to point B but rather “make a donation” to fellow citizens concerned with reducing carbon emissions, that we rent out rooms in our apartments not to make ends meet but to meet new people? “It’s like the UN at every kitchen table,” Brian Chesky, Airbnb’s chief executive, said of the social benefits of his company. “I think we’re in the midst of a revolution,” he hastened to add. Workers of the world, turn on your smartphones!

But for all their rhetoric, many of these start-ups pursue rather un-revolutionary agendas. They are not interested in reorienting the global economy towards a better quality of life – as proposed by Robert and Edward Skidelsky – or human flourishing – as proposed by Amartya Sen and Martha Nussbaum.

When SF Weekly, a San Francisco newspaper, asked an executive at Uber, an upmarket taxi app, about a protest by Uber drivers concerned by recent firings, he responded that a “driver contracting with Uber is not a bona fide employee” so that “firing, in this case, amounts to deactivating a driver’s account because he’s received low ratings from passengers.” At TaskRabbit, a company that connects those who need their errands run with those who need the money to run them, the “task rabbits” cannot easily communicate with each other. Who knows what trouble they could cause on discovering the subversive Wikipedia page about trade unions?

Or take Airbnb’s resident cosmopolitan, Mr Chesky. Asked how the sharing economy would treat people who “don’t want to be brands”, he did not mince words. “Some people will choose to be anonymous their whole life. That’s OK. But if you don’t opt into this online identity, you’ll have less access to services. The rest of us build a history. We build a brand online.” The power model behind the sharing economy is more Michel Foucault than Joseph Stalin: no one forces you to be part of it – but you may have little choice anyway.

A new UN, indeed: the erosion of full-time employment, the disappearance of healthcare and insurance benefits, the assault on unions and the transformation of workers into always-on self-employed entrepreneurs who must think like brands. The sharing economy amplifies the worst excesses of the dominant economic model: it is neoliberalism on steroids.

Last August, companies including TaskRabbit and Airbnb launched Peers.org – a “grassroots organisation that supports the sharing economy movement”. Ordinary citizens are invited to sign a pledge stating that they “believe the sharing economy should be the biggest economic movement of the 21st century – by building an economy that benefits everyone”.

How exactly one builds an “economy that benefits everyone” was not explained. But Peers.org promised they would not be hiring any lobbyists. And they do not have to: as long as the sharing economy is seen as a logical extension of app-enabled humanitarianism, lobbyists won’t be needed at all – the utopian rhetoric alone will suffice.

The writer is author of ‘To Save Everything, Click Here: The Folly of Technological Solutionism’

A TEXT POST

Review of “Smarter Than You Think”

I’ve got a short review of Clive Thompson’s “Smarter Than You Think” in The Times (of London). Below is the full version (it lost a phrase or two after the edit):

/// 

If you doubt the wonders of smart technology, look no further than your inbox. Chances are you are sending fewer “forgot-the-attachment” emails than you did five years ago. All it took was for popular email services like Gmail to introduce an extra prompt – “Did you forget to attach a file?” – that pops up every time you type the word “attachment” but click “send” without actually attaching anything. How could one deny progress?
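The mechanism is almost trivially simple. A minimal sketch, assuming a keyword heuristic of the sort Gmail appears to use (the trigger words and names below are my guesses, not Google’s code):

```python
import re

# Minimal sketch of an attachment reminder; the trigger words are
# illustrative, not Gmail's actual list.
HINTS = re.compile(r"\b(attach(ed|ment|ing)?|enclosed)\b", re.IGNORECASE)

def forgot_attachment(body, attachments):
    """True if the draft mentions an attachment but none is present."""
    return bool(HINTS.search(body)) and not attachments

draft = "Please see the attached report before Friday."
if forgot_attachment(draft, attachments=[]):
    print("Did you forget to attach a file?")
```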

There’s something to the idea that technology companies could solve the very problems they’ve created while enabling us to build richer social connections. If the choice is between the “dumb email” of yesteryear and the “smart email” of today, we should probably go with the latter and hope, perhaps naively, that we would never have to choose between “dumb glasses” – that know nothing and say nothing – and “smart glasses” that never tire of nudging us to resist that smoothie.

In “Smarter Than You Think,” Clive Thompson, a prolific US-based technology journalist, explores dozens of the latest technologies, from all-remembering wearable cameras to all-manufacturing 3D printers, to argue that they are already changing how we act, think and live. Forget the gloomy prophecies of digital naysayers: on balance, everything is getting better.

Thompson’s case is not exactly original. Thanks to small and ubiquitous sensors – present in smartphones but also in simple household appliances – we can collect ever more data. Thanks to cheaper storage devices, we can save and access it in new ways. Thanks to new platforms for sharing, visualizing and discussing this data, we can find and create new connections – between users, ideas, causes – and unleash new waves of innovation (yes, most of the innovation so far is in the sloganeering field, but we are only at the early stages!). Condensed to a tweet – Thompson, keen to promote new cognitive formats, would approve – the book’s message is: “Don’t worry – be appy!”

Published just a few months after Edward Snowden took on the National Security Agency, “Smarter Than You Think” reads, well, odder than you think. Even if one grants that Thompson’s picture of the contemporary intellectual renaissance is accurate – if so, museums of the future will have entire galleries dedicated to funny pictures of cats – what exactly are its real costs? A few decades ago, we, somewhat belatedly, realized that, for all the economic benefits of globalization, humanity may not live long enough to enjoy them – not if we don’t take the costs of climate change into account.

Arguably, we are reaching a similar realization with regard to information; there are vast hidden costs to all the wonderful trends described in this book, and Thompson doesn’t broach them. Information might be the oxygen of modern democracies but it doesn’t necessarily follow that more information equals more democracy. It’s not just privacy and the spooks but also a new generation of digital technocrats, who, confident that information is infallible while citizens are imperfect, would rather put politics on auto-pilot than consult the pesky voters. (Hence the recent nudging craze.)

Still, Thompson’s agenda is far from modest; “this book maps out ‘the future of thought’,” announces the very first chapter. Alas, this is not a very reliable map. It could be because Thompson is just too nice to everyone. But could the TED conference – he enthusiastically quotes its curator – have a far more ambiguous effect on “the future of thought” than he lets on? Yes, Facebook, Twitter, and Wikipedia can do wonders, but their presumed bottom-up authenticity has also turned them into favorite tools of the PR industry. Yes, there might be benefits to thinking in public, but could it also explain why so much commentary about technology is so banal and unoriginal? (Thanks again, Twitter!)

Had Thompson embarked on a tour to celebrate digital projects that excite him, this could have been a wonderful, idiosyncratic travelogue. But – and this might be a sign of the “future of thought” that Thompson seeks to document – this has all the markings of a “big idea” book, with an inevitable thesis, a handful of buzzwords that, with some viral luck, might become memes (he particularly likes “ambient awareness,” “pluralistic ignorance,” and any expression that contains the word “cognitive”) and minor interventions in ongoing debates with only a peripheral connection to his own argument (a section on the future of tech-savvy dissidents in Azerbaijan is only a few pages away from a section on the future of IBM’s Watson supercomputer).

Thompson is a talented storyteller but superb reporting is not enough to back up his ambitious thesis that “on balance…what is happening is deeply positive.” “Deeply positive” is an apt description of the trends on display in this book but the world outside is, on balance, much weirder than you think.

A TEXT POST

Fiction vs reality

Tim Wu on my book

Too much assault and battery creates a more serious problem: wrongful appropriation, as Morozov tends to borrow heavily, without attribution, from those he attacks. His critique of Google and other firms engaged in “algorithmic gatekeeping” is basically taken from Lessig’s first book, “Code and Other Laws of Cyberspace,” in which Lessig argued that technology is necessarily ideological and that choices embodied in code, unlike law, are dangerously insulated from political debate. Morozov presents these ideas as his own and, instead of crediting Lessig, bludgeons him repeatedly. Similarly, Morozov warns readers of the dangers of excessively perfect technologies as if Jonathan Zittrain hadn’t been saying the same thing for the past 10 years. His failure to credit his targets gives the misimpression that Morozov figured it all out himself and that everyone else is an idiot.

What my book actually says: 

Alas, Internet-centrism prevents us from grasping many of these issues as clearly as we must. To their credit, Larry Lessig and Jonathan Zittrain have written extensively about digital preemption (and Lessig even touched on the future of civil disobedience). However, both of them, enthralled with the epochalist proclamations of Internet-centrism, seem to operate under the false assumption that digital preemption is mostly a new phenomenon that owes its existence to “the Internet,” e-books, and MP3 files. Code is law—but so are turnstiles. Lessig does note that buildings and architecture can and do regulate, but he makes little effort to explain whether the possible shift to code-based regulation is the product of unique contemporary circumstances or merely the continuation of various long-term trends in criminological thinking. 

As Daniel Rosenthal notes in discussing the work of both Lessig and Zittrain, “Academics have sometimes portrayed digital preemption as an unfamiliar and novel prospect… In truth, digital preemption is less of a revolution than an extension of existing regulatory techniques.” In Zittrain’s case, his fascination with “the Internet” and its values of “openness” and “generativity,” as well as his belief that “the Internet” has important lessons to teach us, generates the kind of totalizing discourse that refuses to see that some attempts to work in the technological register might indeed be legitimate and do not necessarily lead to moral depravity.

A TEXT POST

Recycle the Cycle - II

Oh I completely forgot that my book had an even more damning section on Tim Wu than the one I posted a few hours ago. So here it is for your amusement: 

***

Openness and Its Messiahs

Perhaps some of the worst problems of information reductionism could be avoided if only the solutionists’ transparency vocabulary didn’t brim with ambiguous terms. Appeals for “transparency” no longer look problematic once solutionists start to talk about “openness.” It’s bad enough that our cultural and intellectual heritage makes us view those concepts as worth pursuing in their own right. Solutionists—especially those of the geek persuasion—regularly develop and consume their own myths about how “openness” contributes to progress and success, which only adds to the confusion.

It might be tempting to view this openness fetish as originating in communities promoting open-source software. But according to Chris Kelty, the UCLA anthropologist who studies geek cultures, there is not much agreement about the value of openness—about whether it’s worth pursuing as its own end or only instrumental to some higher goods—even in geek circles. As Kelty points out, “Open tends toward obfuscation. Everyone claims to be open, everyone has something to share, everyone agrees that being open is the obvious thing to do—after all, openness is the other half of ‘open source’—but for all its obviousness, being ‘open’ is perhaps the most complex component of Free Software.” Thus, as we have already noticed with the transparency rhetoric, it is never quite clear whether being open is a means or an end.  

As a result, notes Kelty, there is no geek consensus on the merits of openness at all. “Is openness good in itself, or is openness a means to achieve something else—and if so what? Who wants to achieve openness, and for what purpose? Is openness a goal? Or is it a means by which a different goal—say, ‘interoperability’ or ‘integration’—is achieved? Whose goals are these, and who sets them? Are the goals of corporations different from or at odds with the goals of university researchers or government officials?” So, if Kelty is to be believed, the community that has done the most to infuse technology debates with respect for “openness” is itself torn about its merits and meanings.

Our Internet debates, in contrast, tend to be dominated by a form of openness fundamentalism, whereby “openness” is seen as a fail-safe solution to virtually any problem. Instead of debating how openness may be fostering or harming innovation, promoting or demoting justice, facilitating or complicating deliberation—the kinds of debates we are likely to have about the uses of openness in the messy world that we live in—“openness” in networks and technological systems is presumed to be always good and its opposite—it’s quite telling that we can’t quite define what that is—always bad.

This Manichean tendency to view every technological issue in open-versus-closed terms leads to almost religious celebration of companies that embrace openness for tactical purposes and use it to their own advantage. The tactic here is once again very similar to what Elizabeth Eisenstein did with attributing qualities like fixity to “print culture.” Openness is presumed to be an “Internet” value, so whenever it can be read into the actions of “Internet ambassadors”—the Googles and Facebooks of this world—it’s invoked to explain their success. Then, this success is itself invoked to prove that “openness” is indeed an Internet value. This explains why our Internet theorists are never wrong.

Take Tim Wu, who celebrates Google, an arch-open company in his view, as if it were a divine creature. In The Master Switch, Wu writes that Google’s birth was “audacious” and its ideas are “vaguely messianic.” Its founders—perhaps like Jesus?—“style themselves the challengers to the existing order, to the most basic assumptions about the proper organization of information, the nature of property, the duties of the American corporation, and even the purpose of life.” Google represents nothing less than the “utopia of openness,” which aims to “plant the flag of openness deep in the heart of the telephone territory” and never dares to “resist or subdue the Internet’s essential structure” (remember: resistance is futile; the network, with its “essential structure” and “architecture,” is not going away). It is “the greatest corporate champion of openness,” the leader of the “openness movement,” and “the incarnation of the Internet gospel of openness.” Wu’s Google is also one of the “apostles of openness”—very much unlike Steve Jobs, the “apostle of perfectibility”; former FCC chairman Reed Hundt, who is a “competition apostle”; and former Time Warner CEO Gerald Levin, who is “an apostle par excellence of [the] control model.”

Gospel, messiah, apostle, incarnation—Wu writes as if he had some kind of spiritual awakening while visiting Google’s temple in the holy city of Mountain View. Oddly enough, he never mentions that he himself has been an (unpaid) adviser for Google and helped greatly to shape its early strategy on, well, “openness.” (In 2007 Chris Sacca, then head of special initiatives at Google, told Businessweek, “Tim helped us catalyze a strategy… He’s a singular force in this space. You’re just seeing the start of what he’s going to accomplish.”) Such disclosures make it difficult at times to tell whether Wu is praising Google’s genius or his own.

Wu’s effervescent analysis portrays Google’s predilection for openness as natural and inevitable; its executives simply saw the structure of the network and couldn’t resist it. It’s the print debate all over again, with Google’s “openness” being just a by-product of “the Internet’s essential structure,” much like fixity, in Elizabeth Eisenstein’s account, was just a manifestation of some eternal quality of print. That Google may have played a role in shaping or maintaining this very structure of “the Internet,” positioning it as “essential” rather than “contingent,” that it might have spent a lot of marketing and think tank money to be seen as an “evangelist of openness,” that it surrounded itself with an army of “openness” evangelists—none of this enters Wu’s analysis (but then, he’s one of the evangelists in question).

Compare Wu’s messianic pronouncements with a very different kind of empirical analysis that makes no a priori assumptions about Google’s divine status in the pantheon of openness gods and instead tries to explain what that status does for Google and how it has been achieved. Kimberley Spreeuwenberg and Thomas Poell, two Dutch academics, conducted a detailed study of how Google has created, managed, and positioned the work done within the Open Handset Alliance—a consortium of eighty-four companies that develop software and hardware for Google’s Android platform. Google and its executives never miss a chance to brag that their approach to mobile platforms, unlike that of Apple, is dominated by “openness.”

Yet, as the Dutch study points out, “open” in Open Handset Alliance might be something of a misnomer, for “it is highly questionable whether Android, in the light of the ideals of open source, can in fact be characterized as an ‘open source project.’” Thus, the authors note, “while Android was publicly introduced as a project aimed at preventing any ‘industry player to restrict or control the innovations of any other,’ within the Android ecology Google clearly has control over the other involved actors.”

This control is achieved through tricky software licenses and restrictive technological specifications for how software and hardware should be designed, all of them wrapped in the stale language of “compatibility.” Furthermore, leaked communication between Google and one of the hardware partners in the Open Handset Alliance illustrated that Google can exercise control over its partners in a nominally “open” ecosystem by tinkering with various carrots and sticks, for instance, by allowing well-behaving partners to acquire certain features ahead of the competition or threatening to disable certain features for partners that do not behave.

Likewise, since Google’s interest in expanding into mobile handsets is partly driven by its desire to remain a powerful player in advertising, the company has no strategic interest in following the “open-source” playbook down to the last rule. Instead, it picks the rules it wants to follow based on its own corporate strategy (e.g., it won’t let independent developers code the operating system itself, as this might weaken its control over development and, indirectly, its utility for harvesting user data—which would make achieving its advertising goals much harder).

This is not unexpected, but instead of celebrating what Google does for openness, it’s important to investigate what openness does for Google. As one perceptive observer noted of Google, “‘Openness’ and ‘connectedness’ are not the principles on which it is organized so much as the products that it sells.” Why this market for openness and connectedness exists, how it relates to other tenets of Internet-centrism, and how this market is manipulated: all of these are not the kinds of questions one is likely to ask when the occurrence of “openness” on “the Internet” is presumed to be natural and unproblematic. To use the dreadful language of social theory, ideas like “openness” and “the Internet” are constructed—and mutually co-constructed at that—and they do not drop down on us from the sky. Unless we are prepared to trace how such construction happens, not only will we write bad history of technology, but we will end up with extremely confused policy making that treats contingent and fluid phenomena (which, of course, might be worth defending) as permanent and natural fixtures of the environment.

Thus, while Internet-centrists assume that Google is “open” by default, their opponents—let’s call them Internet realists—assume that Google does a lot of work to look “open” and investigate what that work involves. While Internet-centrists tend to be populist and unempirical, Internet realists start with no assumptions about the intrinsic values of “openness” and “transparency”—let alone their inherent presence in digital networks—and pay particular attention to how these notions are involved and manifested in particular debates and technologies. While Internet-centrists believe that “openness” is good in itself, Internet realists investigate what the rhetoric of “openness” does for governments and companies—and what they do for it.

A TEXT POST

On Kevin Kelly

As a follow-up to the previous post - on Tim Wu - I’ve also decided to post another long section from the book - this time on Kevin Kelly. (Wu’s own thoughts on Kelly are here; my even longer review of Kelly’s “What Technology Wants” can be found here). See just what an “ambitious work of tech philosophy” - as Wu calls it in his review - Kelly’s book is!

***

Against Technological Defeatism

Viewed in the abstract, it may seem that the tides of digital preemption, situational crime prevention, and reputation-based controls are unstoppable and irreversible. Information is everywhere, and it’s getting cheaper. All of us are carrying mobile phones. Technology seems to be moving in accordance with its own law—Moore’s law—and we, the humans, can only conform and tinker with our laws to meet technology’s demands.

This sentiment pervades our public debate about technology. Thus, the Wall Street Journal’s Gordon Crovitz writes that “whatever the mix of good and bad, technology only advances and cannot be put back.” The New York Times’s Nick Bilton, writing of multitasking, notes that “whether it’s good for society or bad … is somewhat irrelevant at this point.” Parag and Ayesha Khanna argue in Hybrid Reality that “the flow of technology is at most slowed by reluctant governments, but it is more accurate to say that technology simply evades or ignores them in search of willing receivers.” All these commentators adopt the stance of what I call “technological defeatism,” which—by arguing that this amorphous and autonomous creature called “Technology” with a capital T has its own agenda—tends to acknowledge implicitly or explicitly that there’s little we humans can do about it.

This view of technology as an autonomous force has its own rather long intellectual pedigree; in 1977 Langdon Winner offered perhaps the best summary in his Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. This view has been debunked hundreds of times as a lazy, unempirical approach to studying technological change, and yet it has never really left the popular discourse about technology. It has recently made a forceful appearance in Kevin Kelly’s What Technology Wants, and Kelly’s thought is not a bad place to observe technological defeatism up close, if only because he is a Silicon Valley maven and the first executive editor of Wired. Besides, very diverse thinkers about “the Internet”—from Tim Wu to Steven Johnson—cite Kelly’s What Technology Wants as an influence. Thus, it won’t be such a great stretch to say that Kelly’s theories do provide the intellectual grounds on which Internet-centrism grows and flourishes.

The defining feature of Kelly’s thought is its explicit denial of its own defeatism. Kelly, using a fancy word, “technium,” as a stand-in for “Technology” with a capital T, reassures his readers that “the technium wants what we design it to want and what we try to direct it to do.” This sounds like a rather uplifting, humanist message—but the very next sentence shatters it: “But in addition to those drives, the technium has its own wants. It wants to sort itself out, to self-assemble into hierarchical levels, just as most large, deeply interconnected systems do. The technium also wants what every living system wants: to perpetuate itself, to keep itself going. And as it grows, those inherent wants are gaining in complexity and force.”

Kelly offers the best of all possible worlds: technology is both what we make of it and an autonomous force with its own wants and desires, largely independent of humans. Kelly’s thought is full of such doublespeak, by which we are simultaneously promised control over technology and assured that we need no such control because it’s too late. Thus, he can write that “our concern should not be about whether to embrace [technology]. We are beyond embrace; we are already symbiotic with it,” only to follow with “and most of the time, after we’ve weighed downsides and upsides in the balance of our experience, we find that technology offers a greater benefit, but not by much. In other words, we freely choose to embrace it—and pay the price.” So we get both mysticism—we are symbiotic with technology; we’ve already embraced it!—and radical empowerment—whenever we embrace technology, it’s because we want to!—which is a rather odd combination.

But, promises Kelly, none of this actually matters, because technology wants the same things as evolution, for technology is just evolution by other means. Thus, he notes that “with minor differences, the evolution of the technium—the organism of ideas—mimics the evolution of genetic organisms.” Technology is nature, and nature is technology; resistance is futile—who would want to challenge nature? With this simple insight, Kelly develops a whole theory that can explain literally every development—from malware like Stuxnet to Google glasses—by claiming that this is just what technology wants.

All we have to do is to develop the right listening tools—and the rest will follow. Hence, notes Kelly, “only by listening to technology’s story, divining its tendencies and biases, and tracing its current direction can we hope to solve our personal puzzles.” Elsewhere, he writes, “We can choose to modify our legal and political and economic assumptions to meet the ordained [technological] trajectories ahead. But we cannot escape from them.” So, what he is saying here is this: technology has a story to tell; we should listen to it and modify our political and economic assumptions accordingly.

But why, one might ask, should we modify our political and economic assumptions if we can instead shape those trajectories? What if they are not ordained? Why alter our conception of privacy if we can regulate Facebook and Google? Why accept the proliferation of measures inspired by situational crime prevention and digital preemption everywhere if we can instead limit them only to instances in which they do not undermine dissent and deliberation? And how far should we go in modifying our assumptions? What if the voice of technology that Kelly pretends to hear is actually the marketing speak of Silicon Valley’s public relations departments? Kelly doesn’t bother with such questions; instead, he succumbs to the pro-innovation bias and declares that no meme should ever go to waste: “The first response to a new idea should be to immediately try it out. And to keep trying it out, and testing it, as long as it exists.” Do you hear that, land mines?

Concerns over distribution never appear in Kelly’s analysis. Instead of discussing who should get to play the proverbial Aristotelian flute—the rich? the talented? the random?—Kelly imagines that technology will simply produce enough flutes so that questions of distribution will themselves become obsolete. Like Peter Diamandis, Kelly depicts a world in which technology will guarantee abundance, and abundance will make conflicts over resources unnecessary. This seems a rather shallow reading of human nature, for when everyone has a flute, some people will certainly want two, if only to stand out from their neighbors. Abundance in the absence of robust political institutions means little.

What’s most disturbing about Kelly’s ideas—and here he’s quite representative of many other technology pundits—is that he thinks beyond local communities and even nation-states. His playing field is the whole of humanity, the entire cosmos. It’s a philosophy best described as macroscopism: everything is analyzed based on how well it fulfills the needs of humanity as a whole. Thus, local communities that choose to restrict certain technologies or prohibit them outright are portrayed as essentially stealing something from humanity. By the same logic, Europeans are holding back possibilities for all of us because they regulate genetically modified food or have tougher environmental standards. It’s one of those cases in which the vacuity of rhetoric surrounding global justice empties existing local practices of any meaning and space for maneuver.

This is most pronounced in Kelly’s discussion of the Amish and their notoriously limited—some might say well-thought-out—use of technology. What bothers Kelly about the Amish is that, by refusing to use certain technologies, they are actually slowing down innovation everywhere: “By constraining the suite of acceptable occupations and narrowing education, the Amish are holding back possibilities not just for their children but indirectly for all.” The idea never occurs to Kelly that political communities might be entitled to self-determination and that, as long as they arrive at some restrictions on technology in a democratic fashion—alas, this is not always the case with the Amish—it might actually be good for humanity. Instead of criticizing the undemocratic means, he is only concerned with the ends.

Likewise, when discussing restrictions on technology, Kelly views all of them as ineffective, even harmful. “If we take a global view of technology, prohibition seems very ephemeral. While an item may be banned in one place, it will thrive in another.” He continues, “In a global marketplace, nothing is eliminated. Where a technology is banned locally, it slips away to pool somewhere else on the globe.” But why should we take a global view of technology when we live in a world where technology is regulated by local communities? A certain technology might disappear in one place but appear in another because, in the former case, the community deemed it unacceptable and was powerful enough to enforce the ban, while in the latter case, the community either embraced the technology of its own will or was simply too weak or corrupt to resist the marketing talk of whoever came pitching.

The problem with Kelly’s thought is that, while nominally about technology, it’s actually deeply political; what’s worse, it traffics in rather obnoxious politics. No one liked the idea that technology is just an extension of nature more than the Nazis (well, at least before the possibility of defeat forced them into a more pragmatic mode). Here is Kelly on nature and technology: “Technology’s dominance ultimately stems … from its origin in the same self-organization that brought galaxies, planets, life, and minds into existence.” Or consider this passage: “We tend to isolate manufactured technology from nature, even to the point of thinking of it as anti-nature, only because it has grown to rival the impact and power of its home. But in its origins and fundamentals, a tool is as natural as our life.” Now compare Kelly’s proclamations with philosophizing by the Nazi technology functionary Fritz Todt: “It would be paradoxical if the works of technology stood in contradiction to nature in their outward expression since the real essence of technology is a consequence of the laws of nature… The works of technology must be erected in harmony with nature; they may not be permitted to come into conflict with nature as thoughtless, egotistical measures.” The Nazis heard the voice of technology: it informed them about gas chambers.

Likewise, the laissez-faire part of Kelly’s thought comes directly from Ayn Rand, even though he doesn’t acknowledge the connection.

Rand’s name rarely comes up in the context of technology theory, but she did write one essay, “The New Anti-Industrial Revolution,” that addressed the subject of technology regulation head-on. The crux of Rand’s argument can be boiled down to one pithy saying: “A ‘restricted’ technology is the equivalent of a censored mind.” Thus, Rand writes, in the best tradition of macroscopism, that “restrictions [on technology] mean the attempt to regulate the unknown, to limit the unborn, to set rules for the undiscovered.” Because we never know what new innovation a technology regulation might thwart, we should never attempt it in the first place. “Who can predict when, where or how a given bit of information will strike an active mind and what it will produce?” wonders Rand before warning that the “ecological crusade” would rid us of our toothbrushes, and “computers programmed by a bunch of hippies” (she actually wrote that—in 1971!) would retard human progress. By this logic, societies should not restrict the use of biological weapons or asbestos because we don’t know what good might come of them.

To support the idea that technologies—and now “the Internet”—develop in accordance with their own rules, Kelly and other pundits usually invoke Moore’s law. For Kelly, “the curve [behind Moore’s law] is one way the technium speaks to us.” The idea that Moore’s law is akin to a natural law is widespread in Silicon Valley—it’s one of the original myths of Ray Kurzweil’s singularity movement—and it has long spread beyond the technology industry, frequently invoked to justify some course of action.

There are few empirically rigorous studies of Moore’s law, but Finnish innovation scholar Ilkka Tuomi has done perhaps the most impressive work, digging up industry data, calculating actual growth rates, and tracking various expressions and references to Moore’s law in the media. Tuomi’s conclusion? “Strictly speaking there is no such Law. Most discussions that quote Moore’s Law are historically inaccurate and extend its scope far beyond available empirical evidence,” he writes. Furthermore, notes Tuomi, “sociologically Moore’s Law is a fascinating case of how myths are manufactured in the modern society and how such myths rapidly propagate into scientific articles, speeches of leading industrialists, and government policy reports around the world.”

In its original 1965 formulation by Intel cofounder Gordon Moore, the law stated that the number of components on chips with the smallest manufacturing costs per component would double roughly every twelve months. Ten years later Moore significantly revised his estimates, updating the growth rate to twenty-four months. But he also changed what was being measured. Thus, writes Tuomi, while still counting the number of components on semiconductor chips, Moore now no longer focused on optimal-cost circuits but rather mapped the evolution of the maximum complexity of existing chips. In 1979 he revised the law yet again. The industry, in the meantime, took his law to mean whatever it wanted, even embracing a different time estimate of eighteen months. As most media reports will attest, many still believe that eighteen months is what Moore said—even Intel’s site used to claim this—but Moore never said any such thing, and he is usually the first to point it out (“I never said eighteen months. I said one year and then two years.”).
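The difference between those formulations is not pedantry: compounded over a decade, the commonly cited doubling periods diverge enormously. A quick back-of-the-envelope calculation (mine, not Tuomi’s) makes the point:

```python
# Back-of-the-envelope: growth multiple after ten years under the three
# doubling periods attributed to Moore's law at various times.
def growth_factor(years, doubling_months):
    """Multiple by which capacity grows if it doubles every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for months in (12, 18, 24):
    print(f"doubling every {months} months: x{growth_factor(10, months):,.0f} in a decade")
# doubling every 12 months: x1,024 in a decade
# doubling every 18 months: x102 in a decade
# doubling every 24 months: x32 in a decade
```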

By analyzing the actual growth rates, Tuomi found that while the semiconductor industry was experiencing significant growth, it was anything but neat and exponential. The growth in the 1970s exhibited different patterns from that in the 1980s; growth patterns in the 1990s differed again. There was even more diversity across individual microprocessors. To question Moore’s law, then, is not to deny that important changes have happened over the last five decades but only to see how well those changes fit a singular pattern that a “law” predicts. As Tuomi points out, Moore’s law has always been about the future, not about the past; historical accuracy has never really bothered the semiconductor industry.

One intriguing interpretation of Tuomi’s work is that the semiconductor industry greatly benefited from the rhetoric surrounding Moore’s law, for it promised ever-cheaper semiconductors and helped ease concerns about where they would actually be used, thus boosting the initially weak demand for the industry’s products. In retrospect, this may have been for the better. “The industry has been continuously falling forward, hoping that Moore’s Law will hold, economically save the day, and justify the belief in technical progress,” notes Tuomi. “Instead of filling a market need, the semiconductor industry has actively and aggressively created markets.” But we shouldn’t mistake the clever marketing and rhetorical tricks of the semiconductor and computer industries for divine laws that inform us about the future.

A concept like Moore’s law doesn’t just fall from the sky; nor does it stay around for so long simply because of its accuracy (which, at any rate, isn’t great). Instead of postulating that technology speaks to us through Moore’s law, why not study who else—perhaps Intel?—might be doing the talking? That this “what technology wants” kind of discourse allows technology companies to present their business strategies as a natural unfolding of history is not something we should treat lightly. Technology wants nothing—and neither does “the Internet.”

A TEXT POST

Recycle the Cycle

Following Tim Wu’s review of my book in The Washington Post, I thought it would be fun to post the long section from the book where I’m critiquing Wu. No wonder he sounds so annoyed: the empire of bullshit that is “the Internet” is slowly beginning to crumble. The section is called “Recycle the Cycle” (a reference to Wu’s “cycle” theory - which, surprisingly, I don’t find very convincing). 

***

Recycle the Cycle

If Eisenstein’s print culture is an example of how clumsily history can be appropriated to frame the present debate about “the Internet,” the traffic occasionally goes in the other direction as well—as when our Internet commentators start with contemporary anxieties and travel back in history to show how many of the modern debates associated with “the Internet” are themselves just a subset of much greater, longer debates about networks, information, and technology. There is nothing wrong with their mission per se—some might even argue that this is what history is for—but most such accounts are peculiar in that, in their quest to tell a certain story about “the Internet,” they misrepresent and badly mangle the past, leaving us with an impoverished reading of history and a confused game plan for the future.

This should make us pause to ponder whether Internet-centrism—whatever its own origins in bad history—might be nudging us to rewrite the history of other, pre-Internet periods with one simple purpose: to establish a coherent teleological account of how all other technologies paved the way for “the Internet” and how their governance failed to embrace “Internet values” and may have delayed the arrival of this “network of all networks.” This is the ideology of Internet-centrism at its purest: it suggests what kinds of questions we could and should be asking of the past. As an ideology, it has no need to dictate the answers, for we already know what we need to find in order to complete the grand narrative of “the Internet” itself.

A troubling example of what Internet-centrism does to history—in terms of both mangling the content and giving a second life to arcane, long-forgotten methodologies—can be found in Tim Wu’s much-acclaimed The Master Switch. Wu, a legal scholar who coined the term “net neutrality,” is a leading contributor to unfolding debates about “the Internet”; The Master Switch is his attempt to explore the history of other technologies—the telegraph, telephone, radio, cinema, television—and illuminate what those technologies can tell us about our current predicaments. This sounds like a noble mission, but anyone undertaking it should be aware of the immense difficulty of engaging with the past on its own terms. At worst, an attempt to illuminate the present by studying the past can turn into a fishing expedition, where the past becomes just a giant toxic aquarium, storing enough factoids and exotic characters to buttress any interpretation of virtually any contemporary trend or phenomenon.

Wu’s argument in The Master Switch goes like this: There’s something peculiar about information industries, for they tend to be dominated (and intellectually ravaged) by “information emperors”—Steve Jobs–like personalities who strive for absolute control. The dictatorial rule of such emperors and several structural qualities of their information empires usually lead to what Wu calls “the Cycle,” which is the inevitable closing of the once open and innovative industries. It happens either because the information emperors are clever but ruthless businessmen or because they co-opt the government into giving them protection from competition. This is how we got Hollywood’s studio system, which exercised unprecedented control over what films to make and what issues to censor; a closed telephone network, where AT&T banned users from plugging in their own devices, thereby potentially delaying the advent of “the Internet”; and, more recently, Apple’s world of apps, in which a politburo sitting somewhere in Cupertino reviews and approves the apps it likes and deletes those it doesn’t.

Wu’s proposed solution to this problem is to prevent companies in the information business from integrating vertically—that is, to prohibit companies that create information from owning or building the infrastructure for its dissemination, and vice versa. But the government’s role would end there: Wu’s reading of history suggests that state involvement has been mostly detrimental to the growth of information industries. His ideal is to keep both big government and big business out of the information industries; this, according to Wu, is how all successful information industries have developed, including “the Internet,” and this is how it should be in the future. Amen.

This might seem like an appealing and elegant argument, but in reality it’s just an attempt to come up with one of those “theories of everything.” In this instance, “everything” is to be explained by a fixed set of concerns—in Wu’s case, concerns over openness and innovation—that have come to dominate our thinking about “the Internet.” First of all, Wu conveniently leaves aside those information industries—like book publishing—in which no dominant information emperor has emerged. The Cycle doesn’t go there; it’s too crowded. Curiously, one such emperor might emerge very soon—his name is Jeff Bezos, and he runs a small start-up called Amazon—but Wu himself seems to be enamored of Amazon and the price efficiencies it brings. Second, by limiting his history only to America—and why would “the Cycle,” if it were real, unfold in America only?—he misses many foreign cases in which information emperors have done much good.

Wasn’t André Malraux, France’s powerful minister of cultural affairs under Charles de Gaulle and the godfather of New Wave cinema, one such emperor, albeit perhaps of a public-service variety? Zooming in on Malraux’s career would reveal that the success of the French film industry in the 1960s was a direct consequence of the government’s eagerness to subsidize risky low-budget films and support maisons de la culture, where such films could be shown. It’s not a story of market-led innovation; quite the opposite. Information emperors don’t have to be seen as evil (perhaps they don’t have to be seen at all; Internet-centrism, in Wu’s hands, has miraculously resuscitated the much-discredited “great man of history” style of narrating the past). Likewise, governments, despite the many conspiratorial suspicions that geeks harbor about them, can be powerful and benevolent players in the information industry.

One doesn’t have to travel to France to see that; in fact, a more comprehensive look at the history of information empires in America reveals as much. As Paul Starr has shown in his devastating review of The Master Switch in the American Prospect, even a cursory look at the history of the post office—a communications network created by the government to foster free expression—is enough to disprove many of Wu’s theories. The post office was conceived of as a monopoly, and it’s been extremely successful in its mission. According to Starr, “The government didn’t invite rival postal firms to compete; in fact, it created a monopoly. That monopoly, however, was conducive to free expression because of the policies Congress adopted, which subsidized the circulation of newspapers irrespective of their viewpoint and spread postal service throughout the country.” But on “the Internet,” no one likes monopolies—they smack of Microsoft and IBM—so this chapter of telecommunications history simply gets thrown overboard. Internet-centrism tolerates no competing hypotheses.

As Starr points out, had the US government followed Wu’s dictum that “government’s only proper role is as a check on private power, never as an aid to it,” it “would not have created the Post Office or fostered the rapid development of newspapers, and American democracy would have suffered. More recently, the United States would not have developed the Internet or public broadcasting”—both of which required massive public financing. Such strong antigovernment sentiment—the notion that government is always a parasite on innovation—is a recurring feature of the geek mentality, and it is partly responsible for the disgust many geeks feel toward politics. As Starr notes, “Government policy, in Wu’s distorted recounting, is mostly a record of regulatory capture and craven mistakes that Americans should be ashamed of—even though, strangely enough, the United States has for much of its history been a leader in communications, partly because of the constructive role government has played.” Is it really that surprising, then, that a recent column on the technology site InfoWorld was titled “Why Politicians Should Never Make Laws about Technology”? If geeks learn their history from Tim Wu, this sentiment follows quite naturally.

Methodologically, Wu’s treatment of information industries is very close to Eisenstein’s treatment of print culture: he starts by projecting the qualities he associates with “the Internet” back into the past, assuming that the industries and technologies he studies have a nature, a fixed set of qualities and propensities; he then proceeds to celebrate selectively those examples that confirm these qualities and to discard those that don’t. So Wu begins with the hunch that the openness of “the Internet” is under threat, travels back in history to find trends suggesting that all information industries have experienced similar pressures, and returns to the present to announce that history reveals that openness is indeed under threat on “the Internet.”

That this is the very premise on which he began his intellectual journey doesn’t much matter in the end, because such history has a very clear activist bent; the goal is not to understand the history of technology but to find enough historical arguments to—just as in Jonathan Zittrain’s case—make “the Internet” live forever. Such Internet-centrism would be bad enough in itself, but it is also exerting a very unhealthy influence on the history of technology and media, where everything that transpired before “the Internet” is now reexamined according to its benchmarks. Historical accounts inspired by Internet-centrism are simply bad history, even if they occasionally make for effective policy advocacy on issues like net neutrality. That Internet-centrism blinds us to this reality is a reason to worry, not to celebrate.

A TEXT POST

My FT oped: Google Revolution Isn’t Worth Our Privacy

Let’s give credit where it is due: Google is not hiding its revolutionary ambitions. As its co-founder Larry Page put it in 2004, eventually its search function “will be included in people’s brains” so that “when you think about something and don’t really know much about it, you will automatically get information”.

Science fiction? The implant is a rhetorical flourish but Mr Page’s utopian project is not a distant dream. In reality, the implant does not have to be connected to our brains. We carry it in our pockets – it’s called a smartphone.

So long as Google can interpret – and predict – our intentions, Mr Page’s vision of a continuous and frictionless information supply could be fulfilled. However, to realise this vision, Google needs a wealth of data about us. Knowing what we search for helps – but so does knowing about our movements, our surroundings, our daily routines and our favourite cat videos.

Some of this information has been collected through our browsers but in a messy, disaggregated form. Back in 1996, Google didn’t set out with a strategy for world domination. Its acquisition of services such as YouTube was driven by tactics more than strategy. While it was collecting a lot of data from its many services, from email to calendar, such data were kept in separate databases – which made the implant scenario hard to accomplish.

Thus, when Google announced its new privacy policy last year, bringing the data collected through its more than 60 online services under one roof, the move made sense. The obvious reason for doing so is to make individual user profiles even more appealing to advertisers: when Google can track you across all of its services, it can predict what ads to serve you far better than when it tracks you within just one of them.

But there is another reason, of course – and it has to do with the Grand Implant Agenda: the more Google knows about us, the more easily it can predict what we want – or will want in the near future. Google Now, the company’s latest offering, is meant to do just that: by tracking our every email, appointment and social networking activity, it can predict where we need to be, when, and with whom. Perhaps it will even order a car to drive us there – the whole point is to relieve us of active decision-making. The implant future is already here – it’s just not evenly resisted.

This week, data protection authorities from six European countries showed some such resistance when they announced an effort to investigate whether Google’s policy violates their national privacy laws. The announcement follows several months of consultation – preceded by a letter that EU data regulators sent to Mr Page in October – which yielded little response from Google. The letter urged the company to disclose how it processes personal data in each service and to clarify why and how it combines data from its multiple services.

Google believes it met all the formal requirements when it announced the policy back in 2012. Under the current legal regime, Google, even if fined, doesn’t stand to lose much from these investigations. However, if the recent proposal to create a single new EU data regulator, empowered to fine companies up to 2 per cent of their global turnover, goes through, any breaches found could present Google with a bill as high as $1bn – roughly 2 per cent of the $50bn in revenue the company reported for 2012. Even if their investigations fail, European regulators must be applauded for embarking on a mission that their colleagues across the Atlantic wouldn’t even dare contemplate.

Europe, with its unflinching defence of privacy as a fundamental human value, cannot afford to act disjointedly – not at a time when the most powerful company in Silicon Valley is amassing a fleet of self-driving cars and releasing Google Glass, a line of smart glasses that some privacy advocates rightly compare to stylish CCTV cameras that, for reasons unknown, we have agreed to wear on our heads.

Google’s intrusion into the physical world means that, were its privacy policy to stay in place and cover self-driving cars and Google Glass, our internet searches might be linked to our driving routes, while our favourite cat videos might be linked to the actual cats we see in the streets. It also means that everything that Google already knows about us based on our search, email and calendar would enable it to serve us ads linked to the actual physical products and establishments we encounter via Google Glass.

For many this may be a very enticing future. We can have it, but we must also find a way to know – in great detail, not just in summary form – what happens to our data once we share it with Google, and to retain some control over what it can track and for how long.

It would also help if one could drive through the neighbourhood in one of Google’s autonomous vehicles without having to log into Google Plus, the company’s social network, or any other Google service.

The European regulators are not planning to thwart Google’s agenda or nip innovation in the bud. That is an unflattering portrayal that might benefit Google’s lobbying efforts but has no basis in reality. Quite the opposite: it is only by taking full stock of the revolutionary nature of Google’s agenda that we can get the company to act more responsibly towards its users.

Engineering, as the tech historian Ken Alder once put it, “operates on a simple, but radical assumption: that the present is nothing more than the raw material from which to construct a better future”. This might well be the case but not all raw materials are alike; if European history teaches us anything, it’s that some raw materials – and privacy is certainly among them – are worth cherishing and preserving in their own right, even if it means that the much-anticipated future will take somewhat more effort and energy to construct. A revolutionary future built on shaky foundations: to that, we must say a resounding No.

The writer is author of ‘To Save Everything, Click Here: The Folly of Technological Solutionism’

THE ORIGINAL ARTICLE IS HERE