As a follow-up to the previous post - on Tim Wu - I’ve also decided to post another long section from the book - this time on Kevin Kelly. (Wu’s own thoughts on Kelly are here; my even longer review of Kelly’s “What Technology Wants” can be found here). See just what an “ambitious work of tech philosophy” - as Wu calls it in his review - Kelly’s book is!
Against Technological Defeatism
Viewed in the abstract, it may seem that the tides of digital preemption, situational crime prevention, and reputation-based controls are unstoppable and irreversible. Information is everywhere, and it’s getting cheaper. All of us are carrying mobile phones. Technology seems to be moving in accordance with its own law—Moore’s law—and we, the humans, can only conform and tinker with our laws to meet technology’s demands.
This sentiment pervades our public debate about technology. Thus, the Wall Street Journal’s Gordon Crovitz writes that “whatever the mix of good and bad, technology only advances and cannot be put back.” The New York Times’s Nick Bilton, writing of multitasking, notes that “whether it’s good for society or bad … is somewhat irrelevant at this point.” Parag and Ayesha Khanna argue in Hybrid Reality that “the flow of technology is at most slowed by reluctant governments, but it is more accurate to say that technology simply evades or ignores them in search of willing receivers.” All these commentators adopt the stance of what I call “digital defeatism,” which—by arguing that this amorphous and autonomous creature called “Technology” with a capital T has its own agenda—tends to acknowledge implicitly or explicitly that there’s little we humans can do about it.
This view of technology as an autonomous force has its own rather long intellectual pedigree; in 1978 Langdon Winner offered perhaps the best summary in his Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. This view has been debunked hundreds of times as a lazy, unempirical approach to studying technological change, and yet it has never really left the popular discourse about technology. It has recently made a forceful appearance in Kevin Kelly’s What Technology Wants, and Kelly’s thought is not a bad place to observe technological defeatism up close, if only because he is a Silicon Valley maven and the first executive editor of Wired. Besides, very diverse thinkers about “the Internet”—from Tim Wu to Steven Johnson—cite Kelly’s What Technology Wants as an influence. Thus, it won’t be such a great stretch to say that Kelly’s theories do provide the intellectual grounds on which Internet-centrism grows and flourishes.
The defining feature of Kelly’s thought is its explicit denial of its own defeatism. Kelly, using a fancy word, “technium,” as a stand-in for “Technology” with a capital T, reassures his readers that “the technium wants what we design it to want and what we try to direct it to do.” This sounds like a rather uplifting, humanist message—but the very next sentence shatters it: “But in addition to those drives, the technium has its own wants. It wants to sort itself out, to self-assemble into hierarchical levels, just as most large, deeply interconnected systems do. The technium also wants what every living system wants: to perpetuate itself, to keep itself going. And as it grows, those inherent wants are gaining in complexity and force.”
Kelly offers the best of all possible worlds: technology is both what we make of it and an autonomous force with its own wants and desires, largely independent of humans. Kelly’s thought is full of such doublespeak, by which we are simultaneously promised control over technology and assured that we need no such control because it’s too late. Thus, he can write that “our concern should not be about whether to embrace [technology]. We are beyond embrace; we are already symbiotic with it,” only to follow with “and most of the time, after we’ve weighed downsides and upsides in the balance of our experience, we find that technology offers a greater benefit, but not by much. In other words, we freely choose to embrace it—and pay the price.” So we get both mysticism—we are symbiotic with technology; we’ve already embraced it!—and radical empowerment—whenever we embrace technology, it’s because we want to!—which is a rather odd combination.
But, promises Kelly, none of this actually matters, because technology wants the same things as evolution, for technology is just evolution by other means. Thus, he notes that “with minor differences, the evolution of the technium—the organism of ideas—mimics the evolution of genetic organisms.” Technology is nature, and nature is technology; resistance is futile—who would want to challenge nature? With this simple insight, Kelly develops a whole theory that can explain literally every development—from malware like Stuxnet to Google glasses—by claiming that this is just what technology wants.
All we have to do is to develop the right listening tools—and the rest will follow. Hence, notes Kelly, “only by listening to technology’s story, divining its tendencies and biases, and tracing its current direction can we hope to solve our personal puzzles.” Elsewhere, he writes, “We can choose to modify our legal and political and economic assumptions to meet the ordained [technological] trajectories ahead. But we cannot escape from them.” So, what he is saying here is this: technology has a story to tell; we should listen to it and modify our political and economic assumptions accordingly.
But why, one might ask, should we modify our political and economic assumptions if we can instead shape those trajectories? What if they are not ordained? Why alter our conception of privacy if we can regulate Facebook and Google? Why accept the proliferation of measures inspired by situational crime prevention and digital preemption everywhere if we can instead limit them only to instances in which they do not undermine dissent and deliberation? And how far should we go in modifying our assumptions? What if the voice of technology that Kelly pretends to hear is actually the marketing speak of Silicon Valley’s public relations departments? Kelly doesn’t bother with such questions; instead, he succumbs to the pro-innovation bias and declares that no meme should ever go to waste: “The first response to a new idea should be to immediately try it out. And to keep trying it out, and testing it, as long as it exists.” Do you hear that, land mines?
Concerns over distribution never appear in Kelly’s analysis. Instead of discussing who should get to play the proverbial Aristotelian flute—the rich? the talented? the random?—Kelly imagines that technology will simply produce enough flutes so that questions of distribution will themselves become obsolete. Like Peter Diamandis, Kelly depicts a world in which technology will guarantee abundance, and abundance will make conflicts over resources unnecessary. This seems a rather shallow reading of human nature, for when everyone has a flute, some people will certainly want two, if only to stand out from their neighbors. Abundance in the absence of robust political institutions means little.
What’s most disturbing about Kelly’s ideas—and here he’s quite representative of many other technology pundits—is that he thinks beyond local communities and even nation-states. His playing field is the whole of humanity, the entire cosmos. It’s a philosophy best described as macroscopism: everything is analyzed based on how well it fulfills the needs of humanity as a whole. Thus, local communities that choose to restrict certain technologies or prohibit them outright are portrayed as essentially stealing something from humanity. By the same logic, Europeans are holding back possibilities for all of us because they regulate genetically modified food or have tougher environmental standards. It’s one of those cases in which the vacuity of rhetoric surrounding global justice empties existing local practices of any meaning and space for maneuver.
This is most pronounced in Kelly’s discussion of the Amish and their notoriously limited—some might say well-thought-out—use of technology. What bothers Kelly about the Amish is that, by refusing to use certain technologies, they are actually slowing down innovation everywhere: “By constraining the suite of acceptable occupations and narrowing education, the Amish are holding back possibilities not just for their children but indirectly for all.” The idea never occurs to Kelly that political communities might be entitled to self-determination and that, as long as they arrive at some restrictions on technology in a democratic fashion—alas, this is not always the case with the Amish—it might actually be good for humanity. Instead of criticizing the undemocratic means, he is only concerned with the ends.
Likewise, when discussing restrictions on technology, Kelly views all of them as ineffective, even harmful. “If we take a global view of technology, prohibition seems very ephemeral. While an item may be banned in one place, it will thrive in another.” He continues, “In a global marketplace, nothing is eliminated. Where a technology is banned locally, it slips away to pool somewhere else on the globe.” But why should we take a global view of technology when we live in a world where technology is regulated by local communities? A certain technology might disappear in one place but appear in another because, in the former case, the community deemed it unacceptable and was powerful enough to enforce the ban, while in the latter case, the community either embraced the technology of its own will or was simply too weak or corrupt to resist the marketing talk of whoever came pitching.
The problem with Kelly’s thought is that, while nominally about technology, it’s actually deeply political; what’s worse, it traffics in rather obnoxious politics. No one liked the idea that technology is just an extension of nature more than the Nazis (well, at least before the possibility of defeat forced them into a more pragmatic mode). Here is Kelly on nature and technology: “Technology’s dominance ultimately stems … from its origin in the same self-organization that brought galaxies, planets, life, and minds into existence.” Or consider this passage: “We tend to isolate manufactured technology from nature, even to the point of thinking of it as anti-nature, only because it has grown to rival the impact and power of its home. But in its origins and fundamentals, a tool is as natural as our life.” Now compare Kelly’s proclamations with philosophizing by the Nazi technology functionary Fritz Todt: “It would be paradoxical if the works of technology stood in contradiction to nature in their outward expression since the real essence of technology is a consequence of the laws of nature… The works of technology must be erected in harmony with nature; they may not be permitted to come into conflict with nature as thoughtless, egotistical measures.” The Nazis heard the voice of technology: it informed them about gas chambers.
Likewise, the laissez-faire part of Kelly’s thought comes directly from Ayn Rand, even though he doesn’t acknowledge the connection.
Rand’s name rarely comes up in the context of technology theory, but she did write one essay, “The New Anti-Industrial Revolution,” that addressed the subject of technology regulation head-on. The crux of Rand’s argument can be boiled down to one pithy saying: “A ‘restricted’ technology is the equivalent of a censored mind.” Thus, Rand writes, in the best tradition of macroscopism, that “restrictions [on technology] mean the attempt to regulate the unknown, to limit the unborn, to set rules for the undiscovered.” Because we never know what new innovation a technology regulation might thwart, we should never attempt it in the first place. “Who can predict when, where or how a given bit of information will strike an active mind and what it will produce?” wonders Rand before warning that the “ecological crusade” would rid us of our toothbrushes, and “computers programmed by a bunch of hippies” (she actually wrote that—in 1971!) would retard human progress. By this logic, societies should not restrict the use of biological weapons or asbestos because we don’t know what good might come of them.
To support the idea that technologies—and now “the Internet”—develop in accordance with their own rules, Kelly and other pundits usually invoke Moore’s law. For Kelly, “the curve [behind Moore’s law] is one way the technium speaks to us.” The idea that Moore’s law is akin to a natural law is widespread in Silicon Valley—it’s one of the original myths of Ray Kurzweil’s singularity movement—and it has long spread beyond the technology industry, frequently invoked to justify some course of action.
There are few empirically rigorous studies of Moore’s law, but Finnish innovation scholar Ilkka Tuomi has done perhaps the most impressive work, digging up industry data, calculating actual growth rates, and tracking various expressions and references to Moore’s law in the media. Tuomi’s conclusion? “Strictly speaking there is no such Law. Most discussions that quote Moore’s Law are historically inaccurate and extend its scope far beyond available empirical evidence,” he writes. Furthermore, notes Tuomi, “sociologically Moore’s Law is a fascinating case of how myths are manufactured in the modern society and how such myths rapidly propagate into scientific articles, speeches of leading industrialists, and government policy reports around the world.”
In its original 1965 formulation by Intel cofounder Gordon Moore, the law stated that the number of components on chips with the smallest manufacturing costs per component would double roughly every twelve months. Ten years later Moore significantly revised his estimates, updating the growth rate to twenty-four months. But he also changed what was being measured. Thus, writes Tuomi, while still counting the number of components on semiconductor chips, Moore now no longer focused on optimal-cost circuits but rather mapped the evolution of the maximum complexity of existing chips. In 1979 he revised the law yet again. The industry, in the meantime, took his law to mean whatever it wanted, even embracing a different time estimate of eighteen months. As most media reports will attest, many still believe that eighteen months is what Moore said—even Intel’s site used to claim this—but Moore never said any such thing, and he is usually the first to point it out (“I never said eighteen months. I said one year and then two years.”).
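The distinction between these formulations is not pedantry: over a decade, the choice of doubling period changes the projection by more than an order of magnitude. Here is a minimal sketch (my illustration, not from the book, and assuming Moore's oft-cited 1965 baseline of roughly 64 components per chip) projecting ten years of growth under each of the three doubling periods:

```python
def projected_components(start: int, years: float, doubling_months: float) -> int:
    """Project a quantity that doubles every `doubling_months` months."""
    doublings = years * 12 / doubling_months
    return round(start * 2 ** doublings)

# Baseline of ~64 components per chip is taken from Moore's 1965 article.
start = 64
for months, label in [(12, "1965 formulation"),
                      (18, "industry folklore"),
                      (24, "1975 revision")]:
    count = projected_components(start, years=10, doubling_months=months)
    print(f"{label} ({months} mo): {count:,} components after 10 years")
```

Twelve-month doubling turns 64 components into 65,536 in a decade; twenty-four-month doubling yields only 2,048. A "law" whose very growth rate has been quoted this loosely is doing rhetorical rather than predictive work.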
By analyzing the actual growth rates, Tuomi found that while the semiconductor industry was experiencing significant growth, it was anything but neat and exponential. The growth in the 1970s exhibited different patterns from that in the 1980s; growth patterns in the 1990s differed again. There was even more diversity across individual microprocessors. To question Moore’s law, then, is not to deny that important changes have happened over the last five decades but only to see how well those changes fit a singular pattern that a “law” predicts. As Tuomi points out, Moore’s law has always been about the future, not about the past; historical accuracy has never really bothered the semiconductor industry.
One intriguing interpretation of Tuomi’s work is that the semiconductor industry greatly benefited from the rhetoric surrounding Moore’s law, for it promised ever-cheaper semiconductors and helped ease concerns about where they would actually be used, thus boosting the initially weak demand for the industry’s products. In retrospect, this may have been for the better. “The industry has been continuously falling forward, hoping that Moore’s Law will hold, economically save the day, and justify the belief in technical progress,” notes Tuomi. “Instead of filling a market need, the semiconductor industry has actively and aggressively created markets.” But we shouldn’t mistake the clever marketing and rhetorical tricks of the semiconductor and computer industries for divine laws that inform us about the future.
A concept like Moore’s law doesn’t just fall from the sky; nor does it stay around for so long simply because of its accuracy (which, at any rate, isn’t great). Instead of postulating that technology speaks to us through Moore’s law, why not study who else—perhaps Intel?—might be doing the talking? That this “what technology wants” kind of discourse allows technology companies to present their business strategies as a natural unfolding of history is not something we should treat lightly. Technology wants nothing—and neither does “the Internet.”