I've seen a few links to this:
The rise of fast processors and cheap storage means that remembering, once incredibly difficult for humans, has become simple. Viktor Mayer-Schönberger, a professor in Harvard's JFK School of Government, argues that this shift has been bad for society, and he calls instead for a new era of "forgetfulness."
This notion is not new, of course: Bowker and Star discuss "organizational forgetting" in their 2000 book Sorting Things Out: Classification and Its Consequences. But Mayer-Schönberger takes the argument further, according to the gloss on Ars Technica:
Why would we want our machines to "forget"? Mayer-Schönberger suggests that we are creating a Benthamist panopticon by archiving so many bits of knowledge for so long. The accumulated weight of stored Google searches, thousands of family photographs, millions of books, credit bureau information, air travel reservations, massive government databases, archived e-mail, etc., can actually be a detriment to speech and action, he argues.
"If whatever we do can be held against us years later, if all our impulsive comments are preserved, they can easily be combined into a composite picture of ourselves," he writes in the paper. "Afraid how our words and actions may be perceived years later and taken out of context, the lack of forgetting may prompt us to speak less freely and openly."
In other words, it threatens to make us all politicians.
I've been talking about this same phenomenon for a few years, including in my plenary session at Computers and Writing 2006, but I've tried to differentiate between the panopticon and what I've called the agora, the state in which anyone can review anyone else's actions. Note that some people are embracing the idea of reunifying the diverse facets of their online identities through things such as lifestreams, and these include things more intrusive and personal than what you might find in Google searches: software usage through Wakoopa, music choices through last.fm, browsing through Clutzr. That is, a segment of the online population does not fear the consequences of archiving knowledge; they actively seek and embrace it as a way of textualizing and preserving their identities. They don't want the sort of forgetting that Mayer-Schönberger describes. They want the ability to create what Latour calls "oligopticons," narrow but detailed views of specific experiences that are open to multiple people.
The panopticon is an easy metaphor here, but it is not the appropriate metaphor.
With that in mind, Mayer-Schönberger's solution seems ham-fisted:
In contrast to omnibus data protection legislation, Mayer-Schönberger proposes a combination of law and software to ensure that most data is "forgotten" by default. A law would decree that "those who create software that collects and stores data build into their code not only the ability to forget with time, but make such forgetting the default." Essentially, this means that all collected data is tagged with a new piece of metadata that defines when the information should expire.
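To make the mechanism concrete, here's a rough sketch of what expiry metadata and automated "forgetting" might look like. This is my own illustration, not code from the paper; the record fields, the helper names, and the 90-day window are all placeholders.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

# Illustrative only: a stored record tagged with the kind of expiry
# meta-data Mayer-Schönberger describes.
@dataclass
class Record:
    content: str
    created: datetime
    expires: Optional[datetime]  # None would mean "keep forever" (the opt-out)

def forget_expired(records: List[Record], now: Optional[datetime] = None) -> List[Record]:
    # Keep only records with no expiry date or an expiry date still in the
    # future; everything else is "forgotten" by default.
    now = now or datetime.utcnow()
    return [r for r in records if r.expires is None or r.expires > now]

# Example: a search-log entry that forgets itself after 90 days.
entry = Record(
    content="search: cheap flights to Vienna",
    created=datetime.utcnow(),
    expires=datetime.utcnow() + timedelta(days=90),
)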
The law -- especially the kind of law proposed here -- is too blunt an instrument. On the one hand, this is not going to be perceived as a benefit by those lifestreamers and others who have embraced the notion of the oligopticon. Yes, it provides an opt-out clause for those who want to preserve their data, but it assumes that the default should be forgetting -- an assumption I would question, especially after reviewing the lifestreaming literature.
On the other hand, the law is almost guaranteed not to work well. It layers requirements that could be difficult to interpret and implement on top of the already considerable requirements service providers face. It imposes an additional burden on service providers operating lawfully in countries that have embraced this law, and it does nothing to address countries that have not.
My prediction: This notion may gain some traction, but not for long. It will be fought by most of the industry and opposed, at least passively, by a significant minority of users located in democracies. Instead, people in democracies will develop new strategies to manage their online personalities, perhaps becoming "politicians," perhaps developing plausible deniability. Those unfortunate enough to live in totalitarian countries will have to develop alternate strategies for navigating this landscape.
Escaping the data panopticon: Prof says computers must learn to "forget"
technorati tags: panopticon, agora, oligopticon
4 comments:
I appreciate your comment on my proposal. I'd like to encourage you to read the paper, though, before critiquing it based on what you have read second hand. It may clear up some of the confusion reflected in your comment.
Kind regards,
VMS
Thanks for the comment. As you inferred, I relied on Ars Technica's gloss to characterize the paper. I have now read the paper through.
Could you be more specific about the confusion you see in my post, though? I certainly think your proposal is less intrusive than omnibus legislation, but I still think the proposal to make forgetting the default is too blunt an instrument.
In the paper, you argue:
I propose that we shift the default when storing personal information back to where it has been for millennia, from remembering forever to forgetting over time. I suggest that we achieve this reversal with a combination of law and software. The primary role of law in my proposal is to mandate that those who create software that collects and stores data build into their code not only the ability to forget with time, but make such forgetting the default. The technical principle is similarly simple: Data is associated with meta-data that defines how long the underlying personal information ought to be stored. Once data has reached its expiry date, it will be deleted automatically by software, by Lessig's West Coast Code.
Certainly the proposal is simple, but it is also quite sweeping.
On the legal end, it would face multiple requests for exemptions from services whose whole point is to obviate forgetting (e.g., Microsoft's MyLifeBits project, Dandelife, iStalkr, and other sites that aim to provide a comprehensive, durable record of individuals' lives). Because it is sweeping, it would also face severe enforcement issues, requiring a large investment in people and equipment if it is to have any teeth.
On the software end, this simple proposal requires a remediation effort on the scale of the Y2K efforts of 1999. Associating timestamps with information will be easy for many segments, but nontrivial for others. Certainly the human oversight of "forgetting" would be nontrivial at first, though it would be primarily automated over time.
Lastly, on the social end, the effort strikes me as trying to ungrind the hamburger. Phenomena such as lifestreaming, blogs, and social photo and music sharing are often valued because of, not despite, the permanent record. They represent the ongoing construction of the individual's identity, a textual construction that is motivated in part by the increasing distribution of work, civic, and leisure lives across spatial and temporal boundaries. Again, this phenomenon's extreme edge is represented by projects such as MyLifeBits, but there are plenty of more moderate users who expect their data to exist in perpetuity.
Bottom line, I appreciate the clarification, but I still don't believe this proposal constitutes a workable policy.
Sure. Let me respond to the three dimensions you suggest:
(1) Legally, it is straightforward: when users agree to have their records kept for a long period of time, then that is fine. I assume that this would be part of the terms of some of the persistent identity agreements. I would still suggest that data have expiry dates, even if they are far into the future, rather than having none at all, but that is a detail.
(2) On the software end, the implementation is relatively straightforward, given that the main operating systems we use today are all quite capable of working with a plethora of file meta-data; an expiry date would just be another small addition to the meta-data zoo. (And to clean out expired data, in principle all one would have to do is run a simple cron job or equivalent, say, once a day.)
(3) Socially, my sense is that you see culture too much as the pre-determined result of the artifacts that surround us and their possibilities, yet you seem to underestimate the fact that these artifacts, insofar as they are driven by software, are extremely plastic and can be formed in many different ways. Different software behavior may prompt different uses.
Moreover, I do not propose that we need perfect or almost perfect enforcement. Shifting the default back is good enough.
And yes, forgetting may be a blunt instrument; but it always has been.
Best,
VMS
Hmm. All right, thanks again for your comments and the time you have spent in clarifying the discussion.