By Philip E. Agre
A while back, I reviewed a manuscript for Cambridge and they gave me a ton of books in lieu of payment. Well, this is the last of those books. It's been sitting on the shelf for a while, just waiting for me to get to it. I finally did, and I'm glad of it.
The book is in Cambridge's Learning in Doing series, which also contains classics such as Suchman's Plans and Situated Actions and Chaiklin and Lave's Understanding Practice. Like those books, this one takes a sociocultural perspective on human activity. But unlike them, it uses a philosophical framework to examine artificial intelligence research. And it's not an outside perspective: Agre did his dissertation work at MIT's Artificial Intelligence Laboratory and wrote AI programs as well as analyzing the metaphors and arguments active in the AI community.
Traversing these two worlds is not easy, since they involve very different concepts and vocabularies, and I was fascinated by how Agre frames the book in Chapter 1:
As the successive chapters of this book lay out some technical exercises of my own, the attentive reader will be able to draw up an extensive intellectual indictment of them, consisting of all the bogus assumptions that were required to put forth some proposal for evaluation. But the point of those intellectual exercises does not lie in their detailed empirical adequacy, or in their practical applicability; they do not provide canned techniques to take down from a shelf and apply in other cases. Instead, each exercise should be understood in the past tense as a case study, an attempt in good faith to evolve technical practice toward new ways of exploring human life. What matters, again, is the process. I hope simply to illustrate a kind of research, a way of learning through critical reflection on computational modeling projects. (p.16)
I initially read this passage as proactive damage control, but in retrospect I don't think Agre is being defensive; he's just trying to head off a misinterpretation so that readers can get more out of the book. In any case, seeing the book as a series of developing case studies is really instructive. At different times, the book "speaks" to philosophers and to AI researchers, and those of us in a third position sometimes feel as if we are listening in on someone else's conversation. But what an interesting conversation it turns out to be. Agre discusses common metaphors in AI in ways that explain the field to philosophers while helping AI researchers reexamine their assumptions -- and he takes care to show philosophers what the AI project is and why its problems are so narrowly defined and scoped.
This conversation between philosophy and AI is distilled in one memorable line: "Technology at present is covert philosophy; the point is to make it openly philosophical" (p.240).
Computation and Human Experience is, unfortunately, fairly narrow in scope itself. It should appeal to those who want to understand artificial intelligence and, more broadly, software development from a sociocultural or interactionist perspective. And the discussion of metaphors, although familiar to rhetoricians, is still worthwhile as a contribution to a broader rhetoric of technology. But the book's lessons are hard to generalize beyond those audiences. Nevertheless, it's an interesting read and a useful introduction to both interactionism and AI.