Wednesday, October 17, 2018

Reading :: Mind over Machine

Mind Over Machine
By Hubert L. Dreyfus and Stuart E. Dreyfus


I've been rereading some of the mid-1980s work that introduced activity theory to the West, specifically paying attention to how it was positioned as an alternative to information-processing cognitive psychology (IPCP). Among the works heavily cited at the time is this 1986 book, which asks: why hasn't artificial intelligence (AI) yielded the results that were predicted in the 1950s and 1960s?

The 1950s and 1960s were a fertile time for early AI research, with Newell and Simon working on cognitive simulation at RAND (pp.6-7). Also at RAND was Stuart Dreyfus (henceforth SD), who was programming JOHNNIAC. Hubert Dreyfus (HD), a philosopher at MIT, expressed misgivings about AI to his brother and was in short order hired as a consultant for RAND in 1964 (p.5).

HD recognized that AI researchers were animated by the "continuum hypothesis": they believed that they were making the first steps, and if they continued, they would make steady progress. In contrast, HD saw a pattern in which AI researchers would solve a simple problem, consider it a first step to more complex problems, then encounter "failure when confronted with the more intuitive forms of intelligence" (p.7). His observations were not greeted with enthusiasm (p.8), but constituted the first detailed criticism of AI. As SD says later on, "Current claims and hopes for making progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon" (qtd. on p.10).

To explore the contrast between AI approaches and human expertise, the authors distinguish between "know-how" and "know-that" knowledge—i.e., tacit and embodied knowledge vs. explicit knowledge (p.16). They propose a model with five steps to expertise:

  1. Novice
  2. Advanced beginner
  3. Competence
  4. Proficiency
  5. Expertise (Ch.1).
In the early stages, formalized or explicit knowledge is critical. These stages also represent the areas in which AI is most suited to assist, since AI excels at processing formal knowledge. But in the later stages, what is required is intuition: "Intuition or know-how, as we understand it, is neither wild guessing nor supernatural inspiration, but the sort of ability we all use all the time as we go about our everyday tasks" (p.29, their emphasis). See Klein for more on intuition in this vein. And like Klein, the authors state that "When things are proceeding normally, experts don't solve problems and don't make decisions; they just do what normally works" (pp.30-31, their emphasis). Put another way: "Competent performance is rational; experts act arationally" (p.36). In this sense, computers are "ideal beginners" (p.63).
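
How would novice-stage knowledge cash out in code? Here's a minimal sketch in Python (my illustration, not the authors'; the book uses driving examples along these lines, but the rules and thresholds below are hypothetical) of the context-free, "know-that" rules that novices are given and that computers execute so well:

    # Novice-stage "know-that": explicit, context-free rules of the sort
    # instructors give beginners -- and the sort computers excel at.
    # The thresholds are made up for illustration.
    def novice_gear_advice(speed_mph: float, engine_rpm: float) -> str:
        if engine_rpm > 3000:
            return "shift up"
        if engine_rpm < 1500 and speed_mph > 10:
            return "shift down"
        return "hold gear"

    print(novice_gear_advice(speed_mph=35, engine_rpm=3400))  # -> shift up

On the Dreyfus account, the expert driver consults no such rules: she responds to the situation as a whole, arationally. That responsiveness is exactly what resists being written down as code—and why AI of the era stalled at the later stages.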

There's more to the book, but let's stop here, because this criticism and this model constitute the enduring legacy of the book. If you're interested in the historical development of AI and understandings of expertise, or in a model of expertise, definitely pick up this book.

Reading :: The Extended Mind

The Extended Mind
Edited by Richard Menary


Don't confuse this book with the one I recently reviewed with the same title by Robert K. Logan. This one is a collection of essays about the extended mind hypothesis, famously discussed in an article by Clark and Chalmers and more recently elaborated in Andy Clark's book Supersizing the Mind.

This collection includes an introduction by the editor; Clark and Chalmers' original essay; and discussions by various respondents interspersed with Clark's own rejoinders. Note that the original essay was also reprinted in Supersizing the Mind and Clark also repurposed some of his rejoinders there, so if you read both books, you'll find a lot of overlapping material.

In the interest of time, I'll just highlight a few things that do not overlap.

In a massive Ch.6, "The varieties of externalism," Susan Hurley situates the extended mind hypothesis among other strands of externalist thought. You can read the entire chapter, but see Table 6.3 on p.144, which summarizes the different strands of externalism in a four-field matrix whose axes are "content/quality" and "what/how."

In Ch.8, "Meaning making and the mind of the externalist," Robert A. Wilson also discusses variants of externalism. Wilson notes that "active cognition arguments" have been with us a while; he categorizes Vygotsky and Luria's mediational approach as such an argument (p.172). "These arguments all focus on determinate forms of a particular cognitive ability (e.g., memory, attention, problem solving) as they are exercised by individual agents. They view the integration of individuals with both their biological and artificial environments as critical to their status as cognitive agents with these particular capabilities" (p.172). In contrast, he characterizes Clark and Chalmers' argument as a "cyborg fantasy" argument (p.173, his emphasis). Such arguments "proceed by introducing an imaginative example in which an individual's performance is mediated by external forms of technology, typically arguing, through a comparison with cases in which the same kind of activity is performed without such mediation, to the conclusion that the boundary between what's inside the head and what is in the environment is irrelevant to whether a given agent has some particular cognitive capacity" (p.173). And "the chief aim of cyborg fantasy arguments has been to establish the extended mind as a conceptual default; they do so by shifting the burden of proof to internalists, challenging them to identify why the skin should be a relevant boundary for cognition at all" (p.173).

Overall, this collection was an interesting shakedown of the extended mind concept. Unlike Supersizing the Mind, it included different perspectives on and criticisms of the concept, and I especially appreciated Wilson's contrast with mediation. At the same time, many of the authors don't write as engagingly as Clark. Nevertheless, if you've become interested in the extended mind concept, I suggest starting here.

Thursday, September 13, 2018

(The regular production of innovation)

Recently I was asked to put together three videos and a webinar for graduate students at the program "Learning in and for the New Working Life" (LINA) at University West in Trollhättan, Sweden. We had a really enjoyable session (or at least I had a great time and I hope they did too).

The topic was Cultural-Historical Activity Theory and Work-Integrated Learning. They asked me to discuss my research approach and specifically one of my case studies using activity theory.

Now that the webinar is over, I have decided to release these videos into the wild. These all draw from my recent case studies on entrepreneurship training, and each focuses on a different aspect of activity theory applied to these cases. If you've been unsure about how to apply AT to an actual living, breathing case, I hope these videos will help:

Researching the regular production of innovation

I may not monitor comments closely on the videos themselves, but I always see the ones on the blog. If these are useful, please don't hesitate to share them!

Wednesday, September 12, 2018

Reading :: Supersizing the Mind

Supersizing the Mind: Embodiment, Action, and Cognitive Extension
By Andy Clark


I'm pretty sure I picked this 2008 book up several years ago with the intention of reading and reviewing it, but somewhere along the way, I put it down again. Now I can see why: It's really best read as a lengthy reply to critics of the original Clark and Chalmers (1998) article, which proposed the extended mind concept 20 years earlier. As such, it's an intimidating book for an outsider to pick up and read: the author discusses a broad range of psychological phenomena (perception, processing, sensorimotor dependencies) and objections involving each, including alternate readings. For someone like me, there's a lot of inside baseball.

However, I stuck with it this time and found some parts to be general enough to be useful to me. Among other things, it helped me to better and more systematically understand the principled objections to the extended mind thesis—a thesis with which I have felt more or less comfortable ever since Hutchins' Cognition in the Wild (1995).

In the Introduction, Clark contrasts two views of the mind: "BRAINBOUND" and "EXTENDED." To kick off the discussion, Clark quotes physicist Richard Feynman discussing some of his papers with historian Charles Weiner. Weiner characterizes the papers as a "record" of the work done in Feynman's head, and Feynman objects: no, the paper is the work (p.xxv). Clark claims:
Such considerations of parity [between in-head and out-of-head computation], once we put our bioprejudices aside, reveal the outward loop as a functional part of an extended cognitive machine. Such body- and world-involving cycles are best understood, or so I shall argue, as quite literally extending the machinery of mind out into the world—as building extended cognitive circuits that are themselves the minimal material bases for important aspects of human thought and reason. Such cycles supersize the mind. (p.xxvi)
That is, embodied cognition arises from bodily interaction with the world (p.xxvi).

Clark characterizes two extremes to understandings of the mind: (1) BRAINBOUND, in which "all human cognition depends on neural activity alone," the brain is the mind, and the rest of the body provides merely sensory input (p.xxvii); and (2) EXTENDED, in which "the actual local operations that realize certain forms of human cognizing include inextricable tangles of feedback, feed-forward, and feed-around loops: loops that promiscuously criss-cross the boundaries of brain, body, and world. The local mechanisms of mind, if this is correct, are not all in the head. Cognition leaks out into body or world" (p.xxvii).

With this succinct characterization of the two views, let's get into the body of the book.

Ch.1, "The Active Body," explores the consequences of being embodied. According to the "Principle of Ecological Assembly (PEA)," "the canny cognizer tends to recruit, on the spot, whatever mix of problem-solving resources will yield an acceptable result with a minimum of effort" (p.13, his emphasis). The explanatory strategy of "distributed functional decomposition (DTD)" (p.13) states that "it is the roles played by various elements, and not the way those elements are realized, that do the explanatory work" (p.14)—that is, when looking at cognition, we should identify chunks of computation, no matter where they take place (e.g., whether someone holds a figure in memory or writes it on paper). And finally, Clark invites us to think of sensing as coupling: people often "use the sensor as an open conduit allowing environmental magnitudes to exert a constant influence on behavior" (p.16, his emphasis). That is, "The embodied agent is empowered to use active sensing and perceptual coupling in ways that simplify neural problem solving by making the most of environmental opportunities and information freely available in the optic array" (p.17, still his emphasis).

With these principles in mind, Ch.2, "The Negotiable Body," explores the author's assertion that "human minds and bodies are essentially open to episodes of deep and transformative restructuring in which new equipment (both physical and 'mental') can become quite literally incorporated into the thinking and acting systems that we identify as our minds and bodies" (pp.30-31). He of course brings up the illustration of the blind man and the stick, an illustration that has been used by Merleau-Ponty, Gibson, and many others (p.31). Clark acknowledges the obvious criticism (is the tool used as a tool or actually incorporated?) (p.37) and uses research into mirror neurons to argue that, functionally, it is the latter (p.38). He concludes that "our own embodied activity enacts or brings forth new systemic wholes" (p.39, his emphasis, of course).

In Ch.3, "Material Symbols," Clark explores language as cognitive scaffolding with three effects:

  • "First, the simple act of labeling the world opens up a variety of new computational opportunities and supports the discovery of increasingly abstract patterns in nature."
  • "Second, encountering or recalling structured sentences supports the development of otherwise unattainable kinds of expertise."
  • "third, linguistic structures contribute to ... our ability to reflect on our own thoughts and characters and our limited but genuine capacity to control and guide the shape and contents of our own thinking." (p.44)
Labeling, he says, allows us to "cheaply and open-endedly project new groupings and structures onto a perceived scene" (p.46). 

Structured language, he says, has been explored in many literatures, including the Vygotskian (p.47). He's particularly interested in "the role of linguistic rehearsal in expert performance itself," i.e., how it "enables experts to temporarily alter their own focus of attention, thus fine-tuning the patterns of inputs that are to be processed by fast, fluent, highly trained subpersonal resources" (p.48). 

In Ch.4, he examines "niche construction," in which an organism alters its environment "in ways that may sometimes change the fitness landscape of the animal itself" (p.61). Niche construction can include "the cultural transmission of knowledge and practices resulting from individual lifetime learning, when combined with the physical persistence of artifacts" (p.62; compare Cole and Gajdamaschko's chapter "Vygotsky and Culture" for a Vygotskian understanding of this view). Ultimately, what matters is "the way niche-construction activity leads to new feedback cycles" (p.62). Niche construction involves space (p.64; think of how people do paperwork by arranging stacks, for instance). Clark notes that Sterelny (2003) dubs humans as "epistemic engineers" who "engineer their own habitats, and ... these are transmitted to the next generation, who further modify the habitat," including the "epistemic environment," modifications that "affect the informational structures and opportunities presented to each subsequent generation" (p.66). Clark compares epistemic and pragmatic actions: Pragmatic actions advance toward a goal, while epistemic actions extract or uncover information (p.71). Humans do both: "the human agent, one might say, is nature's expert at becoming expert" (p.75).

The machinery of mind, then, is not just biomachinery: external traces are among the physical vehicles of beliefs (p.76). Clark and Chalmers (1998) offered the Parity Principle: rather than starting with the assumption that cognition is in the head, they identify functions that, if they were to be implemented in the head, would count as cognitive functions. If those functions happen outside the head, Clark and Chalmers say, they should still be counted as cognitive (p.77). 

So ends Section I. In Section II, Clark addresses criticisms of the argument.

Ch.5, "Mind Re-bound?", acknowledges a criticism by Adams and Aizawa: "The fallacy is to move from the causal coupling of some object or process to some cognitive agent to the conclusion that the object or process is part of the cognitive agent or part of the agent's cognitive processing" (p.86—Clark's summary of the argument). 

Ch.6, "The Cure for Cognitive Hiccups," acknowledges Rupert's (2004, 2006) criticism that "cognitive processes lean heavily on environmental structures and scaffoldings but do not thereby include those structures and scaffoldings themselves" (p.111). 

Ch.7, "Rediscovering the Brain," acknowledges a criticism from a variety of quarters: that of "the asymmetric relationship that obtains between the organism and its props and aids. Subtract the props and aids, they argue, and the organism may create replacements. But subtract the organism, and all cognitive activity ceases" (p.162). Clark counters: "it is true that the key microlocus of plasticity is the individual human brain. It is the brain's great plasticity and thirst for cheap, outsourced labor that drives the distributed engines of sociotechnological adaptation and change. It is true, too, that by subtracting [the brain], the whole process grinds to a standstill. ... But it by no means follows, from the fact that [brains] are in that way lopsidedly essential to all this, that the rest of the hybrid, distributed circuitry is not part of the mechanistic base for specific episodes of cognitive processing" (p.162). 

In Ch.9, Clark summarizes his argument. He argues that embodiment matters to cognition in three ways:
  • "spreading the load"
  • "self-structuring of information"
  • "supporting extended cognition" (p.196). 
He goes on to sketch out theses for supporting a science of the extended mind. 

In all, the book grapples with the objections to, and implications of, understanding cognition as extending beyond the brain. Yet, as I found on my first attempt, these discussions are really focused on developing the above-mentioned science of the extended mind—that is, they go deeply into the philosophy of psychology. Since I'm neither a philosopher nor a psychologist, I found these aspects of the book to be less interesting (although I recognize their value). 

If you have become interested in extended cognition or kindred views in this area (situated cognition, distributed cognition), this book is a great way to think more deeply about the issues involved and the objections leveled against such views. Notice that it takes as given the notion that cognition is computation, something that Hutchins (1995) also takes as his starting point, but that situated cognitionists such as Lave (I think) would not concede. Check it out.

Wednesday, September 05, 2018

Reading :: On Justification

On Justification: Economies of Worth
By Luc Boltanski and Laurent Thevenot


This book was published in French in 1991 and in English in 2006. Here, a sociologist (Boltanski) and an economist (Thevenot) examine the question: What frameworks do people use to understand the social world? Where do these frameworks conflict, and when they do, how do these conflicts manifest (p.9)? And they approach it with an interestingly rhetorical tool: How do people justify things (p.9)? That is, when someone addresses a dispute, what arguments do they use, and what assumed frameworks—specifically, political philosophies—do those arguments rest upon (p.13)? What sense of injustice do people feel when forms of justification are imposed on situations to which they don't apply? What problems arise when arguments are based on "illegitimate" (i.e., incompatible) values (p.15)?

To borrow a metaphor from Vygotsky, the authors propose to go fishing. The phenomenon in question—the frameworks on which people draw—is not directly observable. But the justifications they provide allow us to identify the contours of these frameworks. Importantly, these frameworks are not attached to collectives (e.g., discourse communities) but to situations (p.16). People confront uncertainty by using objects to establish orders, and they consolidate objects by attaching them to these orders (p.17). The approach allows us to examine these objects and orders, and it also allows us to understand organizations as composite assemblages from different worlds, ones that can tolerate situations of different natures and that can deploy a plurality of mechanisms from different worlds (p.18). Through a bilevel construction between people and generality (pp.18-19), the authors examine how the two relate. As the authors succinctly tell us, the book's "primary aim is to build a framework within which a single set of theoretical instruments and methods can be used to analyze the critical operations that people carry out when they want to show their disagreement without resorting to violence, and the ways they construct, display, and conclude more or less lasting agreements" (p.25).

This setup is exciting. The project opens up a way for us to talk about heterogeneous logics in organizations, something that is also discussed in places such as Latour's An Inquiry into Modes of Existence (2013), Heckscher & Adler's The Firm as a Collaborative Community (2006), and my own book All Edge (2015). Beyond these books, my reading of this book suggests a methodology for getting at these logics through carefully structured empirical work. At this point in the book, I expected some clever fine-grained empirical research, perhaps along the lines of psychological experiments or focus groups in which participants encountered a carefully constructed (and conflicted) scenario that would prompt different and contrasting justifications. Or, perhaps, some closely structured fieldwork.

Alas, although the authors hint at that sort of work, they mainly stay at the theoretical level. When they look at specific cases, these are mainly through Boltanski's close reading of management books (the same approach he took elsewhere)—an approach that I think is inadequate for understanding the relationship between individual justifications and larger frameworks, especially since management books are not the product of a single author but rather a collaboration among authors, editors, editorial boards, marketers, distributors and others.

The authors also reduce the frameworks—perhaps a priori, though that is not clear to me—by identifying political orders and characterizing them through the earliest proponents to have presented the polity in systematic form (p.71). Thus we get a "critical matrix" (Ch.8, although, alas, not presented as a matrix but rather in blocks of text) consisting of the following worlds:

  • inspired
  • domestic
  • fame
  • civic
  • market
  • industrial (p.237)
Obviously members of a society must be able to navigate situations from different worlds; for instance, artists (inspired) must be able to stand in line like anyone else (industrial) (p.216). And obviously people who argue from different premises can compromise (i.e., agree to suspend a clash between worlds without recourse to a one-world test; p.277). Such compromises are always composites, and therefore can always unravel via a recourse to a one-world test; but they can solidify with objects composed of elements from different worlds, making them more resistant to critiques (p.278; you can see the relationship with actor-network theory, on which the authors draw). 

But, the authors argue, when multiple worlds are in play, the critique is inconsistent (p.282). I wondered: is this inconsistency perhaps because their a priori framework is so confining? Compare it to, say, Bakhtin, who takes a much more open approach to different logics and worlds. 

Nevertheless, the authors have my thanks for explicitly tying their project to rhetoric (p.73) and for deeply thinking through the issues inherent in justifications across different frameworks/views/political orders. Their work is more fine-grained than I've been able to represent here, and I expect that I'll return to this book again as I think about these issues. 

Reading :: Human-Machine Reconfigurations

Human-Machine Reconfigurations: Plans and Situated Actions, 2nd Edition
By Lucy A. Suchman


Lucy Suchman's Plans and Situated Actions (1987) was a huge influence on the fields of human-computer interaction and computer-supported cooperative work. It currently has 11,586 citations, according to Google Scholar, and for good reason: This book changed HCI by demonstrating that the then-dominant view—the view embraced by information-processing cognitive psychology, in which an abstract mind engaged in cognition-as-computation—could not adequately account for how people engaged with machines. That is, when working with machines, humans did not use plans (in the sense of stepwise series of abstract actions) so much as situated actions (in the sense of local, sensed alternatives at each moment). Based on her ethnomethodological studies at Xerox PARC, conducted for her PhD in anthropology, Suchman demonstrated that photocopiers' help systems were built for plans, but their users worked through situated actions, resulting in mismatches and rendering the help systems unhelpful.

I discovered this book early in my PhD program (1994 or 1995) and spent a while with it and similar books, as well as the PDFs of technical reports supplied by Xerox PARC and EuroPARC. (The WWW was relatively new then, and the idea of downloading research papers directly from the research institution seemed magical.) Suchman, who was a PhD student at the time, wasn't alone—others with interests in ethnography, ethnomethodology, and action research also worked or interned at these institutions, including ethnomethodologist Graham Button, linguist Geoffrey Nunberg, and computer scientist Susanne Bodker, whose 1991 Through the Interface (a foundational text for activity theory in HCI) was based on her 1987 dissertation. In fact, the late 1980s and early 1990s yielded many texts that questioned the then-dominant information-processing cognitive psychology in HCI and introduced constructivist perspectives. But Suchman, deservedly, became perhaps the best known.

What's the relation of that 1987 book to this 2007 book, Human-Machine Reconfigurations? It's complicated. On one hand, Plans and Situated Actions is in here as Ch.2-10, nearly untouched (some of the footnotes have been updated). On the other hand, the new book has a new Ch.1, "Readings and Responses," and a new Ch.11-15. That's an additional six chapters, more than one would expect from a second edition, but less than one might expect from a new book. The new title and subtitle reflect the mixed message.

To be honest, the core of the original book holds up, and Chapter 1 works to situate it as a historical document, but I did not think that the new chapters at the end were of the same quality. Since I haven't reviewed the original Plans and Situated Actions on this blog, I'll spend most of my time in this review on it.

In Ch.1 (a new chapter), Suchman gives us the background of the original study from her viewpoint 20 years later. That study began "in 1979, when I arrived at PARC as a doctoral student interested in critical anthropology of contemporary American institutions and with a background as well in ethnomethodology and interaction analysis" (p.8). She became interested in interactivity when her colleagues attempted to design the interface for a new photocopier. The copier had been advertised as so simple that one had only to press a green button (p.8)—but customers complained that it was too complicated (p.9). To investigate, she videotaped her colleagues attempting to work with the machine, and on that basis, concluded that "the machine's complexity was tied less to its esoteric technical characteristics than to mundane difficulties of interpretation characteristic of any unfamiliar artifact" (p.9). Based on this study, and on her understanding via ethnomethodology that contra AI assumptions "human conversation does not follow the kind of message-passing or exchange model that formal, mathematical theories of communication posit" (p.10), she essentially applied conversation analysis to "people's encounters with the machine" (p.10). She noted that when she was in the room, she could see how she "might have intervened," but "the machine appeared quite oblivious"—so "what resources was I ... a full-fledged intelligent observer, making use of in my analyses" compared to those of the machine? "The machine had access only to a very small subset of the observable actions of its users" (p.11). "My analysis, in sum, located the problem of human-machine communication in continued and deep asymmetries between person and machine" (p.11).

This problem is more complicated than her colleagues assumed, she says, because "I take the boundaries between persons and machines to be discursively and materially enacted rather than naturally effected and to be available ... for refiguring" (p.12).

Suchman then gets to her distinction between plans and situated actions. "My position then and now has been that plans are conceptual and rhetorical devices (often materialized in various ways, as texts, diagrams and the like) that are deeply consequential for the lived activities of those of us who organize our actions in their terms" (p.20), but they do not constitute a stepwise program. Here, she addresses some of the pushback that she received after the book's initial publication, regretting a word choice that caused some readers to misinterpret her argument.

From here, we get into the original book. In Ch.4, Suchman overviews interactive artifacts as they have been historically treated in computer science, noting that in the early 20th century, in the name of turning the study of cognition into a science, "the study of cognition as something apart from overt behavior was effectively abandoned" (p.36); the strategy in cognitive science in 1987, and especially in AI research, was to reduce cognition to computation (p.37).

In Ch.5, "Plans," Suchman discusses AI's consequent understanding of plans: "plans are prerequisite to and prescribe action, at every level of detail," and "mutual intelligibility is a matter of the reciprocal recognizability of our plans, enabled by common conventions for the expression of intent and shared knowledge about typical situations and appropriate actions" (p.51). That is, they are programs: "The planning model in cognitive science treats a plan as a sequence of actions designed to accomplish some preconceived end," where "action is a form of problem-solving" (p.52) and "actions are described, at whatever level of detail, by their preconditions and their consequences" (p.53). "Goals define the actor's relationship to the situation" and "the plan is prerequisite to the action" (p.53). Notice that this understanding of planning assumes an individual actor. In interaction, the model is extended to 2+ actors, and others' actions must be seen as expressions of their underlying plans (p.56). As Suchman notes, the literature equivocates "between plans as a conceptual framework for analysis and simulation of action and plans as a psychological mechanism for its actual production" (p.58). Yet the relationship between intent and the actual course of action is "enormously contingent" (p.60). This situation is not helped by AI's uptake of speech acts, which takes Austin's claim that "language is a form of action" to mean that communication can be subsumed to the planning model (p.61). Here, the problem of inferences is handled through "scripts" (p.64).
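
For readers who haven't run into it, the planning model Suchman describes can be sketched in a few lines. This is my own toy illustration, not hers: a classical STRIPS-style planner in which actions are nothing but preconditions and effects, and the whole plan is computed before anything is done (the copier domain and names are hypothetical):

    # A toy classical planner: actions defined entirely by preconditions
    # and effects; the plan is computed in full before acting. This is the
    # model Suchman critiques, not her alternative.
    ACTIONS = {
        "open_lid":    {"pre": {"at_copier"},             "add": {"lid_open"}},
        "place_page":  {"pre": {"at_copier", "lid_open"}, "add": {"page_set"}},
        "press_start": {"pre": {"page_set"},              "add": {"copy_made"}},
    }

    def plan(state, goal, steps=()):
        # Naive forward search: chain actions whose preconditions hold.
        if goal <= state:
            return steps
        for name, act in ACTIONS.items():
            if act["pre"] <= state and name not in steps:
                found = plan(state | act["add"], goal, steps + (name,))
                if found is not None:
                    return found
        return None

    print(plan(frozenset({"at_copier"}), {"copy_made"}))
    # -> ('open_lid', 'place_page', 'press_start')

On this model, the situation figures only as a set of preconditions; everything else is settled in advance. Suchman's point is that human action—including human trouble with copiers—doesn't work this way.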

In contrast, Suchman embraces Garfinkel's view: "a background assumption ... is generated by the activity of accounting for an action when the premise of the action is called into question" (i.e., post hoc) (p.67). Thus "plans and goals do not provide the solution" for the problem of interaction; "they just restate it" (p.67).

In Ch.6, Suchman overviews the contrasting notion of situated actions, drawn from anthropology and sociology (p.69). In this view, plans are "resources for people's practical deliberations about action" that are "located in the larger context of some ongoing practical activity" (p.69). (In the late 1980s and early 1990s, it was popular to characterize the larger context as a way to get past cognitive science's stepwise focus.) Suchman asserts that "every course of action depends ... on its material and social circumstances" and her aim is "to investigate how people produce and find evidence for plans in the course of situated actions" (p.70). To do so, she turns to ethnomethodology, which she overviews in the remainder of the chapter.

Let's skip a bit here, since the actual study data are not that interesting for us here in 2018. Suffice it to say that Suchman demonstrates that the cognitive science understanding of plans is not adequate for explaining her users' frustrating encounters with the copy machine—encounters that those of us who have used copy machines will recognize.

Now we get into the new areas of the book, Ch.11-15.

In Ch.11, "Plans, Scripts, and Other Ordering Devices," Suchman surveys literature on ordering devices that has come out since the original book. This literature survey includes John Law, Liam Bannon, Phil Agre, Steve Woolgar, Madeleine Akrich, and others who will be familiar to those who read this blog.

In Ch.12, "Agencies at the Interface," Suchman surveys literature on the so-called "smart machine," starting with ALICE and ELIZA.

In Ch.13, "Figuring the Human in AI and Robotics," Suchman surveys the literature on humanlike machines.

Chapter 14, "Demystifications and Reenchantments of the Humanlike Machine," surveys encounters with humanlike machines.

Finally, in Chapter 15, "Reconfigurations," Suchman says "In this chapter I consider some new resources for thinking about, and acting within, the interface of persons and things" (p.259).

As you can tell from my characterizations of the new chapters, I didn't find much to write about in these chapters. I'm not sure why the author and publisher decided to release the second edition in this form, but the contrast between the two parts of the book is profound. In Plans and Situated Actions, Suchman is white-hot, incisively identifying fundamental problems in AI and HCI and deftly illustrating them with data that we can all recognize from our own interactions. In contrast, the expanded chapters are bland literature reviews that do not clearly relate back to the original argument, do not seem to advance the ball, and do not appear to make a contribution other than the survey.  In terms of argument, they don't seem to connect. In terms of genre, the literature reviews come at the wrong end of the book. And in terms of story, Suchman goes from being an MVP to a spectator.

None of this takes away from Plans and Situated Actions, which remains vital reading and is perhaps enhanced by Suchman's new context-setting chapter (Ch.1). For that reason, I still highly recommend the book (or its first edition, which is still on my shelf). But I wish that Suchman and the publisher had kept the original title and replaced the last five chapters with a brief afterword.

Monday, September 03, 2018

Reading :: Decision Making in Action

Decision Making in Action: Models and Methods
Edited by Gary A. Klein, Judith Orasanu, Roberta Calderwood, and Caroline E. Zsambok


Recently I reviewed an edited collection on naturalistic decision making (NDM) that resulted from the Second Naturalistic Decision Making Conference in 1994. This edited collection similarly comes from a 1989 workshop and is edited by some of the same people. Its goal was "to describe naturalistic decision making" and its four key features: "dynamic and continually changing conditions, real-time reactions to these changes, ill-defined goals and ill-structured tasks, and knowledgeable people"; the collection presents "models and methods pertaining to these four features" (p.vii). Importantly, NDM emphasizes ecological research (p.vii).

The book has five sections: overview/background, NDM paradigms, methodology, applications, and evaluations. I will just overview a few chapters in the first section.

In Section A, Chapter 1, Judith Orasanu and Terry Connolly discuss the fact that traditional decision making research has focused on the "decision event," in which one decision maker chooses among known, fixed alternatives (p.5). In contrast, the authors argue that "decision performance in everyday situations is a joint function of two factors: (1) features of the task and (2) the subject's knowledge and experience relevant to that task" (p.7). They list 8 factors of NDM:

  1. "Ill-structured problems"
  2. "Uncertain, dynamic environments"
  3. "Shifting, ill-defined, or competing goals"
  4. "Action/feedback loops"
  5. "Time stress"
  6. "High stakes"
  7. "Multiple players"
  8. "Organizational goals and norms" (p.7)
The authors emphasize that NDM happens within a "decision cycle," which "reflects the incomplete knowledge, dynamically changing conditions, and competing goal structures that characterize NDM situations" (p.19).

In Chapter 3, Marvin Cohen describes three paradigms of decision making: formal-empiricist, rationalist, and naturalistic. He compares these based on criteria of normative evaluation, style of psychological modeling, and style of empirical observation (p.43). In comparing these, Cohen ends up leveling some sharp critiques of Kahneman-type decision bias claims (e.g., p.82).

In Chapter 5, Raanan Lipshitz overviews decision making models, including Klein's Recognition-Primed Decision (RPD) model. He compares these in detail—a very useful chapter, since most of these models make appearances in the same order in later chapters.

As someone who is new to NDM, I appreciated the overviews and broad strokes. NDM went on to be popularized in Klein's later books, but here we see some early work contrasting with the then-dominant views of decision making. If you're interested in NDM, yes, check this collection out.

Reading :: The Extended Mind: The Emergence of Language, the Human Mind, and Culture

The Extended Mind: The Emergence of Language, the Human Mind, and Culture
By Robert K. Logan


My friends, I use this blog to process my readings, evaluate them, and understand how they fit in with various scholarly conversations. Usually this approach works well for me. But every once in a while, I run into a reading that I have a hard time drawing a bead on. And this book is most certainly one of them.

The author is a professor emeritus in the Department of Physics at U-Toronto. But, long ago, he became interested in questions of mind, language, and culture—"I am not a linguist or a cognitive scientist but rather a physicist who morphed into a media ecologist" (p.7) and "as far as the field of linguistics goes, I am autodidactic and not an expert" (p.10). Here, he seeks to (a) advance the Extended Mind Model to explain the emergence of language, (b) supplement that model with others in the literature, and (c) "develop the notion of Universal Culture, which is to culture what Chomsky's Universal Grammar is to language" (p.4).

He summarizes the sweep of his argument at the beginning of the book: he argues that "the origins of speech and the human mind are shown [in this book] to have emerged simultaneously as the bifurcation from percepts to concepts and a response to the chaos associated with the information overload that resulted from the increased complexity of hominid life." That is, information overload necessitated a new way to handle it: speech. "Rather than regarding thought as silent speech, one may just as well regard speech as vocalized thought" (p.5). Essentially, he argues, language extended the human brain, creating the human mind (p.6).

He extends this argument in the next chapter, overviewing the "six modes of language" (p.30): speech; writing; numbers; "the language of science," which is "a form of knowledge management" (p.31); computing; and the internet. (He does not appear to offer a definition of language.) Each form of language, he argues, arises from information overload, requiring a new way to deal with the resulting abstractions (p.33). "Consequently, each new mode of language is informatically more powerful than its predecessors, but at the same time a little less poetic, with the exception of the Net" (p.33). Each arises from a "crisis" in which information overload led to chaos, leading to "a new abstract level of order" (p.42). This idea is interesting and highly appealing—if it can be substantiated.

The author claims that he is "influenced" by Vygotsky (and quotes the 1962 edition of Thought and Language; see pp.43-44).

Like Logan, I'm not a linguist, and I certainly haven't put in the time reading Chomsky that he has. But I do see problems with the overall thesis. For instance, Logan does not discuss what happens if the crisis is not dealt with—if a new abstract order does not emerge. Does the social group reach a steady state of low-boil crisis? Does it collapse? These questions seem crucial, since to the best of our knowledge, writing was invented only three times in world history. (For that matter, does he mean "crisis" the way Vygotsky used the term, as a dialectical contradiction?) He also characterizes science as a "language," which seems to be at odds with what we know about science as a social practice.

Beyond these questions, I began seeing argumentation patterns that seem foreign to the social sciences with which I am familiar—although perhaps my linguist friends can tell me whether they are common in linguistics. For instance,

  • Logan argues that "language speeds up thought processes" and, as proof, notes that he had formulated this idea independently of Marshall McLuhan, who he later discovered said the same thing (p.47). (He offers no empirical proof that language speeds up thought processes, which would be a momentous claim. One can safely argue that language changes the task, allowing someone to do the same work in less time.) 
  • Similarly, in discussing how tools emerged from trial-and-error, he asserts: "A scientific analysis of these tools would reveal that they had achieved an optimum design long before the advent of modern engineering" (p.59)—but provides neither a proposed analysis nor other proof. 
  • Later, he asserts without explanation: "Dance is basically a form of body language set to music" (p.61). 
  • Although Chomsky's Universal Grammar and Logan's own Extended Mind Model are not falsifiable, he concedes, "the one hope I have, however, that I might be on the right track is that, I believe, Christiansen (1994), Deacon (1997), Donald (1991, 1998), Tomasello (1999), and I have come to similar conclusions from completely different starting points" (p.82; see a similar argument on p.223).
  • Although two approaches might explain known facts about grammar acquisition, "the one trump card that I would play now ... is Occam's razor..." (p.141).

He dismisses arguments the same way: "In numerous cases, however, I felt that the conflicts in the literature were artificial and were more a case of academic turf wars than anything substantive" (p.71). Well, okay then.

Many other examples show up across the book, and as I read through them, I wondered whether they had to do with a different style of argument—or with Logan's autodidacticism. For instance, Logan argues confidently that oral speech is changed by writing in that "orally composed sentences rarely contain more than seven to nine words" because "short-term memory can only deal with seven plus or minus two items at a time," and thus "it is only sentences that are written out that will contain large numbers of words with the exception of certain erudite lecturers who compose written prose in their head or read them from a prepared text. Such a speaking style often puts its listeners to sleep because they are unable to keep track of such long sentences" (p.125). Put aside the fact that Logan offers no evidence for these claims—a bigger problem is that words and sentences are not natural units of speech; they are analytical units that were invented in written speech. (Also, Logan doesn't seem to have conversations with the same people I do.)

In fact, as the examples above suggest, Logan rarely refers to empirical research when constructing his theory, although that research is both plentiful and produced by some of the same people he cites. He is, however, comfortable citing occasional simulations—"The results of the computational simulations and artificial language learning show that it is possible that languages evolved to match hominid cognitive abilities rather than the other way around" (p.144).

Beyond these issues, Logan doesn't seem to be consistent with the theorists that he cites. Although I don't know Chomsky well, I do know Vygotsky, and Vygotsky would not agree with the following assertions that Logan makes:

  • Speech is vocalized thought (p.5).
  • Speech is a biological process and thus governed by Darwinism (p.17; cf. Understanding Vygotsky Ch.9 among other places).
  • "mind = brain + language" (p.63) — later changed to "mind = brain + language + culture" (p.250).
  • Words encode concepts and allow for more efficient processing (p.67).
  • "Word meanings are personal, belonging to the individual producing them or interpreting them" (p.232)
  • The mind is best explained as dualist (p.235).
Logan ends with a chapter in which he asks whether culture is an organism, then suggests that an individual's culture is an organism while a society's culture is a species (pp.264-266). I really couldn't tell whether Logan was using "organism" as a metaphor here or whether he meant this assertion literally. But whatever it is, he extends it by arguing that if individual culture is an organism, then memes are its genes (p.277). 

As I mentioned above, I use this blog to process what I think of books. Most of the time, I have a good idea of what I think before I begin writing the review. This time, I didn't, and the relatively unedited text you see in this review chronicles my thinking through of the book. As you'll intuit, I think the book is deeply flawed. It doesn't engage deeply with at least some of the literature; it makes some superficial arguments at crucial places; it asserts as undisputed facts claims that need empirical grounding; it makes claims that are in fact disproved by empirical language and literacy research. That is, it reads like the work of an autodidact from another field who has not put in the time to test his arguments. 

I make this evaluation with some hesitation. After all, the author does make a point of saying that he has worked with luminaries such as Schmandt-Besserat and McLuhan, and he has published about media since at least 1977, long before I even thought about such matters. Yet, at least in this book, he does not effectively make his case. I can't recommend it.

Reading :: Culture and Thought

Culture and Thought: A Psychological Introduction
By Michael Cole and Sylvia Scribner


Cole and Scribner published this book in 1974 for "the beginning student" who was interested in culture and cognition as well as "the advanced student and the professional" who have been suspicious about cultural psychology (p.v). To orient ourselves, this book was published two years before Luria's Cognitive Development (1976) and four years before the coauthors coedited Vygotsky's Mind in Society (1978). So the matters discussed here, though Vygotskian in orientation, were discussed without much of the published work that would underpin later efforts by the authors.

Side note: As sometimes happens, I had an unwelcome companion while reading the book: a previous reader, who left singularly unhelpful comments penciled in the margins. When Cole and Scribner describe Bartlett's (1932) characterization of South Africans (p.2), the comments read: "Holy shit what blind ignorance & eurocentrism!" And when Cole and Scribner criticize Bartlett's characterization on methodological grounds (p.3), the comments read: "Thank you." I was reminded of why I no longer go to movie theaters. Please don't leave comments in library books, everyone.

Back to the text. The authors note that anthropologists, philologists, and psychologists have developed separate definitions of "thinking" (p.2) and discuss these, concluding that their concern will be "to get beneath the performance shown in a particular situation to the psychological processes responsible for it" (p.5). "As yet there is no general theory or conceptual framework in psychology that would generate specific hypotheses about how culturally patterned experiences influence the development of cognitive processes in the individual" (pp.6-7). They propose to "discover a strategy of research that will help us to uncover how individual and cultural processes interweave with each other as the child develops and becomes integrated into society" (p.8).

The authors provide a brief history of this line of inquiry, ending with Vygotsky and Luria (p.30). From there, they provide chapters on culture as it applies to language, perception, conceptual processes, learning and memory, and problem solving. (The latter discusses Luria's Uzbek expedition, which at the time was discussed in only one 1971 article; p.161).

In a concluding chapter on culture and cognition, the authors use the language of higher mental functions and functional systems (via Luria 1966) to characterize the relationship between the two (p.192). They argue, "we are unlikely to find cultural differences in basic component cognitive processes" or what Vygotsky would call lower mental functions (p.193, their emphasis).

Overall, although this book was interesting, I did not find it as valuable as some of the later work by these authors. Partly that is due to the fact that I'm not especially interested in aspects such as perception (I similarly hit this wall when reading Soviet psychology last spring and summer). But partly it's due to the dearth of grounding for some of the more complex arguments: most of Vygotsky's work had not been translated yet, and Luria's book hadn't been published either. So, as a historical document, the book is valuable, but if I were to build arguments on cultural psychology, I'd reach for later work.

Nevertheless, if you're interested in cultural psychology's development, this book is a must. Pick it up—but please don't write marginal comments unless it's yours.

Reading :: The Psychology of Literacy

The Psychology of Literacy
By Sylvia Scribner and Michael Cole


This book was published in 1981, shortly after Cole helped Luria publish Cognitive Development (1976) and after Cole, Scribner and colleagues coedited the Vygotsky collection Mind in Society (1978). It describes part of five years' worth of fieldwork in Liberia conducted by the authors (1973-1978), specifically work with the Vai, a minority population with the distinction of having developed and sustained their own written language. This phenomenon intrigued the authors, who had Luria's Uzbek expedition and Vygotsky's understanding of thought and language in mind as they developed a study of Vai literacy.

The authors clearly ground the study in Vygotsky, who provided grounding in psychology for "testing speculations about written language and thought"—grounding that was otherwise lacking (p.8). Based largely on Vygotsky's discussion of signs and tools and of lower and higher mental functions (pp.8-9), and drawing on Luria's Uzbek expedition for methodology (p.10), the authors hoped to conduct studies that could separate schooling from literacy (p.15).

The authors note, "we failed to find a discrete dimension of Vai life that could be labeled 'literacy'" (p.107): beyond family status in Vai society and sex, no other factors seemed to be common to literacy. They compared Vai literates, English literates (English was taught in schools), and Arabic literates (who were taught in religious schooling) (p.107), recognizing that these groups sometimes overlapped.

Like Luria, they tested the three groups of literates in terms of taxonomic classification: "in our population of informants, the only factor consistently associated with taxonomic choices was urban living" (p.122). In terms of logic, non-schooled literacy did not produce general cognitive effects—that is, there was no general "literacy" phenomenon, and therefore non-school literacies did not function as surrogates for schooling (p.132).

In terms of literacy and metalinguistic knowledge, they found that the adult Vai they studied did not perform like US/European children, with the exception of grammar: studies "had produced consistent results indicating that Vai script literates are better conversationalists about formal features of sentences than their nonliterate neighbors," probably because Vai literates often conversed about what makes good or correct Vai (p.159). In fact, the Vai practice of criticism required well-developed expository skills (p.219).

Literacy, the authors argue based on these studies, produces localized rather than generalized changes in cognitive skills. Yet "we have identified skills that are associated with literacy learning and that are not byproducts of general learning experiences in the classroom" (p.234). They explain these skills with the concept of practice, which "consists of three components: technology, knowledge, and skills" (p.236). Practice is

  • recurrent
  • goal-directed
  • using a particular technology
  • using particular systems of knowledge (p.236)
And "practice always refers to socially developed and patterned ways of using technology and knowledge to accomplish tasks" (p.236).

"This notion of practice guides the way we seek to understand literacy," they add: "We approach literacy as a set of socially organized practices which make use of a symbol system and a technology for producing and disseminating it" (p.236). They conclude, "our evidence justifies the position that literacy, as well as schooling, has identifiable cognitive consequences" (p.251).

This book is slow going in places—the authors really show their work in terms of methodology and data analysis—and sometimes the conclusions seem like they are small in proportion to the work put into the study. But in part that's because so much work has been done in Vygotskian studies of literacy since then. Overall, this book is a strong example of a painstakingly constructed methodology yielding specific results. If you're interested in literacy, cross-cultural studies, or Vygotskian theories of language, check it out.



Wednesday, July 11, 2018

Reading :: A Civic Entrepreneur

A Civic Entrepreneur: The Life of Technology Visionary George Kozmetsky
By Monty Jones


George Kozmetsky (1917-2003) “is most widely known today for two accomplishments -- taking the early steps that propelled the business school at the University of Texas at Austin toward its current position as an internationally prominent institution, and playing a central role in the economic transformation of Austin from a sleepy college town in the mid-twentieth century to its present-day status as a center of high-technology research, development, and manufacturing” (p.3). In the second role, Kozmetsky founded the IC2 Institute and its unit, the Austin Technology Incubator, and played a pivotal role in bringing MCC to Austin in the early 1980s. This thick biography (461pp plus end matter) covers his entire life, from his birth as the child of Russian immigrants in Seattle to his death of ALS.

For my purposes, the most important parts of this story relate to Kozmetsky’s hand in Austin’s economic transformation. Kozmetsky was at one point one of the richest men in the US, thanks to his cofounding of Teledyne, and one unofficial reason that he was selected as Dean of UT’s business school was that the University expected him to be a donor in addition to courting donors. He did not disappoint on either count, but his personal donations often strategically advanced his own objectives. One example was the Institute for Constructive Capitalism, which he began planning in 1966 and established in 1977 (p.293), largely through his own donations. This “think and do tank,” which had “an emphasis on practical, hands-on economic development activities as well as academic research” (p.293), was meant to promote a constructive capitalism in line with Kozmetsky’s politically liberal views, involving collaboration among government, business, education, and labor (p.297). The term “capitalism” fell out of favor, so in 1985 the Institute received a new name: the IC2 Institute (named for innovation, creativity, and capital) (p.305).

IC2 evolved over the years. Originally, its goals were to “underwrite advanced research on issues such as determining the role of capitalism in society, encouraging business enterprises to contribute more toward solving societal problems and improving life, nurturing entrepreneurship and gaining a better understanding of the role of small business in capitalist society, and promoting better public understanding of business, including improving business education in public schools.” In the 1980s, its scope shifted “into a more proactive role of spurring economic development” (p.296). In the 1990s, Kozmetsky focused on technology commercialization pursued as a commitment shared by stakeholders in government, business, education, and labor (p.297). Kozmetsky remained director until 1995; after stepping down, he remained closely involved in its leadership (p.300).

IC2 had six characteristics that, Kozmetsky said, made it a unique research organization: (1) it dealt with unstructured problems; (2) it attacked those problems with interdisciplinary “think teams”; (3) it went beyond traditional business ed subjects; (4) it linked theory and practice via interdisciplinary conferences, initiatives, and “experienced practitioners”; (5) it tried new ways of solving problems while remaining in a university environment; (6) it transferred research results to other institutions (pp.301-302).

IC2 developed several innovations that were eventually transferred to the business school. One was Moot Corp, a business plan competition (p.314). Another was the Master’s Degree in Science and Technology Commercialization (MSTC), which began in 1996 and was one of the earliest degree programs of its type, and was moved to the business school in 2014 (p.382). (Note: The MSTC was one of the models for the HDO program.)

Kozmetsky wanted to build up the regional capacity for developing new firms, based on the philosophy that it’s better to create new firms than to “steal” existing ones (p.336). Among his efforts was the Center for Technology Venturing, a 1988 joint project between IC2 and the business school’s Bureau of Business Research (p.336). In 1989, this Center spawned the Austin Technology Incubator (p.337), which “provided companies with strategic advice, mentoring, financing, marketing, public relations assistance, employee benefit programs, and office and manufacturing space for as long as three years” (p.339). The Center for Technology Venturing also spawned the Texas Capital Network in 1989; “by 1994 it had expanded nationwide and changed its name to the Capital Network” (p.338).

Kozmetsky also “led a group of entrepreneurs in establishing the Austin Entrepreneurs’ Council” in 1991 (p.345), which in turn spawned the Austin Software Council, a group that in 1998 separated from IC2; in 2002 it changed its name to the Austin Technology Council (p.347).

IC2 also did work for NASA in 1993, laying the groundwork for NASA’s “nationwide network of technology transfer programs by the end of the century” (p.381).

As you’ll notice, I’ve mainly focused on the Austin parts of the Kozmetsky story. But the entire book is interesting reading, painting Kozmetsky as a driven, focused, generous, yet sometimes flawed person. He was a visionary strategist who also excelled at the “retail” parts of being dean -- maintaining relations with local and national government agencies and external stakeholders. On the other hand, he was not always effective at the “management” parts of being dean, such as identifying and addressing faculty grievances.


If you’re interested in technology, entrepreneurship, leadership, technology transfer, technology commercialization -- or Austin -- definitely pick up this book.

Reading :: The Sustainability Edge

The Sustainability Edge: How to Drive Top-Line Growth with Triple-Bottom-Line Thinking
By Suhas Apte and Jagdish Sheth


This business book tackles the question: How do you build a business around sustainability? That is, rather than building a business around simply maximizing business profits and/or maximizing shareholder value, how do you maximize benefits for all stakeholders, and how do you do it as a source of competitive advantage (p.16)? As the authors state, “Today, the best companies are generating every form of value that matters: emotional, experiential, social, and financial. And they’re doing it for all their stakeholders, not because it’s ‘politically correct’ but because it’s the only path to long-term competitive advantage” (p.17).

This angle may remind my readers of the socially responsible capitalism of Kozmetsky or, more cynically, Boltanski and Chiapello’s argument that capitalism incorporates its critiques. In any case, the book is built around the “sustainability stakeholders framework,” which attends to “triple-bottom-line thinking”: direct impact (consumers, customers, employees), indirect impact (NGOs, governments, media), and enabler impact (suppliers, investors, communities) (p.25). Most of the book involves describing each kind of impact and each stakeholder, providing illustrations from major companies such as Clorox.

To be honest, most of the heavy lifting in the book is done by the framework described above (depicted at the beginning of most chapters) and the attendant Stakeholder Sustainability Audit in the appendix. The (singular) bottom line of the book is that companies will be more sustainable if they identify a way to balance the needs of all the listed stakeholders. In other words, this book could be easily summarized in an HBR article, but the illustrations make it easier to apply.

Should you read this book? If you’re trying to formulate a business model, I’d suggest skimming it.

Wednesday, June 27, 2018

Reading :: New Wealth

New Wealth: Commercialization of Science and Technology for Business and Economic Development
By George Kozmetsky, Frederick Williams, and Victoria Williams


George Kozmetsky was dean of the School of Business at the University of Texas as well as the founder and first director of the IC2 Institute. Since I've been working with IC2 and specifically with technology commercialization, I thought I'd better pick up this 2004 book.

Kozmetsky was an enthusiastic promoter of the development of socially responsible capitalism. In this book, he and the other two authors describe a research agenda for understanding technology-based enterprise creation, "with the initial goal of identifying those variables apparently critical in the creation of businesses where success was based on the commercialization of technologies, application or both" (p.13). Their research, they say, confirmed:

  • technology as a type of wealth, one that may need new measurements
  • the need for technology policy
  • the interactions among markets, no one of which is wholly insulated from others
  • the need for effective management and entrepreneurial training
  • technology transfer as a process
  • fast-company design
  • new management strategies
  • the consequent need for enhanced applied research (p.14)
They list policy implications (pp.14-15), which amount to finding ways to encourage technological activities through public policy that appropriately harnesses private talent and enterprise.

Throughout the book, they discuss relevant concepts, often drawing from other IC2 and IC2-adjacent publications. For instance, Chapter 8 is about creating the technopolis; it summarizes the insights from Smilor, Kozmetsky, and Gibson as well as Gibson and Rogers. Chapter 9, Adoption of Innovations, treads the same ground as Rogers.

They also clarify some pieces that I haven't seen discussed elsewhere. For instance, they succinctly summarize factors of technology commercialization: 
  1. "Technology is a constantly replenishable national resource."
  2. "Technology generates wealth, which in turn is the key to economic, social, and political power."
  3. "Technology is a prime factor for domestic productivity and international competitiveness."
  4. "Technology is the driver for new alliances among academia, business, and government."
  5. "Technology requires a new managerial philosophy and practice." (p.62)
In technology commercialization, R&D results are "transformed into the marketplace as products and services in a timely manner" (p.65). Traditionally, "industrial laboratories concentrate on mission-oriented products and universities confine themselves primarily to basic research and teaching," but this approach is inadequate, resulting in fewer opportunities, more layoffs and closures, a weaker global position, poorer regional and local development, and poorer growth opportunities. "Since 1996, a new paradigm has been emerging ... [which] includes institutional developments involving academia, business, and government technology venturing." This new paradigm involves "accelerating the successful commercialization of innovation in a competitive environment" (p.65). (For examples, see my recent papers on technology entrepreneurship education.) 

Relatedly, the authors have a chapter on industrial parks and incubators. This chapter includes a short history of the IC2 Institute's Austin Technology Incubator, which was founded in 1989 and moved into the MCC building in 1995 (p.85).

Chapter 15 reviews "The Austin Model"; I want to note this chapter for later, but I won't review it.

Finally, the book concludes with Chapter 20, "Toward Capitalism with Conscience." Specifically, "we will consider the 'conscience' of capitalism as that of avoiding or rectifying inequities in the sharing of wealth and prosperity" (p.200). The authors draw on Milton Friedman here in claiming an interdependence between free enterprise and freedom (p.201). More skeptical readers may be reminded of Boltanski and Chiapello's claim that capitalism always reconfigures itself to incorporate its critiques.

In all, this was a useful book for me in terms of understanding IC2, ATI, and Austin as well as technology commercialization's raison d'être more broadly. If you're interested in such things, definitely pick it up.

Reading :: Posthumanism

Posthumanism: Anthropological Insights
By Alan Smart and Josephine Smart


This slim book (98pp. plus end matter) provides a useful, accessible introduction to posthumanism, a term that I have been hearing but have until now been unmotivated to explore. Spoiler alert: it involves Haraway, Hayles, Latour, Maturana & Varela, Pickering, Wrangham, and others I've reviewed and written about. So, although the term has been a bit of a question mark for me, it encompasses a great deal of familiar material.

The authors note that for some, "posthumanism is mostly about how new technologies are changing what it means to be human," but for them, "we have always been posthuman" in the sense that "becoming human involved our intimate interaction with more-than-human elements" such as fire and bacteria (p.2). "Becoming human involved the adoption of new extrasomatic technologies (i.e., things that go beyond our bodies and their basic abilities) and fundamental changes in our microbial ecologies. ... Inhabiting the globe required collaboration with plants and animals" (p.3).

Posthumanism, as the authors put it, denotes both posthuman-ism (after humans) and post-humanism (after the Western humanist tradition, with its emphases on Western-defined secularity, rationality, and human progress) (p.4).

Not surprisingly, actor-network theory constitutes a big chunk of the discussion, with the authors essentially claiming that Latour's "modernity" is roughly equal to their "humanism" (p.23). The authors are interested in the poststructuralist critique of the coherence of the individual leveled by Latour as well as Derrida, Foucault, Haraway, Althusser, and Deleuze & Guattari (p.52). But the authors also point to other lines of thought, such as distributed cognition and Haraway's cyborg anthropology (p.77).

All in all, I appreciated the straightforward simplicity of this book. The authors manage to lay out a clear, well-illustrated account of posthumanism, which is quite a trick given some of the abstruse philosophical sources from which they draw (I'm thinking of Deleuze and Guattari here). They draw relationships among the lines of thought that contribute to posthumanism, and they abstract some basic principles for us. If, like me, you have been wondering about the term, this book is a strong introduction; pick it up.

Reading :: Naturalistic Decision Making

Naturalistic Decision Making
Edited by Caroline E. Zsambok and Gary Klein


This book was originally published in 1994 based on the Second Naturalistic Decision Making Conference that year. It was reprinted in 2009.

Naturalistic decision making (NDM), as Caroline Zsambok argues in Chapter 1 ("Naturalistic Decision Making: Where are We Now?"), "is the way people use their experience to make decisions in field settings" (p.4, her emphasis). NDM studies suggest that "the processes and strategies of 'naturalistic' decision making differ from those revealed in traditional field research" (p.4). For instance, in NDM, "the focus of the decision event is more front-loaded, so that decision makers are more concerned about sizing up the situation and refreshing their situation awareness through feedback"—in contrast with traditional decision making, which "emphasizes understanding the back end of the decision event—choosing among options" (p.4).

Key contextual factors of NDM, Zsambok says (quoting Orasanu & Connolly, 1993), are:

  1. "Ill-structured problems"
  2. "Uncertain, dynamic environments"
  3. "Shifting, ill-defined, or competing goals"
  4. "Action/feedback loops"
  5. "Time stress"
  6. "High stakes"
  7. "Multiple players"
  8. "Organizational goals and norms" (p.5)
We can see how these relate to Klein's later books, which are reviewed on this blog. Interestingly, many (especially 4) are also related to John Boyd's OODA loop, with potential interaction between these two lines of inquiry. (It looks like this connection has been explored somewhat in the literature.) Zsambok also notes the connections with research on expertise (p.9) and the difference between cognitive and behavioral task analysis (p.13; see also Crandall et al.). 

Gary Klein discusses applications of NDM in Chapter 5, "An Overview of Naturalistic Decision Making Applications." Here, he notes that "The initial impetus behind the NDM movement was to describe what people do, whereas the motivation behind traditional decision research was to improve the way people made decisions" (p.49). NDM research "tries to describe the strategies proficient decision makers are doing, and does not yet have any central claims about what might lead to implications for improving decision quality" (p.50). (Klein later felt comfortable producing such claims, leading to his string of books.) He identifies reasons that NDM might be better applied to decision quality than traditional approaches:
  • "Classical methods do not apply in many naturalistic settings."
  • "Experienced decision makers can be used as standards for performance."
  • "Naturalistic Decision Making tries to build on the strategies people use."
  • "Experience lets people generate reasonable courses of action."
  • "Situation awareness may be more critical than deliberating about alternative courses of action."
  • "Decision requirements are context specific." (p.50)
Zsambok takes up this theme in Chapter 11, "Naturalistic Decision Making Research and Improving Team Decision Making." Based on research, she asserts that good decision-making teams "monitor their performance and self-correct; offer feedback; maintain awareness of roles and functions and take action consistent with that knowledge; adapt to changes in the task or the team; communicate effectively; converge on a shared understanding of their situation and course of action; anticipate each others' actions or needs; and coordinate their actions" (p.112). NDM field studies validate these assertions (p.112) and specifically the idea that teams share mental models (p.113). 

In Chapter 13, "Cognitive Task Analysis," Sallie E. Gordon and Richard T. Gill argue for cognitive task analysis as opposed to behavioral task analysis. Whereas BTA focuses on what people do externally, CTA attempts to capture their cognitive work as well (p.132). CTA analysts try to capture a subset of these:
  • "Concepts and principles, their interrelationships with each other, and their relationship to the task(s)."
  • "Goals and goal structures"
  • "Cognitive skills, rules, strategies, and plans."
  • "Perceptual learning, pattern recognition, and implicit or tacit knowledge."
  • "Mental models"
  • "Problem models"
  • "How novices move through all of the above in various stages to become expert."
  • "Difficulties in acquiring domain knowledge and skills."
  • "Instructional procedures useful for moving a person from novice to expert." (p.132)
In all, this was a useful look at how NDM researchers were positioning their approach against traditional decision making in 1994. We can see here why Klein positions his subsequent books the way he does, specifically pursuing CTA in field studies. We readers from other fields, especially those with a strong field research tradition, may find it odd that some of these arguments have to be made—but the way in which they are made helps us to understand how NDM developed in the subsequent years. 

Wednesday, June 20, 2018

Reading :: Streetlights and Shadows

Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making
By Gary Klein


I just reviewed the methodology text that Klein coauthored; this book is a chance to see his methodological approach in action. Here, Klein focuses on how we make decisions in ambiguous situations. This question is actually quite hard to investigate in the lab, since "systematic errors aren't so serious outside the lab"; indeed, "reasoning strategies let us do many kinds of tasks without consciously or subconsciously performing calculations to perform an estimate" (p.59). So Klein turns to the scenarios that he always turns to in his popular/summary books: air traffic controllers and pilots, firefighters, NICU nurses, etc. (I would complain that he rehashes these scenarios too much across books, but I understand why he does so—they're all great illustrations, and the books use them to make related-but-different arguments to related-but-different audiences.)

Much of this book goes over principles that Klein addresses in his other books, so I'll just highlight a few standouts.

Klein points out that experts avoid data saturation by self-selecting which data to seek. That is, they know which data are most relevant and they shut out the extraneous data, making them more effective (p.133). In fact, he says, "there is never a right amount of information" and "we would be better off if we stopped worrying about getting the right amount of information and instead tried to see the meaning in the data that we do have" (p.135).

People need feedback—"feedback is essential for helping people become more skilled." But feedback itself isn't sufficient (p.165): outcome feedback (what was the result?) does not improve performance as much as process feedback (what were the cause-effect relations in the performance?) (p.166).

Problems with emergent goals—so-called wicked problems (p.212)—include things such as business models (p.213). For such problems, "when facing wicked problems we have to re-define the goals as we try to reach them. ... No amount of thinking and analysis will make these goals well defined. In such cases, we are going to have to figure out the goals as we go along. The faster we can learn, the more successful we'll be" (p.223, his italics). Yet, he points out, many in this situation will instead "try to increase their control over events" and will thus "stumble into goal fixation" (p.223). In such situations, he advocates "Management by Discovery": "when we face complex conditions we should expect to revise and replace goals on the basis of what we learn" (p.224).

Overall, this book is readable and valuable. It's a little less valuable if you've read Klein's other books, since there's a lot of overlap, but his angle here is different—to dispel myths about decision making. If you're interested in how people make decisions in ambiguous situations (for instance, when entrepreneurs evaluate their business models), definitely pick it up.

Reading :: Working Minds

Working Minds: A Practitioner's Guide to Cognitive Task Analysis
By B. Crandall, G. Klein, and R.R. Hoffman


I've discussed Gary Klein's work before, and specifically how much I appreciate his attitude of trust and respect toward his participants. Klein's work focuses on how experienced professionals (such as firefighters, NICU nurses, and soldiers) make intuitive decisions in high-stakes, high-pressure environments.

To research such cases, Klein needed an ecological approach that allowed him to get at situated decision making in cases in which the participants couldn't necessarily articulate their assumptions, options, or triggers. At the same time, Klein couldn't just follow firefighters around—the events he wanted to study were just too rare, and when they did happen, he didn't want his team to get in the way of rescue operations.

The approach that Klein and his partners developed for such cases is called cognitive task analysis (CTA), which "helps researchers understand how cognitive skills and strategies make it possible for people to act effectively and get things done," according to the back of this book. The book is, as the subtitle states, "A Practitioner's Guide to Cognitive Task Analysis." That is, it describes CTA and the situations in which it could be useful; it offers tools and strategies for performing CTA; and it discusses how CTA brings value to the participants. In this sense, it reminds me of Beyer and Holtzblatt's Contextual Design, a similar methodology book written by consultants for practitioners (although addressing different situations with a different methodological approach).

What struck me about Working Minds, though, was that the coauthors had developed a qualitative approach within psychology. As the authors note, in psychology and human factors, analysis typically happens quantitatively; students have little qualitative research training and use "preset plays" based on common statistical tests (p.107). "However, many CTA methods generate data that do not fit easily into standard statistical approaches" (p.107), and this is a problem since "quantification typically means stripping a body of data of its contextual links and decomposing it in order to assign numerical values" (p.108). At the same time, qualitative methods emerging from sociology, anthropology, and education tend to be focused on "topics that do not have a cognitive focus, such as analysis of social processes or attitudes surrounding terminal illness" (p.108).

Faced with this disjuncture, the authors set out to develop a qualitative research approach suited to psychology's foci. Like many qualitative research approaches, this one is not linear: it oscillates between structuring data and identifying meaning (p.110). It involves four main steps: preparation, structuring data, discovering meaning, and identifying/representing key findings (p.111). And the analysis involves creating "an audit trail that links raw data to eventual outcomes" (p.113). That is, it looks a lot like structured qualitative case study research.

In Chapter 8, the authors "introduce a level of cognitive phenomena—referred to as macrocognition—that emerges when we shift the focus to natural contexts. These are the types of cognition that CTA methods are uniquely designed to capture" (p.131). They discuss this level of cognition in terms of purpose, prior experience, situation, challenge, tools, team members, and organizational constraints (p.132). Macrocognition, they say later, is a "collection of cognitive processes and functions that characterize how people think in natural settings," as opposed to microcognition, which is "studied using carefully controlled methods and procedures" and is supposed to investigate basic, universal features (p.136). Think here of the contrast between Klein's contextualized field interviews and Kahneman's word problems—or the contrast between laboratory measures of executive functions and ecologically valid measures. As the authors assert, "individuals make decisions but so do teams" and "decision making often depends on artifacts" (p.136). Cognitive activity, the authors assert (citing Hutchins), is "distributed across multiple agents as part of a stream of activity" (p.157).

Overall, I found this book to be rewarding. The authors have identified a need for a qualitative methodology in psychology, oriented to decision-making; they have drawn when appropriate from qualitative traditions in adjoining disciplines; but they have also recognized the differences between those methodological orientations and the one they need. They have carefully and responsibly developed and validated an approach that works for their objectives. And they have articulated it clearly and well—the book is well organized and easy to read. The result is a good intro for practitioners, but I think it would also be suitable for a methods class (with suitable framing). If you're interested in qualitative methodology, and especially if you're wondering why someone would pursue qualitative methods instead of quantitative ones, check it out.

(catching up)

I've been blogging much less regularly lately, about once a month. That's not a function of my reading so much as it is a function of my schedule: there's only so much time in the day, and the Wednesday mornings that I usually blog have been taken up with other things. Consequently, the books have been piling up.

Currently waiting to be blogged are:

  • three books on decision-making (psychology)
  • one classic book on human-computer interaction (based in anthropology)
  • one book on wealth generation (business)
  • a biography of a business leader
In addition, I'm reading a book describing a theory of the origin of language and in my Unread pile are books on posthumanism, the textual society, decision making, value, and sustainability. I'm hoping to clear my blogging backlog so I will be prepared to discuss those books as well. Stay tuned!

Wednesday, June 06, 2018

Reading :: R&D Collaboration on Trial

R & D Collaboration on Trial: The Microelectronics and Computer Technology Corporation
By David V. Gibson and Everett M. Rogers


I've been studying entrepreneurship in Austin, so I picked up this 1994 book about MCC, "America's first major, for-profit R&D consortium," which was launched in 1982 and which arguably laid the foundations for Austin's current status as a technology hub (or "technopolis," a term we will discuss more in a moment). This book, written by IC2 senior research fellows David V. Gibson and Everett M. Rogers, uses archival materials and 9 years' worth of retrospective interviews to tell the story of this consortium from the viewpoints of leaders in technology, politics, and academia. The story ends well for Austin, although MCC itself struggled through much of its history and ceased operations in 2004, a decade after this book was published.

MCC was "the United States' first major, for-profit R&D consortium," launched "by a select group of U.S. computer executives to help save their industry from Japanese competition. They collaborated in planning, implementing, and funding MCC" (p.xv). It had rules: it did not seek US governmental funding and it did not allow foreign firms to join. It also skirted US antitrust law, at least until the 1984 National Cooperative Research Act was passed in reaction to it (p.xv). The book examines the lessons of this consortium, specifically in terms of forming such R&D alliances; understanding cross-organization technology transfer and commercialization; and public/private collaboration to develop jobs and capabilities (p.4).

To understand the need for MCC, we have to recall that in the early 1980s, the US technology industry was worried that Japanese tech companies—which cooperated closely via initiatives such as the VLSI Project—would overwhelm the capabilities of US firms (p.9). This fact worried not just the US tech industry but also the US federal government, which did not want its military technology to rely on foreign companies for "essential semiconductor and computer components" (p.10).

In fact, the authors examine various examples of research consortia, beginning with English research associations in 1917, then moving to Japanese consortia modeled on those associations in the 1950s, then small-scale US consortia and larger Japanese projects in the 1970s (p.14). They discuss MCC's formation in Chapter 2, but we'll rejoin the story in Ch.3, "MCC Comes to Texas."

MCC's site selection committee, headed by Admiral Bobby Inman (who is currently on faculty at the University of Texas), considered dozens of sites before narrowing them down to four: Raleigh-Durham, San Diego, Atlanta, and Austin. That was the order in which the cities were preferred at the beginning of the final selection process. But by the end of the process, a different order emerged: Austin first, then Atlanta, Raleigh-Durham, and finally San Diego (p.99). When MCC announced that it was coming to Austin, observers in the high-tech industry reacted in disbelief: it was nowhere near the top three universities in electronics and computer science research (Carnegie Mellon, MIT, Stanford) and it was perceived as a backwater (p.103).

The authors tell an instructive story about how the selection order changed and how Austin was eventually selected. But before they do, they discuss the notion of the technopolis:
The modern technopolis interactively links technology commercialization with public and private sectors to promote regional economic development and technology diversification. Four factors are fundamental in the development of a region as a technopolis: (1) the achievement of scientific preeminence in technology-based research, (2) the development of new technologies for emerging industries, (3) the attraction of major technology companies, and (4) the creation of home-grown technology companies. (p.100)
 Using this framework, the authors argue that Austin won out in large part "because of the planned-for excellence of its research universities in microelectronics research and graduate education, which coincided with MCC's research agenda" (p.105). Local and regional leaders, including business school dean George Kozmetsky, Governor Mark White, San Antonio mayor Henry Cisneros, and Ross Perot, coordinated closely to put together a package that involved endowing professorships in computer science and electrical engineering at UT (p.117). This was a good move: the proximity to a top research university was critical to the MCC site selection group (p.124), perhaps more critical than direct financial incentives for the consortium—although Texas offered these too:
However, what won for Texas was how its incentives were structured, which reflected how well the Texas leaders obtained and used information about MCC. The Texas incentive package came largely from the private sector, statewide, not from state and local taxes, and the incentives were structured so as to be an investment in the future of Texas as a state, its universities, and business development rather than funds given to MCC. (p.148)
About a third of the incentives involved building MCC a building, owned by the UT system, on land belonging to UT (p.148). Others included $15m for endowed UT positions in electrical engineering and computer science; 30 new faculty positions in microelectronics and computer science; and $2m for purchasing new equipment for research and teaching in these areas (p.149). Texas A&M made similar commitments, but with less specificity (p.150). "In the eyes of MCC's site visitors, the university component of the Texas incentive represented a brilliant strategy" — characterized as "'what we can do together with MCC to improve university research in microelectronics'" (p.158).

Critically, UT "agreed to triple the size of its microelectronics research program and establish 30 new endowed professorships in electrical engineering and computer science" via a two-week process (pp.158-159). This is perhaps the biggest miracle of all, if you are familiar with the inner workings of universities, and it was (obviously) accomplished by ignoring typical decision-making procedures (p.159).

Texas wanted MCC, and its leaders put in the effort, did the homework, and talked to the right people to make it happen. As the authors point out, acquiring MCC set the conditions for Austin to become a technopolis. Specifically, it built the educational infrastructure for Austin's technology focus; attracted companies and people working in technology; worked with the Austin Technology Incubator (established in 1989; see p.271) to spin off new technology companies; and set priorities for keeping such companies.

In Ch.5, the authors go on to discuss the necessity and problems of technology transfer, which they characterize (rightly, in my view) as a type of communication:
There is usually agreement ... that (1) technology is not just a "thing," and (2) transfer is a profoundly human endeavor. Essentially, "technology" is information that is put to use in order to accomplish some task, the knowledge of how to do something. "Transfer" is the movement of technology via some channel from one individual or organization to another. So technology transfer involves the application of knowledge, putting a tool to use.
The transfer of technology is a particularly difficult type of communication, in that it often requires collaborative activity between two or more individuals or functional units that are separated by a range of barriers. ... we can think of technology transfer as an interactive process with a great deal of back-and-forth exchange among individuals over an extended period of time. (p.333)
The authors characterize technology transfer as having four levels: quality R&D; acceptance; implementation; and application (p.335).

Skipping a bit, let's get to an aside about the establishment of the Austin Technology Incubator (ATI). "ATI had been formed in 1989 as an alliance of public and private interests to nurture technology-based companies for regionally based job growth and economic development. The IC2 Institute, The University of Texas at Austin, and the Institute's director, George Kozmetsky, had launched ATI as an experiment in business, academic, and government collaboration." (pp.413-414). In 1989,
the idea of a regionally based technology incubator was being championed by Dr. George Kozmetsky. To Kozmetsky, such an incubator would facilitate public/private collaboration at the regional level and it would spur economic development, fill vacant office space, train entrepreneurs, and create high-value jobs. The facility, which came to be called the Austin Technology Incubator (ATI), would act as a 'lightning rod,' linking talent, technology, capital, and business know-how to market needs. (p.451)
The authors also give a thumbnail history of IC2, which Kozmetsky founded in 1977, while still dean of the business school (p.453).

Overall, the book is just what I was looking for. In providing a history of MCC, it also provides a history of Austin's emergence as a technopolis, including backgrounds for institutions with which I am working—IC2, ATI—and greater insights into people who continue to be associated with them. It covers relevant subjects, such as technology transfer and infrastructure. And it's well told. If you're interested in technology, entrepreneurship, civic development, or Austin, definitely pick it up.

As a side note, this book made me think more about the question of cities competing to be sites for companies. Austin is currently one of the cities competing to be Amazon's second headquarters, and critics focus on the question of whether Amazon's potential contribution to the city actually outweighs the city's incentive package. But these deals have broader effects than raw revenue. Structured well—like the MCC deal—these deals can be realized in infrastructure-building that sets the city up for long-term success far beyond that of an individual company. After all, MCC closed its doors in 2004, but Austin remains a technopolis.