Friday, October 28, 2011
Me, Myself and I, Robot - From 9/29/11
In the first part of her book, "Alone Together," Sherry Turkle paints an interesting picture of the psychological shifts that are occurring in the way humans view and approach both interpersonal interactions and our interactions with machines (or robots). In a very different tone than Jenkins in "Convergence Culture," Turkle mainly traces these changes through studies and observations of the way children interact and play with "interactive computer toys" (and eventually robots themselves) through the 80s, 90s, and into the current day. Her main argument seems to be that the trends set by children (and perhaps the elderly), and slowly co-opted by adults, in how we interact with and understand artificial intelligences tell us a lot about the ways we are coming to view human interaction as well. Namely, an increasing willingness to "participate" in interaction with AI can be seen as a psychological shift away from the messy, unpredictable, "dangerous" state of human affairs and into the arms of the safe, predictable, reliable robot. This is a very frustrating thought for someone like me, and I catch myself wondering how much truth there is in some of this analysis.
One concept that caught my eye in the first chapter was the idea of what constitutes "liveliness." Turkle cites a 1920s study which determined an object's life status by "considering its physical movement." She then moves into the 1980s when, first confronted with computational objects, children shifted their understanding of "liveliness" from physical movement to psychology. Machines were "alive enough" (a common phrase) if they gave off the impression of "knowledge" or "thought." In the 90s, with the advent of simulation video games, this liveliness was more concerned with evolution: a changing, growing, and adapting object was life-like. In the late 90s, this changed again and a certain "sociability" took hold. Turkle says: "...as criteria for life, everything pales in comparison to a robot's capacity to care."
This idea of a robot caring is what frustrates me. To belabor the obvious, robots don't "care." They don't "feel." They don't "understand." They don't have "emotions." According to Turkle, humans have started looking to robots for these things, but what they are truly getting are ILLUSIONS of these things. Turkle calls it the "performance" of caring, understanding, empathizing, etc. The disturbing thing is that, according to Turkle, humans are becoming increasingly willing to seek out and accept the performances of these things in place of the real thing. The performance of caring gives us an outlet but requires little to nothing of us: there is no REAL interaction required.
Turkle argues, and I vehemently agree, that these limited interactions deprive us of the depth that human life is designed to have. While robots may have certain functional uses in the future, I have a hard time seeing them stepping into a motherly role, or any role where they are turned to in order to truly fulfill human emotional needs. They are not human. I am a firm believer in the value of interpersonal interaction, so the things Turkle lists as missing from companionship with a robot, rather than a human, make all the sense in the world to me. She goes so far as to say that there is a "psychological risk" in this "robotic moment." Turkle argues that when humans seek companionship with a robot, it boils the "notion of companionship" all the way down to the basics of interaction. It doesn't even have to be interaction with feelings, as long as there is something we can interact with. No empathy, no understanding.
So why would humans turn to robots at any point rather than interact with another human? Turkle phrases it well when she says "to sustain relationships, one must accept others in their complexity. When we imagine a robot as a true companion, there is no need to do any of this work." She goes on to describe, in some of her case studies, the ways in which kids preferred their Furbies to real pets because they were cleaner and less complicated, or preferred a robot's interaction because it "seemed real" or was able to interact. Robots offer neatness, simplicity, maybe even order or safety in a world full of humans disappointing one another. Robots, and all sociable technology, promise what they cannot deliver. As Turkle says, they promise friendship but can only deliver a performance.
But it says something about humans and the way we are in this world that we are so willingly complicit in this illusion. Turkle describes many ways in which we willingly fool ourselves into making robots (or any machine) more than they are. She also describes any number of ways in which robot interaction is illegitimate, particularly in comparison to human interaction. While Turkle describes these shifts toward preferring more safety and order in our companionships, and settling for cheap imitations, it seems as though this is just one direction in which the world is going. However, the importance of recognizing these things is not only to see what drives us further into being "alone, together" - that we fear being hurt, humiliated, let down, etc. It is also important to see that it is worth the risk of all of those things to be open to true, human interaction.
Images:
I, Robot
Skynet Becomes Self Aware