August 20, 2013

Fear and Loathing in Robotistan

Do you fear your (future) robot overlords? In a recent Mashable op-ed [1], John Havens argued that we should fear the future of artificial intelligence, if only for its propensity to get things wrong and our propensity to put too much trust in the machine's output. Other emerging themes in popular culture, from fear of the coming singularity [2] to fear of the deleterious impact robots will have on job growth [3], point to what I will call robo-utopianism and robo-angst, respectively.

Ken Jennings. One man who welcomes our new robotic overlords.

Is robo-angst a general fear of the unknown? Or is it a justified response to an emerging threat? I would argue that it is mostly the former. In a previous Synthetic Daisies post critiquing futurism, I postulated that predicting the future involves both unbridled optimism and potential catastrophe. While some of this uncertainty can be overcome by considering the historical contingencies involved, the mere existence of unknowns (particularly if they involve intangibles) drives angsty and utopian impulses alike.

Both of these impulses are also based on the nature of modern robotic technology. Perhaps due to our desire to cheaply replicate a docile labor force, robots represent intelligent behavior that is ultra-logical, but not particularly human [4]. Perhaps the other aspects of human intelligence are hard to reproduce, or perhaps there is indeed something else at work here. Nevertheless, this constraint can be seen in the nature of tests for sentience such as the Captcha (Turing test-like pattern recognition in context) used to distinguish humans from spambots.

Examples of Captcha technology. COURTESY: captcha.net
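
To make the Captcha idea concrete, here is a deliberately crude, text-only sketch of the challenge/response pattern. It is my own toy illustration (not how captcha.net or any real system works), and the "distortion" is trivially weak by design.

    # Toy Captcha: generate a challenge a human can read through the noise,
    # then check the response. Real Captchas use image distortion; this
    # text-only version only shows the challenge/response structure.
    import random
    import string

    def make_captcha(length=5):
        answer = "".join(random.choices(string.ascii_uppercase, k=length))
        # Sprinkle junk characters around the real letters (weak "distortion").
        junk = lambda: random.choice(["~", "`", "'", "."]) * random.randint(0, 2)
        challenge = " ".join(junk() + c + junk() for c in answer)
        return challenge, answer

    def check_response(response, answer):
        return response.strip().upper() == answer

    challenge, answer = make_captcha()
    print("Type the letters you see:", challenge)
    # A human reads the letters in context; a naive spambot matching exact
    # patterns (and unaware of the junk convention) is more likely to fail.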

So how do we go about achieving sentience? As robo-utopians would have it, this is the next logical step in artificial intelligence research, requiring only natural improvements to the current technology platform, given enough time. Does becoming sentient involve massive increases in the ultra-logical paradigm, massive increases in embedded context, or the development of an artificial theory of mind? And if making robots more human requires something else, do we even need to mimic human intelligence?

Perhaps part of the answer is that robots (physical and virtual) need to understand humans well enough to understand their questions. A recent piece by Gary Marcus in the New Yorker [5] posits that modern search and "knowledge" engines (e.g. Wolfram|Alpha) can do no better than chance (i.e. robo-stupidity) on truly deep, multilayered questions that involve contextual knowledge.

When robots do things well, it usually involves the aspects of human cognition and performance that we understand fairly well, such as logical analysis and pattern recognition. Many of the current techniques in machine learning and data mining are derived from topics that have been studied for decades. But what about the activities humans engage in that are not logical?
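
As a concrete example of this well-understood side of the ledger, here is a minimal pattern-recognition sketch. It assumes scikit-learn and its bundled handwritten-digit dataset, and the choice of a logistic regression classifier is illustrative rather than a reference to any particular system mentioned above.

    # Minimal sketch of conventional pattern recognition (assumes scikit-learn).
    # Supervised classification on labeled examples -- an approach with decades
    # of history -- performs well when the task is narrow and the context fixed.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()   # 8x8 handwritten digit images with labels 0-9
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    clf = LogisticRegression(max_iter=5000)   # a simple, well-studied linear model
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))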

Example of the biological absurdity test.

One example of adding to the ultra-logical framework comes from social robotics and the simulation of emotional intelligence [6]. But animals exhibit individual cognition, social cognition, and something else that cannot be replicated simply by adding parallel processing, emotional reflexivity, or "good enough" heuristics. What's more, the "logical/systematic" and "irrational/creative" aspects of human behavior are not independent. For better or worse, the right-brained/left-brained dichotomy is a myth. For robots to be feared (or not to be feared), they must be like us (i.e. assimilated).

Examples of machine absurdity. TOP: an absurd conclusion from a collection of facts, BOTTOM: deep irony and unexpected results, courtesy of a recommender system.

Perhaps what is missing are shared cultural patterns among a group of robots, or "cultural" behaviors that are nonsense from a purely logical and/or traditional evolutionary perspective. Examples include: the use of rhetoric and folklore to convey information, the subjective classification of the environment, and conceptual and axiomatic blends [7].

How do you incorporate new information into an old framework? For humans, it may or may not be easy. If the new idea falls within the prevailing conceptual framework, it is something humans AND robots can handle fairly well. However, when the idea (or exemplar, in the case of artificial intelligence) falls outside the prevailing conceptual framework, we face what I call the oddball cultural behavior problem.

Take ideas that lie outside the sphere of the prevailing conceptual model (e.g. spherical earth vs. flat earth, infection vs. pre-germ theory medicine) as an example. These ideas could be viewed as revolutionary findings, as ideas at odds with the status quo, or as crackpot musings [8]. The chosen point-of-view is informed either by naive theory (e.g. conceptual and axiomatic blends) or by pure logical deduction. Regardless of which is used, when empirical observations in a given area are sparse or largely unknown, arguments become less tied to formal models, and wild stories may predominate. This may explain why artificial intelligence sometimes makes nonsensical predictions, or why humans sometimes embrace seemingly nonsensical ideas.

Incorporating new information into an old framework, a.k.a. the oddball cultural behavior problem. When the idea falls well outside of the existing framework, how is it acted upon?
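
As a loose computational analogy (my own toy illustration, not drawn from the sources cited here), the oddball problem resembles novelty detection: an exemplar close to everything the system has already seen gets absorbed into an existing category, while a distant one is flagged as outside the framework. A minimal sketch, assuming only NumPy and an arbitrary distance cutoff:

    # Toy analogy for the oddball cultural behavior problem (illustrative only).
    # Exemplars near the existing "framework" are absorbed; distant ones are
    # flagged as oddballs -- revolutionary or crackpot, depending on your view.
    import numpy as np

    rng = np.random.default_rng(0)
    framework = rng.normal(0.0, 1.0, size=(200, 2))   # the prevailing framework
    centroid = framework.mean(axis=0)
    # Cutoff: 95th percentile of distances seen so far (an assumed threshold).
    cutoff = np.percentile(np.linalg.norm(framework - centroid, axis=1), 95)

    def classify(exemplar):
        distance = np.linalg.norm(np.asarray(exemplar, dtype=float) - centroid)
        return "absorbed into framework" if distance <= cutoff else "oddball"

    print(classify([0.5, -0.3]))   # near the framework -> absorbed
    print(classify([8.0, 9.0]))    # far outside the framework -> oddball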

In some cases, oddball cultural behavior is handled using conceptual blends (or short-cuts) [9] to integrate the new information. This is similar to, but distinct from, the way heuristics are used in decision-making. In this case, cultural change (or change in larger contexts/structures) is regulated, in a combinatorial manner, by these short-cuts. One might use a short-cut (more flexible than changing a finite number of rules) to respond to the immediate needs of the environment, but because it is not an exact response, the cultural system overshoots the optimal response, thus requiring additional short-cuts.
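
This overshoot-and-correct dynamic can be sketched as a simple feedback loop. The model below is my own toy illustration, under the assumption that each short-cut is a coarse correction with too much gain; it is not a formal claim from the works cited.

    # Toy model: short-cuts as coarse, over-aggressive corrections (illustrative).
    # Each step corrects toward the optimum, but the gain exceeds 1, so the
    # system overshoots and another short-cut is needed on the next step.
    def apply_shortcuts(state, optimum=1.0, gain=1.6, tolerance=0.01, max_steps=10):
        history = [state]
        for _ in range(max_steps):
            error = optimum - state
            if abs(error) < tolerance:
                break
            state += gain * error      # inexact response: overshoots the optimum
            history.append(round(state, 4))
        return history

    print(apply_shortcuts(0.0))
    # e.g. [0.0, 1.6, 0.64, 1.216, 0.8704, ...] -- oscillating around 1.0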

Moving on from what robots don't do well, some of the robo-angst is directed towards the integration of people and machines (or computation). The discussion in Havens' op-ed about Steve Mann might be understood as radically-transparent ubiquitous computing [10]. Steve Mann's experience is intriguing for the same reasons that human culture is a selectively-transparent ubiquitous framework for human cognition and survival. The real breakthroughs in autonomous intelligence in the future might only be made by incorporating radically-transparent ubiquitous computing into the design of such agents.

When tasks require intersubjective context, it is worth asking the question: which is funnier to the professional clown? A robotic comedian? Perhaps, but he's not quite skilled in the art. COURTESY: New Scientist and Dilbert comic strip.

Why would we want a robot that makes rhetorical slogans [11]? Or a robot that uses ritual to relate with other robots? Or a denialist [12] bot? Before the concurrent rise of big data, social media, and machine learning, the answer might be: we don't. After all, a major advantage of robots is to create autonomous agents that do not exhibit human foibles. Why would we want to screw that up?

However, it is worth considering that these same expert systems have uncovered a lot of aggregate human behavior that both violates our intuition [13] and is not something to be proud of. These behaviors (such as purchasing patterns or dishonesty) may not be optimal, yet they are the product of intelligent behavior all the same [14]. If we want to understand what it means to be human, then we must build robots that engage in this side of the equation. Then perhaps we may see the confluence of robo-angst and robo-utopia on the other side of the uncanny valley.

NOTES: 

[1] Havens, J.   You should be afraid of Artificial Intelligence. Mashable news aggregator, August 3 (2013).

[2] Barrat, J.   Our Final Invention: Artificial Intelligence and the End of the Human Era. Thomas Dunne Books (2013).

[3] Drum, K.   Welcome, robot overlords. Please don't fire us? Mother Jones Magazine, May/June (2013) AND Coppola, F.   The Wastefulness of Automation. Pieria magazine, July 13 (2013).

For a fun take on this, see: Morgan R.   The (Robot) Creative Class. New York Magazine, June 9 (2013).

[4] Galef, J.   The Straw Vulcan: Hollywood's illogical approach to logical decisionmaking. Measure of Doubt Blog, November 26 (2011).

[5] Marcus, G.   Why can't my computer understand me? New Yorker Magazine, August 16 (2013).

For a take on recommender systems and other intelligent agents gone bad (e.g. the annoying valley hypothesis), please see: Moyer, B.   The Annoying Valley. EE Journal, November 17 (2011).

[6] Dautenhahn, K., Bond, A.H., Canamero, L., Edmonds, B.   Socially Intelligent Agents. Kluwer (2002).

[7] Fauconnier, G. and Turner, M.   The Way We Think: Conceptual Blending And The Mind's Hidden Complexities. Basic Books (2013) AND Sweetser, E.   Blended spaces and performativity. Cognitive Linguistics, 11(3-4), 305-334 (2000).

[8] For an example of oddball and potentially crackpot ideas in science, please see: Wertheim, M.   Physics on the Fringe: Smoke Rings, Circlons, and Alternative Theories of Everything. Walker & Company (2011) AND Horgan, J.   In Physics, telling cranks from experts ain't easy. Cross-Check blog, December 11 (2011).

[9] Edgerton, R.B.   Rules, Exceptions, and Social Order. University of California Press, Berkeley (1985).

[10] For an interesting take on Steve Mann's approach to Augmented Reality and its social implications, please see: Alicea, B.   Steve Mann, misunderstood. Synthetic Daisies blog, July 18 (2012).

[11] Denton, R.E.   The rhetorical functions of slogans: Classifications and characteristics. Communication Quarterly, 28(2), 10-18 (1980).


[13] For an accessible review, please see the following feature and book: 

Lohr, S.   Sizing up Big Data, Broadening Beyond the Internet. Big Data 2013 feature, New York Times Bits blog, June 19 (2013).

Mayer-Schonberger, V. and Cukier, K.   Big Data: A Revolution That Will Transform How We Live, Work, and Think. Houghton-Mifflin (2013).

[14] Similar types of behaviors (e.g. the Machiavellian Intelligence hypothesis) can be seen in non-human animal species. For classic examples from monkeys, please see: Byrne, R.W. and Whiten, A.   Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Oxford University Press (1989). 
