September 28, 2013

Fireside Science: Bits of Blue-sky Scientific Computing

This is the first post in a series being cross-posted to Fireside Science (the new group blog sponsored by SciFund).

For my first post to Fireside Science, I would like to discuss some advances in scientific computing from a "blue sky" perspective. I will approach this exploration by talking about how computing can improve both the modeling of our world and the analysis of data.

Better Science Through Computation

The traditional model of science has been (ideally) an interplay between theory and data. With the rise of high-fidelity and high-performance computing, however, simulation and data analysis have become critical components in this dialogue.

Simulation: controllable worlds, better science?

The need to deliver high-quality simulations of real-world scenarios in a controlled manner has many scientific benefits. Uses for these virtual environments include simulating hard-to-observe events (supernovae or other events in stellar evolution) and providing highly-controlled environments for cognitive neuroscience experimentation (simulations relevant to human behavior).

A CAVE environment, being used for data visualization.

Virtual environments that achieve high levels of realism and customizability are rapidly becoming an integral asset to experimental science. Not only can stimuli be presented in a controlled manner, but all aspects of the environment (and even human interactions with the environment) can be quantified and tracked. This allows for three main improvements on the practice of science (discussed in greater detail in [1]):

1) Better ecological validity. In psychology and other experimental sciences, high ecological validity allows for the results of a given experiment to be generalized across contexts. High ecological validity results from environments which do not differ greatly from conditions found in the real-world.

Modern virtual settings allow for high degrees of environmental complexity to be replicated in a way that does not impede normal patterns of interaction. Virtual worlds now allow for interaction using gaze, touch, and other means often used in the real world. Contrast this with a 1980s-era video game: we have come a long way since crude interactions with 8-bit characters using a joystick. And it will only get better in the future.

Virtual environments have made the cover of major scientific journals, and have great potential in scientific discovery as well [1].

2) The customization of environmental variables. While behavioral and biological scientists often talk about the effects of environment, these effects must often remain qualitative (or at best crudely quantitative). With virtual environments, environmental variables can be added, subtracted, and manipulated in a controlled fashion.

Not only can the presence/absence and intensities of these variables be directly measured, but the interactions between virtual environment objects and an individual (e.g. human or animal subject) can be directly detected and quantified as well.

3) Greater compatibility with big data and computational dynamics: The continuous tracking of all environmental and interaction information results in the immediate conversion of this information to computable form [2]. This allows us to build more complete models of the complex processes underlying behavior or discover subtle patterns in the data.

Big Data Models

Once you have data, what do you do with it? That's a question that many social scientists and biologists have traditionally taken for granted. With the concurrent rise of high-throughput data collection (e.g. next-gen sequencing) and high-performance computing (HPC), however, this is becoming an important issue for reconsideration. Here I will briefly highlight some recent developments in big data-related computing.

Big data can come from many sources. High-throughput experiments in biology (e.g. next-generation sequencing) are one such example. The internet and sensor networks also provide a source of large datasets. Big datasets and difficult problems [3] require computing resources that are many times more powerful than what is currently available to the casual computer user. Enter petascale computing.

National Petascale Computing Facility (Blue Waters, UIUC). COURTESY: Wikipedia.

Most new laptop computers (circa 2013) are examples of gigascale computing. These computers utilize 2 to 4 processor cores (often using only one at a time). Supercomputers such as the Blue Waters machine at UIUC have many thousands of processors, and operate at the petascale [4]. IBM's Roadrunner had well over 10,000 processors. The next generation of machines will run at the exascale (i.e. 1000x faster than petascale). The point of all this computing power is to perform many calculations quickly, as the complexity of a very large dataset can make its analysis impractical using small-scale devices.
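To get a feel for these orders of magnitude, here is a back-of-the-envelope sketch (the FLOPS figures are illustrative round numbers, not benchmarks of any particular machine):

```python
# How long does a fixed workload take at each computing scale?
WORKLOAD_OPS = 1e15  # one petaflop-second of work (illustrative)

rates_flops = {
    "gigascale laptop": 1e9,          # ~billions of operations per second
    "petascale supercomputer": 1e15,  # ~quadrillions of operations per second
}

runtimes = {label: WORKLOAD_OPS / flops for label, flops in rates_flops.items()}
for label, seconds in runtimes.items():
    print(f"{label}: {seconds:,.0f} seconds (~{seconds / 86400:.2f} days)")
```

A job that a petascale machine finishes in one second would occupy the hypothetical laptop for roughly a million seconds, or about eleven and a half days.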

Even using petascale machines, difficult problems (such as drug discovery or very-large phylogenetic analyses) can take an unreasonable amount of time when run serially. So increasingly, scientists are also using parallel computing as a strategy for analyzing and processing big data. Parallel computing involves dividing up the task of computation amongst multiple processors so as to reduce the overall amount of compute time. This requires specialized hardware and advances in software, as the algorithms and tools designed for small-scale computing (e.g. analyses done on a laptop) are often inadequate to take full advantage of the parallel processing that supercomputers enable.
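The divide-and-conquer idea behind parallel computing can be sketched in a few lines. This is a minimal toy example using Python's standard multiprocessing module (the "expensive" per-chunk computation here is just a sum of squares, standing in for a real analysis kernel):

```python
from multiprocessing import Pool

def analyze_chunk(bounds):
    """Stand-in for an expensive per-chunk computation (here, a sum of squares)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    # Divide one large range into four chunks and farm them out to worker processes.
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:
        partial_results = pool.map(analyze_chunk, chunks)
    total = sum(partial_results)
    # The serial and parallel results agree; only the wall-clock time differs.
    print(total)
```

The design challenge hinted at above is that not every algorithm decomposes this cleanly: when chunks depend on each other's results, communication and synchronization costs eat into the speedup.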

Physical size of the Cray Jaguar supercomputer. Petascale computing courtesy of the Oak Ridge National Lab.

Media-based Computation and Natural Systems Lab

This is an idea I presented at a Social Simulation conference (hosted in Second Life) back in 2007. The idea involves building a virtual world that would be accessible to people from around the world. Experiments could then be conducted through the use of virtual models, avatars, secondary data, and data capture interfaces (e.g. motion sensors, physiological state sensors).

The Media-based Computation and Natural Systems (CNS) Lab, in its original Second Life location, circa 2007.

The CNS Lab (as proposed) features two components related to experiments not easily done in the real-world [5]. This extends virtual environments into a relatively unexplored domain: the interface between the biological world and the virtual world. With increasingly sophisticated I/O devices and increases in computational power, we might be able to simulate and replicate the black box of physiological processes and the hard-to-observe process of long-term phenotypic adaptation.

Component #1: A real-time experiment demonstrating the effect of extreme environments on the human body. 

This would be a simulation to demonstrate and understand the limits of human physiological capacity usually observed in limited contexts [6]. In the virtual world, an avatar would enter a long tube or tank, the depth of which would serve as an environmental gradient. As the avatar moves deeper into the length of the tube, several parameters representing variables such as atmospheric pressure, temperature, and medium would increase or decrease accordingly.
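One might parameterize such a gradient as a simple function of depth. The sketch below uses the standard hydrostatic approximation for pressure (~1 atm per 10 m of seawater); the temperature profile is a simplified linear stand-in, not a real oceanographic model:

```python
def environment_at_depth(depth_m):
    """Map avatar depth (meters) to hypothetical environmental parameters."""
    # Surface pressure plus ~1 atm of hydrostatic load per 10 m of seawater.
    pressure_atm = 1.0 + depth_m / 10.0
    # Toy linear profile cooling toward ~4 C deep water.
    temperature_c = max(4.0, 20.0 - 0.016 * depth_m)
    return {"depth_m": depth_m,
            "pressure_atm": pressure_atm,
            "temperature_c": temperature_c}

# As the avatar moves down the tube, the parameters shift accordingly.
print(environment_at_depth(0))     # surface conditions
print(environment_at_depth(1000))  # ~101 atm, near-bottom temperature
```

In a working implementation, each tick of the simulation would feed these values back into the avatar's physiological model (heart rate, gas exchange, and so on).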

There should also be ways to map individual-level variation to the avatar in order to provide some connection between the participant and the simulation of human physiology. Because this experience is distributed on the internet (originally proposed as a Second Life application), a variety of individuals could experience and participate in an experiment once limited to a physiology laboratory.

Examples of deep-sea fishes (from top): Barreleye (Macropinna microstoma), Fangtooth (Anoplogaster cornuta), Frilled Shark (Chlamydoselachus anguineus). COURTESY: National Geographic and Monterey Bay Aquarium.

Component #2: An exploration of deep sea fish anatomy and physiology. 

Deep sea fishes are an example of organisms adapted to extreme environments that may have evolved from ancestral forms originating in shallow, coastal environments [7]. The object of this simulation is to observe a "population" change over from ancestral pelagic fishes to derived deep sea fishes as environmental parameters within the tank change. The participant will be able to watch evolution "in progress" through a time-elapsed overview of fish phylogeny.

This would be an opportunity to observe adaptation as it happens, in a way not necessarily possible in real-world experimentation. The key components of the simulation would be: 1) time-elapsed morphological change and 2) the ability to examine a virtual model of the morphology before and after adaptation. While these capabilities would be largely (and in some cases wholly) inferential, they would provide an interactive means to better appreciate the effects of macroevolution.

A highly stylized (e.g. scala naturae) view of improving techniques in human discovery, culminating in computing.

A tongue-in-cheek cartoon showing the evolution of computer storage (as opposed to processing power). Nevertheless, this is pretty rapid evolution.


[1] These journal covers are in reference to the following articles: Science cover, Bainbridge, W.S.   The Scientific Research Potential of Virtual Worlds. Science, 317, 412 (2007). Nature Reviews Neuroscience cover, Bohil, C., Alicea, B., and Biocca, F. Virtual Reality in Neuroscience Research and Therapy. Nature Reviews Neuroscience, 12, 752-762 (2011).

[2] Raw numeric data, measurement indices, and, ultimately, zeros and ones.

[3] Garcia-Risueno, P. and Ibanez, P.E.   A review of High Performance Computing foundations for scientists. arXiv, 1205.5177 (2012).

For a very basic introduction to big data, please see: Mayer-Schonberger, V. and Cukier, K.   Big Data: a revolution that will transform how we live, work, and think. Eamon Dolan (2013).

[4] Hemsoth, N.   Inside the National Petascale Computing Facility. HPCWire blog, May 12 (2011).

[5] Alicea, B.   Reverse Distributed Computing: doing science experiments in Second Life. European Social Simulation Association/Artificial Life Group (2007).

[6] Downey, G.   Human (amphibious model): living in and on the water. Neuroanthropology blog, February 3 (2011).

For an example of how human adaptability in extreme environments has traditionally been quantified, please see: LeScanff, C., Larue, J., and Rosnet, E.   How to measure human adaptation in extreme environments: the case of Antarctic wintering-over. Aviation, Space, and Environmental Medicine, 68(12), 1144-1149 (1997).

[7] For more information on deep sea fishes, please see: Romero, A.   The Biology of Hypogean Fishes. Developments in Environmental Biology of Fishes, Vol. 21. Springer (2002).

September 24, 2013

Perceptual Time and the Evolution of Informational Investment

We tend to think of the flow of time in the context of evolution and biology as a fairly consistent thing [1]. We are used to the conceptual mechanisms of molecular clocks, thermodynamic entropy, and circadian rhythms. All of these mechanisms maintain regularity with respect to the flow of time. However, this order may not be as universal as we would like to believe. In fact, there may be a form of perceptual relativism enabled by evolution, physiology, and (increasingly) technology that unseats this order in many unexpected ways. This post will utilize two recent papers as a means to explore this issue.

A recent paper entitled "Metabolic rate and body size are linked with perception of temporal information" by Kevin Healy and colleagues [2] demonstrates that differences in visual sampling across species living in different ecosystems are constrained by the metabolic rate of the organism. Visual sampling can be measured in terms of an organism's critical flicker fusion (CFF) frequency. CFF frequency is the sampling rate (or rather, the minimum sampling rate) at which images captured by the retina are integrated by the brain into coherent visual scenes. A very-high or very-low CFF frequency may lead to differences in how the flow of environmental events is perceived by the organism [3], and can lead to other differences in performance (see Figure 1 for example).

Figure 1. Human trying to swat a fly.

As a side note, this paper has elicited an interesting set of reactions in the news media. In some cases, this is being sold as a suggestion that flies experience a lifespan equivalent to that of humans, even though the human lifespan (in terms of biological processes) is much longer [4]. Regardless of the speculation, the potential for relativistic time-keeping [5] across species may also be interesting from an evolutionary standpoint. Is the CFF frequency determined solely by the requirements of ecological niche [6], or is the CFF frequency constrained by metabolic rate, and why? It is well-known that metabolic rate scales allometrically with body size [7], for reasons that are clearly due to energetic efficiency. But might this also extend to CFF frequency?
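The "slow motion" framing in the media coverage amounts to a simple ratio of sampling rates. As a toy illustration (the CFF values below are round numbers for illustration only, not measurements from [2]):

```python
def perceived_slowdown(cff_hz, reference_cff_hz=60.0):
    """Ratio of an organism's visual sampling rate to a human-like reference.

    A ratio > 1 loosely corresponds to events appearing to unfold more
    slowly (more visual 'frames' per unit of real time).
    """
    return cff_hz / reference_cff_hz

# Approximate CFF values (Hz) -- illustrative round numbers only.
organisms = {"human": 60.0, "dog": 80.0, "fly": 250.0, "sea turtle": 15.0}
for name, cff in organisms.items():
    print(f"{name}: {perceived_slowdown(cff):.2f}x")
```

On this crude reading, a fly sampling at roughly four times the human rate would experience a swatting hand as unfolding about four times more slowly, which is the intuition behind Figure 1.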

Figure 2. CFF frequency, explained using a classic reel-to-reel movie projector.

The authors suggest that CFF frequency is merely constrained but not determined by the metabolic rate. This pattern is predicted by the expensive tissue hypothesis [8]. This hypothesis suggests that the amount and structure of neural tissue in an organism must be highly optimized due to the high energetic cost of electrical activity/excitability. In general, the more neural tissue used by the organism (e.g. bigger brains, more elaborate eyes), the higher the energetic cost. If the cost is high enough, this clearly serves as an absolute constraint on the size of an organism's neuronal architecture. But what are the consequences of this with respect to cognitive complexity [9]?

Figure 3. The principle behind the expensive tissue hypothesis and metabolic scaling, visualized.

At this point, I would like to propose something called the "animation bottleneck" hypothesis, which is similar to one role ascribed to early attentional selection mechanisms in human consciousness [10]. The animation bottleneck hypothesis suggests that while higher-frequency visual sampling of the environment provides an advantage for identifying and reacting to very-high-frequency events (being able to catch insect prey or the beginnings of an explosion), lower-frequency visual sampling may have other advantages that result in an evolutionary tradeoff. In the case of CFF frequency, a lower sampling rate might result in a greater need to make proper inferences as to what will happen in between the samples. This could result in bigger brains. However, it might also have other consequences, such as the evolution of attentional capacity [11]. If so, variation in the sampling rate within a species might have unique fitness consequences.

So what happens when you have variation in the environment that far exceeds the baseline ability of perception? In natural populations, the findings of [2] demonstrate that environmental stimuli are less of a selective pressure than often assumed [6]. But in the technological environment, there might be signals that exceed what is typically found in the natural world (e.g. ultrafast extreme events). Such is the case with HFT (high-frequency trading) [12]. HFT is defined by [13] as computer-driven algorithm-based trading at speeds measured in millionths of a second.

Figure 4. Trading of stocks according to the HFT model [14]: thirty (top) and five-hundred (bottom) millisecond advantages. COURTESY: New York Times.

A recent paper on HFT and decision-making called "Abrupt rise of new machine ecology beyond human response time" considers just this possibility [15]. In this case one can use artificial agents to explore whether or not the information sampling capacity discussed in [2] is merely a constraint of animal visual systems or if it is a consequence of fitness and selection. This is an intriguing paper, but does not fit cleanly into the context of biological evolution and/or human performance. Nevertheless, it raises a few key points about the evolution of neurobiological information processing.

Figure 5. Illustration of ultrafast extreme events in the context of HFT. COURTESY: Figure 1 in [15].

The authors model this as a competitive process of trading agents using their own ecological perspective. A simulation is used to better understand the relationship between strategies employed by a population of agents and ultrafast extreme events (e.g. the high-frequency trading of shares or, in some cases, a so-called flash-crash). In [16], HFT-reliant trading behavior enables three key advantages, which stem from both having access to very-high-frequency environmental samples and the ability to act upon them [15]. These include: better access to the market, a major speed advantage, and a greater understanding of the market's temporal microstructure. Another ecological explanation suggests that computers trading at very-high frequencies safeguard the market against irrational or inexperienced traders [17].

By now, it may seem self-evident that stimuli related to HFT provide an incentive for higher sampling rates, if not some sort of intra-generational selective advantage. Yet it may also be that most of these ultra-high-frequency events are not the information they are made out to be. Rather, the real information in markets may reside in longer interval periods. Individuals that sample the markets at HFT-like frequencies may actually be oversampling their environment [18]. In fact, it may be that the purported advantages of HFT are largely driven by noise (e.g. benefits derived from chance) and not information. Returning to the fly visual system, it could also be the case that flies deal with large amounts of visual noise, which may act to suppress any perceptual (or fitness) advantage they may gain from being able to detect things at very high frequencies.
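The oversampling argument can be made concrete with a toy model, assuming (purely for illustration) that the "real" market signal is slow and the tick-by-tick fluctuations are random noise:

```python
import math
import random

random.seed(42)  # fixed seed for reproducibility

def observe(t):
    """A slow 'market' signal (period ~100 ticks) buried in fast random noise."""
    signal = math.sin(2 * math.pi * t / 100.0)
    noise = random.gauss(0.0, 1.0)
    return signal + noise

# Fast sampler: reacts to every tick, so its view is dominated by noise.
fast_samples = [observe(t) for t in range(1000)]

# Slow sampler: averages over windows of 50 ticks, suppressing the noise.
window = 50
slow_samples = [sum(fast_samples[i:i + window]) / window
                for i in range(0, 1000, window)]

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# Averaging shrinks the noise variance by ~1/window while (partially)
# preserving the slow signal.
print(variance(fast_samples), variance(slow_samples))
```

Under these assumptions, the tick-by-tick observer sees far more variance, but most of it is noise rather than exploitable information, which is exactly the worry raised above about HFT-like sampling rates.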

Figure 6. The relationship between the collective action of agents, share price, and strategies employed by agents. COURTESY: Figure 6 in [15].

In short, are visual systems and cognitive complexity selected for the right amount of information in the environment, or are they constrained by other factors? And what consequences does this have for the evolution of signaling and perception? Perhaps very-high and very-low frequency events can be inferred by the visual system and brain in a way that is "good enough" not to be detrimental to fitness. On the other hand, perhaps very-high and very-low frequency events provide an opportunity to create new niches and hide communication from predators/prey and/or conspecifics. In both cases, these types of events have a subtle effect on cognition and behavior that is largely mysterious in nature. The deployment of evolutionary simulations might provide us with some answers.

UPDATE (6/18/2014): This post has been re-published with slight modifications at Machines Like Us.


[1] For an alternate philosophical and theoretical view, this book might be interesting: Vrobel, S., Rossler, O.E., Marks-Tarlow, T.   Simultaneity: Temporal Structures and Observer Perspectives. World Scientific (2008).

[2] Healy, K., McNally, L., Ruxton, G.D., Cooper, N., and Jackson, A.L.   Metabolic rate and body size are linked with perception of temporal information. Animal Behavior, 86, 685-696 (2013).

[3] Assuming that CFF frequency is the only component of visual perception, and that the functions of these components are not linked. For more on this, please see: Skorupski, P. and Chittka, L. Differences in photoreceptor processing speed for chromatic and achromatic vision in the bumblebee Bombus terrestris. The Journal of Neuroscience, 30: 3896–3903 (2010).

[4] The popular media have spun this paper a number of different ways (e.g. Google: "Healy" + "Animal Behavior" + "Fly" + "Hz"), which seems to be a lesson in what the public takes away from a research paper. A few examples:

a) Silverman, R.   Flies see the world in slow motion, say scientists. The Telegraph, September 16 (2013).

b) Slo-mo Mojo. Economist, September 21 (2013).

c) Time is in the eye of the beholder: time perception in animals depends on their pace of life. ScienceDaily, September 16 (2013).

[5] There are a number of interesting parallels between the perceptual dilation of visual cues in time and the opportunities afforded by virtual world simulations. For more information, please see: Alicea, B.   Relativistic Virtual Worlds: an emerging framework. arXiv, 1104.4586 (2011).

[6] A competing hypothesis (strong ecological) predicts that the response dynamics of retina are ecosystem-specific. For more information, please see: Autrum, H.   Electrophysiological analysis of the visual system in insects. Experimental Cell Research, 14(S5), 426-439 (1958).

[7] Brown, J.H., Gillooly, J.F., Allen, A.P., Savage, V.M., and West, G.B.   Towards a Metabolic Theory of Ecology. Ecology, 85(7), 1771-1789 (2004).

[8] Aiello, L.C. and Wheeler, P.   The Expensive-tissue hypothesis: the brain and the digestive system in human and primate evolution. Current Anthropology, 36(2), 199-221 (1995).

[9] Assuming these comparisons can be made for CFF frequency in the first place. For more on this, please see: Chittka, L., Rossiter, S.J., Skorupski, P., and Fernando, C.   What is comparable in comparative cognition? Philosophical Transactions of the Royal Society B, 367, 2677-2685 (2012).

[10] For more background on early attentional selection and the connections between vision and the construction of conscious percepts, please see:

a) Zhaoping, L. and Dayan, P.   Pre-attentive visual selection. Neural Networks, 19, 1437-1439 (2006).

b) Van Rullen, R. and Koch, C.   Is perception discrete or continuous? Trends in Cognitive Sciences, 7(5), 207 (2003).

c) Ogman, H. and Breitmeyer, B.G.   The First Half-Second: the microgenesis and temporal dynamics of unconscious and conscious visual processes. MIT Press (2006).

[11] For more on the evolution and life-history variability in attentional capacity, please see:

a) Kruschke, J.K. and Hullinger, R.A.   Evolution of attention in learning. In N.A. Schmajuk (ed.) Computational Models of Conditioning. pgs. 10-52, Cambridge University Press, Cambridge, UK (2010).

b) McAvinue, L.P., Habekost, T., Johnson, K.A., Kyllingsbek, S., Vangkilde, S., Bundesen, C., and Robertson, I.H.   Sustained attention, attentional selectivity, and attentional capacity across the lifespan. Attention, Perception, and Psychophysics, 74(8), 1570-1580 (2012).

c) Humphreys, G.W., Kumar, S., Yoon, E.Y., Wulff, M., Roberts, K.L., and Riddoch, M.J.   Attending to the possibilities of action. Philosophical Transactions of the Royal Society B, 368, 20130059 (2013).

[12] Ritholtz, B.   What Happens During 1 Second of HFT? Big Picture blog, May 7th (2013).

[13] Patterson, S. and Rogow, J.   What's behind high-frequency trading? Wall Street Journal, August 1 (2009).

[14] Duhigg, C.   Stock traders find speed pays, in Milliseconds. NYTimes, July 23 (2009).

[15] Johnson, N., Zhao, G., Hunsader, E., Qi, H., Johnson, N., Meng, J., and Tivnan, B.   Abrupt rise of new machine ecology beyond human response time. Scientific Reports, 3, 2627 (2013).

[16] Lopez, L.   A high-frequency trader explains his three basic advantages. Business Insider, September 20 (2012).

[17] Smith, N.   A healthy side-effect of high-frequency trading? Not Quite Noahpinion blog, August 11 (2013).

[18] Alicea, B.   Economic trace, pondered. Synthetic Daisies blog, November 12 (2011).

September 16, 2013

I, Automaton

Here are a few robotic-themed posts from Tumbld Thoughts. The first (Mechatronoids -- Artificial Muscle-heads) gives my take on the difference between robotics and mechatronics. The second (Spock vs. Spock vs. Autonomous Control) is a face-off between three kinds of highly-logical intelligence. Scroll down the page to see who wins.

I. Mechatronoids -- Artificial Muscle-heads

What is the difference between artificially intelligent (AI) robots and mechatronics? The informal answer: while some forms of AI are trying to get into University [1], bio-inspired mechatronic devices are fighting it out in the ring. 

This video of the Otherlab's bopem popem robots at this year's Google I/O conference is a nice example of pneubotics (soft robots [2] controlled by air pumps) in action. For more information, see this Synthetic Daisies feature on the OtherLab from December 2011 [3].

II. Spock vs. Spock vs. Autonomous Control

Here are clips from a rather lengthy Audi advertisement featuring a comical duel of Spock vs. Spock. Playing 3-D chess on their iPads is only the beginning. Featuring a cameo by the self-driving car from the Dynamic Design Lab at Stanford.

For more, check out the Audi Spock Challenge on YouTube. And speaking of autonomous machines, check out the DARPA Robotic Challenge, featuring RoboSimian from NASA's Jet Propulsion Lab (JPL).


[1] Strickland, E.   Can an AI get into the University of Tokyo? IEEE Spectrum, August 21 (2013).

[2] For more information on soft robots, please see the Popsci soft robots tag.

[3] Alicea, B.   Tour of the OtherLab. Synthetic Daisies blog, December 1 (2011).

September 12, 2013

Inaugural post for the Fireside Science group blog

As mentioned in a post from June, I was a part of the first #SciFund Challenge outreach course. The course was held digitally and via Google+ hangouts. We covered a number of topics related to alternative sources of project funding (e.g. "selling" your research to a broader audience) and scientific outreach (e.g. re-interpreting your research for a broader audience).

One thing that grew out of that initiative was a group blog called Fireside Science [1]. In my last post on #SciFund, I promised to provide more information when the first post went live. The first post is now live (compiled and edited by Jenna Walrath), and as promised, here is the link. We will rotate [2] the guest posts -- Synthetic Daisies will host a cross-posted feature in about three weeks or so. Enjoy.

UPDATE (9/26): a unique site (and logo) for Fireside Science was provided by #SciFund, and is now live.

[1] Rowlands, C.L.   Come Together: a guide for group blogs. Just Another WordPress Weblog, July 10 (2013).

[2] We have a very diverse and eclectic lineup, but topics will focus on the areas of biology/medical science and outreach strategies.

September 10, 2013

The Value of Academic Work (brief exploration)

Here are two items cross-posted from Tumbld Thoughts. They are both relevant to innovation and the relative value of achievement. The first post (Need a Social Media strategy?) highlights the value of social media in the scheme of academic production. The second (On Value, Celebrity, and their Discontents) focuses on how value is both extracted from intellectual work and ascribed to its producers.

I. Need a Social Media strategy?

How do you use social media to promote your work? In academia, a term such as synergy is probably premature. However, C. Titus Brown, speaking at the BEACON Center congress, provided an overview (drawn mostly from personal experiences) of how social media can be used for promoting and advancing academic scholarship. 

The centerpiece of a social media strategy is the open-source archiving of your work. One popular option has been the arXiv preprint server. Growth in the number of new q-bio category submissions over the past ten years has exceeded 500%. There are other viable options for this as well. And this includes not only manuscripts, but also computer code and datasets. Perhaps Aaron Swartz will have the last laugh......

As a response to the issues raised in C. Titus' talk, the image below presents my personal (and perhaps idealized) pipeline for scholastic production, from hazy idea to finished product. 

II. On Value, Celebrity, and their Discontents

Here are two tangentially-related items about ideas and their relative value (with particular relevance to academia). The first article is about better ways to monetize innovation, and the second is about the value of ideas vs. the added value of celebrity.

I) Arbesman, S.   "Dark Intellectual Property": why we need a Kickstarter for patents. Social Dimension blog, July 25 (2013).
We need to democratize the ways in which IP is discovered/licensed. The focus of this article is on finding ways to shine light on the potential value in patents and other innovations that are often missed using current techniques. Some key points:

* we need to leverage the power of information technologies and social media to build IP marketplaces and connect innovators with investors.

* building a marketplace must go beyond property enclosure and include enabling better navigation of the patent system.

* we need to promote new ways of interacting with intellectual property (community vs. pure transactions).

* the use of auctions (e.g. bidding wars) will reduce the negative effects of patent troll behavior on IP markets.

* in general, the IP market is highly illiquid. This poses an ever-present problem for properly valuing innovations.

II) Hanson, R.   Beware Star Academia. Overcoming Bias blog, July 27 (2013).

This post focuses on the professionalization of many spheres of human creativity and innovation over the last century or so. Two examples include popular comedy and music, which have moved from freely-exchanged, well-known standards to "star" (e.g. personality-driven) performers and performances. Some key points:

* content and personality have become intertwined (example: Andy Warhol's art).

* academia may be exhibiting a similar trend. The structure of academia is a way to professionalize arguments and concepts of the world. 

* due to the nature of academic discourse and scientific inquiry, a consensus that subsumes individual arguments (e.g. theoretical syntheses) is required.

* to counter the "star power" trend, the focus should be more about ways intellectuals present arguments rather than arguments themselves.

* in cases where "star power" predominates (e.g. science popularizers), the standard of excellence should be whether people care about the actual argument being made, rather than the overall impressiveness of the person making it.

Other (semi-relevant) articles I ran across while compiling this post:

Bilton, N.   Internet pirates will always win. New York Times, August 4 (2012).

Krugman, P.   Nate Silver, superstar. Conscience of a Liberal blog, August 5 (2013).

Jones, J.   Was celebrity really Warhol's legacy? Jonathan Jones On Art blog, May 13 (2009).

September 3, 2013

Advances in AI for and from the mind

Here are four short features from my micro-blog (Tumbld Thoughts) that creatively discuss the current and future state of Artificial Intelligence/Machine Learning research. Featured are: LIVE from Annoying Valley, Internet (I), Thinking more like a theorist...... (II), Trends in Future Research (III), and The path to machine consciousness will run through the executive network (IV). 

I. LIVE from Annoying Valley, Internet

Here are a few readings [1] on how recommender systems and other intelligent agents go AWOL on a platform of creative destruction. Ads such as "enlarge the size of your portfolio using this one simple trick" [2] are an example of the "annoying valley", which is a version of the uncanny valley ubiquitous in human-robot interaction.

Thanks go to Calvin and Hobbes (Bill Watterson), and Thomas Hobbes and Joseph Schumpeter for the quotes (reworked from their original syntax).

II. Thinking more like a theorist......

Now, I think I'll get in touch with my inner theorist. Sheldon from "Big Bang Theory" summarizes the story of my academic career (at the 00:10 mark). What can we do with Glenn Shafer's mathematical theory of evidence [3]? How about enabling complex data fusion and context-aware robotics (see figure below)? Next, Paul Krugman brings us his thoughts on the purported death of economic theory. Last but not least, a group of psychologists ask under which conditions theory can obstruct research progress [4]. This can be contrasted with the exclusive use of naive theories (e.g. common sense models) in AI research.
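The data-fusion angle on Shafer's theory can be sketched compactly. Below is a minimal implementation of Dempster's rule of combination over a tiny frame of discernment; the two "sensors" and their mass assignments are hypothetical, chosen only to show the mechanics:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two belief mass functions (dicts mapping frozenset -> mass)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass assigned to contradictory hypotheses
    # Renormalize by the non-conflicting mass (1 - K).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical sensors weighing in on whether an object is a car or a truck.
CAR, TRUCK = frozenset({"car"}), frozenset({"truck"})
EITHER = CAR | TRUCK  # mass on EITHER expresses ignorance, not a 50/50 split
sensor_a = {CAR: 0.6, EITHER: 0.4}              # fairly sure it's a car
sensor_b = {CAR: 0.3, TRUCK: 0.5, EITHER: 0.2}  # leans truck, but uncertain
fused = dempster_combine(sensor_a, sensor_b)
print(fused)
```

The ability to assign mass to the ambiguous set EITHER, rather than forcing a probability split, is what makes this framework attractive for fusing noisy, partially informative sensor streams.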

III. Trends in Future Research

Here is a Quora thread on the top problems in machine learning. Answers range from a bulleted list of hot topics to longer discussions. Major problems (as defined by the crowd) include: gesture recognition, learning from social networks and media, deep learning [5], newsfeed aggregation, and scalability. Interestingly, there is only moderate overlap between individual answers. 

Also interesting is the related Quora thread on important problems in the field of Artificial Intelligence over the short term (5-10 years). It will be interesting to see how these predictions correspond with future breakthroughs and developments in the field [6].

IV. The path to machine consciousness will run through the executive network

"Attention (awareness) is a data-handling method used by neurons. It isn’t a substance and it doesn’t flow"

An interesting quote from an even more interesting story [7] by Michael Graziano, a neuroscientist at Princeton. He makes the case for how (and how not) to study consciousness. While theories of consciousness are plentiful, he argues (along with Christof Koch [8] of Caltech/Allen Brain Institute) that consciousness is primarily a phenomenon of attention and mental reflection.

Returning to the quote: if consciousness is a form of awareness, and awareness is a model of attention, then they all amount to a data-handling procedure the brain uses to select and reflect on information from the environment. In that sense, consciousness is a non-physical entity that does not operate like a physical intention [9]. This flow of physical intention can be distinguished from the experiential flow involved in creativity [10] or the flow of information in attentional networks of the brain [11]. By contrast, the idea that consciousness flows out into objects in the environment (e.g. a portrait or other object) is pre-scientific superstition.

In addition, there is evidence (highlighted in Graziano's article) that consciousness is largely an "after the fact" mental construction, which feeds back to shape subsequent attentional selection. What is the missing piece in science's understanding of consciousness? Graziano's answer might surprise you.

Images in IV: Debugger, xkcd comics; human attentional network [11] with my own annotations; Superman fighting Zod from Superman II.


[1] Read the following articles in succession:
Turley, J.   Damn You, Autocorrect! EE Journal, August 21 (2013).
Moyer, B.   The Annoying Valley. EE Journal, November 17 (2011).

[2] Rothschild, M.   One Weird Trick to rule them all. Skeptoid blog, January 28 (2013) AND Stepney, S.   What do these have in common? A Memory Less Ephemeral blog, August 30 (2013).

[3] Notes on belief functions from the book: Shafer, G.   A mathematical theory of evidence. Princeton University Press, Princeton, NJ (1976). A precursor to Dempster-Shafer theory.

[4] see the following reference, with an explanation grounded in the philosophy of science: Greenwald, A.G., Pratkanis, A.R., Leippe, M.R., and Baumgardner, M.H. Under what conditions does theory obstruct research progress? Psychological Review, 93(2), 216-229 (1986).

[5] For more on the promise of deep learning, please see: 10 Breakthrough Technologies 2013. MIT Technology Review, April 23 (2013).

[6] For more on the predictability of research trends, see the chart at bottom and the following reference: LeHong, H. and Fenn, J.   Key Trends to Watch in Gartner 2012 Emerging Technologies Hype Cycle. Forbes Tech news, September 18 (2012).

[7] Graziano, M.   How Consciousness Works. Aeon Magazine, August 23 (2013).

For a comparison between human and non-human brains, please see the following article: Boly, M., Seth, A.K., Wilke, M., Ingmundson, P., Baars, B., Laureys, S., Edelman, D., and Tsuchiya, N.   Consciousness in humans and non-human animals: Recent advances and future directions. Frontiers in Psychology, 4:625 (2013) doi:10.3389/fpsyg.2013.00625.

[8] Koch, C.   Consciousness is Everywhere. HuffPo blog, August 15 (2012).

[9] Graziano contrasts consciousness (the awareness of stuff) with "extramission theory", a naive theory that posits human control over the natural world using visual cues.

[10] For more on this idea, please see: Csikszentmihalyi, M.   Flow: the psychology of optimal experience. Harper and Row, New York (1990).

[11] Posner, M.I. and Patoine, B.   How Arts Training Improves Attention and Cognition. DANA Foundation News, September 14 (2009).

For more in brain networks involved in creativity and perhaps consciousness as well, please see: Kaufman, S.B.   The Real Neuroscience of Creativity. SciAm Beautiful Minds blog, August 19 (2013).