September 17, 2014

Heuristic Haystacks and the Messy Lesson

As an admitted and self-styled parsimony skeptic, I was interested to see a discussion in the blogosphere on the seductive allure of simple explanations [1]. The discussion was framed around economic policy and decision-making, with Paul Krugman even offering an H.L. Mencken quote: "For every complex problem there is an answer that is clear, simple, and wrong" [2]. Yet while parsimony was never brought up by name, I suspect that hypotheses and arguments related to the efficient markets hypothesis were never far from mind.


There are, of course, broader parallels between seductive simplicity and parsimony. As I have pointed out before, I find parsimony to be an overly seductive null model [3]. The simplest explanation often leads us not to the truth, but to what is most conceptually consistent. In some cases (where theory is well-established) this works out well. Intuition in support of serendipity, and serendipity in support of discovery, is an unassuming (and often underplayed) pillar of science [4]. However, in cases where our intuitions get in the way of objective analysis, this becomes problematic. And this seeming exception is actually quite common. In a related manner, this brings up an interesting problem: the relationship between parsimony as a decision-making criterion and the epistemology of a scientific phenomenon.

An appalling lack of faith in both Occam's and Einstein's worldviews. More horrifying details in my Ignite! talk on the topic.

This relationship, or more accurately this inconsistency, is due to argumentatively-influenced judgments on a naturalistic search space. Even in children, argumentation is observed to be rife with confirmation bias and with logical arguments toward absurd positions [5]. While argumentation allows us to build hypotheses, it also gets us stuck in a conceptual minimum (my own ad-hoc phrase). In a previous post, I pointed to recent work on how belief systems and their associated systems of argumentation can shape our perception of reality. But, of course, this cannot will the natural world to our liking. In fact, it often serves to muddy the conceptual and theoretical waters [6]. The result is a conceptual gap unrelated to problem incompleteness, which I will flesh out in the rest of this post.

The first point to be made here is that such an inconsistency introduces two biases that shape how we think about the simplest explanation, and more generally about what is optimal. First, can we even find the true simplest explanation? Perhaps the simplest possible statement that can be constructed cannot capture the true complexity of a given situation. This is particularly true when there are competing dimensions (or layers, or levels) of complexity. Second, and particularly in the face of complexity, simplicity can often be a foil to deep understanding. Unfortunately, simplicity is often conceptualized and practiced in a destructive way, favoring simple and homogeneous mental models over more subtle ones.

How to dream of complex sheep....

In the parlance of decision-making theory, parsimony is consistent with the notion of good-enough heuristics. In the work of Gigerenzer [7], such heuristics are claimed to be nearly optimal when compared to a formal analysis of the problem. This can also be seen in statistical prediction rules that outperform human judgments in a number of everyday contexts [8]. But is this a statement about problem "wickedness", or a statement of superiority with respect to human cognition? When compared against problems that require needle-in-a-haystack search, fast and frugal heuristics (and hence parsimony) are severely lacking.
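To make this contrast concrete, here is a minimal sketch (my own toy example, not taken from Gigerenzer [7] or from the statistical prediction rule literature in [8]) comparing a fast-and-frugal "tallying" heuristic, which ignores cue weights entirely, against a formally fitted least-squares rule on synthetic data; the cue weights, noise level, and sample size are arbitrary assumptions.

```python
# Toy comparison: fast-and-frugal tallying vs. a fitted statistical prediction rule.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5
X = rng.normal(size=(n, k))                   # five continuous cues
true_w = np.array([1.0, 0.8, 0.6, 0.4, 0.2])  # assumed criterion weights
y = X @ true_w + rng.normal(scale=1.0, size=n)

# Fast-and-frugal "tallying": ignore the weights, just add up the cues.
tally_pred = X.sum(axis=1)

# Formal statistical prediction rule: ordinary least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
ols_pred = X @ w_hat

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"tallying heuristic  r = {corr(tally_pred, y):.2f}")
print(f"least-squares rule  r = {corr(ols_pred, y):.2f}")
```

On a well-behaved problem like this one, the tallying heuristic tracks the criterion nearly as well as the fitted rule, which is the sense in which "good-enough" heuristics approach optimality; the needle-in-a-haystack problems discussed below are precisely where this stops being true.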

So complexity introduces a secondary bias at best, and serves as a severe limitation on achieving parsimony at worst. One might expect that experimentally verifying a prediction made in conjunction with Occam's Razor requires finding an exact analytical solution. Finding this proverbial "needle in a haystack" requires both a multi-criterion, algorithmically friendly heuristic and a formal strategy that often defies intuition. Seemingly, the simple solution cannot keep up.

I found it! It was quick, but I was also quite lucky.

NOTES:
[1] The Simplicity Paradox. Stumbling and Mumbling blog, September 9 (2014) AND Krugman, P.   Simply Unacceptable. The Conscience of a Liberal blog, September 5 (2014).

[2] This is not to equate parsimony with methodological snake oil -- in fact, I am arguing quite the opposite. But I am merely pointing out that parsimony is an incomplete hypothesis for acquiring knowledge.

[3] For more, please see this Synthetic Daisies post: Alicea, B.   Argument from Non-Optimality: what does it mean to be optimal? Synthetic Daisies blog, July 28 (2013).

[4] Kantorovich, A.   Scientific Discovery: Logic and Tinkering. SUNY Press, Albany (1993).

[5] I say "even" in children even though the latter (logically arguing to absurd conclusions) is often expected from children. But we see these things in adults as well, and such is the point of argumentation theory. For more, please see: Mercier, H.   Reasoning Serves Argumentation in Children. Cognitive Development, 26(3), 177–191 (2011).

[6] Wolchover, N.   Is Nature Unnatural? Quanta Magazine, May 24 (2013).

[7] While there are likely other (and perhaps better) examples, I am using a reference cited in [1]: Gigerenzer, G.   Bounded and Rational. In "Contemporary Debates in Cognitive Science", R.J. Stainton eds. Blackwell, Oxford, UK (2006).

[8] lukeprog   Statistical Prediction Rules Out-Perform Expert Human Judgments. LessWrong blog, January 18 (2011).

September 10, 2014

Upcoming DevoWorm talk to the OpenWorm group


This Friday (9/12) at 9am PDT, I will be presenting a talk to the OpenWorm consortium Journal Club on the DevoWorm project. For those of you who are unfamiliar, DevoWorm is a collaborative attempt to simulate and theoretically re-interpret C. elegans development.

Cover slide with a list of the DevoWorm collaborators, circa September 2014.

The structure of the talk will loosely follow the white paper, with some additional theoretical and translational information. We are also trying to organize/raise money for a "science hackathon", which would greatly improve the state of the project [1].


 An explanation of a scientific hackathon (sensu DevoWorm Collaboration).

The talk will also deal with the issue of whole-organism emulation. In this case, we are using a sparse representation of the organism to model developmental processes. The key is to balance tractability with biological realism. Sparko the Robotic Dog and the EPFL's Human Brain Project are used as examples.





We also discuss the potential usefulness of C. elegans emulations to biological problems. One problem we identified was the need to emulate and identify the precursors and mechanisms of phenotypic mutants. While our discussion of this will be limited to only a few slides, DevoWorm has the potential to model the possibility space of phenotypic mutants and perhaps even suggest developmental precursors to phenotypic mutations. 



If you are interested in attending, here is the Google Hangouts link. I look forward to a good presentation.

UPDATE 9/12:
The talk went very well. We also changed the name to "DevoWorm: raising the (Open)Worm". Lots of discussion about the potential for future collaboration and the regenerative capacity of C. elegans (or lack thereof). The talk was recorded to YouTube, and the link is here.



NOTES:
[1] Improvements largely involve physically bringing the group together, solving some problems related to data analysis, and perhaps even planning out additional data collection. Apparently, the term "hackathon" has a rather broad definition. But if you are interested in participating/helping to facilitate this, please contact me.

August 31, 2014

Godel's Revenge: All-Encompassing Formalisms vs. Incomplete Formalisms

This content is cross-posted to Tumbld Thoughts. It is a loosely-formed story in two parts about the pros and cons of predicting the outcomes of (and otherwise controlling) complex sociocultural systems. Kurt Godel is sitting in the afterlife cafe right now, scoffing but also watching with great interest.



I. It's an All-encompassing, Self-regulation, Charlie Brown!


Here is a video [1] by the complexity theorist Dirk Helbing about the possibility of a self-regulating society. Essentially, combining big data with the principles of complexity would allow us to solve previously intractable problems [2]. This includes more effective management of everything from massively parallel collective behaviors to very rare events.


But controlling how big data is used can keep us from getting into trouble as well. Writing at the Gigaom blog, Derrick Harris argues that the potentially catastrophic effects of AI taking over society (the downside of the singularity) can be avoided by keeping key data away from such systems [3]. In this case, even hyper-complex AI systems based on deep learning can become positively self-regulating.

NOTES:

[2] For a cursory review of algorithmic regulation, please see: Morozov, E.   The rise of data and the death of politics. The Guardian, July 19 (2014).

For a discussion as to why governmental regulation is a wicked problem and how algorithmic approaches might be inherently unworkable, please see: McCormick, T.   A brief exchange with Tim O’Reilly about “algorithmic regulation”. Tim McCormick blog, February 15 (2014).

[3] Harris, D.   When data become dangerous: why Elon Musk is right and wrong about AI. Gigaom blog, August 4 (2014).


II. Arguing Past Each Other Using Mathematical Formalisms


Here are a few papers on argumentation, game theory, and culture. My notes are below each set of citations. A good reading list (short but dense) nonetheless.

Brandenburger, A. and Keisler, H.J.   An Impossibility Theorem on Beliefs in Games. Studia Logica, 84(2), 211-240 (2006).

* shows that any two-player game is embedded in a system of reflexive, meta-cognitive beliefs. Players not only model payoffs that maximize their utility, but also model the beliefs of the other player. The resulting "belief model" cannot be completely self-consistent: beliefs about beliefs have holes which serve as sources of logical incompleteness.

What is Russell's Paradox? Scientific American, August 17 (1998).

* introduction to a logical paradox that can be resolved by distinguishing between sets and sets that describe sets, using a hierarchical classification method. This paradox is the basis for the Brandenburger and Keisler paper.

Mercier, H. and Sperber, D.   Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57-111 (2011).


Oaksford, M.   Normativity, interpretation, and Bayesian models. Frontiers in Psychology, 5, 332 (2014).

* a new-ish take on culture and cognition called argumentation theory. Rather than reasoning to maximize individual utility, reasoning is done to maximize argumentative context. This includes decision-making that optimizes ideational consistency. This theory predicts phenomena such as epistemic closure, and might be thought of as a postmodern version of rational agent theory.

There also seems to be an underlying connection between the "holes" in a culturally-specific argument and the phenomenon of conceptual blending, but that is a topic for a future post.
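As an aside, the self-referential "hole" at the core of Russell's paradox (the same hole that the Brandenburger-Keisler result transplants into belief models) can be stated in one line; this is the standard textbook formulation, not something specific to the papers above:

```latex
% Let R be the set of all sets that are not members of themselves;
% asking whether R contains itself yields a contradiction either way.
R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad \left( R \in R \iff R \notin R \right)
```

The hierarchical classification mentioned above blocks the contradiction by preventing a set from quantifying over itself.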

August 26, 2014

Fireside Science: Fun with F1000: publish it and the peers will come

This content is cross-posted to Fireside Science. Please also see the update before the notes section.


For the last several months, I have been working on a paper called "Animal-oriented Virtual Environments: illusion, dilation, and discovery" [1], which is now published at F1000 Research (and also available as a pre-print at PeerJ). The paper has gone through several iterations, from a short 1800-word piece (the first draft) to a full-length article, including several stages of editor-driven peer review [2], and took approximately nine months to complete. Because of its speculative nature, this paper makes an excellent candidate for testing out this review method.

The paper is now live at F1000 Research.

Evolution of a research paper. The manuscript has been hosted at PeerJ Preprints since Draft 2.

F1000 Research uses a method of peer review called post-publication peer review. For those who are not aware, F1000 approaches peer review in two steps: submission and approval by an editor, followed by publication and review by selected peers. Let's walk through these.


The first step is to submit an article. Some articles (data-driven ones) are published to the website immediately. However, for position pieces and theoretically-driven articles such as this one, a developmental editor is consulted to provide pre-publication feedback. This helps to tighten the arguments for the next stage: post-publication peer review.

The next stage is to garner comments and reviews from other academics and the public (likely unsolicited academics). While this might take some time, the reviews (edited for relevance and brevity) will appear alongside the paper. The paper's "success" will then be judged on those comments. No matter what the peer reviewers have to say, however, the paper will be citable in perpetuity and might well have a very different life in terms of its citation index.

Why would we want such an alternative available to us? Alternative forms of peer review and evaluation can both open up the scope of scientific debate and resolve some of the vagaries of conventional peer review [3]. This is not to say that we should strive towards the "fair-and-balanced" approach of journalistic myth. Rather, it is a recognition that scientists do a lot of work (e.g. peer review, negative results, conceptual formulation) that either falls through the cracks or never gets made public. Alternative approaches such as post-publication peer review are an attempt to remedy that, and as a consequence also serve to enhance the scientific approach.


COURTESY: Figure from [5].

The rise of social media and digital technologies has also created a need for new scientific dissemination tools. While traditional scientific discovery operates at a relatively long time-scale [6], science communication and inspiration do not. Using an open science approach will effectively open up the scientific process, both in terms of new perspectives from the community and insights that arise purely from interactions with colleagues [7].

One proposed model of multi-staged peer review. COURTESY: Figure 1 in [8].

UPDATE: 9/2/2014:
I received an e-mail from the staff at F1000Research in appreciation of this post. They also wanted me to make a few points about their version of post-publication peer review a bit clearer. So, to make sure the process is not misrepresented, here are the major features of the F1000 approach in bullet-point form:

* input from the developmental editors is usually fairly brief. This involves checking for coherence and sentence structure. The developmental process is substantial only when a paper requires additional feedback before publication.

* most papers, regardless of article type, are published within a week to 10 days of initial submission.  

* the peer reviewing process is strictly by invitation only, and only reports from the invited reviewers contribute to what is indexed along with the article. 

* commenting from scientists with institutional email addresses is also allowed. However, these comments do not affect whether or not the article passes the peer review threshold (e.g. two "acceptable" or "positive" reviews).  


NOTES:
[1] Alicea B.   Animal-oriented virtual environments: illusion, dilation, and discovery [v1; ref status: awaiting peer review, http://f1000r.es/2xt] F1000Research 2014, 3:202 (doi: 10.12688/f1000research.3557.1).

This paper is a derivative of a Nature Reviews Neuroscience paper and of several popular press interviews [a, b] that resulted from it.

[2] Aside from an in-house editor at F1000, Corey Bohil (a colleague from my time at the MIND Lab) was also gracious enough to read through and offer commentary.

[3] Hunter, J.   Post-publication peer review: opening up scientific conversation. Frontiers in Computational Neuroscience, doi: 10.3389/fncom.2012.00063 (2012) AND Tscheke, T.   New Frontiers in Open Access Publishing. SlideShare, October 22 (2013) AND Torkar, M.   Whose decision is it anyway? f1000 Research blog, August 4 (2014).

[4] By opening up peer review and manuscript publication, scientific discovery might become more piecemeal, with smaller discoveries and curiosities (and even negative results) getting their due. This will produce a richer and more nuanced picture of any given research endeavor.

[5] Mandavilli, A.   Trial by Twitter. Nature, 469, 286-287 (2011).

[6] One high-profile "discovery" (even based on flashes of brilliance) can take anywhere from years to decades, with a substantial period of interpersonal peer-review. Most scientists keep a lab notebook (or some other set of records) that document many of these "pers.comm." interactions.

[7] Sometimes, venues like F1000 can be used to feature attempts at replicating high-profile studies (such as the Stimulus-triggered Acquisition of Pluripotency (STAP) paper, which was published and retracted at Nature within a span of five months).



August 22, 2014

Six Degrees of the Alpha Male: breeding networks to understand population structure

This post is part of a continuing series on ways to think more deeply about human biological diversity. In last month's post (One Evolutionary Trajectory, Many Processes), I discussed how dual-process models (such as the DIT model) might be used to add a new dimension to more traditional studies of population genetics. That example did not spend much time on the specifics of what such a model would look like. Nevertheless, a dual-process model provides a broader view of the evolutionary process, particularly for highly social (and cultural) species like humans.


In this post, I will lay out another idea briefly mentioned in the "Long Game of Human Biological Variation" post. This involves the use of complex network theory to model the nature of structure in populations. To review, the null hypothesis (i.e. no structure) is generally modeled using an assumption of panmixia [1]. In this conception, structure emerges from interactions between individuals and demes (semi-isolated breeding populations). Thus, a deviation from the null model involves the generation of structure via selective breeding, reproductive isolation, or some other mechanism.

One way to view these types of population dynamics is to use a population genetics model such as the one I just described. However, we can also use complex network theory to better understand how populations evolve, particularly when populations are suspected to deviate from the null expectation [2]. Complex networks provide us with a means to statistically characterize the interactions between individual organisms, in addition to rigorously characterizing sexual selection and the long-range effects of mating patterns.

An example of a small-world network with extensive weak ties. Importantly, this network topology is not random, but instead features shortcuts and extensive structure. The data represent the human brain. COURTESY: Reference [3].

Since attending the Network Frontiers Workshop (Northwestern University) last December, I have been toying around with a new approach called "breeding networks". The breeding network concept [2] involves using multilayered, dynamical networks to characterize breeding events, the creation of offspring, the subsequent breeding events for those offspring, and macro-scale population patterns that result. This allows us to characterize a number of parameters in one model, such as the effects of animal social networks on population dynamics [4]. This includes traditional network statistics (e.g. connectivity and modularity parameters) that translate into theoretical measures of fecundity, the diffusion of genotypic markers within a population, and structural independence between demic populations. But these statistics are determined by a meta-process, one that is explicitly social and behavioral.
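As a rough illustration of how such statistics might be computed (this is my own toy construction, not code from the DevoWorm or breeding network projects; the deme sizes, number of breeding events, and within-deme mating bias are arbitrary assumptions), the sketch below treats individuals as nodes and breeding events as edges, then uses degree as a crude proxy for fecundity and modularity as a proxy for demic structure:

```python
# Toy breeding network: nodes are individuals, edges are breeding events.
import random
import networkx as nx
from networkx.algorithms import community

random.seed(1)
G = nx.Graph()

# Two hypothetical demes of 20 individuals each.
deme = {i: (0 if i < 20 else 1) for i in range(40)}
G.add_nodes_from(deme)

# 80 breeding events: mostly within-deme, with occasional between-deme pairings.
for _ in range(80):
    a = random.randrange(40)
    within = random.random() < 0.9   # assume 90% of matings stay within the deme
    candidates = [b for b in range(40) if b != a and (deme[b] == deme[a]) == within]
    G.add_edge(a, random.choice(candidates))

# Degree as a proxy for fecundity; modularity as a proxy for demic structure.
fecundity = dict(G.degree())
modules = community.greedy_modularity_communities(G)
Q = community.modularity(G, modules)
print("most fecund individual:", max(fecundity, key=fecundity.get))
print("detected modules:", len(modules))
print("modularity Q = %.2f" % Q)
```

A modularity score near zero would approach the panmictic null, while a high score corresponds to strong demic structure; layering such snapshots over successive generations is one way to approach the multilayered, dynamical version described above.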

Before we continue, it is worth asking why complex networks are relevant. No doubt you have heard of the "small-world network" phenomenon, which postulates that, given a certain type of network topology, networks with many nodes and connections can be traversed in a very small number of steps [5]. This is the famous "six degrees" phenomenon in action. But complex networks can range from random connectivity to various degrees of concentration. This approach, which comes with its own mathematical formalisms, allows us to neatly characterize the behavior, physiology, and other non-genetic factors that result in the population dynamics that produce structured genetic variation.

An example of regular, small-world, and random networks, ordered by the extent to which their connectivity is determined by random processes [6]. In breeding networks, non-random connectivity is determined by sexual selection (e.g. selective breeding). As sexual selection increases or decreases, it can change the connectivity of a population.
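A minimal sketch of this regular-to-random spectrum, using the Watts-Strogatz generator cited in note [6] (the network size, neighborhood size, and rewiring probabilities are arbitrary choices for illustration):

```python
# Sweep the Watts-Strogatz rewiring probability from regular to random.
import networkx as nx

n, k = 100, 4  # 100 individuals, each initially tied to its 4 nearest neighbors
for p in (0.0, 0.1, 1.0):  # regular lattice -> small world -> random
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    L = nx.average_shortest_path_length(G)
    C = nx.average_clustering(G)
    print(f"p = {p:<4} mean path length = {L:5.2f}  clustering = {C:.2f}")
```

In the small-world regime (intermediate p), path lengths fall toward those of the random graph while clustering stays close to the regular lattice; translated into breeding terms, a handful of between-deme "shortcut" matings can dramatically shorten the genealogical distance between any two individuals.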

As complex networks are made up of nodes and connections, the connections themselves are subject to connection rules. In some networks, these rules can be observed as laws of preferential attachment [7]. But in general, each node or class of nodes can have simple rules for preferring (or ignoring) association with one node over another. If this sounds like an informal selection rule, this is no accident. While complex network theory does not approach connectivity rules in such a way, breeding networks are expected to be influenced by sexual selection at a very fundamental (e.g. dyadic interaction) level.
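To make the connection-rule idea concrete, here is a hedged sketch (my own construction, not the post's model) that re-reads the preferential attachment rule from note [7] as a crude stand-in for selective breeding: each new offspring attaches to existing individuals with probability proportional to their past breeding success.

```python
# Degree-biased "breeding" growth, in the spirit of preferential attachment.
import random
import networkx as nx

random.seed(2)
G = nx.complete_graph(3)  # a tiny, fully inter-bred founder population

for offspring in range(3, 100):
    nodes = list(G.nodes)
    weights = [G.degree(v) for v in nodes]   # past breeding success (always >= 1 here)
    parents = random.choices(nodes, weights=weights, k=2)
    for parent in set(parents):              # link offspring to its chosen parent(s)
        G.add_edge(offspring, parent)

top = sorted(G.degree, key=lambda nd: nd[1], reverse=True)[:5]
print("five most-connected 'alpha' breeders (node, degree):", top)
```

The heavy-tailed degree distribution that emerges is one way a few highly successful breeders can come to dominate a population's connectivity, which is the "alpha male" flavor of non-random structure alluded to in the title of this post.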

The complex network zoo, and the three parameters (heterogeneity, randomness, and modularity) that define the connectivity of a network topology. Examples of specific network types are given in the three-dimensional example above, but breeding networks could fall anywhere within this space. COURTESY: Reference [8].

Another feature of breeding networks involves connectivity trends over time. For example, a founder population with a small effective population size might indeed be panmictic (in this case represented by a random network topology). However, as the population size increases and connectivity rules change, this topology can evolve into one with scale-free or even small-world properties. This is due not only to the selective nature of producing offspring, but also to differences in the fecundity of individual nodes.

Once you start playing around with this basic model, a number of alternative network structures [9] can be used to represent the null model. Configurations such as star topologies, hyperbolic trees, and cactus graphs can approximate inherent geographic structure in a population's distribution. These alternative graph topologies are the product of factors such as geography or migration, and may have pre-existing structure. The key is to use these features as the null hypothesis where appropriate. This will provide us with a better accounting of the true complexity involved in shaping the structural features of an evolving population.
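As a sketch of how such alternative null models might be compared (again my own toy code; the graph sizes are arbitrary, and a balanced tree stands in for the hyperbolic-tree and cactus cases, which have no single off-the-shelf generator I would want to assume):

```python
# Compare a panmictic (random) null against two structured alternatives.
import networkx as nx

candidates = {
    "random (panmictic null)": nx.gnm_random_graph(31, 90, seed=3),
    "star (single hub)":       nx.star_graph(30),           # 1 hub + 30 leaves
    "balanced tree":           nx.balanced_tree(r=2, h=4),  # hierarchical stand-in
}

for name, G in candidates.items():
    diam = nx.diameter(G) if nx.is_connected(G) else float("inf")
    print(f"{name:24s} nodes = {G.number_of_nodes():3d}  "
          f"diameter = {diam}  clustering = {nx.average_clustering(G):.2f}")
```

Stars and trees already carry strong built-in structure (hub-dominated or elongated paths, zero clustering), so treating them as the null means that only deviations beyond that geographic or migratory skeleton count as evidence of selective breeding.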

A map showing the seasonal migration of shark populations in the Pacific, including aggregation points. COURTESY: The Fisheries' Blog.

NOTES:
[1] One model organism for understanding local and global panmixia is the aquatic parasite Lecithochirium fusiforme. For more, please see: Criscione, C.D., Vilas, R., Paniagua, E., and Blouin, M.S.   More than meets the eye: detecting cryptic microgeographic population structure in a parasite with a complex life cycle. Molecular Ecology, 20(12), 2510-2524 (2011).

[2] The idea of breeding networks is similar to the idea of sexual networks, except that breeding networks are more explicitly tied to population genetics. This paper gives good insight into how sexual selection factors into the formation of structured, complex networks: McDonald, G.C., James, R., Krause, J., and Pizzari, T.   Sexual networks: measuring sexual selection in structured, polyandrous populations. Philosophical Transactions of the Royal Society B, 368, 20120356 (2013).

[3] Gallos, L.K., Makse, H.A., and Sigman, M.   A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks. PNAS, 109(8), 2825-2830 (2012).

[4] For more on animal social networks and their relationship to evolution, please see the following references:

a) Oh, K.P. and Badyaev, A.V.   Structure of social networks in a passerine bird: consequences for sexual selection and the evolution of mating strategies. American Naturalist, 176(3), E80-89 (2010).

b) Kurvers, R.H.J.M., Krause, J., Croft, D.P., Wilson, A.D.M., Wolf, M.   The evolutionary and ecological consequences of animal social networks: emerging issues. Trends in Ecology and Evolution, 29(6), 326–335 (2014).

[5] For a definition of network diameter in context, please see: Porter, M.A.   Small-world Network. Scholarpedia, 7(2), 1739 (2012).

[6] This classification of idealized graph models is based on the Watts-Strogatz model of complex networks. For more information, please see: Watts, D.J. and Strogatz, S.H.   Collective dynamics of 'small-world' networks. Nature, 393, 440-442 (1998).

[7] This property of idealized graph models is based on the Barabasi-Albert model of complex networks. For more information, please see: Barabasi, A-L. and Albert, R.   Emergence of scaling in random networks. Science, 286(5439), 509–512 (1999).

[8] Sole, R.V. and Valverde, S.   Information Theory of Complex Networks: On Evolution and Architectural Constraints, Lecture Notes in Physics, 650, 189–207 (2004).

[9] Oikonomou, P. and Cluzel, P.   Effects of topology on network evolution. Nature Physics 2, 532-536 (2006).
