July 16, 2017

Wandering Towards an Essay of Laws


The winners of the FQXi "Wandering Towards a Goal" essay contest have been announced. I entered the contest (my first FQXi entry) and did not win, but I had a good time creating a number of interesting threads for future exploration.

The essay itself, "Inverting the Phenomenology of Mathematical Lawfulness to Establish the Conditions of Intention-like Goal-oriented Behavior" [1], is the product of my work in an area I call Physical Intelligence, along with intellectual discussions with colleagues (acknowledged in the essay). 

I did not have much time to polish and reflect upon the essay before it was submitted, but since then I have come up with a few additional points. Here are a few more focused observations extracted from the essay's more exploratory form:

1) there is an underappreciated connection between biological physics, evolution, and psychophysics. There is a subtle but important research question here: why did some biological systems evolve in accordance with "law-like" behavior, while many others did not? 

2) the question of whether mathematical laws are discovered or invented (Mathematical Platonism) may be highly relevant to the application of mathematical models in the biological and social sciences [2]. Mathematicians have a commonly encountered answer (laws are discovered, notation is invented), but approaching the question through laws extracted from empirical observation will likely yield a different answer.

3) how exactly do we define laws in the context of empirical science? While laws can be demonstrated in the biological sciences [3], biology itself is not thought of as particularly lawful. According to [4], "laws" fall somewhere in-between hypotheses and theories. In this sense, laws are both exercises in prediction and part of theory-building. Historically, biologists have tended to employ statistical models without reference to theory, while physicists and chemists often use statistical models to demonstrate theoretical principles [5]. In fields such as biology or the social sciences, the use of different or novel analytical or symbolic paradigms might facilitate the discovery of lawlike invariants.

4) the inclusion of cybernetic principles (Ashby's Law of Requisite Variety) may also yield new insights into how laws arise in biological and social systems, and into whether such laws are based on deep structural regularities in nature (as argued in the FQXi essay) or on the mode of representing empirical observations (an idea to be explored in another post). A minimal sketch of the Law of Requisite Variety appears after this list.

5) aneural cognition might guide information processing in a number of contexts. This has been explored further in another paper from the DevoWorm group [6] on the potential role of aneural cognition in embryos, and it has also been explored in the form of the free-energy principle giving rise to information processing in plants [7]. Is cognition a unified theory of adaptive information processing? Now that's something to explore.
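Returning to point 4: here is a minimal sketch of Ashby's Law of Requisite Variety in its counting form, which says the number of distinct outcomes can be no smaller than the number of disturbances divided by the number of regulatory responses. The scenario and the numbers are illustrative assumptions on my part, not taken from the essay.

import math

def min_outcome_variety(num_disturbances: int, num_responses: int) -> int:
    """Smallest number of distinct outcomes an optimal regulator can achieve."""
    return math.ceil(num_disturbances / num_responses)

# A hypothetical system facing 12 distinct environmental disturbances:
for responses in (1, 3, 6, 12):
    outcomes = min_outcome_variety(12, responses)
    print(f"{responses:2d} regulatory responses -> at best {outcomes} distinct outcomes")

# Only when the regulator's variety matches that of the disturbances (12) can
# the outcome be held to a single goal state: "only variety can absorb variety".

In this reading, a system that holds its outcomes to a narrow goal state must carry at least as much regulatory variety as the disturbances it faces, which is one way lawlike regularities might emerge from cybernetic constraints.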


NOTES:
[1] A printable version can be downloaded from Figshare (doi:10.6084/m9.figshare.4725235).

[2] I had a nice discussion of this issue during a recent NSF-sponsored workshop. The bottom line is that while the variation typical of biology often makes the discovery of universal principles intractable, perhaps law discovery in biology simply requires a several hundred year investment in research (h/t Dr. Rob Phillips). For more, please see:

Phillips, R. (2015). Theory in Biology: Figure 1 or Figure 7? Trends in Cell Biology, 25(12), 1-7.

[3] Trevors, J.T. and Saier, M.H. (2010). Three Laws of Biology. Water Air and Soil Pollution, 205(S1), S87-S89.

[4] el-Showk, S. (2014). Does Biology Have Laws? Accumulating Glitches blog, Nature Scitable. http://www.nature.com/scitable/blog/accumulating-glitches/does_biology_have-laws

[5] Ruse, M.E. (1970). Are there laws in biology? Australasian Journal of Philosophy, 48(2), 234-246. doi:10.1080/00048407012341201.

[6] Stone, R., Portegys, T.E., Mikhailovsky, G., and Alicea, B. (2017). Origins of the Embryo: self-organization through cybernetic regulation. Figshare, doi:10.6084/m9.figshare.5089558.

[7] Calvo, P. and Friston, K. (2017). Predicting green: really radical (plant) predictive processing. Journal of the Royal Society Interface, 14, 20170096.

July 2, 2017

Excellence vs. Relevance

Impetus for this blog post: a Twitter h/t to Alex Lancaster (@LancasterSci).

In academia, the term excellence is often invoked in the context of scarcity and competitive dynamics (e.g. publications, career promotion), and as a result it can be applied quite arbitrarily [1]. In [1], a distinction is made between excellence and soundness. Excellence is seen as a subjective concept, while soundness (enabled through completeness, thoroughness, and an emphasis on reproducibility) is adherence to clearly defined and practiced research standards. While the concept of soundness may suffer from some of the same subjective limitations, it is probably an improvement over the current discussions surrounding excellence.

Another term we rarely refer to, but one that may be of even greater importance, is the relevance of scientific research. In a previous post, I brought relevance theory to bear on potential biases in scientific selectivity. One way to think of relevance is as the collective attentional focus of a given research group, community, or field. Collective attention (and thus relevance) can change with time: papers, methods, and influences rise and fall as research ideas are executed and new information is encountered [2]. As such, relevance delimits the scope of research that characterizes a particular field or community of researchers. Given a particular focus, what is relevant defines what is excellent. In this case, we return to the biases inherent in excellence, but this time with a framework for understanding what it means in a given context.

There is also an interesting relationship between soundness and relevance. For example, the stated goal of venues like PLoS One and Scientific Reports is to evaluate manuscripts based on methodological soundness rather than merely on field-specific relevance. To some extent this has eliminated issues of arbitrary selectivity, yet reviewers and editors from various fields may still surreptitiously impose their own field-specific conventions on the review process. Interestingly, soundness itself can be a matter of relevance, as the use of specific methodologies and modes of investigation can be highly field-specific.

Sometimes relevance is a matching problem between an individual researcher and the conventions of a specific field. Relevance can be represented as a formalized conceptual problem using skillset geometries [3]. In the example below, I have shown how the relevance of a specific individual overlaps with what is considered relevant in a specific field. In this case, the researcher has expertise in multiple domains of knowledge, while the field is deeply rooted in a single domain. The area of overlap, or Area of Mutual Relevance, describes the degree of shared relevance between individual and community (sometimes called "fit").
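One very rough way to operationalize this overlap is to treat the individual's and the field's relevant skills as sets and measure their Jaccard overlap. The sketch below uses invented skill labels and is a stand-in for, not a reproduction of, the skillset-geometry formalism in [3].

def mutual_relevance(individual: set, field: set) -> float:
    """Fraction of all skills in play that both the individual and the field treat as relevant."""
    return len(individual & field) / len(individual | field)

researcher = {"machine learning", "developmental biology", "network science"}
community = {"developmental biology", "microscopy", "genetics"}

print(sorted(researcher & community))                      # the Area of Mutual Relevance
print(round(mutual_relevance(researcher, community), 2))   # 0.2, a modest degree of "fit"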


How relevant is a single person's skillset in the context of a research community, and how do we leverage this expertise in an inclusive manner? The mutual relevance criterion might provide opportunities in cases where there seems to be a "lack of fit". Understanding the role of collective attention within research communities might allow us to consider how this affects both the flow of new ideas between fields and the successful practice of interdisciplinarity.


NOTES:
[1] Moore, S., Neylon, C., Eve, M.P., O'Donnell, D.P., and Pattinson, D. (2016). Excellence R Us: university research and the fetishisation of excellence. Palgrave Communications, 3, 16105. doi:10.1057/palcomms.2016.105.

[2] Wu, F. and Huberman, B.A. (2007). Novelty and collective attention. PNAS, 104(45), 17599-17601. doi: 10.1073/pnas.0704916104.

[3] First introduced in: Alicea, B. (2017). A peripheral Darwin Day post, but centrality in his collaboration graph. Synthetic Daisies blog, February 16.

June 18, 2017

Loose Ends Tied, Interdisciplinarity, and Consilience

LEFT: A network of scientific disciplines and concepts built from clickstream data. RIGHT: Science mapping based on relationships among a large database of publications. COURTESY: Figure 5 in [1] (left) and SciTech Strategies (right).

Having a diverse background in a number of fields, I have been quite interested in how people from different disciplines converge (or do not converge) upon similar findings. Given that disciplines are often methodologically distinct communities [2], it is encouraging when multiple disciplines can exhibit consilience [3] in attacking the same problem. For me, it is encouraging because it supports the notion that the phenomena we study are derived from deep principles consistent with grand theorizing [4]. And we can see this in areas of inquiry such as learning and memory, with potential relevance to a wide variety of disciplines (e.g. cognitive psychology, history, cell biology) and the emergence of common themes across various definitions of the phenomenon.

Maximum spanning tree of disciplinary interactions based on the Physics and Astronomy Classification Scheme (PACS). COURTESY: Figure 5 in [5].

The ability to converge upon a common set of findings may be an important part of establishing and maintaining coherent multidisciplinary communities. Porter and Rafols [6] have examined the growth of interdisciplinary citations as a proxy for increasing interdisciplinarity. Interdisciplinary citations tend to be less common than within-discipline citations, and they tend to favor linkages between closely-aligned topical fields. Perhaps consilience also relies upon how completely researchers from different disciplines incorporate one another's literatures in an interdisciplinary context. Another recent paper [7] suggests that more complete literature citation might lead to better interdisciplinary science, and perhaps ultimately to consilience. This of course depends on whether the set of evidence itself is actually convergent or divergent, and on what it means for concepts to be coherent. In the interest of not getting any more abstract and esoteric, I will leave the notion of coherence for another post.


NOTES:
[1] Bollen, J., Van de Sompel, H., Hagberg, A., Bettencourt, L., Chute, R., Rodriguez, M.A., and Balakireva, L. (2009). Clickstream Data Yields High-Resolution Maps of Science. PLoS One, 4(3), e4803. doi:10.1371/journal.pone.0004803.

[2] Osborne, P.  (2015). Problematizing Disciplinarity, Transdisciplinary Problematics. Theory, Culture, and Society, 32(5-6), 3–35.

[3] Wilson, E.O. (1998). Consilience: the unity of knowledge. Random House, New York.

[4] Weinberg, S. (1993). Dreams of a Final Theory: the scientist's search for the ultimate laws of nature. Vintage Books, New York.

[5] Pan, R.J., Sinha, S., Kaski, K., and Saramaki, J. (2012). The evolution of interdisciplinarity in physics research. Scientific Reports, 2, 551. doi:10.1038/srep00551.

[6] Porter, A.L. and Rafols, I. (2009). Is science becoming more interdisciplinary? Measuring and mapping six research fields over time. Scientometrics, 81, 719.

[7] Estrada, E. (2017). The other fields also exist. Journal of Complex Networks, 5(3), 335-336.

June 5, 2017

"Hello World", project version

The DevoWorm group has two new students that will be working over this summer on topics in computational embryogenesis. To begin their projects, I have asked each student to prepare a short presentation based on their original proposal, which serves as a variant of the traditional "Hello World" program. We will then compare this talk with one they will give at the end of the summer to evaluate their learning and accomplishment trajectory.

One student (Siddharth Yadav, who is a current Google Summer of Code student) is interested in pursuing work in computer vision, machine learning, and data science, while the other (Josh Desmond, a Google Summer of Code applicant) is interested in pursuing work in computational biology and modeling/simulation. You may view their presentations (about 20 minutes each) below, and follow along with their progress at the DevoWorm Github repository [1].

Siddharth Yadav's project talk (YouTube)

Josh Desmond's project talk (YouTube)

NOTES:
[1] Siddharth's project repo (GSoC 2017) and Josh's project repo (CC3D-local).

May 18, 2017

Innovation, Peer Review, and Bees

This post was inspired by a couple of Twitter conversations among people I follow, as well as my own experiences with peer review and innovation. The first is from Hiroki Sayama, who was contemplating a range of peer-review opinions on a submitted proposal.


I like using the notion of entropy to describe a wide range of peer-review opinions based on the same piece of work. This reminds me of the "bifurcating opinion" phenomenon I sketched out a few years ago [1]. In that case, I conceptually demonstrated how a divergence of opinion can prevent consensus decision-making and lead to editorial deliberation. Whether this leads to subjective intervention by the editor is unclear, and could be addressed with data.
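To make the entropy framing concrete, here is a minimal sketch that computes the Shannon entropy of a panel's review scores. The score scale and the example panels are hypothetical, chosen only to show that a divided panel carries more entropy than a consensus panel.

from collections import Counter
from math import log2

def review_entropy(scores):
    """Shannon entropy (in bits) of the distribution of review scores."""
    counts = Counter(scores)
    n = len(scores)
    return -sum((c / n) * log2(c / n) for c in counts.values())

consensus_panel = [3, 3, 3, 4]   # reviewers largely agree
divided_panel = [1, 2, 4, 5]     # opinions span the whole scale

print(round(review_entropy(consensus_panel), 2))  # 0.81 bits
print(round(review_entropy(divided_panel), 2))    # 2.0 bits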

Hiroki points out that "high-entropy" reviews (a wider range of opinions) may signal a high degree of innovation. This is an interesting interpretation, and one which leads to another Twitter conversation that turned into complementary blog posts from Michael Nielsen [2] and Julia Galef [3] on the relationship between creativity and innovation.


In my interpretation of the conversation, Michael points out that there is a tension between creativity and rational thinking. On one side (creativity) we have seemingly crazy and irrational ideas, while on the other side we have optimal ideas given the current body of knowledge. In particular, Michael argues that the practice of "fooling oneself" (or being overly confident of a novel interpretation) is critical for nurturing innovative ideas. Overconfidence in conventional knowledge and in typical approaches both work to stifle innovation, even in cases where the innovation is clearly superior.

Feynman thought that "fooling oneself" was generally to be avoided, and that avoiding it is a hallmark of scientific rationality. However, the very act of thinking (cognitive processes such as focusing attention) might be based on fooling ourselves [4], and thus might underlie even a well-argued position. 

Julia disagrees with this premise, and thinks there is no tension between rationality and innovative ideas. Rather, there is a difference between confidence that an idea can be turned into an artifact and confidence that it will be practical. Innovation is stifled by overconfidence that an idea will fail in practice, combined with a lack of thinking in terms of expected value. I take this to be similar to normative risk-aversion by the wider community. If individual innovators are confident in their own ideas, despite the sanctions imposed by negative social feedback, they are more likely to pursue them.
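A toy expected-value calculation illustrates the point; the probabilities and payoffs below are invented solely for illustration and are not drawn from Julia's post.

def expected_value(p_success: float, payoff: float, cost: float) -> float:
    """Expected net return from pursuing a project."""
    return p_success * payoff - cost

safe_project = expected_value(p_success=0.9, payoff=10, cost=1)   # 8.0
long_shot = expected_value(p_success=0.05, payoff=500, cost=1)    # 24.0

print(safe_project, long_shot)
# Judged by probability of success alone, the long shot looks irrational;
# judged by expected value, it dominates the safe project.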

Nikola Tesla's approach was "irrational", but it was also a sign of his purposeful self-delusion and perhaps even of his social isolation from the scientific community [5]. Remember, in the context of this blog post, these are all good things.

Putting this in the context of peer review, it could be said that confidence (or overconfidence) is related to the existence and temporary suspension of sociocultural mores in a given intellectual community. Social mores are standardly defined as customs and practices enforced through social pressure. In the example given by Michael Nielsen, fooling oneself in order to advance a controversial position requires an individual to temporarily suspend the social mores held by members of a specific intellectual community. In this case, mores are defined as commonly-held knowledge and expected outcomes, but they can also include idiosyncratic practices and intuitions [6]. From a cognitive standpoint, this may be similar to the temporary suspension of disbelief required during enjoyable experiences.

While this suspension allows for innovation, violations of social mores can also lead to a generally negative response, including moral panics and the occasional face full of bees [7]. Therefore, I would amend Hiroki's observation by saying that innovation is marked not only by a wide range of peer-review opinion, but also by universal rejection. Separating the wheat from the chaff among the universally rejected works is a task for another time.

The price of innovation equals a swarm of angry bees!

NOTES:
[1] Alicea, B. (2013). Fireside Science: The Consensus-Novelty Dampening. Synthetic Daisies blog, October 22.

[2] Nielsen, M. (2017). Is there a tension between creativity and accuracy? April 8.

[3] Galef, J. (2017). Does irrationality fuel innovation? Julia Galef blog, April 7.

[4] Scientific American (2010). How We Fool Ourselves Over and Over. 60-second Mind podcast, June 19.

[5] Bradnam, K. (2014). The Tesla index: a measure of social isolation for scientists. ACGT blog, July 31.

[6] Lucey, B. (2015). A dozen ways to get your academic paper rejected. Brian M. Lucey blog, September 9.

[7] "Face full of bees" is a term I just coined to describe the universal rejection of a particularly innovative piece of work. "Many bees on face" = "Stinging rebuke".
