Lynx Roundup, September 20th

Map projections! Jupyter stuff! A new kind of neuron!

Resident Scientist Snkia works tirelessly towards robot utopia. These are his findings.

What is this weird Twitter army of Amazon drones cheerfully defending warehouse work?

http://willcrichton.net/notes/lessons-from-jupytercon/

Equal Earth map projection

https://www.wired.com/story/meet-the-rosehip-cell-a-new-kind-of-neuron

Pareto’s 80-20 rule

Earlier this week, I was at the Second Joint Congress on Evolutionary Biology (Evol2018). It was overwhelming, but very educational.

Many of the talks were about very specific evolutionary mechanisms in very specific model organisms. This diversity of questions and approaches to answering them reminded me of the importance of bouquets of heuristic models in biology. But what made this particularly overwhelming for me as a non-biologist was the lack of a unifying formal framework for making sense of what was happening. Without the encyclopedic knowledge of a good naturalist, I had a very difficult time linking topics to each other. I was experiencing the pluralistic nature of biology. This was stressed by a slide from Laura Nuño De La Rosa that contrasted the pluralism of biology with the theory reduction of physics.

That’s right: to highlight the pluralism, there were great talks from philosophers of biology alongside all the experimental and theoretical biology at Evol2018.

As I’ve discussed before, I think that theoretical computer science can provide the unifying formal framework that biology needs. In particular, the cstheory approach to reductions is the more robust (compared to physics) notion of ‘theory reduction’ that a pluralistic discipline like evolutionary biology could benefit from. However, I still don’t have any idea of how such a formal framework would look in practice. Hence, throughout Evol2018 I needed refuge from the overwhelming overstimulation of organisms and mechanisms that were foreign to me.

One of the places I sought refuge was in talks on computational studies. There, I heard speakers emphasize several times that they weren’t "just simulating evolution" but that their programs were evolution (or evolving) in a computer. Not only were they looking at evolution in a computer, but this model organism gave them an advantage over other systems because of its transparency: they could track every lineage, every offspring, every mutation, and every random event. Plus, computation is cheaper and easier than culturing E. coli, brewing yeast, or raising fruit flies. And just like those model organisms, computational models could test evolutionary hypotheses and generate new ones.
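
To make that transparency concrete, here is a minimal sketch in Python. It is my own toy construction under simple assumptions (a fixed-size, asexual, bit-string population with fitness-weighted selection), not any of the speakers’ actual models, but it shows how every birth, parent, mutation, and random draw can be recorded and queried after the fact:

```python
import random

# Toy illustration of "evolution in a computer" with full transparency:
# every lineage, offspring, mutation, and random event is logged.
# (Hypothetical sketch; parameter values are arbitrary.)

random.seed(42)  # fixing the seed makes every random event replayable

POP_SIZE, GENERATIONS, MUT_RATE, GENOME_LEN = 20, 50, 0.1, 10

def fitness(genome):
    # toy fitness: number of 1-bits, with a tiny offset so weights are never all zero
    return sum(genome) + 1e-9

population = [{"id": i, "parent": None, "genome": [0] * GENOME_LEN} for i in range(POP_SIZE)]
lineage_log = []  # (generation, child_id, parent_id, mutated_locus)
next_id = POP_SIZE

for gen in range(GENERATIONS):
    weights = [fitness(ind["genome"]) for ind in population]
    new_population = []
    for _ in range(POP_SIZE):
        parent = random.choices(population, weights=weights, k=1)[0]
        child_genome = parent["genome"][:]
        mutated_locus = None
        if random.random() < MUT_RATE:        # a tracked random event
            mutated_locus = random.randrange(GENOME_LEN)
            child_genome[mutated_locus] ^= 1  # flip one bit
        child = {"id": next_id, "parent": parent["id"], "genome": child_genome}
        lineage_log.append((gen, next_id, parent["id"], mutated_locus))
        new_population.append(child)
        next_id += 1
    population = new_population

# Nothing is hidden: every birth and every mutation can be inspected after the run.
print("recorded births:", len(lineage_log))
print("recorded mutations:", sum(1 for _, _, _, locus in lineage_log if locus is not None))
```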

This defensive emphasis surprised me. It suggested that these researchers have often been questioned on the usefulness of their simulations for the study of evolution.

In this post, I want to reflect on some reasons for such questioning.

Let’s rewind to a time before computers. To a time before Darwin’s evolution by natural selection. Just to stress that this debate could have been had (and to some extent, has been had) before either computers or evolution. Let’s rewind to the time of Thomas Hobbes.

When Hobbes was writing, clocks and watches were some of the best examples of technology, and the hottest idea was the new science of mechanistic physics. Except Hobbes wanted to write about politics — more than that, he wanted to write a science of politics. The problem was that in looking at the commonwealth, he saw the importance of its form and the relative unimportance of its matter. If he were a pure Aristotelian, this would be no issue, but he accepted the new science’s elimination of form as an explanatory tool. For mechanistic physics, formal cause was not an acceptable mode of explanation.

This forced Hobbes to distinguish between two kinds of knowledge. First, there was knowledge of things that we have made ourselves — for him, the central examples of this were geometry and the state. Second, there was knowledge of things that we did not make — i.e., the domain of mechanistic physics. In the case of physics, we could not deconstruct the machine because different mechanisms can produce the same effect. Thus, if we tried to reason from effects to causes, we could only arrive at reasonable conjectures and hypotheses. But for the state, we could know the causes because we had constructed the state ourselves. With this move, Hobbes could avoid the problem of underdetermination.

This is also the move that a computational modeler employs. By explicitly specifying all the rules that the digital organism follows, she is making its world. She can then dismantle the machine and understand all of its parts and how they contribute to the effect of interest. Unlike Hobbes, she has the extra advantage of not having had the State built around her, and of being able to dismantle her simulation at will. Of course, in practice, just like Hobbes, most computer modelers don’t fully understand the code they’ve written. Still, this powerful determination is the computational modeler’s cake.
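
As a hypothetical follow-up to the sketch above, the dismantling can be as simple as rerunning the same toy world with one rule switched off under the same random seed, so that any difference in outcome is attributable to that rule:

```python
import random

# Hypothetical knock-out experiment on a toy evolving population:
# because the modeler wrote every rule, she can disable one (here, mutation)
# and rerun with the same seed to see exactly what that rule contributes.

def run(mutation_on, seed=7, pop=30, gens=100, genome_len=10, mut_rate=0.05):
    rng = random.Random(seed)
    population = [[0] * genome_len for _ in range(pop)]
    for _ in range(gens):
        weights = [sum(g) + 1e-9 for g in population]  # toy fitness, never all zero
        parents = rng.choices(population, weights=weights, k=pop)
        population = []
        for g in parents:
            g = g[:]
            if mutation_on and rng.random() < mut_rate:
                g[rng.randrange(genome_len)] ^= 1      # flip one bit
            population.append(g)
    return max(sum(g) for g in population)

print("best fitness with mutation on: ", run(mutation_on=True))
print("best fitness with mutation off:", run(mutation_on=False))
```

With mutation disabled, the toy population never leaves its starting genotype; that kind of part-by-part attribution is exactly what the modeler’s full specification of the rules buys her.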

Unfortunately, the modeler wants to eat her cake, too. By appealing to multiple realizability, the modeler can claim that evolution does not need to be realized in DNA but can also be realized in silico. In other words, she claims that evolution is underdetermined. She will usually proceed further by saying that a big advantage of a computational model is that it can be run in conditions that aren’t easily accessible to wet-lab experiments. In other words, she wants to assume a set of rules — rules that are underdetermined by the intuitions drawn from real experiments — and then extrapolate their effects to carry out unreal experiments.

I think it is this tension between having your cake and eating it that causes the criticisms of "just a simulation". All the advantages of peering under the hood come from determination, but all the applicability to non-simulations comes from underdetermination. And since we don’t usually inherently care about in silico organisms, we have to embrace the underdetermination for the sake of applicability. Once we do that, all the power of peering under the hood disappears, since the detailed mechanisms become merely conjectural. This is made worse by the curse of computing in big simulations, where the modeler doesn’t actually understand all the details of the mechanism they implemented — for example, when the organisms are arbitrary programs in some simple specification language.

Some of this critique can be avoided by replacing in silico with in logico. And I think computational modelers often offer this defence, too. For this, let’s turn again to Hobbes.

After sidestepping the problem of underdetermination, Hobbes could imagine the State as a giant watch or, more generally, an automaton. But he did not see the gears of that watch as the humans that make up society. Instead, he compared artificial constructs like the "wealth of the population" to the strength of the automaton, counselors to its memory, and reward and punishment to its nerves. In this way, he was not implementing the State through physical processes (which would have made its study an extension of physical mechanics) but through conceptual, human-made processes.

We can make a similar move with simulations. We can recognize that the physical world is separate from our descriptions of it, and that evolution is our way of making sense of the order and diversity in the physical world. As such, evolution is a concept that we can implement with other concepts. A computer simulation is then just a physical model of those concepts, much like a clock was, for a long time, used as a physical model of our astronomical hypotheses. This is the same sort of separation of theory and reality that I tried to make with Post’s variant of the Church-Turing thesis. And this provides a way to interpret evolutionary simulations as implementations of theory.

I think that modelers make the above argument when they point out that what matters is not the DNA/RNA/squishy-stuff of biology, but some set of logical, process-based rules that defines where evolution applies. However, when we make this argument, we have to be mindful of the underdetermination of our theory. In particular, our goal should be to improve how the theory is determined. In practice, I think this can only be done if we provide an opportunity to link directly to systems of interest. We want our processes to have operationalizations that apply both in our computational model and in other model organisms or natural organisms. For me, this can mean giving up some of the peeking under the hood in favor of an effective theory rather than a reductive one.

Of course, the above considerations are not limited to computer models. Model organisms in conditions designed for the purpose of a particular experiment are both conceptual and physical systems. And although computer models are also both conceptual and physical systems, these two aspects of them are usually easier to disentangle than for model organisms. This means that the above considerations could be repeated for experimental systems, but more care would be required.

https://www.oreilly.com/ideas/jupyter-notebooks-and-the-intersection-of-data-science-and-data-engineering

Center of the Universe
Super villain in somebody's action hero movie. Experienced a radioactive freak accident at a young age, which rendered him part-snake and strangely adept at Python.