Thursday, May 19, 2016

Another look at 'complexity'

A fascinating and clear description of a contemporary problem facing the sciences that deal in 'complexity' can be found in an excellent discussion of how brains work in yesterday's Aeon Magazine essay ("The Empty Brain," by Robert Epstein).  Or rather, of how brains don't work.  Despite the ubiquity of the metaphor, brains are not computers.  Newborn babies, Epstein says, are born with brains that can learn, respond to the environment, and change as they grow.
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
We are absolutely unqualified to discuss or even comment on the details of the neurobiology discussed.  Indeed, even the author himself doesn't provide any sort of explanation of how brains actually work, using instead general hand-waving terms that are almost tautologically true, as when he says that experiences 'change' the brain.  That change involves countless neural connections (it must, since what else is there in the brain that is relevant?), and would be entirely different in two different people.

In dismissing the computer metaphor as a fad based on current culture, which seems like a very apt critique, he substitutes vague reasons without giving a better explanation.  So if we don't 'store' an image of things in some 'place' in the brain, we obviously do somehow retain the ability to recall it.  If the data-processing imagery is misleading, what else could there be?

We have no idea!  But one important thing this essay reveals is that the problem of understanding multiple-component phenomena is a general one.  The issues with the brain seem essentially the same as the issues in genomics, which we write about all the time, in which causation of the 'same' trait in different people is not due to the same causal factors (and we are struggling to figure out what those factors are in the first place).

A human brain, but what is it?  Wikipedia

In some fields, like physics, chemistry, and cosmology, each item of a given kind, such as an electron, a field, a photon, or a mass, is identical to every other, and their interactions are replicable (if current understanding is correct).  For complexities like the interacting motions of many galaxies, each with many stars, planets, and interstellar material and energy, the computational and mathematical details are far too intricate and extensive for simple solutions, so one has to break the problem down into subsets and simulate them on a computer.  This seems to work well, however, and the reason is that the laws of physics apply equally to every object or component.

Biology is composed of molecules, and at their level of course the same must be true.  But at anything close to the level at which we need understanding, replicability is often very weak, except in the general sense that each person is 'more or less' like every other in physiology, neural structures, and so on.  At the level of underlying causation, we know that we're generally each different, often in ways that are important.  This applies to normal development, health, and even behavior.  Evolution works by screening differences, because that's how new species and adaptations arise.  So it is difference that is fundamental to us, and part of that is that each individual with the 'same' trait has it for different reasons.  Those reasons may be nearly the same or very different.  We have no a priori way to know, and no general theory that is of much use in predicting, and we should stop pouring resources into projects that nibble away at tiny details, a convenient distraction from the hard thinking we should be doing (as well as from addressing the many clearly tractable problems in genetics and behavior, where causal factors are strong and well known).

What are the issues?
There are several issues here, and it's important to ask how we might think about them.  Our current scientific legacy has us trying to identify fundamental causal units and then to show how they 'add up' to produce the trait we are interested in.  'Add up' means they act independently, and each may, in a given individual, have its own particular strength (for example, variants at multiple contributing genes, with each person carrying a unique set of variants and each variant having some specifiable independent effect).  When one speaks of 'interactions' in this context, what is usually meant is that two factors combine beyond just adding up.  The classical example within a given gene is 'dominance', in which the effect of the Aa genotype is not just the sum of the A and the a effects.  Statistical methods allow for two-way interactions in roughly this way, by including terms like zA×B (some quantitative coefficient times the A and the B states in the individual), assuming that this is the same in every A-B instance (that is, z is constant).
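
To make that concrete, here is a minimal sketch of what such an additive-plus-interaction model looks like when fitted by ordinary least squares.  The genotypes, effect sizes, and sample size are entirely made up for illustration; nothing here comes from real data.

```python
# Minimal sketch of an additive model with a single pairwise interaction.
# Everything here is hypothetical: simulated genotypes and made-up effect
# sizes, not data or estimates from any real study.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Genotypes at two loci, coded as 0, 1, or 2 copies of a variant allele.
A = rng.integers(0, 3, size=n)
B = rng.integers(0, 3, size=n)

# Simulate a trait: additive effects plus one A-by-B interaction term (z),
# with z assumed identical in every individual, as the standard model assumes.
b0, bA, bB, z = 10.0, 0.5, 0.3, 0.2
y = b0 + bA * A + bB * B + z * A * B + rng.normal(0, 1, size=n)

# Fit by ordinary least squares: intercept, two main effects, product term.
X = np.column_stack([np.ones(n), A, B, A * B])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated [b0, bA, bB, z]:", np.round(coef, 2))
```

The thing to notice is the modeling assumption itself: a single constant coefficient z is presumed to describe the A-by-B interaction in every individual, which is exactly the kind of simplification discussed next.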

This is very generic (not based on any theory of how these factors interact), but for general inference that they do act in relevant ways, it seems fine.  Theories of causality invoke such patterns as paths of factor interaction, but they almost always rest on simplifications that are clearly consequential: that interactions are only pair-wise, that there is no looping (the presence of A and B sets up the effect, but A and B don't keep interacting in ways that might change it, and there's no feedback from other factors), and that the sizes of effects are fixed rather than different in each individual context.

For discovery purposes this may be fine in many multivariate situations, and that's what the statistical-package industry is about.  But the assumptions may not be accurate, and/or the number and complexity of interactions may be too great to be usefully inferred from practical data: too many interactions for achievable sample sizes, parameters affected by unmeasured variables, individual effects too small to reach statistical 'significance' but in aggregate accounting for the bulk of the effect, and so on.
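
That last point can be illustrated with a toy simulation (all parameters are invented for illustration, and this is a sketch rather than anything resembling a real analysis): hundreds of variants with tiny effects, typically none of which passes a stringent 'significance' threshold on its own in an achievable sample, even though jointly they account for a sizable share of the trait's variance.

```python
# Toy illustration with made-up parameters: many variants with tiny effects,
# typically none individually 'significant', yet jointly explaining a sizable
# share of the trait variance.
import numpy as np

rng = np.random.default_rng(1)
n, m = 2000, 500                        # individuals, contributing variants

G = rng.binomial(2, 0.3, size=(n, m))   # genotypes, 0/1/2 copies per locus
betas = rng.normal(0, 0.08, size=m)     # many very small true effects
y = G @ betas + rng.normal(0, 1, size=n)

# Single-locus scan: an approximate z statistic per variant, from the
# correlation between genotype and trait.
Gs = (G - G.mean(axis=0)) / G.std(axis=0)
ys = (y - y.mean()) / y.std()
zstats = (Gs * ys[:, None]).mean(axis=0) * np.sqrt(n)

# Count variants passing a stringent 'genome-wide' style threshold
# (|z| > 5.45 corresponds roughly to p < 5e-8).
print("individually 'significant' variants:", int(np.sum(np.abs(zstats) > 5.45)))

# The aggregate genetic contribution to the trait variance is nonetheless large.
genetic = G @ betas
print("variance explained jointly: %.2f" % (genetic.var() / y.var()))
```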

These are not newly discovered issues, but often they can only be found by looking under the rug, where they've been conveniently swept because our statistical industry doesn't, and cannot, adequately deal with them.  This is not a fault of the statistics, except in the sense that they are not modeling things accurately enough; in really complex situations, which seem to be the rule rather than the exception, this is simply not an appropriate way to make inferences.

We need, or should seek, something different.  But what?
Finding better approaches is not easy, because we don't know what form they should take.  Can we just tweak what we have, or are we asking the wrong sorts of questions for the methods we know about?  Are our notions of causality somehow fundamentally inadequate?  We don't know the answers.  But what we do now have is knowledge of the causal landscape we face.  It tells us that enumerative approaches are what we know how to do, but also, as we know, that they are not an optimal way to achieve understanding.  The Aeon essay describes yet another such situation, so we know that we face the same sort of problem, which we call 'complexity' as a not very helpful catchword, in many areas.  Modern science has shown this to us.  Now we need to use appropriate science to figure it out.

9 comments:

Anonymous said...

We need a taxonomy of modes of persistence: how things are transmitted through time (remaining a recognizable thing). One suggestion:

1. pristineness: unchanging original object - some crystals, manufactured objects
2. substitution: same object is gradually replaced - Ship of Theseus thought experiment, fossils (different materials), extracellular matrix (of course dynamic change often functional)
3. replication: mechanical copy - DNA replication, photocopy, 3D printing, computer 'memory'
4. reproduction: replicated component of thing interacts with local environment to generate thing developmentally - daughter cells, organisms, memories, transmitted ideas, language

OK, it needs work. But perhaps it helps illustrate how organisms and thoughts are nothing like crystals or DNA, and transmitted ideas even less so. It also highlights the limitation of our metaphors. Note also that all of these are subject to decay/error. In the case of reproduction, fidelity depends on the local environment (intra-organism, niche etc.) and in some sense the notion that the feature/species/other entity is the same thing through time is something of a Platonic fiction.

Ken Weiss said...

These are points to think about. In a way, the ideas about 'systems' or 'complexity' relate to how the net result can persist but not the individual components. We mentioned 'phenogenetic drift' and work by Andreas Wagner in recent posts. Lots of people are thinking about the issue, but mostly trying to force it into reductionistic modes.

A current Nature article about agricultural research (here: http://www.nature.com/news/the-race-to-create-super-crops-1.19943) is essentially making the same point. Evolution in biological nature works directly through traits and only indirectly through individual causal genetic elements.

Human culture--history, the arts, language and so on--has similar attributes, I think. Reductionism has a powerful history but is only one way to view the nature of causation.

David J. Littleboy said...

Sorry to be disagreeable again, but I disagree.

The mathematics of computation answers the question of whether or not the brain is a computer quite clearly: the brain has to be a computer. There simply isn't any computation that the very simplest model of computation (the Turing machine) can't perform*. What this means is that the brain is either a computer of some sort, or it's magic. Really, there isn't anything else out there. The brain has to be a functional assemblage of functional parts that together create a conscious thinking machine.

Of course, that doesn't mean that the computer metaphor is particularly helpful (at least in its current crudely applied forms) in understanding how the brain works, especially since our current understanding of what's going on in the brain is near zilch (we don't have the slightest clue how memories are stored: the latest is that the "memory reconsolidation" theory is probably simply wrong**). But if you look at the visual processing in the early stages of the optical system (edge detection and the like), you see very sensible circuits that do quite clear things. My bet is that it's like that all the way down, i.e. neural circuits implementing specific functions that are wired together so as to make a brain that works. (FWIW, I think the "neural networks" stuff is all barking up the wrong tree, but that's another rant.)

So, yes, we don't have a clue how the brain works, and no, the computer model is probably doing more harm than good. But as long as you don't believe in magic, and accept that the brain is only performing chemical and electrical processes, then at some point what it's doing has to be computation in the mathematical sense. Because that's all there is.

And as a final egregiously off-topic rant: the Turing test, as described by Turing himself (not as described in popular press accounts) is actually a quite serious test in which humans and computers are pitted against one another in one of the hardest tasks for humans to perform well on: empathizing with someone of the opposite sex well enough to pretend to be the other sex. (Turing only asks that the computer perform as well as your generic male in this task.)

*: It may be possible to compute some things faster by using quantum entanglement, but that doesn't change the set of things that can be computed.
**: http://blogs.discovermagazine.com/neuroskeptic/2016/05/19/does-reconsolidation-exist/#.Vz81uml-Pxs

Ken Weiss said...

I don't remember exchanging comments with you unless you were the 'anonymous' commenter above. I know very little about the brain, but I have been writing computer programs for a long time and I think the analogy is forced. That the brain 'computes' or moves signals around in some way and 'stores' its experiences somehow seems obvious (and not magic!). But programs are 'linear' in the sense of one instruction following another in a pre-prescribed order. I think it would have to be shown that there are similar pre-prescribed pathway orders in brains, in a very similar sense. Anyway, I have no deeper understanding of the motivation of the Aeon author, and our point was rather different: the basic idea of how complex causation works, and the problems of trying to treat things in a reductionistic way when there are clearly multiple ways to achieve the same or similar outcomes (which may make things mysterious, but not mystical).

Anonymous said...

I'm not Littleboy, but thanks for your response and link. Perhaps multilevel selection is partly responsible for the robustness of developmental-genetic systems. The indirectness of organismic selection tends, over time, to spread out the genetic substrate of a trait, including modifying genes that ensure its expression and reduce any deleterious aspects, with the result that loss or substitution of a single allele will be less likely to be fatally disruptive and beneficial traits will be more likely to appear in different environments. Selection at the genic level results in multiple copies and segregation distortion of those genes indirectly selected.

As for computation, our silicon digital computers are hardly a better metaphor than Descartes' hydraulic automata. But computation in a generic sense is ubiquitous: receptor expression, hormone and cytokine secretion, cell differentiation, all 'interpreting' and influencing subsystems of organismal states.

Ken Weiss said...

Thanks for these thoughts!
I think my take on the Aeon article is that our metaphors are products of our time, and our limitations. That doesn't make them wrong, perhaps, since the only real model is the real thing. So it's a message to be wary of similes and metaphors.

The things you mention clearly do occur (at least based on what current knowledge can show us). There have been various plausible theories or catch-phrases, like 'canalization', that capture some of what you describe, and having done developmental work myself, I think there's a lot of truth there.

However, one can, and to be a good scientist should, always ask whether what seems obvious can turn out to be wrong and/or limiting. The major scientific revolutions, to use Kuhn's over-invoked analysis, come with a fundamental change of frameworks. Whether some metaphor other than computing and 'information' will serve us better remains to be seen. I always try to think about how things might be different, and I use computer simulation of evolution! But one can't just wish for deeper insights. Some facts or quirks, or luck, must come along for that to happen.

David J. Littleboy said...

But computation in a generic sense is ubiquitous: receptor expression, hormone and cytokine secretion, cell differentiation, all 'interpreting' and influencing subsystems of organismal states.

Yes. Exactly. I'd state it more strongly than that: anything you actually find and explicate going on in the brain is going to be computation. The _architecture_ of the brain isn't going to look very much like a generic "computer", but everything it does is going to be computation. That's why I get worried when people decide they dislike the computer metaphor: it may mean that they are looking for something that can't possibly be there, or are missing important constraints on how things might work.

Ken Weiss said...

We're now in the realm of semantics and differences in view of the value and constraints provided by metaphor and simile. If there is a truth out there, our meandering path to it can take many directions, mainly unpredictable. We can each hope, I think, that the usages we find helpful really are helpful.

John R. Vokey said...

Sean Carroll's recent book, "The Big Picture: On the Origins of Life, Meaning, and the Universe Itself," has much to say on this broad topic, and I highly recommend it. For one provocative example, he declares causation an emergent, but not fundamental, property of the universe: a highly useful way of talking about things. He refers to his perspective as poetic naturalism. I found it compelling.