Monday, November 30, 2015

Quantum spookiness is nothing compared to biology's mysteries!

The news is properly filled these days with reports of studies documenting various very mysterious aspects of the cosmos, on scales large and small.  News media feed on stories of outer space's inner secrets.  We have dark matter and dark energy that, if models of gravitational effects and other phenomena are correct, comprise the majority of the cosmos's contents.  We have relativity, which shows that space and even time itself are curved.  We have ideas that there may be infinitely many universes (there are various versions of this, some called the multiverse).  We have quantum uncertainty, by which a particle or wave or whatever can be everywhere at once and have multiple superposed states that are characterized in part only when we observe it.  We have space itself inflating (maybe faster than the speed of light).  And then there's entanglement, by which there seem to be instant correlated actions at unlimited distances.  And there is some idea that everything is just a manifestation of many-dimensional vibrations ('strings').

The general explanations are that these things make no 'sense' in terms of normal human experience, using just our built-in sensory systems (eyes, ears, touch-sense, smell, etc.), but that observable data fit the mathematics of the above sorts of explanations to a huge degree of accuracy.  You cannot understand these phenomena in any real natural way, but only by accustoming yourself to accept the mathematical results, the read-outs of instrumentation, and their interpretation.  Even the most thoughtful physicists routinely tell us this.

These kinds of ideas rightfully make the news, and biologists (perhaps not wanting to be left out, especially those in human-related areas) are thus led to concoct other-worldly ideas of their own, making promises of miracle precision and more or less immortal health, based on genes and the like.  There is a difference, however: unlike physicists, biologists reduce things to concepts like individual genes and their enumerable effects, treating them as basically simple, primary and independent causes.

In physics, if we could enumerate the properties of all the molecules in an object, like a baseball, comet, or a specified set of such objects, we (physicists, that is!) could write formal equations to describe their interactions with great precision.  Some of the factors might be probabilistic if we wanted to go beyond gravity and momentum and so on, to describe quantum-scale properties, but everything would follow the same set of rules for contributing to every interaction.  Physics is to a great, and perhaps ultimate, extent about replicable complexity.  A region of space or an object may be made of countless individual bits, but each bit is the same (in terms of things like gravity per unit mass and so on).  Every interaction between a given pair of similar particles, etc., follows the same rules.  Every electron is alike as far as is known.  That is why physics can be expressed confidently as a manifestation of laws of nature, laws that seem to hold true everywhere in our detectable cosmos.

Of cats and Schroedinger's cat
Biology is very different.  We're clearly made of molecules and use energy just as inanimate objects do, and the laws of chemistry and physics apply 100% of the time at the molecular and physical levels.  But the nature of life is essentially the product of non-replicable complexity, of unique interactions among unique components.  Life is composed strictly of identifiable elements and forces, etc., at the molecular level.  Yet the essence of life is descent with modification from a common origin, Darwin's key phrase, and this is all about differences.  Differences are essential when it comes to the adaptation of organisms, whether by natural selection, genetic drift, or whatever, because adaptation means change.  Without life's constituent units being different, there would be no evolution beyond purely mechanical changes like the formation of crystals.  Even if life is, in a sense, the assembling of molecular structures, it is the difference in their makeups that makes us different from crystals.

Evolution and its genetic basis are often described in assertively simple terms, as if we understood them in a profound ultimate sense.  But that is a great exaggeration: the fact that some simple molecules interacted 4 billion years ago, in ways that captured energy and enabled the accretion of molecular complexity to generate today's magnificent biosphere, is every bit as mysterious, in the subjective sense of the term at least, as anything quantum mechanics or relativity can throw at us.  Indeed, the essential nature of life itself is just as non-intuitive.  And that's just a start.

The evolution of complex organisms, like cats, built through developmental interactions of awe-inspiring complexity, leading to units made up of associated organ systems that communicate internally in some molecular ways (physiology) and externally in basically different (sensory) ways, is as easy to say as "it's genetic!", but again as mysterious as quantum entanglement.  Organisms are the self-assembly of an assemblage of traits with interlocking functions, which can be achieved in countless ways (because the genomes and environments of every individual are at least slightly different).  An important difference is that quantum entanglement may simply happen, but we--evolved bags of molecular reactions--can discover that it happens!

The poor cat in the box.  Source: "Schrödinger cat" by Martin Bahmann, Wikimedia Commons

This self-assembly is wondrous, even more so than the dual existence of Schroedinger's famous cat in a box.  That cat is alive and dead at the same time depending on whether a probabilistic event has happened inside the box (see this interesting discussion), until you open the box, in which case the cat is alive or dead.  This humorous illustration of quantum superposition garnered a lot of attention, though not that much from Schroedinger himself, for whom it was just a whimsical way to make the point about quantum strangeness.

But nobody seems to give a thought beyond sympathy for the poor cat!  That's too bad, because what's really amazing is the cat itself.  That feline construct makes most of physics pale by comparison.  A cat is not just a thing, but a massively well-organized entity, a phenomenon of interactions, thanks to the incredible dance of embryonic development.  Yet even development and the lives that plants and animals (and, indeed, single-celled organisms) live, impressively elaborate as they are, pale by comparison with the awareness, self-awareness, and consciousness that various of these organisms display.

This is worth thinking about (so to speak) when inundated by the fully justified media blitz that weird physics evokes.  But then you should ask whether anything in the incomprehensibly grand worlds of physics and cosmology is even close to the elusiveness and amazing reality of these properties of life, and how these properties could possibly come about--how they evolved and how they develop in each individual--as particular traits, not just the result of some generic evolutionary process.

And there's even more:  If flies or cats are not 'conscious' in the way that we are, then it is perhaps as amazing that their behavior, which seems so much to have aspects of those traits, could be achieved without conscious awareness.  But if that is so, then the mystery of how consciousness evolved, and of the nature of its nature, is only augmented many-fold, and even farther from our intuition than quantum entanglement.

Caveat emptor
Of course, we may have evolved to perceive the world just the way the world really is (extending our native senses with sensitive instruments to do so).  Maybe what seems strange or weird is just our own misunderstanding or willingness to jump on strangeness bandwagons.  Here from Aeon Magazine is a recent and thoughtful expression of reservations about such concepts as dark matter and energy.

If quantum entanglement and superposition, or relativity's time dilation and length contraction, are inscrutable, and stump our intuition, then surely consciousness trumps those stumps.  Will anyone reading this blog live to see even a comparable level of understanding in biology to what we have in physics?

Wednesday, November 25, 2015

Epigenetics: what is it and what isn't it? Part II: How far can it take us?

Some people are heralding discoveries in epigenetics as a vindication of Lamarck's idea of the inheritance of acquired characteristics.  As we said yesterday, and in our previous posts on Lamarck, in a technical sense this may be so, if multigenerational transmission of epigenetic marking can be clearly demonstrated.  This is a big 'if' but in subtle ways.  DNA usage can be inherited, as epigenetic research shows.
Lamarck.  Any serious connection to epigenetics?  Source: Wikimedia Commons
Let's assume it can be transgenerational--that is, not erased in the formation of sperm or egg cells so that a given gene's epigenetic marking can be transmitted across many generations.  In a way, this would subvert the adaptability of organisms by pre-programming, as if they had already been exposed to whatever environmental factor leads to the epigenetic marking.  That would be anti-evolutionary in the sense that organisms have, by and large, evolved to adapt to environments, not to predict them, so it would be better to re-set the genome so it doesn't express genes that it doesn't need to express, etc.

But let's assume current findings are basically supported by research that shows transgenerational inheritance.  This would be Lamarckian inheritance--the inheritance of acquired characteristics--but is it Lamarckian evolution?  That's a more subtle question.  If it is, say, useful to respond to some nutrient X, then passing a hyper-responsive gene usage pattern would be adaptive.  It would be based on experience and inherited, Lamarckian.

What sorts of traits are we talking about here?  We need to be careful in several ways.  First, the epigenetic effects that have been identified affect gene regulation (gene expression or not, expression levels), but not the structure of genes or their regulatory DNA sequences.  In itself, that's no big deal, as regulation is a part of function and hence evolution.  In fact, if individuals do better because they mark certain genes for (say) increased expression, and that marking is inherited and increases survival, that's good--it's adaptive!  If then at some later time a DNA sequence change, a mutation, makes the cell respond in the same way without requiring the more temporary epigenetic marking, then the new cellular responsiveness would become built into the genome itself.  That has long been known as a potential evolutionary phenomenon (called either the 'Baldwin effect' or 'genetic assimilation').  It presents no challenge to evolutionary thinking and is still basically Darwinian.  It would mean that while evolution can occur by epigenetic means, it eventually goes the usual genome-based way of 20th century evolutionary theory.  No problem.  It can be called Lamarckian, but we shouldn't give Lamarck too much credit, because he had no knowledge of any of these mechanisms, and was (like Darwin after him) largely guessing about how to fit what is observed to some persuasive theory.

But what Lamarck was talking about were 'real' adaptations, like flight in bats, or whales in the sea, or the complex organs by which mammals execute special functions, or the evolution of eyes.  These are the 'real' adaptive changes that he, and Darwin and many others, were trying to account for in natural rather than theological terms.

Modification of the expression levels of insulin or some metabolic trait, or even of some skin pigment, based on life-experience would seem so trivial relative to more complex 'real' adaptations that we can seemingly keep Darwin on his pedestal of honor, and Lamarck in the laughing gallery.  Or can we?

Different sorts of 'adaptation'
The epigenetic examples that have been studied so far concern the usage of genes, that is, their regulation.  They do not affect the structure of the genes themselves (that is, the protein or functional RNA that they code for).  In that sense, the standard anti-Lamarckian view of epigenetics would be that it doesn't really introduce anything new, but just adjusts what is already there.  Yet Darwin and even Lamarck were concerned with how new traits arise.

We all know of the ridicule levied at Lamarck for arguing that giraffe ancestors became modern giraffes by striving to eat high leaves, which in essence stretched their necks, and that this change was then inherited.  Lamarck didn't require random mutation and natural selection, and in fact he only mentioned giraffes once in his book.  There he noted that legs and other structures, not just necks, had to change in their evolution.  Laughter notwithstanding, Darwin would say that mutations variously giving longer necks and differently structured limbs etc. led their bearers to produce more baby giraffes, and that long-neck adaptation occurred that way.  But could such complex structures have evolved by epigenetic means, by quantitative changes in development later genetically assimilated?

Giraffes by epigenetic evolution?  Source: Wikimedia images
Regulatory modification doesn't seem able to produce new things, does it?  Is it possible to produce new structures by changing the expression patterns of currently existing structures?  If there is anything seriously 'Lamarckian' about epigenetics, perhaps this is where any really interesting issues arise (and, keep in mind that Lamarck's kinds of explanations were the sort of thing one might conjecture with what was known at the time, so one can, at most, use his name reservedly, as is the case with Darwin).

Mutations can occur in regulatory regions and affect the binding efficacy of regulatory proteins.  So changes in DNA sequence of these regions can affect the expression of genes they control.  In that sense mutations can in principle arise that do what epigenetic modification does: change gene expression.  If this occurs in situations where environmentally induced expression changes occur, then genetic assimilation can introduce expression changes more permanently in the genome.  There's no problem here.

Still, epigenetic changes don't directly change the structure of proteins, so can the latter be related to epigenetic responses to the environment?  In principle, certainly they can!  Gene structure clearly affects the dynamics of the protein's actions.  Mutations in the structure of a cell-surface receptor that responds to some environmental factor can affect its efficiency, and in that sense mimic the effect of epigenetic change that raises or lowers the sensitivity of genomic response to that same factor. Similarly, making more of something (say, collagen in skin and bone) could be similar in effect to making less but stronger material.  In that sense, genetic assimilation of structural changes can, in principle, mimic or be equivalent to epigenetic effects (here, by the way, we're ignoring whatever it is that enables a particular environmental factor to affect gene-specific epigenetic change--a major and highly relevant issue!).

In fact, it is not so trivial to ask what is a 'new' trait rather than a modification of something existing, especially if one thinks in terms of how development works.  Giraffe necks are complex structures, but are longer necks new or just quantitative changes in all the tissues involved?  If you think carefully, the very fact of evolutionary continuity--descent with modification to use Darwin's own phrase for it--raises the question as to what's actually new rather than quantitatively different from what came before.  The answer in our context here is not at all obvious.

In sum, one issue is that if gene structure changes can have similar effects to gene regulatory changes, then epigenetic mechanisms, that is, traits acquired by experience, can be related to adaptive evolution in the usual Darwinian sense of adaptive traits.  But this is not at all the same as the adaptive evolution of complex traits.  Could they, too, evolve via epigenetic mechanisms?

Could classical complex adaptive traits evolve through epigenetic means?
We now have extensive direct knowledge to show that tens, hundreds, or (typically) thousands of different genes are employed even in simple phenomena like the formation of basic tissue layers and lineages in embryos. The same is true for bones, eyes, organs and so on.  These causal elements all vary in and among individuals and species so that, within limits, many different combinations of genes and (importantly) the timing of their expression affects the traits that develop.  The differences are related to regulatory as well as structural sequence.  Genomewide mapping shows similarly complex causation of traits like behavior, or diseases like cancers, diabetes, and so on.

Similarly, we know that most of the traits we discuss in regard to adaptive evolution (and that were the objects of Darwin's and Lamarck's interest), evolved gradually, usually over countless thousands or even millions of generations.  That's hard to keep in mind, but to understand evolution properly we must try to do so.

An important fact is that the function of most genes is to affect the expression of other genes through various sorts of signaling or processing, indirectly as well as directly.  Signals, receptors, regulators, modifiers, and so on are all about quantitative effects, and an embryo is the result of a cascade of quantitative interactions that ends in physical structures and processes (like metabolism).  Even if the final properties of adaptive traits are purely structural, they are the result of regulatory interactions.

A central question, perhaps the central question, has to do with the fact that for the above scenario to be plausible, epigenetic effects based on life-experience in the environment must affect relevant gene usage, and it is far from automatic that all of these factors involved in complex adaptive traits could be induced in that way. It's a lot easier to account for minor metabolic effects like this than for other traits, even bit by bit.  Still, some of them might, and they all would, in principle, be legitimate subjects for an epigenetic evolutionary role.  This will likely be an important area of investigation.

Could major adaptations of the sort Lamarck and Darwin were mainly concerned with evolve through epigenetic stages?  Clearly, with such hypercomplex systems as we're made of, and their very gradual evolution, there is no way to argue that epigenetic changes, aided by various forms of genetic assimilation, could not account for the evolution of complex adaptive traits.  At least some of organisms' life-experience exposures (what Lamarck called habits or striving) can affect their gene usage and, ultimately, over such incomprehensibly complex genetic and temporal landscapes, lead to the origin and evolution of complex adaptive traits.

Could some lifestyle activities in Africa have led for whatever reason to growth changes in giraffe-ancestors' necks, slowly and bit by bit, that led them to notice and eat higher leaves, which then led to genetic assimilation (which is a form of natural selection)?

Could doesn't mean did, obviously, and even if these ideas applied to some traits, there's no reason they would have to apply to all traits.  We know evolution takes many paths.  But the discovery of environmentally induced regulatory changes by epigenetic marking could in principle lead to complex trait evolution.  Even at best, the onion-like multilayering by which an environmental experience comes to affect the expression of only some specifically relevant genes in the first place will have to be understood.

Evolution, and evolutionary theory, carry on
Even if all of this epigenetic driving of aspects of evolution were found to occur, that would not be a threat to the general notion of adaptive Darwinian evolution--it just adds another aspect of the mechanisms involved, which often would involve a component of natural selection.  And indeed Darwinian concepts, especially of strong natural selection, are themselves very oversimplified relative to a far more nuanced reality.

If epigenetics turns out to be more broadly and evolutionarily important than we currently know from solid facts, Lamarck would in that limited sense be right, but it would not be for anything other than the idea that adaptive traits can appear to have evolved through gradually accumulated life experience.  Lamarck was quite aware of the complexity and, importantly, slowness of evolution. But he can't be credited with more than that.  He had no specific information about mechanism and in that sense no anticipatory insight.  In essence the same was true of Darwin: he was observing variation in life and accounting for its arrival via natural historical rather than theological or other non-material means. On present understanding, Darwin's ideas, flawed or incomplete (or pure guesswork) though they often were, seem closer to the mark of our current understanding in many ways, epigenetics notwithstanding, than Lamarck's (not to mention various others who at the same general time speculated about evolution).

Truly transgenerational phenotypic effects may be real or even widespread.  They may be built into genomes or might, in principle, last indefinitely many generations.  Epigenetic routes could also be more flexible in the face of changing environments than changes that have to be hard-wired in the genome.  But even if so, this would mainly add details to the generally understood means of evolution of traits, simple or complex.  Every such discovery, if true, would improve our understanding of life.  The jury is clearly still out in this case, and jurors are still properly doubtful about whether or not it is a fact.

Tuesday, November 24, 2015

Epigenetics: what is it and what isn't it? Part I: basic ideas

Epigenetics is a word that has had a variety of meanings historically, and it's sometimes unclearly employed, even by the user.  But these days, when people talk about epigenetics they generally mean the chemical modification of DNA in a way that does not change the sequence itself but affects the expression of genes in or near the modified DNA region--that is why it is called 'epi' genetic.  Such chemical modifications affect whether or not the cell uses a particular gene (only a subset of all genes is used in any particular cell, but that subset changes depending on the cell's local environment at any given time).  That is, epigenetic changes essentially are regulatory; they are not mutations in the coding of the structure of a particular protein (or a directly functional RNA), but affect just how, when, or how intensely the gene is used by the cell.  Likewise, epigenetic modification doesn't change the affected sequence itself, but affects whether regulatory proteins can bind there to cause a nearby gene to be expressed, that is, transcribed into RNA.  The phenomenon of such DNA 'marking' itself isn't controversial, and a few of the means by which it happens in a cell are known.  Indeed, unlike mutations in the sequence itself, the marking is easily erased, and there are known mechanisms that do that.

However, reports that epigenetic marking can be inherited are quite legitimately controversial.  There are a few reasons for this.


How can local gene usage be inherited?

Cells respond to their environment--to extra-cellular conditions--via cell-surface receptors or other similar means.  If they don't have receptors for a signal floating by, they can't detect or respond to that signal.  But cells that do detect a signal may start or stop using a particular set of genes in response.  That's how complex multicellular, multi-organ system organisms become differentiated, and also how they respond to environmental conditions.  Most examples of epigenetic inheritance relate to experience that affects particular types of cells, though many 'housekeeping' genes, genes that carry on basic metabolism, are used by all cells, and any environmental change could in principle induce all cells to change their gene usage.

Once modified, and unless there is subsequent environmentally induced change, cells transmit their particular expression state to their daughter cells when they divide.  If an epigenetic modification causes a cell to respond to a particular environmental signal by turning on the expression of a particular gene, that 'use it!' state would be passed on when the cell divides to produce other cells in its lineage, unless or until another modification occurred to reverse the original change.  Thus, if some particular cell, say a lung cell, is induced by some environmental factor like a nutrient to express some set of lung-related genes, the effect is local, specific to lung cells.  How that works is complex, but some of the mechanisms are known.  They have to do with how chromosomes specifically in lung cells are packaged; that is a local fact.  For example, it need not also affect nerve or vessel or skin or stomach cells.  Again, that is because in a differentiated organism different tissues are separated from each other so they can be different.

This raises a serious problem: Local effects on gene expression will be passed on to daughter cells in that tissue, but this is not the same as transmitting the effect to the next generation of organisms.  Intergenerational transmission requires that the modification also be made in the germline--sperm or egg cells--because the offspring organism starts out life as just a single fertilized egg (which has no lung cells!).  Germline cells generally need to have genes switched on (or off) to enable them to make a new organism from scratch, from that single fertilized egg cell.  Some temporary change that was important to the embryo's future lung cells would not likely be appropriate for the development of those cells in the first place during embryogenesis.  So it is no surprise that there are active mechanisms to strip off epigenetic changes in germ cells' DNA, to reprogram those cells' gene usage and prepare them for their embryonic duties; this is done by erasing and re-setting DNA modification in the sperm or egg cell.  Thus, the process of erasing and reprogramming removes the parents' acquired changes.  If the embryo's lungs, when it eventually has them, need to modify what they do based on the air they're exposed to, then new epigenetic changes will occur.  Some bits of the genome are protected from this erasure, but it is not automatically true that even environmentally induced changes in housekeeping gene usage will be transmitted.
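
As a cartoon of this logic, here is a minimal sketch in Python.  Everything in it (the Cell class, the gene name, the 'protected' set) is invented for illustration, and in reality gametes don't descend from lung cells at all, which is the point of the next paragraphs; the sketch only shows the two facts just described: marks propagate down a cell lineage, and germline reprogramming strips them.

import random  # not used below; kept out -- the sketch is deterministic

# Toy model of epigenetic marking: marks are inherited through cell division
# within a lineage, but erased when a gamete is made.  All names are invented.
class Cell:
    def __init__(self, marks=()):
        self.marks = set(marks)          # genes currently marked 'express me'

    def divide(self):
        return Cell(self.marks)          # daughter cells inherit the expression state

def respond_to_environment(cell, gene):
    cell.marks.add(gene)                 # e.g., a nutrient induces use of a gene

def make_gamete(cell, protected=frozenset()):
    # Germline erasure and re-setting: strip all marks except protected regions.
    return Cell(cell.marks & protected)

lung = Cell()
respond_to_environment(lung, "lung_gene_X")
daughter = lung.divide()                 # the mark persists down the lung lineage
print(daughter.marks)                    # {'lung_gene_X'}

germ = make_gamete(daughter)             # but a reprogrammed gamete starts clean
print(germ.marks)                        # set()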

It was first systematically shown by Weismann in the 19th century that, at least in most animals, somatic (body) and germline cells are separate, independent lineages isolated from each other (the situation is different in many or most plants); this has been a theoretical bulwark against the idea of Lamarckian inheritance.  It means that for epigenetic changes to become heritable--and hence affect evolution--modifications to particular body cells would have to be applied to germline cells and not be erased before fertilization.

Without some clear mechanism, there is no reason that future sperm or egg cells will even 'know' about, much less respond to, the signal that induces change in the lung or nerve or stomach cells.  So for epigenetic change to be inherited, there is the serious question of how the genomes in germline cells are specifically modified by signals that affect nerve or lung, etc.  If a lung cell alters its use of gene X related to how lungs work, when it detects some (say) pollutant in the air, how does that specific change also get imposed on the germ line?  Explanations that have been suggested so far are mainly not very convincing. That's why most reports of inherited epigenetic modification are properly received with skepticism.

Still, many investigators are seriously interested in epigenetic changes, especially when or if they are inherited, for a few reasons.  First, this sort of inheritance, which modifies DNA usage differently among a person's many different localized tissues, threatens the degree to which traits can be predicted from a person's DNA sequence alone (obtained, for example, from a blood sample), and among other things that threatens realization of the promise of 'precision' genome-based medicine.  Secondly, accurate assessment of epigenetic effects could lead to a better understanding of important environmental exposures and/or what to do about them, so that newborns are not doomed by their parents' habits to live with pre-set epigenetic traits that they now cannot prevent.  And the least legitimate reason, but one important in the real world of today, is that it is a lucrative and sexy new finding that can be made to seem a melodramatic 'transformative' shift in our understanding of life.

An important criterion for claims of true epigenetic inheritance is that the marking must persist through at least a third generation without the presence of the environmentally causal trigger.  That is, transgenerational transmission is evidence that the genome is in fact preserving the change, rather than each new individual just acquiring it from environmental experience (such as in utero).  While there have been various generally convincing reports of true transgenerational inheritance in some species like the simple nematode (C. elegans) or plants, this hasn't clearly been shown in mammals (or humans), even if one- or even two-generation inheritance, usually through the maternal line, has been found.

Most of the literature consists of curious reports or claims of epigenetic inheritance, reviews of the germline erasure process and what areas of germline DNA could perhaps escape erasure of epigenetic marking, and some examples that seem to be truly transgenerational.  At present, the excitement generally seems to far exceed the reality.  But since epigenetics is potentially quite important, and the methods for understanding it rather new, it is being given serious attention.

A paper by Bohacek and Mansuy (November 2015 Nature Reviews Genetics) reviews what is known about the degree to which epigenetic 'marking' is inherited.  This is a very good, measured paper that, in our reading of it, makes it clear that claims of non-trivial multi-generational DNA modification effects still need careful documentation.  But suppose life-experience by parents can affect their offspring's traits in substantial ways related to the offspring's future life experience, even when the offspring are not exposed to the risk factors that set their parents' genome usage patterns.  Then, if we could understand how this works, perhaps such modifications would not be destiny, and means of prevention or control could be developed.

Gene usage isn't the same thing as gene structure
Epigenetic inheritance can also affect ideas about how evolution works, if it really has long-term (many-generation) effects.  The suggestion is now routinely being made that the phenotypic effects of epigenetics we are seeing introduce a Lamarckian view of evolution that may, after all, have to be melded with our Darwinian theory (e.g., see Skinner, MK, Gen Biol Evol. 7: 1296-1302, 2015).  But the idea that this is a genuine revival of Lamarckism is still treated with sneering.  Should it be?

We have written a 2015 series of posts about Lamarckian ideas.  Lamarck was interested in the evolution of adaptive traits, like flying or ocean-living mammals, not just some specific minor traits. He had some non-starter ideas, but so did Darwin and they had far less knowledge than we do!  So one can't defend his theory per se for various reasons.  Still, it's worth thinking about rather than just sneering at Lamarck.  That's for tomorrow's post.....

Thursday, November 12, 2015

Please send me your Intro BioAnth (PhysAnth, Human Origins/Evolution) syllabus

Dear colleagues,

I'm in the early stages of planning a study to determine the most effective approaches to teaching Introductory Biological/Physical Anthropology (also called "Human Origins" etc...) to undergraduates.

Mostly I'm interested in their evolutionary thinking and whether we can identify our best methods for bettering it over one semester.

This study would involve inviting students enrolled in this course at colleges and universities around North America to take pre- and post-tests on-line (which could earn them some form of extra credit on your end, if you do that sort of thing). All participation would be entirely anonymous except for the feedback about your own (anonymized first) students that you and I share, if you'd like to hear about it. The results of this study should benefit us all.

Before I get to any of it, though, it would help greatly to see your syllabi so that I can identify the major ways that our learning outcomes and our approaches vary. This will help me frame hypotheses and write survey questions.

So please, I'm begging you, please send me your most recent syllabus for Introduction to Biological Anthropology (aka Introduction to Physical Anthropology aka Introduction to Human Evolution/Origins) or the equivalent course that you teach at your institution.

holly_dunsworth@uri.edu

If you want mine in return, I'll be happy to reciprocate!

If you do share, you have my word that I will keep your syllabus confidential and anonymous and will not share it with anyone.

When you send it, would you please mention whether you'd be interested in participating in such a study in the near future?

Thank you so much in advance,
Holly

P.S. If you could share this with your colleagues who teach this course, I'd be forever grateful.

Monday, November 9, 2015

Another bountiful acorn crop, on schedule

We've been raking a lot of leaves these past several weeks, which we do every year here in rural central PA.  But we've also, again, been raking a whole lot of acorns, which is not something we do every year. I blogged about this a few times in the midst of the last bountiful acorn year, five years ago, wondering why some years there are more acorns than others. It seems appropriate to rerun these posts.


The Mystery of the Acorn Bonanza

Penn State is situated in the middle of what used to be vast hardwood forest.  Some areas of the county have been more completely deforested than others, and we happen to live in an area in which the developers retained as many oaks as they could when they put up houses in the 1970s.  One consequence of this is that we spend many hours raking leaves this time of year.

Photo: A Buchanan

Many many leaves, and this year, many many acorns.  Masses of them.  It's a good year for acorns.  I wanted to know why.

Taking Holly's message on the 'scientific method' to heart, I wondered, could I apply the scientific method to this question?  Well, unfortunately no, not without having collected data on possibly relevant variables last year as trees were making their acorns -- or even two years ago, since the acorn production of some oak species is a two year affair.  And that would have required knowing the relevant variables.

So I did the next best thing.  I googled 'variation acorn production', thinking someone must have applied the scientific method to this question and done the requisite hypothesis testing.

Photo: A Buchanan

Which of course they have.  It turns out that people look at this from various angles, though.  Some are thinking of the downstream effects, so advise hunters on how to maximize acorn production, because acorns are a staple food for deer.  Others are thinking about sustainable forests, and about how to keep deer from eating the acorns that they hope will go on to produce new trees.

Even so, this approach was not nearly as productive as I'd hoped.  In fact, I kept coming up with a similar theme:
The number of seeds produced by a population of woody plants can vary markedly from year to year. Unfortunately, knowledge of the patterns and causes of crop-size variation is limited....
and
However, little is known about the proximate factors that control the yearly variation in acorn production in oak species...
and
Most acorn production studies note wide and consistent differences in acorn productivity among individuals, but none clearly demonstrate determinants of productivity. 

Hm.  Well, this site looked more promising -- a list of actual variables!
A number of factors affect acorn production in oaks:
∙ masting cycles
∙ acorn predators
∙ oak species
∙ tree age and diameter
∙ weather
∙ tree dominance
∙ genetics
When combined, these factors make acorn production highly variable from year to year, between the different oak species, between trees of the same species, and from one property to another.
And further,
[Production is high] once every two to five years.  Acorn production during an abundant crop year may be 80 percent higher than in a low production year; the difference to deer can be hundreds of pounds of acorns per acre.  Although the exact mechanisms that control masting are not fully understood, biologists believe that oak species, weather, and genetics are important factors that determine how often oaks produce abundant crops.      
If we knew what masting was, this might be helpful, but probably not to answer my question -- there's that 'not fully understood' thing again.  And really, only 'weather' is a variable in this equation, as the species and genes of a tree don't change season by season, so this isn't really very helpful after all.

This was interesting:
LONG-TERM PATTERNS OF ACORN PRODUCTION FOR FIVE OAK SPECIES IN XERIC FLORIDA UPLANDS
We examined long-term patterns of acorn crop sizes for five species of shrubby oaks in three xeric upland vegetative associations of south-central peninsular Florida for evidence of regular fruiting cycles and in relation to winter temperature and precipitation. 
And potentially rewarding -- by looking at different species in a single area they were able to control for variation in all the possibly relevant factors.  What did they find?  "[E]vidence that annual acorn production is affected by the interactions of precipitation, which is highly variable seasonally and annually in peninsular Florida, with endogenous reproductive patterns." Oh, so it's rainfall.

Except that, as it turns out, a number of people have studied variation in acorn production in five local species in different areas.  There's a report of a study in California and one in Appalachia, and even one in Japan in which sea breeze was a factor, none definitively confirming the rainfall explanation.

In frustration, I emailed a local forestry agent.  I haven't heard back.  It's possible he's out counting acorns.

Ok, so I accept that there's no simple answer to this simple question.  The serious upshot of this little exploration is that here, too, complexity reigns.  Despite the list I cite above, who can really say what all the relevant variables are, not to mention measure them at the right time or place?  Oak flowers are wind-pollinated -- maybe acorn production depends greatly on wind catching the pollen at just the right time.  Which would be essentially unmeasurable.  And, perhaps variation in rainfall is a significant factor, but where and when?  The roots of mature oak trees run wide and deep, and when are which roots feeding which flowers?  And so on.

And how does one construct believable evolutionary (that is, adaptive Darwinian) scenarios for this?  There's no acorn gene!  (But, of course, it has been tried.)

And think how utterly confusing this must be for any squirrel who's just trying to use his experience to get ahead, to put away a good cache of meals, and wonders if he's going nuts because he's losing his memory.

But one interesting thing caught our eye here as we ventured away from our usual comfort level, scientific literature-wise.  Ecological studies, by their very nature, are less prone to reductive thinking than what we're used to.  "When combined, these factors make acorn production highly variable from year to year." By and large, these studies accept that the cause of variation in crop production is the result of interactions among various factors.

If only this were so readily accepted in genetics and anthropology.



This year's acorn crop, continued

I did hear back from the forester on the question of why so many acorns this year.  He says that oaks are generally sporadic fruit producers, with really good crops every 4 to 7 years.  There are several reasons for this, one being the weather and the other an ecological adaptation.

A late spring frost is hard on oak flowers, and will lead to a low yield, he says.  And, insects play a role.  There are on the order of 30 different species of acorn weevils "that can destroy up to 90% of any given year's production either while it is on the tree developing or after they fall in the autumn."  The cyclic nature of fruit production helps keep the insect population down.

And, he says that there are advantages to sporadic fruit production.  It keeps predator populations down, which increases the chances that some acorns from a given tree will survive and grow.  If not, my informant says, the tree would always be having to produce more and more fruit to stay ahead of the rodents.  Similarly, the fluctuation keeps weevil populations down, and thus acorn destruction down.  Good for the tree, not so good for the predators. 



Both explanations sound plausible.  However, regular MT readers won't be surprised if we are a bit reluctant to accept the adaptive explanation right off the shelf.  First, an oak tree is lucky if even a few of the acorns it produces in any given year make their perilous way to treehood.  Even in a bad year, oaks way overproduce acorns relative to what will take root, or replacement needs, and so on.

However, sporadic fruit production in response to the vagaries of climate or other means of destruction of flowers or developing acorns is completely in keeping with the adaptability or facultativeness that is a core evolutionary principle.  Oak trees need to be able to adapt to change, and good and bad fruit production years are one way they do so.  It's easy to suggest, but a lot more difficult to conceive, how a tree 'knows' (genetically evolves) to adjust for variable predator loads in the hypothesized way, when climate itself is unpredictable.

Friday, November 6, 2015

Anti-aging drug will put you on a very nasty waiting list.

A commentary article in the 18 September issue of Science describes an 'anti-aging' drug that an investigator wants to use in clinical trials.  We guess that scientists will never stop pushing the envelope, and perhaps the very job of science is to do that.  But at least, when it's costly and has consequences, the consequences should be acknowledged up front so that anyone supporting the idea knows what they're getting into.

Of course the usual cast of characters, experts convened by NIH to hear the proposal, did their usual hyperbole job (one quoted as saying that "this is a ground breaking, perhaps paradigm-shifting trial.").  Experts are, after all, not wholly disinterested when it comes to research funding.

The proposed idea is to follow a few thousand older people for at least five years to see what happens to them on a chosen, long-known-safe drug related to diabetes treatment.  They say, but without any real evidence that I could discern from the article, that they're not trying to extend life particularly, but just to add healthy years to one's lot.  Because these lucky survivors will purportedly be healthier, their medical needs will be less, so the burden on the health-care system will be less, too.  Yes, fund this work and reduce health care costs!  Does that sound like snake oil?

In this effort the idea is just to choose a known test drug to show proof of principle with the idea that the 'real' anti-aging compound will some day be discovered.  Despite the hype and breathless promotion by aging researchers (who are also researching aging!), from an actual biological point of view this is likely to be yet another stroll through Dreamland, and we've had plenty of those in regard to Fountain of Youth ideas.  Anyone remember monkey glands?  Or 'orthomolecular medicine'? Vitamin C?  Indeed, vitamin D?

The reason we make our skeptical assertion is that forestalling age-related diseases mainly means slowing down the process of degeneration.  That, essentially, is the expected effect of the proposed study.  But in reality, if this were successful it would mean, first, people having more years at risk for more decrepitude or for diseases to arise.  And second, it means the newly gained years will almost inevitably involve living with more morbidity, that is, less quality of life, for longer times.

Why is this?  In a nutshell, some cause or system failure will eventually get each of us.  Only cryopreservers can seriously think otherwise (and they're deluded).  Most of the causes of late adult death involve gradual decay of some biological system(s).  Often one thing leads to another: bones broken in a fall lead to infection or immobility that leads to something else, or livers and intestines may be fine but neural processes decline, and so on.  Stopping one decay will directly enable other decays to take its place in the survivor who would have passed away from the first one.  Our lifespan seems mainly to be based on surviving all the roughly independent, or correlated, causes until one finally gets us.  The alternative is that lifespan is calibrated by some single factor, and many candidates have been advanced, like loss of chromosomal telomere length.  Nonetheless, the former, multiple-cause view is clearly applicable to a great extent.

The reason is that the processes of neural decay, mutational causation of cancer, clogging or bursting of arteries, toxic buildup in tissues, physical wear on joints, slowing down of cell division in the gut, lung, and immune systems, and so on are undoubtedly at least partially local and independent, and simply not due to any single underlying cause.  This means that even if there is a magic calibrator with some major delaying or slowing effect, as does seem likely given the different lifespans of various mammal species, slowing down many or all decays will just extend the period in which the person will have to suffer through many different, simultaneous forms of falling apart.
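
To see why removing one cause of death raises the lifetime toll of the others, consider a toy competing-risks simulation.  This is a minimal sketch: the causes and the constant annual hazard rates below are invented purely for illustration, not real mortality data.

import random

# Toy competing-risks simulation: a few independent causes of death, each with
# a constant annual hazard after age 60.  All numbers are invented.
HAZARDS = {"cancer": 0.010, "heart disease": 0.015, "dementia": 0.008}

def simulate(hazards, n=50_000, start_age=60, max_age=110, seed=1):
    random.seed(seed)
    deaths = dict.fromkeys(hazards, 0)
    total_years = 0.0
    for _ in range(n):
        age, cause_of_death = start_age, None
        while age < max_age and cause_of_death is None:
            for cause, hazard in hazards.items():
                if random.random() < hazard:
                    cause_of_death = cause   # this decay 'got' this person
                    break
            age += 1
        if cause_of_death:
            deaths[cause_of_death] += 1
        total_years += age
    return deaths, total_years / n

base_deaths, base_age = simulate(HAZARDS)

# Halve the heart-disease hazard, as a stand-in for a successful 'anti-aging' drug:
slowed = {**HAZARDS, "heart disease": HAZARDS["heart disease"] / 2}
new_deaths, new_age = simulate(slowed)

print(base_deaths, f"mean age at death {base_age:.1f}")
print(new_deaths, f"mean age at death {new_age:.1f}")

Run it and heart-disease deaths drop, but cancer and dementia deaths rise: the survivors live somewhat longer, and more of them live long enough to fall apart in the other ways.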

This is not a pretty picture, but based on history it is the nearly inevitable result of life-extension. Preventing pneumonia is, so to speak, a great way to increase cancer and dementia risks.  So will taking lifetime regimens of the newly discovered statin-replacement drugs that have recently been so highly touted.  So would the piling up of billions of elderly people in a resource-crowded world.

Facing up to the reality, in the absence of any solace from religious hopes, is a major challenge in our age of science.  But the real world intrudes nonetheless.  The grim reaper will get us in the end, of course.  And the grim reaper of our wallets will be fleecing us all the way there.

Thursday, November 5, 2015

Red meat makes a good, scary cancer story....but is it?

It's off again, on again:  don't eat processed meat, don't eat red meat, or you'll get colon cancer!! Eat fish (well, unless it has mercury) or chicken (unless it has salmonella), or 'the other white meat': pork (remember the billboards?). They're safe!

A few years ago we seemed to have been given some relief when stories suggested that red meat (beef) was OK after all (of course, the lives of the cows were awful, and eating beef meant you doped up on antibiotics, but at least it didn't give you colon cancer).

Recently, a statement (now apparently offline) released by the International Agency for Research on Cancer, a part of the World Health Organization, asserted that eating processed meat and red meat 'causes' cancer.  Actually, the report was a bit more nuanced than the headlines, but journalists have to make a living, no?

Bacon, Stock photo

In response to strong backlash, the WHO was quickly forced to 'clarify' their clarion call to vegetarianism -- here's a link to their Q&A on the subject.  They now acknowledge, or 'clarify', that what they had done was simply to add the meats to a list of known nasties that cause cancer.  Putting meat on a causal list is one thing, but dishing it out to the media is another, and a rather irresponsible way to play for publicity (of course, if the news media made an exception and actually did their job of being skeptics, this wouldn't have unwound as it did).

In any case, the bottom line was basically that even two strips of bacon a day increases your colon cancer rate by 18%.  That sounds like a whopping and terrifying difference!  The WHO put this in the same carcinogenic-substance category as asbestos and tobacco.  As they quickly clarified, that is in a sense a warning list, but the 18% figure is what got in the news and may have, at least temporarily, slammed the bacon and hamburger industry, if anybody still listens to the daily Big Warnings.  However, let's hold all cynicism for the moment, hard as that is to do, and look a bit more closely at what was said.

First, there seems little doubt that processed meats 'cause' cancer.  That doesn't mean an innocent-looking strip of bacon will give you cancer.  Instead, what it means is that various high quality studies have found a dose-response pattern in which higher or longer exposure levels earlier in life are associated with higher cancer incidence later on.  We know that correlation is not the same as causation, and that lifestyle factors are highly correlated.  Thus, for example, those in dire poverty don't eat tons of processed meat, and those who eat less salami also eat more brussels sprouts, take vitamins, don't smoke, lay off the double gin tonics.....and of course, go to the Ashram regularly to get your mind off the bacon you didn't eat at breakfast and the aftertaste of your dinner's brussels sprouts, and say a mantra to stay calm after you've given up everything that's fun.

Now, in the west, the lifetime risk of colon cancer is about 5%.  That is, if you tote up the probability of having cancer at age 40, 45, 50, .... 100, and if the 18% figure is credible, that risk is about that much higher in those who dose up on pastrami and burgers.  Actually, this was the estimate based on eating two strips of bacon, or the equivalent, every day.  Of course, by far most of these cancers occur in older people (over the age of 60, say).  That means that the risk figures mainly apply to you if you live to old age; for those who die earlier of other things, their actual risk turned out to be zero--they enjoyed their visits to McD's and the deli!  That's why smoking is, in a literal epidemiological sense, a preventive relative to colon cancer (smoking will kill you of something else first).  There's no joking about cancer, but the basic idea is that of those who live long enough, about 5% get colon cancer at some age.  Actually, while we don't know about meat-eating habits, risks have been declining in recent years in developed countries (and, I think, increasing in other countries as they westernize).

Eat meat and lower your risk!
At a baseline of 5%, an 18% increase means a lifetime risk of about 6%.  Now if you hog up even more, your risk will go higher, perhaps much higher.  But wait a minute.  How many people actually dish up so heavily on processed meat (including steak and burger)?  Surely some do.  In fact, we don't know exactly where the lifetime risk estimate of 5% comes from; if from a population sample, then it wouldn't have regressed out meat-eating, and the figure would already include meat-eaters. However, let's ignore all these potential confounding or confusing issues and just consider the 18% figure on its own, as a given, as risk differences between abstainers and sausage gluttons.

Now in modern countries with health care systems, one routine health-care procedure is regular colonoscopy in older adults.  There was a recent estimate that regular colonoscopy can prevent about 53% of colon cancers; the reason is that precancerous polyps are found and excised so they can't transform into cancer.  Actually, you can find even more dramatic estimates of the preventive effectiveness if you scan the web.  Likewise, you'll find many other lifestyle factors widely cited as having protective effects, including exercise, vitamins, eating vegetables, and the like.

Let's just do a bit of back-of-the-envelope numerology to make the point that if you're a bacon hog but have regular screening, get your exercise and all that, and you reduce your meat-elevated risk by 50%, then your net risk is around 3%, about half the 'average' of 5%.  One can surmise that if you stop your bacon fix, but then figure you're fine and don't do the other preventives, many of which are likely to be wanting in the meat-hog's normal lifestyle, then the actual effect of your 'healthy' baconless diet change will be to increase your cancer risk!
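
For concreteness, here is that back-of-the-envelope arithmetic spelled out.  This is only a sketch using the round figures quoted above, which are themselves rough; none of these numbers should be taken as precise epidemiology.

# Back-of-the-envelope figures from the text; all are rough and illustrative.
baseline_lifetime_risk = 0.05    # ~5% lifetime colon cancer risk 'in the west'
relative_increase = 0.18         # the reported 18% increase for daily processed meat
screening_reduction = 0.50       # ~50%, rounding the 53% colonoscopy estimate above

meat_eater_risk = baseline_lifetime_risk * (1 + relative_increase)
print(f"bacon hog, no screening:  {meat_eater_risk:.1%}")    # ~5.9%, i.e. 'about 6%'

screened_hog_risk = meat_eater_risk * (1 - screening_reduction)
print(f"bacon hog plus screening: {screened_hog_risk:.1%}")  # ~3%, below the 5% 'average'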

This is a lesson in complex causation and oversimplified news stories.  Processed meat may be a risk factor for colon cancer, but throwing irresponsibly simplified figures like raw meat to the news media leads to worse, rather than better information for the public.

So, as Hippocrates said, moderation in all things.  Eat your reuben (OK, yes, along with some broccoli).  But go one better than Hippocrates:  get scoped!

Tuesday, November 3, 2015

Unsuspected death rates and miners' canaries

One of the major problems with health-risk prediction, from genetic or even other evidence, is that risks are estimated from past data but predictions are of course only for the future.  This is not a minor point!  Predictions are predicated on the assumption that what we've seen in the past will persist.  To a somewhat lesser extent, predictions based on measured risk factors are also based on the further assumption that variables used to estimate risk are causative and not just correlated with the outcome.

An inconvenient truth is that the two, retrospective and prospective analyses, are not the same; their connection hinges on these assumptions, but the assumptions are by no means obviously true.  We have written about this basic set of problems many times here.

Now a new study, which we first saw reported here in the NY Times, shows that overall death rates generally have been dropping in the US, but the authors note "the declining health and fortunes of poorly educated American whites.  In middle age, they are dying at such a high rate that they are increasing the death rate for the entire group of middle-aged white Americans, [authors] Dr. Deaton and Dr. Case found.....The mortality rate for whites 45-54 years old with no more than a high school education increased by 135 deaths per 100,000 people from 1999 to 2014."

This is very different from other developed countries, for this particular age group, as shown by this figure from the authors' PNAS paper, and deviates from the generally improving age-specific mortality rates in these countries.


From Deaton and Case, PNAS Nov 2015

There are lots of putative reasons for this observation.  The main causes of death were suicides, drugs, and alcohol-related diseases, as shown below by the second figure from their paper.  There were mental illnesses associated with financial stress, opiate misuse, and so on.


From Deaton and Case, PNAS Nov 2015

There are sociological explanations for these results, results that other demographic investigators had apparently not noticed.  They do not seem to be mysterious, nor is there any suggestion of scientific errors involved.  Our point is a different one, based on these results being entirely true, as they seem to be.

When the future is unpredictable, to an unpredictable or unknowable extent
Why were these findings a surprise?  First, perhaps, because nobody bothered to look carefully at this segment of our society or at these particular subsets of the data.  To this extent, predictions of disease based on GWAS and other association studies of risk will have used past exposure-outcome associations to predict today's disease occurrences.  But they'd have been notably inaccurate, because the factors Deaton and Case considered were not included and/or because behavioral patterns changed in ways that couldn't have been taken into account in past studies.  There may of course be other causes that these authors didn't observe or consider that account for some of the pattern they found, and there may be other subsets of populations that have lower or higher risks than expected, if investigators but happened to look for them.  There is, of course, no way to know what data, causes, or subsets one may not have known about, not have measured, or just not considered.

That is a profound problem with risk projections based on past observations.  The risk-factor assessments of the past were adjusted for various covariates in the usual way, but one can't know all of what one should include.  There is just no way to know that and, more profoundly, as a result no way to know how inaccurate one's risk projections are.  But that is not even the most serious issue.

Much deeper is the problem that even if all exposures and behaviors of the study subjects from whom risk estimates were made by correlation studies had been fully captured, those estimates have unknown and unknowable relevance to future risks.  The reason is that the exposures of people in the future to these same risk factors will change, even if their genomes don't (and, of course, no two current people have the same genome, nor the same as anyone's in the studies on which risks were estimated).  Even if the per-dose effects were perfectly measured (no errors of any kind), the mixture of exposures to these factors will not be the same, and hence the achieved risk will differ.  There is no way to know what that mix will be.
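
A toy calculation makes the exposure-mix point concrete.  This is a minimal sketch; the per-group risks and population proportions are invented for illustration, and the per-group risks are assumed to be perfectly estimated, which is the most generous possible case.

# Same per-group risks, different future exposure mix => different achieved risk.
per_group_risk = {"exposed": 0.08, "unexposed": 0.04}   # assumed perfectly estimated
past_mix   = {"exposed": 0.30, "unexposed": 0.70}       # mix in the study population
future_mix = {"exposed": 0.60, "unexposed": 0.40}       # mix in tomorrow's population

def population_risk(mix):
    return sum(share * per_group_risk[group] for group, share in mix.items())

print(f"risk 'predicted' from the past study: {population_risk(past_mix):.3f}")   # 0.052
print(f"risk actually realized in the future: {population_risk(future_mix):.3f}") # 0.064

And that is only the knowable part of the problem: a genuinely new exposure, of the kind described next, isn't even a row in the table.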

Worse, perhaps by far, is that future risk exposures are unknowable in principle.  If a new drug for treating people under financial stress, or a new recreational drug, or a new type of cell phone or video screen, or a new diet or behavioral fad comes along, it may substantially affect risk.  It will modify the mix of existing exposures, but its quantitative effect on risk simply cannot be factored into the predicted risks because we can't consider what we have no way to know about.

In conclusion
The current study is a miners' canary in regard to predictions of health risks, whether from genetic or environmental perspectives.  This particular study is retrospective, and just shows the impact of failure to consider variables, relative to what had been concluded (in this case, that there has been a general improvement in mortality rates).  The risk factors and mortality causes reported are within the general set of things we know about, and the study in this case merely shows that mistakes in using the data and so on--not any form of cheating, bad measurement, etc.--are responsible for the surprise discovery.  These things can be easily corrected.

But the warning is that there are likely many factors related to health experience that are still not measured, but should be, and that there are also an unknown number that cannot yet be measured, for the simple reason that they do not yet exist.  The warning canaries have been cheeping as loudly as they can for quite a while, both in regard to environmental and genomic epidemiology.  The fault lies not in the canaries, but in the miners' leaders, the scientific establishment, who don't care to hear their calls.