Monday, August 31, 2009

Gutting it out

Only a small fraction of bacteria can be grown in laboratories; apparently nobody understands what they need in their environment well enough. This can be a problem for microbiologists trying to identify the bacterium causing a new infectious disease, but it also means that it has not been possible to know all of the little bugs to which our bodies are willing or unwilling hosts. The same is true for other animals, wild ones as well as our pets and farm species. We say 'willing or unwilling' because, of these microbes, some are presumed to be harmless commensals, and others are necessary to our survival (such as the E. coli in our intestines, which we depend on for digestion; the same goes for other mammals, such as grazers like cows and goats that need bacteria in their rumens to digest cellulose).

One of the characteristics of bacteria is that they can exchange structures (e.g., plasmids) that carry actively used genes. This is where many if not all of the genes are found that allow bacteria to resist nasty things in their environment, such as the antibiotics that other organisms have evolved to protect themselves against bacterial attack. A vulnerable strain of bacteria can thus acquire a gene that makes it antibiotic resistant. This is of course a very important current problem in farm animals and humans, driven by the amount of antibiotics we ingest. And farm animals are reservoirs of pathogens that can affect humans, so humans and the animals they live with are part of large bacterial-host ecosystems.

Since we can't culture most bacteria, our knowledge of who's where, and of the characteristics of many bacterial species, has been quite limited. But DNA sequencing technology has opened the way to identifying our visitors. By extracting all the DNA from a sample of some tissue, fragmenting it, and sequencing the fragments, their owners can be identified and their genetic makeup and function characterized. This is done by comparing the sequence fragments against all sequences currently known (in GenBank). Even if we don't find an exact match, we can find a known species that is close enough to our tissue-sampled sequence to identify its place in the bacterial tree of life. Antibiotic resistance genes can be identified in the same way, too, since many are already known.
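
The matching step is simple enough to caricature in a few lines of code. This is only a toy sketch of the logic--shared short subsequences standing in for a real search tool such as BLAST--and the sequences and names below are invented, not real GenBank entries.

```python
# Toy sketch of metagenomic matching: assign a sequenced fragment to the
# closest known reference by shared k-mer content (a stand-in for a real
# search such as BLAST). All sequences and names here are invented.

def kmers(seq, k=8):
    """Return the set of k-length substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_match(read, references, k=8):
    """Return (name, similarity) of the reference sharing the most k-mers with the read."""
    read_kmers = kmers(read, k)
    scores = {name: len(read_kmers & kmers(ref, k)) / max(len(read_kmers), 1)
              for name, ref in references.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Hypothetical reference fragments standing in for database entries.
references = {
    "E. coli (fragment)":     "ATGGCGTTAACCGGATCCGTTAAGGCTGACCTTGGA",
    "B. fragilis (fragment)": "ATGCCGTTATCCGGATAAGTTCAGGCTGACCATGGA",
}
read = "GCGTTAACCGGATCCGTTAAGGCTGACC"   # a made-up sequenced fragment
print(best_match(read, references))     # reports the closest reference and its similarity
```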

A new paper in Science (Functional Characterization of the Antibiotic Resistance Reservoir in the Human Microflora, Sommer et al., Aug 28, 2009, 1128-1131) reports on a detailed analysis of the antibiotic resistance genes found in the microbiome, the resident bacteria, of healthy individuals. The study was done in an effort to learn more about how antibiotic resistance genes are acquired by pathogens that infect humans.

This study found that, indeed, many of the resistance genes in multidrug-resistant bacteria were acquired by lateral gene transfer--that is, the gene hopped into the pathogen on a plasmid from a different bacterium. Bacteria in the wild usually live in large communities of many different species, where they can promiscuously exchange plasmids, so antibiotic resistance can spread rapidly.

The authors wondered whether the extensive history of antibiotic use in humans might mean that the microflora in the human gut are a reservoir of resistance genes readily transferable to pathogens. They isolated DNA from saliva and fecal samples from two healthy individuals who had not taken antibiotics for at least a year, and sequenced the fragments of bacterial DNA that conferred resistance to the antibiotics they tested.

They found some, but by no means complete, overlap between the two people who were sampled, and that nearly half the resistance genes they identified in this way were identical to resistance genes in human pathogens. This doesn't tell them whether gene transfer went from the commensal microflora to the pathogen or vice versa, but Sommer et al. suggest that it's quite plausible that our gut microflora are a reservoir of resistance genes just waiting to jump into pathogens that are now controllable with drugs. The remaining genes, although not yet found in pathogens, were functional when transferred to E. coli, suggesting that if they do eventually find their way into pathogens they will be active, although there currently seems to be a barrier to lateral gene transfer that isn't yet understood.

The authors conclude:
Many commensal bacterial species, which were once considered relatively harmless residents of the human microbiome, have recently emerged as multidrug-resistant disease-causing organisms. In the absence of in-depth characterization of the resistance reservoir of the human microbiome, the process by which antibiotic resistance emerges in human pathogens will remain unclear.
This study provides interesting ecological information about bacterial dynamics, and of course warns us that antibiotic resistance may be more complex, challenging, and difficult to predict than we have thought. It's evolution in action.

Thursday, August 27, 2009

Just 6 teaspoons of sugar is the medicine to go down

The American Heart Association this week issued widely publicized recommendations about how Americans are drowning in sugar, and they suggest what to do about it. People (or is it just Americans?) should cut down from an average of 22 teaspoons a day to just 6 teaspoonfuls. Well, that's for the girls: guys get to have 9! This remarkably scientific precision is being reported widely in the media (e.g., CNN and the New York Times).

We wonder how long it will take some entrepreneurial marketer to develop a Sugarometer, that we can carry around to measure our daily dosages. Of course, there is a bit of a problem since most of the sugar we consume is already mixed into something else--like, almost anything else that we eat. So the Sugarometer will have to have some way to scan a product and work out the dose.
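
In that spirit, here's a toy sketch of what the guts of such a gadget might look like. The roughly 4 grams (about 16 calories) per teaspoon used here are the usual approximations behind the 100- and 150-calorie caps; the example product is invented.

```python
# A toy 'Sugarometer': convert the added sugar on a nutrition label into
# teaspoons and compare with the suggested daily caps. The 4.2 g and
# 4 kcal/g figures are standard approximations; the example is invented.

GRAMS_PER_TSP = 4.2               # roughly 4 grams of sugar per teaspoon
KCAL_PER_GRAM = 4                 # calories per gram of sugar
DAILY_CAP_TSP = {"women": 6, "men": 9}

def report(grams_of_added_sugar, sex="women"):
    tsp = grams_of_added_sugar / GRAMS_PER_TSP
    kcal = grams_of_added_sugar * KCAL_PER_GRAM
    return (f"{tsp:.1f} tsp ({kcal:.0f} kcal) of added sugar; "
            f"suggested cap for {sex}: {DAILY_CAP_TSP[sex]} tsp")

# A sweetened soda typically lists on the order of 39 g of sugar per can.
print(report(39, "men"))
```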

The caloric overdose problem in America, and in other wealthy countries, is a serious one. It is and will be a major burden on health care systems (even in countries like our own benighted land that are without universal health insurance). But the appearance of precision of this new dietary recommendation is rather silly. People are different in size, activity levels, genotypes, personal priorities (some still smoke!), and even self-image.

But we can't seem to shake our romance with technology, or its appearance. Things as simple and intuitive as dietary excess vs sedentary lives, with their heavily laden cultural and economic elements, have to be made precise. Even with a disclaimer that this is only a 'rough' estimate, how rough is it actually? Is it even meaningful? For example, do magazines and Hollywood determine the body shape that is considered OK? Are there really solid health data that are as specific as this report?

We regularly try to point out the problem of assuming a high predictability of phenotypes by genotypes. Here, it's somewhat the opposite, of assuming a high predictability of phenotypes by environments. It's the same kind of issue.

These studies give a misleading impression of human variation, first by standardizing what is probably not standard, and second by assuming oversimplified ideas about causation. It would not surprise us to see GWAS done to find genetic effects per-teaspoonful-consumed.

In a technical sense such variation surely exists, and some of us are more vulnerable than others to a given dose. However, the cost and patina of technical sophistication serve to distract from the real public health message, one that has been known since Hippocrates: moderation in all things. You don't need million-dollar grants (or professors) to know that!

Wednesday, August 26, 2009

Do ants think?

A paper in the August PLoS ONE (Ants, Cataglyphis cursor, Use Precisely Directed Rescue Behavior to Free Entrapped Relatives, Nowbahari et al.) reports rescue behavior in ants. That is, they actively attempt to save an entrapped ant by digging around it and pulling on its limbs. Previous reports of ant digging behavior have been dismissed as an alarm response rather than rescue, but the authors of this paper beg to differ.

For this study, the researchers exposed study ants to six different situations, as described here.
Using a novel experimental technique that binds victims experimentally, we observed the behavior of separate, randomly chosen groups of 5 C. cursor nestmates under one of six conditions. In five of these conditions, a test stimulus (the “victim”) was ensnared with nylon thread and held partially beneath the sand. The test stimulus was either (1) an individual from the same colony; (2) an individual from a different colony of C cursor; (3) an ant from a different ant species; (4) a common prey item; or, (5) a motionless (chilled) nestmate. In the final condition, the test stimulus (6) consisted of the empty snare apparatus.
Only active nestmates, ants from the same colony, elicited rescue behavior. Others evoked aggression or no reaction. The authors conclude that a struggling nestmate emits a colony-specific pheromone to which rescuing ants respond.
Moreover, C. cursor ants are able to engage in highly precise behavior directed toward the inanimate object that has entrapped their nestmate. Thus, our findings show that rescuers somehow were able to recognize what, exactly, held their relative in place and direct their behavior to that object in particular, demonstrating that rescue behavior is far more exact, sophisticated and complexly organized than previously observed. That is, limb pulling and digging behavior could be released directly by a chemical call for help and thus result from a relatively simple mechanism. However, it's difficult to see how this same simple releasing mechanism could guide rescuers to the precise location of the nylon thread, and enable them to target their bites to the thread itself.
It's the last sentence of this paragraph that interests us. The thread is obviously not emitting any chemical signal, and the ants are not 'pre-programmed' to know that the danger is connected to the thread, or that they could break it with their mandibles. Anyone who has observed ants for any time at all, including Darwin, must surely be impressed by their sophisticated cognitive, decision-making abilities. They work together to move barriers from their trails, they figure out ways around barriers they can't move, they cooperate to bring prey back to the nest, and so on.

Darwin himself said it very well in Descent of Man (1871):
...the wonderfully diversified instincts, mental powers, and affections of ants are notorious, yet their cerebral ganglia are not so large as the quarter of a pin's head. Under this view, the brain of an ant is one of the most marvelous atoms of matter in the world, perhaps more so than the brain of man.

It's easy to dismiss the rescue behaviors described in this PLoS paper as 'only' biochemical. But cognition and sophisticated decision-making are evolutionarily ancient. And, what are they, really, but the interaction of complex surface molecules and their receptors? We tend to privilege human thought capabilities, but in fact our brains are just signals, ligands and surface receptors, too.

Monday, August 24, 2009

Everything's connected to everything else

A study out of Indiana University, described on the BBC website today, reports that married people have better cancer survival than unmarried people or those going through the stress of divorce, and so on. This is apparently not related to the level or quality of treatment.

We note this report not because it is particularly singular on its own, but because it is one of countless studies, including studies of placebo effects, that show the interconnectedness of our physiological systems. 'Psychosomatic' effects, often sneeringly denigrated, clearly occur. Genetically, there are connections between neural, immune, and endocrine systems. These may be the flags that show why such otherwise non-physical effects exist.

For research the implications can be profound. For many reasons, practical, cultural, and historical, reductionist approaches are at the heart of the current modus operandi of science. Maybe there are other methods, but for 300 years or so these have transformed science and technology and are deeply ingrained into our thinking and science training.

However, in this and many other ways, reductionist methods may draw attention too far down the causal chain. It could focus us on trying to isolate causes that can't be isolated relative to the function we're studying. A gene can be studied on its own perhaps, but if its interactions with other genes are what we're studying, our methods need to reflect that.

It's common and easy to criticize reductionism, and we're not doing that here, per se. However, old habits die hard. We've known ever since Darwin that all life goes back to a singular beginning, and genomic sciences show that rather clearly in terms of DNA mechanisms, sequence variation, and so on. What recent sequence data have shown is that mechanisms long ago established can be highly conserved over long evolutionary times. Traits long thought to be independently evolved are regularly found to be homologous in various ways, both deep and subtle.

Thus our attempt to parse systems into discrete modular subsets is ultimately doomed to be inexact. When and how a study of, say, immunogenetics, can be done on its own without considering, say, neurobiology, is not clear. We can't design single studies to cover everything, and reductionism is needed in any kind of study (you can't study genes without studying genes!). Similarly, holism is a term that often sounds clearer than it really is.

Evolution didn't care: what worked proliferated. Evolution by phenotype sorts organisms by their results, not caring about their specific causes. Since life today is due to common ancestry, the problem we see is in no way a surprise. But there probably should be more explicit and serious studies of how and where to 'reduce' and how to integrate.

Sunday, August 23, 2009

The religion-evolution 'debate': such an egocentric waste of time!

We have tried not to get immersed in the so-called debate or controversy over evolution vs religion. But one can only bite one's tongue so much before being pushed over the edge. An op/ed piece by Robert Wright in today's New York Times did the trick.

This piece suggests that advocates of religion and of atheism, in the context of evolution, should back down and realize that they can both be right. The usual cast of 'experts' is cited: those in prominent academic institutions or who have written outspoken books or treatments of the subject, which the media seem to accept as evidence of their expert knowledge. But that's all bunk!

The debate or controversy (we'll save keystrokes by not continuing to use quote marks around the terms) is hot air, opinion masquerading as knowledge. The problem is simple, and except for details of new technical knowledge it has hardly changed an iota since it erupted right after Darwin's Origin of Species.

One can explain almost anything one wants, after the fact, by attributing it to evolution. This is especially true if you equate evolution to natural selection, which is a dogmatic, almost biblical kind of freezing of Darwin's 19th century thought into stone. Whatever is here today and appears organized (often equated to mean purposeful) is assumed to have been molded by selection, and since it's here and its current function is or can be guessed at, we can go further and equate that function to what was favored by selection over eons of time. Within this framework, you can then say that organisms have a purpose, a rationalized bastardization of the word that makes you seem sensitive to your (rational) religious audience. It's also a fake escape from what people really, in the eerie dark of night, mean by 'God', which has to do with solace, usually personified and able to interact in the world, such as by helping cure your ailing child.

The common alternative is to try to rescue real religion by asserting that there is a God (no quotes here, because this is the real McCoy), a thinking personal being that (who?) started the world in motion using natural selection to produce His ultimate object, 'intelligent' beings (us humans, naturally, but why not also countless other species, why not even intelligent plants?).

Various views called deism, monism, and the like have God starting it all and then letting natural selection take its course. But it's trying to have your cake and eat it, too. For example, the rationalizer's 'natural selection' is nothing of the kind at all! Selection is, by nature, a sieving of randomly arisen variation via competition for reproductive success. But if God knew what would happen, then this is as much a sham as a movie or a rigged carnival game, since nothing is really random or competitive, it only looks that way after the fact.

Evolutionary psychology, the 'solution' to the religion/evolution debate, as argued in the NYT op/ed piece today, is basically bunkum squared (something we wrote about here, and David Brooks here), since it can argue just about anything you could imagine and if you can find it today you can make up a selective reason. Want the world to be cruel, to justify inequality like capitalism or the brutality of conflict? Easy! Survival of the fittest! Want to explain compassion, goodness, and even ideas about beneficent God? No sweat! Getting along is just a way of competing successfully! Want to explain even religion itself? Piece of cake! Make up a God gene, that produces neural illusions, and say that led its bearers to screw more successfully or be given resources (euphemistically called 'alms')!

Hey, these scenarios, couched in appropriately pious academic terms, will make you seem deeply insightful, whether you want to appeal to the religious, the uncertain, the militantly atheistic, or just sell books and be quoted in the Times. This is the bomb that Darwinism dropped on society. Its destructiveness persists after 150 years--and it's a 'dangerous idea' not necessarily because it's true, but because after-the-fact evolutionary Just-So stories can always be constructed to have the appearance of being true--whether or not the idea is in fact true, which it may well be.

All sides in this debate are soaked with arrogance and hubris, often laced with intentional insult and cruelty, and they have been ever since identically vituperative, self-assured arguments first erupted right after Darwin. The hubris comes from knowing you're right. It's instructive that no amount of data--and there's been a huge amount--can change either side's belief in its beliefs.

The blunt facts are that if you want to take a pure materialist's view, science is manifestly a powerful way to account for the world--at least for those aspects of the world that it can account for. Scientific knowledge eats away systematically at material problems, generating transformations of society (and huge fortunes) along the way. It makes many believe that the material world is obviously all there is.

On the other side, fear, hope, esthetics, and subjective experience lead many to believe there must be more. Call it God if you want, but you then have to fish around for His/Her/Its/Their properties, settling on what gives you the most comfort, more or less in the same way we choose the style of music we like best. As with natural selection, you can always post-construct rationales and reasons for your choice. But you're playing games with yourself (even if you're right, and there really is some sort of God(s), because you have no way to know if you are).

The experts are no such thing. They're like everyone else: they have opinions. They're working angles that they see in their interests to work. They enjoy their feeling of superiority (evolutionary psychologists can, we're sure, tell you why natural selection led us to be that way). Psychologically, God and Evolution play similar roles in the human mind.

This is not the time or place to try to go into the subtle aspects of the illusion of a driving, Darwin-style, force-like natural selection (we do that in our book). But selection, which certainly seems to exist, is part of a spectrum, with chance at one end and strong causation at the other. Afterwards, what happened can always be defined as the result of selection, and seem organized in a way that even evolutionists describe in designer-like terms that are perilously close to those of religion.

We humans like to be wise, we like to understand our world, we like self-comfort, we like media attention, and we like to be right! But in this case, it's all sound and fury signifying nothing. If you want to believe, go ahead and assert whatever makes you feel good about 'God'. If you want to disbelieve, go ahead and assert whatever makes you feel good about 'evolution'. If you want to have it both ways, you can do that too -- but only by suspending disbelief.

**Update** Jerry Coyne over at Why Evolution is True has much the same argument, with a very patient discussion of Wright's specific points.

Friday, August 21, 2009

Permanent ink

Calamari lovers, take a look at an interesting evolutionary story at the BBC web site about a fossil squid. This little gem is an estimated 150 million years old, yet seems, in great detail, to be similar to today's appetizers (we like them sauteed with chili peppers, olive oil, s&p, on a bed of fresh lettuce).

This particular fossil was found when a rock from a dig in Wiltshire, England, was cracked open. The ink sac still contained ink, and scientists were able to draw a picture of the specimen with it (drawing shown here, from the BBC web site). Many more such fossils were also found, including other squid species; researchers suggest that the squid were drawn to the area to feed, but were poisoned by algae when they arrived, and conditions were right for their preservation.

This find is one of many, long known in paleontology, that show remarkably conserved traits--again, down to the finest detail, for many different kinds of plant and animal (and bacterial) species. Indeed, stromatolites are multi-billion year-old beds of what appear to be bacterial films that look remarkably modern.

Such finds raise the obvious, but again not new, question of stasis in evolution. How can evolution, so typically portrayed as a relentless race to get ever better, be responsible for not changing at all? A common metaphor is known as the Red Queen hypothesis, from Through the Looking Glass: the Red Queen says to Alice, as she breathlessly pulls her by the hand, that they must always run as fast as they can to stay in the same place.

If you look at squid phylogeny (a fine web site for such data is maintained by Sudhir Kumar and Blair Hedges), you'll see that long-conserved creatures like flies, horseshoe crabs, many kinds of plants and much else--including squid--have DNA sequence relationships that make sense. The branches between current species roughly correspond to their ages of separation estimated from fossils and other kinds of taxonomic studies. They accumulate molecular differences in the usual way (though of course the details vary considerably and can be argued about).

But how does selection, if that's the cause, maintain such stasis? This question has been asked and answered implicitly or explicitly in various ways. Large populations essentially swallow up new variants like the sea. Selection removes the bad ones, but the good ones arise amidst such a huge population that they can basically not 'take over' the sequence for their respective locus.
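
To put rough numbers on that intuition, here are the standard textbook results from population genetics, offered only as a gloss on the point, not as anything from the squid work itself.

```latex
% Standard fixation results (Haldane, Kimura), stated for a diploid
% population of size N; a rough gloss, not the paper's own calculation.
% A new neutral mutation is eventually fixed with probability
P_{\text{fix}}(\text{neutral}) \;=\; \frac{1}{2N},
% and, if it does fix, it takes on the order of 4N generations to do so.
% A new beneficial mutation with selective advantage s fixes with probability
P_{\text{fix}}(\text{beneficial}) \;\approx\; 2s \qquad (s \ll 1),
% so even a variant conferring a 1% advantage is lost by chance about 98% of the time.
```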

Stephen Jay Gould made much of 'punctuated equilibrium', initially writing as if it were a dramatic new theory: change eventually occurs in a small subpopulation in some local area not documented by the fossil record. The advance then spreads throughout the species, giving the illusion, in the fossil record, of having appeared suddenly. Sewall Wright actually developed formal theory for this as a basic means of evolution, in contrast to Fisher's ideas, way back in the 1930s. Wright envisioned major adaptive changes as being mainly possible in small isolated subpopulations, where new or helpful ('fit') variation would not just be swamped and absorbed by the species at large.

A key fact may be that today there are many species of squid, and their traits vary both within and between species. So squid as a group clearly have evolved! But let's go back 150 million years and imagine the future course of events. If environments, like the general nature of the ocean bed, stay the same in many ways, there will be change from generation to generation, but the conservative nature of selection will usually eliminate or work against squids too different from their fellows or their habitat. Small, chance changes in DNA sequence and/or function will always occur, however, and local as well as global populations will accumulate variation. New or modified squid species will arise (and that had already happened by 150 million years ago, as we noted above).

In such circumstances, by chance or the gentle molding of selection, some descendants will stay pretty much the same, while others will either die out or differentiate into new species with different morphology or adaptations. So when we find a fossil and search among today's many kinds of squid, it's no surprise that we find some that didn't change.

This is a kind of problem in conditional probability, related to the degree of surprise we should feel about a given observation, or to the multiple-testing phenomenon in statistics. If you condition on having searched through an open-ended array of possibilities, the 'significance' of any one match is much less (the p value much greater) than if you had looked at only a single instance.
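
A toy calculation makes the point; the probabilities below are invented purely for illustration.

```python
# The search-space effect: the chance that at least one of many surviving
# lineages happens to look unchanged. All numbers here are invented.

def prob_at_least_one_static(p_static, n_lineages):
    """P(at least one of n independent lineages stays essentially 'the same')."""
    return 1 - (1 - p_static) ** n_lineages

# If any single lineage had only a 2% chance of changing little over
# 150 million years, searching across 200 lineages makes finding at
# least one 'living fossil' nearly certain.
print(prob_at_least_one_static(0.02, 1))    # about 0.02
print(prob_at_least_one_static(0.02, 200))  # about 0.98
```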

We might be surprised only if we held to a strictly Red Queen view that evolution always forces creatures to innovate or die; on that view it is less clear how selection could preserve identity so precisely. And, of course, some major or minor lineages don't stay static, which is again what one would expect (dinosaurs didn't, though birds arose from among them). These are problems associated with the kind of retrospective illusion, due to the compression of the deep past, that we write a lot about in our book. We think the Red Queen is misleading Alice!

Whatever the explanation, one lesson is to be careful how we interpret what is written in stone (or with permanent ink). And fried calamari are great, too!

Thursday, August 20, 2009

Francis Collins announces his agenda

There's a report in this week's Nature of Francis Collins's announced agenda for the NIH ("Collins sets out his vision for the NIH", Elie Dolgin, Nature, published online August 18, 2009). Collins has defined five areas of 'special opportunity':

Collins, who headed the National Human Genome Research Institute in Bethesda from 1993 to 2008, made the case that investment in biomedical research creates jobs and offers quick economic returns. Looking ahead, he said that the agency should devote more money to "five areas of special opportunity". First, to applying high-throughput technologies in genomics and nanotechnology to discover the genetic bases of diseases including cancer, autism, diabetes and neurodegenerative disorders. Second, to developing diagnostics, preventative strategies and therapeutic tools through more public–private partnerships. Third, to reining in the costs of health care with comparative-effectiveness research and personalized medicine. Fourth, to expanding research into diseases affecting the developing world. Finally, to increasing budgets and investing in training and peer review to achieve a predictable funding trajectory for the research community.

So, this answers the question we posed here and here about how much emphasis Collins will give to genetics -- it's not a surprise that his first priority is genetics and technology, as his lenses are made of DNA. But, it also answers another question we posed about whether he will also emphasize health issues related to poverty, and apparently neglected diseases are on his agenda as well.

That's a good thing. Francis also tried to reassure those who had raised questions about his religious beliefs. Since he is a master at convincing Congress to give him more (and then more) money, and since he identifies a range of appropriate projects, it seems very unlikely that religion will be any sort of problem or issue with his leadership of NIH.

It's easier to predict now how skewed toward genetics his agenda will be. He is a firm believer in the molecular approach to disease: not just that genetic mechanisms are involved in disease--which they surely are, in essentially everything except trauma and lightning strikes--but also that natural variation in genotypes is a major factor in variation in individual disease occurrence.

It's this that we think is questionable, especially in regard to the common diseases that are mostly, and manifestly, more affected by environments than by genotypes. It's clear that a lot of money will be spent, and we think wasted, on 'personalized genomic medicine' in relation to these disorders. Indeed, he has a book coming out next year on personalized medicine.

Meanwhile, many diseases, and small subsets of almost all diseases, truly are genetic, in the sense that genetic variation hugely increases risk and lifestyle factors affect that risk only in minor ways. We think that the best way to invest in genetics is to mount an appropriately genetic assault on these. Hopefully he'll see that that is done.

But, he'll be an effective advocate for the NIH, and one has to wish him well, especially if he works to help implement universal health care.

Turing the world of science and justice

The Independent ran a story on Tuesday (Aug 18) about a petition being signed by many in Britain to demand an apology from the British government for its treatment of the late Alan Turing. Richard Dawkins has weighed in as well. Turing was famous as one of the leaders in developing the system by which the British (and the Americans) were able to read much of the Axis military traffic during WWII, using electromechanical computing devices to simulate the settings of the German Enigma machine.

Turing was also a founder of the idea and logic of programmable computers. A Turing machine is an abstract, general-purpose model of computation, a theoretical account of how problems of essentially unlimited scope can be solved step by step. Many of its principles have been put into practice.

Relevant to this blog is Turing's idea of 'reaction-diffusion' systems, a mathematical model for repetitive patterning such as is widespread in life (hair, leaves, fingers, vertebrae, etc., etc.). Turing showed how complex patterns can be produced by simple interactive processes. You don't need a separate gene for each hair, and tinkering with the process can easily lead to differently patterned hair. We now know from extensive research by many, many of us that signaling systems work this way, and this is how organisms of all kinds are assembled (a major theme in our book, The Mermaid's Tale, that we referred to as 'complexity made simply').
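
For readers who like to see the trick in action, here is a minimal one-dimensional reaction-diffusion simulation. It uses Gray-Scott kinetics rather than Turing's own 1952 equations, and the parameter values are simply illustrative choices from a known pattern-forming regime.

```python
# A minimal 1-D reaction-diffusion sketch in the spirit of Turing's model.
# Gray-Scott kinetics, not Turing's original equations; parameters are
# illustrative values from a pattern-forming regime.
import numpy as np

n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.040, 0.060    # diffusion and reaction constants
u = np.ones(n)                              # 'substrate' concentration
v = np.zeros(n)                             # 'activator' concentration
u[n//2 - 5 : n//2 + 5] = 0.5                # a small local perturbation...
v[n//2 - 5 : n//2 + 5] = 0.5                # ...to seed the process

def laplacian(x):
    """Second difference with periodic boundaries."""
    return np.roll(x, 1) - 2 * x + np.roll(x, -1)

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# Print a coarse sample of v; in a pattern-forming regime it shows a series
# of separated peaks (repetitive structure) rather than a flat profile,
# even though no peak was individually 'programmed'.
print(np.round(v[::10], 2))
```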

Turing's ideas, modified for practical molecular developmental genetics, are now more widespread than awareness of his 1952 paper itself, and most users of the logic of the idea are probably unaware of the paper. But it was a factor in late 20th-century developmental biologists' thinking, expressed in terms like 'morphogens', morphogenetic fields, and the like.

We should be careful about lionizing anyone from the past. In each area in which Turing made major contributions, he had major antecedents. In the late 19th century William Bateson used then-popular field theory and concepts of wave interference to explain the developmental processes in embryos that are responsible for repetitive traits.

In computer science there were Babbage, Hollerith, and others, not to mention the Jacquard loom, and at Bletchley Park the Colossus, a programmable vacuum-tube machine reading punched tape, built by Post Office engineers for German code-breaking.

And Turing was by no means the only major figure to work on Enigma. In fact the code had first been broken well before the war by Polish cryptographers, who not only gave their knowledge to the French and British but had also invented the machine--the 'bomba', forerunner of the British 'bombe'--that was the kind of mechanical simulator referred to earlier. And it was Gordon Welchman who introduced a particular modification to the bombe, called the 'diagonal board', that enabled decoding-simulation of the more complex Enigma traffic the Germans introduced later in the war. (We know some of this history because we visited Bletchley Park earlier this summer--when we took the pictures in this post--and subsequently read about it.)

The point is simply to place things in proper context. None of us work entirely without history. Even Darwin needs such tempered recognition. This is relevant to the petition mentioned by The Independent. It refers to the fact that Turing, besides being an odd duck in many ways, was homosexual. Today of course that would be no big deal. But that's today. At the time, even in the England he helped save from the Nazis, being gay meant being in the slammer for a number of years (it was like that here in the good ol' enlightened USA, too).

By fluke, Turing was discovered to be gay by the powers-that-be (see the story for details), and he volunteered to undergo hormone 'therapy' rather than go to jail as his punishment. It had negative effects on him, and for that and who knows what other reasons, Turing bit the poisoned apple -- literally -- and died at age 41.

The petition is for an:
apology from the Prime Minister Gordon Brown, recognising the "consequences of prejudice that ended his career". More than 700 people have signed a petition started by the leading computer scientist John Graham-Cumming on the Downing Street website, including gay rights campaigners, politicians and scientists.
We wholly sympathize and would certainly sign this petition if we were in the UK. However, it strikes us that that's not the right thing to do. Turing actually broke a law that he knew was on the books at the time. He was not the only victim of that law, and the majority of his contemporaries in Britain at the time would have considered it a just law, and hence would not have considered him a 'victim.'

Standards have changed, and Turing is certainly owed an expression of regret that times were then as judgmental as they were--and we should try to reduce such things today. But one shouldn't have to be a famous genius to warrant retrospective empathy. To apologize to Turing alone is to say, selectively, that the way he was treated was regrettable because he helped win the war and invented modern computing theory.

In a way that demeans the apology. We owe expressions of regret about our human failings firstly to ourselves, so we won't repeat them, and then to everyone who fell victim, chimney-sweep and code-breaker alike. We should be touring the entire world, not just Turing's world of science, in seeking out injustice.

Wednesday, August 19, 2009

Losing your mind? Yes, if you believe these results

The British newspaper The Independent last week published a great study in confounding and in the interpretation of results based on prior assumptions--exactly the kind of thing that we think is a problem both within science and for those who write about it for public consumption.

The headline is "Losing your mind? The answer is in the mirror." The article reports results of a Scottish study of hundreds of 11 year-olds given an IQ test in 1932, and a recent follow-up of surviving 83 year-olds. Apparently, men with asymmetric faces--but not women--are more apt to lose cognitive ability as they age. Why? The article says
scientists believe that it could be explained by the idea that a good set of genes for facial symmetry may be linked with an equally good set of genes for brain preservation.
The idea that facial symmetry indicates 'good' or 'healthy' genes is not new. Indeed, the sociobiological explanation for beauty itself is that it indicates good genes, and a healthy and long life (which apparently means that beauty is not in the eye of the beholder), so that beautiful women are more likely to be chosen by men than women who are less beautiful. Further,
[t]here is a growing body of emphasis suggesting that the "fitness" of the genes involved in the development of the embryo in the womb, and of the body during childhood, can be measured by analysing the left-right symmetry of the body.
But really, this is just a wild guess that raises a lot of questions. For example, how did these 83 year old men with asymmetric faces get to be 83 if it's symmetry that's linked with genes for longevity? And, what about the fact that throughout human history, mainly spent in small bands with limited mate choice, nearly everyone mates, even people with asymmetric faces? Do men have a private Y-linked 'good set of genes for brain preservation'? Or facial symmetry?

And for decades there have been suggestions that those who have erratic, different, odd, or unusual behavior may in fact be more appealing as mates--may seem more charismatic, for example. And senile dementias may be reflected at least to some extent by behavioral differences earlier in life, if the nun study is to be believed. Whether these ideas are correct has been controversial, but they show the subtleties that may be involved in 'fitness'.

(A lovely little report published in the New York Times on August 4 alludes to a related issue, the ethics of cosmetic surgery in Darwinian terms. The paper from which the article draws is published in The Journal of Evolution and Technology, and suggests that the choice to have cosmetic surgery is unethical because beauty is an indicator of good genes, and attaining beauty unnaturally cheats potential mates into believing you've got good genes. Or, perhaps, does it reveal that you have good IQ genes, that lead to your deviousness? This paper could be the subject of a whole post--except that it's too ludicrous to give even that kind of credence to, starting with the opening sentence ("Evolution continually selects the best genes to proliferate the species.").)

Could there be alternative explanations for the Scottish findings? Could the small sample size (219 of the original much larger sample) mean the results aren't replicable? They aren't in females, although the study authors suggest this is because these women aren't yet old enough to be losing cognitive ability. Their argument is rather convoluted, though--people die on average 4 years after developing dementia, women are older than men when they die on average, women with asymmetry in this study don't yet have dementia, thus they could still lose mental capacity and not die for at least another 4 years. Could be. But it could just as well be that the correlation between asymmetry and dementia is a false one.
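
As an illustration of how easily a sex-specific 'signal' can appear in samples of this size, here is a small simulation using nothing but random numbers; only the rough group sizes echo the study, and everything else is invented.

```python
# How often does pure noise produce a nominally 'significant' asymmetry-
# cognition correlation in at least one sex? Entirely simulated data;
# the ~219 subjects split into two groups loosely echoes the follow-up
# sample, nothing else is modeled on the actual study.
import random, math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

random.seed(1)
trials, hits = 2000, 0
for _ in range(trials):
    found = False
    for group_size in (110, 109):                    # 'men' and 'women'
        asymmetry = [random.gauss(0, 1) for _ in range(group_size)]
        decline = [random.gauss(0, 1) for _ in range(group_size)]
        r = pearson_r(asymmetry, decline)
        if abs(r) > 1.96 / math.sqrt(group_size):    # crude two-sided 5% cutoff
            found = True
    hits += found
print(hits / trials)   # roughly 0.1: a 'sex-specific' finding in about 1 run in 10
```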

What about the genetic interpretation? Are there genes 'for' symmetric faces? Are there genes 'for' longevity? As far as we know, such genes haven't been found--if there are genes 'for' longevity, how did they evolve, given that natural selection doesn't see beyond reproductive age? And, face shape is yet another complex polygenic trait, and genes 'for' face shape (that is, genes that contribute to facial development) will be found all over the genome, and yet the explanation for the results reported in this article suggests that it's a simple question of asymmetry genes linked to dementia genes.

Development involves all sorts of stochastic effects. Left and right sides separate early in the embryo, but from that point until any later age they are largely independent. They have the same cellular genome, but are independently affected by all sorts of aspects of cell division, somatic mutation, and chance interactions with the environment, chance variation in the level of signaling molecules and their response, or in the timing of gene expression, and many other things to boot. None of those is heritable and hence none can be affected by natural selection. And of course none of these causes of asymmetry would be linked with risk of dementia, another polygenic complex trait. (With the caveat that early onset dementia is a Mendelian single gene trait, but this isn't likely to be what afflicts the 83 year-olds in this study because the early onset form is also associated with early mortality.)

Hopefully there is more to this story than meets the reporter. Otherwise, it exemplifies shallow if not stupidly simplistic studies that shouldn't be funded or published, but stories about them get into print because the selection that is really important is that you select The Independent from the very asymmetrical display of newspapers trying to get you to look at them to pick up something to read on the train.

Tuesday, August 18, 2009

Darwin's Facebook

Facebook is the current generation's way of baring all to see. It's a kind of best-seller web service. Things are revealed on Facebook that many of older vintage believed to be quite private. But times change, and we have to recognize that fact and build it into our understanding of how the world works.

Charles Darwin (left, with son William) knew that his ideas about evolution had a lot to say about humans. But in 1859 in his Origin of Species, Darwin famously only hinted in one phrase that his ideas about evolution would throw light on the origins of humans, too. Twelve years later, that changed dramatically.

In 1871, Darwin addressed human evolution squarely and at length in Descent of Man. There, he compared human morphological traits to those of evolutionarily related primates, he discussed human racial variation, and evolution since the origin of our species. Although he had a sophisticated understanding of the amount of racial variation, he was less sophisticated when it came to culture, and he argued that some races (those that are 'civilized' today) were superior to others ('savages') because of natural selection.

But Darwin used morphology and other human attributes to argue that while we varied a lot around the world, the overall evidence showed that we had a single origin as a species. Races were not separately created.

In that book, Darwin considered behavior when he introduced his theory of sexual selection in the context of human evolution. The idea is that traits can be useful for securing mates, not just evading predators and surviving. This could explain traits, like fancy display traits or even variation in traits like skin color, that seemed to have little actual survival value. Fertility variation as well as mortality variation could be a mechanism of evolution and adaptation.

After Descent of Man was published, Darwin continued to think about behavior and human origins. If Facebook is today's web best-seller, Darwin's 1872 The Expression of the Emotions in Man and Animals was a best-seller of the more traditional kind. This was a detailed attempt to gather locally available data, as well as anecdotal data from around the world, to describe how people use their facial muscles to express their state of mind--their emotions--and thus to communicate visually with each other.

Darwin's motive was in part to show that an 1806 book on the use of facial muscles for expression was wrong. Charles Bell had argued that God had endowed people with expressive facial muscles to show their emotions. Darwin wanted to support his very different scientific view, but more than that, he wanted to continue his argument that humans had a single origin, and one that can be explained strictly in terms of historical processes. While he was an advocate of natural selection, this was almost entirely implicit in Expression, again because the point was to show that we shared these traits worldwide, among all races, and they were similar to comparable expressions in other animals. Ergo: all humans had a single, common, evolutionary origin in mammal phylogeny. We are a unified species.

We write because a new paper in Current Biology by Jack et al. (the abstract is here, and the paper is described here by the BBC) reports experimental data suggesting that Darwin's argument was, in fact, at least substantially wrong. In a controlled study of recognition of standardized facial expressions, the authors found that East Asians and Europeans tend to look at different parts of the face (eyes, mouth) for indicators, and interpret some expressions differently. The figure from their paper shows that these differences are statistical statements of how often, or how likely it is that, members of each group focus on, say, the eyes. But it shows that these things are not hard-wired, as Darwin had attempted to show in his way and with the data available to him.

The shared-trait approach is called cladistic by taxonomists trying to reconstruct evolution. The idea is that shared traits mean shared ancestry. But just because organisms share some trait doesn't mean they have common ancestry because there can be independent evolution of the trait. This is known as convergent evolution or homoplasy. But even shared traits and common ancestry don't imply single species origin. Dogs and cats have legs and sharp teeth, but are not a single species--even if their shared traits reveal deeper common ancestry.
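
A toy example of how a naive shared-trait tally can be misled by convergence; the trait matrix is invented and grossly simplified.

```python
# Naive 'more shared traits = closer relatives' scoring on an invented,
# grossly simplified trait matrix. Convergent traits (fins, streamlining)
# pull the dolphin toward the shark, even though its closer relative
# here is the dog (both are mammals).
traits = {
    "dog":     {"fur", "milk", "legs", "lungs"},
    "dolphin": {"milk", "lungs", "fins", "streamlined"},
    "shark":   {"fins", "streamlined", "gills"},
}

def shared(a, b):
    return len(traits[a] & traits[b])

for other in ("dog", "shark"):
    print(f"dolphin vs {other}: {shared('dolphin', other)} shared traits")
# dolphin vs dog:   2 shared traits (milk, lungs)
# dolphin vs shark: 2 shared traits (fins, streamlined)
# A tie -- and adding more convergent traits would let the shark 'win' --
# which is why shared traits alone can't settle common ancestry.
```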

Of course, all life shares common ancestry if current evolutionary theory is at all correct, as it seems surely to be. So the question has to do with species' identity. One has to work out the evolution of each trait separately. And environments can play a role, especially in something like facial expressions, as they can be taught rather than inherited.

Nobody should have accepted Darwin's argument wholesale. Expressions can be faked. Actors and con artists do it all the time. We do it to smooth over awkward social situations and in many other times when we weave the tangled web by practicing to deceive. It's common experience not to be sure what someone's glance may or may not mean. Most species with once-thought universal behaviors, like bird songs or cattle sounds, have regional dialects--their version of 'culture.'

But there's also a deeper point. In evolution, few rules are iron-clad. It's not like, say, universal gravitation or chemical bonding between atoms. So the fact that we may not, after all, universally use the same facial expressions doesn't provide any support at all for multiple human origins! We know that we have a single origin, but we know it from tons of evidence of other kinds that is much clearer and more convincing.

The deeper lesson is to avoid the common mistake of over-specifying evolution and its principles, of making scientific generalizations into scientific dogma.

Sunday, August 16, 2009

Dr Livingstone, I presume? (Frank, that is!)

Malaria may have killed or harmed more people than any other single cause of disease in our species' history. A new article by Rich et al. in PNAS uses newly available DNA sequence data to date the origin of a common virulent type of malarial parasite and its transfer from an ancestral chimpanzee host to humans.

Prior work had suggested the transfer was from a bird host, but primates now seem to be the guilty party. The relevant strains are Plasmodium falciparum and its close relative P. reichenowi, the chimp parasite. The idea of a recent transfer was supported hypothetically by the assumption that it's not in a parasite's long-term interest to be too virulent, and that P. falciparum simply hadn't yet had time to adapt toward more chronic, less damaging effects.

Sequences of several samples of these and other Plasmodia show clearly that the human and chimp sequences form a single evolutionary clade (the top big branch in the figure, reproduced from the paper). That also supports a recent transfer to humans from a chimp source that is considerably older and geographically widespread in Africa, rather than from a much more ancient bird clade.

This paper properly relates the authors' new findings to a classic, elegant, and incredibly perceptive and integrative American Anthropologist paper from 1958 by the late anthropologist Frank B Livingstone (Anthropological Implications of Sickle Cell Gene Distribution in West Africa, American Anthropologist, New Series, Vol. 60, No. 3 (Jun., 1958), pp. 533-562). He looked at the distribution of malaria, the population genetics of malarial resistance, including the frequency and distribution of sickle cell and other hemoglobin variants, and how long a mutation with a selective advantage due to protection from malaria would take to reach its present distribution. He elegantly integrated these data with known patterns of language and culture in the malarial areas of Africa.
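
For a flavor of the kind of calculation involved, here is a toy, deterministic sketch of how fast a malaria-protective allele could spread under heterozygote advantage. The fitness values are illustrative guesses, not Livingstone's own estimates.

```python
# Toy deterministic model of a protective allele 'S' spreading under
# heterozygote advantage (AS protected, AA malaria-susceptible, SS sick).
# Fitness values are illustrative guesses, not Livingstone's estimates.
s_aa = 0.15        # assumed disadvantage of unprotected AA homozygotes
s_ss = 0.80        # assumed disadvantage of SS homozygotes (sickle cell disease)
w_aa, w_as, w_ss = 1 - s_aa, 1.0, 1 - s_ss

p, generations = 0.001, 0          # starting frequency of S (a rare variant)
while p < 0.10:                    # track time to reach a 10% allele frequency
    w_bar = (1 - p)**2 * w_aa + 2 * p * (1 - p) * w_as + p**2 * w_ss
    p = (p * (1 - p) * w_as + p**2 * w_ss) / w_bar   # next generation's frequency
    generations += 1

equilibrium = s_aa / (s_aa + s_ss)   # balanced-polymorphism equilibrium
print(f"{generations} generations (~{generations * 25} years, at ~25 years each) to reach 10%")
print(f"equilibrium S frequency ~ {equilibrium:.2f}")
```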

Frank concluded that malaria arose roughly 10,000 years ago with the advent of settled agriculture, which encroached on the chimpanzees' forest habitats, the proximity providing opportunity for transfer to humans. Settled agriculture led to cleared, stationary fields in which ponds of water could develop, providing enhanced or stably localized breeding grounds for the larval stage of the mosquito life cycle.

Sequence divergence between the human and chimp parasite genes, plus the low level of divergence within the human clade, suggests to Rich et al. that the initial transfer may have occurred hundreds of thousands or even a few million years ago. It may have conferred only benign malaria on the 'human' hosts -- actually our species as such is only about 100,000 to 200,000 years old, so the transfer would have been to our very non-agricultural antecedents, whose population structure was probably much more patchy, sparse, and in these senses chimp-like, if probably also less arboreal.

Sometime post-agriculture, there may have been human and/or parasite mutations that made the parasite much more virulent, and it then spread rapidly with agriculture. Whether the virulence directly helped the spread or not would be debatable.

This is a very nice piece of science, one that leads us to a somewhat related topic about science. Given the relatively crude state of genetics at the time, Livingstone's paper was quickly recognized as a classic, and that judgment persists...if only people these days weren't too impatient to read it. Rich et al. did, but that's not so typically the case.

With much media and journal ballyhoo, a 2001 paper in Science by Tishkoff et al. (Haplotype Diversity and Linkage Disequilibrium at Human G6PD: Recent Origin of Alleles That Confer Malarial Resistance, Science 20 July 2001: Vol. 293, no. 5529, pp. 455-462) studied the origins of the human G6PD mutation, also related to malarial resistance. The authors used population genetic analysis conceptually similar to Livingstone's, and estimated that the protective mutation arose less than 10,000 years ago, compatible with the earlier estimates and reasoning. This was solid genetics, and Tishkoff et al. did refer to Livingstone's hypothesis, though in a buried rather than a featured way. The finding was strongly confirmatory, but not the transformative discovery that media hype gave the impression it was.

Science generates enough solid, interesting work without the hyperbole that these days has come to serve many interests. That gives a misleading impression that we're making 'paradigm shifts' in human knowledge on an almost daily basis, which is not true. In many ways, this is a kind of crying 'wolf!'. In the long run, science would be better off if routine or incremental findings were recognized as such, because that is how science mainly works, and the public who support us should know that. Authors should insist on tempered claims in the media and by the journals.

It is easy, of course, for us to be over-critical, and in the interest of full disclosure: although he never worked on malaria, Ken's PhD advisor was Livingstone himself. So we naturally knew of Livingstone's work.

But that doesn't change the fact, all too symptomatic of our credit-hungry times, that we and the media give credit to ourselves as if new technology had made all previous knowledge obsolete, and that we are unwilling, or in too much of a hurry, to do as Stanley did and make the trek required to identify the source of the News. In this case, that would be Dr Livingstone, we presume.

Thursday, August 13, 2009

I still smell a rat! (Episode II)

So each olfactory neuron (ON) expresses only one of its roughly 2000 olfactory receptor (OR) gene copies (two copies each of about 1000 genes) on its surface, and hence can react to (bind) only a limited range of odorant molecules. The combination of ONs that can bind a given odor molecule is a combinatorial expression code for the brain: the combination of signals is remembered when it first occurs, and recognized when the same smell is encountered sometime later.
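
A cartoon of that combinatorial logic, with invented receptors and odors, might look like this.

```python
# Cartoon of combinatorial odor coding: each odorant activates a subset of
# receptor types, and the 'brain' identifies the odor by the pattern of
# activated receptors, not by any single one. Receptors and odors invented.
RESPONDS_TO = {
    "OR-A": {"lemon", "orange"},
    "OR-B": {"lemon", "skunk"},
    "OR-C": {"skunk"},
    "OR-D": {"orange", "rose"},
}

def activation_pattern(odor):
    """The set of receptor types (hence neuron classes) this odor activates."""
    return frozenset(r for r, odors in RESPONDS_TO.items() if odor in odors)

# 'Learning': store the pattern the first time each odor is encountered.
memory = {activation_pattern(o): o for o in ("lemon", "skunk", "orange", "rose")}

# 'Recognition': a later whiff produces the same pattern, which is looked up.
print(memory[activation_pattern("lemon")])   # -> lemon
```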

But how does this 'map' from nose to the brain?

In ways not yet understood, the receptor molecule on the surface of each ON helps guide its axon from the nose through the base of the skull into the brain. There it meets a structure called the olfactory bulb. Neurons expressing the same OR may be scattered across different parts of the olfactory epithelium in the nose, but their axons recognize each other and bundle together as they migrate to one or two specific locations in the olfactory bulb. These locations are called glomeruli. A glomerulus is a kind of neural knot of axons from all the ONs that express the same OR and hence that respond to the same odor molecule.

From there, signals are relayed by other neurons to various parts of the brain. Their routes are not precise, and there is no longer a simple correspondence between the neural endings in a given part of the brain and the odor molecules they respond to. The brain can remember which ONs fire for a given odorant, but there's no spatial map that corresponds to the spatial pattern of the ONs in the nose. Instead, the brain simply remembers where 'lemon' is, and we each have this little factoid in different parts of our brains.

Now, it had been thought that evolution had programmed a fixed location for each glomerulus, so that in some sense there would be at least a map of where in the olfactory bulb axons from cells using the same OR would travel. There is a logic to such fixed maps for sound and sight information since, for example, light travels from a tree, a lion, or a potential mate in a spatially orderly way. But there's no such natural order to odors, and glomeruli-like units are not found in other sensory systems. So why would such a system be needed, and how did it evolve?

Recent work reviewed by Zou et al., the paper we referred to in our previous post, shows that it may be true that axons of neurons expressing the same OR molecule do recognize each other and bundle together. They end up in a glomerulus in the olfactory bulb. But it's not the same place in different individuals, even in genetically identical mice from the same inbred strain. The formation of a glomerulus--a kind of developmental ordering that clusters similar neurons together, and may increase signal strength as a result--seems to be programmed. But its particular location is not.

By requiring less specific order--order that would have had to be imposed on a kind of information (odors) that has no natural order of its own--the olfactory 'wiring' system may thus have been easier to evolve. But it means that odor responses would be somewhat less stereotypical, less precisely evolved, and that we respond to odors more as a result of experience than of hard-wiring.

In fact, Zou et al. report that some studies have shown that the wiring that's observed depends in important ways on olfactory experience--the usage of the neurons--during early life. Since each animal's experience is different, it's no surprise that the results also differ.

This picture does not make complex traits simple, but it helps show how complexity can be made simply. Tractable, reasonable processes can end up generating complex structures by assembling them bit by bit in stages during development. Olfactory organization is an example.

Further, this story shows that environments and their associated variation and stochasticity affect the traits we all bear--even when they are programmed genetically to develop by orderly processes. In that way, even inbred animals--like human identical twins--can be different from each other.

Wednesday, August 12, 2009

I smell a rat! (but how do I know it's a rat?)

There's a very fine review in the current issue of Nature Neuroscience by Zou et al., "How the olfactory bulb got its glomeruli: a just-so story?" These authors discuss one of the more fascinating genetic and developmental--and hence also evolutionary--topics we know of. It's one we dealt with at some length in The Mermaid's Tale, and even in our earlier book (2004, Genetics and the Logic of Evolution).

This new review paper deals with recent findings in studies of the way that the olfactory system, by which we detect, classify, and remember odors, works. There are many fascinating aspects of this system, some of them conserved even in insect olfaction (and we mentioned a few of the aspects that fascinate Richard Axel when we described his talk at Bar Harbor). Here we'll deal only with some generalizations, and we'll carry the discussion further in our next post, so we don't get too verbose here!

As we discuss in Mermaid's Tale, some sensory systems must retain a kind of direct representation, or 'topographic map', of the incoming information, for the simple reason that that information has some form of regular order. Vision is the detection of 2-dimensional streams of incoming light, 2-dimensional because they represent x-y coordinates of the light source (3-D vision is a separate topic that complicates this a bit, but doesn't alter this basic idea). Sound has a natural spectrum of frequencies, too.

These signals are perceived by neural systems that themselves have maplike representational properties. The retina is a surface of cellular receiving pixels, and the cochlea is a linear (though coiled up in many species, including ourselves) structure for sensing relative sound frequencies. Both retain their sensory 'map' of the incoming signal as they send detection impulses to the brain.

Topographic neural maps are how we discern and keep track of the orderliness of the information from 'out there' at any given time, and these maps are built in developmentally logical, orderly ways that in a sense take advantage of the orderliness of the information. As you move along the cochlea, sound-detecting hair cells respond to gradually changing frequencies, and they send their axons to the brain in this orderly way. Likewise, as you look across the retina, like the pixels on a TV screen, the cells receive light from corresponding places in the incoming image.

Smell is different. Odors have no locational or other spatial properties. To detect a smell and remember it, you don't have to relate it spatially to other odors, or even to where it came from, except generally. But you do have to identify and remember it in some way!

The first and important (and intriguing) key to the way this has evolved is that there is a huge gene family, of about 1000 members in mammals, that codes for olfactory cell-surface receptor proteins (ORs) that bind odor molecules. These genes lie on almost all our chromosomes, in clusters of from a few to tens of members (and because we're diploid, each cell has two copies of these 1000 genes). Yet each neuron picks only one copy of one of these genes to use, so it has only one type of receptor on its surface; how this happens is not understood. The other 999 genes (both copies) are kept turned 'off'. (There are some partial exceptions to this, which help show how the one-receptor-per-neuron picture usually comes about.)
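
To put rough numbers on the one-receptor-per-neuron rule, here is a minimal sketch (again ours and only illustrative; it says nothing about how the choice or the silencing is actually made): each simulated neuron picks one allele of one OR gene out of the roughly 2000 available copies and leaves all the others off.

```python
# Toy picture of the one-receptor-per-neuron outcome. How the real choice and
# silencing happen is not understood; the numbers here are only illustrative.
import random
from collections import Counter

N_OR_GENES = 1000   # approximate OR gene family size in mammals
ALLELES = [(gene, copy) for gene in range(N_OR_GENES)
                        for copy in ("maternal", "paternal")]   # ~2000 choices per cell

def choose_receptor(rng):
    """Each neuron expresses exactly one allele of one OR gene; the rest stay 'off'."""
    return rng.choice(ALLELES)

rng = random.Random(42)
neurons = [choose_receptor(rng) for _ in range(50000)]   # 50,000 simulated ONs

usage = Counter(gene for gene, copy in neurons)
print("distinct OR genes in use:", len(usage), "of", N_OR_GENES)
print("busiest OR gene and its neuron count:", usage.most_common(1)[0])
```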

This way only particular odor molecules can bind to each olfactory neuron (ON), triggering an 'I smell it!' message along its axon to the brain. Within each OR gene cluster, the genes were produced by gene duplication, and so have similar DNA sequences and hence similar chemical binding properties. ONs in similar areas of the nose are more likely to express OR genes from the same cluster. But this is very patchy and incomplete, and it also turns out that the odor molecules bound by receptors coded by adjacent genes are not systematically similar.

This means that the ONs sending their axons to the brain don't have a particularly orderly map relative to any kind of odor spectrum, the way light and sound spectra are neurally mapped. Again, different odors don't have such a natural map. But once your brain's figured out 'lemon' or 'skunk', it remembers it, however it's mapped there, so that you can recall it the next time.

It was long thought that the axons of the ONs found their way to specific collecting points, called glomeruli, on their way to the brain, guided by the OR proteins on their surface. This was seen as a way to concentrate the signal (since the pulses from all ONs using the same OR, and thus detecting the same 'lemon' aroma molecules, would pass through the same glomerulus) and send it on to some specified location in the brain.

But it turns out that this is only partly true. Instead of a fixed map, that we all share, we each seem to be placing our 'lemon' messages in different parts of the olfactory area of our brains. Again, so long as we each know where that is, we can recognize the next lemon that comes along. But your lemon is located in a different part of your neural garage than mine.

So, unlike vision and sound, where it appears each of us has a similar source-to-brain map, this seems not to be the case for smell, as Zou et al. review. In our next post we'll explain what does seem to happen: while it doesn't depend on a topographic map, it does reflect understandable, and hence evolvable, developmental mechanisms that enable our olfactory house to be in order.

Tuesday, August 11, 2009

Will you buy your genome on a disk?

A story in the New York Times today describes a new DNA sequencing technology that will sequence a whole genome for under $50,000. This is a lot closer to the $1000 genome that researchers have been waiting for (and promising) for a long time, as a research tool but, more importantly, as an invaluable tool in diagnosis and prediction of disease risk. The grand 1-grand-for-1-genotype idea is sure to become a reality sometime soon.

But even science writers who are outliers on the genetics hyperbole scale are now routinely aware of, and questioning, what we can gain from this information. Essentially, whole genome sequences extend association studies: they look for variants in sequence that correspond statistically to variation in phenotypes like disease. Scaled-up GWAS and big biobanks will provide the samples on which such work will be done.
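
For readers who want to see what 'correspond statistically' boils down to, here is a deliberately oversimplified sketch of the kind of test at the heart of an association study, with counts we made up for illustration: compare genotype frequencies at one variant between affected and unaffected people and ask whether the difference exceeds chance.

```python
# Toy version of the statistical core of an association test. The genotype
# counts are invented; real studies test hundreds of thousands of variants
# and must correct for multiple testing, ancestry, and much else.
from scipy.stats import chi2_contingency

# Rows: cases, controls. Columns: genotypes AA, Aa, aa at one variant site.
counts = [
    [120, 230, 150],   # people with the disease
    [180, 240,  80],   # people without it
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3g}")
# A small p-value says the genotype frequencies differ between cases and
# controls in this sample -- a statistical association, not a mechanism,
# and not necessarily a useful prediction for any individual.
```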

As regular readers of this blog know, we (and we're not alone) have been questioning the meaning of 'genes for' thinking for a long time--is this now percolating into the general consciousness even in the media?

Well, there are huge vested interests hiding under the bed. In spite of what looks to be an increasing acceptance of genetic complexity, adherents of 'genes for' thinking are still spending increasing time and money on genome-wide association studies (GWAS), looking for genes for their trait, and still claiming great success. DNA testing companies like deCODEme and 23andMe are still in business, claiming to be able to tell you your risk of disease, and people are still buying these services.

Those who are not so savvy but need to keep their careers on track, and who can do these kinds of studies (because they are largely canned and off-the-shelf nowadays), are sometimes perforce committed to this status quo. But, for various reasons that range from true belief in the prospects to fully aware budget protection, they are pressing ahead. They need the funding to continue to flow, and hope or believe that whole genome sequences will save the day. Somehow. They don't know how. Pray for serendipity!

It is easy to criticize and harder to change course, especially with so much invested in equipment, equipment manufacture, bureaucratic portfolios, lab personnel, publications, reputations, and tenure. In this sense, we think science is forced to stay the course since we're only human. But that doesn't make it the best science, even if it's technologically leading edge and extremely sophisticated, which it is.

To be a bit more sympathetic, most people are rather conventional and conceptually not very innovative. In science as well as other areas of human endeavor, we want our ideas and even our dogmas: they give continuity to our lives and a sense that we understand things. Change comes hard, and new ideas even harder. Though we're all taught, and many teach, that the objective of a scientist is to try to prove his/her ideas wrong, that's near-total baloney! What we almost always do instead is contort to prove that our ideas are right. That's how careers are built. Many journals won't even publish negative results. That's why, even in the face of negative results, as in this case, we persist. But that doesn't make it good science.

As to the promises that genomes will predict your life experience....we're not buying it.

Monday, August 10, 2009

The brain and the braincase, and much more, too

In our book The Mermaid's Tale, we make the point that, in thinking about how the diversity of life arose, people often place too much emphasis on evolutionary time scales compared to the more immediate timescale on which life is actually lived. We note that no two species in nature, no matter how distant or how different their genomes, are any more different than a brain and the braincase that encloses it. Yet the brain and the braincase are made of cells with the same genome, cells very closely related in terms of cellular descent (from the single fertilized egg) that in fact interact extensively with each other. This figure, from a simulation done by Brian Lambert, programmer extraordinaire with our group, illustrates how signal interactions among three layers (an outer epithelium, a pre-bone layer of mesodermal cells, and an underlying dural layer usually thought of as the outer layer of the brain) produce the different tissues of the brain and skull.

The difference is not in the genomes in any of the cells involved in the development of the head, of course, but in the way their genomes are used. Indeed, in research that we are doing in collaboration with Joan Richtsmeier and others here, and Mimi Jabs at Albert Einstein University in New York, we are finding the intricate way in which these two structures develop through intimate interactions of cell layers that often express similar genes. It is not even clear whether the cells of the future braincase, or those of the future brain, are 'in charge' of this process, and the very concept of a pure hierarchy of control is probably most often a misperception that may derive from our broader culture, in which we do have bosses and the bossed, and natural power hierarchies.

This cooperative kind of interaction based on signaling is essential to life; indeed, we would argue that life is signaling. That means that, contrary to ideas current since Mendel's work on peas was rediscovered in 1900, genes per se are not so much the key to life as gene usage is. It also points to the somewhat Lamarck-like fact that cells, if not organisms, do change their gene expression, and hence their differentiation and behavior, as a result of experience (their cellular context), and that these changes are inherited by their descendant cells in the body of the organism.

This is not the place to go on in detail about that--it's a major part of our book. But we are triggered to discuss the subject because a new paper by Dimas et al. in Science Express ("Common Regulatory Variation Impacts Gene Expression in a Cell Type-Dependent Manner", published online July 30, 2009) has compared regions of the genome in which variation among individuals differentially controls the expression of genes in several different cell types derived from the individuals that were tested. The findings are, first, that each cell type has its own regulatory regions distributed across the genome. This is no surprise, because each cell type, such as the different types of blood cells, does different things and must do so via hundreds of differently expressed genes. There is variation because these individuals vary, just as you and we do. But the greater variation, as with the brain and the braincase, is among very closely related cells in terms of their gene expression.

Dimas et al. also found that regulatory regions used in only one of the tested cell types tended to cause lower levels of expression, and to lie farther from the genes they regulate. Why this should be, if it is a finding that holds up in future work, is anyone's guess, but it may reflect some aspect of the evolution of the cell type. Another finding was that the great majority, up to around 80%, of regulation is by control elements that affect only one of the tested cell types.
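
To make 'regulatory variation that acts in a cell type-dependent manner' concrete, here is a minimal sketch with simulated data (ours, not the authors' method): regress a gene's expression on genotype at a nearby variant separately in each cell type, and see where the association holds.

```python
# Toy sketch of a cell-type-dependent regulatory effect (an 'eQTL').
# All data are simulated; Dimas et al.'s actual pipeline is far more elaborate.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 60
genotype = rng.integers(0, 3, size=n)     # 0, 1, or 2 copies of one allele

# Expression of one gene measured in two cell types from the same individuals:
# the variant is built to raise expression in cell type A but not in B.
expr_A = 5.0 + 0.8 * genotype + rng.normal(0, 1, n)
expr_B = 5.0 + 0.0 * genotype + rng.normal(0, 1, n)

for name, expr in [("cell type A", expr_A), ("cell type B", expr_B)]:
    fit = linregress(genotype, expr)
    print(f"{name}: slope = {fit.slope:.2f}, p = {fit.pvalue:.3g}")
# The same variant shows a clear effect in only one of the two cell types,
# the pattern reported for most of the regulatory regions in the paper.
```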

So cells differentiate to make organs, and that's what makes you as an organism. Evolution leads to differences, to be sure, but at least as interesting is the way that cells act as organisms of their own, and evolve--yes, evolve in the true sense of the word--very rapidly and by cooperative communication with each other, rather than competition. In fact, in the book we call the developmental process of cellular differentiation cytospeciation.

These issues are important far beyond their basic interest. For example, the whole idea of using stem cells to develop replacement tissue as therapy for diseases depends on making the same starting cells become different, a singular challenge. Indeed, it's less of a challenge to produce a whole new animal with stem cells than to direct the differentiation of the cell types needed to make a single organ. But, the more this can be done with the patient's own genome, the more likely it will work.

But, some day it may be possible to take your adult skin cells, and turn them into brain or braincase, to repair damage acquired during life.

Heckling Haeckel

A review of a new book by Sander Gliboff called H.G. Bronn, Ernst Haeckel, and the Origins of German Darwinism: A Study in Translation and Transformation was published in PLoS Biology on July 28. As described by the reviewer, Axel Meyer, the book details the history of evolutionary thinking in Germany, largely determined by the first translator of Darwin's Origin, paleontologist Heinrich Georg Bronn. Bronn had apparently been thinking along similar lines to Darwin, except that he viewed evolution as a march toward perfection, a view that he liberally injected into his translation. He added many footnotes, and even a 15th chapter to further explain and interpret Darwin's thinking to the German public.


Darwin and Bronn corresponded about the translation, and Darwin modified subsequent editions of the book to include some of Bronn's suggestions, but he did the same with the ideas of many of his correspondents--that was largely how he operated. But, Gliboff's thesis is that Bronn's translation had an immediate influence on evolutionary thought in Germany, including through the Nazi era and to today. One of the more significant early readers of the translation was Ernst Haeckel, well-known paleontologist and popularizer of scientific thought in 19th century Germany. As Meyer writes,


The main reason why all of this is of greater, even political, interest beyond issues in the history of science, is that Ernst Haeckel is widely seen—although this is disputed among historians of science—to be in an unholy intellectual line from Darwin to social Darwinism and eugenics in the early twentieth century, eventually leading to fascism in Nazi Germany. Creationist and intelligent-design advocates worldwide tirelessly perpetuate this purported but largely unsubstantiated connection between Darwin, Haeckel, and Hitler.


We haven't read the Gliboff book, and it's not clear from the review what stand Gliboff takes on this, but the idea that Haeckel and Darwin were responsible for the Holocaust is not a new one. Haeckel, however, died in 1919, before the Nazis assumed control, and his views were part and parcel of his times. In his fine and definitive recent biography of Haeckel (The Tragic Sense of Life, 2008), Robert Richards deals at some depth with these issues, as well as with how historians do, or should, interpret and evaluate figures who died before the later events they are blamed for. In this sense, what we have is a bum rap against Haeckel.


Darwin, of course, and Haeckel too, placed some ‘races’ at a somewhat higher level of advance than others, as this figure from Haeckel’s 1868 Natural History of Creation shows. Darwin was, in other places in his writing, especially in Descent of Man, less clear in his value judgments about race, but he did refer to ‘savages’ as contrasted with ‘civilized’ people as being biologically inferior because of natural selection. He worried that modern life protected the weak. Haeckel’s view was the preponderant view of educated Europeans at the time, German or otherwise.


Unfortunately the issues are actually much deeper. In the 20th century, Germans were unquestionably culpable for going along with Hitler’s venomous ideas and acts. But where did these ideas arise? Bronn did apparently take liberties that may have influenced German thinking, especially if, as Richards and Meyer suggest, Haeckel did not read English and so read only the German translation of the Origin. Darwin’s theory of evolution by natural selection was adopted and propelled energetically by Haeckel, who saw evolution as a kind of progressive phenomenon, a view that fit with his anti-religious 'monistic' view of existence, which denied a mind-matter distinction. The inequality that was central to natural selection fit the longstanding ethos of the time in regard to human race and hierarchy.


If the theory were truly universal, then it would be only natural to think that winners and losers, better and worse, were natural characteristics of humans as well as of the rest of organic life. The formal eugenic movements arose in England and the US and were all about who’s better and who’s not, for assumed inherent reasons. Darwin’s cousin Francis Galton started ‘eugenics’ (he invented the term in 1883). Eugenics was being applied in vigorously discriminatory ways in the US by the early 1900s, including the forced sterilization of people judged unworthy. This idea was taught even in high schools. What is now Cold Spring Harbor Laboratory once housed the Eugenics Record Office. Indeed, the Germans learned some of their approaches from us. They didn’t invent it.


For about 20 years, from roughly 1920 to 1940, the world’s leading textbook on human genetics was Baur, Fischer, and Lenz’s Human Heredity. It was translated into many languages and repeatedly reprinted. It was a kind of feeder book that helped the Nazis develop the theory of race that justified their actions. Race characteristics, Baur et al. argued, were Darwinian and genetic.


However, the Nazis were specifically not sanguine about Haeckel, for reasons having to do with Haeckel’s monism. More to the point, Baur et al., certainly German and proud of Germanic heritage, do not mention Haeckel.


As part and parcel of their appeal to recover national self-esteem after the devastation Germany experienced in WWI, the Nazis were, if anything, hyper-Nordic. They reveled in anything Aryan. One would have expected them to laud Haeckel with great enthusiasm as a hero of their race. But in both Haeckel’s human evolutionary tree and even in Baur et al., Jews were placed at a very high plane among humans. The authors of Baur et al. certainly knew of Haeckel and his writing: until his death he had been perhaps the most celebrated public scientist in the world (and certainly in Germany). Yet they ignored him.

The Nazis took a ready-made Darwinian justification for their vitriol that was 'in the air' at the time. Their abuses were their own doing!

Friday, August 7, 2009

Cooperation and goat cheese

These goats don't faint or climb trees, but they do climb rocks, eat grass, chew their cud, and make the milk that Anne's sister, Jennifer, and her husband, Melvin, will turn into yogurt and cheese.


But, to become milk, of course, the grass has to pass through the stomach. The ruminant stomach is a fine example of the kind of cooperative symbiosis we wrote about on 7/21. We all learn as children that ruminants -- hooved animals like cows, goats, and sheep -- “have four stomachs,” which allow these animals to subsist pretty much on cellulose alone. Cellulose is a very poor source of nutrients, and it requires a lot of time and energy to consume. Cows, for example, have to eat just about around the clock.


While the idea of four stomachs suggests four separate but equal digesting pots, in fact ruminants are more like us than we imagined as children: they actually have only one stomach, with three or four chambers depending on the species and on whether the closely connected rumen and reticulum are counted separately--the rumen-reticulum, the omasum, and the abomasum. This division allows these animals to digest the plant material, mostly cellulose, that comprises the bulk of their diet--despite the urban myth about goats eating everything from tin cans to nuts and bolts. Here's a public domain figure of a ruminant digestive system.


Cellulose is impossible for people to digest, as we have no way to break it down into usable nutrients, but the complicated ruminant digestive system allows cows, sheep, deer, camels, llamas, goats, and so on to do so. Ruminants chew their food briefly and then swallow it, whereupon it enters the first chamber of the stomach, the rumen-reticulum. Here it’s mixed with saliva--enormous quantities of saliva; cows make something like 150 liters of it every day, which explains why they drool so much, and why they have to drink so much--and then it’s divided into liquid and solid layers.


The solids form the cud, which the animal regurgitates around 500 times a day and slowly chews to break down the cellulose particles, in a process called “ruminating” (from the Latin, ruminare--could it really be because cows seem so thoughtful when they chew their cud?). When the animal swallows the semi-digested material, it is returned to the rumen-reticulum where the fibers are further broken down and fermented by many kinds of bacteria found in great numbers in the gut. It is primarily the carbohydrates that ferment (in fact, if you feed a ruminant enough sugar, she will get drunk), and the nutritive products of this fermentation are absorbed here, through the wall of the rumen-reticulum.


Fermentation produces an enormous amount of methane gas, which the ruminant releases by belching or farting. Anything that interferes with the release of this gas, causing “bloat,” is serious and can be life threatening; if a cow or a goat spends too much time lying on its side, it can literally bloat and die because there’s no easy exit for the methane in that position. From time to time, when someone with not much experience is bottle feeding the kids on the farm, they’ll miss the signs that one of them has had enough (bulging stomach), and will let the kid drink too much. The poor animal can be dead of bloat within hours. We worry about this when we feed babies on Jen and Melvin's farm, but so far we’ve been lucky. We’ve seen it happen, though, and it’s not pretty.


Like most animals, including humans, ruminants need help doing some things that are vital to their survival. Ruminants can’t digest grass without the help of the abundant micro-organisms in their gut. Although the symbiosis may not be good for each individual bacterium, because many of them get digested in the animal’s abomasum (the stomach’s final chamber), it’s good for bacterial species overall, because they absolutely thrive in the forestomach. Another example of the centrality of cooperation in life. This is not cooperation in the social, intentional sense, such as a group project or a company, but it is social group behavior in which the parties not only benefit from but depend on each other.


The liquid layer in the rumen-reticulum, containing the broken-down fibers of the re-chewed and re-swallowed cud, is passed to the omasum, where the liquid and the metabolites from the breakdown of the bacteria are absorbed into the bloodstream. The semi-digested bolus that remains, which is primarily protein at this point, is passed to the abomasum, the final stomach chamber, and then on to the intestine, where the nutrients are absorbed, much as in any single-stomached animal. In an adult ruminant, the abomasum is about 10% of the stomach. (Here's a picture of feta cheese that Jennifer is making.)


The rumen and reticulum of young ruminants like goats are undeveloped and non-functional until the animal begins to feed on grass or hay or grain, at about two months of age. Before then, the abomasum is by far the largest compartment, so that the stomach functions more like that of a “monogastric,” or single-stomached, animal. The milk the young animal drinks passes directly from the esophagus to the abomasum, where it is digested. As the animal begins to add dry food to its diet, the rumen becomes populated with bacteria that are in the food or the air, and these colonizing bacteria actually stimulate the development of the goat’s rumen and reticulum, which allows the growing animal to begin to ruminate, digesting its food through fermentation. The development of these various sequential pouches along the gut is another example of cooperation between cells, signaling each other and inducing growth and differentiation at appropriate places along the gut tube.


Some non-ruminants can digest cellulose, too, but not with a forestomach, as they don’t have one. These “monogastric herbivores” have the same kind of simple stomach we have, but have a huge large intestine, akin to the ruminant rumen, where their diet of cellulose ferments. Horses, rabbits, rodents such as porcupines and beavers, elephants, rhinoceroses, and pigs are all examples of hindgut fermenters. This is but another example of the many ways of evolutionary success, even to similar ends.


Although all these animals consume and digest cellulose with the aid of microbes, hindgut fermenters, unlike ruminants, can’t take advantage of all the microbial protein they break down in their gut because they can’t absorb it. Most nutrients are absorbed in the small intestine, but with fermentation happening beyond that, in the large intestine, these animals aren’t able to absorb the amino acids that result from the breakdown of bacteria there, and so those nutrients are expelled in the feces. This in fact means that their feces are loaded with protein and nitrogen and so on, and a number of these animals actually reclaim these nutrients by consuming their own feces. It makes a great meal for insects, too.


From front end to back, cooperation of various kinds, in various ways, allows these mammals to consume cellulose. In a sense, too, they are cooperating with the grasses in the fields, since grazing stimulates the grasses' growth and proliferation. These days, of course, the farmer cooperates by cutting the field grass, making hay so the goats can survive the winter.


And, in another nice example of cooperation between "us" and "them", goats give really good massages. (Thank you for the photos, Jennifer!)