Uncommon Descent

14 September 2009

“Reducible complexity” in PNAS

Michael J Behe

Dear Readers,

Recently a paper appeared online in the journal Proceedings of the National Academy of Sciences, entitled “The reducible complexity of a mitochondrial molecular machine” (http://tinyurl.com/mhoh7w). As you might expect, I was very interested in reading what the authors had to say. Unfortunately, as is all too common on this topic, the claims made in the paper far surpassed the data, and distinctions between such basic ideas as “reducible” versus “irreducible” and “Darwinian” versus “non-Darwinian” were pretty much ignored.

Since PNAS publishes letters to the editor on its website, I wrote in. Alas, it seems that polite comments by a person whose work is the clear target of the paper are not as welcome as one might suppose from reading the journal’s letters-policy announcement (“We wish to provide readers with an opportunity to constructively address a difference of opinion with authors of recent papers. Readers are encouraged to point out potential flaws or discrepancies or to comment on exceptional studies published in the journal. Replication and refutation are cornerstones of scientific progress, and we welcome your comments.”) (http://tinyurl.com/mpf4sp). My letter received a brusque rejection. Below I reproduce the letter for anyone interested in my reaction to the paper. (By the way, it’s not just me. Other scientists whose work is targeted sometimes get the runaround on letters to the editor, too. For an amusing / astounding example, see http://tinyurl.com/n5dbdz.)

Call me paranoid, but it seems to me that some top-notch journals are really anxious to be rid of the idea of irreducible complexity. Recall that last year Genetics published a paper purportedly refuting the difficulty of getting multiple required mutations by showing it’s quick and easy in a computer—if one of the mutations is neutral (rather than harmful) and first spreads in the population (http://tinyurl.com/mxjwdy).
Not long before that, PNAS published a paper supposedly refuting irreducible complexity by postulating that the entire flagellum could evolve from a single remarkable prodigy-gene (http://tinyurl.com/l6kjh4). Not long before that, Science published a paper allegedly refuting irreducible complexity by showing that if an investigator altered a couple of amino acid residues in a steroid hormone receptor, the receptor would bind steroids more weakly than the unmutated form (http://tinyurl.com/kjq4e4 and http://tinyurl.com/lqav6p). (That one also made the New York Times!) (http://tinyurl.com/q6w3qq) For my responses, see here (http://tinyurl.com/meeqfs), here (http://tinyurl.com/3vzxet), here (http://tinyurl.com/ln7a6k), and here (http://www.discovery.org/a/3415).

So, arguably picayune, question-begging, and just plain wrong results disputing IC find their way into front-line journals with surprising frequency. Meanwhile, in actual laboratory evolution experiments, genes are broken right and left as bacteria try to outgrow each other (http://tinyurl.com/lxvge5). Well, at least it’s nice to know that my work gives some authors a hook on which to hang results that otherwise would be publishable only in journals with impact factors of -3 or less. But if these are the best “refutations” that leading journals such as PNAS and Science can produce in more than a decade, then the concept of irreducible complexity is in very fine shape indeed.

*************

To the editor:

Reducible versus irreducible systems and Darwinian versus non-Darwinian processes

The recent paper by Clements et al. (1) illustrates the need for more care to avoid non sequiturs in evolutionary narratives. The authors intend to show that Darwinian processes can account for a reducibly complex molecular machine.
Yet, even if successful, that would not show that such processes could account for irreducibly complex machines, which Clements et al. (1) cite as the chief difficulty for Darwinism raised by intelligent design proponents like myself. Irreducibly complex molecular systems, such as the bacterial flagellum or intracellular transport system, plainly cannot sustain their primary function if a critical mechanical part is removed (2-4). Like a mousetrap without a spring, they would be broken. Here the authors first postulate (they do not demonstrate) an amino acid transporter that fortuitously also transports proteins inefficiently (1). They subsequently attempt to show how the efficiency might be improved. A scenario for increasing the efficiency of a pre-existing, reducible function, however, says little about developing a novel, irreducible function.

Even as evidence for the applicability of Darwinian processes just to reducibly complex molecular machines, the data are greatly overinterpreted. A Darwinian pathway is not merely one that proceeds by “numerous, successive, slight modifications” (1) but, crucially, one where mutations are random with respect to any goal, including the future development of the organism. If some mutations arise non-randomly, the process is simply not Darwinian. Yet the authors say nothing about random mutation. Their chief data are sequence similarities between bacterial and mitochondrial proteins. However, the presumably homologous proteins have different functions, and bind non-homologous proteins. What is the likelihood that, say, a Tim44-like precursor would forsake its complex of bacterial proteins to join a complex of other proteins? Is such an event reasonably likely or prohibitively improbable? Clements et al. (1) do not provide even crude estimates, let alone rigorous calculations or experiments, and thus provide no support for a formally Darwinian process.
The only relevant evidence they offer in this regard is their demonstration that a singly-mutated bacterial TimB can substitute for Tim14 in mitochondrial transport. While that is certainly an interesting result, rescuing a pre-existing, functioning system in the laboratory is not at all the same thing as building a novel system step-by-random-step in nature. Biologists have long been wary of attempts to fill in our lack of knowledge of the history of life with imaginative reconstructions that go far beyond the evidence. As I have discussed (5), extensive laboratory evolution studies over decades offer little support for the plausibility of such felicitous scenarios as Clements et al. (1) propose. The authors may well be overlooking formidable difficulties that nature itself would encounter.

References

  1. Clements A, et al. (2009) The reducible complexity of a mitochondrial molecular machine. Proc Natl Acad Sci USA, doi:10.1073/pnas.0908264106.
  2. Behe MJ (1996) Darwin’s Black Box: The Biochemical Challenge to Evolution (Free Press, New York).
  3. Behe MJ (2000) Self-organization and irreducibly complex systems: A reply to Shanks and Joplin. Phil Sci 67:155-162.
  4. Behe MJ (2001) Reply to my critics: A response to reviews of Darwin’s Black Box: The Biochemical Challenge to Evolution. Biol Phil 16:685-709.
  5. Behe MJ (2007) The Edge of Evolution: The Search for the Limits of Darwinism (Free Press, New York).

*************

31 August 2009

Back on Bloggingheads TV

Michael J Behe

The editor-in-chief of Bloggingheads TV, Robert Wright, has reinstated my interview with linguist John McWhorter (http://bloggingheads.tv/diavlogs/22075) on that website. Wright was away last week when the brouhaha occurred, and it’s good to see that a steady editorial hand is back in charge.

28 August 2009

Bloggingheads TV and me

Michael J Behe

Dear readers,

I’ve just been through the weirdest book-related experience I’ve had since a Canadian university professor with a loaded rat trap chased me around after a talk I gave a dozen years ago, threatening to spring it on me. Last week I got the following email bearing the title “Invitation to Appear on Bloggingheads TV” from a senior editor at that site:

*************

Hi, Michael–

I’d like to invite you to appear on Bloggingheads.tv, a web site that hosts video dialogues between journalists, bloggers, and scholars. We have a partnership with the New York Times by which they feature excerpts from some of our shows on their site.

Past guests include prominent thinkers such as Paul Krugman, Paul Ehrlich, Frans de Waal, David Frum, Richard Wrangham, Francis Fukuyama, Robert Kagan, and Michael Kinsley.

Here is one of our recent shows, a dialogue between Paul Nelson, of the Discovery Institute, and Ron Numbers, of Wisconsin-Madison:

http://bloggingheads.tv/diavlogs/21107

I’m hoping that you might be interested in participating, as well.  First-time participants often report how refreshingly unconstrained they find the format—how it lets them present their views with a depth and subtlety not possible on TV or radio. We’d love to have you join us.

If you’re available, please let me know, and we can see about arranging a taping. Thank you for your time.

**************

He seemed like such a nice fellow, so after a couple of days I emailed him back to say, sure, I’d be glad to. The editor responded, okay, sometime next week, your discussion partner will be John McWhorter of the Manhattan Institute. I had never heard of McWhorter before, so I googled his name and saw that he’s a linguist who often writes on race matters. I didn’t know what to expect, because I know some conservatives (which he seemed to be, from his bio) don’t like ID one bit.

Everything was arranged for the taping Tuesday afternoon. When the interview started, I was surprised and delighted to learn that McWhorter was actually a fan of mine. He said (I’m paraphrasing here) he loved The Edge of Evolution and wanted the book to become better known. He said that this was one of the few times that he initiated an interview at Bloggingheads. He said he was familiar with criticisms of the book and found them unpersuasive. He said that Darwinism just didn’t seem to him to be able to cut the mustard in explaining life, and he had yet to read a good, detailed explanation for a large evolutionary change. He also said that he had never believed in God, but that EOE got him thinking. In return I summarized my arguments from EOE, talked about protein structure, addressed his objections that intelligent design is “boring” and a scientific dead-end, and so on. At the end of the taping I thought, gee, those folks at Bloggingheads TV are a real nice bunch.

The next day I emailed the Bloggingheads editor to ask when the show would go on. He answered right back that at that very moment it had been activated, and thanks for participating. I clicked the link, and there was the show. I thought I looked older on screen than I am (my beard isn’t really that white), but emailed some friends to let them know the interview was up anyway. That evening I got an email from one of them saying that he couldn’t find the interview — it had been yanked.

Let me emphasize this, dear readers. Here we are living in the land of the free and the home of the brave. And yet a web site puts up an interview with an (ahem…) somewhat controversial figure, pulls it back down within hours, erases it, sends it down the memory hole. Why might that be? There would seem to be two possibilities: 1) maybe we aren’t quite as free as we think, or 2) maybe not quite as brave.

I bet on possibility #2. Because of the magic of the internet, it turns out that shortly after the show’s posting the comments section of the site was overrun by “bitterly virulent” (in the words of one principal in this saga) cyber bullies, some murmuring darkly about a grim future for Bloggingheads. After I found out the video was removed I emailed John McWhorter and the editor to ask for an explanation, and John emailed back that he himself requested the video to be pulled because people thought he was too easy on me, which was supposedly contrary to that old Bloggingheads spirit. I find that quite implausible (other shows on the site feature discussions between people who agree on many things). Rather, I suspect the folks at the website weren’t expecting the vitriolic reaction, began to worry about their good names and future employment prospects, pictured themselves banished to a virtual leper colony, panicked, and folded.

Well, mobs, including internet mobs, are scary things, and it’s understandable to panic when they unexpectedly show up at your door. But if you’re going to set up a website to air discussions about contentious issues of the day, you should have a whole lot more guts than displayed by Bloggingheads TV.

For those who want to watch for themselves the interview that made grown men tremble, it can be seen at http://tinyurl.com/lqonmn.

Below is a time-lapse picture of my Bloggingheads interview. I’m the guy on the right.

[Time-lapse image of the Bloggingheads interview]

21 May 2009

Letter to Science

Michael J Behe

The May 1st issue of Science contains a “News Focus” article entitled “On the Origin of the Immune System.” While describing some current work in the area, the author, John Travis, makes liberal use of me as an unreasonably skeptical foil. I wrote a letter to the editor of Science pointing out inaccuracies in the story but, gee whiz, they didn’t think the letter would be of sufficient interest to their readers to print it. Below I reproduce the unpublished letter for those who might be interested in my reaction to the article.
To the editor:
 In his article “On the Origin of the Immune System” (Science, May 1, 2009) John Travis makes the same mistake as did the judge in the 2005 Dover trial — badly confusing the notions of intelligent design, common descent, and evolution. Citing the courtroom theatrics of the lawyers who piled a stack of textbooks and articles in front of me, Travis quotes me as remarking “They’re wonderful articles. … They simply just don’t address the question I pose.” Unfortunately, Travis seems uninterested in what that question might be. Instead he cheers, “Score one for evolution.”
 Although some news reporters, lawyers, and parents are confused on the topic, “intelligent design” is not the opposite of “evolution.” As some biologists before Darwin theorized, organisms might have descended with modification and be related by common descent, but the process might have been guided by some form of intelligence or teleological driving force in nature. Darwin’s chief contribution was not the simple idea of common descent, but the hypothesis that evolution is driven completely by ateleological mechanisms, prominently including random variation and natural selection. Intelligent design has no proper argument with the bare idea of common descent; rather, it disputes the sufficiency of ateleological mechanisms to explain all facets of biology. Those who fail to grasp such distinctions are like people who can’t distinguish between the ideas of Darwin and, say, Lamarck.
 In the courtroom scenario Travis recounts, I was testifying that science has not shown that a Darwinian mechanism could account for the immune system. Travis’s article itself confirms that is still true. He cites some biologists who think the adaptive immune system arose in a “big bang”; he quotes other scientists who assert, “There was never a big bang of immunology.” He discusses vertebrate immunologists who think they know what the selective advantage of the system is; he quotes invertebrate immunologists who feel otherwise. So are we to think that its history is uncertain and even its selective advantage is unknown, yet the mechanism by which the adaptive immune system arose is settled?
In my court testimony I cited the then-new article by Klein and Nikolaidis, “The descent of the antibody-based immune system by gradual evolution” (Proc. Natl. Acad. Sci. USA 102:169-174, 2005), which first disputed the big bang hypothesis. In it the authors candidly remark, “Here, we sketch out some of the changes that the emergence of the AIS entailed and speculate how they may have come about.” Valuable as it might be to science, however, speculation is not data, let alone an experimental result. Students are poorly served when they are not taught to distinguish among them.
Michael J. Behe

21 May 2009

Letter to Trends in Microbiology

Michael J Behe

The January 2009 issue of Trends in Microbiology contains an article entitled “Bacterial flagellar diversity and evolution: seek simplicity and distrust it?”  Unfortunately, like many people, the authors have a mistaken view of irreducible complexity, as well as a very shallow idea of what a Darwinian “precursor” to an irreducibly complex system might be. I wrote a letter to the editor of the journal to point out these difficulties. Alas, they said they had no room to publish it. Below is the letter that I sent.
To the editor:
In their recent article “Bacterial flagellar diversity and evolution: seek simplicity and distrust it?” Snyder et al. (2009) [1] attribute to me a view of irreducible complexity concerning the flagellum that I do not hold. They write “One advocate of ID, Behe, has argued that the bacterial flagellum shows the property of ‘irreducible complexity’, that is, that it cannot function if even a single one of its components is missing”. That isn’t quite right. Rather, I argued that necessary structural and functional components cannot be missing. In Darwin’s Black Box I wrote, “The bacterial flagellum uses a paddling mechanism. Therefore it must meet the same requirements as other such swimming systems. Because the bacterial flagellum is necessarily composed of at least three parts—a paddle, a rotor, and a motor—it is irreducibly complex.” [2] A particular auxiliary component of the flagellar system, such as, say, a chaperone protein, may or may not be needed for the system to work under particular circumstances. However, if it is missing a necessary mechanical part, it simply cannot work.
That shouldn’t be controversial. In fact Snyder et al. (2009) avail themselves of the same reasoning when they write about a hook-basal-body complex recently discovered in Buchnera aphidicola, which “lacks … the gene for flagellin”. [1] They conclude it “must have some role other than motility.” Well, why must it have some role other than motility? Because, of course, it is missing the paddle, and therefore can’t work as a paddling propulsion system. In other words, in my sense of the term, it is irreducibly complex.
Snyder et al. (2009) think Buchnera’s derived structure “illuminates flagellar evolution by providing an example of what a simpler precursor of today’s flagellum might have looked like – a precursor dedicated solely to protein export rather than motility”. [1] I think that simplicity should be distrusted. The activity of a protein export system has no obvious connection to the activity of a rotary motor propulsion system. Thus the difficulty of accounting for the propulsive function of the flagellum and its irreducible complexity remains unaddressed. In regard to the flagellum’s evolution, Snyder et al.’s (2009) advice to distrust simplicity is sound and should be followed consistently. [1]
Michael J. Behe
Professor of Biological Sciences
Lehigh University
Bethlehem, PA 18015
References
1. Snyder, L.A., Loman, N.J., Futterer, K., and Pallen, M.J. (2009) Bacterial flagellar diversity and evolution: seek simplicity and distrust it?  Trends Microbiol. 17, 1-5.
2. Behe, M.J. (1996) Darwin’s Black Box: the Biochemical Challenge to Evolution. The Free Press, New York, p. 72.

2 April 2009

“The Old Enigma,” Part 3 of 3

Michael J Behe

Dear Readers,
This post continues directly from Part 2.
Second, the authors assume that, in the absence of phenotypic mutations, the first genotypic mutation would be strictly neutral. That is, the selection coefficient for the first mutation is very, very close to zero. It turns out that this is a critical feature. If the first mutation were slightly positive itself (without considering look-ahead) then it could be selected on its own, and the look-ahead effect makes little difference. On the other hand, if the first mutation is slightly negative (including look-ahead), then it will not be positively selected and, again, the effect makes essentially no difference. It is only in a very restricted range of selection coefficients that any significant influence will be seen.
A related point is the question: except for purposes of illustration, why should the look-ahead effect be conceptually separated from everything else that goes into the selection coefficient? Clearly any mutation can have many effects, from stabilizing (or destabilizing) the structure of a protein, to increasing (or decreasing) its interaction with other proteins, to favorably (or unfavorably) affecting the energy budget of a cell, and so on. All of these effects can influence whether the mutation is favorable overall or not, so why separate out look-ahead? If, considering all influences, a particular mutation is favorable because offspring with the mutation survive with higher probability, then that is represented by a positive selection coefficient; if unfavorable, a negative coefficient. It is dubious to subdivide survival due to a particular mutation into tiny parts.
Third, the look-ahead effect is manifestly a double-edged sword. Consider the sequence of the protein one mutation before it reached what we previously called the “unmutated state” — that is, the sequence of the protein that was fixed in the population right before it reached the sequence that was two mutations from the highly favorable form. We can call it “sequence minus one.” Now suppose a mutation appears in the DNA of one cell that would take us to the starting sequence (call it “sequence zero”) if it spread and became fixed in the population. The next mutation (call it “sequence plus one”) can appear in this individual cell as a phenotypic, look-ahead mutation. The final mutation (“sequence plus two”), which has the highly selectable feature, does not appear even as a phenotypic mutation in this cell. But now suppose that sequence plus one was not strictly neutral without look-ahead, but somewhat deleterious (as most protein mutations are). Then, because of the look-ahead effect, sequence zero will be selected against, and the probability that the population ever develops sequence zero will be much lower.
The take-home point is that, although looking ahead might help the final step a bit if the penultimate mutation is otherwise strictly neutral, the look-ahead effect will actively inhibit the development of a multimutation feature if one of the steps in a mutational pathway is somewhat deleterious. And the more deleterious it is, the more effectively the path is blocked. In a rugged adaptive landscape, the look-ahead effect is as likely to hurt as to help. In other words, it is a net of zero. So Darwinism remains great at “seeing” the immediately-next step, but it has no reliable power to see beyond.
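The asymmetry just described can be put in a few lines of arithmetic. This is only an illustrative sketch: the 10^-4 fraction of protein copies carrying a given phenotypic mutation follows the Whitehead et al. estimate discussed elsewhere in this series, and the selection coefficients below are hypothetical numbers chosen for illustration, not data from the paper.

```python
# Illustrative sketch of the "double-edged sword": a cell at "sequence zero"
# expresses roughly f_pheno of its protein copies as the next ("plus one")
# protein via phenotypic errors.  If that protein is beneficial, sequence
# zero gains a tiny look-ahead advantage; if it is deleterious, sequence
# zero is (weakly) selected against.  All numbers are hypothetical.
f_pheno = 1e-4  # assumed fraction of copies carrying the plus-one change

for s_plus_one in (0.1, 0.0, -0.1):    # fitness effect of the plus-one protein
    s_zero = f_pheno * s_plus_one      # look-ahead contribution to sequence zero
    verdict = ("favored" if s_zero > 0
               else "neutral" if s_zero == 0
               else "selected against")
    print(f"s(plus one) = {s_plus_one:+.1f}  ->  "
          f"look-ahead s(zero) = {s_zero:+.0e}  ({verdict})")
```

The sign of the look-ahead contribution simply tracks the sign of the plus-one mutation's effect, which is the point: the mechanism cuts against the pathway exactly when an intermediate step is deleterious.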
Finally and most importantly, recall the central message of The Edge of Evolution: The Search for the Limits of Darwinism: To have a good idea of what Darwinian evolution can do, we no longer need to rely solely on speculative models, which may overlook or misjudge aspects of biology that nature would encounter. We already have good data in hand. We already have results that should constrain models. Over many thousands of generations, astronomical numbers of malarial cells seem not to have been able to take advantage of the look-ahead effect or anything else to build new, coherent molecular machinery. All that’s been seen in that system in response to antimalarial drugs is a few point mutations. In tens of thousands of generations, with a cumulative population size in the trillions, no coherent new systems have been seen in the fascinating work of Richard Lenski on the laboratory evolution of E. coli. Instead, even beneficial mutations have turned out to be degradative ones, in which previously functioning genes are deleted or made less effective. And that’s the same result as has been seen in the human genome in response to selective pressure due to malaria — a number of degraded genes or regulatory elements, and no new machinery.
Theoretical models must be constrained by data. If models don’t reproduce what we do know happens in adaptive molecular evolution, then they are wholly unreliable in telling us anything about what we don’t know. Unless a model can also reproduce empirical results such as those cited just above, it should be regarded as fanciful.

1 April 2009

“The Old Enigma,” Part 2 of 3

Michael J Behe

Dear Readers,
This post continues directly from Part 1.
Koonin is clearly very impressed with the new paper, which he calls “brilliant” and “a genuinely important work that introduces a new and potentially major mechanism of evolution…” His enthusiasm is a good indication that the problem is a major one, and that no other papers exist which deal effectively with it.
So what is the paper (a theoretical, mathematical-modeling study) about?  When a mutationless gene is transcribed and translated into a protein, errors can creep in. It turns out that these error rates are much higher than for copying DNA. Using published mutation rates, Whitehead et al (2008) estimate that 1 in 10 standard-sized proteins will contain an error; that is, they will contain an amino acid that is not coded for by the gene. The authors call these “phenotypic mutations.” Inherited changes that occur in the DNA are called “genotypic mutations.” Now, the idea is this. Suppose an organism needs two mutations to acquire some new feature, such as a disulfide bond. Further suppose that a single organism in a population that initially has neither of the mutations acquires just one of the necessary mutations in its DNA. Because of phenotypic mutations, this single organism will also contain some copies of the protein that have the second mutation. If the selective benefit of these phenotypic mutations is proportional to their concentration, as the authors suppose, then that organism may have an advantage over other organisms with no mutations. In a sense, the authors say, evolution can look a step ahead, so the authors dub this the “look-ahead effect.” The reviewer Eugene Koonin agrees that the paper “in a sense, overturns the old adage of evolution having no foresight. It seems like, even if non-specifically and unwittingly, some foresight might be involved.”
 As the authors and one of the other referees note, this is pretty reminiscent of something called the “Baldwin effect”, which was first proposed in the 19th century. The authors contend that there are subtle differences between the Baldwin effect and the look-ahead effect. Yet, whoever deserves priority for the idea, I don’t think the look-ahead effect contributes much at all to solving the problem of multiple mutations. In my own opinion, the idea of the paper is certainly clever, but Koonin vastly overestimates its importance. It offers virtually no help in solving the “old enigma,” as I explain below and in Part 3.
First, the effect is quite minor at best. Since, based on transcriptional and translational mutation rates, the fraction of proteins with the correct phenotypic mutation is expected to be about one-hundredth of one percent (10^-4) of the total number of protein copies, the presumed selective effect will be only 10^-4 times the selective effect of the double genotypic mutant. So if the double genotypic mutant had a selective advantage of 0.1 (a pretty substantial value), the phenotypic look-ahead mutant would have an advantage of just 10^-5. If the double genotypic mutant has less of an advantage, the look-ahead has proportionately less. Because of this, the effect would be helpful only for large population sizes: in too small a population there is no effect, because the mutation is effectively neutral. One can construct situations in which the selective advantage of a particular double genotypic mutant would be enormous (for example, if it conferred antibiotic resistance), so the look-ahead effect would be greater, but positing the general occurrence of such situations in nature amounts to special pleading.
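The arithmetic can be made explicit. A minimal sketch, using the numbers above; note that the 1/(2N) rule of thumb for when selection is effectively invisible to a population is standard population genetics, not a calculation from Whitehead et al., and the population sizes are chosen purely for illustration.

```python
# Rough magnitude of the look-ahead advantage, using the numbers in the text.
f_pheno = 1e-4      # fraction of protein copies with the correct phenotypic mutation
s_double = 0.1      # assumed advantage of the true double genotypic mutant
s_lookahead = f_pheno * s_double   # about 1e-5

# Standard rule of thumb: when |s| is below roughly 1/(2N), genetic drift
# dominates and the mutation behaves as effectively neutral.
for N in (10**4, 10**6, 10**8):    # illustrative population sizes
    neutral = abs(s_lookahead) < 1.0 / (2 * N)
    print(f"N = {N:>9}: look-ahead s = {s_lookahead:.0e}, "
          f"effectively neutral: {neutral}")
```

On these assumptions the 10^-5 advantage is invisible to selection in a population of ten thousand but not in one of a million or more, which is why the effect could matter, if at all, only for very large populations.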
It’s also important to realize that the authors of the paper purposely did not consider mitigating factors in their analysis. As they wrote, “The goal of our analysis was to demonstrate that the look-ahead effect is theoretically possible, and as such, we intentionally excluded confounding factors for the sake of clarity.” Other possible important effects that weren’t considered in the model include the influence of the first genotypic mutation on the stability of the spectrum of proteins with phenotypic mutations, effects of the mutations on translation rates, and so on. It is certainly understandable to simplify a model as much as possible for an initial investigation. However, any confounding effects will only diminish the strength of an already-weak influence.

31 March 2009

“The Old Enigma,” Part 1 of 3

Michael J Behe

Dear Readers,
When The Edge of Evolution: The Search for the Limits of Darwinism was first published, some Darwinist reviewers sneered that the problem it focused on — the need for multiple mutations to form some protein features (such as binding sites), where intermediate mutations were deleterious — was a chimera. There were no such things, they essentially said. University of Wisconsin geneticist Sean Carroll, reviewing the book for Science, stressed examples where intermediate mutations were beneficial (I never said there weren’t such cases, and discussed several in the book). In the same vein, University of Chicago evolutionary biologist Jerry Coyne assured readers of The New Republic that “[i]n fact, interactions between proteins, like any complex interaction, were certainly built up step by mutational step … This process could have begun with weak protein-protein associations that were beneficial to the organism. These were then strengthened gradually…” The take-home message of the reviews for the public and for scientists in other fields was the same: Nothing to see here, folks. Move along. No problem here.
Contrast those assurances with a recent paper that addresses “the old enigma [my italics] of the evolution of complex features in proteins that require two or more mutations.” Those words (reminiscent of the title of my 2004 paper in Protein Science with David Snoke, “Simulating evolution by gene duplication of protein features that require multiple amino acid residues,” which is cited by the recent paper) were written by the prominent bioinformatician (and no friend of ID) Eugene Koonin in his review of the paper, “The look-ahead effect of phenotypic mutations” (Whitehead, D. J., et al. 2008, Biol. Direct 3:18). (Reviews are published along with papers on the Biology Direct web site.)
Old enigma? Old enigma? Who knew that evolving just a couple of interactive amino acid residues was a long-standing mystery? Someone should tell Carroll and Coyne….
I will discuss the specifics of the paper in Part 2. But let me first drive home this point. The development of protein features, such as protein-protein binding sites, that require the participation of multiple amino acid residues is a profound, fundamental problem that has stumped the evolutionary biology community until the present day (and continues to do so, as I explain below). It is a fundamental problem because all proteins exert their effects by physically binding to something else, such as a small metabolite or DNA or other protein, and require multiple residues to do so. The problem is especially acute for protein-protein interactions, since most proteins in the cell are now known to act as teams of a half-dozen or more, rather than individually. Yet if one can’t explain how specific protein-protein interactions developed, then it is delusional to claim that we can explain how anything that depends on them developed, such as the molecular machinery of the cell. It’s like saying “we understand perfectly well how a car could evolve; we just don’t know how the pieces could get fit together.” If such a basic requirement for putting together complex systems is not understood, nothing is understood. Keep this in mind the next time you hear a blithe Darwinian tale about the undirected evolution of the cilium or bacterial flagellum.

18 March 2009

Waiting Longer for Two Mutations, Part 5

Michael J Behe

Dear Readers,
An interesting paper appeared several months ago in an issue of the journal Genetics, “Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution” (Durrett, R & Schmidt, D. 2008. Genetics 180: 1501-1509). This is the fifth of five posts discussing it. Cited references appear in this post.
The final conceptual error that Durrett and Schmidt commit is the gratuitous multiplication of probabilistic resources. In their original paper they calculated that the appearance of a particular double mutation in humans would have an expected time of appearance of 216 million years, if one were considering a one kilobase region of the genome. Since the evolution of humans from other primates took much less time than that, Durrett and Schmidt observed that if the DNA “neighborhood” were a thousand times larger, then lots of correct regulatory sites would already be expected to be there. But, then, exactly what is the model? And if the relevant neighborhood is much larger, why did they model a smaller neighborhood? Is there some biological fact they neglected to cite that justified the thousand-fold expansion of what constitutes a “neighborhood,” or were they just trying to squeeze their results post-hoc into what a priori was thought to be a reasonable time frame?
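The arithmetic behind that thousand-fold expansion is worth making explicit. Here is a minimal sketch, assuming (as Durrett and Schmidt's adjustment implies) that the expected waiting time scales inversely with the size of the DNA "neighborhood"; the 216-million-year figure for a one-kilobase region is theirs, while the inverse-linear scaling rule is an illustrative assumption, not their full model:

```python
# Sketch of how an expected waiting time shrinks as the assumed
# "neighborhood" grows, using Durrett and Schmidt's reported figure of
# 216 million years for a one-kilobase region. The inverse-linear
# scaling is an illustrative assumption, not the paper's model.

BASE_WAIT_MYR = 216.0       # expected wait (millions of years) for 1 kb
BASE_NEIGHBORHOOD_KB = 1    # region size the 216-Myr figure refers to

def expected_wait(neighborhood_kb):
    """Expected wait (Myr) if acceptable targets scale with region size."""
    return BASE_WAIT_MYR * BASE_NEIGHBORHOOD_KB / neighborhood_kb

for kb in (1, 10, 100, 1000):
    print(f"{kb:>5} kb neighborhood -> ~{expected_wait(kb):g} Myr")
```

On these assumptions a thousand-fold larger neighborhood turns 216 million years into roughly 216,000 years, which is the kind of post-hoc rescue the paragraph above questions.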
When I pointed this out in my letter, Durrett and Schmidt did not address the problem. Rather, they upped the stakes. They write in their reply, “there are at least 20,000 genes in the human genome and for each gene tens if not hundreds of pairs of mutations that can occur in each one.” The implication is that there are very, very many ways to get two mutations. Well, if that were indeed the case, why did they model a situation where two particular mutations — not just any two — were needed? Why didn’t they model the situation where any two mutations in any of 20,000 genes would suffice? In fact, since that would give a very much shorter time span, why did the journal Genetics and the reviewers of the paper let them get away with such a miscalculation?
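To see why that rhetorical question has bite, here is a back-of-envelope sketch of how short the waiting time would become if any of the mutation pairs Durrett and Schmidt invoke would suffice. The 216-million-year figure is from their paper; the gene and per-gene pair counts are the round numbers suggested by their reply, and the inverse scaling is an illustrative assumption:

```python
# If ANY of many mutation pairs would do, the expected wait for the
# first success shrinks roughly in proportion to the number of
# acceptable pairs (an illustrative assumption, not the paper's model).

WAIT_ONE_PAIR_YEARS = 216e6   # the paper's figure for one particular pair
GENES = 20_000                # "at least 20,000 genes" (their reply)
PAIRS_PER_GENE = 100          # "tens if not hundreds of pairs" per gene

acceptable_pairs = GENES * PAIRS_PER_GENE
naive_wait_years = WAIT_ONE_PAIR_YEARS / acceptable_pairs

print(f"acceptable pairs: {acceptable_pairs:,}")
print(f"naive expected wait: {naive_wait_years:g} years")
```

The naive answer comes out to about a century, which is exactly why a model in which any two mutations suffice would be, as the paragraph above says, a miscalculation.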
The answer of course is that in almost any particular situation, almost all possible double mutations (and single mutations and triple mutations and so on) will be useless. Consider the chloroquine-resistance mutation in malaria. There are about 10^6 possible single amino acid mutations in malarial parasite proteins, and 10^12 possible double amino acid mutations (where the changes could be in any two proteins). Yet only a handful are known to be useful to the parasite in fending off the drug, and only one is very effective: the multiple changes in PfCRT. It would be silly to think that just any two mutations would help. The vast majority are completely ineffective. Nonetheless, it is a common conceptual mistake to naively multiply postulated “helpful mutations” when the numbers initially show too few.
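The counts quoted above are straightforward combinatorics. A quick sketch, taking the round figure of ~10^6 possible single amino acid changes as given from the text:

```python
from math import comb

# Back-of-envelope counts, taking the post's round figure of ~10^6
# possible single amino acid changes as given.
SINGLE_CHANGES = 10**6

# Ordered pairs of single changes give the ~10^12 figure quoted above;
# unordered pairs (ignoring which change comes first) give about half.
ordered_pairs = SINGLE_CHANGES ** 2
unordered_pairs = comb(SINGLE_CHANGES, 2)

print(f"ordered double mutations:   ~{ordered_pairs:.0e}")
print(f"unordered double mutations: ~{unordered_pairs:.0e}")
```

Either way of counting, the space of possible double mutations dwarfs the handful known to confer resistance.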
Here’s a final important point. Genetics is an excellent journal; its editors and reviewers are top notch; and Durrett and Schmidt themselves are fine researchers. Yet, as I show above, when simple mistakes in the application of their model to malaria are corrected, it agrees closely with empirical results reported from the field that I cited. This is very strong support that the central contention of The Edge of Evolution is correct: that it is an extremely difficult evolutionary task for multiple required mutations to occur through Darwinian means, especially if one of the mutations is deleterious. And, as I argue in the book, reasonable application of this point to the protein machinery of the cell makes it very unlikely that life developed through a Darwinian mechanism.
References
1. White, N. J. 2004. Antimalarial drug resistance. J. Clin. Invest. 113: 1084–1092.
2. Lynch, M. and Conery, J.S. 2000. The evolutionary fate and consequences of duplicate genes. Science 290: 1151–1155.

13 March 2009

Waiting Longer for Two Mutations, Part 4

Michael J Behe

An interesting paper appeared several months ago in an issue of the journal Genetics, “Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution” (Durrett, R & Schmidt, D. 2008. Genetics 180: 1501-1509). This is the fourth of five posts discussing it. Cited references will appear in the last post.
Now I’d like to turn to a couple of other points in Durrett and Schmidt’s reply that aren’t mistakes with their model but that do reflect conceptual errors. As I quoted in a previous post, they state in their reply, “This conclusion is simply wrong since it assumes that there is only one individual in the population with the first mutation.” I have shown previously that, despite their assertion, my conclusion is right. But where do they get the idea that “it assumes that there is only one individual in the population with the first mutation”? My letter said no such thing about “one individual.” Furthermore, I “assumed” nothing. I merely cited empirical results from the literature. The figure of 1 in 10^20 is a citation from the literature on chloroquine resistance of malaria. Unlike their model, it is not a calculation on my part.
Right after this, in their reply Durrett and Schmidt say that the “mistake” I made is a common one, and they go on to illustrate “my” mistake with an example about a lottery winner. Yet their own example shows they are seriously confused about what is going on. They write:
“When Evelyn Adams won the New Jersey lottery on October 23, 1985, and again on February 13, 1986, newspapers quoted odds of 17.1 trillion to 1. That assumes that the winning person and the two lottery dates are specified in advance, but at any point in time there is a population of individuals who have won the lottery and have a chance to win again, and there are many possible pairs of dates on which this event can happen…. The probability that it happens in one lottery in 1 year is ~1 in 200.”
No kidding. If one has millions of players, and any of the millions could win twice on any two dates, then the odds are certainly much better that somebody will win on some two dates than that Evelyn Adams would win on October 23, 1985 and February 13, 1986. But that has absolutely nothing to do with the question of changing a correct nucleotide to an incorrect one before changing an incorrect one to a correct one, which is the context in which this odd digression appears. What’s more, it is not the type of situation that Durrett and Schmidt themselves modeled. They asked the question: given a particular ten-base-pair regulatory sequence, and a particular sequence that is matched in nine of ten sites to the regulatory sequence, how long will it take to mutate the particular regulatory sequence, destroying it, and then mutate the particular near-match sequence to a perfect-match sequence? Nor is it the situation that pertains in chloroquine resistance in malaria. There, several particular amino acid residues in a particular protein (PfCRT) have to mutate to yield effective resistance. It seems to me that the lottery example must be a favorite of Durrett and Schmidt’s, and that they were determined to use it whether it fit the situation or not.
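The contrast between the two probabilities in the lottery example can be made concrete with a toy calculation. All the numbers below (the single-ticket odds, the pool of past winners, the number of drawings) are illustrative assumptions, not figures from Durrett and Schmidt's reply; only the shape of the comparison matters:

```python
# Toy contrast between a *specified* person winning on two *specified*
# dates and *some* past winner winning again in *some* drawing.
# All parameter values are illustrative assumptions.

P_WIN = 1 / 7e6          # assumed single-ticket odds in one drawing
PAST_WINNERS = 500       # assumed pool of prior winners still playing
DRAWINGS_PER_YEAR = 100  # assumed drawings in a year

# Specified person, two specified drawings:
p_specified = P_WIN ** 2

# Any past winner wins again in some drawing this year (approximation
# assuming independence and small probabilities):
p_someone = PAST_WINNERS * DRAWINGS_PER_YEAR * P_WIN

print(f"specified person, specified dates: ~1 in {1/p_specified:.3g}")
print(f"some past winner, some drawing:    ~1 in {1/p_someone:.3g}")
```

The first probability is astronomically small while the second is of order one-in-hundreds, which is the unremarkable point their example makes; but, as argued above, it is the first kind of probability, not the second, that their own model and the PfCRT data involve.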