Uncommon Descent

13 March 2009

Waiting Longer for Two Mutations, Part 4

Michael J. Behe

Dear Readers,
 
An interesting paper appeared several months ago in an issue of the journal Genetics, “Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution” (Durrett, R. & Schmidt, D. 2008. Genetics 180:1501-1509). This is the fourth of five posts discussing it. Cited references will appear in the last post.
 
Now I’d like to turn to a couple of other points in Durrett and Schmidt’s reply which aren’t mistakes in their model, but which do reflect conceptual errors. As I quoted in a previous post, they state in their reply, “This conclusion is simply wrong since it assumes that there is only one individual in the population with the first mutation.” I have shown previously that, despite their assertion, my conclusion is right. But where do they get the idea that “it assumes that there is only one individual in the population with the first mutation”? I wrote no such thing about “one individual” in my letter. Furthermore, I “assumed” nothing. I merely cited empirical results from the literature. The figure of 1 in 10^20 is a citation from the literature on chloroquine resistance of malaria. Unlike their model, it is not a calculation on my part.
 
Right after this, in their reply Durrett and Schmidt say that the “mistake” I made is a common one, and they go on to illustrate “my” mistake with an example about a lottery winner. Yet their own example shows they are seriously confused about what is going on. They write:
 
“When Evelyn Adams won the New Jersey lottery on October 23, 1985, and again on February 13, 1986, newspapers quoted odds of 17.1 trillion to 1. That assumes that the winning person and the two lottery dates are specified in advance, but at any point in time there is a population of individuals who have won the lottery and have a chance to win again, and there are many possible pairs of dates on which this event can happen…. The probability that it happens in one lottery in 1 year is ~1 in 200.”
No kidding. If one has millions of players, and any of the millions could win twice on any two dates, then the odds are certainly much better that somebody will win on some two dates than that Evelyn Adams would win on October 23, 1985 and February 13, 1986 specifically. But that has absolutely nothing to do with the question of changing a correct nucleotide to an incorrect one before changing an incorrect one to a correct one, which is the context in which this odd digression appears. What’s more, it is not the type of situation that Durrett and Schmidt themselves modeled. They asked the question: given a particular ten-base-pair regulatory sequence, and a particular sequence that matches the regulatory sequence at nine of ten sites, how long will it take to mutate the particular regulatory sequence, destroying it, and then mutate the particular near-match sequence to a perfect match? What’s even more, it is not the situation that pertains in chloroquine resistance in malaria. There, several particular amino acid residues in a particular protein (PfCRT) have to mutate to yield effective resistance. It seems to me that the lottery example must be a favorite of Durrett and Schmidt’s, and that they were determined to use it whether it fit the situation or not.

11 March 2009

Waiting Longer for Two Mutations, Part 3

Michael J. Behe

Dear Readers,

An interesting paper appeared several months ago in an issue of the journal Genetics, “Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution” (Durrett, R. & Schmidt, D. 2008. Genetics 180:1501-1509). This is the third of five posts discussing it. Cited references will appear in the last post.

The third problem also concerns the biology of the system. I’m at a bit of a loss here, because the problem is not hard to see, and yet in their reply they stoutly deny the mistake. In fact, they confidently assert it is I who am mistaken. I had written in my letter, “… their model is incomplete on its own terms because it does not take into account the probability of one of the nine matching nucleotides in the region that is envisioned to become the new transcription-factor-binding site mutating to an incorrect nucleotide before the 10th mismatched codon mutates to the correct one.” They retort, “This conclusion is simply wrong since it assumes that there is only one individual in the population with the first mutation.” That’s incorrect. Let me explain the problem in more detail.

Consider a string of ten digits, either 0 or 1. We start with a string that has nine 1’s and just one 0. We want to convert the single 0 to a 1 without switching any of the 1’s to a 0. Suppose that the switch rate for each digit is one per hundred copies of the string. That is, we copy the string repeatedly, and, if we focus on a particular digit, about every hundredth copy or so that digit has changed. Okay, now cover all of the digits of the string except the 0, and let a random, automated procedure copy the string, with a digit-mutation rate of one in a hundred. After, say, 79 copies, we see that the visible 0 has just changed to a 1. Now we uncover the rest of the digits. What is the likelihood that one of them has changed in the meantime? Since all the digits have the same mutation rate, there is a nine in ten chance that one of the other digits has already changed from a 1 to a 0, and our mutated string still does not match the target of all 1’s. In fact, only about one time out of ten will we uncover the string and find that no digit other than the visible one has changed. Thus the effective mutation rate for transforming the string with nine matches out of ten to a string with ten matches out of ten will be only one tenth of the basic digit-mutation rate. If the string is a hundred digits long, the effective mutation rate will be one-hundredth the basic rate, and so on. (This is very similar to the problem of mutating a duplicate gene to a new selectable function before it suffers a degradative mutation, which has been investigated by Lynch and co-workers. (2))
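The effective-rate argument above is easy to check numerically. The sketch below is my own illustration, not part of my letter or of Durrett and Schmidt’s paper: for each digit it draws the copy number on which that digit first mutates, then asks how often the lone mismatched digit changes strictly before any of the nine matching ones. All names and parameters here are mine.

```python
import random

def focal_mutates_first(n_digits=10, rate=0.01, rng=random):
    """One trial: draw, for each digit, the copy number on which it
    first mutates (geometric, per-copy probability `rate`), and report
    whether the single mismatched digit mutates strictly before any
    of the other nine."""
    def first_mutation_copy():
        copies = 1
        while rng.random() >= rate:
            copies += 1
        return copies

    focal = first_mutation_copy()
    others = [first_mutation_copy() for _ in range(n_digits - 1)]
    return all(focal < t for t in others)

random.seed(1)
trials = 5000
frac = sum(focal_mutates_first() for _ in range(trials)) / trials
print(round(frac, 3))  # roughly 0.1: only about one string in ten gains
                       # the tenth match before losing one of the nine
```

Running the same sketch with `n_digits=100` drops the fraction to roughly one in a hundred, matching the scaling claimed above.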

So, despite their self-assured tone, in fact on this point Durrett and Schmidt are “simply wrong.” And, as I write in my letter, since the gene for the chloroquine resistance protein has on the order of a thousand nucleotides, rather than just the ten of Durrett and Schmidt’s postulated regulatory sequence, the effective rate for the second mutation is several orders of magnitude less than they thought. Thus with the, say, two orders of magnitude mistake here, the factor of 30 error for the initial mutation rate, and the four orders of magnitude for mistakenly using a neutral model instead of a deleterious model, Durrett and Schmidt’s calculation is a cumulative seven and a half orders of magnitude off. Since they had pointed out that their calculation was about five million-fold (about six and a half orders of magnitude) lower than the empirical result I cited, when their errors are corrected the calculation agrees pretty well with the empirical data.
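The bookkeeping in the paragraph above can be tallied in a few lines. This is my own arithmetic sketch, using the round-number correction factors argued for in these posts:

```python
import math

# Correction factors argued for in these posts (round numbers):
factor_rate   = 30     # point mutation rate, not ten times it (Part 2)
factor_model  = 1e4    # deleterious, not neutral, first mutation (Part 2)
factor_length = 1e2    # ~1,000-nucleotide gene vs. a 10-base site (Part 3)

total_oom = math.log10(factor_rate * factor_model * factor_length)
print(round(total_oom, 1))  # 7.5, the cumulative seven and a half
                            # orders of magnitude cited above
```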

10 March 2009

Waiting Longer for Two Mutations, Part 2

Michael J. Behe

Dear Readers,
 
An interesting paper appeared several months ago in an issue of the journal Genetics, “Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution” (Durrett, R. & Schmidt, D. 2008. Genetics 180:1501-1509). This is the second of five posts discussing it. Cited references will appear in the last post.
 
 Interesting as it is, there are some pretty serious problems in the way they applied their model to my arguments, some of which they owned up to in their reply, and some of which they didn’t. When the problems are fixed, however, the resulting number is remarkably close to the empirical value of 1 in 10^20. I will go through the difficulties in turn.

The first problem was a simple oversight. They were modeling the mutation of a ten-nucleotide-long binding site for a regulatory protein in DNA, so they used a value for the mutation rate that was ten times larger than the point mutation rate. However, in the chloroquine-resistance protein discussed in The Edge of Evolution, since particular amino acids have to be changed, the correct rate to use is the point mutation rate. That leads to an underestimate by a factor of about 30 when applying their model to the protein. As they wrote in their reply, “Behe is right on this point.” I appreciate their agreement here.
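Where does a tenfold error in the mutation rate become a factor of about 30? If, as described in the first post of this series (below), the waiting time in their model scales as one over the mutation rate times its square root, then a tenfold overestimate of the rate compounds to 10^1.5. That scaling is my own reading of where the 30 comes from; here is the one-line check:

```python
# A tenfold error in the mutation rate, fed through a waiting time
# that scales as 1 / (mu * sqrt(mu)), compounds to 10^1.5:
error = 10 ** 1.5
print(round(error, 1))  # 31.6, i.e. the "factor of about 30"
```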
 
The second problem has to do with their choice of model. In their original paper they actually developed models for two situations — one in which the first mutation is neutral, and one in which it is deleterious. When they applied their model to the chloroquine-resistance protein, they unfortunately decided to use the neutral version. However, it is very likely that the first protein mutation is deleterious. As I wrote discussing a hypothetical case in Chapter 6 of The Edge:
“Suppose, however, that the first mutation wasn’t a net plus; it was harmful. Only when both mutations occurred together was it beneficial. Then on average a person born with the mutation would leave fewer offspring than otherwise. The mutation would not increase in the population, and evolution would have to skip a step for it to take hold, because nature would need both necessary mutations at once…. The Darwinian magic works well only when intermediate steps are each better (‘more fit’) than preceding steps, so that the mutant gene increases in number in the population as natural selection favors the offspring of people who have it. Yet its usefulness quickly declines when intermediate steps are worse than earlier steps, and is pretty much worthless if several required intervening steps aren’t improvements.”
If the first mutation is indeed deleterious, then the model that Durrett and Schmidt (2008) applied to the chloroquine-resistance protein is wrong. In fact, if the parasite with the first mutation is only 10% as fit as the unmutated parasite, then the population-spreading effect they calculate for neutral mutations is pretty much eliminated, as their own model for deleterious mutations shows. What do the authors say in their response about this possibility? “We leave it to biologists to debate whether the first PfCRT mutation is that strongly deleterious.” In other words, they don’t know; it is outside their interest as mathematicians. (Again, I appreciate their candor in saying so.) Assuming the first mutation is seriously deleterious, their calculation is off by a factor of 10^4. In conjunction with the first mistake of 30-fold, their calculation so far is off by five and a half orders of magnitude.

9 March 2009

Waiting Longer for Two Mutations, Part 1

Michael J. Behe

 
Dear Readers,
 
An interesting paper appeared several months ago in an issue of the journal Genetics, “Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution” (Durrett, R. & Schmidt, D. 2008. Genetics 180:1501-1509). This is the first of five posts discussing it. Cited references will appear in the last post.
 
As the title implies, it concerns the time one would have to wait for Darwinian processes to produce some helpful biological feature (here, regulatory sequences in DNA) if two mutations are required instead of just one. It is a theoretical paper, which uses models, math, and computer simulations to reach conclusions, rather than empirical data from field or lab experiments, as The Edge does. The authors declare in the abstract of their manuscript that they aim “to expose flaws in some of Michael Behe’s arguments concerning mathematical limits to Darwinian evolution.” Unsurprisingly (bless their hearts), they pretty much do the exact opposite. Since the journal Genetics publishes letters to the editors (most journals don’t), I sent a reply to the journal. I waited until my reply, and a response from the authors, had been published in Genetics before posting about it here on my blog. The original paper by Durrett and Schmidt can be found here, my response here, and their reply here.
 
 In their paper (as I write in my reply), “They develop a population genetics model to estimate the waiting time for the occurrence of two mutations, one of which is premised to damage an existing transcription-factor-binding site, and the other of which creates a second, new binding site within the nearby region from a sequence that is already a near match with a binding site sequence (for example, 9 of 10 nucleotides already match).”
 
The most novel point of their model is that, under some conditions, the number of organisms needed to get two mutations is proportional not to the inverse of the square of the point mutation rate (as it would be if both mutations had to appear simultaneously in the same organism), but to the inverse of the point mutation rate times the square root of the point mutation rate (because the first mutation would spread in the population before the second appeared, increasing the odds of getting a double mutation). To see what that means, consider that the point mutation rate is roughly one in a hundred million (1 in 10^8). So if two specific mutations had to occur at once, that would be an event of likelihood about 1 in 10^16. On the other hand, under some conditions they modeled, the likelihood would be about 1 in 10^12, ten thousand times more likely than the first situation. Durrett and Schmidt (2008) compare the number they got in their model to my literature citation (1) that the probability of the development of chloroquine resistance in the malarial parasite is an event of order 1 in 10^20, and they remark that it “is 5 million times larger than the calculation we have just given.” The implied conclusion is that I have greatly overstated the difficulty of getting two necessary mutations. In the next several posts I will show that they are incorrect.
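To make that comparison concrete, here is a minimal sketch of my own, using the round numbers from the paragraph above:

```python
mu = 1e-8  # rough point mutation rate, per site per organism

# Both mutations arising at once in the same organism:
simultaneous = 1 / mu**2    # about 1 in 10^16

# First mutation spreads before the second appears, the regime
# Durrett and Schmidt model: 1 / (mu * sqrt(mu))
sequential = 1 / mu**1.5    # about 1 in 10^12

print(round(simultaneous / sequential))  # 10000: ten thousand times
                                         # more likely, as stated above
```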

25 February 2009

Hog’s tail or bacon? Jerry Coyne in The New Republic

Michael J. Behe

Dear Readers,
In a long book review in The New Republic, University of Chicago evolutionary biologist Jerry Coyne calls Brown University cell biologist Ken Miller a creationist. No surprise there — “creationist” has a lot of negative emotional resonance in many intellectual circles, so it makes a fellow’s rhetorical task a lot easier if he can tag his intellectual opponent with the label. (Kind of like calling someone a “communist” back in the 1950s.) No need for the hapless “creationist” to be a Biblical literalist, or to believe in a young earth, or to be politically or socially conservative, or to have any other attribute the general public thinks of when they hear the “C” word. For Coyne, one just has to think that there may be a God who has somehow affected nature.

In fact Miller thinks that God set up the general laws of the universe in the knowledge that some intelligent species would emerge over time, to commune with Him. (To be convinced of the existence of God, the skeptical Coyne wants a nine-hundred-foot-tall Jesus to appear to the residents of New York City, or something equally dramatic. Some of us view the genetic code and the intricate molecular machinery of life as rather more spectacular than a supersized apparition.) Miller, however, is the poster boy for a religious scientist who thinks God would never abrogate the laws of nature (at least during the general development of the universe — special events in religious history are another matter). Calling Miller a creationist is stretching the word far out of shape, to make it simply a synonym for “theist,” with the advantage to Coyne that the associated rhetorical baggage is dumped on Miller.

I would feel more sympathy for Miller except for the fact that he pulls the same trick when it suits his purpose. As Jerry Coyne notes, “One of Miller’s keenest insights is that ID involves not just design but also supernatural creation.” So even though proponents of ID such as (ahem) me explicitly deny the necessity of supernatural creation to ID, and go to great lengths to explain the difference between a scientific conclusion of design and a theological conclusion of creation, that’s ignored, and both Coyne and Miller paint ID with the creationist brush, the better to dismiss it.
My purpose here, however, is not to either attack or defend Miller, who can fend for himself. Rather, it is to point out a rather large confusion in Jerry Coyne’s thinking. He writes:

“What is surprising in all this is how close many creationists have come to Darwinism. Important advocates of ID such as Michael Behe, a professor at Lehigh University (and a witness for the defense in the Harrisburg case), accept that the earth is billions of years old, that evolution has occurred — some of it caused by natural selection — and that many species share common ancestors. In Behe’s view, God’s role in the development of life could merely have been as the Maker of Mutations, tweaking DNA sequences when necessary to fuel the appearance of new mutations and species. In effect, Behe has bought all but the tail of the Darwinian hog.”

But if I have accepted everything but the teeny-weeny tail of Darwinism, why does Coyne get so upset with me (see his earlier review of The Edge of Evolution, also in The New Republic)? You’d think that if Jerry Coyne and I agreed on 99% of the important things, he’d be a bit friendlier. And why does he pummel Miller as a “creationist” for accepting everything but the very furthest tip of the hog’s tail? In fact, why does Coyne use the same epithet for Miller as for someone who thinks the world was created in a puff of smoke six thousand years ago?

Because of course, contra Jerry Coyne, the question of purposeful design versus no design is hardly peripheral — it is central. The existence of intentional design is not the hog’s tail; it is the bacon. A real “Maker of Mutations” or “Mover of Electrons” (as Coyne derisively designates Miller’s view of God) cuts the heart out of Darwinism. As many people besides myself have pointed out over scores of years, Darwin’s claim to fame was not to propose “evolution,” teleological versions of which had been proposed by others before him. Rather, Darwin’s contribution was to propose an apparently ateleological mechanism for evolution — random mutation and natural selection. Frankly, it is astounding that Jerry Coyne gets so confused about the significance of Darwin’s theory.

15 January 2009

Miller vs. Luskin, Part 2

Michael J. Behe

Dear Readers,

At the end of his first post squabbling with Discovery Institute’s Casey Luskin, Brown University Professor Kenneth Miller refers to some great new work by UC San Diego Professor and National Academy of Sciences member Russell Doolittle. Doolittle, of course, has worked on the blood clotting cascade for about fifty years! (I discussed some of his work in Chapter 4 of Darwin’s Black Box.) In a new paper Doolittle and co-workers analyze DNA sequence data from a primitive vertebrate, the lamprey, thinking that it might have a simpler clotting cascade than higher vertebrates. (1) It is difficult work, because the sequences of lamprey proteins — even ones that are indeed homologous to the proteins of other vertebrates — have diverged significantly from, say, mammalian proteins.


They argue that most of the core clotting cascade proteins are present, but two seem to be absent: lamprey has single proteins that act as Factor V/VIII (proaccelerin/anti-hemophilic factor) and Factor IX/X (Christmas factor/Stuart factor). The authors then infer that either gene or genome duplication led to separation of the factors. Although it’s interesting work, Doolittle’s conclusions are only suggestive (and the authors clearly say that the data are only suggestive). They found four copies of genes that are similar to Factors V/VIII, as well as to the non-clotting proteins ceruloplasmin and hephaestin (Figure 2 in their paper). They argue that only one is a real blood clotting factor and the other three aren’t, but the arguments are pretty tentative. The same for Factors IX/X. The authors identify two “Factor X” genes. Might one of those be acting as a Factor IX gene? At the conclusion of the paper the authors say they may try to support their arguments with biochemical experiments. I’m looking forward to reading the results of those.

Whether or not their conclusion is correct, however, as far as the argument for intelligent design is concerned the only relevant part of Doolittle’s paper is Figure 10, which purports to show the clotting pathway in lamprey vs. other vertebrates. (Intelligent design is wholly compatible with common descent — including descent by gene duplication/rearrangement. Rather, ID argues against the Darwinian claim that complex, functional molecular systems could be built by a random, unguided process.) Yet to get from one arrangement to the other would take multiple steps, not just one: whole genome duplication, retargeting of Factor IX, retargeting of Factor VIII, and so on. (The problems are essentially the same, as I pointed out in an essay in 2000 entitled “In Defense of the Irreducibility of the Blood Clotting Cascade,” posted on the Discovery Institute website.) So even if the suggested events occurred, they were extremely unlikely to have occurred by a Darwinian mechanism of random mutation/natural selection (the authors make no argument for a Darwinian mechanism). Guided, yes. Random, no.

It’s pertinent to remember here the central point of The Edge of Evolution. We now have data in hand that show what Darwinian processes can accomplish, and it ain’t much. We no longer have to rely on speculative scenarios that overlook barriers and problems that nature would encounter. Random mutation/natural selection works great in folks’ imaginations, but it’s a bust in the real world.

1. Doolittle, R.F., Jiang, Y., and Nand, J. 2008. Genomic evidence for a simpler clotting scheme in jawless vertebrates. J. Mol. Evol. 66:185-196.

12 January 2009

Miller vs. Luskin, Part 1

Michael J. Behe

Dear Readers,

Brown University Professor Kenneth Miller has gotten into a little tiff with Discovery Institute’s Casey Luskin over what I said/meant about the blood clotting cascade in Darwin’s Black Box.  This is the first of two posts commenting on that.

In Chapter 4 of Darwin’s Black Box I first described the clotting cascade and then, in a section called “Similarities and Differences”, analyzed it in terms of irreducible complexity. Near the beginning of that part I had written, “Leaving aside the system before the fork in the pathway, where details are less well known, the blood clotting system fits the definition of irreducible complexity…  The components of the system (beyond the fork in the pathway) are fibrinogen, prothrombin, Stuart factor, and proaccelerin.” Casey Luskin concludes that from that point on I was focusing my argument on the system beyond the fork in the pathway, containing those components I named. That is a reasonable conclusion because, well, because that’s what I said I was doing, and Mr. Luskin can comprehend the English language.

Apparently Prof. Miller can’t. He breathlessly reports that, one page after I had qualified my argument, I wrote, “Since each step necessarily requires several parts, not only is the entire blood-clotting system irreducibly complex, but so is each step in the pathway,” and he asserts that this meant I had inexplicably switched back to considering the whole cascade, including the initial steps. It seems not to have occurred to Miller that that sentence should be read in the context of the previous page, so he focuses on the components before the fork, the better to construct a strawman to knock down. In fact, in the section containing the second quote (“Since each step…”) I was arguing about the difficulty of inserting a new step into the middle of a generic, pre-existing cascade (“One could imagine a blood clotting system that was somewhat simpler than the real one—where, say, Stuart factor, after activation by the rest of the cascade, directly cuts fibrinogen to form fibrin, bypassing thrombin”), and likened it to inserting a lock in a ship canal. It could be done if an intelligent agent were directing it, but it would be really difficult to do by chance/selection. All that seems to have passed Miller by.

In philosophy there is something called the “principle of charitable reading.” In a nutshell it means that one should construe an author’s argument in the best way possible, so that the argument is engaged in its strongest form. Unfortunately, in my experience Miller does the opposite — call it the “principle of malicious reading.” He ignores (or doesn’t comprehend) context, ignores (or doesn’t comprehend) the distinctions an author makes, and construes the argument in the worst way possible. (See my previous posts on July 11-12, 2007 about Miller’s tendentious review of The Edge of Evolution.)

Good salesmanship. Bad scholarship.

16 June 2008

Once More With Feeling

Michael J. Behe

Dear Readers,

Kenneth R. Miller, a professor of biology at Brown University, has written a new book, Only a Theory: Evolution and the Battle for America’s Soul, in which he defends Darwinism, attacks intelligent design, and makes a case for theistic evolution (defined as something like “God used Darwinian evolution to make life”). In all this, it’s pretty much a re-run of his previous book, published over a decade ago, Finding Darwin’s God: A Scientist’s Search for Common Ground between God and Evolution. So if you read that book, you’ll have a very good idea of what 90% of the new book concerns. For people who think that a mousetrap is not irreducibly complex because parts of it can be used as a paperweight or tie clip, and so would be easy to evolve by chance, Miller is their man. Despite the doubts of many — perhaps most — evolutionary biologists about the power of the Darwinian mechanism, to Miller’s easy imagination evolving any complex system by chance plus selection is a piece of cake, and intermediates are to be found behind every door. A purer devotee of Darwinian wishful thinking would be hard to find.

A few events of the last ten years seem to have caught his attention. He discusses The Edge of Evolution for several pages, reprising his superficial review for Nature that I critiqued on this site last year. At a number of points he lovingly quotes Dover trial Judge John Jones, either not recognizing or purposely ignoring the fact that Jones’ opinion was pretty much copied word for word from a document given to him by the plaintiff’s attorneys; there’s no evidence that Jones comprehended any of the expert testimony at the trial — even Miller’s own testimony. Miller even quotes the passage from “Jones”’ opinion which blatantly mischaracterized my testimony, placing in my mouth words that the plaintiff’s attorney had actually spoken. But even that has been gone over many times; if you read the newspaper and some blogs, all this is very old hat.

The theistic evolution argument is the same, too. (I have nothing against theistic evolution — I used to agree with it — except that now I think it doesn’t fit the data.) We live in a finely tuned universe, so that points to God. Miller pointedly denies that that is a scientific argument, but it’s hard to see why not. How many other theological or philosophical arguments depend on the exact values of physical constants — to many significant figures — such as the charge on the electron, the strength of gravity, and so on? Reasoning based on quantitative, precise measurements of nature is science. Ironically, Miller is an intelligent design proponent when it comes to cosmology, but is contemptuous of people who see design extending further into nature than he does.

The only “new” argument in the book is Miller’s complaint that his intellectual opponents are threatening America and civilization, and so must be stopped for the good of the country. (Now, how many times have you heard a politician or special pleader use that line?) America is a science-based society, you see, so we should all bow when the National Academy of Sciences speaks — anything less is un-American.
Well, it seems to me that a country which places control of the military in civilian hands is a country which recognizes that experts, like other people, can be blinded by their biases. If control of the military is too important to be left to the experts, control of education is, too. Even to experts who are as sure of themselves as Kenneth Miller is.

6 June 2008

Multiple Mutations Needed for E. Coli

Michael J. Behe

Dear Readers,
An interesting paper has just appeared in the Proceedings of the National Academy of Sciences, “Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli.” (1) It is the “inaugural article” of Richard Lenski, who was recently elected to the National Academy. Lenski, of course, is well known for conducting the longest, most detailed “lab evolution” experiment in history, growing the bacterium E. coli continuously for about twenty years in his Michigan State lab. For the fast-growing bug, that’s over 40,000 generations!
I discuss Lenski’s fascinating work in Chapter 7 of The Edge of Evolution, pointing out that all of the beneficial mutations identified from the studies so far seem to have been degradative ones, where functioning genes are knocked out or rendered less active. So random mutation much more easily breaks genes than builds them, even when it helps an organism to survive. That’s a very important point. A process which breaks genes so easily is not one that is going to build up complex coherent molecular systems of many proteins, which fill the cell.
In his new paper Lenski reports that, after 30,000 generations, one of his lines of cells has developed the ability to utilize citrate as a food source in the presence of oxygen. (E. coli in the wild can’t do that.) Now, wild E. coli already has a number of enzymes that normally use citrate and can digest it (it’s not some exotic chemical the bacterium has never seen before). However, the wild bacterium lacks an enzyme called a “citrate permease” which can transport citrate from outside the cell through the cell’s membrane into its interior. So all the bacterium needed to do to use citrate was to find a way to get it into the cell. The rest of the machinery for its metabolism was already there. As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions.” (1)
Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn’t tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. coli to grow on citrate in the presence of oxygen. It seems likely that Lenski’s mutant will turn out to be either this gene or another of the bacterium’s citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn’t yet tracked down the mutation.)
The major point Lenski emphasizes in the paper is the historical contingency of the new ability. It took trillions of cells and 30,000 generations to develop it, and only one of a dozen lines of cells did so. What’s more, Lenski carefully went back to cells from the same line that he had frozen away after they had evolved for fewer generations and showed that, for the most part, only cells that had evolved at least 20,000 generations could give rise to the citrate-using mutation. From this he deduced that a previous, lucky mutation had arisen in the one line, a mutation which was needed before a second mutation could give rise to the new ability. The other lines of cells hadn’t acquired the first, necessary, lucky, “potentiating” (1) mutation, so they couldn’t go on to develop the second mutation that allows citrate use. Lenski argues this supports the view of the late Stephen Jay Gould that evolution is quirky and full of contingency. Chance mutations can push the path of evolution one way or another, and if the “tape of life” on earth were re-wound, it’s very likely evolution would take a completely different path than it has.
I think the results fit a lot more easily into the viewpoint of The Edge of Evolution. One of the major points of the book was that if only one mutation is needed to confer some ability, then Darwinian evolution has little problem finding it. But if more than one is needed, the probability of getting all the right ones grows exponentially worse. “If two mutations have to occur before there is a net beneficial effect — if an intermediate state is harmful, or less fit than the starting state — then there is already a big evolutionary problem.” (4) And what if more than two are needed? The task quickly gets out of reach of random mutation.
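The multiplicative penalty described above can be sketched numerically. If each required mutation arises independently at some small per-organism probability, the joint chance is the product of the individual chances; the round-number rates below are illustrative assumptions, not measured values:

```python
# Illustrative sketch: the chance that all required mutations occur
# together is the product of the individual per-mutation probabilities.
# The 1-in-10^10 per-mutation rate below is an assumed round number.

def joint_probability(per_mutation_rates):
    """Probability that every required mutation occurs together."""
    p = 1.0
    for rate in per_mutation_rates:
        p *= rate
    return p

one = joint_probability([1e-10])          # one mutation needed
two = joint_probability([1e-10, 1e-10])   # two mutations needed

print(f"one mutation:  1 in {1 / one:.0e}")   # 1 in 1e+10
print(f"two mutations: 1 in {1 / two:.0e}")   # 1 in 1e+20
```

Under these assumed rates, requiring a second mutation turns a 1-in-10^10 event into a 1-in-10^20 event, which is the order of magnitude cited in the literature for chloroquine resistance in malaria.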
To get a feel for the clumsy ineffectiveness of random mutation and selection, consider that the workers in Lenski’s lab had routinely been growing E. coli all these years in a soup that contained a small amount of the sugar glucose (which they digest easily), plus about ten times as much citrate. Like so many cellular versions of Tantalus, for tens of thousands of generations trillions of cells were bathed in a solution with an abundance of food — citrate — that was just beyond their reach, outside the cell. Instead of using the unreachable food, however, the cells were condemned to starve after metabolizing the tiny bit of glucose in the medium — until an improbable series of mutations apparently occurred. As Lenski and co-workers observe: (1)
Such a low rate suggests that the final mutation to Cit+ is not a point mutation but instead involves some rarer class of mutation or perhaps multiple mutations. The possibility of multiple mutations is especially relevant, given our evidence that the emergence of Cit+ colonies on MC plates involved events both during the growth of cultures before plating and during prolonged incubation on the plates.
In The Edge of Evolution I had argued that the extreme rarity of the development of chloroquine resistance in malaria was likely the result of the need for several mutations to occur before the trait appeared. Even though the evolutionary literature contains discussions of multiple mutations (5), Darwinian reviewers drew back in horror, acted as if I had blasphemed, and argued desperately that a series of single beneficial mutations certainly could do the trick. Now here we have Richard Lenski affirming that the evolution of some pretty simple cellular features likely requires multiple mutations.
If the development of many of the features of the cell required multiple mutations during the course of evolution, then the cell is beyond Darwinian explanation. I show in The Edge of Evolution that it is very reasonable to conclude that they did.
References
1. Blount, Z.D., Borland, C.Z., and Lenski, R.E. 2008. Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli. Proc. Natl. Acad. Sci. U.S.A. 105:7899-7906.
2. Hall, B.G. 1982. Chromosomal mutation for citrate utilization by Escherichia coli K-12. J. Bacteriol. 151:269-273.
3. Pos, K.M., Dimroth, P., and Bott, M. 1998. The Escherichia coli citrate carrier CitT: a member of a novel eubacterial transporter family related to the 2-oxoglutarate/malate translocator from spinach chloroplasts. J. Bacteriol. 180:4160-4165.
4. Behe, M.J. 2007. The Edge of Evolution: the search for the limits of Darwinism. Free Press: New York, p. 106.
5. Orr, H.A. 2003. A minimum on the mean number of steps taken in adaptive walks. J. Theor. Biol. 220:241-247.

9 May 2008

Malaria and Mutations

Michael J. Behe

Dear Readers,

An interesting paper appeared recently in the New England Journal of Medicine. (1) The workers there discovered some new mutations which confer some resistance to malaria on human blood cells in the lab. (Their usefulness in nature has not yet been nailed down.) The relevance to my analysis in The Edge of Evolution is that, like other mutations that help with malaria, these mutations, too, are ones which degrade the function of a normally very useful protein, called pyruvate kinase. As the workers note:

“[H]eterozygosity for partial or complete loss-of-function alleles . . . may have little negative effect on overall fitness (including transmission of mutant alleles), while providing a modest but significant protective effect against malaria. Although speculative, this situation would be similar to that proposed for hemoglobinopathies (sickle cell and both α-thalassemia and β-thalassemia) and G6PD deficiency. . .”

This conclusion supports several strong themes of The Edge of Evolution which reviewers have shied away from. First, that even beneficial mutations are very often degradative mutations. Second, that it’s a lot faster to get a beneficial effect (if one is available to be had) by degrading a gene than by making specific changes in genes. The reason is that there are generally hundreds or thousands of ways to break a gene, but just a few to alter it beneficially without degrading it. And third, that random mutation plus natural selection is incoherent. That is, separate mutations are often scattered; they do not add up in a systematic way to give new, interacting molecular machinery.
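The break-versus-improve asymmetry can be put in rough numbers. For a gene of n bases, each position can mutate to 3 alternative bases, giving 3n possible single-point substitutions, a large fraction of which will degrade the protein; the count of beneficial-without-degrading changes below is a hypothetical figure for illustration only:

```python
# Rough sketch of the asymmetry: a gene of length n bases admits 3*n
# possible single-point substitutions. The number of changes that are
# beneficial *without* degrading the gene is assumed here to be a small
# handful -- an illustrative assumption, not a measured figure.

gene_length = 1500                  # an assumed ~500-codon bacterial gene
possible_substitutions = 3 * gene_length
assumed_beneficial = 5              # hypothetical count for illustration

print(possible_substitutions)                       # 4500
print(possible_substitutions // assumed_beneficial) # 900
```

On these assumptions, random mutation samples hundreds of gene-breaking changes for every constructive one, which is why a degradative route to a benefit, when one exists, is reached so much faster.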

Even in the professional literature, sickle cell disease is still called, along with other mutations related to malaria, “one of the best examples of natural selection acting on the human genome.” (2) So these are our best examples! Yet breaking pyruvate kinase or G6PD or globin genes in thalassemia does not add up to any new system. Then where do the elegant nanosystems found in the cell come from? Not from random mutation and natural selection, that’s for sure.

1. Ayi, K., et al. 2008. Pyruvate kinase deficiency and malaria. N. Engl. J. Med. 358:1805-1810.

2. Tishkoff, S.A., et al. 2001. Haplotype diversity and linkage disequilibrium at human G6PD: recent origin of alleles that confer malarial resistance. Science 293:455-462.