Evolutionary adaptation to a new environment leads to changes that improve function and increase fitness, but it may also result in the deterioration of functions not needed in that selective environment. These declines in function that accompany adaptation, termed tradeoffs, are often considered inevitable costs or constraints of adaptation. Tradeoffs can result either from antagonistic pleiotropy, when adaptive mutations in the selective environment are deleterious elsewhere, or from the accumulation of mutations that are neutral in the selective environment but deleterious in other environments. A consequence of these tradeoffs is that they prevent ‘Darwinian demons’: single super-genotypes that are optimally fit across a spectrum of environments. Instead, ‘generalist’ organisms that perform adequately across a variety of environments typically coexist with ‘specialist’ organisms that occupy narrower niches with greater effectiveness.
Critical to the emergence of specialization is the constancy of an environment. When an organism occupies the same environment for an extended period of time, it gains no advantage from maintaining fitness in alternative environments. Organisms may thus be expected to become increasingly specialized to their current, static environment. The most obvious form of specialization is a decline in the ability to utilize resources, or to survive under conditions, that were not experienced during adaptation. Many such examples have been observed during laboratory evolution, whereby organisms have evolved to narrow or shift their range of preferred temperatures, carbon sources [5, 6], host organisms, or even laboratory water supply. Alternatively, organisms can lose the ability to grow well in the absence of a resource that is currently available. This has been observed, in particular, for microbes infecting hosts; examples range from the long-term, dramatic losses exhibited by intracellular symbionts to the rapid emergence of auxotrophy during passage through a mouse gut.
One of the most prominent examples of prolonged adaptation to a single environment is the Lenski Long-Term Evolution Experiment (LTEE). In this experiment, 12 populations of Escherichia coli were founded with either the arabinose-negative strain REL606 (populations A-1 to A-6) or the otherwise isogenic arabinose-positive derivative, REL607 (A+1 to A+6). These populations have evolved since 1988 in batch cultures of Davis-Mingioli (DM) minimal medium containing glucose as the growth substrate. Over 50,000 generations, the fitness of the evolved strains in the evolutionary environment has increased substantially, and appears to continue to do so. In line with the expectation of a generalist-specialist tradeoff, evolving strains have also specialized for aspects of their static environment: evolved isolates have lower fitness in some alternative environments [4, 5, 14, 15], even though many of the individual mutations involved can be generally beneficial across environments.
Perhaps the most surprising adaptive change to have occurred during the LTEE was the huge increase in fitness of one population after it evolved the ability to metabolize citrate, the “inert” metal chelator present in DM minimal medium. Disodium citrate (which we will hereafter refer to simply as citrate) was included in the evolutionary growth medium as a historical artifact of the medium’s original formulation for penicillin enrichment of auxotrophs [12, 17]. The common use of citrate in minimal media formulations owes to its ability to serve as a chelator of Fe(III); indeed, no iron is added directly to DM minimal medium. Because the LTEE uses a quite low level of glucose (25 mg/L = 0.14 mM), substantially more citrate (1.7 mM) than glucose is present. A diagnostic trait of most E. coli strains is that they can grow on citrate only anaerobically, whereas Salmonella, for example, can grow on it aerobically. The inability of the ancestral strain to grow on citrate under the aerobic conditions of the LTEE therefore initially rendered this organic acid an unavailable secondary resource. Incredibly, after 31,000 generations – and in just one of the 12 replicate populations (A-3) – the ability emerged to utilize citrate as a sole carbon source during aerobic growth. Interestingly, this was not even the first time E. coli had acquired the ability to aerobically utilize citrate. Cit+
E. coli K12 had previously been observed to arise spontaneously, and high-expression plasmids have been shown to confer aerobic citrate growth upon E. coli B.
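The molar imbalance between the two carbon sources quoted above can be checked with a short calculation. The sketch below is purely illustrative; it assumes the standard molar mass of anhydrous D-glucose (about 180.16 g/mol), a value not stated in the text, and the variable names are our own.

```python
# Sketch: compare molar amounts of glucose and citrate in the LTEE growth medium.
# Assumption: anhydrous D-glucose molar mass ~180.16 g/mol (not given in the text).

GLUCOSE_MG_PER_L = 25.0       # glucose concentration in the LTEE medium (from the text)
GLUCOSE_MOLAR_MASS = 180.16   # g/mol, anhydrous D-glucose (assumed)
CITRATE_MM = 1.7              # citrate concentration in mM (from the text)

# mg/L divided by g/mol gives mmol/L (i.e., mM)
glucose_mM = GLUCOSE_MG_PER_L / GLUCOSE_MOLAR_MASS

print(f"Glucose: {glucose_mM:.2f} mM")   # ~0.14 mM, matching the value in the text
print(f"Citrate: {CITRATE_MM:.2f} mM")
print(f"Citrate/glucose molar ratio: {CITRATE_MM / glucose_mM:.0f}x")  # roughly 12-fold excess
```

On these assumptions, citrate is present in roughly a 12-fold molar excess over glucose, which helps convey how large an untapped resource aerobic citrate utilization represented for the A-3 population.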
Consistent with the fundamental role that iron plays in microbial growth, and with its relative scarcity, microbes have developed an arsenal of techniques to procure it. In E. coli, transcriptional regulators sensitive to the intracellular concentration of iron down-regulate iron uptake genes when supplies are adequate. Under conditions of iron deprivation, these same regulators simultaneously up-regulate iron acquisition systems while down-regulating proteins that require iron [23, 24]. One mechanism for obtaining iron is to secrete small molecules called siderophores that chelate extracellular Fe(III), together with the transporters needed to reabsorb the resulting complexes. E. coli can also take up Fe(II), and it possesses transport systems to capture Fe(III) directly from host proteins such as transferrin or lactoferrin, or from heme.
There is precedent for selection acting upon metal acquisition during experimental evolution. Growing Methylobacterium at low levels of cobalt repeatedly selected for transposition events upstream of a single cobalt transporter, leading to increased uptake rates. Interestingly, the selective effect of these mutations depended upon both the carbon source and the growth rate of the cells. Selection has also been observed on metal acquisition as a social trait: genotypes of Pseudomonas aeruginosa that fail to produce siderophores have emerged during adaptation to laboratory conditions, as well as over the course of infection in the lungs of cystic fibrosis patients [27, 28]. Because excreted siderophores become public goods in well-mixed environments, non-producers can have a selective advantage over producers.
Here we investigated whether other strains from generation 50,000 of the Lenski LTEE have evolved to become dependent on citrate for their performance growing on glucose. We considered four non-exclusive hypotheses for the dependence of each population’s glucose growth upon citrate. H0: the null hypothesis is that citrate provides no significant stimulation (or perhaps even inhibits growth). H1: because disodium citrate is the sole source of sodium in the LTEE medium, growth may be stimulated by sodium ions in a manner independent of citrate itself. H2: evolved strains use citrate, at least partially, as a growth substrate (e.g., the Cit+ A-3 population). H3: evolved strains have come to rely upon citrate to chelate the iron present in the medium.
We found that the ancestors and most of the evolved populations were largely insensitive to the presence of citrate (H0). In contrast, three lineages in addition to the Cit+ A-3 population have evolved increased growth rate and yield in the presence of citrate. In no case was this due to sodium ions (H1). Unlike the Cit+ A-3 population (H2), however, these lineages neither grew on citrate directly nor incorporated isotopic label from citrate during growth on 100% U-13C-glucose. The growth of these three lineages in the absence of citrate was at least partially rescued by the direct addition of Fe(II). These data are consistent with H3: these three populations have evolved to rely on citrate for its originally intended purpose, as a chelator of iron. As such, these three populations have evolved a novel dependence upon citrate through a reduction in citrate-independent glucose growth rather than through a gain of growth on citrate as a carbon and energy source.