Abstract
Many control programs rely on non-immunizing controls, such as vector control, to reduce the massive burden of endemic diseases. These controls often cannot be maintained indefinitely, and the consequences of their cessation are not well understood. Using mathematical models, we examine the results of stopping non-immunizing control measures in endemic settings. We compare the cumulative number of cases expected with and without the control measures and show that deployment of control can lead to more total infections over some timeframes than would have occurred without any control – the divorce effect. Like the well-known honeymoon effect, this result is directly related to the population-level loss of immunity resulting from non-immunizing controls, and it is robust across numerous settings. We also examine three approaches that attempt to minimize the divorce effect in seasonal settings and derive a crude analytical approximation of its magnitude. Our results strongly argue that the accumulation of susceptibility should be considered before deploying such controls against endemic infections when indefinite maintenance is unlikely. We highlight that our results are particularly germane to endemic mosquito-borne infections, such as dengue virus, both for routine management involving vector control and for field trials of novel control approaches.
Introduction
An estimated 200 million cases of malaria, 100 million cases of dengue fever, and 9 million cases of measles occurred in 2016 [1], representing only a portion of the total impact of endemic disease that year. The burden that these diseases place on local populations, in terms of both morbidity and mortality and of direct and indirect economic costs, often pressures policy makers to act to suppress them. However, the scientific rationale behind the implemented policies is not always clear, making it difficult to assess whether the risks associated with control have been adequately addressed.
Eradication—the permanent reduction of worldwide incidence to zero [2]—is the ideal aim of all control programs. For most infections this goal is unrealistic: only two infections have been successfully eradicated to date, smallpox and rinderpest [3]. Often, a more realistic goal for a control program is either long-term suppression or local elimination of the disease. These goals hold their own challenges, though, as they require long-term or even indefinite control programs, which can face budgetary and public-support issues, not to mention the potential for some controls to fail due to the evolution of resistance. Further, if the control lowers population exposure to the pathogen and thereby erodes herd immunity, there is the additional risk that when a control program ends the infection will re-emerge in a post-control epidemic and reestablish itself in the population [3].
Naively, one might imagine that lowering the incidence of infection can have no detrimental effects for the population. However, mathematical modeling has previously revealed numerous perverse outcomes of applying ineffective control measures (by which we mean ones that do not bring the basic reproductive number, R0, below one) in endemic settings. Perhaps the most famous example is the increased average age at infection that results when a population is partially vaccinated against rubella: more infections then occur in women of child-bearing age, in whom infection during pregnancy can lead to severe complications such as congenital rubella syndrome [4–6]. While this certainly represents a potential downside of the control, the population still sees a reduction in rubella prevalence. McLean and Anderson [7] showed that when an ineffective control is used against an endemic infection it often results in an initial drop in prevalence to well below the endemic level, the “honeymoon effect”, but this is followed by outbreaks that periodically increase prevalence above the endemic level as a consequence of a build-up of susceptible individuals. Similarly, in a seasonally-forced setting, Pandey and Medlock [8] found that vaccination against dengue virus could result in a transient period of periodic outbreaks with larger peak prevalence than occurred before vaccination. These last two examples illustrate possible negative side effects of ineffective controls: they can cause transient increases in prevalence while still resulting in a decrease in total incidence.
In the results above, there is higher incidence than expected, but Okamoto et al. [9] described an even more troubling theoretical result while exploring a model of failed or stopped combined strategies aimed at controlling dengue virus, e.g. vaccination along with vector control. They observed that when control was only transient, the total number of infections occurring since the start of control, a quantity they called the cumulative incidence (CI), could exceed the number of cases that would have been observed had no control been deployed. Even in situations where control measures had a significant positive impact over a period of years, the outbreak that ensued following the cessation, or failure, of control could be large enough to outweigh the number of cases prevented during the control period.
Here we demonstrate that this effect—which we call the divorce effect—is not an artifact of very specific complex models, but quite a general phenomenon that can occur across a range of models and parameter space when deploying a control measure that does not confer immunity. By exploring the dynamics of the divorce effect in the setting of several simple models we gain insights that were not obtainable using the previous complex models. Conversely, we find that for immunizing controls (e.g. vaccination) the divorce effect does not occur, even when the duration of protection is relatively short-lived.
We demonstrate the generality of this result for endemic infections by simulating cessation of control measures in three commonly-used models of pathogen transmission. Unlike the honeymoon effect, the divorce effect occurs for both ineffective and effective controls, provided that they are transient. As anticipated, control leads to the accumulation of susceptible individuals, creating the potential for a large outbreak following the cessation of control. This outbreak is triggered either by infective individuals that remain in the population or by reintroduction of infection from outside the control area, and its size increases asymptotically towards that of a virgin-soil epidemic as the length of the control period increases and herd immunity is lost. Counter-intuitively, and comparable to the results of Okamoto et al. [9], we see that the post-control outbreak often results in timeframes over which the cumulative incidence of infection since the start of control is higher than it would have been in the absence of control. Further, these outbreaks are significantly larger than the endemic levels of the disease and would likely overwhelm healthcare providers in the area.
This paper is organized as follows. We first describe the three models we use to illustrate the divorce effect: a non-seasonal SIR model, a seasonal SIR model, and a host-vector model. We then demonstrate, in each setting, the occurrence of the divorce effect and its sensitivity to relevant parameters, namely R0 and the duration and strength of control. Further, for the seasonal SIR model, we explore the sensitivity of the strength of the divorce effect to the timing of the start and end of the control. Then, for the seasonal SIR and seasonal host-vector models, we look at three possible strategies for mitigating the divorce effect and show that they are incapable of eliminating it. A crude analytical approximation for the divorce effect and additional models are explored in the Supplemental Information, as is the impact of using immunizing controls.
Methods
To evaluate the magnitude of the divorce effect, we simulate the cessation of a short-term control affecting transmission in three disease systems: an SIR model, a seasonal SIR model, and a host-vector SIR model. While these are the only models we discuss in detail here, the divorce effect can be seen in most models that include replenishment of the susceptible population, including the more general SIRS model, in which host immunity is not life-long (see the Supplemental Information for an exploration of additional forms of transmission models). The models are parameterized for a human population and a mosquito vector, but the results are generalizable to other species.
SIR Model
We assume a well-mixed population of one million hosts and a non-fatal infection that is directly transmitted and confers complete life-long immunity. The numbers of susceptible, infective, and removed individuals are written as S, I and R, respectively. We allow for replenishment of the susceptible population by births, but keep the population size constant by taking the per-capita birth and death rates, μ, to be equal. This results in the standard two-dimensional representation of the SIR model, in which the number of removed individuals is R = N − S − I (Equation 1). For our simulations, we assume parameters resembling a short-lived infection in a human population, lasting on average 5 days (average recovery rate γ = 0.2/day), and that individuals live on average 60 years (μ = 4.566 × 10^−5/day), allowing the transmission parameter, β, to be adjusted to achieve the desired value of R0. To keep infective numbers from falling to arbitrarily low levels and to reseed infection following cessation of control, we include a constant background force of infection [10,11] generated by Ib = 0.02 additional infectives, representing infection pressure from other populations (equivalent to one-fiftieth of the force of infection exerted by a single infective individual within the population).
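These dynamics can be sketched as a minimal simulation, assuming the standard two-dimensional SIR form with the background term Ib added to the force of infection; the value of β shown (chosen to give R0 = 5) is an illustrative choice, not a value fixed by the text.

```python
from scipy.integrate import solve_ivp

# Parameter values from the text; beta is chosen to give an illustrative R0 = 5.
N = 1_000_000             # host population size
gamma = 0.2               # recovery rate /day (5-day mean infection)
mu = 1 / (60 * 365)       # per-capita birth = death rate /day (60-year lifespan)
I_b = 0.02                # background infectives reseeding transmission
R0 = 5.0
beta = R0 * (gamma + mu)  # transmission parameter giving the desired R0

def sir(t, y):
    S, I = y
    foi = beta * (I + I_b) / N           # force of infection incl. background
    dS = mu * N - foi * S - mu * S
    dI = foi * S - (gamma + mu) * I
    return [dS, dI]

# Start at the endemic equilibrium implied by these parameters: S* = N/R0.
S_star = N / R0
I_star = mu * N * (R0 - 1) / beta
sol = solve_ivp(sir, (0, 50 * 365), [S_star, I_star], rtol=1e-8, atol=1e-8)
```

Starting at the endemic equilibrium, the trajectory remains near S* = N/R0, which provides a quick consistency check on the parameterization.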
Seasonal SIR Model
For the seasonal SIR model, we allow the transmission parameter to fluctuate seasonally (annually) around its mean, β0, taking the form given in Equation 2. Seasonal oscillations in the parameter have relative amplitude β1 with maxima occurring at integer multiples of 365 days.
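Since Equation 2 is referenced but not shown, the snippet below assumes a standard cosine form consistent with the description: relative amplitude β1 around the mean β0, with maxima at integer multiples of 365 days.

```python
import math

beta0 = 1.0   # mean transmission parameter (illustrative value)
beta1 = 0.1   # relative amplitude of seasonal fluctuation (illustrative)

def beta_seasonal(t):
    """Seasonally forced transmission rate; peaks at t = 0, 365, 730, ..."""
    return beta0 * (1.0 + beta1 * math.cos(2.0 * math.pi * t / 365.0))
```

Averaged over a full year this forcing returns the mean β0, so seasonality redistributes transmission across the year without changing its annual total.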
Host-Vector Model
We model an infection with obligate vector transmission. As in the other models, we assume that the host population size is held constant (R = N − S − I), but we allow the vector population size to fluctuate—so that, for instance, we can model vector control. For simplicity, we model only the female adult vector population and assume density-dependent recruitment into the susceptible class (U), with a logistic-type dependence on the total female adult population size. Infectious vectors (V) arise from interactions with infected hosts (Equation 3). We assume that host demography and recovery rates are the same as in the SIR model, with a host population of one million individuals. We assume that vectors live on average 10 days (δ = 0.1/day); the intrinsic growth rate (r) and density-dependence parameter (k) are parameterized as in Okamoto et al. [9]: r = 0.835/day and k = 3.675 × 10^−7/(vector·day), resulting in an equilibrium vector population of 2 million individuals. The transmission parameter from host to vector (βHV) is assumed to be 0.2/day and the parameter from vector to host is adjusted to produce the desired R0. We again assume a background force of infection (Ib = 0.02) representing reintroduction of infection from outside our focal population.
Seasonality plays a large role in vector-borne diseases and affects many aspects of the infection and its vector. Temperature affects breeding rates, larval development, and death rates of the vector, the extrinsic incubation period and transmissibility of the disease itself, and host encounter rates, while precipitation can affect the availability of appropriate habitat and encounter rates [12–14]. However, most of these add a level of model complexity which is unnecessary for this study, so we choose to use a simple forcing term for mosquito recruitment that fluctuates seasonally with relative magnitude rs (rs = 0.2) about its baseline (r0 = 0.835/day) (Equation 4).
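A sketch of these dynamics is given below. Equations 3 and 4 are referenced but not shown, so the forms used are plausible reconstructions: net logistic-type recruitment (r(t) − kM)M for total adult females M = U + V (which yields the stated equilibrium (r0 − δ)/k ≈ 2 million), a cosine seasonal forcing of r, and bilinear transmission terms. The value of βVH and the placement of the background term Ib are illustrative assumptions.

```python
import math
from scipy.integrate import solve_ivp

# Host parameters (as in the SIR model)
N = 1_000_000
gamma = 0.2
mu = 1 / (60 * 365)
I_b = 0.02               # background reintroduction term (placement assumed)

# Vector parameters from the text
delta = 0.1              # vector death rate /day (10-day mean lifespan)
r0 = 0.835               # mean recruitment rate /day
k = 3.675e-7             # density dependence /(vector*day)
r_s = 0.2                # relative amplitude of seasonal recruitment forcing
beta_HV = 0.2            # host-to-vector transmission /day
beta_VH = 0.15           # vector-to-host transmission /day (illustrative)

def recruit(t):
    # Seasonal recruitment fluctuating with relative magnitude r_s about r0
    return r0 * (1.0 + r_s * math.cos(2.0 * math.pi * t / 365.0))

def host_vector(t, y):
    S, I, U, V = y
    M = U + V                                  # total adult female vectors
    foi_host = beta_VH * (V + I_b) / N         # background term assumed here
    dS = mu * N - foi_host * S - mu * S
    dI = foi_host * S - (gamma + mu) * I
    dU = (recruit(t) - k * M) * M - beta_HV * (I / N) * U - delta * U
    dV = beta_HV * (I / N) * U - delta * V
    return [dS, dI, dU, dV]

# Start at the disease-free vector equilibrium for mean recruitment,
# with a small number of infected hosts.
y0 = [N - 100.0, 100.0, (r0 - delta) / k, 0.0]
sol = solve_ivp(host_vector, (0, 10 * 365), y0, rtol=1e-8, atol=1e-6)
```

Because transmission only moves vectors between the U and V classes, the total vector population tracks the seasonally forced logistic dynamics and oscillates around 2 million.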
Control
We model a control that is applied consistently from time t0 (which, for simplicity, we usually take to be zero) to time tend and is instantaneously removed at the end of the control period. In the SIR and seasonal SIR models, the control reduces the transmission rate by some proportion, ε; in the host-vector model, it increases the vector mortality rate by some proportion, σ. This results in the transmission parameter given in Equation 5 for directly transmitted infections and the vector death rate given in Equation 6 for vector-borne infections.
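The piecewise-constant control can be written directly. Equations 5 and 6 are referenced but not shown, so the snippet assumes the baseline rates are simply scaled by a constant factor during the control window; the numerical values of ε, σ, and the window are illustrative.

```python
# Illustrative control parameters
t0, t_end = 0.0, 5 * 365.0    # control applied for five years (illustrative)
beta0 = 1.0                   # baseline transmission parameter
delta0 = 0.1                  # baseline vector death rate /day
eps = 0.8                     # proportional reduction in transmission
sigma = 1.0                   # proportional increase in vector mortality

def beta_controlled(t):
    """Transmission rate under control (directly transmitted infections)."""
    return beta0 * (1.0 - eps) if t0 <= t < t_end else beta0

def delta_controlled(t):
    """Vector death rate under control (vector-borne infections)."""
    return delta0 * (1.0 + sigma) if t0 <= t < t_end else delta0
```

With σ = 1 the vector death rate doubles during control, halving the average vector lifespan, which matches the scenario explored in the Results.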
While we only look at these control measures in the main text, other controls (such as an increase in the recovery rate, γ) are explored in the Supplemental Information, and give similar results.
Measuring Effectiveness
There are a number of measures that can be used to quantify the effectiveness of a control. Given that we want to characterize the total number of cases occurring from the start of control until a particular point in time, these involve calculating the total number of cases over that period, a quantity we call the cumulative incidence (CI). For a directly transmitted infection, this is calculated by integrating the transmission term over the time interval from the start of control until the time, t, of interest. This quantity can be calculated both in the presence of control and in the baseline, no-control, setting; we distinguish between the two by labeling quantities (e.g. state variables) in the latter case with a subscript B, for baseline.
One commonly-used measure of effectiveness is the number of cases averted by control, CIB(t) - CI(t). This has the disadvantage (particularly in terms of graphical depiction) that it can become arbitrarily large as t increases. Consequently, some authors choose to utilize a relative measure of cases averted, dividing by the baseline cumulative incidence (see, for instance, [15]). We instead follow our earlier work and use the relative cumulative incidence (RCI) measure employed by Okamoto et al. [9], calculating the cumulative incidence of the model with the control program relative to the cumulative incidence of the model without the control program (Equation 8). RCI(t) values above one imply that the control measure has resulted in an increase in the total number of cases compared to the baseline.
We see that Hladish et al.’s [15] relative cases averted measure is simply 1 − RCI(t). Both relative measures have properties that make them attractive for graphical depiction, although it should be borne in mind that both involve a loss of information about the actual number of cases averted. For example, RCI = 1.1 after one year represents a vastly smaller increase in total cases than RCI = 1.1 after 10 years, and an RCI just below one after many years can represent a large reduction in total incidence. In cases where this information is pertinent, it may be more appropriate to use non-relative measures such as cases averted. The choice of measure does not affect the occurrence of the divorce effect; figures that show cases averted are included in the Supplemental Information.
Analogous expressions for CI and RCI can be written for the host-vector model using the appropriate transmission terms.
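As a concrete illustration, CI and RCI can be computed by tracking the integrated transmission term as an auxiliary state variable. This is a sketch under assumptions: the SIR parameterization from the Methods, an illustrative R0 = 5, and a perfect (ε = 1) three-year control.

```python
import numpy as np
from scipy.integrate import solve_ivp

# SIR parameters from the Methods; R0 and the control settings are illustrative.
N, gamma, mu, I_b = 1_000_000, 0.2, 1 / (60 * 365), 0.02
R0 = 5.0
beta0 = R0 * (gamma + mu)
eps, t_end_control = 1.0, 3 * 365.0    # perfect control for three years

def make_rhs(controlled):
    def rhs(t, y):
        S, I, C = y                    # C accumulates the cumulative incidence
        beta = beta0 * (1 - eps) if (controlled and t < t_end_control) else beta0
        inc = beta * S * (I + I_b) / N # transmission term being integrated
        return [mu * N - inc - mu * S, inc - (gamma + mu) * I, inc]
    return rhs

y0 = [N / R0, mu * N * (R0 - 1) / beta0, 0.0]   # endemic equilibrium, CI = 0
t_eval = np.linspace(0.0, 15 * 365.0, 2001)
base = solve_ivp(make_rhs(False), (0, 15 * 365.0), y0,
                 t_eval=t_eval, rtol=1e-8, atol=1e-8)
ctrl = solve_ivp(make_rhs(True), (0, 15 * 365.0), y0,
                 t_eval=t_eval, rtol=1e-8, atol=1e-8)

rci = ctrl.y[2][1:] / base.y[2][1:]    # RCI(t); skip t = 0 where CI_B = 0
```

During the control period the RCI stays near zero; timeframes in which it exceeds one after control ends mark the divorce effect.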
Results
SIR Model
Simulations show the successful suppression of infection following the implementation of a control that reduces the transmission parameter, β, in the population. With infection at endemic equilibrium, the effective reproductive number, Rt, the expected number of new infections caused by each infectious individual, equals one, so even a modest reduction in the transmission parameter has a large effect on the incidence of infection, the honeymoon effect [7]. After the control is stopped, the infection remains at low incidence for some time as the number of infective individuals builds from very low numbers. However, once control ends Rt immediately rises above one and continues to increase while prevalence is low, due to the build-up of susceptibles (plots of Rt are provided in the Supplemental Information). This increased Rt eventually drives a large outbreak, quickly depleting the susceptible population, at which point incidence (Figure 1(a), black curve), and Rt, again fall to low levels.
To evaluate the success of the control, we examine the RCI in the period following the introduction of control. During and immediately following the control period, when incidence is low, the RCI decreases towards 0, suggesting a successful control program. However, once the post-control outbreak begins, the RCI increases rapidly, rising above one, before dropping back below one once the epidemic begins to wane and incidence falls below endemic levels (Figure 1(a)). During the period where RCI > 1, lasting approximately 2.5 years in our example, the control has not only failed to decrease the total incidence of infection but has actually increased it: the divorce effect. Following this initial outbreak and trough, the RCI continues to oscillate around one, approaching it in the long run (see Figure S2).
Exploring values of R0 and the duration and strength of control shows that the divorce effect is present over a wide region of parameter space. Figure 1(b) shows the magnitude of the divorce effect, quantified by the maximum RCI seen, as a function of R0 and duration of control for a perfect control measure (β = 0 during the control period). Perfect control was employed here to eliminate any confounding effects from the honeymoon effect that could occur during an imperfect control. We find that for the most biologically relevant region of parameter space (R0 < 20, control lasting less than 20 years) the divorce effect always occurs and results in a 20–60% increase in cumulative incidence (RCI = 1.2–1.6) at its peak. The non-monotonic relationship between the magnitude of the divorce effect and the length of the control seen here suggests that a control program should either be discontinued within the first year or continued as long as possible to avoid the worst consequences of the divorce effect (Figure 1(b)).
Relaxing our assumption of a completely effective control and focusing on a fixed R0 (R0 = 5, Figure 1(c)), we see that the relationship between the magnitude of the divorce effect and the length of the control period varies with the strength of the control. An oscillatory, fringe-like pattern is seen in Figure 1(c) when control is ineffective but maintained for a long period of time. This pattern results from outbreaks driven by the honeymoon effect occurring during the control period, which deplete part of the susceptible population.
Seasonal SIR Model
Temporary control measures in the seasonal SIR model show many of the same dynamics as in the non-seasonal model: a successful control is followed by a period of low incidence and eventually a post-control outbreak leading to a divorce effect (Figure 2(a)). However, the timing and size of the post-control epidemic, and thus the magnitude of the divorce effect, depend not only on R0 and the length of the control but also on the timing of both the onset and end of the control (Figures 2(b) and 2(c)). This leads to a highly nonlinear dependence of the magnitude of the divorce effect on R0 and the duration of control (Figure 2(b)). However, the presence, at regular intervals, of regions of parameter space with small magnitudes of the divorce effect could allow policy makers to determine optimal times to stop control.
The oscillatory nature of the relationship between the maximum RCI and R0 (Figure 2(b)) suggests a relationship between the timing of the control period and the severity of the divorce effect. While the magnitude is only sensitive to the start time for very short control periods, lasting around a year, it is highly sensitive to the end time (Figure 2(c)). This means that controls of similar lengths can have significantly different outcomes depending on their timing: for example, a one-year control ending on day 415 results in a maximum RCI barely above 1, while a control of the same length ending on day 590 results in a maximum RCI near 1.6. This is a direct result of the seasonal forcing function and of delaying the outbreak until a period in which transmission is higher, similar to results seen when controls are used against epidemics in seasonal settings [16,17]. Regardless of start time, the optimal end time occurs around 60 days after the transmission parameter, β(t), reaches its maximum (days 790 and 1155 in Figure 2(c)), suggesting the best time to end control is shortly after the peak in the transmission parameter.
Host-Vector Model
The non-seasonal host-vector model has broadly similar dynamics to the non-seasonal SIR model in terms of the divorce effect (see Supplemental Information), so here we focus instead on the seasonal host-vector model. Following one year of insecticide treatment that reduces the average mosquito lifespan by a half (i.e. increases the mosquito death rate by 100%, σ = 1) the infection is suppressed and there is no seasonal outbreak for the next three years. A major outbreak, with approximately eight times the peak prevalence of the pre-control seasonal outbreaks, occurs in the fourth year and results in a maximum RCI of around 1.8, before the epidemic fades and incidence again returns to low levels. The size of this outbreak would almost certainly risk overwhelming even the most well-funded medical services.
Mitigating the Divorce Effect
It is apparent from earlier results (e.g. Figure 1(b)) that avoiding the divorce effect in a non-seasonal setting is not possible with a non-immunizing control, due to the inevitable build-up of susceptible individuals. In these situations the goal should therefore be to maintain the control as long as possible, or until a vaccine becomes available, and so we focus instead on the seasonal SIR and host-vector models (Supplemental Information). To attempt to mitigate the divorce effect, we examine three different control programs. The first relies on annual controls lasting one month, applied when transmission is at its seasonal maximum. The second applies a month-long control whenever prevalence reaches a set level—which we might imagine corresponding to an outbreak becoming detectable or reaching a sufficient level to cause concern to local authorities—that we take here to be one hundred infective individuals per million. The third chooses when to implement a month-long control so as to minimize the peak RCI. For comparability, all three programs use 12 total months of control.
With annual monthly control of a directly transmitted seasonal disease, the population sees a significant initial reduction in prevalence. However, as predicted by the divorce effect, the repeated use of controls has a diminishing effect on prevalence, and seasonal outbreaks begin to occur between control periods. The peak prevalence of these outbreaks quickly grows to be significantly larger than that of the seasonal outbreaks before the control program began; however, they are blunted by the next control period before the RCI rises above one. Once the program ends, though, a post-control outbreak quickly brings the RCI above one (Figure 4(a)). In the case of host-vector transmission, the control successfully suppresses the disease for the first 1.5 years, but the population then begins to experience outbreaks during what was traditionally the off-season. After the control program ends, the population enters a period of larger outbreaks occurring every four years (Supplemental Information).
The reactive control has a similar effect following the initial control period; however, it results in an ever more rapid need for control, exhausting all 12 months of treatment within the first five years for the directly transmitted disease (Figure 4(b)) and within the first four years for the vector-borne disease. While this strategy results in a lower RCI during the control program, it leads to an even larger post-control outbreak and a larger maximum RCI for both transmission pathways.
Intuitively, Figure 2(c) suggests choosing the timing of control so as to minimize the divorce effect. To do this, we implement a third method that optimally chooses the time between control periods to minimize the divorce effect, in an attempt to take advantage of the timing of the seasonality. This plan results in waiting the maximum allowed time before implementing the control in all cases (Supplemental Information), suggesting that the divorce effect is unavoidable and will continue to grow in magnitude during the first decades of the program regardless of the timing of the treatments. While this suggests it may not be possible to eliminate the divorce effect, or to extend programs without worsening it, it should still be possible to reduce its magnitude by carefully choosing the timing of the end of the control program once cessation becomes necessary.
Additional Results
Results for additional models, along with an analytical approximation to the magnitude of the divorce effect, are included in the Supplemental Information.
Discussion
It has long been appreciated that non-immunizing control measures deployed against endemic infections will result in a large short-term reduction in prevalence but will lead to a reduction in herd immunity, leaving the population at risk of large outbreaks after the cessation of control. Here we have shown, in quite general settings, that these outbreaks can be so large as to increase, over certain time periods, the total incidence of infection above what would have occurred if no control had been used—a result we call the divorce effect. This represents a failure for control of the worst kind, namely a control that increases the total incidence of the infection. Unfortunately, many commonly used disease control plans rely on temporary non-immunizing controls, meaning that populations may be left at risk of the divorce effect once the control measure is ended.
Controls that do not confer immunity—including isolation, the use of drugs as prophylaxis or to shorten the duration of infectiousness, or behavioral changes such as social distancing—are often deployed in epidemic settings, particularly for new pathogens for which a vaccine is unavailable, but may also be used to blunt seasonal outbreaks of endemic diseases. In these endemic settings, we have shown that it is important to weigh any potential benefit from these controls against the risk of post-control outbreaks and the divorce effect. While there are timeframes over which a temporary non-immunizing control has benefits, the severity of the post-control outbreak that drives the divorce effect risks overwhelming even well-maintained healthcare systems.
Vector-borne infections represent the most common situation in which non-immunizing controls are regularly used against endemic diseases, e.g. insecticide spraying to combat seasonal dengue outbreaks. The honeymoon effect predicts that insecticides can provide short-term benefits in endemic settings but that the additional benefit of continued spraying will decrease over time due to the resulting accumulation of susceptibles (i.e. depletion of herd immunity). Indeed, Hladish et al. [15] saw precisely these effects using a detailed agent-based model for dengue control that employs indoor residual spraying. Cessation of spraying is expected to lead to large post-control outbreaks: again, Hladish et al.’s model exhibited annualized incidence of 400% compared to the uncontrolled baseline setting in certain years. (It should be noted that Hladish et al. did not explore cumulative incidence since the start of control and so did not directly observe the divorce effect, although from their results we certainly believe that the effect was occurring.) Here, we expand on those results and show that they are not specific to insecticides and that, if the control is not maintained indefinitely, or at least for a few decades, the damage of the divorce effect can quickly outweigh the short-term benefits. Further, programs implementing insecticides may be intended to be indefinite, but the evolutionary pressure they impose can result in the rapid and unpredictable evolution of resistance; without proper monitoring, this could result in an increase in total incidence due to the divorce effect before officials realize that resistance has developed. While insecticides, and other non-immunizing controls, will, and should, continue to play an important role in epidemic settings, where herd immunity is negligible, the results of this study raise important questions about their use in combating endemic infections.
In some instances, control measures are deliberately transient in nature, such as field trials for assessing the impact of proposed novel control methods, e.g. a review of field trials of dengue vector control showed they lasted between 5 months and 10 years [18]. Multiple year field trials such as these can result in considerable build-up of the susceptible population, meaning consideration needs to be given to the consequences of this accumulation and the potential for large outbreaks to occur in the wake of cessation of control. If our results are validated, they must be factored not only into the design of such trials but also into the informed consent process for trial participation, with participants made aware of the risk of the divorce effect and plans put in place to provide a reasonable level of protection during and following the study. As we have shown, these outbreaks can occur months or even many years later, and while disease incidence would be observed closely during the trial, our results argue that monitoring should continue for an appropriate length of time following the cessation of control. Furthermore, we emphasize that the epidemiological consequences of the honeymoon effect—specifically the relative ease of reducing incidence for an infection near endemic equilibrium—must be kept in mind when interpreting the results of such trials. Together, these dynamical effects argue that susceptibility of the population to infection should be monitored together with incidence to fully assess the impact and effectiveness of the control.
Additional concerns arise when an endemic and an epidemic infection share the same transmission pathway (e.g. Aedes aegypti vectoring both dengue and Zika). Emergency control against the epidemic infection also impacts the endemic infection, creating the potential for the divorce effect to occur in the latter if the control is ceased once the epidemic has subsided. It may be that policy makers have to allow an epidemic of a highly publicized, but low-risk, infection in order to maintain immunity levels against another lower-profile, but more dangerous, disease. On the other hand, if the risk posed by the epidemic is sufficiently high, it may still be advantageous to use the control; however, the risks need to be carefully compared and an informed decision, accounting for the divorce effect, needs to be made.
While transient non-immunizing controls are common and provide opportunities to observe the divorce effect, we tend to focus on prevalence or incidence over short periods of time and not on cumulative measures such as the CI, or relative measures such as the RCI or relative cases averted, which would expose the divorce effect. Even when relative measures are used, as in Hladish et al. [15], the time frame over which incidence is compared can have a drastic effect on the interpretation of the result. While Hladish et al. examine simulated data, real-world data come with a myriad of other problems. Often the divorce effect may occur when the system is poorly monitored, as with field trials and unintentional control; in systems that, like dengue, have large year-to-year variation; or in systems where the failure is associated with confounding socio-economic events such as war or natural disaster, resulting in data that are either scarce or difficult to interpret. The divorce effect may become more apparent in coming years, though, as mosquito control is relaxed following the end of the Zika epidemic, allowing for a rebound of dengue in South America, and as insecticide-resistance problems continue to grow.
Careful thought should be given to whether or not it is appropriate to begin new programs that rely on non-immunizing controls in endemic settings. This is an inherently complicated decision that must take into account numerous factors, both scientific and sociopolitical, but, in light of our results, policymakers should carefully weigh the risks of the divorce effect against other factors, e.g. the imminent approval of a new vaccine or political pressure, before implementing disease management plans that rely on non-immunizing controls. Further, when non-immunizing controls are included in such management plans, it is important that they are treated not as possible solutions but as stop-gaps, with emphasis placed on the development of vaccination rather than on the indefinite continuation of the program.
Currently, control of endemic diseases worldwide, especially vector-borne diseases, relies heavily on non-immunizing controls such as insecticides. Policy makers should begin developing exit plans for these disease management programs: guidelines for safely ending a program when it becomes clear that indefinite maintenance is unlikely, designed to minimize the impact of the divorce effect. In this paper, we have explored three possible designs for exit plans intended to minimize the divorce effect; however, none was capable of eliminating it. Our results suggest there is an inherent cost associated with the loss of immunity resulting from these programs.
Funding
This work has been supported by grants from the National Science Foundation (RTG/DMS-1246991), the National Institutes of Health (grants R01-AI139085, R01-AI091980 and P01-AI098670), the W. M. Keck Foundation and the NC State Drexel Endowment.
Acknowledgements
We thank Fred Gould, Sumit Dhole, Michael Vella, Christian Gunning, Jennifer Baltzegar, and Jaye Sudweeks for helpful discussion.