Abstract
An adaptive agent predicting the future state of an environment must weigh trust in new observations against prior experiences. In this light, we propose a view of the adaptive immune system as a dynamic Bayesian machinery that updates its memory repertoire by balancing evidence from new pathogen encounters against past experience of infection to predict and prepare for future threats. This framework links the observed initial rapid increase of the memory pool early in life followed by a mid-life plateau to the ease of learning salient features of sparse environments. We also derive a modulated memory pool update rule in agreement with current vaccine response experiments. Our results suggest that pathogenic environments are sparse and that memory repertoires significantly decrease infection costs even with moderate sampling. The predicted optimal update scheme maps onto commonly considered competitive dynamics for antigen receptors.
I. INTRODUCTION
All living systems sense the environment, learn from the past, and adapt predictively to prepare for the future. Their task is challenging because environments change constantly, and it is impossible to sample them completely. Thus a key question is how much weight should be given to new observations versus accumulated past experience. Because evidence from the world is generally uncertain, it is convenient to cast this problem in the language of probabilistic inference where past experience is encapsulated in a prior probability distribution which is updated according to sampled evidence. This framework has been successfully used to understand aspects of cellular [1–4] and neural [5–8] sensing. Here, we propose that the dynamics of the adaptive immune repertoires of vertebrates can be similarly understood as a system for probabilistic inference of pathogen statistics.
The adaptive immune system relies on a diverse repertoire of B and T cell receptors to protect the host organism from a wide range of pathogens. These receptors are expressed on clones of receptor-carrying cells present in varying copy numbers. A defining feature of the adaptive immune system is its ability to change its clone composition throughout the lifetime of an individual, in particular via the formation of memory repertoires of B and T cells following pathogen encounters [9–14]. In detail, after a proliferation event that follows successful recognition of a foreign antigen, some cells of the newly expanded clone acquire a memory phenotype. These cells make up the memory repertoire compartment that is governed by its own homeostasis, separate from the inexperienced naive cells from which they came. Upon reinfection by a similar antigen, memory guarantees a fast immune response. With time, our immune repertoire thus becomes specific to the history of infections, and adapted to the environments we live in. However, the commitment of part of the repertoire to maintaining memory must be balanced against the need to also provide broad protection from as yet unseen threats. What is more, memory will lose its usefulness over time as pathogens evolve to evade recognition.
How much benefit can immunological memory provide to an organism? How much memory should be kept to minimize harm from infections? How much should each pathogen encounter affect the distribution of receptor clones? To answer these questions we extend a framework for predicting optimal repertoires given pathogen statistics [15] by explicitly considering the inference of pathogen frequencies as a Bayesian forecasting problem [16]. We derive the optimal repertoire dynamics in a temporally varying environment. This approach can complement more mechanistic studies of the dynamics and regulation of immune responses [12, 17–19] by revealing adaptive rationales underlying particular features of the dynamics. In particular, we link the amount of memory production to the variability of the environment and show that there exists an optimal timescale for memory attrition. Additionally, we demonstrate how biologically realistic population dynamics can approximate the optimal inference process, and analyze conditions under which memory provides a benefit. Comparing predictions of our theory to experiment, we argue for a view in which the adaptive immune system can be interpreted as a machinery for learning a highly sparse distribution of antigens.
II. THEORY OF OPTIMAL IMMUNE PREDICTION
The pathogenic environment is enormous and the immune system can only sample it sparsely, as pathogens enter into contact with it at some rate λ. We consider an antigenic space of K different pathogens with time-varying frequencies Q(t) = (Q1(t), …, QK(t)). These frequencies are unknown to the organism, and evolve stochastically. Their dynamics is formally described by a Fokker-Planck operator 𝒜 encoding how pathogenic frequencies change (Fig. 1A and Methods). We reason that the immune system should efficiently use the information available through these encounters, along with prior knowledge of how pathogens evolve encoded in the system dynamics, to build an internal representation of the environment (Fig. 1B). Biologically, we can think about this representation as being encoded in the composition of the adaptive immune repertoire (the size and specificity of naive and memory lymphocyte clones), but generally further cellular memory mechanisms might also contribute. Based on this representation of the world, the immune system should organize its defenses to minimize harm from future infections (Fig. 1C).
How could the immune system leverage a representation of beliefs about pathogen frequencies to provide effective immunity? Each lymphocyte (B or T cell) of the adaptive immune system expresses on its surface a single receptor r out of L possible receptors. This receptor endows the lymphocyte with the ability to specifically recognize pathogens (labeled a) with probability fa,r. The immune repertoire is defined by the frequencies of these receptors across the lymphocyte population, denoted by P = (P1, …, PL). These frequencies sum up to one, which implies a resource allocation trade-off between the different receptor types – having more of one in the repertoire implies having less of others. How much harm an infection inflicts depends on how many resources the immune system has devoted to fighting the infection, i.e. the fraction P̃a(t) = ∑r fa,r Pr of the repertoire specific to antigen a, which we will refer to as the coverage of the antigen. Given the pathogen frequencies Q(t) and repertoire distribution P(t), the mean harm caused by the next infection is given by ∑a Qa · c(P̃a), where c is a decreasing function of the fraction of the repertoire specific to the infection [15]. The host organism does not know Q with certainty, but has an internal belief B(Q, t) about the frequencies learned through sampling during previous infections. An optimal immune system can then distribute its resources to minimize the expected harm of future infections:

P⋆(t) = G(Q̂(t)) = argmin_P ∑a Q̂a(t) · c(P̃a) subject to ∑r Pr = 1,  (1)

where Q̂(t) = ⟨Q⟩B(Q,t) are the expected frequencies of pathogens. Although the function G may be complicated, it generally implies that receptors that are specific to frequent infections (high Q̂a) should be well represented in the optimal repertoire (Eq. 1) ([15] and Methods).
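As a minimal numerical illustration of this allocation problem – assuming, as in the special case discussed later in the text, uniquely specific receptors (fa,r = δa,r) and a logarithmic cost – the optimal repertoire simply matches the believed frequencies, P⋆ = Q̂:

```python
import math

def expected_harm(P, Qhat):
    # expected harm of the next infection: sum_a Qhat_a * c(P_a),
    # here with the logarithmic cost c(x) = -log(x) discussed in the text
    return sum(q * -math.log(p) for q, p in zip(Qhat, P) if q > 0)

Qhat = [0.5, 0.3, 0.15, 0.05]   # believed pathogen frequencies (toy values)

# With c = -log and uniquely specific receptors, proportional allocation
# P* = Qhat minimizes the expected harm (Gibbs' inequality); any other
# normalized allocation does worse.
P_star = Qhat
P_uniform = [0.25, 0.25, 0.25, 0.25]

assert expected_harm(P_star, Qhat) < expected_harm(P_uniform, Qhat)
```

This proportional allocation is exact only for the logarithmic cost; for other costs the mapping G reweights the allocation, as discussed in the Results.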
In this framework we have assumed that infections, their clearing by the immune system, and the subsequent update of the repertoire, are all fast compared to changes in the environment, which occur over a time τ, and to the mean time between pathogenic encounters (λ−1).
The internal representation of the environment can be regarded as a system of beliefs, or guesses, about pathogen frequencies. Formally, these beliefs can be represented in the form of a probability distribution function B(Q, t) over pathogen frequencies, which the host implicitly computes using all the information it has garnered over time. Optimally, these beliefs are computed by the rules of Bayesian sequential forecasting, by combining the memory of past encounters with knowledge of the stochastic rules under which the pathogenic environment evolves (Methods). Optimally, the belief distribution should be initialized at birth to reflect the steady state distribution ρs of the dynamics:

B(Q, 0) = ρs(Q), where 𝒜ρs = 0.  (2)

Upon encountering a pathogen a at time t, the prior belief distribution B(Q, t−) is combined with the likelihood of the observed pathogen, Qa, to compute the post-encounter belief B(Q, t+) according to Bayes' rule [16]:

B(Q, t+) = Qa B(Q, t−) / ∫ dQ′ Q′a B(Q′, t−).  (3)
Between encounters, the immune system should continue to update its beliefs by forecasting how pathogen frequencies change with time. The optimal way to do so is to project the old belief distribution forward in time using [16]

∂t B(Q, t) = 𝒜 B(Q, t).  (4)
This prediction step, which is performed in the absence of any new information, relies on the immune system implicitly “knowing” the probability laws governing the stochastic evolution of the environment—but not, of course, the actual path that it takes. In the results section we show how Eqs. 2-4 can be turned from abstract belief updates into dynamical equations for a well-adapting immune repertoire.
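The update–forecast cycle of Eqs. 3-4 can be illustrated with a toy grid computation for K = 2 pathogens (so the belief is a distribution over Q1 alone). The smoothing kernel below is a generic diffusion stand-in for the operator 𝒜, not the model's actual drift-diffusion dynamics:

```python
# Discretize Q1 on a grid (K = 2, so Q2 = 1 - Q1).
N = 101
grid = [i / (N - 1) for i in range(N)]

def normalize(b):
    s = sum(b)
    return [x / s for x in b]

def bayes_update(belief, observed_first):
    # Eq. 3: multiply the prior belief by the likelihood of the observation
    like = grid if observed_first else [1 - q for q in grid]
    return normalize([b * l for b, l in zip(belief, like)])

def forecast(belief, steps):
    # Eq. 4 stand-in: diffuse the belief with a simple smoothing kernel,
    # a generic surrogate for the Fokker-Planck operator A
    for _ in range(steps):
        belief = [(belief[max(i - 1, 0)] + 2 * belief[i]
                   + belief[min(i + 1, N - 1)]) / 4 for i in range(N)]
    return normalize(belief)

belief = normalize([1.0] * N)        # flat prior at birth
belief = bayes_update(belief, True)  # encounter with pathogen 1
mean1 = sum(q * b for q, b in zip(grid, belief))
belief = forecast(belief, steps=200) # long forecast discounts the evidence
mean2 = sum(q * b for q, b in zip(grid, belief))

assert mean1 > 0.5   # the encounter shifts the estimated frequency up
assert mean2 < mean1 # forecasting relaxes the estimate back down
```

The qualitative behavior – evidence raises the estimated frequency, forecasting between encounters decays it – is the same as in the full model.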
The Bayesian forecasting framework provides a broad account of the possible adaptive value of many features of the adaptive immune system without the need for additional assumptions. Immune memory formed after a pathogenic challenge is explained as an increase in optimal protection level resulting from an increase in estimated pathogen frequency, following Eq. 3. Attrition of immune memory is also adaptive, because it allows the immune repertoire to forget about previously seen pathogens, as it should in a dynamically changing environment (Eq. 4). Lastly, some of the biases in the recombination machinery and initial selection mechanisms [20] represent an evolutionary prior (Eq. 2) which tilts the naive repertoire towards important regions of antigenic space.
We are proposing an interpretive framework for understanding adaptive immunity as a scheme of sequential inference. This view provides two key insights. First, it confirms the intuition that new experience should be balanced against previous memory and against unknown threats in order for adaptive immunity to work well. Second, it suggests a particular dynamics of implicit belief updates that can globally reorganize the immune repertoire to minimize harm from the pathogenic environment. Going beyond these broad ideas, in the Results section we analyze in detail a model for optimal immune prediction in which all these statements can be made mathematically precise. We also show a plausible implementation that the immune system could follow to approximate optimal Bayesian inference, and we compare the resulting dynamics with specific features of the adaptive immune system.
III. RESULTS
A. A lymphocyte dynamics for approximating optimal sequential inference
For concreteness we consider a drift-diffusion model of environmental change (Eq. 11 in Methods). The drift-diffusion model, while clearly a much simplified model of real evolution, captures two key features of changing pathogenic environments: the co-existence of diverse pathogens, and the temporal turnover of dominant pathogen strains. The drift-diffusion model is mathematically equivalent to a classical neutral stochastic evolution of pathogens [21] driven by genetic drift happening on a characteristic timescale τ and immigration from an external pool with immigration parameters θ = (θ1, …, θK) (Eq. 11 in Methods). Generally, pathogens are under selective pressure to evade host immunity, and strains are replaced faster than under the sole action of genetic drift. Matching the timescale of pathogen change to those observed experimentally, the model then provides a simple, effective description of the pathogen dynamics.
In this case, we show that optimal Bayesian belief update dynamics can be approximated by maintaining a memory of an effective count of previous encounters n = (n1, …, nK), initialized to the immigration rates n(0) = θ, and subject to the update rules (see SI Text A 2): upon an encounter with pathogen a at time t,

na(t+) = na(t−) + 1,  (5)

and between encounters

dna/dt = −((|n| − 1)/(2τ)) (na − θa),  (6)

where |n| = ∑a′ na′. The expected frequency of each pathogen (used in Eq. 13) is estimated from these counts as:

Q̂a(t) = na(t)/|n(t)|.  (7)
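A minimal sketch of this counting scheme, with the between-encounter step implemented (as an assumption here) as relaxation of the counts toward the prior θ on the effective timescale 2τ/(|n| − 1) discussed in Results III C:

```python
def encounter(n, a):
    # Eq. 5: an encounter with pathogen a increments its effective count
    n = list(n)
    n[a] += 1.0
    return n

def relax(n, theta, tau, dt):
    # Eq. 6 (as written here): between encounters, counts decay toward
    # the prior theta on the effective timescale 2*tau/(|n| - 1)
    size = sum(n)
    rate = (size - 1.0) / (2.0 * tau)
    return [na - rate * (na - ta) * dt for na, ta in zip(n, theta)]

def qhat(n):
    # Eq. 7: posterior-mean pathogen frequencies
    size = sum(n)
    return [na / size for na in n]

theta = [0.01] * 100       # sparse prior over K = 100 pathogens
n = list(theta)            # initialization at birth, n(0) = theta
n = encounter(n, 0)
assert qhat(n)[0] > 0.5    # one encounter dominates a sparse prior
for _ in range(10000):     # evolve for t = 1000 = 50*tau
    n = relax(n, theta, tau=20.0, dt=0.1)
assert qhat(n)[0] < 0.1    # memory is discounted back toward the prior
```

Note how a single encounter suffices to make pathogen 0 the dominant expected threat when the prior is sparse, while prolonged absence of re-encounter discounts that belief.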
We checked the accuracy of this approximation explicitly by comparing it to an exact solution computed by spectrally expanding the generator of the stochastic dynamics (see SI Text A 3 and Fig. S1).
An optimal immune system should map the counts above into a receptor repertoire P⋆ as in Eq. 13. The repertoire then follows a dynamics derived from Eqs. 5-6 (see SI Text B), in which memory of past infections is encoded only in the repertoire composition itself and in a single global variable representing the total memory that is kept (encoding |n|). To understand this, consider a cost function c(P̃a) = −log P̃a and uniquely specific receptors, fa,r = δa,r. In this case the mapping Eq. 13 is the identity, i.e. P⋆(t) = Q̂(t) ([15] and Methods). Then the optimal repertoire dynamics can be achieved simply by having the clone sizes of different receptors follow Eqs. 5-6 up to some scaling. More generally the optimal repertoire is some non-linear mapping of the encounter counts, but it only requires information that can be represented in the population sizes of different clones, which are quantities regulated by the actual biological dynamics.
B. Learnability of pathogen distribution implies a sparse pathogenic landscape
The immune system must be prepared to protect us not just from one pathogen but from a whole distribution of them. Even restricting recognition to short peptides and accounting for cross-reactivity [22], estimates based on precursor frequencies for common viruses give an effective antigen environment of size K ~ 10^5–10^7 [23]. How can the immune system learn anything useful about such a high dimensional distribution from a limited number of pathogenic encounters? Naively, one might expect that the number of samples needed to learn the distribution of pathogens must be larger than the number of pathogens, i.e., λt ~ K, where t is the time over which learning takes place. Although little is known about the receptor-antigen encounter rate λ, this estimate suggests that the pathogenic environment is not easily learnable and therefore that memory has limited utility.
This apparent paradox can be resolved by the fact that the pathogenic environment may be sparse, meaning that only a small fraction of the possible pathogens are present at any given time. In our model of the pathogen dynamics, this sparsity is controlled by the parameter θ. In the scenario that we are considering, typical pathogen landscapes Q are drawn from the steady state distribution ρs(Q) of the immigration-drift dynamics, which is a Dirichlet distribution parametrized by θ (Eq. A1 in SI Text A). When θa is small, the distribution is peaked at Qa = 0, meaning that pathogen a is absent the majority of the time. For instance, for uniform θa ≡ θ ≪ 1, the effective number of pathogens present at any given time is Kθ (see SI Text C 3). Since the system only needs to learn about the pathogens that are present, the condition for efficient learning should naively be λt ~ Kθ, which is much easier to achieve for realistic encounter rates when θ is small.
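The scaling of the effective number of present pathogens can be checked numerically by sampling from a symmetric Dirichlet prior with small θ. Here we use the inverse Simpson index as one common measure of the effective number of co-occurring species (a rough sketch; the SI uses its own definition):

```python
import random

random.seed(0)

def sample_dirichlet(theta, K):
    # symmetric Dirichlet(theta) sample via normalized Gamma variates
    g = [random.gammavariate(theta, 1.0) for _ in range(K)]
    s = sum(g)
    return [x / s for x in g]

def effective_number(Q):
    # inverse Simpson index: an "effective species number" of Q
    return 1.0 / sum(q * q for q in Q)

K, theta = 1000, 0.05   # sparse regime: K*theta = 50 << K
samples = [effective_number(sample_dirichlet(theta, K)) for _ in range(200)]
mean_eff = sum(samples) / len(samples)

# the typical number of co-occurring pathogens is of order K*theta, not K
assert 10 < mean_eff < 200
```

For these parameters the typical landscape is dominated by a few dozen pathogens, even though K = 1000 are possible, which is what makes the distribution learnable from few encounters.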
Our theory can be used to quantify the benefit of memory as a function of the different immunological parameters. We compute the optimized cost function c(t) = ⟨∑a Qa · c(P̃⋆a)⟩, averaged over pathogen trajectories and encounter histories, and study how it decreases as a function of age, t, relative to the cost at birth c0 = c(t = 0), as the organism learns from pathogen encounters. This relative cost also depends on the encounter rate λ, the size of the pathogenic space K, the sparsity of the pathogenic space θ, and the timescale of change in the environment τ. Fig. 2A shows that the benefit of memory increases with pathogen sparsity – when θ is small, even a few encounters suffice to seed enough memory to reduce the cost of future infections. The cost saturates with age to a value c∞, either because memory approaches optimality, or because memory eventually gets discarded and renewed as the environment changes. Fast changing environments lead to an earlier and higher saturation of the cost with age (Fig. 2B) since learning and prediction are limited by decorrelation of the environment. The pathogen dynamics is sped up when there are strong selection pressures to evade immunity. Faster dynamics decrease learning efficiency and in turn reduce selective pressures. The effective timescale should in practice be set by a co-evolutionary balance between both effects.
Analytical arguments show that in the limit of few samples the relative cost c/c0 achievable in a static environment scales as λt/Kθ (SI Text C). In general, we find that the cost is a function of λte/Kθ, where the effective time te is defined via λte = |n(t)| − |n(0)|, with n(t) being the vector of the encounter counts discussed above (see SI Text A 4 for a derivation from Eqs. 5-6). Plotted in terms of this variable, the relative cost gap as a function of time, (c − c∞)/(c0 − c∞), collapses onto a single curve for all parameter choices (Fig. 2C). Fig. 2C shows that the cost drops by a factor of ~ 2 when λte/Kθ ~ 1. Thus, there is a substantial benefit to memory already when the effective number of encounters is comparable to the effective number of pathogens. At young ages (small t) or with slowly changing environments (large τ), te ≈ t and so this condition is simply λt ~ Kθ, i.e. the total number of encounters should be comparable to the effective number of pathogens that are present.
C. Optimal attrition timescale
Our theory suggests that there is an optimal timescale for forgetting about old infections which is related to the timescale over which the environment varies. Eq. 6 shows that memory should optimally be discounted on an effective timescale τmem = 2τ/(|n| − 1). Comparing this to the slowest timescale of environmental variation, τc = 2τ/|θ| (Eq. A35), where |θ| = ∑a θa, we have

τmem/τc = |θ|/(|n| − 1).  (8)
The timescale on which old memories should be forgotten scales with the environmental correlation timescale. The two timescales are equivalent when the immune system has little information about the pathogenic environment (|n| ~ |θ|). Given the long timescales over which many relevant pathogens change, immune memory should generally be long-lived (with the timescale of decay being of the order of years or decades). Indeed, despite the relatively short life span of memory cells [24], constant balanced turnover keeps elevated levels of protection for decades after an infection, even in the absence of persistent antigens [25–27].
Interestingly, our theory predicts that memory should be discounted more quickly when the immune system has gathered more information (larger |n|). Using the mean-field equations for |n(t)| − |n(0)| from Results III B we can derive how the memory time scales at steady state at high sampling rate. Using that |n(t)| ≫ |n(0)| holds at large times in the high sampling rate limit, one can simplify the mean-field result to |n| ≈ √(2λτ) in steady state (t → ∞). Combined with Eq. 8, τmem ≈ √(2τ/λ) follows, which shows that a larger sampling rate leads to a faster discounting of past evidence. This is reminiscent of results in optimal cellular signalling, where there are similar trade-offs between noise averaging and responsiveness to changes in the input signal [28].
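This scaling can be checked against a sketch of the mean-field dynamics for the total count. The specific form of the balance equation below (encounters adding counts at rate λ, attrition removing them on the timescale 2τ/(|n| − 1)) is our assumption for illustration, not taken verbatim from the SI:

```python
import math

lam, tau, theta_tot = 100.0, 50.0, 1.0

# Mean-field balance for the total count |n| (form assumed here):
# d|n|/dt = lam - (|n| - 1)(|n| - |theta|)/(2*tau)
n_tot, dt = theta_tot, 1e-3
for _ in range(200000):  # integrate to t = 200, long enough to converge
    n_tot += (lam - (n_tot - 1.0) * (n_tot - theta_tot) / (2.0 * tau)) * dt

tau_mem = 2.0 * tau / (n_tot - 1.0)        # attrition timescale at steady state
tau_mem_pred = math.sqrt(2.0 * tau / lam)  # high-sampling-rate prediction

assert abs(n_tot - (1.0 + math.sqrt(2 * lam * tau))) < 1.0
assert abs(tau_mem - tau_mem_pred) / tau_mem_pred < 0.05
```

Here a hundredfold increase in the encounter rate shortens the optimal attrition timescale tenfold, illustrating the square-root discounting trade-off.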
D. Memory production in sparse environments should be large and decrease with prior exposure
The theory can be used to make quantitative and testable predictions about the change in the level of protection that should follow a pathogen encounter. Consider an infection cost function that depends as a power law on the coverage, c(P̃a) = P̃a^(−α), with a cost exponent α that sets how much attention the immune system should pay to recognizing rare threats. (Below we will use the shorthand α = 0 to indicate logarithmic cost.) Cost functions of this form can be motivated by considering the time to recognition of an exponentially growing antigen population by the immune system [15], or, alternatively, by considering the time delay of the expansion of the precursor cells to some fixed number of effector cells (SI Text D).
In the simplest model for repertoire updates, recognition of pathogens leads to proliferation proportional to the number of specific precursor cells, followed by a homeostatic decrease of the memory pool [18, 29]. Thus the fold-change P̃a(t+)/P̃a(t−) = const, where t−, t+ are times just before and after the encounter. By contrast, our Bayesian theory predicts that the fold change upon encountering pathogen a should be

P̃⋆a(t+)/P̃⋆a(t−) = (1 + κ/P̃⋆a(t−)^(1+α))^(1/(1+α)),  (9)

where κ depends on prior expectations about the antigenic environment and previous pathogen encounters (see Methods V C). Setting α = 0 gives the result for a logarithmic cost function.
To understand this prediction first consider the effect of a primary infection on a naive repertoire, θa ≡ θ, P̃⋆(0) = 1/K, and |n(0)| = Kθ, where the receptors are uniquely specific (fa,r = δa,r). In this case κ = 1/(K^(1+α)θ) (see SI Text B 1) and Eq. 9 predicts a fold-change of (1 + 1/θ)^(1/(1+α)). We have argued previously that learnability implies that pathogenic environments are sparse, i.e. θ ≪ 1. Therefore we predict that a primary antigenic encounter should lead to a large memory production. Experimentally, memory production typically leads to the proliferation of antigen-specific cells by a factor of 100-1000-fold [14], in qualitative agreement with this prediction. Turning the argument around, such a large increase in protection upon an encounter is only adaptive in highly sparse environments. Quantitatively, it implies a sparsity parameter θ ~ 10^−6–10^−4 (here taking α = 1 for definiteness) (Fig. 3B). Combined with the estimate K ~ 10^5–10^7 [23], this suggests that the effective number of pathogens at any given time ranges from Kθ = 0.1 to 1,000.
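A sketch of this prediction, assuming the fold-change rule takes the explicit form (1 + κ/P̃^(1+α))^(1/(1+α)), which reproduces the naive-repertoire limit (1 + 1/θ)^(1/(1+α)) quoted above:

```python
def fold_change(P_prev, kappa, alpha):
    # Bayesian boost of coverage upon an encounter (Eq. 9, as assumed here)
    return (1.0 + kappa / P_prev ** (1.0 + alpha)) ** (1.0 / (1.0 + alpha))

K, theta, alpha = 10**6, 1e-4, 1.0
kappa = 1.0 / (K ** (1.0 + alpha) * theta)   # naive-repertoire value of kappa

# primary response of a naive repertoire, coverage P = 1/K
primary = fold_change(1.0 / K, kappa, alpha)
assert abs(primary - (1.0 + 1.0 / theta) ** (1.0 / (1.0 + alpha))) < 1e-6
assert 100 <= primary <= 1000   # the experimentally observed range

# boosting diminishes with preexisting immunity, unlike constant-fold models
assert fold_change(1e-2, kappa, alpha) < fold_change(1e-4, kappa, alpha)
```

For θ = 10^−4 and α = 1 the predicted primary fold-change is ≈ 100, at the lower end of the 100-1000-fold expansions reported experimentally.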
To test Eq. 9 on immunological data, we fit the Bayesian update model to experiments reporting fold-changes in antigen titers upon booster vaccinations against influenza from [30] (Fig. 3A) using least squares. Titers correspond to the concentration of antibodies that are specific to the antigen a, and can thus be viewed as an experimental estimate of P̃a. The optimal Bayesian strategy explains the data, accounting for the larger boosting at small prevaccination titers and showing no increase for large titers, while the proportional model predicts constant boosting for all titers. Similar experimental results have been reported for antibody titers before and after a shingles vaccination [31]. Mechanistic models have been proposed to explain how the population dynamics of expanding lymphocytes might give rise to non-proportional boosting for both B cells and T cells [29, 32–34].
Interestingly, for T cells Quiel et al. [35] have shown that fold expansion to peak cell numbers in an adoptive transfer experiment depends on the initial number of T cells as a power law with exponent ~ −1/2. That scaling, which is for the peak expansion, predicts more expansion at high precursor numbers than Eq. 9, which is for memory production. This implies a nonlinear relationship between the peak T cell level and memory production, which further suppresses memory production at high precursor numbers. This prediction could be checked in experiments measuring memory production after infection clearance, as well as the expansion peak.
E. Long-term dynamics of a well-adapting repertoire
Our model makes predictions for the dynamics of growth and attrition of memory over time, with consequences for immunity and for the diversity of the immune repertoire. We quantify the dynamics in terms of a memory fraction, defined as the sum of the coverage fractions over all previously encountered pathogens {ai}. The memory fraction measures the size of memory relative to the size of the whole immune repertoire. Early in life every infection is new, and even modest increases in the memory fraction lead to large drops in infection susceptibility (measured by the expected cost of new infections in Fig. 4A). At the same time, the memory fraction increases rapidly (Fig. 4B), but the growth of memory slows as subsequent infections lead to less memory production following the optimal fold-change rule in Eq. 9, and as attrition begins to play a role. The fraction of the repertoire devoted to memory in mid-life is largely determined by how the cost of infections scales with coverage.
The observed memory fraction of ~ 50% at mid-life suggests a cost exponent of α ≈ 0.5 (Fig. 4B). The diversity of the memory repertoire increases with time at a rate that slows with age (quantified in Fig. 4C by richness, which measures the number of unique specificities, and the Shannon entropy of the repertoire frequency distribution.)
To gain insight into the dynamics of our model, we average the stochastic equations over the statistics of pathogen encounters. We show in SI Text B 2 that this mean-field approximation yields a differential equation for the population fraction of different clones with two opposing contributions, which balance alignment of the immune repertoire with the current pathogenic environment (i.e. memory production) against alignment with the long-term mean environment (i.e. attrition). Interestingly, the mean-field equation broadly coincides with dynamics that were proposed in [15] to self-organize an optimal immune repertoire. The essential difference here is that the time-scale of learning slows down with increasing experience, following the rules of optimal sequential update in Eq. 9.
We then asked which features of the proposed repertoire dynamics are most relevant to ensure its effectiveness. How important is the negative correlation between fold expansion and prior immune levels, and how important is attrition? Furthermore, if the immune system follows Bayesian dynamics it must have integrated on an evolutionary time scale a prior about composition and evolution of the pathogen environment through the parameters θ and τ – however, the prior may be inaccurate. How robust is the benefit of memory to imperfections of the host’s prior assumptions about pathogen evolution? To answer these questions we compare the long term immune repertoire dynamics using the optimal Bayesian scheme to other simplified schemes. We find that a constant fold expansion dynamics quickly leads to very suboptimal repertoire compositions (Fig. 4D, pink line), since the exponential amplification of cells specific to recurrent threats quickly leads to a very large fraction of the repertoire consisting of memory of those pathogens (Fig. 4E, pink line). This sub-optimality persists even if we assume that some global regulation caps the constant fold expansion such that no individual receptor clone can take over all of the repertoire (Fig. 4D,E grey line). Thus, negative feedback in T cell expansion to individual antigens is very important to maintain a properly balanced diverse repertoire. In contrast, within a dynamics with a negative correlation, the precise levels of updating do not need to be finely tuned to the environmental statistics: varying the assumed sparsity of the pathogen distribution, which controls fold expansion upon primary infection in the optimal dynamics, leads to a relatively modest deterioration of the convergence speed of the learning process (Fig. S3A) and does not matter asymptotically (Fig. S3B). 
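The takeover effect under constant fold expansion can be reproduced in a few lines (a toy calculation, not the paper's full simulation): with a global normalization enforcing the resource constraint, a few recurrences of the same pathogen suffice for its specific clone to crowd out the rest of the repertoire.

```python
K, F = 1000, 100.0               # repertoire size and constant fold factor

P = [1.0 / K] * K                # uniform naive repertoire
for _ in range(5):               # five recurrences of the same pathogen
    P[0] *= F                    # constant-fold expansion of the specific clone
    s = sum(P)
    P = [p / s for p in P]       # global normalization (resource constraint)

# the recurrent pathogen's clone takes over almost the entire repertoire,
# leaving essentially no coverage for other threats
assert P[0] > 0.99
```

Under the Bayesian rule, by contrast, the fold change shrinks as the clone's coverage grows, which prevents this collapse of repertoire diversity.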
Attrition does not matter at young age, but can play an important role for long-term adaptation to relatively rapidly changing pathogen distributions (Fig. 4F). However, the attrition time scale need not be finely tuned to get close to optimal dynamics (Fig. S4).
F. Adapting a cross-reactive repertoire
Above, we described adaptation of immune repertoires in terms of changes in the effective coverage P̃a = ∑r fa,r Pr, where the cross-reactivity matrix F = (fa,r) reflects the ability of each receptor to recognize many antigens, and also the propensity of each antigen to bind to many receptors [22]. Because of cross-reactivity, each pathogen encounter should result in the expansion of not just one but potentially many receptor clones. Here we ask how the optimal immune response is distributed among clones with different affinities.
Following Perelson and Oster [38], we will represent the interaction of receptors and antigens by embedding both in a high-dimensional metric recognition “shape space”, where receptors are points surrounded by recognition balls. Antigens that fall within a ball’s radius will be recognized by the corresponding receptor. In this representation a and r are the coordinates of antigens and receptors respectively, and their recognition propensity depends on their distance, fa,r = f(|a − r|).
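A minimal sketch of coverage in a one-dimensional shape space, with a Gaussian recognition kernel standing in for the hard recognition ball (kernel choice and parameters are illustrative assumptions):

```python
import math

def f(distance, radius=1.0):
    # Gaussian recognition propensity as a function of shape-space distance,
    # a smooth stand-in for a hard recognition ball of the given radius
    return math.exp(-(distance / radius) ** 2)

# receptors and antigens as points on a 1D shape space
receptors = [0.0, 5.0]
P = [0.5, 0.5]                    # clone frequencies

def coverage(a):
    # P~_a = sum_r f(|a - r|) P_r : coverage of antigen at position a
    return sum(f(abs(a - r)) * p for r, p in zip(receptors, P))

# an antigen near a receptor is well covered; one between receptors is not
assert coverage(0.1) > 0.4
assert coverage(2.5) < 0.01
```

Because each clone covers a whole neighborhood of antigens, expanding one clone raises the coverage of all nearby antigens at once, which is what sets up the competitive exclusion between overlapping clones discussed next.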
Earlier sections have already discussed the optimal dynamics of the coverage P̃a, which is a convolution of the cross-reactivity matrix with the receptor clone distribution Pr. Thus, the optimal dynamics of the clone distribution can be derived by deconvolving the cross-reactivity, subject to the constraint that Pr cannot be negative. Carrying out this analysis in SI Text B 3 reveals a general qualitative phenomenon – competitive exclusion between clones expressed in the repertoire and their close neighbors within the cross-reactivity radius (Fig. S5, blue line). This exclusion is not an assumption of the model, but rather stems from the optimal Bayesian theory. Given a receptor clone that covers one region of antigenic shape space, the global likelihood of detecting infections increases by placing other clones to cover other regions. This can be shown analytically when cross-reactivity is limited, memory updates are small in magnitude, and the pathogen distribution is assumed to be uncorrelated (SI Text B 3).
In general, the frequencies of pathogens might be correlated in antigenic space, for example because mutations from a dominant strain give rise to new neighboring strains. An optimally adapting immune system should incorporate such correlations as a prior probability favoring smoothness of the pathogen distribution. Such priors work their way through the optimal belief update scheme that we have described, and weaken the competitive exclusion between clones with overlapping cross-reactivity (Fig. S5, orange line).
In general, when cross-reactivity is wide or the required clone fraction update is large, numerical analysis shows that achieving optimally predictive immunity after a pathogen encounter requires a global reorganization of the entire repertoire (Fig. S6, blue line). There is no plausible mechanism for such a large scale reorganization since it would involve up- and down-regulation even of unspecific clones. However, in SI Text B 3 we show that the optimal update can be well-approximated by changes just to the populations of specific clones with pathogen binding propensities fa,r that exceed a threshold. The optimizing dynamics with this constraint exhibits strong competitive exclusion, where only the highest affinity clones proliferate, while nearby clones with lower affinity are depleted from the repertoire (Fig. S6, orange line). The local update rule provides protection that comes within 1 percent of the cost achievable by the best global update. Thus, reorganization of pathogen-specific receptor clone populations following an infection, as seen in vertebrates, can suffice to achieve near-optimal predictive adaptation of the immune repertoire.
IV. DISCUSSION
The adaptive immune system has long been viewed as a system for learning the pathogenic environment [10]. We developed a mathematical framework in which this notion can be made precise. In particular, we derived a procedure for inferring the frequencies of pathogens undergoing an immigration-drift dynamics and showed how such inference might approximately be performed by a plausible population dynamics of lymphocyte clones. We also argued that the antigenic environment must be effectively sparse to be learnable with a realistic rate of pathogen encounters. The optimal repertoire dynamics in sparse antigenic environments naturally produces a number of known properties of the adaptive immune system including a large memory production in naive individuals, a negative correlation of memory production with preexisting immune levels, and a sublinear scaling with age of the fraction of the repertoire taken up by memory of past infections.
Our framework is easily extended to incorporate further aspects of pathogen evolution, e.g. mutational dynamics in antigenic space. Such dynamics will lead to correlations in the pathogen distribution which we showed will influence the structure of the optimal conjugate repertoire. In particular, the optimal response should spread around the currently dominant antigens to also provide protection against potential future mutations. Hypermutations in B cells may play a role in this diversification, in addition to their known function of generating receptors with increased affinity for antigens of current interest. It would also be interesting to extend our framework to other immune defense mechanisms, including innate immunity, where the role of memory has received recent attention [39].
Although our study was motivated by the adaptive immune system, some of our main results extend to other statistical inference problems. We have extended earlier results on exactly computable solutions to the stochastic filtering problem for Wright-Fisher diffusion processes [40] to derive an efficient approximate inference procedure. This procedure might be of use in other contexts where changing distributions must be inferred from samples at different time points, e.g., in population genetics. Additionally, we have derived the convergence rate for Bayesian inference of categorical distributions in high dimensions in the undersampled regime, showing that effectively sparse distributions can be inferred much more quickly. These results add to the growing literature on high-dimensional inference from few samples [41, 42], which has arisen in the context of the big data revolution. We propose that the adaptive immune system balances integration of new evidence against prior knowledge, while discounting previous observations to account for environmental change. Similar frameworks have been developed for other biological systems. In neuroscience, leaky integration of cues has been proposed as an adaptive mechanism to discount old observations in change-point detection tasks [43, 44], and close-to-optimal accumulation and discounting of evidence has been reported in a behavioral study of rat decision-making in dynamic environments [45]. Inference from temporally sparse sampling has been considered in the framework of infotaxis, which is relevant for olfactory navigation [46]. In the context of immunity, related ideas about inference and prediction of pathogen dynamics have been used to predict flu strain and cancer neoantigen evolution in silico [47, 48]. 
Finally, ideas similar to those developed here could be used in ecology or microbiome studies to reconstruct evolutionary or ecological trajectories of population dynamics from incomplete sampling of data at a finite number of time points, e.g., from animal sightings or metagenomics.
V. METHODS
A. Modeling pathogen dynamics by an immigration-drift process
In our model we describe the stochastic dynamics of the pathogenic environment (Fig. 1A) by a Fokker-Planck equation for the conditional probability distribution ρ(Q, t),

∂ρ/∂t = 𝒜ρ,

where 𝒜 is a differential operator acting on ρ that controls the dynamics. For concreteness, we consider a population that changes due to genetic drift and immigration from an external reservoir, which we describe by a Wright-Fisher diffusion equation [49, 50],

𝒜ρ = (1/τ) ∑_a ∂/∂Q_a [ −(θ_a − |θ| Q_a) ρ + (1/2) ∑_b ∂/∂Q_b ( Q_a (δ_{a,b} − Q_b) ρ ) ],

where τ sets the time scale of the dynamics, θ is a K-dimensional vector of immigration rates, and δ_{a,b} is the Kronecker delta, which is 1 if a = b and 0 otherwise. Here and in the following we denote the norm of a vector x by |x| = ∑_i x_i. To efficiently simulate trajectories according to this dynamics we sample the new distribution of frequencies directly from the transition density of the stochastic process as described in App. A 5. This dynamics retains key features of real pathogen environments. First, at a given point in time the environment contains many different pathogens whose frequencies are determined by genetic drift and immigration. Second, the dominant pathogens change over time, as is the case for many viruses, e.g., influenza or HIV.
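As a numerical illustration, the drift and diffusion terms above can be integrated with a simple Euler-Maruyama scheme. This is a sketch only: the parameter values are arbitrary, the function name is ours, and the exact transition-density sampling described in App. A 5 is preferable for production use.

```python
import numpy as np

def simulate_wright_fisher(K=50, theta=0.02, tau=1.0, dt=1e-3, steps=2000, seed=0):
    """Euler-Maruyama sketch of Wright-Fisher diffusion with immigration:
    drift (theta_a - |theta| Q_a)/tau and noise covariance Q_a(delta_ab - Q_b)/tau."""
    rng = np.random.default_rng(seed)
    theta_vec = np.full(K, theta)
    Q = rng.dirichlet(theta_vec + 1.0)  # initial pathogen frequencies
    for _ in range(steps):
        drift = (theta_vec - theta_vec.sum() * Q) / tau
        cov = (np.diag(Q) - np.outer(Q, Q)) / tau  # WF diffusion covariance
        noise = rng.multivariate_normal(np.zeros(K), cov)
        Q = Q + drift * dt + np.sqrt(dt) * noise
        Q = np.clip(Q, 1e-12, None)  # crude reflection at the simplex boundary
        Q /= Q.sum()                 # re-project onto the simplex
    return Q
```

Over long runs the frequencies fluctuate around the immigration equilibrium; for smaller θ (sparser environments) fewer pathogens dominate at any one time.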
B. Minimizing the cost of infection
To solve the optimization problem Eq. 13 analytically, a set of necessary conditions for optimality, the so-called Karush-Kuhn-Tucker conditions, can be derived. When all receptors are present at a non-zero frequency in the optimal repertoire, these conditions imply [15]

∑_a Q_a f_{a,r} c′(P̃_a) = λ⋆ for all r,

where the Lagrange multiplier λ⋆ is set by the normalization condition ∑_r P⋆_r = 1. If we further simplify the problem by assuming that there is no cross-reactivity between different pathogens (f_{a,r} = δ_{a,r}) and by considering power-law cost functions c(P) = P^{−α}, then this simplifies to the explicit solution

P⋆_r = Q_r^{1/(1+α)} / Z,

where Z = ∑_{r′} Q_{r′}^{1/(1+α)} is a normalization constant. Other cases are discussed in detail in [15], including how to solve the optimization problem numerically using a projected gradient algorithm in the general case.
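The explicit power-law solution can be sketched in a few lines (the function name and example values are ours, not from the paper):

```python
import numpy as np

def optimal_repertoire(Q, alpha=1.0):
    """Optimal receptor frequencies for power-law cost c(P) = P^-alpha
    without cross-reactivity: P_r proportional to Q_r^(1/(1+alpha))."""
    P = np.asarray(Q, dtype=float) ** (1.0 / (1.0 + alpha))
    return P / P.sum()

Q = np.array([0.5, 0.3, 0.2])
P = optimal_repertoire(Q, alpha=1.0)  # flatter than Q: rare threats still covered
```

Because the exponent 1/(1+α) is below one, the optimal repertoire is flatter than the pathogen distribution, hedging against rare pathogens; for α → 0 (logarithmic cost) the allocation becomes proportional, P = Q.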
C. Change in protection upon a pathogen encounter
The inference dynamics induces, via the mapping from Q̂ to P⋆ (Eq. 13), a dynamics of an optimally adapting immune repertoire. To get intuition we derive how the coverage changes in a simple setting in which Eq. 13 holds (further cases are considered in SI Text B).
By combining Eqs. 5 and 7 we obtain an update equation for the expected frequencies upon encounter of antigen a,

Q̂+_b = (n−_b + δ_{a,b}) / (|n−| + 1),

where to simplify notation we write Q̂(t+) = Q̂+, and where |n+| = |n−| + 1. Using Eq. 13 it follows that coverages are updated as

P̃+_a = (Q̂+_a)^{1/(1+α)} / Z+, with Z+ = ∑_b (Q̂+_b)^{1/(1+α)}.
Defining κ := 1/(|n−| Z−^{1+α}) and neglecting the change in normalization, which is of order 1/K relative to the update size, we obtain Eq. 9. To fit the data set we note that a proportional rescaling of P̃_a by a factor k can be subsumed within the model by redefining κ → κ k^{1+α}. Therefore the scaling of P̃_a to an antibody titer can be subsumed within κ.
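A quick numerical check of this local update is possible, assuming the explicit power-law mapping P̃_a ∝ Q̂_a^{1/(1+α)} of Methods; the function name and parameter values below are ours.

```python
import numpy as np

def coverage_update(n, a, alpha=1.0):
    """Compare the exact coverage update after one encounter with antigen a
    against the kappa approximation (normalization change neglected)."""
    n = np.asarray(n, dtype=float)
    Q = n / n.sum()                       # expected frequencies before encounter
    P_unnorm = Q ** (1 / (1 + alpha))
    Z = P_unnorm.sum()
    P = P_unnorm / Z                      # coverage before the encounter
    n_plus = n.copy()
    n_plus[a] += 1.0                      # Bayesian count update for antigen a
    Q_plus = n_plus / n_plus.sum()
    P_plus = Q_plus ** (1 / (1 + alpha))
    P_plus /= P_plus.sum()                # exact coverage after the encounter
    kappa = 1.0 / (n.sum() * Z ** (1 + alpha))
    P_approx = P[a] * (1 + kappa / P[a] ** (1 + alpha)) ** (1 / (1 + alpha))
    return P_plus[a], P_approx
```

For a high-dimensional repertoire (e.g. K = 200, effective counts θ = 0.1 per pathogen) the exact and approximate updates agree to about a percent.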
SI Text B: Induced repertoire dynamics
1. Dependence of fold change upon a pathogen encounter on sparsity
To understand how memory production depends on environmental sparsity we specialize Eq. 9 to the case of a uniform prior distribution. We then have |n| ≈ Kθ and Z− ≈ ∑_a (1/K)^{1/(1+α)} = K^{α/(1+α)}, which for Kθ ≫ 1 leads to κ = 1/(K^{1+α} θ). The fold change upon an encounter of a pathogen starting from a naive repertoire thus depends as follows on the sparsity of the environment:

P̃+_a / P̃_a = (1 + 1/θ)^{1/(1+α)}.
2. Mean-field dynamics
Besides the large changes of the naive repertoire upon a primary infection, there are situations in which the inferred distribution changes in a more continuous manner, e.g. updating in the limit of many previous samples, or the prediction step. We thus now ask how small changes in the expected frequencies of pathogens Q̂ change the coverage P̃. We assume that there is no cross-reactivity, f_{r,a} = δ_{r,a}, and consider power-law cost functions, for which the optimal receptor frequency distribution is P̃_r = Q̂_r^{1/(1+α)} / Z with Z = ∑_{r′} Q̂_{r′}^{1/(1+α)}. As a preliminary we calculate the Jacobian

∂P̃_r/∂Q̂_a = P̃_a / ((1+α) Q̂_a) (δ_{r,a} − P̃_r).
We can then show that the dynamics in terms of P̃ follows

dP̃_r/dt = P̃_r (f_r − f̄),

which is of the form of a replicator equation with “fitness” f_r = (1+α)^{−1} d ln Q̂_r/dt and mean fitness f̄ = ∑_{r′} P̃_{r′} f_{r′}.
Based on this general result we now analyze the dynamics of the repertoire due to the sequential Bayesian filtering. Equivalently to Eq. 14, the change of the inferred distribution upon encountering antigen a, ∆Q̂ = Q̂+ − Q̂, is given by

∆Q̂ = (e_a − Q̂) / (|n| + 1),

where e_a is the unit vector with a-th entry one and all others zero. Asymptotically, for large |n|, every update has only a small effect, and we may consider a mean-field description. In this description we replace e_a by its expectation value Q and define an average rate of change per unit time by multiplying the update size by the frequency λ of pathogen encounters:

dQ̂/dt = λ (Q − Q̂) / (|n| + 1).
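The mean-field belief dynamics can be integrated directly. The following is a sketch under our own assumptions: the update dQ̂/dt = λ(Q − Q̂)/(|n|+1) with an effective count |n| ≈ Kθ + λt, and an explicit Euler discretization.

```python
import numpy as np

def mean_field_inference(Q, K_theta=10.0, lam=1.0, dt=0.01, T=500.0):
    """Integrate dQhat/dt = lam*(Q - Qhat)/(|n|+1), where the effective
    count |n| = K_theta + lam*t grows with accumulated evidence, so
    belief updates slow down over time."""
    Q = np.asarray(Q, dtype=float)
    Qhat = np.full(len(Q), 1.0 / len(Q))  # uniform prior belief
    n_steps = int(T / dt)
    for i in range(n_steps):
        n_tot = K_theta + lam * i * dt
        Qhat = Qhat + dt * lam * (Q - Qhat) / (n_tot + 1.0)
    return Qhat
```

Because the learning rate decays as 1/(|n|+1), the residual error shrinks only as 1/t at late times, reflecting the growing weight given to past experience over new evidence.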
Here Q is the actual distribution of pathogens, and Q̂ are the expected frequencies of pathogens based on the immune system’s internal belief. For the prediction step we have a dynamics for counts, which we can convert into a dynamics for the inferred distribution. We have Q̂_r = n_r/|n| and a dynamics on counts given by Eq. 6. From there we obtain

dQ̂_r/dt = (|θ|/τ) (Q⁰_r − Q̂_r),

where Q⁰ = θ/|θ| is the prior guess for the distribution. Taken together we have

dQ̂_r/dt = λ′(t) (Q_r − Q̂_r) + δ(t) (Q⁰_r − Q̂_r),

with the (time-varying) coefficients λ′(t) = λ/(|n| + 1) and δ(t) ≈ |θ|/τ.
The fitness in the replicator equation is then

f_r = (1+α)^{−1} [ λ′(t) (Q_r/Q̂_r − 1) + δ(t) (Q⁰_r/Q̂_r − 1) ].
The fixed point of the dynamics in a static environment with δ(t) = 0 is the optimal repertoire, as expected from the asymptotic optimality of Bayesian inference. Replacing Q̂_r = (Z P̃_r)^{1+α} we then obtain a fitness which, except for the prefactor, is equivalent to the population dynamics proposed previously in [15]. That work did not consider the prefactor, which leads to a slowing down of the dynamics with time to reflect a tradeoff between new evidence and past experience. The prediction step relaxes the inferred distribution towards the prior distribution with a speed that for large |n| is proportional to |θ|/τ.
3. Updating a cross-reactive repertoire
We now consider the repertoire dynamics in the presence of cross-reactivity. In a first-order Taylor expansion the change in the repertoire composition upon a pathogen encounter is given by

∆P⋆ ≈ J ∆Q̂,

where J = ∂G/∂Q̂ is the Jacobian of the mapping function G (Eq. 13).
The mapping between pathogen frequencies and the optimal repertoire takes the form G(Q) = F^{−1} P̃⋆(Q) (if achievable given the constraint that no receptor frequency can be negative), where P̃⋆(Q) is a function that depends on the cost function. The Jacobian can thus be calculated using the chain rule as

J = F^{−1} ∂P̃⋆/∂Q.
For the power-law cost function an explicit expression for P̃⋆(Q) is available, in which R = ∑_a f_{r,a} is the row sum of F, which we assume to be constant. Analogously to the derivation of the Jacobian in the previous section we compute ∂P̃⋆/∂Q, from which the induced repertoire dynamics follows with some algebra.
Here, there is a departure from the dynamics of the number Nr of lymphocytes with receptor r proposed in [15], where proliferation is proportional to fr,a, instead of (F −1)r,a.
SI Text C: Inference of high-dimensional categorical distributions from few samples
1. Mean cost versus time
In this Appendix we derive analytical expressions for the optimized cost as a function of time under the following simplifying assumptions: absence of cross-reactivity, f_{a,r} = δ_{a,r}; no attrition, τ → +∞; and a power-law cost function c(P) = P^{−α}. The prior on Q is a homogeneous Dirichlet distribution:

ρ(Q) ∝ ∏_a Q_a^{θ−1}.
This problem is equivalent to the Bayesian inference of a distribution drawn from a Dirichlet meta-distribution. Asymptotic convergence properties of Bayesian inference procedures are well-established [51], but the convergence speed of Bayesian estimators of the distribution in the non-asymptotic regime has been much less studied to our knowledge. Analysing the behaviour of c(t) is equivalent to analysing the convergence of the estimated distribution to the true one with increasing number of samples. Here we will establish the relevant scaling for few samples.
We consider the biologically relevant regime of high dimension but effective sparsity of the distribution, Kθ ≫ 1, θ ≪ 1. Our main insight is that for such sparse distribution Bayesian inference is effective when the number of samples is on the order of a few Kθ, instead of the potentially much larger K.
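This scaling is easy to probe in simulation (a sketch with illustrative parameter values of our choosing): draw a sparse Q from the Dirichlet prior, sample m encounters, and measure the error of the posterior-mean estimate q̂ = (θ + n)/(Kθ + m).

```python
import numpy as np

def posterior_l1_error(K=1000, theta=0.02, m=0, trials=25, seed=1):
    """Mean L1 error of the Bayesian posterior-mean estimate of a sparse
    categorical distribution (homogeneous Dirichlet prior) after m samples."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(trials):
        Q = rng.dirichlet(np.full(K, theta))   # sparse truth: K*theta = 20
        n = rng.multinomial(m, Q)              # m pathogen encounters
        Q_hat = (theta + n) / (K * theta + m)  # posterior mean
        errors.append(np.abs(Q_hat - Q).sum())
    return float(np.mean(errors))
```

With K = 1000 but Kθ = 20, a few hundred samples, i.e. a few times Kθ and far fewer than K, already reduce the error severalfold relative to the prior-only estimate.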
The prominent role sparsity plays in allowing for more efficient estimation is reminiscent of compressed sensing [55]. Non-asymptotic results about inference in high-dimensional settings have been explored recently in the context of machine learning [42]. Both connections merit further exploration.
We define the expected cost as 〈c(t)〉, where the average is taken over both random choices of Q and random realizations of the pathogen encounters, n, which are distributed according to independent Poisson laws:

P(n_a | Q) = e^{−λtQ_a} (λtQ_a)^{n_a} / n_a!,
where λ is the encounter rate. The numbers of encounters determine the average belief for Q, which itself shapes the optimal response and thus the cost through Eq. C1,

P̃_a = 〈Q_a | n〉^{1/(1+α)} / Z,

where Z is a normalization constant.
Note that the cost can be expressed in terms of a divergence between the best receptor distribution P⋆ given full knowledge of Q and the actual receptor distribution P. Defining P⋆_a = Q_a^{1/(1+α)} / Z̃, with Z̃ = ∑_a Q_a^{1/(1+α)}, and replacing into Eq. C1, we obtain an expression in which c∞ = Z̃^{1+α} is the asymptotic cost for P = P⋆ and D_β(P||Q) := (β − 1)^{−1} ln ∑_a P_a^β Q_a^{1−β} is the Rényi divergence of order β, which reduces to the standard Kullback-Leibler divergence for β = 1, i.e. α = 0.
2. Reducing the problem to a single pathogen
By symmetry all terms in the sum of Eq. C1 are equal on average, and we have

〈c〉 = K 〈q c(p)〉,

where we have introduced the shorthand notations q := Q_a and p := P̃_a, and where the average is taken over n := n_a. The pathogen frequency q is approximately Gamma-distributed:

ρ(q) ≈ (Kθ)^θ / Γ(θ) q^{θ−1} e^{−Kθq}.
In general, the expectation value depends through p on all previous encounters with any of the pathogens (i.e. on all the other n_{a′}, a′ ≠ a). In high dimensions we can approximate this dependence by neglecting the correlation of the normalization factor Z with q̂ and using an effective Z. For the power-law cost functions we then have

p = q̂^{1/(1+α)} / Z.
For logarithmic cost this simplifies to p = q̂ and Z = 1. From Eq. C9 it follows that
We have q̂ = (θ + n)/(Kθ + λt), but as the equation is invariant to a linear rescaling of q̂ we can replace q̂ by simply θ + n.
3. Costs for perfect or no information
Asymptotically the distribution is learned perfectly and we have q̂ = q. Plugging this into the expressions derived previously we obtain integrals for the power-law and logarithmic cost functions, respectively. Performing the integrals we obtain expressions in which Γ(z) is the Gamma function and γ is the Euler-Mascheroni constant. For α = 1 this specializes to c̄∞ = πKθ. For the logarithmic cost, c̄∞ equals the Shannon entropy of the distribution, which suggests an interpretation of Kθ as the effective number of pathogens that are present.
We can compare these costs with those obtained for a uniform repertoire, for power-law and logarithmic cost respectively. As expected, in sparser environments a larger relative improvement can be obtained by learning the distribution.
4. Scaling in the limit of few samples
In the limit of small sampling, each pathogen has been seen at most once, meaning that n is binary and distributed as a Bernoulli variable with mean λtq. Then we can use the approximation 〈(θ + n)^β〉 ≈ θ^β (1 − λtq) + λtq to obtain
Putting things together we obtain
which, except for a correction that vanishes as θ → 0, scales with λt/(Kθ).
For the logarithmic cost we similarly approximate 〈ln(θ + n)〉 ≈ (1 − λtq) ln θ + λtq ln(1 + θ) ≈ (1 − λtq) ln θ. We then have
Using the formulas for the first and second moments we obtain
Approximating further we have
Again the relative cost depends solely on λt/(Kθ) except for logarithmic corrections that vanish as K→ ∞ for fixed Kθ.
SI Text D: Infection cost in the expansion-delay regime
We have previously described mechanistic models that give rise to a power-law dependency of the infection cost on coverage [15], in which we assumed that the crucial determinant of the infection cost is the time delay until recognition of the pathogen by the immune system. Experimental evidence shows that the initial recruitment of a large fraction of all specific lymphocytes often happens rapidly compared to the time it takes for the adaptive immune system to start clearing the infection [56]. We thus might hypothesize that the advantage of higher precursor numbers lies not in shortening the time to detection but in shortening the time until a sufficiently large number of effector cells can respond.
To derive the scaling of infection cost with coverage under these conditions, we consider that after an infection at time 0, the number of specific cells grows exponentially at a rate γ, N(t) = N(0)e^{γt}. During the same time the pathogen population grows exponentially as well, at a rate γ_p, until a time t⋆ at which a threshold level N⋆ of specific cells is reached. The expansion-delay time scales as t⋆ = ln(N⋆/N(0))/γ. If we assume that the cost of an infection is proportional to the pathogen population P(t⋆), then the cost scales as a power law with the initial number of specific cells, N(0)^{−γ_p/γ}.
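The scaling argument can be checked numerically (a sketch; the parameter values are illustrative, not fitted):

```python
import numpy as np

def infection_cost(N0, N_star=1e6, gamma=1.5, gamma_p=1.0, P0=1.0):
    """Pathogen load at the time t* when specific cells reach N_star:
    t* = ln(N_star/N0)/gamma, load P(t) = P0 * exp(gamma_p * t)."""
    t_star = np.log(N_star / N0) / gamma
    return P0 * np.exp(gamma_p * t_star)

# Tenfold more precursors reduce the cost by a factor 10**(gamma_p/gamma):
ratio = infection_cost(100.0) / infection_cost(1000.0)
```

Eliminating t⋆ gives cost ∝ (N⋆/N(0))^{γ_p/γ}, i.e. exactly the stated power law in the initial precursor number N(0).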
Acknowledgements
The work was supported by ERC StG grant n. 306312, Simons MMLS grant 400425, and NSF grant PHY-1734030. Work on this project at the Aspen Center for Physics was supported by NSF grant PHY-1607611.
References