ABSTRACT
It is increasingly recognized that Alzheimer’s disease (AD) exists before dementia is present and that shifts in amyloid beta occur long before clinical symptoms can be detected. Early detection of these molecular changes is key to the success of interventions aimed at slowing rates of cognitive decline. Recent evidence indicates that of the two established methods for measuring amyloid, a decrease in cerebrospinal fluid (CSF) amyloid β1–42 (Aβ1–42) may be an earlier indicator of Alzheimer’s disease risk than measures of amyloid obtained from Positron Emission Tomography (PET). However, CSF collection is highly invasive and expensive. In contrast, blood collection is routinely performed, minimally invasive and cheap. In this work, we develop a blood-based signature that can provide a cheap and minimally invasive estimation of an individual’s CSF amyloid status using a machine learning approach. We show that a Random Forest model derived from plasma analytes can accurately predict subjects as having abnormal (low) CSF Aβ1–42 levels indicative of AD risk (0.84 AUC, 0.78 sensitivity, and 0.73 specificity). Refinement of the modeling indicates that only APOEε4 carrier status and four plasma analytes (CGA, Aβ1–42, Eotaxin 3, APOE) are required to achieve a high level of accuracy. Furthermore, we show across an independent validation cohort that individuals with predicted abnormal CSF Aβ1–42 levels transitioned to an AD diagnosis over 120 months significantly faster than those with predicted normal CSF Aβ1–42 levels and that the resulting model also validates reasonably across PET Aβ1–42 status (0.78 AUC).
This is the first study to show that a machine learning approach, using plasma protein levels, age and APOEε4 carrier status, is able to predict CSF Aβ1–42 status, the earliest risk indicator for AD, with high accuracy.
1 Introduction
Alzheimer’s disease (AD) is a terminal neurodegenerative disease that has historically been diagnosed based on “clinically significant” cognitive decline of an individual and exclusion of other conditions. However, it is increasingly recognized that AD is a decades-long neurodegenerative process, with shifts in amyloid β1–42 (Aβ1–42) providing the first indicators of disease development, long before “Alzheimer’s dementia” (significant cognitive decline) is clinically apparent1–5.
There is currently no cure or disease-modifying therapy for this terminal illness despite hundreds of clinical trials being conducted since 2002 (refs 6,7). It is hypothesized that the high failure rate of AD trials is in part due to the trials targeting AD patients with significant cognitive impairment, who are therefore in the late stages of the disease and likely have suffered a level of brain tissue loss that cannot be compensated for8. Compounding this is the discovery that many patients enrolled in clinical trials were retrospectively found to have normal levels of amyloid and hence did not have AD9, with this number as high as 20%10. Given these findings, there is great interest in amyloid screening for clinical trial enrichment in order to recruit individuals at the earliest stages of AD, where intervention is thought to have the greatest chance of success8. It would also ensure that included individuals are amyloid positive (i.e. have abnormal levels of amyloid), a necessary precondition for the development of AD. This sort of selective screening is an important precursor for the longer-term goal of population screening for AD11.
There are currently two established methods to measure an individual’s amyloid burden in vivo: reduced levels of Aβ1–42 in the cerebrospinal fluid (CSF), or increased uptake of radioactive tracers that bind selectively to Aβ fibrillar aggregates on PET imaging. Unfortunately, existing methodologies for measuring an individual’s amyloid levels suffer from drawbacks that limit their utility for screening. Lumbar punctures are highly invasive, with this factor alone limiting the applicability of CSF biomarkers for screening. While PET scans are less invasive, they are far more expensive and access to PET scanning facilities is limited in some regions. Despite this, many current trials that target amyloid now require positive amyloid imaging at baseline to ensure accurate diagnosis, a cost-intensive process7.
Despite their invasiveness, recent studies have found evidence that changes in CSF Aβ1–42 may indicate AD risk long before these same changes are reflected in PET Aβ imaging12–14. Palmqvist et al.13 have shown compelling evidence that changes in CSF Aβ1–42 occur up to a decade before the same signal is found by PET Aβ imaging. These results indicate that CSF may be a more suitable measure for early detection, whereas Aβ PET contributes independent information that is more related to disease progression and downstream pathology.
To bypass the invasiveness of CSF collection, there is strong interest in finding blood-based markers that yield the same information about amyloid status as would be obtained from CSF. A number of studies have shown that blood protein signatures can reflect AD brain pathology as measured by PET15–24. Of particular interest is the recent study by Nakamura et al. (2018)25, in which levels of Aβ1–40, Aβ1–42 and APP669–771 in plasma, measured using specialised immunoprecipitation (IP) coupled with Matrix Assisted Laser Desorption/Ionization (MALDI) time-of-flight (TOF) mass spectrometry (MS) (henceforth referred to as IP-MALDI-TOF-MS), were shown in combination to have strong performance (> 0.94 area under the receiver operating characteristic curve (AUC)) in predicting PET Aβ1–42 status across two cohorts. This combination of biomarkers was also found to be predictive of CSF Aβ1–42 status, with an AUC of 0.88 on a smaller subset of patients (n=46). The novel IP-MALDI-TOF-MS method employed by Nakamura et al.25 is still in its infancy and it is unclear how easily it will be translated into a clinical setting. Thus, there is still strong interest in finding blood markers for CSF Aβ1–42 using alternative approaches that rely on more established assays.
Here, we evaluated the ability of proteomic and metabolomic data to predict levels of CSF Aβ1–42 using a Random Forest (RF) approach and explored which types of measurements lead to the strongest predictive performance. We then determined the minimal set of features required to achieve comparable predictive performance. Finally, we evaluated the robustness and utility of these predictive models across a held-out validation cohort of individuals with mild cognitive impairment (MCI), demonstrating that subjects with predicted abnormal CSF Aβ1–42 levels showed a faster rate of cognitive decline (measured by the transition to a clinical AD diagnosis) than those with predicted normal CSF Aβ1–42 levels.
2 Methods
2.1 Overview of cohort and measurements
The Alzheimer’s Disease Neuroimaging Initiative (ADNI) is a large, multicenter, longitudinal neuroimaging study, launched in 2003 by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, the Food and Drug Administration, private pharmaceutical companies, and non-profit organizations.
ADNI is a longitudinal study of older adults, designed to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of MCI and early AD.
The ADNI study protocols were approved by the institutional review boards of all participating sites (http://www.adni-info.org/) and written informed consent was obtained from all participants or authorized representatives. All the analytical methods were performed on the de-identified data and were carried out in accordance with the approved guidelines. Study inclusion criteria and definitions for each diagnosis class have been previously reported in detail26. Briefly, individuals diagnosed with AD had to meet the National Institute of Neurological and Communicative Disorders and Stroke–Alzheimer’s Disease and Related Disorders Association criteria for probable AD (McKhann et al. 1984). These individuals had impairments in global cognition and memory function, and they, or their caretakers, reported significant concerns about their memory. In contrast, individuals with MCI exhibited subjective memory loss (a CDR of 0.5 and scores at least one standard deviation (SD) below the normal mean on the delayed recall of the Wechsler Memory Scale Logical Memory II) but showed preserved activities of daily living, an absence of dementia, and an MMSE score of 24–30.
2.2 Data preparation
We examined 566 individuals in the ADNI cohort who had baseline measures of age, APOEε4 carrier status, 193 protein levels (homocysteine, Aβ1–40, and Aβ1–42, plus a further 190 proteins measured on a Rules-Based Medicine (RBM) platform) and 186 LC-MS/MS metabolites and lipids. After applying previously documented quality control procedures (Supplementary Methods) and removing analytes with more than 15% missingness, 149 proteins and 138 metabolites remained. No samples were removed from the analysis as missingness levels were less than 5%. Any remaining missing data points were imputed using an unsupervised RF approach27; the resulting 289 analytes are listed in Supplementary Table 5, along with their means, SDs, and associations with CSF Aβ1–42 status. After quality control, a total of 566 individuals, each with measures for 289 analytes, age, and APOEε4 carrier status, were present in the ADNI cohort (Table 1).
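The cited unsupervised RF imputation is a missForest-style scheme; the sketch below approximates it with scikit-learn's experimental IterativeImputer using a Random Forest base learner on synthetic stand-in data (this is an illustrative stand-in, not the exact method of ref. 27).

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# Synthetic stand-in for the analyte matrix: 50 subjects x 5 analytes
X = rng.normal(size=(50, 5))
X[rng.random(X.shape) < 0.05] = np.nan  # ~5% missingness, as in the cohort

# missForest-style imputation: each analyte is iteratively regressed on the rest
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5, random_state=0)
X_complete = imputer.fit_transform(X)
print(np.isnan(X_complete).any())  # → False
```

After imputation, no missing values remain and the completed matrix can be passed to downstream modeling.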
2.3 Training and Validation cohorts
The 566 individuals in this study ranged from 54.4-89.6 years of age and could be categorized at baseline by their AD clinical diagnosis as cognitively normal (CN; n = 58), amnestic MCI (n = 396) or probable AD (n = 112). A breakdown of the demographics of the 566 individuals by baseline diagnosis and CSF availability is shown in Table 1.
This cohort was split into training and validation cohorts of 356 and 210 individuals with and without measures of CSF Aβ1–42, respectively. CSF Aβ1–42 was measured using the Luminex xMAP platform. The training set was used to build predictive models and evaluate their performance directly using the measured Aβ1–42 levels, while the validation cohort was used to evaluate the generalizability and utility of the model’s predictions. For each cohort, we also considered a subset of individuals for whom Aβ1–42 status from PET was available at at least one time point (not necessarily baseline), using either [11C]-Pittsburgh compound B (PiB) or [18F]-AV-45 (florbetapir, AV45) tracers, for further validation of our modeling. Further demographic information for these cohorts can be found in Supplementary Tables 1, 2 and 3.
2.4 Binary and regression modeling tasks
The primary aim of this work was to produce a model that predicts if an individual’s CSF Aβ1–42 levels are below the recognized clinical threshold of 192pg/ml for the Luminex platform, indicating an abnormal CSF Aβ1–42 level and hence increased AD risk. Given the continuous CSF Aβ1–42 measures in the ADNI cohort, two approaches were considered:
a ‘regression’ task: learning the continuous CSF Aβ1–42 levels and thresholding these post-prediction
a ‘binary’ task: learning the dichotomized CSF Aβ1–42 status based on clinical thresholds directly.
While both tasks result in a binary classifier, they face different trade-offs. The regression task makes use of the full information in the CSF levels but needs to learn a suitable threshold to convert its continuous predictions into suitable binary labels whereas the binary task only learns from the dichotomized CSF levels. Given these trade-offs, we have investigated both modeling approaches throughout this work.
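The distinction between the two tasks can be made concrete with a short sketch (Python; the predicted levels are purely illustrative, and 192 pg/ml is the Luminex cutoff introduced above, though the learned threshold may differ, as described in Section 2.5):

```python
import numpy as np

# Hypothetical continuous CSF Abeta1-42 predictions (pg/ml) from a regression model
predicted_levels = np.array([150.0, 210.5, 188.0, 240.2, 170.3])

# 'Regression' task: learn continuous levels, then threshold post-prediction.
threshold = 192.0
regression_labels = (predicted_levels < threshold).astype(int)  # 1 = abnormal (low)
print(regression_labels)  # → [1 0 1 0 1]

# 'Binary' task: a classifier trained on dichotomized labels would emit
# these 0/1 calls directly, without ever seeing the continuous levels.
```

The regression route retains the full information in the CSF levels at the cost of having to learn the cutoff; the binary route discards that information but needs no post hoc thresholding.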
2.5 Statistical modeling
We made use of Random Forests (RF) as the modeling approach to predict CSF Aβ1–42 levels for both the binary and regression tasks. RFs are a widely-used machine-learning ensemble method with a number of advantages for the small sample size and disparate types of features observed in the ADNI dataset. RFs are invariant to the scale of the observed features and make few assumptions about the distributions of observed data, allowing them to be applied easily to multiple data modalities. They can also detect non-additive relationships between variables without these needing to be included explicitly28.
All analysis in this work made use of the RF implementation in the R package ranger29. Each forest contained 2000 individual trees, each making use of a random selection of p^(3/4) features, where p was the total number of variables used in a given model. These parameter choices were based on recommendations provided in Ishwaran et al. (2011)30. All other parameters in the ranger implementation were set to their default values.
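As a sketch, the same configuration can be mirrored in scikit-learn (the feature count p below is illustrative; ranger itself was used in the study):

```python
from sklearn.ensemble import RandomForestRegressor

p = 149  # illustrative feature count (e.g. the proteins in the BP model)
mtry = round(p ** 0.75)  # p^(3/4) features tried at each split

# 2000 trees, as in the ranger configuration; remaining settings left at defaults
rf = RandomForestRegressor(n_estimators=2000, max_features=mtry)
print(mtry)  # → 43
```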
To get an estimate of the performance of our models, we made use of a nested cross-validation (CV) framework, whereby an inner CV was used to determine model parameters, and the outer CV was used to gain an estimate of the model’s performance on unseen data31. In this study, we used 3 repetitions of 3-fold CV for the inner loop and 10 repetitions of 10-fold CV for the outer loop.
As the RF used pre-determined parameter values, only a single parameter had to be determined: the threshold on the continuous regression predictions necessary to generate binary labels. This threshold was selected based on performance in the inner CV loop, using the R package OptimalCutpoints32 to evaluate six potential cutoff metrics (Supplementary Methods) and selecting the method which maximized the accuracy over all of the test folds from the inner cross-validation loops. The best performing cutoff criterion was then used in the current iteration of the outer cross-validation loop, and the accuracy, sensitivity, and specificity derived from this threshold were recorded for that fold. While this approach means that a different method could be used to derive the regression threshold for each fold in the outer CV loop, the resulting estimate of performance is unbiased and hence is likely to be more representative of performance on unseen data compared with selecting a threshold based on the entire set of training data.
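The nested scheme can be sketched as follows (Python/scikit-learn on synthetic stand-in data; a single repetition of each loop is shown for brevity, and the six candidate cutoff criteria are replaced by a simple accuracy-maximizing cutoff):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Toy stand-in data: rows = subjects, columns = plasma analytes;
# y = continuous CSF Abeta1-42 level in pg/ml (purely illustrative)
X = rng.normal(size=(120, 10))
y = 190 + 30 * X[:, 0] + rng.normal(scale=10, size=120)

def best_threshold(levels_true, levels_pred, candidates):
    """Pick the cutoff maximizing accuracy of 'abnormal' (< 192 pg/ml) calls."""
    truth = levels_true < 192
    accs = [np.mean((levels_pred < t) == truth) for t in candidates]
    return float(candidates[int(np.argmax(accs))])

outer = KFold(n_splits=10, shuffle=True, random_state=0)
outer_acc = []
for tr, te in outer.split(X):
    # Inner CV: choose the regression cutoff on held-out inner folds only
    cuts = []
    for itr, ite in KFold(n_splits=3, shuffle=True, random_state=1).split(X[tr]):
        rf = RandomForestRegressor(n_estimators=100, random_state=0)
        rf.fit(X[tr][itr], y[tr][itr])
        cuts.append(best_threshold(y[tr][ite], rf.predict(X[tr][ite]),
                                   np.arange(160.0, 200.0, 2.0)))
    cutoff = float(np.median(cuts))
    # Refit on the full outer training fold, score on the outer test fold
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[tr], y[tr])
    outer_acc.append(float(np.mean((rf.predict(X[te]) < cutoff) == (y[te] < 192))))

print(round(sum(outer_acc) / len(outer_acc), 2))
```

Because the cutoff is chosen without ever seeing the outer test fold, the averaged outer-fold accuracy remains an unbiased estimate of performance on unseen data.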
2.6 Measures of model performance
Model performance was summarized by the mean and standard deviation of the area under the Receiver Operating Characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity from the testing performance across the different cross-validation runs. R2 values were also calculated for the regression task. Increases in AUC between models were tested for significance using a one-tailed Wilcoxon signed-rank test. Receiver operating characteristic curves were constructed by aggregating all of the test predictions from the outer cross-validation.
2.7 Evaluating the importance of different input modalities
The input variables were separated into three classes: a commonly used baseline model (B) including age and APOEε4 carrier status; Proteomics (P), which included the 146 analytes measured on the RBM panel as well as homocysteine and plasma Aβ1–40 and Aβ1–42; Metabolomics (M), including 138 metabolites and lipids.
Four separate random forests were created using different subsets of these features to determine which were most useful for modeling CSF Aβ1–42. We denoted these models by the combination of features they included; for example ‘BPM’ refers to a model built using all three classes of features. The best performing model was selected for all subsequent analysis.
2.8 Discovery of the smallest set of markers needed for strong predictive performance
After evaluating the impact of the different input modalities, we determined the minimum set of individual analytes necessary to achieve high predictive performance. This was done by treating the number of included features as a parameter to be determined in our nested CV framework. Within each fold of the inner CV loop, we used a recursive feature elimination approach: features were ranked according to their Variable Importance (the difference in prediction error on the out-of-bag data when a given feature is permuted versus unpermuted28) and the lowest-ranking features were removed in a stepwise fashion. The AUC of the resulting RF was recorded, and the procedure was repeated over successively smaller subsets of features until no features were left to remove. After the inner CV loop finished, we determined the number of features achieving the optimal trade-off between model complexity and performance by selecting the smallest subset of features whose AUC was within 4% of the maximal observed AUC. A model using this subset of features was then trained on all training folds of the outer CV loop and evaluated on the test fold. Again, by determining the number of features to include within our nested cross-validation framework, we obtain an unbiased estimate of the model’s expected performance on unseen data.
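A minimal sketch of this elimination-plus-selection procedure is given below (Python/scikit-learn on synthetic data; held-out permutation importance stands in for ranger's out-of-bag Variable Importance, and a single train/test split stands in for the inner CV folds):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 12 'analytes', of which only the first two carry signal
X = rng.normal(size=(200, 12))
y = 190 + 25 * X[:, 0] - 15 * X[:, 1] + rng.normal(scale=8, size=200)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
features = list(range(X.shape[1]))
trace = []  # (n_features, AUC for detecting 'abnormal', i.e. low, levels)

while features:
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(Xtr[:, features], ytr)
    score = -rf.predict(Xte[:, features])  # higher score = lower predicted level
    trace.append((len(features), roc_auc_score(yte < 192, score)))
    if len(features) == 1:
        break
    # Rank features by permutation importance and drop the least important one
    imp = permutation_importance(rf, Xte[:, features], yte,
                                 n_repeats=5, random_state=0)
    features.pop(int(np.argmin(imp.importances_mean)))

# Keep the smallest subset within 4% of the best AUC seen during elimination
best_auc = max(a for _, a in trace)
n_chosen = min(n for n, a in trace if a >= 0.96 * best_auc)
print(n_chosen)
```

On well-behaved synthetic data such as this, the rule typically collapses onto the few informative features, mirroring how the procedure in the paper reduced >140 analytes to a handful.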
2.9 Survival Analysis
Survival analysis was conducted to determine if the rate of conversion from MCI to AD was different between those with predicted low and normal CSF Aβ1–42 levels, enabling us to determine if our predictions lead to useful clinical outcomes in the validation cohort.
Four separate analyses were performed, using the:
measured CSF status on the training set (n=198)
predicted CSF status from B model, the standard baseline, in the validation set (n=198)
predicted CSF status from BP model, the best performing model, in the validation set (n=198)
predicted CSF status from BP fs model, the most parsimonious model, in the validation set (n=198),
where BP fs is the feature-selected model with the smallest set of features. For each analysis, we examined hazard ratios using Cox regression and used log-rank tests to compare the survival distributions of the low/normal CSF Aβ1–42 stratifications in the four analyses, as well as to compare equivalence between the actual and predicted stratifications.
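The log-rank comparison can be sketched directly (Python; the survival times below are hypothetical, and this hand-rolled implementation is only an illustration of the test, not the software used in the study):

```python
import numpy as np
from scipy.stats import chi2

def logrank_test(time_a, event_a, time_b, event_b):
    """Two-group log-rank test: at each distinct event time, compare the
    observed events in group A with those expected under a pooled risk set."""
    times = np.concatenate([time_a, time_b])
    events = np.concatenate([event_a, event_b])
    group_a = np.concatenate([np.ones(len(time_a)), np.zeros(len(time_b))]) == 1
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(times[events == 1]):
        at_risk = times >= t
        n, n_a = at_risk.sum(), (at_risk & group_a).sum()
        d = ((times == t) & (events == 1)).sum()
        d_a = ((times == t) & (events == 1) & group_a).sum()
        o_minus_e += d_a - d * n_a / n  # observed minus expected in group A
        if n > 1:
            var += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    stat = o_minus_e ** 2 / var
    return stat, chi2.sf(stat, df=1)  # chi-squared with 1 degree of freedom

# Hypothetical months to AD conversion (event=1) or censoring (event=0)
t_abn = np.array([12, 18, 24, 30, 36, 48, 60, 72])    # predicted abnormal CSF
e_abn = np.array([1, 1, 1, 1, 1, 1, 0, 0])
t_nrm = np.array([24, 48, 60, 72, 84, 96, 108, 120])  # predicted normal CSF
e_nrm = np.array([1, 0, 1, 0, 0, 0, 0, 0])
stat, p = logrank_test(t_abn, e_abn, t_nrm, e_nrm)
```

Comparing a group to itself yields a statistic of zero (p = 1), while the earlier, more frequent conversions in the abnormal group produce a positive statistic.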
2.10 Validation performance over PET Aβ1–42 status
To further validate our model, we examined its ability to differentiate PET Aβ1–42 abnormal and normal status. While it is known that Aβ1–42 status from PET can differ from that observed in CSF, measurements from the two modalities are correlated and should be very similar for individuals who are not close to the cutoff indicating pathology. This analysis provides further evidence of our model’s ability to determine Aβ1–42 status in individuals in the validation cohort, where CSF measurements are not available.
Given that only a limited number of individuals had associated measures of PET imaging at baseline (n=18 and 27 for the training and validation cohorts respectively), we made use of the earliest PET image available, leaving us with 108 and 68 individuals to evaluate in the training and validation cohorts. The threshold for abnormality was defined as an SUVR of 1.5 and 1.11 for PET images using PiB and AV45 tracers respectively. The mean number of years past baseline that a scan was taken was 3.07 and 2.97 years for the training and validation cohorts respectively.
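These cutoffs amount to a simple tracer-specific rule (sketch; the text does not state whether the boundary value itself counts as abnormal, so strict inequality is assumed here):

```python
# Tracer-specific SUVR cutoffs for PET amyloid abnormality, as stated above
SUVR_CUTOFFS = {"PiB": 1.5, "AV45": 1.11}

def pet_abnormal(suvr: float, tracer: str) -> bool:
    """True if the standardized uptake value ratio exceeds the tracer's cutoff."""
    return suvr > SUVR_CUTOFFS[tracer]

print(pet_abnormal(1.6, "PiB"), pet_abnormal(1.05, "AV45"))  # → True False
```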
The use of imaging at non-baseline times assumes that differences between the baseline and time that the image was taken are relatively small (which may be reasonable assuming a slow rate of Aβ1–42 accumulation) and that few individuals are close to the defined threshold for abnormality. If these assumptions do not hold, it is likely to worsen predictive performance, making this analysis somewhat conservative.
3. Results
3.1 Models utilizing protein levels accurately predict CSF positivity
We evaluated the ability of blood-based biomarkers to predict CSF Aβ1–42 normal/abnormal status using RFs trained using different subsets of input variables, treating the modeling of CSF Aβ1–42 as either a regression or binary task. Summaries of the performance metrics from the resulting models are shown in Table 2 with their corresponding ROC curves shown in Figure 1.
We observed strong overall predictive performance for both the regression and binary tasks within our cross-validation framework. All sets of features outperformed the base model of age and APOEε4 carrier status, with BP-based models leading to the highest AUCs of 0.84 and 0.83 for the regression and binary tasks respectively. The standard deviation for the AUC was relatively high (7-8%), likely due to the noise inherent in both the analytes being used for prediction as well as in the CSF Aβ1–42 measurements.
The BP models resulted in a mean R2 of 0.29 for the regression task. The automatically derived threshold for this regression RF yielded a mean accuracy of 0.77, with a sensitivity of 0.78 and a specificity of 0.73. Across the 100 cross-validation runs, the chosen threshold ranged from 164pg/ml to 194pg/ml with a median of 185pg/ml.
Similar AUC and accuracy could be observed for learning the dichotomized CSF labels directly (e.g 0.83 AUC, 0.77 accuracy for the BP model). For the binary task, a slight drop in both AUC, as well as an altered trade-off between sensitivity and specificity, was observed across all different feature sets compared to the regression task. Given this, we chose to focus on the regression model for much of the follow-up analysis.
While all models making use of blood analytes outperformed the base model of age and APOEε4, models that made use of the protein level measurements consistently achieved the strongest predictive performance, whereas metabolites appeared to be of limited utility. In both the regression and binary tasks, models containing metabolites and proteomic data (BPM) achieved equivalent or worse AUCs than models containing only the proteomic data (BP). Furthermore, we observed that the use of the base features and metabolites alone (BM) led to decreased performance compared to the baseline model, indicating that the set of measured metabolites may have contributed little predictive information or may have been too noisy to be useful for predicting CSF status. These findings are in contrast to the previously reported utility of metabolites in predicting PET Aβ1–42 positivity22.
While the results presented in this section include clinically diagnosed AD individuals, who are almost all CSF Aβ1–42 positive, it is worth considering only ‘pre-clinical’ individuals as this may be more relevant for selective screening in drug trials. Evaluating our model’s performance on CN and MCI individuals only, we find that similarly strong predictive performance can be obtained (Supplementary Table 4, Supplementary Figure 1, 0.80 AUC, 0.77 accuracy for the BP model) supporting our primary findings that plasma protein levels can be utilized to predict amyloid pathology status.
To ensure that our imputation procedure did not bias our results, we also built similar models using only complete cases after applying more stringent quality control (removal of plasma analytes where more than 1% of measurements were missing), obtaining similar AUCs of 0.81 for the regression and binary tasks (Supplementary Figure 2).
3.2 Strong predictive performance is maintained using only four proteins
The models described so far used all (> 140) available features in this dataset. In practice, measuring hundreds of analytes is costly, negating a key advantage of using blood biomarkers for screening. Given this, we have applied feature selection to the BP regression model to identify the smallest number of features that still achieved high predictive performance. Within cross-validation, we find that the average performance of this feature selection approach, denoted BP fs, yields an AUC, sensitivity, and specificity of 0.81, 0.81 and 0.63. The number of features selected in the model ranges from 2 to 15, with a median of 5 features included.
When applying this feature selection procedure to the entire set of training data, we identified a subset of four plasma analytes, together with APOEε4 carrier status, as critical for model performance: Chromogranin-A (CGA), Aβ1–42 (AB42), Eotaxin 3, and Apolipoprotein E (APOE). This combination of protein levels, together with APOEε4, is denoted BP5. Figure 2 indicates how each variable influences the model predictions after we have accounted for the influence of the other four variables. As expected, the strongest relationship with CSF Aβ1–42 is with APOEε4 carrier status, where being a carrier (APOEε4 = 1) leads to a low predicted Aβ1–42 level. While the relationships between the proteins and CSF Aβ1–42 are non-linear (a common outcome given the nature of RFs), the overall correlation with CSF Aβ1–42 is positive for CGA, plasma Aβ1–42, and APOE protein levels and negative for Eotaxin 3.
3.3 Validation of clinical utility
To demonstrate the utility of our modeling on unseen data, we conducted a survival analysis over the validation cohort (n=198), evaluating the probability of baseline MCI individuals transitioning to AD diagnosis over 120 months, stratified by predicted CSF Aβ1–42 status from either the B, BP or reduced BP5 model. These survival distributions could then be compared to those of the real Aβ1–42 status observed over the training cohort. Given the demographic similarity of the two cohorts, we would expect to see strong similarities in rates of conversion.
From Figure 3, we observed that in all cases, the predicted low CSF Aβ1–42 group transitioned to AD significantly faster than the Aβ1–42 normal group. Comparing the predictions from the BP, BP5 and B models on the validation cohort to the actual CSF Aβ1–42 status on the training cohort, we find that there is no significant difference between the survival distributions for either the normal (log-rank test p = 0.19, 0.2, 0.21) or abnormal (log-rank test p = 0.97, 0.31, 0.23) survival distributions, respectively, reflecting the overlapping confidence intervals of the hazard ratios. However, it can be observed that due to differences in the thresholding of the Aβ1–42 levels, fewer individuals are deemed as CSF Aβ1–42 ‘normal’ in the actual data (n=53), compared with any of the three models applied to the validation datasets (n=95, 73, and 71 for BP, BP5, and B models respectively), highlighting the well-recognized issues of defining standardized cutoff values across studies33. The significant differences in conversion rates between the predicted normal/low strata, especially from the more parsimonious BP5 model, together with their similarity to the survival distributions of the actual CSF measures, provide strong evidence that our blood-based model can help stratify individuals based on their risk of developing clinical AD (Table 1).
3.4 Concordance with PET Aβ1–42 status
To further validate and quantify our model’s performance, we have explored the relationship between the predicted CSF Aβ1–42 scores and PET imaging status. Confirming that the PET and CSF Aβ1–42 status are correlated, we find that they differed in only 7 out of 108 individuals for whom both CSF and PET amyloid status were available. As such, evaluating our model against the PET Aβ1–42 status should provide a conservative estimate for the AUC on the validation cohort, despite the lack of CSF measures.
The resulting ROC curves in Figure 4 provide further evidence that the BP and BP5 models are able to predict Aβ1–42 status, with AUCs against PET Aβ1–42 on the validation cohort of 0.78 and 0.8 for the BP and BP5 models respectively. These results are similar to those from predicting CSF status from the training data (Figure 1), with a small expected drop due to the inherent differences between CSF and PET amyloid. Interestingly, we observe stronger performance for the reduced BP5 model compared to the full BP model, with both models significantly improving upon the baseline model of age and APOEε4 status.
4 Discussion
The most positive results from AD trials to date have been found in patients with early forms of the disease, leading to an increasing awareness that treatments are likely to be most successful if applied at the earliest stages of AD8. Some AD clinical trials already use PET screening to enrich for pre-symptomatic AD individuals. However, recent findings that shifts in CSF amyloid can be observed up to a decade before those from PET may indicate that CSF positive individuals are even more suitable for clinical trial enrichment34. Direct measurement of CSF biomarkers is too invasive to be used in such a screening test35, motivating the development of a minimally-invasive, low-cost solution that provides the same type of information to overcome these limitations.
This study evaluates the utility of a blood-based signature of CSF Aβ1–42 status using a Random Forest approach. We demonstrated that CSF Aβ1–42 normal/abnormal status can be predicted from age, APOEε4 carrier status, and plasma protein levels with a high AUC, sensitivity and specificity of 0.84, 0.78 and 0.73 respectively. Compared to the base model (age and APOEε4 genotype), the inclusion of the plasma analytes improved the performance (AUC) by 6%. To make the model more suitable for clinical application, we identified four plasma analytes which, together with APOEε4 carrier status, still achieved a high AUC, sensitivity, and specificity of 0.81, 0.81 and 0.64 respectively. These predictive models were then validated on a separate cohort of individuals to demonstrate that MCI subjects with predicted abnormal (low) CSF Aβ1–42 levels transitioned to an AD diagnosis at a significantly higher rate than those with predicted normal CSF Aβ1–42 levels. Furthermore, these rates were similar to those observed in a demographically similar cohort of MCIs using actual CSF Aβ1–42 levels. This is a strong validation of our modeling, as blood-based biomarkers for CSF Aβ1–42 status are only useful if they can replicate the behavior of the actual Aβ1–42 status on clinically relevant endpoints for individuals who were not used to build the predictive model. The strong prediction of PET Aβ1–42 status on the validation cohort provides further evidence for the generalizability and robustness of our modeling.
A number of studies have previously investigated the use of blood analytes to predict the burden of amyloid in the neocortex, as measured by PET15,16,18–20,22,23. Some of these studies showed similar performance metrics to those reported in this work (> 0.80 AUCs15,23–25 or > 0.78 accuracy17), indicating that prediction of PET and CSF Aβ1–42 status are of similar difficulty. PET Aβ is directly related to brain fibrillar amyloid, whereas CSF amyloid is a marker of soluble Aβ1–42, and they may, therefore, give different insights into AD progression and mechanisms. For example, CSF Aβ1–42 has been shown to be associated with APOEε4, whereas PET Aβ1–42 has been shown to have a greater association with tau36. Thus, the development of a blood-based screening test for CSF Aβ1–42 levels is a complementary approach to existing blood-based biomarkers of PET amyloid status.
Of the above studies, the study by Nakamura et al.25 showed a very high AUC in discovery and validation datasets for PET Aβ1–42 status (AUC 0.94 and 0.96 respectively) as well as strong performance for predicting abnormalities in CSF Aβ1–42 levels (AUC 0.88) in a small subset (n=46) of their validation set. While these results are promising, the automation of the novel technique used (IP-MALDI-TOF-MS), and hence its transfer to a clinical setting, is non-trivial, motivating the search for complementary approaches. The protein signature presented in this study, based on a multiplex immunoassay, is likely to require a far shorter timeframe for clinical translation given the high level of automation that already exists for multiplex immunoassays, and that biomarkers from such platforms have already been used in commercially available diagnostic tests approved by the FDA.
The use of metabolites appeared to be of limited utility for predicting CSF Aβ1–42. In both the regression and binary classification tasks, models containing metabolites achieved equivalent or worse AUCs than models without them. These findings can be contrasted with the utility of metabolites in predicting PET Aβ1–42 positivity22 and their association with AD more broadly37. Alternative methods for integrating this source of data38 may be required in order to find robust associations with CSF Aβ1–42 status.
The subset of features used in our BP5 model included APOEε4 genotype and plasma levels of Chromogranin-A (CGA), Eotaxin 3, Aβ1–42 (AB42), and Apolipoprotein E (APOE). Several of these proteins have known associations with Alzheimer’s disease. Unsurprisingly, plasma APOE levels are associated with CSF amyloid levels. APOEε4 is the strongest genetic risk factor for AD. APOE is involved in the clearance of Aβ, and there is a strong relationship between APOEε4 genotype and APOE plasma levels, with APOEε4 carriers having lower plasma levels42, 43. Plasma Aβ1–42 showed a positive relationship with CSF Aβ1–42 in our model, in line with a prior observation44. This is interesting, as the link between alterations of Aβ1–42 levels in blood and the progression of the disease is still controversial, and studies assessing the Aβ1–42 concentration in the blood of AD patients have produced conflicting results44–50. Chromogranin A (CGA) is associated with synaptic function and has traditionally been used as an indicator of neuroendocrine tumors51. More recent work has shown that CGA has a degree of co-localisation with amyloid plaques in the brain52, 53. However, levels of CGA in the CSF and blood serum do not appear to be correlated54, and serum CGA has not previously been linked to AD. Eotaxin 3, also known as C-C chemokine ligand 26 (CCL26), plays an important role in the innate immune system and has been found to be dysregulated in AD patients55. CSF Eotaxin 3 has been shown to be significantly elevated in patients with prodromal AD; however, Eotaxin 3 levels in plasma or CSF have not been shown to correlate with rates of disease progression55, 56.
This study has several limitations. The training and validation cohorts are both composed of individuals in the ADNI study, and thus all measures were conducted on the same platforms; further cross-cohort and cross-platform replication is therefore required. This remains an ongoing issue in the development of all AD biomarkers for early screening and requires significant future investment57. Furthermore, the current cohort is neuropathologically biased: 84% of the cohort have MCI or AD and are thus likely to have neuronal damage, potentially confounding the analysis of CSF Aβ1–42 status. Finally, it should be noted that other medical conditions are known to affect CSF Aβ1–42 levels, and it is unclear whether these affect any of the patients in our cohort.
The early identification of AD is paramount and a major global focus, as the success of disease-modifying or preventative therapies in AD may depend on detecting the earliest signs of abnormal amyloid-beta load. The differences between CSF Aβ1–42 and PET Aβ1–42 in preclinical stages of AD are likely to have implications for clinical trial enrichment. Blood-based biomarkers of amyloid can serve as the first step in a multistage screening procedure, similar to those that have been clinically implemented in cancer, cardiovascular disease, and infectious diseases57. In conjunction with biomarkers for neocortical amyloid burden, the CSF Aβ1–42 biomarkers presented in this work may help yield a cheap, minimally invasive tool for both improving clinical trials targeting amyloid and population screening.
6 Conflict of Interest
The authors declare no conflict of interest.
7 Author Contributions Statement
B.G., B.F., and N.F. designed the study; B.G., C.S., and B.F. analyzed the data and ran all experiments; B.G. made the figures; B.G., C.S., and N.F. wrote the manuscript. All authors interpreted the data and critically revised the manuscript.
8 Consortium Members
8.1 Alzheimer’s Disease Neuroimaging Initiative - ADNI
Michael W. Weiner8, Paul Aisen9, Ronald Petersen10, Clifford R. Jack, Jr.11, William Jagust12, John Q. Trojanowki13, Arthur W. Toga14, Laurel Beckett15, Robert C. Green16, Andrew J. Saykin17, John Morris18, Leslie M. Shaw13, Jeffrey Kaye19, Joseph Quinn20, Lisa Silbert20, Betty Lind20, Raina Carter19, Sara Dolen19, Lon S. Schneider19, Sonia Pawluczyk19, Mauricio Beccera19, Liberty Teodoro14, Bryan M. Spann14, James Brewer20, Helen Vanderswag20, Adam Fleisher20, Judith Heidebrink21, Joanne L. Lord21, Sara S. Mason11, Colleen S. Albers11, David Knopman11, Kris Johnson11, Rachelle S. Doody22, Javier Villanueva-Meyer22, Munir Chowdhury22, Susan Rountree22, Mimi Dang22, Yaakov Stern23, Lawrence S. Honig23, Karen L. Bell23, Beau Ances23, John C. Morris23, Maria Carroll23, Mary L. Creech23, Erin Franklin23, Mark A. Mintun18, Stacy Schneider18, Angela Oliver18, Daniel Marson24, Randall Griffth24, David Clark24, David Geldmacher24, John Brockington24, Erik Roberson24, Marissa Natelson Love24, Hillel Grossman25, Effie Mitsis25, Raj C. Shah26, Leyla deToledo-Morrell26, Ranjan Duara27, Daniel Varon27, Maria T. Greig27, Peggy Roberts27, Marilyn Albert28, Chiadi Onyike28, Daniel D’Agostino28, Stephanie Kielb28, James E. Galvin29, Brittany Cerbone29, Christina A. Michel29, Dana M. Pogorelec29, Henry Rusinek29, Mony J de Leon29, Lidia Glodzik29, Susan De Santi29, P. Murali Doraiswamy30, Jeffrey R. Petrella30, Salvador Borges-Neto30, Terence Z. Wong30, Edward Coleman30, Charles D. Smith31, Greg Jicha31, Peter Hardy31, Partha Sinha31, Elizabeth Oates31, Gary Conrad31, Anton P. Porsteinsson32, Bonnie S. Goldstein32, Kim Martin32, Kelly M. Makino32, Saleem Ismail32, Connie Brand32, Ruth A. Mulnard33, Gaby Thai33, Catherine Mc-Adams-Ortiz33, Kyle Womack34, Dana Mathews34, Mary Quiceno34, Allan I. Levey35, James J. Lah35, Janet S. Cellar35, Jeffrey M. Burns36, Russell H. Swerdlow36, William M. Brooks36, Liana Apostolova37, Kathleen Tingus37, Ellen Woo37, Daniel H.S. Silverman37, Po H. 
Lu37, George Bartzokis37, Neill R Graff-Radford38, Francine Parftt38, Tracy Kendall38, Heather Johnson38, Martin R. Farlow17, Ann Marie Hake17, Brandy R. Matthews17, Jared R. Brosch17, Scott Herring17, Cynthia Hunt17, Christopher H. van Dyck39, Richard E. Carson39, Martha G. MacAvoy39, Pradeep Varma39, Howard Chertkow40, Howard Bergman40, Chris Hosein40, Sandra Black41, Bojana Stefanovic41, Curtis Caldwell41, Ging-Yuek Robin Hsiung42, Howard Feldman42, Benita Mudge42, Michele Assaly42, Elizabeth Finger43, Stephen Pasternack43, Irina Rachisky43, Dick Trost43, Andrew Kertesz43, Charles Bernick44, Donna Munic44, Marek-Marsel Mesulam45, Kristine Lipowski45, Sandra Weintraub45, Borna Bonakdarpour45, Diana Kerwin45, Chuang-Kuo Wu45, Nancy Johnson45, Carl Sadowsky46, Teresa Villena46, Raymond Scott Turner47, Kathleen Johnson47, Brigid Reynolds47, Reisa A. Sperling48, Keith A. Johnson48, Gad Marshall48, Jerome Yesavage49, Joy L. Taylor49, Barton Lane49, Allyson Rosen49, Jared Tinklenberg49, Marwan N. Sabbagh50, Christine M. Belden50, Sandra A. Jacobson50, Sherye A. Sirrel50, Neil Kowall51, Ronald Killiany51, Andrew E. Budson51, Alexander Norbash51, Patricia Lynn Johnson51, Thomas O. Obisesan52, Saba Wolday52, Joanne Allard52, Alan Lerner53, Paula Ogrocki53, Curtis Tatsuoka53, Parianne Fatica53, Evan Fletcher54, Pauline Maillard54, John Olichney54, Charles DeCarli54, Owen Carmichael54, Smita Kittur55, Michael Borrie56, T-Y Lee56, Rob Bartha56, Sterling Johnson57, Sanjay Asthana57, Cynthia M. Carlsson57, Steven G. Potkin57, Adrian Preda57, Dana Nguyen57, Pierre Tariot58, Anna Burke58, Nadira Trncic58, Adam Fleisher59, Stephanie Reeder59, Vernice Bates60, Horacio Capote60, Michelle Rainka60, Douglas W. Scharre61, Maria Kataki61, Anahita Adeli61, Earl A. Zimmerman62, Dzintra Celmins62, Alice D. Brown62, Godfrey D. Pearlson63, Karen Blank63, Karen Anderson63, Laura A. Flashman64, Marc Seltzer64, Mary L. Hynes64, Robert B. Santulli64, Kaycee M. Sink65, Leslie Gordineer65, Je D. 
Williamson65, Pradeep Garg65, Franklin Watkins65, Brian R. Ott66, Henry Querfurth66, Geffrey Tremont66, Stephen Salloway67, Paul Malloy67, Stephen Correia67, Howard J. Rosen68, Bruce L. Miller68, David Perry68, Jacobo Mintzer69, Kenneth Spicer69, David Bachman69, Nunzio Pomara70, Raymundo Hernando70, Antero Sarrael70, Norman Relkin71, Gloria Chaing71, Michael Lin71, Lisa Ravdin71, Amanda Smith72, Balebail Ashok Raj72, Kristin Fargher72
8 Magnetic Resonance Unit at the VA Medical Center and Radiology, Medicine, Psychiatry and Neurology, University of California, San Francisco, USA.
9 San Diego School of Medicine, University of California, California, USA.
10 Mayo Clinic, Minnesota, USA.
11 Mayo Clinic, Rochester, USA.
12 University of California, Berkeley, USA.
13 University of Pennsylvania, Pennsylvania, USA.
14 University of Southern California, California, USA.
15 University of California, Davis, California, USA.
16 MPH Brigham and Women’s Hospital/Harvard Medical School; Massachusetts, USA.
17 Indiana University, Indiana, USA.
18 Washington University St. Louis, Missouri, USA.
19 Oregon Health and Science University, Oregon, USA.
20 University of California–San Diego, California, USA.
21 University of Michigan, Michigan, USA.
22 Baylor College of Medicine, Houston, Texas, USA.
23 Columbia University Medical Center, New York, USA.
24 University of Alabama – Birmingham, Alabama, USA.
25 Mount Sinai School of Medicine, New York, USA.
26 Rush University Medical Center, Rush University, Illinois, USA.
27 Wien Center, Florida, USA.
28 Johns Hopkins University, Maryland, USA.
29 New York University, NY, USA.
30 Duke University Medical Center, North Carolina, USA.
31 University of Kentucky, Kentucky, USA.
32 University of Rochester Medical Center, NY, USA.
33 University of California, Irvine, California, USA.
34 University of Texas Southwestern Medical School, Texas, USA.
35 Emory University, Georgia, USA.
36 University of Kansas, Medical Center, Kansas, USA.
37 University of California, Los Angeles, California, USA.
38 Mayo Clinic, Jacksonville, USA.
39 Yale University School of Medicine, Connecticut, USA.
40 McGill University, Montreal-Jewish General Hospital, Canada.
41 Sunnybrook Health Sciences, Ontario, Canada.
42 U.B.C. Clinic for AD & Related Disorders, Canada.
43 Cognitive Neurology - St. Joseph’s, Ontario, Canada.
44 Cleveland Clinic Lou Ruvo Center for Brain Health, Ohio, USA.
45 Northwestern University, USA.
46 Premiere Research Inst (Palm Beach Neurology), USA.
47 Georgetown University Medical Center, Washington D.C, USA.
48 Brigham and Women’s Hospital, Massachusetts, USA.
49 Stanford University, California, USA.
50 Banner Sun Health Research Institute, USA.
51 Boston University, Massachusetts, USA.
52 Howard University, Washington D.C, USA.
53 Case Western Reserve University, Ohio, USA.
54 University of California, Davis – Sacramento, California, USA.
55 Neurological Care of CNY, USA.
56 Parkwood Hospital, Ontario, Canada.
57 University of Wisconsin, Wisconsin, USA.
58 University of California, Irvine – BIC, USA.
59 Banner Alzheimer’s Institute, USA.
60 Dent Neurologic Institute, NY, USA.
61 Ohio State University, Ohio, USA.
62 Albany Medical College, NY, USA.
63 Hartford Hospital, Olin Neuropsychiatry Research Center, Connecticut, USA.
64 Dartmouth-Hitchcock Medical Center, New Hampshire, USA.
65 Wake Forest University Health Sciences, North Carolina, USA.
66 Rhode Island Hospital, Rhode Island, USA.
67 Butler Hospital, Providence, Rhode Island, USA.
68 University of California, San Francisco, USA.
69 Medical University of South Carolina, USA.
70 Nathan Kline Institute, Orangeburg, New York, USA.
71 Cornell University, Ithaca, New York, USA.
72 USF Health Byrd Alzheimer’s Institute, University of South Florida, USA.
8.2 Alzheimer’s Disease Metabolomics Consortium
Andrew Saykin73, Kwangsik Nho73, Mitchel Kling13, John Toledo13, Leslie Shaw13, John Trojanowski13, Lindsay Farrer51, Gabi Kastenmüller74, Matthias Arnold74, David Wishart75, Peter Würtz76, Sudeepa Bhattacharyya77, Cornelia van Duijn78, Lara Mangravite79, Xianlin Han80, Thomas Hankemeier81, Oliver Fiehn82, Dinesh Barupal82, Ines Thiele83, Almut Heinken83, Peter Meikle84, Nathan Price85, Cory Funk85, Wei Jia86, Alexandra Kueider-Paisley30, P. Murali Doraiswamy30, Jessica Tenenbaum30, Colette Black30, Arthur Moseley30, Will Thompson30, Siamak Mahmoudiandehkordi30, Rebecca Baillie30, Kathleen Welsh-Bohmer30, Brenda Plassman30.
73 Indiana University
74 Helmholtz Zentrum Muenchen
75 The Metabolomics Innovation Centre, Canada (TMIC)
76 Nightingale Health
77 University of Arkansas
78 Erasmus MC
79 SAGE Networks
80 University of Texas Health Science Center, San Antonio
81 Leiden University Metabolomics Center
82 West Coast Metabolomics Center
83 University of Luxembourg
84 Baker Heart and Diabetes Institute
85 Institute for Systems Biology
86 University of Hawaii
9 Acknowledgments
Data collection and sharing for this project was funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California. This work was supported by IBM. We would like to thank Dr. Matthew Downton, Dr. Annalisa Swan and Dr. Anna Trigos for helpful feedback on the manuscript.
Footnotes
5 A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf.
6 A complete listing of ADMC investigators can be found at: https://sites.duke.edu/adnimetab/who-we-are/
† Data used in preparation of this article were generated by the Alzheimer’s Disease Metabolomics Consortium (ADMC). As such, the investigators within the ADMC provided data but did not participate in analysis or writing of this report.
‡ Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report.