RT Journal Article
SR Electronic
T1 Scaling up DNA data storage and random access retrieval
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 114553
DO 10.1101/114553
A1 Lee Organick
A1 Siena Dumas Ang
A1 Yuan-Jyue Chen
A1 Randolph Lopez
A1 Sergey Yekhanin
A1 Konstantin Makarychev
A1 Miklos Z. Racz
A1 Govinda Kamath
A1 Parikshit Gopalan
A1 Bichlien Nguyen
A1 Christopher Takahashi
A1 Sharon Newman
A1 Hsing-Yeh Parker
A1 Cyrus Rashtchian
A1 Kendall Stewart
A1 Gagan Gupta
A1 Robert Carlson
A1 John Mulligan
A1 Douglas Carmean
A1 Georg Seelig
A1 Luis Ceze
A1 Karin Strauss
YR 2017
UL http://biorxiv.org/content/early/2017/03/07/114553.abstract
AB Current storage technologies can no longer keep pace with exponentially growing amounts of data.1 Synthetic DNA offers an attractive alternative due to its potential information density of ~10^18 B/mm^3, 10^7 times denser than magnetic tape, and potential durability of thousands of years.2 Recent advances in DNA data storage have highlighted technical challenges, in particular coding and random access, but have stored only modest amounts of data in synthetic DNA.3,4,5 This paper demonstrates an end-to-end approach toward the viability of DNA data storage with large-scale random access. We encoded and stored 35 distinct files, totaling 200 MB of data, in more than 13 million DNA oligonucleotides (about 2 billion nucleotides in total) and fully recovered the data with no bit errors, representing an advance of almost an order of magnitude over prior work.6 Our data curation focused on technologically advanced data types and historical relevance, including the Universal Declaration of Human Rights in over 100 languages,7 a high-definition music video by the band OK Go,8 and a CropTrust database of the seeds stored in the Svalbard Global Seed Vault.9 We developed a random access methodology based on selective amplification, for which we designed and validated a large library of primers, and successfully retrieved arbitrarily chosen items from a subset of our pool containing 10.3 million DNA sequences. Moreover, we developed a novel coding scheme that dramatically reduces the physical redundancy (sequencing read coverage) required for error-free decoding to a median of 5x, while maintaining levels of logical redundancy comparable to the best prior codes. We further stress-tested our coding approach by successfully decoding a file using the more error-prone nanopore-based sequencing. We provide a detailed analysis of errors in the process of writing, storing, and reading data from synthetic DNA at a large scale, which helps characterize DNA as a storage medium and justifies our coding approach. Thus, we have demonstrated significant improvements in data volume, random access, and encoding/decoding schemes that contribute to a whole-system vision for DNA data storage.