TY - JOUR
T1 - Scaling up DNA data storage and random access retrieval
JF - bioRxiv
DO - 10.1101/114553
SP - 114553
AU - Lee Organick
AU - Siena Dumas Ang
AU - Yuan-Jyue Chen
AU - Randolph Lopez
AU - Sergey Yekhanin
AU - Konstantin Makarychev
AU - Miklos Z. Racz
AU - Govinda Kamath
AU - Parikshit Gopalan
AU - Bichlien Nguyen
AU - Christopher Takahashi
AU - Sharon Newman
AU - Hsing-Yeh Parker
AU - Cyrus Rashtchian
AU - Kendall Stewart
AU - Gagan Gupta
AU - Robert Carlson
AU - John Mulligan
AU - Douglas Carmean
AU - Georg Seelig
AU - Luis Ceze
AU - Karin Strauss
Y1 - 2017/01/01
UR - http://biorxiv.org/content/early/2017/03/07/114553.abstract
N2 - Current storage technologies can no longer keep pace with exponentially growing amounts of data.[1] Synthetic DNA offers an attractive alternative due to its potential information density of ~10^18 B/mm^3, 10^7 times denser than magnetic tape, and potential durability of thousands of years.[2] Recent advances in DNA data storage have highlighted technical challenges, in particular coding and random access, but have stored only modest amounts of data in synthetic DNA.[3,4,5] This paper demonstrates an end-to-end approach toward the viability of DNA data storage with large-scale random access. We encoded and stored 35 distinct files, totaling 200 MB of data, in more than 13 million DNA oligonucleotides (about 2 billion nucleotides in total) and fully recovered the data with no bit errors, representing an advance of almost an order of magnitude compared to prior work.[6] Our data curation focused on technologically advanced data types and historical relevance, including the Universal Declaration of Human Rights in over 100 languages,[7] a high-definition music video of the band OK Go,[8] and a CropTrust database of the seeds stored in the Svalbard Global Seed Vault.[9] We developed a random access methodology based on selective amplification, for which we designed and validated a large library of primers, and successfully retrieved arbitrarily chosen items from a subset of our pool containing 10.3 million DNA sequences. Moreover, we developed a novel coding scheme that dramatically reduces the physical redundancy (sequencing read coverage) required for error-free decoding to a median of 5x, while maintaining levels of logical redundancy comparable to the best prior codes. We further stress-tested our coding approach by successfully decoding a file using the more error-prone nanopore-based sequencing. We provide a detailed analysis of errors in the process of writing, storing, and reading data from synthetic DNA at large scale, which helps characterize DNA as a storage medium and justifies our coding approach. Thus, we have demonstrated a significant improvement in data volume, random access, and encoding/decoding schemes that contributes to a whole-system vision for DNA data storage.
ER -