
How to Prepare for Whole Genome Sequencing

5/19/2012

  This week, the news from the sequencing sub-sector of the biotech sector was pretty much all bad.  The stock prices of most of the major sequencing companies were down slightly this quarter, just when it seemed they should have been taking off.  Having been at the center of this field for several years, I can offer some insight into why this sector is not taking off as rapidly as one might expect, given the staggering success of the technological developments in this area.  It’s not that they have provided too little, but rather too much, too soon.
  I think that in this case, there is no better way to illustrate this than with a personal example.  My institute is now planning to upgrade from our current sequencing platform, the Illumina GAII, to one with 100-fold more capacity, the Illumina HiSeq.  Before, it was possible to sequence a human genome, but only with a concerted effort over the course of months and at great expense.  Many centers like ours published proof-of-concept genome papers on their favorite organisms or people (we seriously considered this, but in the end decided not to).  Now it is possible to sequence around 10 human genomes in a single week-long run.
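  To make the scale of that jump concrete, here is a rough back-of-envelope calculation in Python.  The per-run outputs (a few gigabases for a GAII run, on the order of 600 gigabases for a HiSeq run) and the coverage target are illustrative assumptions on my part, not instrument specifications.

```python
# Back-of-envelope estimate of how many human genomes fit in a single run.
# The run outputs and coverage target below are illustrative assumptions,
# not exact instrument specifications.

HUMAN_GENOME_GB = 3.2    # haploid human genome size, in gigabases
TARGET_COVERAGE = 30     # assumed sequencing depth per genome

def genomes_per_run(run_output_gb, coverage=TARGET_COVERAGE):
    """Number of genomes at the given coverage that one run's output supports."""
    return run_output_gb / (HUMAN_GENOME_GB * coverage)

# Assumed per-run outputs: ~6 Gb for a GAII run, ~600 Gb for a HiSeq run.
for platform, output_gb in [("GAII", 6), ("HiSeq", 600)]:
    print(f"{platform}: ~{genomes_per_run(output_gb):.1f} genomes per run")
```

  With these assumed numbers, a single GAII run covers only a fraction of one genome, while a HiSeq run supports several genomes at once; relax the coverage target a bit and you land at roughly the ten genomes per week-long run mentioned above.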
  I have repeatedly tried to explain this fact to people and asked them how they might use whole human genome studies in their research.  Only a few have responded positively.  Most say something like “that’s great…”, and then change the subject.  They simply don’t know what they would do with this type of data.  In talking with others, I find that this is a more general phenomenon.  Scientists are glad that this technology exists, but they are reluctant to use it, and perhaps with good reason.
  What it comes down to is this:  does your lab want to sequence some genomes?  Sequencing 10 human patient genomes is now so trivial as to be plausibly included as one aim of a grant, so why aren’t people doing it?  Part of the reason is that you would need to get the rest of your lab behind the project of preparing these genomes for sequencing, and part is that you would need to build an infrastructure for analyzing the data; each lab that starts such an undertaking must, at some level, build its own infrastructure for this purpose.
  One thing that I have consistently noticed about genomics studies of any sort is that they tend to be difficult to analyze and write up.  It seems to be a general rule that the average time from the last sample sequenced to submission of the manuscript is not less than one year.  The main difference between a low-impact genomics paper and a high-impact genomics paper often has more to do with the analysis and planning than with the data.  Smaller labs can be overwhelmed.  Just storing the data can be a monstrous task, let alone analyzing it.  I think that in many cases, labs are reluctant to spend the time, money, and effort these studies require, only to be left with a library of data that is extremely difficult to analyze.  In a certain way, it is high-stakes gambling, with big risks and big rewards.
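  To put a rough number on the storage problem alone, here is a minimal sketch, assuming on the order of 100 GB of compressed FASTQ per genome at ~30× coverage and a comparable amount for an aligned BAM; these per-genome sizes are rough guesses, not measured figures.

```python
# Minimal sketch of the disk space a small whole-genome study might consume.
# The per-genome file sizes are rough assumptions (compressed FASTQ plus an
# aligned, compressed BAM at ~30x coverage), not measured values.

FASTQ_GB_PER_GENOME = 100   # assumed compressed raw reads per genome
BAM_GB_PER_GENOME = 100     # assumed aligned, compressed BAM per genome

def study_storage_tb(num_genomes, keep_alignments=True):
    """Estimate total disk needed, in terabytes, for a whole-genome study."""
    per_genome_gb = FASTQ_GB_PER_GENOME
    if keep_alignments:
        per_genome_gb += BAM_GB_PER_GENOME
    return num_genomes * per_genome_gb / 1024

print(f"10 genomes, raw reads only:  ~{study_storage_tb(10, False):.1f} TB")
print(f"10 genomes, reads + BAMs:    ~{study_storage_tb(10):.1f} TB")
```

  Even under these modest assumptions, a ten-genome study already ties up one to two terabytes of disk before any analysis has been done, and that figure grows quickly once intermediate files and multiple rounds of analysis pile up.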
  And so, I think that the best answer for why the stocks slumped this week is that sequencing technology has done too well.  Living with the promise of what personal genome sequencing would bring us was far easier than living with the technology itself.  But as we brace ourselves for the deluge of data and the chaos that is certain to follow, I am reminded of Thoreau: “If you have built castles in the air, your work need not be lost; that is where they should be. Now put the foundations under them.”
