CRISPR Gene Editing: Proofreaders and Undo Buttons, but Ever "Safe" Enough?

Posted by Elliot Hosman, Biopolitical Times on November 19th, 2015

The 1986 Franklin Spelling Ace, a previous generation of spellcheck. Flickr/Nate Bolt

News about genetic engineering continues to emerge at a dizzying pace. In recent weeks, a handful of reports suggest that the suite of new “gene editing” tools may have so-called “proofreaders” and “undo” protocols that increase technical safety. At the same time, a growing consensus seems to be emerging that looks beyond immediate technical safety to the long-term and social implications of modifying the genes of human embryos for the purpose of “enhanced” reproduction.

Is CRISPR safer?

A November 13 story in The Scientist, headlined Cas9 Proofreads Gene Edits, canvassed two recent research publications (in Nature and Science) co-authored by CRISPR co-discoverer Jennifer Doudna. The take-home message was that the Cas9 protein – the molecule charged with making cuts to DNA in the CRISPR/Cas9 gene editing complex – may have certain built-in mechanisms that work against off-target cuts. The headline’s proofreading metaphor echoed the one UC Berkeley used in its related November 12 press release: CRISPR-Cas9 gene editing: check three times, cut once.

The same day, VICE Motherboard reported on work done by researchers at UMass Medical School (published in Molecular Therapy) that “used a ’non-cutting’ version of the protein Cas9” to research the genetics of muscular dystrophy. In contrasting the UMass team’s research with the typical function of CRISPR/Cas9 complexes, VICE’s Melissa Cronin wrote descriptively,

Usually, CRISPR is a cutting machine, hacking away at pathogenic genes. But sending a weed-whacker into a delicate genome to cut away hundreds of spots is risky, and could result in mistakes.

On November 16, Nature News described research by a team including George Church and Kevin Esvelt on gene drive – a technology that can amplify specified genes in populations by altering inheritance probabilities – with the headline: Safety upgrade found for gene-editing technique. A few days earlier, Sharon Begley had reported in STAT on increasing concerns about gene drives under the headline Why the FBI and the Pentagon are afraid of gene drives. The “undo button” proposed by Church et al.’s research, or what Scientific American referred to perhaps appropriately as a “kill switch” [paywall], was, it seems, more of the unforeseeable same. The so-called upgrade was “sending a second gene drive out to undo the effects of the first.”

Is there an emerging consensus against CRISPR-ing future people?

These recent headlines may “calm some fears about the technology.” But even if the promised safeguards function as advertised, they wouldn’t necessarily prevent gene editing tools from effecting unforeseeable and irreversible changes to human genomes or ecological systems – not to mention the fabric of our society. Gang Bao, a professor and bioengineering researcher at Rice University who studies the genetics of sickle cell disease, recently noted:

In the germline, off-target effects might persist for generations and could lead to long-term changes in the genome. Until we know the full consequences of gene editing, it would be a huge mistake to use it to modify the germline.

Jennifer Doudna has long been wary of the potential for CRISPR technology to go awry. Just days prior to her most recent publication, she was quoted by Michael Specter in The New Yorker story The Gene Hackers on its potential to “do more harm than good”:

“I lie in bed almost every night and ask myself that question,” she said. “When I’m ninety, will I look back and be glad about what we have accomplished with this technology? Or will I wish I’d never discovered how it works? … I have never said this in public, but it will show you where my psyche is,” she said. “I had a dream recently, and in my dream”—she mentioned the name of a leading scientific researcher—“had come to see me and said, ‘I have somebody very powerful with me who I want you to meet, and I want you to explain to him how this technology functions.’ So I said, Sure, who is it? It was Adolf Hitler. I was really horrified, but I went into a room and there was Hitler. He had a pig face and I could only see him from behind and he was taking notes and he said, ‘I want to understand the uses and implications of this amazing technology.’ I woke up in a cold sweat. And that dream has haunted me from that day. Because suppose somebody like Hitler had access to this—we can only imagine the kind of horrible uses he could put it to.”

Many voice concern that eugenics in the modern age could be as pernicious as the twentieth-century variety, even if it is submerged in the shiny casing of individual consumer decisions. Nathaniel Comfort’s historical essay Better Babies (Aeon, November 17) argues,

Scientific medicine rescued eugenics, turning human perfection from a social programme [of who to mate with and who to sterilize] into a biotechnical problem. … CRISPR must be seen as the latest step in this history of promises: the promise of ending genetic disease, of designer babies, of the self-direction of human evolution.

Other recent stories have echoed these concerns: Would you edit your unborn child’s genes so they were successful? (The Guardian UK), The Risks of Assisting Evolution (The New York Times), The Crispr Quandary (The New York Times Magazine), Experts debate: Are we playing with fire when we edit human genes? (STAT). More and more observers and stakeholders appear to be taking a cautious position on this technology. In that STAT story, a number of notables were asked where they stood on the issue of germline editing. Pushing the discussion beyond research gaps in technical safety, NIH director Francis Collins explained:

[P]reimplantation genetic diagnosis already offers a practical and much less ethically challenging option for most couples seeking to avoid the birth of a child with a serious genetic disorder. … Do we want to accept the scenario that only those with financial resources get to ‘improve’ the genomes of their children?

Collins concluded that there was a “profound paucity of compelling cases” where germline editing could overcome a balance that “leans overwhelmingly against human germline engineering.”

Bioethics professor R. Alta Charo is co-chair of the recently announced committee charged with producing a “consensus study” after the upcoming National Academies “international summit on human gene editing.” This makes her comments, published by the University of Wisconsin Madison’s news office, particularly interesting:  

Changes to germ line cells will affect all subsequent generations. Ethically, it offers possible benefits to — but imposes risks on — people who were never involved in the original decision. And whatever happens, good or bad, will reverberate down the generations. ... germ line engineering is, in my opinion, the least likely gene editing application in the near term — potentially forever. Established methods could, a lot more simply, avoid some grievous or fatal genetic defects. You could adopt a child or use donor sperm and eggs. Or you could use in vitro fertilization and pre-implantation genetic diagnosis for embryo selection to avoid bringing a child into the world who will suffer with a serious disease.

All these recent comments suggest that even as researchers rush to proclaim they’re solving CRISPR’s technical limitations, its long-term consequences and social implications can’t be ignored.


Gene Therapy: Comeback? Cost-Prohibitive?

Posted by Elliot Hosman, Biopolitical Times on November 19th, 2015

The Center for Genetics and Society and many others have long argued that it’s important to draw a sharp policy line between heritable genetic modification and genetic alterations aimed at treating an existing patient – gene therapy. That does not, however, mean that gene therapy is problem-free. With the CRISPR boom of the last three years, a number of biotech companies have been planning human clinical trials for a range of gene therapy applications, which raise important questions of their own.

At a recent UC Irvine conference on The Challenge of Informed Consent in Times of Controversy, Columbia University law professor and Nation columnist Patricia Williams described the hype now surrounding CRISPR “gene editing” developments, whether applied to heritable or non-heritable genetic changes:

What’s happening now is also a rat race, to beat out others in the charge to the patent office; a lunge to own all parts of the genome, to close down the public commons in the bioterritory of the genome.  Hence, much of this has a temporal urgency to its framing that exploits our anxiety about mortality itself. Hurry up or you’ll die of a really ugly disease. And do it so that ‘we’ win the race, for everything is a race, a race against time, a race to file patents, a race to market, to better babies … there is never enough glory or gain, there is always the moving goalpost. And this is a cause for worry in the framing of a broad spectrum of technologies.

Amid the excitement about the new generation of genetic engineering tools and protocols that Williams evokes, and the fast-paced reporting on research developments and scientists’ speculations, important distinctions are too often being muddied and serious concerns are too often overlooked.

Three recent developments in the gene therapy world, for example, were sometimes reported in ways that not only conflated somatic and germline applications, but also failed to distinguish in vivo treatments (inserting specifically programmed CRISPR complexes inside the body, in which case precision is paramount) from ex vivo approaches (editing cells in a lab, and then inserting the successfully edited cells into a patient’s body). On the other hand, the developments did lead reporters to raise concerns about the huge costs associated with the field of gene therapy, and the many obstacles still left to overcome.

Baby Layla and Cellectis

The first was widespread commentary starting November 5 on a press release from Great Ormond Street Hospital in London and biotech company Cellectis in France about an infant named Layla who had received gene-edited cells that had rid her body of otherwise unresponsive leukemia. The genetic repair method used for this somatic gene therapy was a lesser-known molecular nuclease, TALENs. It involved not ex vivo or in vivo engineering of the infant’s own cells (that is, “personalized medicine” based on the patient’s DNA), but edited donor immune cells that were already on hand when the prospect of Layla’s experimental clinical case emerged.

Layla’s doctors were excited but circumspect in the press release, with one saying:

We have only used this treatment on one very strong little girl, and we have to be cautious about claiming that this will be a suitable treatment option for all children.

Yet as Ricki Lewis pointed out some days later in an article in PLOS Biology, Will Layla Save Gene Editing?

Three months may seem way too soon to report even startling results on a single cancer patient. ‘Cancer-free’ is usually evoked only 5 years after successful treatment, and I wouldn’t even use it then… The timing of the announcement may be important when we look back on the birth of gene and genome editing.

Lewis speculated that given the hype surrounding CRISPR, Layla’s story (even though it involves gene therapy enabled by a different gene editing tool) may have “its greatest impact” on the upcoming “International Summit on Human Gene Editing to be held in Washington D.C. December 1-3,” which concerns germline modification. She added that following the news release, “Cellectis’s stock rose 11% after the news broke and another 3% the next day.”

Editas Medicine CRISPR human clinical trials in 2017?

A second gene editing development that broke on the same day was Editas Medicine CEO Katrine Bosley’s announcement at a tech conference that the biotech company would begin human clinical trials of CRISPR somatic gene therapy by 2017 to treat a form of the rare genetic blindness Leber’s congenital amaurosis (LCA)—what could amount to the first use of CRISPR for human medical treatment. The UK Telegraph misleadingly reported Editas’ plans with the headline First genetically modified humans could exist within two years. While gene therapy technically produces “genetically modified humans,” the term is typically used to refer to (hypothetical) humans created after the genetic modification of embryos or gametes. The confusion can’t be blamed on the headline writer alone, since the article also overbroadly states that CRISPR (regardless of application) is “controversial because it fundamentally changes a person’s genetic code which can then be passed down to offspring.” Contrary to the Telegraph’s reporting, however, Editas’ proposed CRISPR gene therapy trial would not target the genes that are passed on to future generations.

The article and its headline also mislead by implying that the Editas clinical trial would be the first instance of gene therapy. This suggestion erases a fraught history of gene therapy trials over recent decades that were largely unsuccessful, and that harmed or killed patients, most notably Jesse Gelsinger in 1999.

Spark Therapeutics gene therapy partially restores vision?

A third development, reported a few days later (November 11) in The Washington Post, described a different gene therapy for LCA blindness. This clinical trial, sponsored by Spark Therapeutics, partially restored the vision of Allison Corona, who began experimental clinical treatment three years ago. Reporters Carolyn Y. Johnson and Brady Dennis did a good job both of putting this story in the context of previous gene therapy clinical trials gone wrong, and of confronting a clearly controversial aspect of the current approaches: “soaring drug prices.” The estimated cost of Spark’s LCA gene therapy? $500,000 per eye. The reporters also cited a 2014 study in Nature Biotechnology that “found that a gene therapy could conceivably be priced as a one-time payment of $4 million to $6 million.”

In reporting on Editas’s plans (and helpfully distinguishing them from what Spark is doing), MIT Technology Review writer Antonio Regalado also noted that “the eventual cost of such a treatment could be extraordinarily high, given the small number of people who would need it.” Of the roughly 3,000 people in the United States with LCA, Editas’s gene therapy is targeting a gene impacting some 20%, or 600 people. 

Gene therapy's troubled comeback

As clinical trials using the latest genetic engineering tools for gene therapy are announced, there will be many questions to consider. Among them: How should we distinguish the safety and technical risks associated with in vivo and ex vivo applications? Will CRISPR or other molecular nucleases remain in a clinical patient, continuing to snip DNA and causing potentially dangerous off-target effects for years to come? How will we as a society come to terms with the huge six- and seven-figure costs of medical treatments that stand to benefit so few in a world plagued by health disparities? And how can we make sure patients are protected if biotech companies rush their gene editing products to market, whether to influence international summits, to boost their stock prices, or just to overshadow a competitor’s recent press?


New Rules Proposed to Address Privacy and Trust in the Precision Medicine Initiative

Posted by Katayoun Chamany, Biopolitical Times guest contributor on November 19th, 2015

With the launch of the US Precision Medicine Initiative (PMI), patient autonomy within the practice of informed consent is being revisited. The PMI is designed to amass the data of a million volunteers in an effort to advance research and support public health. Alongside this national effort, proposed revisions to the “Common Rule” that regulates research with human subjects in the US are open for public comment through December 7, and are summarized in a Perspective published in the New England Journal of Medicine on October 28, 2015 by NIH director Francis Collins and NIH senior advisor Kathy Hudson.  

In general, the process known as “informed consent” is designed to give research participants the autonomy to consider the risks and benefits associated with a research study as part of their decision about whether to agree or refuse to participate. Early on in biomedical and genomics research, the risks and benefits presented as part of the process were confined to health side effects and therapeutic outcomes. More recently, with advances in biotechnology, supercomputing, and the construction of large-scale data sets, risk and benefit have taken on new meaning.

In a country that is struggling to address national healthcare within the context of racial and economic inequities, analyses of risk and benefit must expand beyond traditional definitions. This is especially true as biomedical research has become increasingly dependent on human bodies, cells, tissues, and DNA. Today, healthy volunteers in clinical trials can gain financial benefit in the form of payment or compensation; contributors of genetic information must consider privacy and discrimination risk associated with release of genetic information; and patients must be aware of profits made from research on biospecimens collected as part of diagnosis or therapy.

Though standards of ethical conduct are mandated in the US through Institutional Review Boards, as required by the National Research Act of 1974 and grounded in the Belmont Report (the basis of the “Common Rule”), these guidelines are in need of updating and revision given the unusual nature of cells as propagating entities or “biologics.” Professional working groups and ethics advisory councils, such as the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, the Office of Management and Budget Working Group to Revise the Common Rule, the National Academy of Sciences, and the American College of Medical Genetics and Genomics, have issued statements regarding ethical conduct for research with human subjects and biospecimens, and healthcare provider responsibility to inform patients of incidental genomic findings and downstream profit making.

A good example of some of the changes afoot is the establishment of the HeLa Genome Access Working Group in 2013. This group was put in place to acknowledge the important contributions made to tissue culture and cell research using a cervical biopsy specimen taken from Henrietta Lacks during her clinical treatment in 1951. That the Lacks family members, two of whom are members of the working group, now have a say in how the HeLa genome is used in research does much to acknowledge the history behind the establishment of this cell line and the downstream profits made from it in the creation of cell culture reagents, diagnostics, vaccines, and drugs.

However, I would argue, as others have, that this kind of personalized gatekeeping cannot be put in place for each individual biospecimen collected in the future. Once a cell is removed from a human tissue or blood sample, its establishment into a cell line makes it a portable entity that can move across time and space in labs spanning a wide array of investigations. Requiring consent for secondary research with stored biospecimens would mean that researchers would have to locate donors and repeat the informed consent process for each new research study that was not foreseen when the sample was collected. This proposal, alongside another to place de-identified samples within the scope of the Common Rule, may present formidable challenges for researchers with limited funding and infrastructure. Thus, rather than broaden research participation and research scope, these proposals may bias research directions towards those that are seen to have large financial payoffs and that include the participation of a privileged class that has not endured the injustices of past biomedical studies.

Certainly additional oversight is needed to avoid medical injustices inflicted upon the marginalized or uninformed, as described in the Nature editorial titled “Justice for All” and the Presidential Commission for the Study of Bioethical Issues 2011 report “Ethically Impossible,” which details the egregious practices of the US Public Health Service in the Guatemala and Tuskegee studies of sexually transmitted diseases from 1932 to 1972. In addition to reparations and apologies, a more proactive and interdisciplinary approach to conducting biomedical research using large data sets is underway.

In January, with the launch of the PMI, the National Institutes of Health convened a Workshop to Explore the Ethical, Legal, and Social Implications (ELSI) of Citizen Science. Many attendees struggled with the term “citizen science,” wondering if the language was appropriate when discussing population-based biomedical research. Citizen science often conjures up images of people sampling trees and water to address environmental concerns like climate change and pollution. But speakers, including Elizabeth Yeampierre of UPROSE, showcased the ways in which building relationships between communities and researchers is a form of citizen science. She highlighted the importance of being mindful of health and environmental injustice that has its origins in colonization, oppression, and slavery. Others highlighted the importance of involving communities and patients in research study planning, such that research goals are in line with the needs of these communities, as is being done in the National Patient Powered Research Networks (PCOR). 

Though these are important points, they appear to be more relevant to hypothesis-driven studies or epidemiological ones that have a specific disease focus. In the context of the PMI, there is no hypothesis. Instead, a large dataset amassed from existing and prospective studies would be mined to observe patterns and design future research studies that could influence policies regarding environmental toxin disposal, but also the development of lucrative drugs and products.

During the Citizen Science Workshop, participants expressed interest in learning how communities can be involved in regulating how, when, and where biospecimens can be used in research. Many of the issues raised are reflected in the proposed revisions of the Common Rule and associated comments. The workshop also informed the development of the Privacy and Trust Principles associated with the PMI, issued earlier this month. These principles are designed to acknowledge the complexity associated with the collection, manipulation, and dissemination of publicly donated biospecimens and lifestyle information, and to build a community of trust in the safeguarding of property and privacy.

What is somewhat disheartening is the lack of conversation around incentivizing contributions and participation from communities in an effort to honor this work, or what some have come to describe as biolabor. With respect to compensation for participation in research, responses vary. Some believe that incentives or financial compensation can address the need for bioresources to assemble large data sets to advance scientific and biomedical research. These approaches, they argue, would specifically address the lack of diversity in samples by including those who have not traditionally been involved in such research. Others see biobanking as a civic duty to support a public good, not unlike other requirements in society, such as taxation, catalytic converter requirements for cars, and anti-smoking laws. Those who challenge this latter stance argue that each individual should be able to act autonomously, and that the choice to participate in research should be protected and recognized. This is precisely why the US uses an opt-in approach to organ donation upon death, unlike countries such as Wales, which on December 1 will move to an opt-out plan through the Organ Donation Wales Program.

There also appears to be a level of “bodily exceptionalism” at play in public contributions, such that contributions involving internal resources (blood, DNA, cells) appear to warrant a different level of oversight and regulation than contributions that involve external resources such as money (taxation) or demographic information (census). Thus, some would argue that it is bodily integrity, not autonomy, that is important. The range of responses to these positions, proposals, and practices is varied, reflecting the plurality of opinion even within groups that traditionally hold a uniform voice.

Perhaps one of the most surprising proposed changes is that the Common Rule would no longer be limited to federally funded research. Rather, researchers operating in the private sector, or funded by state monies, would also need to comply. Because biologics can be traded, exchanged, shared, and sold, they often move in and out of the public and private sectors, making ethical oversight currently difficult to apply. If all research involving biospecimens were regulated under the same Common Rule, consistency would be achieved, and donors and volunteers would have a clearer understanding that tissues collected during clinical diagnosis or treatment, or those donated for academic research, may down the road be used in research studies to develop drugs, diagnostics, and vaccines.

Another important proposed rule change applies to social science researchers. These researchers often complain that the Common Rule is not appropriately designed for their work and creates unnecessary hurdles. Thus, the proposed change exempts most of these studies. In this instance, the broad-strokes approach to solving a research challenge may cause more problems in the long run.

This is particularly true as the PMI intends to collect lifestyle and social information alongside genomic data. Similarly, private genomics companies like 23andMe and research studies using Apple’s ResearchKit will be collecting data that can be used in both biomedical and social science research, and will be most useful when these data are used together to address epigenetic influences on health. That biological data fall under the Common Rule, while environmental (built, social, and natural) data do not, seems counterintuitive to the goals of these interdisciplinary projects.

As we return to the definition of health proposed by French physician Georges Canguilhem in 1943 as “the ability to adapt to one’s environment,” we must also consider models of public health research that contribute to the social good. David Winickoff’s “charitable trust model” seeks to provide both guidance and expertise to communities and individuals that provide vital information and biological resources to these growing large-scale datasets. A closer look at the Privacy and Trust Principles of the PMI and the proposed revisions to the Common Rule may suggest an intention to adopt some of the principles of that model, but there is still work to be done.

Katayoun Chamany is Associate Professor of Biology and the founder of the Interdisciplinary Science program of Eugene Lang College for Liberal Arts at The New School and a Science Education for New Civic Engagements and Responsibilities (SENCER) Leadership Fellow.


Genetic Surveillance: Consumer Genomics and DNA Forensics

Posted by Elliot Hosman, Biopolitical Times on October 29th, 2015

“If we each keep our genetic information secret, then we’re all going to die.”

So says Bill Maris – founder, president, and CEO of Google Ventures, the $2B investment firm with stakes in more than 280 startups – in a Bloomberg Business article on his plans to spend $425M on anti-aging and life extension this year.

Maris isn’t simply trying (successfully) to make headlines; he’s looking to drive a consumer genomics market by convincing people to hand over their genetic material for research. He isn’t alone on this front. 23andMe and have also engaged in grand, seductive promises: Learn your carrier status! Meet your long-lost relatives! Learn how “African” your DNA is, based on “ancestry informative markers!”

This kind of hype downplays the limits and obstacles to providing reliable genetic information and using it to generate beneficial health impacts. It completely obscures the extent to which research as a system—corporate, academic, governmental, what have you—has been co-opted by private gains and has proceeded with little-to-no accountability to the public good and health. And it elides the real drivers of the genomics business model: mass data collection and brokering data access.

Much of the recent reporting on consumer genomics has focused on the FDA’s battle with 23andMe about selling clinically unreliable health information, and on business developments in the sector. Earlier this month, we learned that is in talks with the FDA to start selling health information. Last week, the big news was that 23andMe has been cleared by the FDA to begin selling carrier screening tests for 36 genetic variants.

Another recent news story bridges what have been largely segregated conversations about personal genomics and DNA forensics. Brendan Koerner reports in WIRED about a 36-year-old filmmaker in New Orleans who learned to his surprise that he was a suspect in a 1996 murder. Idaho police had run a “familial search” with DNA found at the crime scene, which bore similarities to DNA his father had submitted to his Mississippi church’s genealogy project, later bought up by Police got a warrant to compel to de-anonymize the father’s DNA, and the company complied, leading police to the filmmaker’s door in December 2014. The filmmaker was cleared after 33 days, but the implications of law enforcement collaborating with personal genomics companies in cold cases came as a chilly reminder of the current climate of mass surveillance—genetic and otherwise.

In the week after the story broke, several reporters investigated whether other personal genomics companies were collaborating with law enforcement. Amid fanfare regarding the FDA decision allowing them to partially resume selling health-related tests, 23andMe responded by publishing a “Transparency Report” on its website stating that it had received and denied five requests from law enforcement since 2006.

Yet the lessons of surveillance in other contexts caution against unchecked reliance on the goodwill of big data companies to protect their users’ privacy. Indeed, with secret courts sealing law enforcement’s requests to access other data points on civilians, how transparent are “transparency reports” anyway?

“What are you worried about? Your genome isn’t really secret.”

In the same Bloomberg Business article, Bill Maris asks us why we would want to withhold our data from an exponentially growing corporate database. The answer is: We’ve been here before. 

In a post-WikiLeaks world where #privacy is trending, many of us are still coming to grips with the impact of corporate and government surveillance on daily life. Now we have to grapple with the realization that server farms aren’t just for phone records: DNA, the code of life, can also be analyzed, synthesized, and applied in innumerable contexts for a range of political and corporate ends.

Recent reporting has documented the growing number of biobanks around the world that contain over a million samples, and as sequencing and data storage costs fall, the numbers of samples and banks could continue to balloon. We already know that DNA databases have led to devastating impacts on people’s lives in the context of criminal justice and immigration decisions, most notably in poor communities, communities of color, and among immigrant families. The scaling up of consumer genomics widens the net of genetic surveillance into more privileged populations. Whether provided voluntarily (to purchase ancestral information, or contribute to medical research) or forcibly (via the criminal justice or immigration systems), our DNA, once collected, could make us all more vulnerable.

The Gmail Metaphor for Genomics

In the last few years, as 23andMe has scaled its empire, commentary has repeatedly compared the genetic data collection of personal genomics companies to the case of Google (yes, 23andMe CEO Anne Wojcicki was until recently married to Google co-founder Sergey Brin). In 2013, Charles Seife, writing in Scientific American, argued:

“[A]s the FDA frets about the accuracy of 23andMe’s tests, it is missing their true function, and consequently the agency has no clue about the real dangers they pose. The [23andMe] Personal Genome Service isn’t primarily intended to be a medical device. It is a mechanism meant to be a front end for a massive information-gathering operation against an unwitting public.” [emphasis added]

Former CGS blogger Jesse Reynolds would agree, writing in 2010:

I've long been of the mind that, just as the traditional business model of newspapers is to get revenue not from readers but from advertisers, personal genomics companies see the potential profit not from the consumers themselves but from the compiled databases – likely in the form of selling access to them.

However, 23andMe spokesperson Angela Calman-Wonson claimed just the opposite in an interview in Nature about the recent FDA decision, stating that consumer testing “is always going to be at the core of our business model.” Reporter Erika Check Hayden apparently didn’t find this convincing, and followed the quote with the statement:

As it grows larger, 23andMe's customer database becomes more valuable for research and drug development by the company and its partners, such as California-based biotechnology firm Genentech.

In his 2013 article, Seife expands his comparison of 23andMe to Google by reflecting on the search engine’s early history:

When it first launched, Google billed itself as a faithful servant of the consumer, a company devoted only to building the best tool to help us satisfy our cravings for information on the web. And Google’s search engine did just that. But as we now know, the fundamental purpose of the company wasn’t to help us search, but to hoard information. Every search query entered into its computers is stored indefinitely. Joined with information gleaned from cookies that Google plants in our browsers, along with personally identifiable data that dribbles from our computer hardware and from our networks, and with the amazing volumes of information that we always seem willing to share with perfect strangers—even corporate ones—that data store has become Google’s real asset. By parceling out that information to help advertisers target you, with or without your consent, Google makes more than $10 billion every quarter.

Yes, Gmail has changed our lives, but what have we lost with Big Brother(s) reading our emails and mining the data for sale or subpoena to advertisers, law enforcement, and others? What about Facebook experimenting with the data it collects on its users to research how to sway users’ emotions? Does Facebook’s research inform its sponsorship of the 2016 presidential debates?

As Alex Lash noted recently in his reporting on the scramble to cash in on the genome (Oct. 20):

Even if our genomes aren’t stolen, can we trust the corporate keepers, or will they inappropriately spill the beans on our medical conditions? Remember the story about Target sending pregnancy-related coupons to a teenager’s house?

23andMe is positioning itself as an advocate for “democratizing healthcare,” luring consumers to buy information related to their health and family in exchange for handing over a bundle of data that are potentially more precious and valuable than search queries and cookies combined. For better or for worse, genetic data can be used for a range of powerful ends, including linking someone to a crime (that they may or may not have committed), selling to third parties for advertising purposes, or developing expensive drugs putatively precise enough to target particular genetic variables.

We need to think broadly about the connections between mass surveillance, biological discrimination, criminal justice, DNA treated as irrefutable evidence of family or of crime, and immigration procedures. And we need to consider whether concepts like “privacy,” “informed consent,” and “notice” are robust enough to preserve human dignity in the face of Big Data’s latest project: mass genetic surveillance.

