Ethical research — the long and bumpy road from shirked to shared

In the autumn of 1869, Charles Darwin was hard at work revising the fifth edition of On the Origin of Species and drafting his next book, The Descent of Man, to be published in 1871. As he finished chapters, Darwin sent them to his daughter, Henrietta, to edit — hoping she could help to head off the hostile responses to his debut, including objections to the implication that morality and ethics could have no basis in nature, because nature had no purpose.

That same year, Darwin’s cousin Francis Galton published Hereditary Genius, a book that recast natural selection as a question of social planning1. Galton argued that human abilities were differentially inherited, and introduced a statistical methodology to aid “improvement of the race”. Later, he coined the term ‘eugenics’ to advocate selective reproduction through application of the breeder’s guiding hand.

Darwin’s transformative theory inspired modern biology; Galton’s attempt to equate selection and social reform spawned eugenics. The ethical dilemmas engendered by these two late-nineteenth-century visions of biological control proliferate still. And, as older quandaries die out, they are replaced by more vigorous descendants. That there has never been a border between ethics and biology remains as apparent today as it was 150 years ago. The difference is that many of the issues, such as the remodelling of future generations or the surveillance of personal data, have become as everyday as they are vast in their implications. To work out how to move forward, it is worth looking at how we got here.

In the late nineteenth century, like today, society was in upheaval and science was on a roll. With Darwin’s bold hypotheses set before them, Victorian breeders, microscopists, collectors, astronomers, geologists and anatomists sought to discover the laws interconnecting life’s core processes — often by using ingenious experimental designs. To probe the formative effects of gestation on heredity in mammals, the gentleman naturalist Walter Heape, a laboratory demonstrator at the University of Cambridge, UK, conducted the first experiments in transferring embryos from one variety of rabbit to another at his home in Prestwich in the 1890s. His methods typified a new era of disrupt-and-learn biology.

Biology rebooted

By the early part of the twentieth century, what had come to dominate was “the biological gaze”, to quote historian Evelyn Fox Keller at the Massachusetts Institute of Technology in Cambridge2. Rather than simply observing life, experimenters began to manipulate its component parts to test the limits of the system, mix up ingredients and turn biology inside out.

In 1903, the embryologist Hans Spemann conducted his famous experiments with amphibians. Using one of his infant daughter’s fine, elastic hairs, he tied a loop around a fertilized salamander egg to create an animal with two heads and one tail. That same decade, in the United States, physiologist Jacques Loeb pursued a new ‘engineering biology’, trying out all sorts of chemicals and conditions to prompt development in model organisms such as sea urchins3.

Ian Wilmut, who led the team that created Dolly the cloned sheep in the 1990s (at the Roslin Institute near Edinburgh, UK), once stated that it was Dolly’s birth that ushered in “the age of biological control” — and made obsolete the expression “biologically impossible”4. In fact, this view of life was born at least a century earlier. And as confident experimentation turned ever more closely and deliberately towards humans, the relationships between research, industry and governments became a tangled ethical bank, and have remained so ever since.

Embryologist Ian Wilmut encounters his brainchild Dolly the cloned sheep, now stuffed and on show at an exhibition in 2015. Credit: Will Latham/eyevine

Eugenics, never without its trenchant opponents, became an increasingly crucial part of a new world order over the course of the twentieth century. It is particularly associated with the mass-sterilization campaigns that began after Indiana’s 1907 act, and with the Nazi racial-hygiene programme that reached its nadir in the Holocaust.

Another legacy of the eugenics movement is the management of populations using techniques such as demography, racial classification and statistical modelling. These, combined with family planning, became synonymous with modernity and progress. From Latin America and Scandinavia to India, China and the Soviet Union, eugenics took root in projects to ‘improve the population’ throughout the twentieth century. Eugenic presumptions about the differential fitness of native and immigrant populations were central to colonial administrations across the British Empire. Census-takers created ‘races’ and ‘tribes’ where none existed, for the purpose of managing populations more ‘scientifically’. These categorizations got inked into emerging nations across Africa and southeast Asia, and continue to shape definitions of race in countries including Malaysia and Singapore today.

The logic of the modern nation state is in no small part provided by eugenic techniques of classifying and controlling citizens, as pointed out by historians Alison Bashford, now at the University of New South Wales in Sydney, Australia, and Philippa Levine, at the University of Texas at Austin5. This typological approach to administration was normalized through what has been called the “prism of heritability” by the sociologist Troy Duster, now at the University of California, Berkeley6. That had the effect of linking together the pathologization of mental illness, homosexuality, criminality, poverty, ethnicity and race into a discourse of ‘rational’ management that became mainstream.

In other words, the principles of the eugenics movement are part of contemporary society’s DNA. Across national and global policies affecting everything from health care, fertility and incarceration to border control, education and regional development, the goal of shaping the population through selective pressures — such as creating a “hostile environment” for immigrants — is alive and well.

The rise of bioethics

The birth of bioethics in the 1970s was in no small part a response to harmful research projects undertaken within this context — on vulnerable groups such as immigrants, prisoners and psychiatric patients — and without meaningful consent. The field emerged largely in the United States, partly driven by the international outrage at the exposure in 1972 of the covert US Public Health Service research project at Tuskegee University — in which more than 400 black US men, mostly poor sharecroppers from Alabama, had their syphilis deliberately left untreated between 1932 and 1972. As many as half of them died, and 60 of their wives and children contracted the disease.

In 1974, the US government passed the National Research Act and established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Two years later, the commission drafted a report outlining in detail the “basic ethical principles and guidelines that should assist in resolving the ethical problems that surround the conduct of research with human subjects”.

In 1978, this was published as The Belmont Report7 in the US Federal Register, establishing guidance for national research and the three pillars of modern bioethics. These were: respect for persons, beneficence (‘doing good’) and justice. The report also clarified the basis for informed consent of study participants, and helped to enforce mandatory policies for ethical oversight of research. The three principles were largely aimed at preventing the mistreatment of vulnerable individuals and communities. Under Belmont’s influence, research ethics became a central principle of modern science.

Bioethics flourished throughout the 1980s, expanding to include equity in public health and access to medical care. The field became increasingly central to medical and scientific training, as well as to research funding. That focus was intensified by the ‘too little, too late’ critiques of government responses to the HIV crisis that emerged in the mid-1980s.

Louise Brown, the world’s first baby born as a result of in vitro fertilization, pictured in 1981. Credit: Michel Artault/Gamma-Rapho/Getty

Bioethics gathered momentum at this time by offering guidance on controversial biomedical applications such as organ transplants and in vitro fertilization (IVF). In the first encyclopaedia of bioethics, published in 1978, theologian Warren T. Reich drew attention to a key shift in the practice of medicine: one that moved away from a commitment to preserving life8. In the past, he argued, medicine was guided by the absolute principle to ‘do no harm’. However, a different ethical dilemma arose out of a heart transplant, a procedure that could significantly improve an individual’s quality of life but which also had the potential to kill them. In contrast to the iron-clad medical ethics of old, the absolute value of human life became relativized. In the world’s most advanced medical facilities, a higher quality of life could now be worth dying for. Once again, ethical debate was reignited.

It was in the 1990s that professionalized bioethics reached its high-water mark. The Human Genome Project (HGP) — the leviathan of publicly funded DNA sequencing — promised to unleash a combination of Darwin’s and Galton’s visions as the century drew to a close. Ethics claimed the largest share of HGP funds set aside for the analysis of “Ethical, Legal, and Social Implications” (ELSI) of genome mapping. In the United States alone, this came to around 5% of the roughly US$3-billion HGP budget, creating “the world’s largest bioethics program”. Armies of ethicists combed over the philosophical principles of altering genetic material in ways that might or might not be passed on to future generations, and the perils of designer babies.

Then, as the century neared its end, something else took centre stage: new techniques derived from reproductive and developmental biology, such as cloning and research into stem cells and embryos. As the prospects of quick bench-to-bedside applications from the HGP faded, so did the allure of bioethics. The discipline lost its most significant source of funding as ELSI programmes ceased.

To return to 1978, there was another turning point for bioethics in the year of The Belmont Report: the birth of Louise Brown, the first baby conceived through IVF. Some of the most controversial research and applications over the past half-century have concerned reproductive and developmental biology. But while bioethicists were recruited en masse to contemplate the impact of the HGP, the fertility industry mushroomed, generating an impressive set of acronyms (but no ELSI). For a while, global public opinion became more sharply divided over cloned dogs and genetically modified (GM) maize (corn) than GM babies. That concern would come later.

In retrospect, many of the forces that propelled late-twentieth-century bioethics into the limelight — such as the focus on speculative genomic futures — eventually left it unmoored. In the past two decades, bioethics has drifted into uncharted waters. Today, amid a panoply of ethical quagmires ranging from gene-edited babies and neurotechnology to dish-grown organoids and nanobots, the fraught relationship between society and research is once again front and centre.

Genetically modified foods have caused controversy in many nations. Credit: Renee C. Byer/Sacramento Bee/zReportage/eyevine

Beyond bewilderment

Just as the ramifications of the birth of modern biology were hard to delineate in the late nineteenth century, so there is a sense of ethical bewilderment today. The feeling of being overwhelmed is exacerbated by a lack of regulatory infrastructure or adequate policy precedents. Bioethics, once a beacon of principled pathways to policy, is increasingly lost, like Simba, in a sea of thundering wildebeest. Many of the ethical challenges arising from today’s turbocharged research culture involve rapidly evolving fields that are pursued by globally competitive projects and teams, spanning disparate national regulatory systems and cultural norms. The unknown unknowns grow by the day.

The bar for proper scrutiny has not so much been lowered as sawn to pieces: dispersed, area-specific ethical oversight now exists in a range of forms for every acronym from AI (artificial intelligence) to GM organisms. A single, Belmont-style umbrella no longer seems likely, or even feasible. Much basic science is privately funded and therefore secretive. And the mergers between machine learning and biological synthesis raise additional concerns. Instances of enduring and successful international regulation are rare. The stereotype of bureaucratic, box-ticking ethical compliance is no longer fit for purpose in a world of CRISPR twins, synthetic neurons and self-driving cars.

Bioethics evolves, as does any other branch of knowledge. The post-millennial trend has been to become more global, less canonical and more reflexive. The field no longer relies on philosophically derived mandates codified into textbook formulas. Instead, it functions as a dashboard of pragmatic instruments, and is less expert-driven, more interdisciplinary, less multipurpose and more bespoke. In the wake of the ‘turn to dialogue’ in science, bioethics often looks more like public engagement — and vice versa. Policymakers, polling companies and government quangos tasked with organizing ethical consultations on questions such as mitochondrial donation (‘three-parent embryos’, as the media would have it) now perform the evaluations formerly assigned to bioethicists. Journal editors, funding bodies, grant-review boards and policymakers are increasingly the new ethical adjudicators.

These shifts have been a long time coming and have many different sources, including the driving influence of practical ethicists such as the British philosopher Mary Warnock — often to the consternation of the wider bioethics community. After Warnock’s Report of the Committee of Inquiry into Human Fertilisation and Embryology was published for the UK government in 1984, John Harris, a medical ethicist at the University of Manchester, UK, complained that “the crucial questions are fudged, or rather are never addressed”9. He argued that Warnock’s approach was over-reliant on “primitive feelings”, resulting in recommendations that were “false” and “dangerous”. The Warnock committee, in his view, had evaded the single most important question they faced concerning the moral status of the human embryo, in favour of a sentimental concession to expedient policy.

As we now know, Warnock was prescient in her attention to the strength of public feeling in relation to human-embryo research. Her reliance on several overlapping types of argument to justify strict limits on the introduction of new reproductive technologies has enabled the United Kingdom to establish a licensing system that is more flexible, and has proved more durable, than that of any other country. Her committee, unusually comprising a majority of non-scientists, reached its consensus based on a pragmatic and principled proposal: that approval for the study of controversial therapeutic and experimental procedures would be subject to a strict and comprehensive code of practice upheld by Parliament. The law itself, Warnock argued, would act as both a guarantor and a symbol of public morality; in its combination of permissive scope and legislative precision, it would express “the moral idea of society”. This was a new template for ethical reasoning.

When the UK government decided, in the wake of the passage of the Human Fertilisation and Embryology Act in 1990, that it would not establish a counterpart to the US National Bioethics Advisory Commission, it was following Warnock’s lead, and that of Anne McLaren. This developmental biologist and Warnock committee member took a populist and practical approach to public trust in science that has been highly influential.

Today, interdisciplinary expertise plus extensive and creative public consultation increasingly define a new approach to ethical science. This trend has been reinforced by organizations such as the Nuffield Council on Bioethics, which advises the UK government by mobilizing a broad spectrum of knowledge, far beyond that of bioethicists and philosophers. Since 1993, the council has commissioned and published nearly 30 specialist reports on controversial biomedical issues, ranging from genetic screening to xenotransplantation. Few of the panels have been chaired by bioethicists. Many of the reports have widened the idea of what counts as an ethical issue — for example, the exploration of cultures of UK scientific research, chaired by the University of Cambridge plant scientist Ottoline Leyser10. In a similar vein, the International Society for Stem Cell Research has released a series of global guidelines that prioritize oversight, communication and research integrity as well as patient welfare, social justice and respect for study participants over fixed principles of ethical conduct.

In a social-media-saturated age wary of fake news, the new holy grail is the ability to create trustworthy systems for governing controversial research such as chimeric embryos and face-recognition algorithms. The pursuit of a more ethical science has come to be associated with building trust by creating transparent processes, inclusive participation and openness to uncertainty, as opposed to distinguishing between ‘is’ and ‘ought’.

In short, expert knowledge and reliable data are essential but never enough to enable enduring, humane governance to emerge. So there is now more emphasis on continuous communication and outreach, and on long-term strategies to ensure collective participation and feedback at all stages of scientific inquiry. The result is less reliance on specialized ethical expertise and more attention to diversity of representation.

Amid the perils and promises of applications, from replacement heart and liver cells or driving malarial resistance through the mosquito population to ending Huntington’s disease, a new legacy to Darwin and Galton has emerged. It turns out that what we have in common is less a single biological essence — or the ability to alter it — than a shared responsibility for human and non-human futures. The implication of this new model is that the most ethical science is the most sociable one, and thus that scientific excellence depends on greater inclusivity. We are better together — we must all be ethicists now.
