Book Review: Flower Hunters by Mary Gribbin and John Gribbin


I’ve spent a fair amount of the last six months in the quiet corridors of the Manchester Museum Herbarium, helping with a lengthy transfer of packets of the General Moss collection from filing cabinet to boxes. Sifting through sheets and packets, I can trace the patterns that the personalities of the different collectors etch in the collection. I see the differences in packeting: the ‘stuffers’, who see the size of the envelope as a minimum target, and the ‘folders’, who encase tiny scraps in tissue-thin paper; I see the localities visited and the specimens selected by different personalities. It is inevitable that any herbarium collection is filtered through the collectors’ passions and whims. Therefore, when I found a copy of Mary and John Gribbin’s Flower Hunters in the library, I picked it up out of a desire to learn more about the collectors whose specimens fill the shelves of herbaria throughout the world.

The book traces a history of botany up until the 20th century, when there still existed territories unaltered by human hands. There is a distinct horticultural slant, but the book does cover the scientific and economic botany which interests me more. There is also more of a stress on Hunters than Flowers; biographical minutiae trump botanical detail. I would have liked to have seen more on the bryophytological loves of Richard Spruce, for example, to get a feel for his mind and passions.

Across the biographical chapters of individual botanists, what piqued my interest was how class influenced how these collectors could practise, and were rewarded for, their work. Whilst all suffered on arduous, years-long journeys around the world, the gentleman (and gentlewoman) naturalist had a retinue and global connections. Their resources permitted the intellectual freedom to see global patterns in vegetation, which fuelled evolutionary and ecological thinking. In contrast, working-class commercial plantsmen were sent out on their own and expected to keep their heads down, looking for the economically rather than intellectually fruitful, whilst only seeing the smallest slice of the profits their seeds and plants made.

The book is structured roughly chronologically, with each chapter on the life of a plant collector (or collaborating pair), hence showing the changing shape of science and adventure. The structure works well in the beginning: the early classification system of John Ray (1627-1705) is followed by the more familiar classification of Linnaeus (1707-1778). I was surprised to learn that Ray’s somewhat cumbersome classification, based on essential features of the plant (such as the dicotyledon/monocotyledon division still current), blended with Linnaeus’ user-friendly classification system, based on somewhat arbitrary features (Cryptogamia, anybody?), to create the bones of modern plant taxonomy. Linnaeus is not the origin of plant taxonomy; he had his own forerunners as well. We then move to Joseph Banks (1743-1820), the de facto first director of the Royal Botanic Gardens at Kew and friend of George III, who accompanied Captain Cook on the Endeavour. Here scientific curiosity is inseparable from imperial ambition: Banks saw the land and flora of the Pacific as ripe for “improvement” and exploitation.

Then follow the plantsmen, sent as the somewhat disposable employees of private companies into unmapped territory. These include David Douglas (1799-1834), sent alone on starvation pay by the Royal Horticultural Society to discover and collect plants in North America, including his eponymous fir and the Sugar Pine. Expected to live off the land and employ local guides, he had his health ruined, losing most of his vision and becoming rheumatic. He died in a bizarre, tragic manner: on Hawaii, his poor eyesight caused him to fall into an animal trap, where he was gored by the trapped bull.

We then move onto a pair of similarly intrepid plantsmen, William and Thomas Lobb, who also brought large trees back to their employers. But with the later plant hunter, Robert Fortune (1812-1880), the voyages become more routine. His work involved trading, officially and unofficially, with other plant breeders in China. His most notable achievement was in service of the British Empire: he was the first to take viable tea seeds out of China to establish successful plantations in India, ending China’s monopoly on tea. Though economic considerations went into even Linnaeus’ botanising, here the consideration of profit becomes more naked and unscrupulous.

Bucking the arc of the narrative is the figure of Marianne North (1830-1890). A wealthy woman, she circled the globe without the expected chaperone, not plant hunting for profit or for a scientific institute (as none would employ a woman to do so) but for her art. She was an incredibly prolific botanical artist who, unusually for the time, painted plants in situ. She worked rapidly and in oils, producing paintings the Gribbins describe as not highly detailed but, more importantly, accurate. Hers are not plants “murder[ed] to dissect” but alive in their habitat; she was an ecological, not an anatomical, painter. I am hesitant to criticize the inclusion of North as I admire her as an intrepid artistic woman. But she does stick out of a narrative of the increasing commercialization of plants precisely because she was not a plant hunter, with all its masculine connotations. She did not bring back the plants themselves as trophies, but her own impressions of the plants and their environments, captured in her paintings. If her story were told in the context of a discussion of growing ecological understanding, amongst the stories of Alexander von Humboldt and Alfred Russel Wallace for example, it would make sense. But here it feels like she was shoehorned in as “the woman” in a history of a period and profession which was more or less uniformly masculine. The best way to appreciate women’s contribution to science is to place them in their intellectual context, not to include them in a place where their lives seem superfluous and dismissible.

The final story is that of Joseph Dalton Hooker (1817-1911), whom I know best for the Bentham & Hooker System of classifying seed plants, by which the Manchester Museum Herbarium (and many other herbaria) arranges its specimens to this day. This is not discussed in the book, sadly, as it would have made a nice narrative circle from Ray and Linnaeus to Hooker. The focus is instead upon Hooker’s botanising in the Himalayas. He was one of the earliest Europeans to explore the Tibetan Plateau, and is largely responsible for the flourishing of rhododendrons in the gardens (and countryside!) of Britain. Less destructively, his collections form integral parts of many herbaria in Britain, including Manchester’s.

A book such as this does run the risk of superficiality and glaring omission (Darwin is conspicuously but understandably absent, and aside from Linnaeus there is an absence of non-Britons). But more important is cohesion, which it ultimately lacks. It doesn’t draw any conclusions about the state of botanical science in this period, merely presenting fascinating stories. A book which treated the hunters and their plants on the same level, giving biographical and evolutionary detail alongside each other, might have made a more enjoyable, tightly structured book.


Book Review: How To Survive a Plague by David France


My generation are some of the first to grow up with HIV infection as a treatable condition. We only know a world where combination HAART can give those newly infected an undetectable viral load and a nearly normal lifespan, so we largely lack the fear and stigma which surrounded AIDS from its beginning. It is easy for us to think, yes, HIV/AIDS was and is tragic, but the modern treatments are as good as a cure, so it’s no real problem now. But such complacency is wrong.

David France opens How to Survive a Plague with the funeral of Spencer Cox, a member of the AIDS Coalition to Unleash Power (ACT UP) who suggested that HIV researchers test the drug Crixivan in large, simple trials to produce data fast and get the drug on the market. This scene is set not at the height of the plague but in 2013. The drug combinations by which he and many others were saved in 1996 had become ineffective, as his HIV strain mutated to snake past all available HIV drugs. It is vital for those born after the plague years to know that treatability does not entail dismissal. And France’s book details the struggle and stubbornness, as well as plenty of setbacks and self-aggrandizement, it took to get to where AIDS is today.

France runs a history of the science of HIV/AIDS alongside, and intertwined with, the history of activism. His writing is less pacy on the research than on the high drama of the drug trials, but this is forgivable given his background: France is an AIDS activism insider. He is among the crowds of plague survivors at Cox’s funeral and is connected to the story’s main players. Though the scope and tone of the book somewhat resemble Siddhartha Mukherjee’s “Big Book of Cancer”, it does not attempt magisterial objectivity. This means the book is How to Survive a Plague (as a gay male activist and journalist in New York, 1981-96). At the book’s heart is a small world of New York activists. San Francisco is a distant oasis separated by the dry sands of prejudice, to say nothing of the rest of the world. But this narrowness is key to the intimacy France creates by following key figures, like characters in a novel, through what would otherwise be a fog of names, dates and acronyms.

Some of his stars include the musician Michael Callen, Richard Berkowitz and their physician Joseph Sonnabend, who together authored a 1982 book advising on safe sex. There is Larry Kramer, the mercurial playwright crucial in ACT UP’s founding, and Peter Staley, the ex-Wall Street trader who protested against the high price of the drug AZT at the New York Stock Exchange. The book is rich in the tales of these and others fighting to have AIDS taken seriously by scientists and law-makers.

What was new about the AIDS epidemic was how activists engaged with research. Theatre majors and high-school dropouts were reading the scientific literature on HIV/AIDS and presenting their own illnesses at conferences so that they could challenge drug companies and government officials on funding and clinical trials. ACT UP’s famous mantra was “drugs into bodies”: they intended to get as many people as possible onto the best drugs as quickly as possible. This was strikingly at odds with a government and medical establishment characterized by its neglect, greed and indifference.

Successes finally emerged when researchers and activists worked together. AIDS activists suggested the parallel-track method of clinical trials, which generated good data from a carefully controlled trial whilst letting those who were not the ‘perfect patients’ trials need still get the drugs. The drug Crixivan appeared promising in the early 1990s, so it was rapidly rushed from test tube to human trials; in place of animal trials came a ‘big chimp trial’, in which a scientist heroically took the drug themselves. But the manufacturer Merck was slow in producing enough of the drug for the Phase II trials, so, driven by love and despair, the activist Tom Blount on Merck’s community advisory board had the drug bootlegged for his lover Jim Straley, who died after his supply ran out.

Such a personal history of an emotive topic runs the risk of whiggish hagiography, but France’s acknowledgement of both the failures and successes of activist groups keeps it balanced. The pitfalls of activist-driven medicine are shown in the case of AZT. Pressure from activists to value haste came at the cost of efficacy: the good outcomes early in the Phase II clinical trial of the drug meant the trial was stopped prematurely and the drug approved in record time. But AZT was ultimately found to have no effect on lifespan, something not captured in the short trial. The scientists have plenty of flaws as well. There was a narrow focus by the NIH on trials of drugs which targeted the HIV virus, at the expense of developing treatments for the opportunistic infections which actually kill AIDS patients. But by the end of the plague, activists and scientists largely collaborated to temper each other’s faults. There is comedy in some of ACT UP’s stunts, such as the unfurling of a giant condom over the house of Senator Jesse Helms, an opponent of AIDS research. But throughout the book runs a sense of the great, howling injustice of the willful ignorance of the AIDS crisis by those in power.

The book ends in 1996, at the end of the plague years, the survivors dissipated. But the plague still shuffles on, wreaking havoc for those lacking access to treatment and those bruised by previous treatments. AIDS casts a long shadow over the LGBTQ community, but the grief and anger spurred activists into becoming more vocal in demanding that their own humanity be respected. The consequences of their actions reverberate in every pride march and every newly-out teenager today.

The Self in the Data: review of Homo Deus by Yuval Noah Harari

Futurology is a notoriously difficult field; difficult to practise and difficult for the practitioner to be taken seriously in. There is an art to avoiding both Nostradamian vagueness and the retrospective naivety of Thomas Watson’s mythic comment on the “world market for maybe five computers”. A good futurologist needs to be aware of the aims and abilities of modern technology and scientific research, but crucially must understand how the political and social environment could channel or outright block scientific and technological development. Futurology is more than simply predicting what shiny new inventions will be out in the next five years. Rather, it is the skill of locating the core philosophical underpinnings of modern technology, science and politics to predict what would happen if they were allowed to proceed to their logical conclusions, the tech being the means to this social end. This is Harari’s aim in Homo Deus: not to prophesy but to warn of where we are going. It is left to the reader whether we should, or can, fight this.

Early in the book, Harari defines the New Human Agenda as the achievement of happiness, immortality and deification, or supreme knowledge and manipulation of nature. Harari argues that, despite having mostly solved the issues of famine, war and disease in the West, and increasingly in the rest of the world, we are still not content. Rather, we sapiens strive to achieve the next items on the list. In doing so, Harari sees many unknown unknowns opening up. With our measly sapiens brains, we struggle to comprehend what a future of cognitively enhanced Homo dei would be like – what Ray Kurzweil terms the singularity. Such adherence to uncertainty is refreshing in a book about the future, but does not serve as a cop-out for refusing to consider what the future may hold.

I disagree with Harari’s complacency over the eventual conquest of infectious disease. It reminds me of a review paper by Rodney Wishnow and Jesse Steinfeld published in 1976, entitled precisely The Conquest of Infectious Disease in the United States. After the emergence of the AIDS epidemic, Hughes and Berkelman gave their similarly titled 1993 paper the despairingly blunt subtitle Who are we kidding? It would be fallacious to predict that no major infectious disease could emerge in the future and take a significant toll on the population; rising antibiotic resistance makes this a growing likelihood. And even many non-infectious diseases are caused, worsened or treated by the shared environment, so the effect of a person’s community on their healthcare cannot be ignored in any future. Whilst I do agree that happiness, immortality and deification will become major goals of the wealthy in the 21st century, the trials of previous human civilizations – disease, famine and war – will be persistent problems, faced as we will be with dwindling natural resources and changing opportunities for pathogens to spread in a climate-altered, post-antibiotic world.

Harari places the move from mass to personalized healthcare within a narrative of the diminishing value of individuals; in addition, the “Great Decoupling” of intelligence from consciousness and the decline of mass warfare could make many people economically irrelevant. But the infectious and environmental aspects of healthcare mean it will always be social; as Eula Biss writes, “our bodies may belong to us, but we ourselves belong to a greater body composed of many bodies.” Precisely because we are all, to a greater or lesser extent, custodians of the health of those around us, the value of the group would likely be preserved in the face of atomised personalized healthcare.

Harari is a professor of world history at the Hebrew University of Jerusalem. His previous work, Sapiens: A Brief History of Humankind, had the focus on broad historical themes expected of someone with that job title. He carries over this historical scholarship to Homo Deus, with a surprising amount of the book given over to discussion of the past. But Harari does not use the past in the way endorsed by Santayana’s well-worn quote, “those who cannot remember the past are condemned to repeat it”. Things are more complicated than this. As Harari so eloquently puts it: “Historians study the past not in order to repeat it, but in order to be liberated from it. […] It enables us to turn our head this way and that, and begin to notice possibilities that our ancestors could not imagine, or didn’t want us to imagine.” On this rationale, Harari spends a significant part of the book exploring the foundational ideology of modern society. Principally, this is humanism – the centering of the value of individual (human) subjective experience in politics, art and ethics, with sapiens rather than deus as the source of all meaning. However, Harari treats this as a religion, broadly defined, and as such holds its claims up to empirical scrutiny – and finds them lacking.

Harari presents a number of arguments, drawing on experiments in the life sciences, which threaten the ideas of the indivisible self and free will that are key to the liberal religion. After two years of being told by my A-Level Philosophy teacher that I do not have free will, I am sure these arguments are not novel. More interestingly, Harari argues that if living systems must be conceived of as algorithmic systems, once we abandon ideas of the immaterial soul, then we can be causally predicted. And the difference between our experiencing and narrating selves, in which the latter assesses an experience based only on its peak and its end pleasure or pain, means we are poor judges of our own subjective experience.

What Harari describes as the new religion of “dataism” therefore intends to jettison this reliance on the vagaries of our own assessments of our experience and go straight to the data. Human experience is defined as the data patterns that a human generates; no messy, qualitative feelings needed. Whilst this description makes dataism sound like a cult from a sci-fi film, many people in the West do make their feelings concede to data every day. Harari references the BRCA1 and BRCA2 mutation tests taken by the perfectly healthy Angelina Jolie; the data on her risk of breast and ovarian cancer led to her deciding on a preventative mastectomy and ovariectomy. Nothing in her subjective experience told her she was in need of significant surgery; it was rather her genetic data, and the population-study data relating to her phenotype, that led to her decision.

Whilst reports of the New Human Agenda have been somewhat exaggerated, as the old one still persists, I find Harari to be most illuminating on the role of data in the future. Upon having it explained to me in my mid-teens that I have no free will, I was remarkably relaxed compared to my classmates. I am not alone in being a persistently inconsistent judge of my own feelings. I have often felt lost and confused in a sea of my own subjectivity, grasping for something stable and objective. I gladly concede that a powerful enough algorithm could know me better than I know myself, and I would perhaps consult such non-conscious, intelligent algorithms when making decisions.

My acceptance of the role of data in supplementing human decision-making does not necessarily entail an acceptance of a future unemployable ‘underclass’. Harari grounds his projections for a dataist future upon the continuation of free-market capitalism. Such a future is far from guaranteed. Harari makes a good case for the pacifying effects of global trade under capitalism making it a very stable political system. But given that climate instability and the consequent political chaos will worsen, the future seems far from stable. It is understandable that Harari gives little space to predictions of political revolution, given the aims of his book (and his reading of Marx), but the absence of the effects of climate change, given how they could indirectly hamper our attempts to achieve happiness, immortality and deification, is inexcusable.

Harari is honing a niche as a popular historian in the grandest sense, synthesizing much of global history for his readers, but his respect for the reader’s own mind means he refrains from fable-like storytelling. His mind ranges far over history, both tunneling back into the deep past and sending tendrils into the future, giving him a sense of fore- and hindsight which we and our politicians could well learn from.

Revolutions in the History of Science and the History of Life: The Influence of Kuhn on Gould and Eldredge’s theory of punctuated equilibria

I wrote this essay at the start of last summer, at the end of my Lower Sixth year, as part of an extension project for A-Level Philosophy, inspired by finding Turner’s Paleontology: A Philosophical Introduction and being given The Structure of Scientific Revolutions at an impressionable age. I have now decided to dig it out and publish it unedited, retaining my youthful zeal, naivety and poor essay-titling skills.

Thomas Kuhn was one of the most influential 20th-century philosophers of science, known primarily for his idea that science advances by a series of revolutions. During a paradigm shift, the fundamental theoretical foundation of a science is overturned and a new generally accepted theoretical foundation, or paradigm, is established. Kuhn’s analysis of the way that scientists work, detailed in his book The Structure of Scientific Revolutions, has influenced the way that scientists in general think about the way in which they do science. But the intellectual framework of Kuhn’s theory has influenced scientists further, even to the extent of heavily influencing the formulation of a scientific theory.

Stephen Jay Gould and Niles Eldredge’s theory of punctuated equilibria states that it is best to take a literal reading of the fossil record, as it actually shows long periods of stasis, where species stay the same, punctuated by the rapid appearance of new species. The theory is commonly described as “Marxist”, often as an insult, despite the fact that Marx, and other Hegelians, believed that everything in history leads to a single goal or end-point: a telos. Gould and Eldredge stressed the contingency of evolutionary history; evolution does not occur towards a telos. Hence, the lack of an end goal that Gould and Eldredge posited for evolutionary history distinguishes their broad theory of evolutionary history from Marx’s theory of human history. However, we will see that the structure of the theory of punctuated equilibria is much more similar to the structure of Kuhn’s theory of scientific revolutions, and indeed that punctuated equilibria could not have been formed as a theory without the intellectual framework Gould and Eldredge borrowed from Kuhn and other philosophers of science. Though many scientists do not consider detailed philosophical study to be important, I will argue that punctuated equilibria is a good example of a scientific theory which could not have been formulated without a study of philosophy and the adoption of the sort of thinking common amongst philosophers of science.

Kuhn suggested that the history of an established, mature science is characterised by long periods of normal science, under which almost all scientists work within the same paradigm. Paradigms are suitably open-ended so that scientists work to “patch up” the science by asking unanswered questions, but seldom question the theoretical foundation of the science. Occasionally, the paradigm strains under the increasing weight of anomalies that scientists find in the data, such that the data cannot be seen as compatible with the current paradigm. Then a scientific revolution may occur, where an often younger, outsider scientist proposes a new theory for the interpretation of the data that explains enough of the anomalies to challenge the current paradigm. If enough scientists pledge allegiance to this new theory, it will become a paradigm. This shift from one paradigm to another is not wholly rational; it is due to a wide range of personal, sociological, psychological and professional considerations as well as the strength of the evidence. The new paradigm becomes incorporated into normal science, and a period of stability occurs as scientists labour under the same, new paradigm. Scientists must use background theories to decide where and how to collect data and the ways in which they analyse it. The data obtained are thus interpreted (or possibly actually seen) in line with the particular paradigm that the scientist is working under, so data are theory-laden. Furthermore, Kuhn argued that science is not advancing towards a goal of immutable truth, clearly set by nature. Rather, science evolves through revolutions, but to no particular goal, in a similar way to how living organisms evolve, without a goal or telos.

Therefore, the broad framework of Kuhn’s theory is characterised by long periods of stasis, where the paradigm remains the same, punctuated by rapid periods of revolution, where a new paradigm emerges, without moving the scientific field towards the goal of absolute truth. Hence, we can trace the intellectual trail from Kuhn’s theory to Gould and Eldredge’s punctuated equilibria, which is similarly characterised by long periods of stasis, where species remain the same, punctuated by rapid periods of revolution, when new species appear, without moving the evolution of life towards an absolute goal of evolutionary fitness.

In order to understand Gould and Eldredge’s theory, it is important to understand what it was formulated to counter. Darwin wrote that evolution occurred gradually, primarily by natural selection, involving the gradual evolution of one form or species into another – a process known as phyletic gradualism. Darwin himself said that this view of evolution is not reflected in the fossil record, as there is no evidence of enormous numbers of intermediate varieties; the fossil record cannot be interpreted as revealing “any such finely graduated organic chain”1 of species splitting and evolving into different species. But Darwin conceded that “the geological record is extremely imperfect and this fact will to a large extent explain why we do not find interminable varieties, connecting together all the extinct and existing forms of life by the finest graduated steps.” So the phyletic gradualist sees the fossil record as showing gradual evolution littered with gaps, and into many of these gaps fall the transitional forms.

However, in 1972, Stephen Jay Gould and Niles Eldredge presented their theory of punctuated equilibria2 – though it is better described as a different way of seeing the fossil record than as a true theory. They posited that the fossil record in actuality shows an ancestral species in an older layer of rock and many descendent species in the next-youngest layer, with no intermediary form between the ancestor and the descendants. Phyletic gradualists would see this as a “gap” in the deposition of sediment, due for example to a lake drying up, such that a thin layer of rock represents a long period of time in which speciation was occurring gradually. Hence, the phyletic gradualist reassures themselves once again that speciation is gradual and the fossil record is incomplete. Gould and Eldredge bemoaned the confines of the phyletic gradualist picture: “We have all heard the traditional response so often that it has become extremely imprinted as a catechism that brooks no analysis: the fossil record is extremely imperfect. […] renders the picture of phyletic gradualism virtually unfalsifiable.”

Therefore, they asked, what grounds do we have for not taking a literal reading of the fossil record? Maybe the apparent “jump” between ancestor and descendants is not an artefact of geological particularities, but represents something actually occurring in evolution. The literal reading invites the proposition that, often, species do not differentiate gradually. A species could differentiate by a subpopulation breaking off from the main population; the small, isolated subpopulation on the periphery of the ancestor’s range experiences a different environment, so the lineage splits into two new descendent species rapidly. The descendants then reinvade the ancestor’s geographical range – a process known as allopatric speciation. When species become established, they do not change for long periods, until the lineage is punctuated by another speciation event. Speciation happens in too short a time period, and in a different geographical range to most of the rest of the population, for the fossils of the ancestor and the transitional forms to be found directly above each other in the same stratum until the new descendent species reinvades. So, most of the time, no “insensibly graded fossil series”2 is captured in the fossil record. The ancestor, transitional forms, descendants series occurs at a faster tempo than Darwin suggested, and these events are interspersed with periods of stasis.

Though Darwin did not posit that evolution had a telos, he did formulate his theory using the standard framework of Victorian ideas of gradual historical progress and Hegelian teleology. Indeed, it is very likely that Darwin could not have come up with punctuated equilibria however hard he tried, given the intellectual environment he worked in. Gould and Eldredge are indebted to Kuhn for their framework for punctuated equilibria. Furthermore, they presented their theory in a way that acknowledges Kuhnian ideas of the theory-ladenness of science – remarkably modest behaviour.

Their formulation of punctuated equilibria is based on the idea of the theory-ladenness of evidence used by Kuhn: that your background theories, or the paradigm you are working under, might lead you to interpret (or even actually see) the data in a way different from another scientist working under a different paradigm. Hence a phyletic gradualist and a punctuated equilibria adherent may each interpret the same fossil sequence differently: the former would interpret there to be gaps in the formation of the rock, giving the appearance of an interrupted gradual evolutionary process, whilst the latter would interpret there to be no actual gaps in the rock formation; rather, the “gaps” suggest a period of rapid evolution in a different geographical area. Gould and Eldredge write, in a remarkably Kuhnian style, in their original paper, that “the idea of punctuated equilibria is just as much a preconceived picture as that of phyletic gradualism. We readily admit our bias towards it and urge readers, in the ensuing discussion, to remember that our interpretations are as coloured by our preconceptions as are the claims of the champions of phyletic gradualism by theirs.”2 Hence, Gould and Eldredge stress that punctuated equilibria is just another way of seeing, and that they are as biased towards it as Kuhnians as Darwin was towards gradualism as a Victorian.

In their original paper, Gould and Eldredge went so far as to say that “the data of palaeontology cannot decide which picture [phyletic gradualism or punctuated equilibria] is more adequate”. Though they later claimed that punctuated equilibria could be verified, here the theory-ladenness of evidence is taken to an extreme degree: our interpretations are suggested to be not merely “coloured by our preconceptions” but wholly fogged by them, implying, disturbingly, that we cannot know anything inductively, as everything is hidden implicitly in our theories, or paradigms.

This extreme approach did not, however, win many adherents to punctuated equilibria, hence their later concession that punctuated equilibria could survive tests against the data. With this verificationist approach, Gould and Eldredge won over the vast majority of younger scientists to varying extents, and hence launched the palaeobiological revolution.

In conclusion, the work of Kuhn and other philosophers of science heavily influenced both the formulation and the presentation of the scientific theory of punctuated equilibria, itself a considerable contribution to modern evolutionary thought. I therefore stress the importance of scientists learning about philosophical ideas, if only for the sake of scientific innovation. Broadening their intellectual horizons can only increase the chances of scientists applying theoretical frameworks in novel ways, allowing radical reinterpretations of current theories. If scientists learn to think like philosophers, at least some of the time, it can bring about innovative interpretations of nature, which is vital for the continued relevance, usefulness and intellectual robustness of science.

Bibliography:

  1. Darwin, C. On the Origin of Species (1859)
  2. Gould, S. J., Eldredge, N. Punctuated Equilibria: An Alternative to Phyletic Gradualism (1972)
  3. Turner, D. Paleontology: A Philosophical Introduction (2011)

Phallocentric Fallacies: On Gender Bias in Science

In the media and in the science classroom, it is common to hear laments over the lack of women in science at the highest level. The statistics make this evident: as of 2014, women made up only 12.8% of the STEM workforce in the UK. The world over, and in the majority of scientific disciplines, women are conspicuously absent at the highest level. Most agree that this is not due to women’s inherent inability to do science, or to their lacking any ambition beyond raising their children. The gender disparity at the leading edge of scientific research and innovation is therefore often bemoaned as a shameful waste of talent. In one such example, Athene Donald explores the phenomenon of girls interested in the physical sciences being subtly or unsubtly discouraged from taking A-Level Physics, and so being “lost” from the path to a career in physics or engineering. Donald argues that this phenomenon harms the economy: simply to maintain the status quo in the UK’s science industries, we need 10,000 more STEM graduates than we had graduating as of 2012, according to the Royal Academy of Engineering.

I don’t deny that women are needed to make up the numbers of competent STEM professionals if we hope to expand STEM industries. Furthermore, I agree with Donald that it reflects poorly on an intellectual culture if those who are academically able and motivated to pursue a field of interest are discouraged from doing so for reasons unrelated to their ability.

However, these arguments apply to encouraging anyone with the merest inkling of interest in science to pursue it through the educational system, so they are not in principle incompatible with having the upper echelons of scientific institutions filled with men from a particularly narrow social slice, if that is how the dice have fallen in terms of interest.

But I will argue that women, along with everyone else outside the demographic that has historically defined the scientist (the middle-class white European man), have more to offer science than just another pair of hands. Though science aims to be objective, it is inescapably subjective, as it is done by human beings with subjective experiences. We gain our subjective biases through how we experience our lives in our society; these background biases act as “blinkers” and inevitably limit our outlook on the ever-elusive truth of reality. This narrowing is not born of a stubborn refusal to see reason, as the pejorative use of “blinkers” implies, but means it is very difficult to see otherwise. As Elizabeth Anderson writes in Feminist Epistemology: An Interpretation and a Defence: “There is no reason to think our presently cramped and stunted imaginations set the actual limits of the world, but they do set the limits of what we now take to be possible.”

Those who have very similar experiences due to their similar social backgrounds are likely to have similar “blinkers” and similarly narrow outlooks, which become the status quo. As Anderson writes: “A scientific community composed of inquirers who share the same background assumptions is unlikely to be aware of the roles these assumptions play in licensing inferences from observations to hypotheses, and even less likely to examine these assumptions critically.” In contrast, those who have different experiences and interests through being socialised differently will have their own slightly different sets of blinkers, and fields of vision of reality slightly askew from the status quo.

Science done by those with very similar life experiences (the same social class, the same country, the same educational background, the same sex-class, and so on) can be very fruitful; I do not deny the achievements of the Enlightenment. But it can only go so far: with such similar subjective biases, a field eventually rehashes old hypotheses again and again until the empirical evidence relevant to them is exhausted. The introduction of someone with a different background, such as that of a woman in a patriarchal society, into a field previously dominated by androcentrism (the centring of the male) means that she brings with her a different set of subjective biases about the field. Her blinkers are slightly offset from those of her male colleagues, her outlook subtly different, and she may encompass a patch of reality the men have so far missed. Using this subtly different outlook, the female scientist may be able to come up with an innovative hypothesis which, after sufficient empirical corroboration, becomes a theory closer to reality than male scientists, with their own particular outlooks, had until then managed. By contrasting ideas developed by those with divergent outlooks, and conducting experiments to work out which idea matches reality most closely, scientists in the field can help edge science ever closer to the truth.

My focus here will be on what Anderson describes as gender symbolism, “which occurs when we represent nonhuman or inanimate phenomena as ‘masculine’ or ‘feminine’ and model them after gender ideals or stereotypes.” I will use a historical example in which gender ideals were mapped onto a biological phenomenon for whose existence no sound evidence was ever found: a true phallocentric fallacy, where the masculine is seen where it does not exist. The episode which sparked this article comes from a particularly obscure branch (or hypha) of biology: fungal reproduction.

As Nicholas P. Money writes in Mushroom, 19th-century mycologists were very interested in the topic of fungal sexual reproduction, though the difficulties of studying the phenomenon meant that, whilst most specialists seemed to favour “the gentle fusion of colonies”, no experimental data were offered and no mechanism proposed. However, during the First World War, Worthington G. Smith (1835-1917) claimed to have observed through his microscope mushrooms producing sperm cells, which were ‘ejaculated’ onto the spores in the soil. Smith’s observations are flawed on two fronts. Firstly, he seems to have struggled to see these sperm cells at all, writing that “At first it requires long and patient observation to make out the form of these bodies satisfactorily, but when the peculiar shape is once comprehended, there is little difficulty in correctly seeing their characteristic form.” It sounds rather as if, to see these cells, you must know what to look for: you see what you know. But his most egregious mistake was to hydrate his samples with the shockingly non-sterile “expressed juice of horse dung”, no doubt containing sperm-like amoebae. It is highly likely that, owing to his experience as a man in patriarchal Victorian society, Smith could only imagine sexual reproduction occurring by the forceful ejaculation of active male sex cells onto passive female sex cells: a clear projection of the gender symbolism of Victorian society onto the natural world. His blinkers contributed to the poor quality of his science, for he failed, or refused, to acknowledge the contaminating effect of the horse dung on his samples, so certain was he that his results were correct.

In contrast, the young Elsie Maud Wakefield (1886-1972), a graduate of the then all-women’s Somerville College, Oxford, appears to me the model of the “New Woman”. Though biographical information on her is sparse, as a woman of the late 19th and early 20th centuries she would likely have been aware of ideas of human sexual relations as more mutualistic and equal than the Victorian ideal of male-dominant courtship, such as those later expressed in the work of Marie Stopes. Whether she adhered to these political values or not, she would have been better able to imagine the non-phallocentric natural world which Smith could not. Her subjective ‘blinkers’ were therefore different enough from Smith’s that she was able to conduct her experiments on fungal reproduction without the phallocentric assumptions of active male sperm and passive female spores.

Wakefield conducted a series of experiments which demonstrated the necessity of the fusion of mycelia, the fungal ‘roots’, for producing mushrooms in the Basidiomycete fungi, with no role for mobile sperm cells. But as Wakefield discovered, the nuclei of the two colonies do not immediately fuse when the colonies do; instead, the fused colony grows and forms mushrooms with the two unfused nuclei inhabiting every cell. Nuclear fusion, the event which in animals occurs when sperm meets egg, occurs in the mushroom only just before spores are produced. Neither colony engaged in sex takes on the ‘active masculine’ or ‘passive feminine’ role which the Victorian Smith expected to find in society and in nature, and so the phallocentric system of gender symbolism breaks down.

I do not claim that women can tap a magical reserve of female knowledge gained purely by virtue of having a female body; this sort of crude gender essentialism only aids in cementing differences. Instead, I argue that, simply because no two people can ever occupy the same position in time and space, each person’s subjective experience of reality will differ slightly from everyone else’s, and so each will bring different background assumptions and interests into science, including biases based on being socialised as a woman or a man. It is in the shuffling of these subjective, gender-biased perspectives that real scientific innovation and the hope of objectivity are to be found. As Anderson writes: “Each individual might be subject to perhaps ineradicable cognitive biases or partiality due to gender or other influences. But if the social relations of inquirers are well arranged, then each person’s biases can check and correct the others’. In this way, theoretical rationality and objectivity can be expressed by the whole community of inquirers even when no individual’s thought processes are perfectly impartial, objective, or sound.”

Innovation through Synthesis: the Methodology of Gregor Mendel


I wrote this essay earlier this year for entry to the Galton Institute’s Mendel Essay Prize 2016, an essay competition open to British and Irish A-Level students writing on Gregor Mendel and his legacy. I chose to write on what I identified as Mendel’s principal contribution to biological thinking: a novel fusion of mathematical methods with biological subject matter which, in addition to furthering our understanding of inheritance and opening up the new field of genetics, also provided the more rigorous mathematical framework that biology has followed for much of the 20th century.

Abstract: Gregor Mendel is well known as the Moravian monk who in 1866 presented what became known as his Laws of Inheritance, later incorporated with Darwinian natural selection into the Modern Synthesis to form the basis of modern genetics. He is popularly seen as a model Baconian inductive scientist; however, I will argue that Mendel’s true innovation was his synthesis of the two disciplines of combination theory and the study of inheritance. This allowed Mendel to describe a biological phenomenon using a mathematical model, a methodology adopted by the vast majority of 20th- and 21st-century scientists.

Scientific creativity is commonly seen as one of two apparently incompatible extremes. The first is that of logical creativity, whereby scientific discoveries are made and problems are solved through the use of logic and deductive and inductive reasoning. On this view, it is very likely that any particular theory would be postulated eventually, as all scientists use the same methods of reason and logic to reveal the same truths about the world. The other extreme is that of the creative genius, whereby the idiosyncratic cognitive patterns of a particularly gifted individual allow, as William James writes, “the most abrupt cross-cuts and transitions from one idea to another”.1 On this view, it is very unlikely that two geniuses will ever construct exactly the same theory, as each will have different ways of connecting ideas, whereas the laws of logic are always the same.

Gregor Mendel is popularly seen as the former: a logical, inductive toiler. This conception has it that, by carrying out many experimental crosses between plant strains, Mendel gained a large data set from which he logically coaxed out his Laws of Inheritance, making him a model Baconian inductive scientist.2 However, a closer reading of Mendel’s biography reveals that his work was, as for all scientists, as much a product of “standing on the shoulders of giants” to “see further”, drawing on past scholarship in order to be creative. But crucially, Mendel’s creative innovation came through his ability to place one foot on the ‘shoulder’ of each of two different ‘giants’, straddling the disciplines of mathematics and biology and hence synthesising the two in an entirely novel and creative way.

Before Mendel, the study of inheritance was dominated by the idea of blending inheritance, whereby offspring were thought to inherit a combination of all their parents’ traits, and so to have an appearance midway between those of the parents. However, by the mid-19th century, other scientific disciplines had fallen in the path of an all-consuming “avalanche of numbers”3. It was arguably inevitable in this mathematically fashionable context that some scientist would apply mathematical principles to the problems of inheritance; indeed, both Hugo de Vries and Carl Correns later did so independently, but not in so precise a manner as Mendel over thirty years earlier. The precision of Mendel’s Laws derived from his novel use of combination theory, taught to Mendel by its originator Andreas von Ettingshausen. Combination theory is a way of describing mathematically the arrangement of objects in a group in terms of underlying laws, which Mendel readily understood and adopted.3 Mendel’s innovation thus came in his ability to use his teacher’s tool to construct a mathematical model in the novel context of biological inheritance.

Though the destruction of Mendel’s experimental notes after his death means that the motivations for his experiments can only be guessed at, it is likely that Mendel approached the issue of inheritance with the hypothesis that the inheritance of particular traits was governed by underlying mathematical laws derived from combination theory. In order to elucidate any such laws, Mendel had to use a deductive Newtonian, not Baconian, method: he first formulated a hypothesis and then designed experiments to prove or disprove it.4 The success of Mendel’s Newtonian methodology is shown by his ability to predict the ratio of pea pod colours in offspring produced by a monohybrid cross. In modern terminology, Mendel’s experimental data show that crossing two heterozygous (Gg) green-podded F1 plants produces F2 offspring with a phenotypic ratio of 3 green : 1 yellow, but a genotypic ratio of 1 GG (green) : 2 Gg (green) : 1 gg (yellow), as the G (green) allele is dominant to the recessive g (yellow) allele. This F2 genotypic ratio can be determined using combination theory simply by multiplying out the F1 genotypes Gg × Gg. Experimental observation therefore confirms the mathematical model of the biological phenomenon of inheritance, and this deductive, Newtonian methodology was subsequently adopted by much of 20th- and 21st-century natural science, to great success.
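The combinatorics of the monohybrid cross can be sketched in a few lines of code. The following Python snippet is my own illustration, not Mendel’s notation (the function names are hypothetical): it enumerates the four equally likely allele pairings of a Gg × Gg cross and recovers the 1 : 2 : 1 genotypic and 3 : 1 phenotypic ratios described above.

```python
from collections import Counter
from itertools import product

def cross(parent1: str, parent2: str) -> Counter:
    """Enumerate the equally likely offspring genotypes of a monohybrid cross.

    Each parent contributes one of its two alleles. Sorting puts the
    uppercase (dominant) allele first, giving a canonical label like 'Gg'.
    """
    offspring = ("".join(sorted(pair)) for pair in product(parent1, parent2))
    return Counter(offspring)

def phenotype_ratio(genotypes: Counter) -> Counter:
    """Collapse genotypes to phenotypes: one dominant allele is enough
    to express the dominant trait."""
    phenotypes = Counter()
    for genotype, count in genotypes.items():
        dominant = any(allele.isupper() for allele in genotype)
        phenotypes["green (dominant)" if dominant else "yellow (recessive)"] += count
    return phenotypes

genotypes = cross("Gg", "Gg")
print(sorted(genotypes.items()))         # [('GG', 1), ('Gg', 2), ('gg', 1)]
print(sorted(phenotype_ratio(genotypes).items()))
# [('green (dominant)', 3), ('yellow (recessive)', 1)]
```

The same enumeration-of-combinations approach extends directly to dihybrid crosses, where it reproduces Mendel’s famous 9 : 3 : 3 : 1 phenotypic ratio.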

The philosopher of science Thomas Kuhn argued that scientific understanding consists of paradigms, or broad frameworks of theories. As experiments throw up anomalies in the current paradigm, the paradigm enters a period of crisis, leading to a revolution after which a new paradigm is established.5 By synthesising the concepts of two different fields, Mendel was able creatively to provide a revolutionary solution to the anomalies of blending theories of inheritance. Though Mendel lived in a period of interest in the application of mathematics, his ability to, as Robin Marantz Henig writes, “maintain […] two different mental constructs of the world simultaneously and apply […] the principles of one model to problems in the domain of the second”3 allowed him to describe biological phenomena mathematically, a truly creative innovation not merely derived from inductive toil.

Mendel’s creativity does not fit neatly into either of the common conceptual extremes of scientific creativity. He both carried out the “abrupt cross-cuts” of the genius, in using combination theory to solve biological problems, and underwent inductive toil to gather empirical evidence. Mendel’s example therefore suggests that scientific creativity is achieved not at either of these two extremes, but only by combining logical problem-solving with the “seething cauldron of ideas”1 of the mind of a genius can truly innovative and revolutionary science occur.

1 William James in: Simonton, D.K. 2004. Creativity in Science: Chance, Logic, Genius and Zeitgeist. Cambridge, UK: Cambridge University Press.
2 O’Hear, A. 1989. An Introduction to the Philosophy of Science. Oxford, UK: Clarendon Press.
3 Henig, R.M. 2001. A Monk and Two Peas: The Story of Gregor Mendel and the Discovery of Genetics. London, UK: Phoenix.
4 Schwarzbach, E., Smýkal, P., Dostál, O., Jarkovská, M., Valová, S. 2014. Gregor J. Mendel – genetics founding father. Czech J. Genet. Plant Breed. 50, pp. 43–51.
5 Kuhn, T.S. 1996. The Structure of Scientific Revolutions. 3rd Edition. Chicago, USA: University of Chicago Press.