How do you vaccinate against a worm?

From: Mouwenda, Y.D., et al (2021), “Characterization of T cell responses to co-administered hookworm vaccine candidates Na-GST-1 and Na-APR-1 in healthy adults in Gabon”, PLoS Neglected Tropical Diseases 15:10.

We all know how vaccines work. You take something from the pathogen into your body, triggering your body’s immune response to fight it off. Then, when you encounter the pathogen for real, your immune system, thanks to your memory T-cells (and a few other cells), is ready for it, and can fight it off easily.

There are a few ways to go about this process. Traditionally, the bacteria or virus causing the disease would be killed (‘inactivated’ for viruses, because they’re not quite alive to begin with) or knocked down to a less dangerous form (‘attenuated’). Once that was done, the pathogen could be injected (or swallowed, in the case of the Sabin Polio vaccine) into the body, triggering the required immune response without any danger to the patient. Indeed, many nineteenth century bacteriologists considered the ability to make a vaccine from a bacterial culture an essential skill that every bacteriologist should learn!

More recently, we’ve developed a very exciting technology called mRNA vaccines, which trigger your own cells to produce the virus’s ‘spike protein’, which then triggers your immune response against the spike. This is especially cool, because the spike is what the virus uses to infect your cells, so having your immune system able to recognise and disable it is a perfect way to protect yourself! The spike protein is harmless without the rest of the virus to go with it, as well, so there’s never any risk to you. The COVID-19 vaccinations I hope you’ve all had (if you’re able) are mRNA vaccines.

But what do you do when you’re trying to vaccinate against something that has no spike protein, and can’t be safely put into the body? Parasitic worms are a huge health problem in many parts of the globe, and are experts at evading the human immune system. It’s not entirely clear how they manage it, but many worms will not provoke an immune response, and giving everyone in an at-risk area drugs against them at once has become the main method used to protect people, despite the many obvious difficulties of such an endeavour.

One particularly nasty set of parasitic worms are the hookworms (Ancylostoma and Necator spp.). These live in the upper gut, where they feed off their unfortunate host’s blood, causing a whole host of problems, principally including anaemia. As I mentioned, they’re very good at evading the immune system, so to create a vaccine against them, scientists have had to get clever.

One promising approach is to train the immune system to recognise the enzymes (chemical tools) the hookworm needs to live. Trials have shown promising results for two enzymes, Na-GST-1 and Na-APR-1, which the hookworm needs to digest blood. As humans don’t drink blood, we don’t have these enzymes, so their presence is a sure indicator of hookworm. And they are essential for the hookworm to feed, so training the immune system to knock them out will kill the worm in short order.

Trials of these two enzymes have shown them to be safe and to provoke an immune response, indicating that they could be used in anti-hookworm vaccinations. Yoanne Mouwenda’s team wanted to dig deeper, and see exactly how the immune system responded to them.

Studying 24 people in Gabon, they found that unvaccinated people (at baseline) did not show any T-cell response to Na-GST-1 (other immune cells did respond), but did respond to Na-APR-1. After three doses of vaccine, the participants’ T-cells responded much more strongly to Na-GST-1, but had not changed their response to Na-APR-1. This suggests that Na-GST-1 is the better vaccine candidate, though it is difficult to be sure from such a small study, and Na-APR-1 might be just as effective when given in higher doses or a different form. The authors suggest that as the participants lived in Gabon, where hookworm does occur, they may have already developed a slight immune response to Na-APR-1 from natural infection.

Immunology is incredibly complicated, and I don’t understand half of it, but it is heartening to see scientific ingenuity on this level being brought to bear on protecting people from such a horrific disease.

And please, get every vaccine you are eligible for. Of all our medical technologies, they are by far the most effective and least risky, and save lives – maybe including yours! – every day. And most of the people saved by vaccination don’t even realise they were ever in danger. Which is nothing short of miraculous, thinking about it.


Monkey business: how pinworms shape genetic diversity in howler monkeys

ft. Inconsistent Capitalisation

From: “Co-structure analysis and genetic associations reveal insights into pinworms (Trypanoxyuris) and primates (Alouatta palliata) microevolutionary dynamics”, B. Solórzano-García, E. Vázquez-Domínguez, G. Pérez-Ponce de León & D. Piñero, BMC Ecology and Evolution 21:190 (2021).

One of the most important drivers of evolutionary change is parasitism. Parasitic organisms are by definition detrimental in some way to their host (broadly speaking), so any host that has any kind of resistance or unusually strong defence against parasites is at a considerable evolutionary advantage. This puts the parasites themselves under constant pressure to adapt, to evolve around their host’s defences, creating a comparable advantage for any particularly well-adapted parasite.

In a similar process, the Delta variant of Covid-19 has not only overtaken most cold and flu viruses, but also all other (thus-far-emerged) variants of Covid-19 worldwide because it is so much more infectious than them – evolutionarily speaking, it has become one of the most successful viruses of all time, in terms of sheer spread across the globe, because of a handful of mutations which have given it a considerable advantage in infecting people over earlier forms of the virus.

The upshot of the constant evolutionary conflict between parasite and host, with each constantly evolving and adapting in response to the other, is often incredibly rapid and creative evolution. These ‘Red Queen’ dynamics, named because neither parasite nor host usually manages to gain much of a lead in this ‘race’,* are thought to be responsible for huge amounts of genetic and biological diversity, creating a large amount of the incredible variety visible in the natural world.

Brenda Solórzano-García and her team wanted to investigate these dynamics in Mexican populations of Mantled Howler Monkeys (Alouatta palliata)** and two of its pinworm parasites: Trypanoxyuris multilabiatus, which only affects this specific species of monkey, and T. minutus, which also affects a handful of other closely-related species. The advantage of working with Mantled Howler Monkeys is that they are strictly tree-dwelling, which means that populations separated by farms, cities or any other human-made or natural barrier are almost completely cut off from each other, and are likely to go down their own independent evolutionary pathways. Similarly, looking at two different pinworms, one species-specific (T. multilabiatus) and the other more of a generalist (T. minutus), provides a good opportunity to see whether T. multilabiatus, affecting only one species, is more closely attuned to its host.

And indeed, genetic analysis did show that genetic patterns in T. multilabiatus populations mirror their Mantled Howler Monkey hosts more closely than T. minutus. Similarly, both pinworms shared with the howler monkeys a broadly east-west split in their population structures, suggesting that the pinworms, transmitted directly between monkeys, are tied to the population dynamic of their host. If the monkeys can’t cross a large break in tree cover, neither can the pinworms – and this is detectable from their genetics.

Solórzano-García’s team also found evidence for the T. minutus pinworms adapting to their howler monkey hosts, with genetic strains (haplotype variants) of the pinworm each tending to only parasitise a single strain of the monkeys. More broadly, across the study area, unique strains of monkey (indicating isolated populations) tended to harbour their own unique pinworm strains.

In evolutionary biology, it is generally thought that local adaptation to hosts in parasites comes with a trade-off – the parasite will get very good at infecting hosts in its immediate area, but will perform less well against potential hosts from more distant areas. In this case, the fact that many strains of pinworm were found only in one strain of monkey suggests that they were less able to infect any monkeys of other strains they came across. That is, pinworms often became so adapted to dealing with the defences of their local monkey strain that they could not get round those of unfamiliar monkeys.

One of the reasons evolution is so fascinating, and so brain-meltingly complex, is that you can’t separate out everything that’s going on. It’s common for us to talk about organisms adapting to their environment, but the fact is that their environment is, in large part, made up of other organisms, all constantly adapting to one another. Parasite-host dynamics are a wonderful thing to study because they distil this complexity down to a more manageable level – in this case three species interacting (rather than three hundred thousand) – while still allowing us to appreciate that biology is not in any way fixed, static or insular, but a set of fluid, constantly moving and interacting processes, with organisms constantly shaping themselves, each other and their environment. The nature of our field is that we study change and movement, an array of cascading processes, from chemical reactions to genetic interactions, all shaping and being shaped by each other. That’s what creates this beautiful tapestry of life that we dedicate ourselves to understanding. And the creation of novelty via evolution, like science itself, is an ongoing process.

* “It takes all the running you can do, just to stay in the same place” – The Red Queen, Through the Looking Glass Chapter Two (Lewis Carroll, p.179 of the 1993 Wordsworth Edition)

**A pallium literally being a Greek-style cloak or mantle

What can a 1700s Ship’s Surgeon tell us about assumptions in Science?

One thing that scientists and historians of science understand instinctively, but I think is underappreciated elsewhere, is that scientists’ assumptions shape not only how we interpret experimental results, but also how we design our experiments.

The ways we think about things, what we understand our experimental subjects and variables to be, certain assumptions we make about them, are baked into experimental design. If we use mice as a model organism, we assume that mice basically do what we think they do, and any weirdness we observe is therefore probably mostly due to the experiment.* If we are working with light, we assume it behaves mostly like a transverse electromagnetic wave – since, quantum weirdness aside, that is what we understand it to be.**

It’s always difficult to wrap our heads around our own understandings and assumptions, since we tend to see them as facts of life, the way the world is. This is where history of science, looking at people who lived in a different world and thought in very different ways, can be very useful in helping us to understand ourselves. So I’d like to use this post to discuss an experiment I came across in my own research, and talk about how the assumptions of an 18th century surgeon shaped his experiments.

James Lind was a ship’s doctor, a surgeon in the British Royal Navy. He is most famous for demonstrating that lemon juice could be used to prevent scurvy – a huge killer of sailors – in what is sometimes described as the world’s first medical trial.*** He also wrote some important medical texts, including An essay on the most effectual means of preserving the health of seamen, and An essay on diseases incidental to Europeans in hot climates.

One of the ‘diseases incidental to Europeans in hot climates’ he was interested in preventing was guinea worm disease (dracunculiasis/dracontiasis), a disease caused by the worm Dracunculus medinensis (and one of my own specialisms). Acquired by drinking waters contaminated with Cyclops water fleas (copepod crustaceans) which have themselves been infected by the worm, guinea worm is a slow-burn disease, with the worms growing inside the body for a full year before the females make their way to the surface and emerge through the leg or foot in search of water to release their offspring into. It’s now nearly eradicated, but in the 18th century it occurred from the Caribbean to the Aral Sea. Europeans most often contracted the disease in the Guinea region of West Africa – which is why they called it ‘guinea worm’.

We now know that guinea worm is caused by the combined presence of two separate organisms – the Cyclops and the worm – but Lind, writing a century before the developments of bacteriology and zoology, knew that diseases came from unhealthy environments. Everybody knew that if you went to unhealthy places, the change of environment disrupted your humours and made you sick, and Lind was chiefly interested in how to make travel (and therefore war, conquest and colonisation) safer for Europeans, whom he saw as dangerously out of their element in the tropics. According to Lind, guinea worm:

“has been supposed to proceed from a bad quality in the water of the country, which is in general owing to the woody, marshy soil.”

To medical men of his generation, disease was a problem with the environment – in this case the soil, which contaminated the water of Guinea. And Lind knew that guinea worm came from drinking bad water, because other doctors of his day had observed people who drank such water contracting the disease. Lind therefore set up an experiment:

“In order to know the contents and qualities of these waters, I procured those of Senegal, Gambia and Sierra Leon[ne], which were sent me in bottles, well corked and sealed.”

To Lind, it is self-evident that whatever is wrong with the water will be wrong with any water taken from the place the disease occurs. He sees the environment – the soil – as the driving force behind disease, so it is a perfectly reasonable assumption to him that all water from the dangerous environment of Guinea will contain whatever it is that causes guinea worm. He did not consider the possibility that guinea worm might only occur in certain pools within Guinea. Nevertheless, he writes:

“I could not, however, discover, by the help of a good microscope, the least appearance of any animalcules; nor did any chymical experiment discover uncommon contents or impurities in those waters.”

‘Animalcule’ was a general word for any microscopic organism. Cyclops water fleas, guinea worm’s ‘intermediate host’, were visible under 18th century microscopes – a few decades later, a doctor called Colin Chisholm would suggest they were the young form of guinea worm.****

A modern scientist might conclude from the absence of any ‘animalcules’ that the bottles never contained any guinea worm-causing ‘qualities’, and go back to Guinea to try again in another pond. Lind, however, came to a very different conclusion:

“All of them, after standing for some time exposed to the open air, became perfectly sweet and good.
Hence I am inclined to think, that the putrefaction of water destroys the live animalcules…which it may contain when fresh; and if such water be permitted to putrify, very wholesome water may be afterwards obtained in Guinea – and thus, supposing the Guinea-worm to be generated from animalcula, or their ova, contained in the waters of the country, their production in the human body may probably be afterwards prevented, by drinking those waters only that have been rendered perfectly sweet by undergoing a previous putrefaction”

Lind thinks that the guinea worm was in the water, but the ‘open air’ had destroyed it. He knows that the disease comes from water, and he therefore believes it impossible that his Guinean water did not contain guinea worm when it left Guinea. His guiding set of assumptions is that disease comes from the environment, and particularly the climate – take water out of the diseased environment, he concludes, and the water will become free of disease.

Which leads him to a very interesting theory:

“The quickest method of freshening such water is, by passing it through a series of vessels, placed under each other, having very small holes bored in their bottoms, so that it may fall in small divided drops, like a gentle shower of rain, through each of them, into a receiver fixed below. The wind, or air, having thus a free passage through the water, divided into small drops, will soon render it wholesome and sweet”

Lind believes that, as the soil and climate of Guinea makes water dangerous, putting this water into a different, more wholesome environment – the open air – will render it safe. This is perfectly reasonable to him, because it accords with everything he knows about disease. If hot climates and ‘marshy soil’ can make water dangerous, it makes sense that clean, cold air, will make it safe.

This may sound far-fetched, but the truth is that Lind’s method would actually protect you against guinea worm – the World Health Organisation recommends that anyone living in areas where guinea worm still exists filter water through a fine cloth mesh before drinking it. Modern scientists and doctors know that this works because it filters out the Cyclops which contain the worm, but Lind lived in a very different world, and drew his conclusions based on what he knew to be true. He wasn’t stupid, or in any way ignorant – he just started with a different set of assumptions from today’s scientists, and arrived at a very sensible conclusion from there.

It’s an underacknowledged truth in science that there are often several plausible explanations for any particular phenomenon. Part of our job as scientists is to work through the possibilities and come up with something which is as close as we can get to what is really going on. But we can’t test everything – we have to assume that we are right about the basics, and that we can rely on our colleagues’ work. We have to build our experiments around what we know to be true about the world – just as Lind did.

I truly believe that our understandings and assumptions in modern biology are basically correct – but it’s worth bearing in mind that back in the eighteenth century, James Lind thought exactly the same thing.

*Which we then try and confirm with statistics, of course.

**I’m aware this is probably an oversimplification, but I’m also not a physicist.

***If you’re interested in learning more about medical trials in the early modern navy, I cannot recommend the work of Professor Erica Charters highly enough.

****Source: my own research, as-yet unpublished. See C. Chisholm, ‘On the Malis Dracunculus’, Edinb Med Surg J 11/42(1815), pp.145-164.

Shell Games: How different Hermit Crabs coexist

From: “Shell resource partitioning as a mechanism of coexistence in two co-occurring terrestrial hermit crab species”, S. Steibl & C. Laforsch (2020), BMC Ecology 20:1

A fairly fundamental concept in ecology and evolutionary ecology is the competitive exclusion principle – that complete competitors cannot coexist. That is, a situation in which two species compete for exactly the same resources (e.g. food, territory or nesting sites) will not last very long. One species will win out, or will evolve to utilise different resources. This is the cause of niche separation (or ecological differentiation), where each species in a given ecosystem has a different ‘role’ and uses slightly different resources. This does not have to mean a complete separation of resources – nearly all small garden birds will eat birdseed, for example – just enough ‘partition’ of resources to ensure that competition for food or space is not intense enough to drive a species out. A classic example is found in wading birds – on any given shoreline or estuary, you will usually find a variety of birds with different beak lengths. The result of this is that the birds with longer beaks can feed on worms and other invertebrates buried deeper in the sand, minimising competition with shorter-beaked birds for the ones buried higher up. The worms are the resource which is ‘partitioned’ among the birds.

The process through which this niche separation is believed to evolve, character displacement, is fairly straightforward, but not all that well documented. The idea is that if two species find themselves competing for a resource, those among them which have traits allowing them to access a different resource under less competition will thrive and pass on their non-competitive traits to the next generation. Over time, the two species will evolve in different directions, purely due to the greater reproductive and survival success of the individuals which don’t have to spend their energy competing with another species for a resource. This often results in two species specialising towards slightly different resources and foodstuffs – our long-beaked and short-beaked waders from before, for example.

This is a fairly clear principle – if two species are competing, and thereby limiting access to resources for each other, there is a strong ‘selective pressure’ (that is, evolutionary incentive) favouring evolution which limits competition and effectively increases resource availability. However, although ecologists are very confident that this has occurred many times in the past, it’s not something that they get to watch in real time very often, so not as many examples are known as you might expect.

Sebastian Steibl and Christian Laforsch thought that hermit crabs might provide another example. These little crustaceans famously lack hard shells of their own – their abdomens are soft and unarmoured – which allows them to grow very quickly, but also presents the risks of being eaten or drying out. To get round this, they hide in discarded shells of other species, often those of sea snails. This means that finding a well-fitting shell is hugely important to hermit crabs, and a potential source of competition between species. Steibl and Laforsch therefore looked at two species of hermit crab, Coenobita rugosus and Coenobita perlatus, which are roughly the same size (around six and a half millimetres) and seem to coexist very happily on the islands and atolls of the Maldives. They theorised that as these species both seemed to eat almost any detritus they came across perfectly happily, and lived in pretty much the same places, any competition between the two species likely focused on shells rather than food or space. Which means that any evidence of character displacement or ‘resource partitioning’ was likely to be found in the shell.

Collecting 876 crabs, mostly but not overwhelmingly C. rugosus, they proceeded to not only geometrically and statistically analyse the shells the crabs picked, but also to test the preferences of 150 of each species in live experiments. That is, live crabs were given a choice of two empty shells, and the shapes of the ones they picked were recorded.
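A two-shell choice experiment like this lends itself to a very simple analysis: under the null hypothesis that a crab has no preference, each choice is a fair coin flip, so an exact binomial test tells you how surprising the observed counts are. Here is a minimal sketch in Python – the counts are entirely hypothetical, not the paper's data, and this is just one way such preferences could be tested:

```python
import math

def binomial_two_sided_p(successes: int, trials: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: the probability, under the null
    hypothesis P(choice) = p, of a count at least as far from the
    expected value as the one observed."""
    expected = trials * p
    observed_dev = abs(successes - expected)
    total = 0.0
    for k in range(trials + 1):
        # Include every outcome at least as extreme as the observation
        if abs(k - expected) >= observed_dev - 1e-12:
            total += math.comb(trials, k) * p**k * (1 - p)**(trials - k)
    return min(total, 1.0)

# Hypothetical counts (NOT the study's data): suppose 110 of 150
# C. rugosus individuals chose the rounder of the two offered shells.
p_value = binomial_two_sided_p(110, 150)
print(f"p = {p_value:.4g}")  # a small p suggests a genuine preference
```

A result like 110 out of 150 sits far from the 75 expected by chance, so the test would return a very small p-value; an even 75/75 split would return 1, i.e. no evidence of preference at all.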

Steibl and Laforsch found what they were looking for – C. rugosus seemed to prefer smaller and rounder shells, while C. perlatus liked longer and narrower ones. This was the case both in the wild, and in the laboratory experiments. Colour did not appear to be important – the wild crabs all used pale shells, regardless of the species of crab, or the species the shell came from.

Different shaped shells do come with different advantages and disadvantages. Narrower shells are better for protecting against predators and keeping the crab from drying out, but limit the amount of eggs the crab can produce. More spacious shells, by contrast, allow the crabs to move further and are better for burrowing in the sand, but provide less protection and hold less water. C. perlatus therefore has better protection against predators, while C. rugosus is likely to be able to have more offspring, but be in greater danger.

The scientists therefore suggest that the two hermit crab species have managed to avoid competing for shells by specialising in different shell shapes, which themselves come with different advantages and disadvantages, and lend themselves to different reproductive strategies. Cause and effect are not easy to disentangle here – perhaps C. rugosus was already producing more offspring than C. perlatus and adapted its shell preference accordingly – but this does provide a very nice example of niche separation and ecological differentiation, and further study may prove it to be an example where character displacement has occurred.

Why are men taller than women?

From: “Expanding the evolutionary explanations for sex differences in the human skeleton”, H.M. Dunsworth (2020), Evolutionary Anthropology 2020 1-9.

All across the world, whatever the average height is in the city or town or country, men tend to be a little taller than women. The traditional explanation for this, repeated to the point of being a truism, is that taller men have (or had, in the past) some sort of advantage over shorter men in forming reproductive partnerships, and so leave behind a greater proportion of the next generation’s children, and the boys inherit their father’s tall stature. That is, men are ‘sexually selected’ to be taller by competition between males.

But is this actually what drives differences in stature? There is some evidence. The ‘male-taller norm’, where a man in a heterosexual romantic partnership tends to be taller than the woman, can certainly be observed, but it is hard to imagine this driving evolution, as in practice it tends to mean that short men pair with short women and tall women with tall men. The pattern of larger males and smaller females exists across the great apes (of which we are one) and is particularly pronounced in gorillas. Gorillas experience very intense competition for mates between males, which seems to favour the larger ones. Perhaps this was also true for our ancestors?

Holly Dunsworth argues the opposite. Her paper sets out the case that while some sort of sexual selection on height may or may not occur or have occurred, to elevate this theory to the position of sole explanation is to overlook some fundamental biological facts. It may not even be necessary to assume that sexual selection for height occurred at all.

At this point I should make clear that we are transitioning from talking about men and women to talking about male and female skeletons, which are different things. Having a male or female skeleton tells you nothing about a person’s gender, and isn’t even always a reliable indicator of sex. ‘Male’ and ‘female’ are here used only to refer to the different patterns of influence that oestrogen and androgen have on the skeleton. Likewise, Dunsworth makes the excellent point that these are sex differences, not dimorphisms or dichotomies, as neither sex nor gender are totally separate, binary, homogeneous or distinct categories. For example, you cannot assume that everyone over 6 foot is male; likewise, a tall female skeleton will often be taller than a short male one. The same logic applies across a huge range of ‘male’ and ‘female’ traits. Biology is complicated and nuanced, and so are people!

One of the key controllers of skeletal growth (and therefore growth more broadly) in humans is estradiol, one of the most important oestrogen hormones. In low doses, estradiol causes the skeleton to grow (by increasing the amounts of two other hormones, GH and IGF-1, which do more of the biochemical heavy lifting), but in higher doses, it inhibits growth, and causes the skeleton to start fusing and solidifying, limiting future growth. The reason male skeletons are usually larger is that they usually grow for longer – the peak levels of estradiol needed to stop growth are reached in female bodies at around thirteen or fourteen, but not in male bodies until nearer sixteen. Male skeletons tend to end up bigger simply because they have an extra couple of years to grow.

The reason for this shorter period of growth in human females is rooted in the biology of the reproductive organs. Estradiol is essential in regulating both male and female reproductive organs, but is far more heavily involved in the menstrual cycle. This means that it circulates at higher levels in the female body, and reaches the concentrations necessary to inhibit growth sooner in female adolescence than male adolescence. The difference in stature between females and males is a product of some pretty fundamental differences in male and female reproductive biology.

Does this rule out the possibility of sexual selection for tall males having occurred at some point in the past? It doesn’t disprove it, but as Dunsworth points out, it means that the sexual selection theory is not strictly necessary to explain the differences in stature, and casts a great deal of doubt on its preeminent status as the principal or only explanation. Indeed, given that significant changes in height (beyond those caused by improved diet in childhood) would require changes in the reproductive systems, mutations which significantly affect height may not actually be advantageous for reproduction. From an evolutionary point of view, it is no good having a mutation which makes you taller if it also has a knock-on effect that damages your fertility. Even if additional tallness can be gained at little cost, and if tall males were sexually selected for, the evolutionary effect of this is likely to be much smaller than the influence of the regulation of sperm production and other processes involving estradiol. At risk of being flippant, it seems to be as likely that larger male skeletons are an accident of biology caused by the evolution of the reproductive system as it is that tall cavemen were considered really hot.

A lot of the time in evolutionary biology, we have to start with the present state of things and work backwards based on what we know or can infer. Unless scientists rigorously examine and test their own and each other’s theories, it is easy to follow a trail of logic to an entirely or partially wrong answer. Often, the assumptions and ‘common sense’ of the world we live in influence our theories. In this case, widely-held (if utterly wrong) 19th-Century assumptions about the gendered behaviours of men and women – that men are active and competitive, and women passive and maternal – seem to have encouraged anthropologists and biologists of that time to latch onto the sexual selection theory and transmit it into the assumptions of later generations. Science is not infallible, and has a troubling history of allowing itself to be used to reinforce oppressive patriarchal and racist ideas. It is now time to do as Dunsworth does in this paper, to challenge ideas rooted in incorrect and oppressive notions of the past, and to use the light of scientific critique and debate to see the world as clearly as we can. This paper presents a brilliantly reasoned and highly necessary challenge to the received wisdom on human sex differences. It will be very interesting to see how science moves forward from here, and how the scientific consensus evolves into the future.

Bright Lights, Bold Lizards? How invasive species thrive in urban areas

From: “Urban invaders are not bold risk-takers: a study of three invasive lizards in Southern California”, B.J. Putman, G.B. Pauly, and D.T. Blumstein (2020), Current Zoology zoaa015

Invasive species have become a major headache for conservationists worldwide, causing problems ranging from rats damaging isolated seabird populations by eating their eggs to Japanese knotweed choking out other vegetation in Europe. Because of this it is important to understand exactly what gives these invasive species the edge over the local wildlife. One popular theory is that the invasive species are better able to exploit the disruption to the local environment caused by humans. For example, European weeds have been able to establish themselves across the globe probably in part because of their ability to colonise ploughed soil, which has allowed them to spread in the wake of the introduction of the European-style plough to the Americas and New Zealand. In a more modern context, it is thought that some invasive species flourish because of their great ability to exploit urbanisation and thrive in cities and towns.

And one of the things that urbanisation creates is a lower density of predator species than in more rural areas. In a couple of species, it has been found that invasive animals are ‘bolder’ and take greater risks. In an area with lots of predators, this behaviour would often be rewarded by being eaten, but cities, with fewer predators, seem to favour bolder animals. Breanna Putman, Gregory Pauly and Daniel Blumstein wanted to find out if this applied to lizards in southern California, where three invasive species, Italian wall lizards (Podarcis siculus), green anoles (Anolis carolinensis) and brown anoles (Anolis sagrei), live alongside the native western fence lizard (Sceloporus occidentalis). To assess whether the invasive species were bolder and more likely to take risks than the fence lizards, they used a measure called flight initiation distance (FID): the researchers walked at a steady pace towards the lizards, and noted how far away they were when the lizards ran off (took flight). Bolder lizards would allow the researchers to get closer, while shyer ones would run away sooner, when the researcher was still further away.
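As a purely illustrative sketch (the numbers below are invented, not data from the study), comparing mean FID between a native and an invasive species might look something like this:

```python
# Hypothetical flight initiation distances (FID, in metres) for two groups
# of lizards. These values are invented for illustration only; they are
# not data from Putman, Pauly and Blumstein's study.
native_fid = [2.1, 1.8, 2.5, 3.0, 2.2]
invasive_fid = [2.4, 2.6, 2.0, 3.1, 2.8]

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# A larger mean FID means the lizards fled while the researcher was still
# further away (shyer); a smaller mean FID means they let the researcher
# approach closer (bolder).
print(f"native mean FID:   {mean(native_fid):.2f} m")
print(f"invasive mean FID: {mean(invasive_fid):.2f} m")
```

In the real study the comparison was of course done with proper statistics across many individuals, habitats and species, not a simple difference of means.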

The scientists predicted that the invasive lizards would be bolder, and allow them closer. They reasoned that since cities have fewer predators, they would reward bolder lizards, which could spend more time feeding and less time running away. And since the invasive species were doing so well, they must be favoured by the city, and would therefore be bolder.

A very clever theory, but unfortunately one that didn’t seem to apply to these lizards. The scientists’ results seem to show that the invasive lizards are no bolder than the native one, with either very similar flight initiation distances or slightly larger ones (i.e. they fled sooner). This was consistently found across different urban habitats and lizard species. The authors point out that cities are far from safe – not only is there traffic to contend with, but also cats and dogs, which can be as deadly to lizards as any wild predator.

So what were the advantages of the invasive species? The scientists noticed that they did not tend to wander far from a hiding place, which doubtless provides them with a greater degree of safety, especially from dogs and cats. This directly contradicts the theory that boldness is advantageous, suggesting that different rules may apply to lizards in southern California than to other, perhaps larger, animals in other areas. On the other hand, lizards are cold-blooded – there may also be a connection between where and how often they like to bask in the sun to warm up and how far they venture from a hiding place. Another possible advantage of the invaders is that they may be more adaptable: though at each site the invasive lizards only tended to occupy a narrow range of ‘microhabitats’, across the area as a whole, the invasive species varied more in microhabitat use. That is, the invaders at each site had different microhabitat preferences. The example the authors give is that brown anoles preferred walls in Orange, but mostly sat on the ground in Santa Ana.

What this paper mostly illustrates is how complex ecology is, and in particular the ecology of invasive species. What applies to some species in some situations does not apply to all. A well-evidenced theory about the advantages of boldness to an invasive species can turn out not to apply to lizards in southern California. Which just goes to show how important research is, even into the most insignificant species, and how new evidence can always disrupt an established theory.

Plastic-eating bacteria: coming to a biorecycler near you?

From: “Toward Biorecycling: Isolation of a Soil Bacterium That Grows on a Polyurethane Oligomer and Monomer”, M.J.C. Espinosa, A.C. Blanco, T. Schmidgall, A.K. Atanasoff-Jardjalieff, E. Kappelmeyer, D. Tischler, D.H. Pieper, H.J. Heipieper and C. Eberlein (2020). Frontiers in Microbiology

Plastic is a problem. We pump oil out of the ground, turn it into something that’s even harder to get rid of, and then dump it back out into the world. Plastic and microplastic pollution has tainted virtually every corner of the planet, and we still struggle to recycle most of what we produce. Quite often, our existing chemical and physical tools cannot break down plastic waste and turn it into something useful anywhere near efficiently enough to make recycling many plastics economical.

This is where bacteria come in. Bacteria seem to manage to survive in nearly every corner of the planet, and to be able to evolve to eat almost anything. Animals, generally, are restricted to being able to derive energy and nutrition only from a fairly narrow range of organic carbohydrates, proteins and lipids. We can’t come close to digesting plastics, as the countless animals found dead with stomachs full of plastic testify. But bacteria can digest countless chemicals which are inedible or downright toxic to animals. Their small size, flexible genomes, simple metabolisms and fast reproduction seem to give them the edge in finding ways to exploit sources of energy unavailable to other organisms.

María Espinosa’s team therefore went looking for bacteria which could digest plastics. They were particularly interested in a class of plastics called polyurethanes, which are difficult to break down in the first place, and which tend to produce the toxic and carcinogenic* diamine chemicals TDA and MDA when broken down.

Taking soil from ‘a site rich in brittle plastic waste’, the scientists isolated the bacteria they found in the soil and tried to grow them in the lab. To make sure that they were cultivating strains that could break down polyurethanes, each culture was allowed only one potential source of energy: disodium succinate (a salt of succinic acid, a normal energy source), a polyurethane, or TDA. In addition, the bacterial ‘cultures’ were kept in the dark to prevent any photosynthesising bacteria from growing.

The researchers found that a certain strain of Pseudomonas, a common bacterium used in all kinds of lab studies, was able to grow in all three situations. That meant that not only was it able to digest disodium succinate, as expected, but also the polyurethane, and even the toxic TDA. This is where the succinate cultures came in handy: the scientists could tell that TDA is slightly toxic even to the Pseudomonas that could eat it, because cultures grown on succinate grew noticeably faster than those grown on TDA, and stalled their growth significantly when given TDA.

If the fact that this strain of Pseudomonas could digest something as toxic as TDA was surprising, the scientists were even more surprised to find that it could also gain nitrogen, as well as carbon and energy, from TDA. Nitrogen is essential for life, and makes up most of the atmosphere, but is very difficult to get out of the air – only a few species of soil-dwelling bacteria can reliably manage it, and most large organisms get their nitrogen either from the soil in the form of chemicals containing nitrogen produced by these bacteria (most plants) or by eating something which has already found some nitrogen (e.g. all animals). For this Pseudomonas strain to manage to get nitrogen from a toxic by-product of polyurethane breakdown is surprising, but it grew perfectly happily in a situation where TDA was the only possible source of nitrogen. This tough little bacterium, able to digest not only polyurethanes, but also the toxic by-product of this digestion, may prove very useful in future plastic recycling facilities!

This is far from the first bacterial strain found to be able to digest a plastic, and functional biorecyclers are still a very long way off, but this study does at least offer a glimmer of hope, that we might be able to fix the natural world we’ve damaged so very much.

*that is, known to increase the risk of cancer

Are bigger doses better? Finding new ways to fight worms

From: “Efficacy of single versus four repeated doses of praziquantel against Schistosoma mansoni infection in school-aged children from Côte d’Ivoire based on Kato-Katz and POC-CCA: An open-label, randomised controlled trial (RePST)”, P.T. Hoekstra, M. Casacuberta-Partal, L. van Lieshout et al (2020), PLoS Neglected Tropical Diseases

Schistosoma is a parasitic flatworm (more specifically, a trematode or fluke) found across the globe. It has a very complex lifecycle, which involves its eggs hatching into a larval form which infects freshwater snails, before developing into a form which swims in the water, infecting any humans it finds there.* The ways and areas of the human body they infect vary between species (there are at least six of major medical importance spread across the world), but because of their connection with water they are commonly able to infect not only fishermen, but also anyone coming to water to wash anything, or to collect water for drinking or watering crops. This means that Schistosoma can often spread to infect large portions of a community, particularly in areas without effective sanitation systems.

For this reason, the main weapon used against Schistosoma is preventative chemotherapy, where entire communities are dosed with a drug called praziquantel, whether they are known to have Schistosoma or not. This is fairly effective in curing individuals and disrupting Schistosoma transmission, but not perfect. The World Health Organisation estimates that 291 million people needed this preventative chemotherapy in 2018, and 97 million of these were reported to be treated – a proportion of around 1 in 3.**

One limitation of the current preventative chemotherapy approach is that praziquantel only seems to kill adult worms, and spares larvae. Pytsje Hoekstra’s team therefore wanted to find out whether three additional later doses, making four doses of praziquantel rather than just one, increased the effectiveness of treatment. To find out, they organised a direct comparison in three villages near Lake Taabo in south-central Côte d’Ivoire, a smallish West African nation where the main species of Schistosoma present is Schistosoma mansoni. They randomly assigned children who had tested positive for Schistosoma mansoni but negative for Schistosoma haematobium into two groups – 70 were given the conventional single dose, and another 83 were given four doses. By only using children infected solely with S. mansoni, the researchers ensured a fair comparison between the two treatments, and eliminated the possibility that praziquantel affecting different species differently could skew their results.

The results were initially encouraging – two different tests suggested that the four-dose method was more effective. The Kato-Katz (KK) test, which is less sensitive and sometimes misses milder infections, produced an estimate that, from a baseline of everyone in both groups being infected, after ten weeks this had fallen to 58% (give or take 10%) of those given a single dose still infected, but only 14% of those given four doses. Meanwhile the more sensitive point-of-care circulating cathodic antigen (POC-CCA) test, which sometimes produces false positives,*** resulted in estimates of 82% of the single-dose group still infected after ten weeks, compared to 64% of those given four doses. This difference between the tests is partly due to the KK test missing less intense cases, and partly due to the fact that KK detects eggs, while POC-CCA detects adult worms. If a worm is hurt but not killed by praziquantel, it may stop laying eggs, but still be hiding in the body.

Regardless of the final infection rates, any Schistosoma infections remaining after the first dose were significantly less intense. Both treatments were pretty effective at reducing infection intensity (72% of those given the standard treatment were less intensively infected afterwards, compared to 95% of those given the four-dose treatment), and somewhat effective at curing patients (42% and 86% ‘cure rates’ for the standard and intensive treatments, respectively). By all metrics, the intense treatment was slightly more effective, and did not appear to cause any worse side-effects than the standard single dose of praziquantel.
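To make the ‘cure rate’ arithmetic concrete, here is a minimal sketch. The participant counts below are invented purely to roughly match the reported percentages; they are not the trial’s actual figures:

```python
# Cure rate = fraction of initially infected participants who test negative
# after treatment. The counts below are hypothetical, chosen only to roughly
# reproduce cure rates in the region of the reported 42% and 86%.
def cure_rate(still_infected, total):
    return (total - still_infected) / total

single_dose = cure_rate(still_infected=41, total=70)  # hypothetical counts
four_doses = cure_rate(still_infected=12, total=83)   # hypothetical counts
print(f"single dose cure rate: {single_dose:.0%}")
print(f"four dose cure rate:   {four_doses:.0%}")
```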

But there is a twist in the story. The cure rate for the intense treatment group four weeks after the fourth and final dose of praziquantel was not significantly different to the cure rate among the standard treatment group four weeks after their single dose. This suggests that Schistosoma mansoni bounces back from four doses just as often as it does from one. For that reason, the authors do not advocate the more intense treatment being rolled out, merely pointing out that it can probably be considered safe, and emphasising the need for better ways to test people for Schistosoma infections.

So does this mean that treatment for Schistosoma isn’t worth the time? Not really. The scientists point out that the most dangerous illnesses resulting from Schistosoma occur when a person has a very intense infection. This study shows that praziquantel is very effective in reducing the intensity of infections, and can therefore help prevent the worst outcomes of infection. Anyone with even a passing knowledge of epidemiology will tell you that prevention is better than cure, and the main aim of preventative chemotherapy is to hinder transmission of Schistosoma and disrupt its lifecycle in order to prevent people from catching it again in the future. That said, the best weapon against Schistosoma is the same as it is for many diseases – access to good sanitation and clean, safe, reliable water.

*For more detail, the American Centers for Disease Control and Prevention have a good summary on their website.

**The WHO factsheet is also very good, despite some discrepancies in its figures.

***e.g. by mistaking a urinary tract infection for Schistosoma. For this reason, the scientists checked that no-one had a urinary tract infection

Mouse Microbiome Mutations: how mutant E. coli speed up evolution

From: “Low mutational load and high mutation rate variation in gut commensal bacteria”, R.S. Ramiro, P. Durão, C. Bank and I. Gordo (2020), PLoS Biology 18:3 e3000617

In recent years, biology has come to understand that the bacteria and microorganisms living inside animal guts are hugely influential in the functioning of the animal, affecting processes far beyond digestion. Because this microbiome and its bacterial microbiota are so influential, we need to understand how the gut microbiota functions in genetic and evolutionary terms, and how different species and strains of bacteria come to thrive or wither within the environment of the gut.

Mutation is necessary for evolution. If there is no variation, there can be no natural selection, and without new mutations, there would be a severe limit on variation. This is particularly important in bacteria, which, for a variety of reasons including their (relatively) simple biology, flexible genomes, short generation times and ability to transfer genetic material between individuals*, are able to mutate and evolve much faster than more complex forms of life. Oddly, most DNA-based microbes have more or less the same mutation rate, which suggests that mutation rates are shaped primarily by evolutionary forces, rather than by the innate molecular biology of the microorganism.

Ricardo Ramiro, Paulo Durão, Claudia Bank and Isabel Gordo were investigating the microbiota of mice. They disrupted the microbiome of four mice by dosing them with the antibiotic streptomycin, and then seeded them with two strains of E. coli which had been genetically modified to fluoresce (glow) under ultraviolet light, so the new strains could be tracked over time.

Several new strains with increased mutation rates (i.e. strains which mutated more often) emerged in one of the mice, with one strain mutating a thousand times faster than normal E. coli. Looking at the genetics of these ‘mutator’ mutants, the scientists found that they all shared a mutation in the gene responsible for producing a DNA polymerase enzyme. These protein machines speed up the process of replicating DNA (which is needed for any kind of growth) by running along a strand of DNA and building an identical strand from the four nucleotide building blocks of DNA. It appears that this mutation promotes other mutations by inhibiting the ability of DNA polymerase to ‘proofread’ new DNA, meaning more ‘copying mistakes’ (i.e. mutations) are retained. Thus, bacteria with this alternate form of DNA polymerase mutate much faster.
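The effect of losing proofreading can be sketched with a toy simulation. This illustrates the principle only, not the paper’s actual model; the genome, error rates and hundredfold difference below are all invented:

```python
import random

random.seed(42)
BASES = "ACGT"

def replicate(genome, error_rate):
    """Copy a genome, substituting a random base at each position with
    probability error_rate (a crude model of uncorrected copying errors)."""
    return "".join(
        random.choice(BASES) if random.random() < error_rate else base
        for base in genome
    )

def count_mutations(original, copy):
    """Number of positions where the copy differs from the original."""
    return sum(a != b for a, b in zip(original, copy))

genome = "ACGT" * 250  # a 1000-base toy genome

# Invented error rates: a 'proofreading' polymerase versus a mutator
# strain copying roughly 100x less faithfully.
normal_copy = replicate(genome, error_rate=0.0001)
mutator_copy = replicate(genome, error_rate=0.01)

print("normal copy mutations: ", count_mutations(genome, normal_copy))
print("mutator copy mutations:", count_mutations(genome, mutator_copy))
```

Run repeatedly, the mutator copies accumulate many more differences per replication, which is essentially what a defective proofreading domain does for real E. coli.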

But why did this gene persist? Mutation can be advantageous, but also dangerous, particularly for bacteria, which can’t shuffle their genes through sexual reproduction. If a bacterium gains a damaging (deleterious) mutation, it can’t then get rid of it – it and its descendants will continue to carry it until they die out from being outcompeted or from accumulating so many deleterious mutations that they cannot function. And deleterious mutations are significantly more common than beneficial ones. It appeared that the deleterious mutations experienced by the mutator bacteria were not sufficiently bad to wipe them out quickly, but that wasn’t the whole story. A second mutation in the DNA polymerase gene shared by all the mutators appeared to be very beneficial, and strongly improved the bacteria’s ability to grow. The researchers speculate that the advantages of this second mutation allowed the first mutation to be carried along and the mutator bacteria to become common, even though the mutator mutation itself was not beneficial. However, neither of these mutations reached ‘fixation’ – the point where an entire population is carrying a gene – suggesting that however helpful the mutations were, there are many other processes influencing which bacteria were able to thrive in the mouse gut.

Evolution is incredibly complicated. Theoretically, experimental evolution studies like this one allow us to control certain factors, making the process less complex and easier to understand. But this experiment shows that even when you have some level of control, there are still a lot of processes going on which can have all sorts of unexpected impacts. We’re still a fair way from a complete understanding of evolution.

*Horizontal gene transfer; Leicester University have a decent summary of the principal mechanisms.

The evolutionary reasons why lorikeet parrots have green backs and colourful faces

From: “Macroevolutionary bursts and constraints generate a rainbow in a clade of tropical birds”, J.T. Merwin, G.F. Seeholzer and B.T. Smith (2020), BMC Evolutionary Biology 20:32

Animal colouration is one of the most interesting phenomena in evolutionary biology, since it varies so much between different animals and is influenced by such a wide variety of factors. In many cases, the overriding influence seems to be the importance of camouflage in protecting the animal from predators. In others, colour serves to deter predators, attract mates, regulate temperature, or help individuals recognise each other.

But is it only ever one of these at once? We know that there are trade-offs – bright colours can impress mates, but also risk attracting the attention of predators. Does the need for camouflage outweigh the advantage of bright colours on an animal’s back but not its face? This, essentially, was the question Jon Merwin, Glenn Seeholzer and Brian Smith wanted to answer. They chose to look at lorikeets (Loriini), a group of small parrots, as there are many species of lorikeets with a variety of colours and patterns. Moreover, their colours are organised into different regions (e.g. wings, face etc.) and their evolutionary history is well-understood, aiding statistical and evolutionary analysis of the group. Finally, most female parrots share the same colour patterns as male ones, so any analysis does not need to account for sex. This allowed the researchers to take a variety of colour measurements on male museum specimens of 98 different lorikeets.

The scientists’ analysis was extremely complicated, taking into account a number of different factors, including brightness, hue, position, ancestry and several different possible patterns of randomness. As you might expect, they got a lot of different results, and it was not immediately obvious what all of them meant. Nevertheless, they found a clear divide between the front and back of the birds, and evidence that different evolutionary forces were principally acting on different areas of the lorikeets’ plumage. It appears that the crown of the head, along with the face and lower abdomen, were most influenced by social or sexual selection – i.e. that the plumage on these areas is used for signalling to other members of the same species. This may also explain why the different species of lorikeet appear to have rapidly and independently evolved differently coloured faces. By contrast, the authors’ analysis indicated that most wing and body patches were principally controlled by either climate or the need for camouflage. Given that most lorikeets have green backs and wings, and live in green trees, the researchers suggested that camouflage is probably the main driving force. Again, this goes some way to explaining the variety of colours found on lorikeets – their back and wing colours are constrained by the need for camouflage, but their faces are able to evolve all kinds of colours exactly because they are far less constrained.

Evolutionary biologists are increasingly recognising the importance of a concept called mosaic evolution – the idea that natural selection can act independently on different genes and traits. That is, mutations affecting a bird’s feet may not necessarily have any impact on its beak. This is important, as it can explain a lot about how a species has evolved over time. What this paper shows is that in birds, feather colour can undergo mosaic evolution, with the colour of the face evolving in a very different manner to that of the wings. This provides an explanation for a lot of the diversity of bird and animal colouration. It is not the case that an animal can have either camouflage or display; it can have both. Mosaic evolution allows natural selection to shape which colours appear where on the lorikeet.