
Top of the Campops: 60 things you didn't know about family, marriage, work, and death since the middle ages

Archive for August, 2024

Why were Hansel and Gretel not English?

Thursday, August 29th, 2024

Romola Davenport

Bernardina Midderigh-Bokhorst and Smith’s Fine Arts Publishing N.V. – The Hague. Hansel and Gretel (1937). Image credit: Russell-Cotes Art Gallery & Museum.

In the story of Hansel and Gretel, a famine drives a father to abandon his children in the woods, where they discover a house made of gingerbread and a cannibal witch. In the Magic Porridge Pot tale, a young girl forced by poverty to search for food in the woods and hedgerows is given a magic pot that produces abundant staple food on command.

These types of stories about hunger and famine abound in the folklore of most European societies, and embody folk memories of food scarcity. However, as the historian John Walter noted, these tropes are curiously absent from English fairy tales. Why? 

Otto Ubbelohde. Residents eat their way back to the town through a mound of porridge. Illustration to the fairy tale “Sweet Porridge” (1909).

Walter speculated that this reflected the exceptionally early disappearance of famine from England, centuries before the risk of famine had subsided in the rest of Europe. Famine remained a threat in most of Europe until the mid-18th century, and persisted in some areas into the 19th century and even the 20th century, especially in association with war. In England, on the other hand, the last national famine occurred in the 1590s, and the last regional famine in the 1620s. 

Famine and dearth

Famine is generally defined, both in historical accounts and by historians, as a ‘killing event’, that is, an episode of substantial excess mortality caused directly or indirectly by a lack of food. Dearth, on the other hand, refers simply to a scarcity or costliness of food, a much more common occurrence in historical populations.

Historians have argued that while a poor harvest often caused dearth, it required at least two consecutive harvest failures to produce a famine, a relatively rare misfortune.  

Jean de Wavrin, Figures lying in the road, by the fields, due to famine. S. Netherlands (1470-1480). British Library, Shelf mark Royal 15 E. IV.

English famines

Famine was clearly a major concern in medieval England. The “Great Famine” of 1315-22 accompanied three successive years of crop failures (1315-17) and a subsequent cattle plague (1319-21), and is estimated by historians to have resulted in the deaths of around 10 per cent of the English population (and similar proportions elsewhere across France, northern and eastern Europe).

Harvest failures and famine accompanied the Black Death in England in 1349-51. Further famines have been identified in 1437-38, 1557, and 1597.

The famine of 1597 was caused by a run of extremely wet growing seasons that caused widespread crop failures across western and central Europe. In England the famine was most intense in upland and remote areas, and killed around one per cent of the English population.

Two Englands?

After the 1590s, famine seems to have receded from southern and eastern England. Severe harvest failures and famine struck many communities in the northern and upland parts of England again in the early 1620s. But at the national level the mortality impact was relatively slight.  

Famine finally retreated from the north and uplands of England after the 1620s. In the 1690s, a series of exceptionally wet and cold growing seasons affected most of western Europe and killed perhaps 10 per cent of the populations of France and Scotland. Remarkably, despite suffering similarly dire weather conditions, the English population experienced no excess mortality. As Walter put it, England had decisively slipped the shadow of famine by the mid-17th century.  

The escape from famine

Why did famine disappear so much earlier in England than in other European societies that were often subject to similar weather conditions and even similar levels of harvest failure?

The answer probably depends on what caused famines, something historians continue to debate.  

Jan Steen, “The Lean Kitchen” (c.1650-1655). Image credit: Bridgeman Images.

It is now widely recognised that modern famines often reflect a failure to redistribute existing food supplies, rather than an absolute lack of food availability.

However, it remains unclear whether historical famines were generally caused by natural and manmade disasters (harvest failures or warfare), or whether they could have been averted in many cases by political interventions to obtain and distribute food where it was needed.  

In the English case there is evidence that both redistribution and increased food production contributed to averting famine. Key factors were the agricultural revolution and the introduction of the poor laws.

The agricultural revolution

Improvements in agricultural production since the 17th century are very likely to have contributed to the decline of famine. These improvements resulted from innovations in farming practices and animal breeding, as well as reclamation of heath, moorland and especially marshes.

They also reflected the progressive commercialization of English farming and the incentives provided by the development of a national market for grain and meat. 

Pieter Bruegel the Elder, “The Harvesters” (1565).

 This economic integration of the country encouraged regional specialization and trade. Upland areas increasingly specialised in pastoral agriculture and imported grain from areas of intensive arable farming. This specialization increased average yields in both types of area, and stimulated trade.  

However, this specialization may also partly explain the later disappearance of famine from the north and west of England, where the soils and topography favoured meat and wool production. In times of harvest failure, demand for grains went up, and most people could no longer afford meat. In pastoral areas which depended on imported grain, this meant that the price of grain rose just as demand for their own exports fell, dealing a double blow to their purchasing power.

Pellizza da Volpedo, Weary limbs (1906).

The poor laws

In tandem with developments in agriculture and trade, England developed a system of poor laws that required local communities (parishes) to raise taxes to support their poor.

Parish officials distributed food or cash to enable the poor to buy food. This provided a safety net for many of the most vulnerable, and helped to reduce famine-induced migration that spread epidemic diseases. The implementation of the poor laws seems to have been more rudimentary in northern than in southern England in the early 17th century, and this may have contributed to the later persistence of famine there.

Why did the English peasants not starve?

It is likely that all these factors played important roles in securing the English population from famine. Even Thomas Malthus, usually an implacable opponent of the English poor laws, was driven to commend their operation in averting famine. Returning from a tour of northern Sweden in 1799, where harvest failure had forced families to resort to grinding birch bark to make bread, Malthus noted that the price of grain had doubled there. However, in England, where similar weather conditions had caused widespread crop losses, the price of grain had tripled, but there was no starvation.

Malthus explained this apparent paradox in terms later made famous by the Nobel laureate Amartya Sen: the poor laws, by providing the poor with cash to buy bread, ensured that even the poorest retained purchasing power. This drove up the price of bread for everyone, but it also ensured that food was widely distributed and that no-one starved.

As Malthus put it, without the operation of the poor laws the consequences of the harvest shortfall ‘would have fallen exclusively on… the poorest inhabitants, a very considerable number of whom must in consequence have starved. The operation of the parish allowances, by raising the price of provisions so high, caused the distress to be divided among five or six million, instead of two or three.’ In Sweden, on the other hand, the poor had no money to buy grain, and so their starvation had little effect on food prices.

Crucially, the English poor laws did not extend to Ireland, where the British administration oversaw one of the last great famines in western Europe in the 1840s.  

Further reading

John Walter and Roger Schofield (eds), Famine, disease and the social order in early modern society (Cambridge University Press, 1989). 

Guido Alfani and Cormac Ó Gráda (eds), Famine in European history (Cambridge University Press 2017).  

References 

Healey, J. (2014) The first century of welfare: Poverty and poor relief in Lancashire, 1620–1730 (Boydell & Brewer).

Hoyle, R. (2017) ‘Britain’, in Alfani, G. and Ó Gráda, C. (eds), Famine in European history (Cambridge University Press).  

Smith, R.M. (2017) ‘Contrasting susceptibility to famine in the early fourteenth- and late sixteenth-century: the significance of the late medieval social structural and village governmental changes’, in Braddick, M. and Withington, P. (eds), Popular culture and political agency in early modern England and Ireland. Essays in honour of John Walter (Boydell & Brewer).

Walter, J. (1989) ‘The social economy of dearth in early modern England’ in Walter, J. and Schofield, R. (eds), Famine, disease and the social order in early modern society (Cambridge University Press), pp. 75-128. 

Wrigley, E.A. (1999), ‘Corn and crisis: Malthus on the high price of provisions’, Population and Development Review, 25, pp.121-128. https://doi.org/10.1111/j.1728-4457.1999.00121.x 

Wrigley, E.A. & Schofield, R.S. (1989) The population history of England 1541-1871, 2nd edn. (Cambridge University Press).


Stuck in the mud!

Thursday, August 22nd, 2024

Kevin Schürer 

“Of all situations for a constant residence, that which appears to me most delightful is a little village far in the country…” Thus starts Mary Russell Mitford’s Our Village, published in 1824, a bestseller in its day. It goes on to describe this idyllic village as a place “with inhabitants whose faces are as familiar to us as the flowers in our garden; a little world of our own, close-packed and insulated like ants in an ant-hill, or bees in a hive where we know every one, [and] are known to every one”.

The message is loud and clear. Prior to the coming of the railways and mass transportation, rural villages were slow-moving, tight-knit communities – places where people rarely came or went, and where the likelihood was that the majority of the population would live and die in the parish where they had been born and baptised. To all intents and purposes, they were stuck in the mud. 

Frederick William Jackson, ‘Sunday Morning‘. Image credit: Rochdale Arts & Heritage Service.

Migration from rural areas to urban areas

It is well known that England and Wales urbanised relatively rapidly over the course of the 19th century, partly as a result of developments in both industrialisation and transportation. The share of the population living in towns and cities increased from around a third in 1801 to just over half in 1851, and to just over three-quarters by the end of the century.

This switch from a predominantly rural society to a predominantly urban one could not have happened without migration from country to town. The second half of the 19th century, in particular, witnessed widespread rural depopulation, as people moved into towns in search of work and a better life.  

Richard Redgrave, The Emigrant’s Last Sight of Home (1858). Photo credit: Tate. CC-BY-NC-ND 3.0

Take the small rural parish of Elmdon in the remote north-west corner of Essex. At the 2021 census it recorded a population of 612, of whom just over half were aged 50 or over. Like many small rural parishes, Elmdon had its heyday in the mid-19th century: it recorded a population of 743 in 1851.

However, if we dig a little deeper into the population dynamics of this Essex village, we can see that the overall trend of rural depopulation masks a more complex pattern of rural migration.

Migration into rural villages 

The total population of Elmdon remained fairly constant between 1851 and 1861, but Jean Robin, a former Campop researcher, demonstrated that only half (52 percent) of the individuals living in Elmdon in 1851 were still present a decade later, in 1861. Some 12 percent had died between the two census years, and about 36 percent had moved away.

So rural migration was not a one-way flow away from rural villages: by 1861 a fifth of the Elmdon population had moved into the village from elsewhere over the course of the previous decade. Maybe the inhabitants of mid-19th century rural Elmdon were not so stuck in the mud after all!

Pre-industrial migration 

But what of earlier periods? What was the situation in pre-industrial rural societies? The pioneering research of one of Campop’s founding fathers – Peter Laslett – has been mentioned in an earlier blog on household structure and the nuclear family.

Peter’s work on the 17th-century household listings for the villages of Clayworth (Nottinghamshire) and Cogenhoe (Northamptonshire) included an analysis of migration and population turnover. In Clayworth, the total number of residents was little changed between 1676 and 1688: 401 in the first year and 412 in the second. Yet over the 12-year period between the taking of the two listings, some 38 percent of the initial population had moved away, while 40 percent had moved in.

Remarkably similar rates of migration into and out of the village are recorded for the smaller parish of Cogenhoe between 1618 and 1628, with 38 percent of the initial population of 185 moving out, and 36 percent of the later population of 180 moving in since 1618. Movement in and out of rural parishes in the pre-industrial period was therefore not only common, but it was potentially higher than that experienced in the mid-19th century.  
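In computational terms, these turnover figures come from comparing who appears in two listings taken years apart. Here is a minimal Python sketch of the idea, using invented names; real studies require careful record linkage (spelling variants, shared names) and burial registers to separate deaths from departures.

```python
# A toy turnover calculation between two household listings.
# The names are invented for illustration; actual listings cannot
# be linked by exact string matching like this.
listing_a = {"William Brown", "Ann Clark", "John Smith", "Mary Smith", "Robert Hall"}
listing_b = {"Ann Clark", "John Smith", "Thomas Wright", "Ellen Wright", "Jane Hall"}

stayers = listing_a & listing_b    # present at both dates
leavers = listing_a - listing_b    # died or moved away in the interval
arrivals = listing_b - listing_a   # born or moved in since the first listing

print(f"gone by the second listing: {len(leavers) / len(listing_a):.0%}")
print(f"new since the first listing: {len(arrivals) / len(listing_b):.0%}")
```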

Joseph Mallord William Turner, ‘A Family Seen from Behind: A Man with a Bundle and a Woman Carrying an Infant; a Small Girl between them’ (1796). Picture credit: Tate. Image released under Creative Commons CC BY-NC-ND 4.0

Indeed, work by Larry Poos, a former Campop research student, using an early 14th-century set of tithing listings for four Essex parishes, has demonstrated that the male population aged 12 and over experienced turnover rates similar to those of the 17th century. Thus, as far back as it is possible to determine, English rural society has exhibited evidence of being highly mobile.

Reasons for migration

A large proportion of this mobile rural population would have been young people searching for work.

In her pioneering study of servants in husbandry – essentially live-in farm servants – Ann Kussmaul has shown that in the early modern period, servants were usually hired on an annual basis, invariably serving no more than a year at a time on any one farm, and moving from farm to farm within the local area known to them. These servants, both males and females, would have been young and unmarried. One example is Joseph Mayett of Quainton, Buckinghamshire, who worked as a servant on 12 separate farms between the ages of 12 and 19, before joining the local militia in 1802.  

Movement within a parish

Population movement was not only widespread between rural parishes, usually within a relatively constrained local area, but also within villages and parishes.  

Using a rare set of documents for the Berkshire parish of Binfield between 1790 and 1801, Maggie Escott, a former researcher at Campop, calculated that just under half of the households resident in 1790 remained in the same property in 1801. Of the rest, some 18 percent of households were dissolved due to death, 15 percent moved away from Binfield, and 16 percent moved within the parish of Binfield, several moving more than once, and one household moving five times.

Such internal migration was just as common in urban areas, if not more so. A rare survey of the London parish of St George-in-the-East undertaken in 1847 showed that only a fifth of single men had remained in the same dwelling for three years or more, compared to a third of single women and 40 percent of families. A quarter of all families in the parish in 1847 had resided in the same dwelling for only a year or less.

The autobiography of the social reformer Francis Place indicates that between 1785, when he was apprenticed to Joseph France, a maker of leather-breeches in Temple Bar, London, and 1800, when he established his own tailoring business in Charing Cross, he moved ten times, including at least one move in the dead of night to avoid rent collectors.

Coming unstuck?

So rather than being stuck in the mud, residential mobility and migration were the norm for large sections of the population in the past. Far from migration being a product of urbanisation and industrialisation, England was already a mobile society in the pre-industrial period.

Indeed, one might argue that a mobile labour force was one of the factors that helped industrialisation.  

Hugh Munro, ‘The Stranger’ (c.1931). Image credit: Glasgow Museums.

However, before concluding this investigation into migration, let us return to the Essex village of Elmdon. Whilst movement into and out of the parish was common, it is worth remembering that around half of those living there in 1851 could still be found resident in 1861.

Indeed, in her detailed study of the village and its inhabitants, Jean Robin found that a small group of core ‘insider’ families, the Hoys and the Hayes, had been present in the village between the mid-17th and mid-20th centuries, while the Gamgees and the Greenhills had roots in the parish from the 18th century to the 1920s. They were a clear minority of the whole, but these families are perhaps the best examples of Mary Russell Mitford’s “bees in a hive … known to every one” – the true ‘sticks in the mud’.

Further reading

Escott, M. M., ‘Residential mobility in a late eighteenth-century parish: Binfield, Berkshire 1779-1801’, Local Population Studies 40 (1988), 20-36.  

Kussmaul, A., Servants in husbandry in early modern England (Cambridge University Press, 1981).

Laslett, P. and Harrison, J., ‘Clayworth and Cogenhoe’, in Bell, H.E. and Ollard, R.L. (eds), Historical Essays, 1600-1750 Presented to David Ogg (London, 1963), 157-84.

Poos, L.R., ‘Population Turnover in Medieval Essex: The Evidence of some Early-Fourteenth-Century Tithing Lists’, in Bonfield, L., Smith, R. M. and Wrightson, K. (eds), The World We Have Gained: histories of population and social structure (Oxford: Blackwell, 1986), 1-22.

Robin, J., Elmdon: continuity and change in a north-west Essex village, 1861-1964 (Cambridge University Press, 1980).

Thale, M. (ed.), The autobiography of Francis Place, 1771-1854 (Cambridge University Press, 1972).

Whitelaw, J., ‘A statistical return of the district of Christchurch in the parish of St George-in-the-East’, Royal Statistical Society (1847).   

Three score and ten?

Thursday, August 15th, 2024

Romola Davenport & Jim Oeppen

Campop’s studies of mortality suggest that, in England, average life expectancy at birth varied between 35 and 40 years in the centuries between 1600 and 1800. It is a common misconception that, when life expectancy was so low, there must have been very few old people. In fact, the most common age for adult deaths was around 70 years, in line with the Biblical three score years and ten. So what does life expectancy actually measure?

George Paul Chalmers, “An Old Woman”, National Galleries of Scotland.

What is life expectancy?

To understand life expectancy, we can imagine a group of 1,000 babies born at the same time, and measure how long each one lives. Figure 1 shows these lifespans as horizontal bars, arranged from top to bottom in order of length of life. Their lifespans follow the pattern of mortality in England in 1841.

Figure 1. Lengths of life and percent remaining alive of 1,000 babies born into a hypothetical population in England and Wales in 1841. Source: Human Mortality Database.

As you can see, in 1841 a lot of children died in the first five years of life. Of 1,000 babies, 138 (nearly 14 percent) died before reaching their first birthday. By age five, over a quarter of the original 1,000 babies were dead.  

However, after the first five years, the rate of attrition eased. Children who made it to their fifth birthday had a 50:50 chance of making it to their 60th birthday. Of the original 1,000 babies, 38 percent survived to age 60, and nearly 10 percent to age 80.  

So why was life expectancy only 42 in 1841? Because life expectancy is the average of all the different lengths of lives in the population. When mortality is high in infancy and childhood, many lives are very short, and these many short lives drag down the average age at death.

Richard Tennant Cooper, “A Ghostly Skeleton Trying to Strangle a Sick Child; Representing Diphtheria”. Image: Wellcome Collection.

Calculating life expectancy 

To calculate life expectancy, we take all the ages at which people died, add them up, and then divide by the number of people. For example, if we had a ‘population’ of two people, one of whom died on their first birthday and the other who died on their 100th birthday, their average life expectancy would be their ages at death added together and divided by two (101/2 = an average life expectancy of 50.5 years). But neither individual died in their 50s, or anywhere near their 50s. The average is not a good indicator of mortality risk in this case, because the length of life is so variable in this population.  

On the other hand, if we have a population of two people, one of whom died on their 80th birthday and the other on their 100th, then average life expectancy would be 90 years, a much more representative estimate of average years lived. The latter case is much more like most populations in the world today. As life expectancy has risen, the benefits have been felt first at younger ages, and death has become increasingly concentrated in late adulthood.
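Both toy calculations can be checked in a couple of lines of Python – purely the arithmetic described above, not how demographers construct real life tables:

```python
# Life expectancy as the plain average of ages at death.
def life_expectancy(ages_at_death):
    return sum(ages_at_death) / len(ages_at_death)

print(life_expectancy([1, 100]))   # 50.5, yet neither death was near age 50
print(life_expectancy([80, 100]))  # 90.0, representative of both lives
```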

Changing life expectancy over time 

In the early 1600s (the first period for which we can calculate life expectancy in the English population) there was a huge peak of deaths in infancy, but then deaths were strung out across the whole life course between birth and 110 years of age. That is, the length of life was very unpredictable in the 1600s, and the risk of death was fairly high at all ages. 

David des Granges, “The Saltonstall Family”, c.1636–7. The painting has been interpreted as depicting Sir Richard Saltonstall and his two wives and children. His first wife Elizabeth Basse, in the bed, died in 1630 leaving two young children, and Richard married Mary Parker in 1633. Image credit: Tate.

By 1800, this pattern had begun to shift. Mortality had become more concentrated at the oldest and youngest ages. In personal terms, this meant that fewer young children experienced the loss of their parents, fewer young adults were widowed, and fewer elderly parents experienced the untimely deaths of their adult children.  

By the 1960s, deaths in childhood and early adulthood were relatively rare, and most people could expect to live into their 60s, 70s or 80s. Life expectancy was around 72, and this is a much better reflection of the ages to which most people could expect to live. 

Today, when the death distribution is compressed and dominated by the adult peak, average life expectancy at birth is a much more representative statistic than in the past, when the average fell between two peaks (infancy and old age). Nevertheless, most people die above the average age, and the most common age at death is almost 90.

It’s a bit more complicated… 

So life expectancy is a kind of summary measure of mortality patterns in a population. It allows us to compare mortality trends over time, and between populations. But it is not a measure of the lifespan of a population, or even of the most common age at death.

Calculating life expectancy in real populations is also not quite as straightforward as we have suggested. Take the life expectancy of the English population in the 1960s. This doesn’t actually apply to the cohort of people born in 1960, because to calculate life expectancy for a real cohort we would have to wait until they were all dead in order to know how long they had lived! 

So to calculate life expectancy for 1960, we take all the deaths that occurred in that year and use them to measure the risk of dying at each age in 1960. We then apply these risks to an imaginary population born in 1960 and work out the average age at which its members would have died if they had faced these risks at each age. This captures the particular mortality patterns of the year 1960, and is given the technical term ‘period life expectancy’. This is what people usually mean when they refer to life expectancy.
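For the curious, here is a minimal Python sketch of that synthetic-cohort logic. The age-specific risks of dying are invented for illustration (they are not real 1960 rates); the point is the procedure: apply each age’s risk to an imaginary cohort and add up the years lived.

```python
# q[x]: invented probability of dying between exact ages x and x+1.
q = [0.14, 0.03, 0.02, 0.01, 0.005] + [0.002] * 45 + [0.02] * 20 + [0.08] * 19
q.append(1.0)  # close the table: anyone still alive at the last age dies

alive = 1.0         # fraction of the synthetic cohort still alive
person_years = 0.0  # years lived by the cohort, summed over all ages
for risk in q:
    deaths = alive * risk
    # survivors live the full year; those who die live roughly half of it
    person_years += (alive - deaths) + 0.5 * deaths
    alive -= deaths

# with a cohort of size 1, total person-years equals the average lifespan
print(f"period life expectancy at birth: {person_years:.1f} years")
```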

Demographers are, however, also interested in the life expectancy of cohorts of real people. For example, we can follow cohorts with unusual experiences, such as men born in the last years of the 19th century who were of recruitment age in World War I, and compare how they fared with cohorts born before and after them.

Great expectations

The modern rise in life expectancy has provided enormous social and economic benefits. Not only do we live longer, but there has been a massive reduction in uncertainty with respect to both our own lifetimes and the lifetimes of our family and friends. 

Further reading

Davenport, R.J. (2021) ‘Patterns of death, 1800 – 2020: Global rates and causes’ in P.N. Stearns (ed.) The Routledge History of Death Since 1800. Routledge. 

Wrigley, E.A., R.S. Davies, J.E. Oeppen and R.S. Schofield (1997) English Population History from Family Reconstitution. Cambridge University Press.

Women have always worked – for pay

Thursday, August 8th, 2024

Amy Erickson

It is commonly assumed that women entered the workforce in significant numbers only after the World Wars of the 20th century. While women may have been occupied with household duties in previous centuries, the assumption goes, they were much less likely than men to engage in paid labour. This blog explains why a) that’s wrong, and b) the issue is much more complicated than simply a progressive increase in women earning their own salary. 

The Woman Shopkeeper, British School. Photo credit: People’s Palace and Winter Gardens, Glasgow, licensed under CC BY-NC-ND.

In 2018 the female labour force participation rate reached a record high of 74 percent. Reliable figures begin in 1851, the first census from which anything like a labour force participation rate can be discerned. In that year, 43 percent of women were reported to be in ‘regular employment’. ‘Regular’ was not defined, so that figure should be taken as a minimum of those engaged in paid employment, with no indication of hours worked.

Mid-19th century concepts of full-time employment were very different from our own: agricultural work was from dawn (or earlier in the case of milking) to dusk, so varied seasonally; textile factory or mining or blast furnace shifts were 12 hours; shops were open in all daylight hours, six days a week. Today’s full time eight-hour day and 40-hour week would have been considered part-time for the last 500 years.  

If 43 percent of adult women were in regular employment in the mid-19th century, then women constituted nearly one third of the total labour force (not counting unpaid domestic work). Single women and widows were much more often employed than married women, only 10 percent of whom were in regular employment.  
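The ‘nearly one third’ follows from simple arithmetic. Assuming, for illustration only, roughly equal numbers of adult men and women and near-universal male employment (say 98 percent of men):

\[
\frac{0.43}{0.43 + 0.98} \approx 0.30
\]

That is, women made up around 30 percent – nearly a third – of everyone in regular employment.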

However, while the great majority of women married, and most of those who married had children whose upbringing was certainly their mother’s responsibility, more than half of all adult women (usually counted as those aged 15 and over) were not married at any given point in time.

The industrial revolution

The effect of the industrial revolution on women’s employment has been hotly debated for the last century. The current consensus is that the effects varied by type of manufacturing. 

The largest manufacturing sector, by number of people employed and by exports, was textiles. The mechanisation of spinning from the late 18th century had a catastrophic effect on women’s employment levels nationwide: yarn that was previously produced by hand by women all over the country was now produced in factories highly concentrated in particular towns, and much of the labour was men’s.  

J. Hinton, The Art of Stocking-Frame-Work Knitting, engraved for the Universal Magazine, 1750. Science Museum Group, © The Board of Trustees of the Science Museum.

The mechanisation of weaving in the early 19th century partially compensated for the earlier technological unemployment caused by the mechanisation of spinning, since factory weaving was largely female. But factory weaving, like factory spinning, was not evenly spread but geographically concentrated: cottons in Lancashire; woollens in West Yorkshire and the West Country; silks in Essex and Cheshire.  

The census evidence, available for the period 1851-1911, shows that female labour force participation rates were demand-led – that is, wherever paid employment was available, women took it. Regional differences were therefore marked. That situation probably applied earlier too.

The best place to measure employment rates prior to 1851 is London, using court records which asked witnesses how they supported themselves. Around 1700, these records show a minimum of 65 percent of married women in employment, six and a half times the 1851 rate. Nearly all single and widowed women were in employment.  

Married women in employment still bore all of the domestic responsibilities, but they were likely to pay other women to do the required cooking, cleaning, washing, and childcare – either as live-in servants or on a casual daily basis as charwomen. This left the wealthier women – those who had received skilled training from their parents or through an apprenticeship – free to operate their trade. Both their activities and the servants’ employment increased the female labour force participation rate.

For married women, the drawback of earning was that technically their husbands owned all of their property, although there were ways around that draconian rule.

Sketch book of Paul Sandby (1745-1809), photo credit: Trustees of the British Museum.

Entrepreneurs

Given sufficient capital, running one’s own business was infinitely preferable for women, whose wages were stuck at the biblical ratio of one half to two thirds of men’s wages for over 500 years (Leviticus 27:2-4). Both piecework and entrepreneurship were therefore preferable to waged work. The censuses of 1851-1911 suggest that historically women were more likely than men to be entrepreneurs – whether they chose self-employment through necessity or to take advantage of opportunities.

Before the 19th century, most work was domestic in the sense that it took place in or around someone’s home. Of course, all of the labour that we now refer to as unpaid domestic work was still necessary, but to a large extent women were paid to undertake it. 

Unknown artist, Esther Hammerton (1711-1746).
Esther succeeded her father as sexton at All Saints’ Church in Kingston, which required her to dig graves and ring the bells. By the end of the 18th century, every parish within London’s city walls, and several outside them, had employed a woman sexton at one time or another.

Labour force participation rates 

In the 16th, 17th, and 18th centuries, around one third of all households employed servants. By 1851, only 12 percent of households employed servants, which necessitated much more unpaid labour in the home. The mid-19th to the mid-20th century marked a historic low point in what we now call labour force participation rates, and of course saw the campaigns for women’s education, reforms to married women’s property law, and access to the professions of medicine and law.  

It is these campaigns that are often credited with ‘opening up’ employment for women, but the story is considerably more complicated, and by no means a simple progression from bad to better. Investigating women’s employment in the pre-census era puts the ‘record’ labour force participation rate of 2018 into perspective: it now looks more like a return to an earlier status quo than an achievement of equality of opportunity.

Women gutting and salting herring for export in Wick, c.1900, photo credit: Johnston Collection, Wick.

Further reading

Open access

Xuesheng You, ‘The missing half: female labour force participation in Victorian England and Wales’, in The Online Historical Atlas of Occupational Structure and Population Geography in England and Wales 1600-2011, ed. L. Shaw-Taylor, A. Cockerill and M. Satchell (2017). 

Economies Past lets you explore female and male employment by local area 1851-1911. 

On Populations Past you can disaggregate women by marital status and relate their employment to households, to infant and child mortality, and to children’s employment by local area 1851-1911. 

The British Business Census of Entrepreneurs maps women and men in business 1851-1911. 

Paywall

Amy Erickson, ‘Married women’s occupations in eighteenth-century London’, Continuity & Change 23 (2008), 267-307. 

Wanda Henry, ‘Hester Hammerton and women sextons in eighteenth and nineteenth-century England’, Gender & History 31:2 (2019), 404-21. 

Carry van Lieshout, Harry Smith, Piero Montebruno & Robert J. Bennett, ‘Female entrepreneurship: business, marriage and motherhood in England and Wales, 1851–1911’, Social History 44:4 (2019), 440-68. 

Leigh Shaw-Taylor, ‘Diverse experiences: The geography of adult female employment in England and the 1851 census’, in Women’s Work in Industrial England: Regional and Local Perspectives, ed. Nigel Goose (2007). 

Jane Whittle, ‘A critique of approaches to “domestic work”: Women, work and the pre-industrial economy’, Past & Present 243 (2019), 35-70. 

Xuesheng You, ‘Women’s labour force participation in nineteenth-century England and Wales: evidence from the 1881 census enumerators’ books’, Economic History Review 73:1 (2020): 106-33. 

What a big family you have, Grandma!

Thursday, August 1st, 2024

Alice Reid & Jim Oeppen

Looking backwards in time gives a mistaken impression that family sizes in the past were larger than they actually were. This blog explains why this happens, and explores the differences between the picture of the past painted by genealogies and the past as it actually was. 

Looking backwards at our families 

Alice’s grandmother, Margaret, had six children, of whom five survived to adulthood. She had 14 grandchildren and (so far) 25 great-grandchildren. She also had two sisters, Kathleen and Moira. Moira had two children, four grandchildren and six great grandchildren. Kathleen remained single and childless throughout her life. On average, the three sisters (Margaret, Kathleen and Moira) had 2.7 children apiece.  

Kathleen, Moira and Margaret with their mother Agnes (also known as Nan) in 1929. Family photograph, courtesy of Colin Reid.

Of the seven offspring in the next generation who survived to adulthood, five came from a family of six, and two from a family of two. If you were to gather them all in a room and ask how many children their mothers had (imagine they were not related and therefore did not worry about whether siblings should all answer the question), the average answer would be 4.9 children. The picture from the children’s point of view is very different, because there are more of Margaret’s children to remember their big family. The fact that Kathleen had no children means that her family size (of zero) cannot be represented in a calculation of mothers’ family size as reported by children.
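A minimal Python sketch of this example makes the two viewpoints concrete (the numbers are those given above):

```python
# Children ever born to each sister, and how many survived to adulthood.
children_born = {"Margaret": 6, "Kathleen": 0, "Moira": 2}
surviving_adults = {"Margaret": 5, "Kathleen": 0, "Moira": 2}

# Mothers' (descendant) view: average over all three sisters.
mothers_view = sum(children_born.values()) / len(children_born)

# Children's (ascendant) view: each surviving adult reports the
# number of children their own mother had.
reports = [children_born[m] for m, n in surviving_adults.items() for _ in range(n)]
childrens_view = sum(reports) / len(reports)

print(f"mothers' average:   {mothers_view:.1f}")    # 2.7
print(f"children's average: {childrens_view:.1f}")  # 4.9
```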

In the next generation the difference is larger still, with the grandchildren’s point of view suggesting that their grandmothers’ generation had 5.2 children on average, nearly double the real number of 2.7. 

Looking back at previous generations of our own families can therefore give an inflated view of how large family sizes were in the past, and can produce distorted impressions of families and family formation. 

Alice’s grandmother Margaret (centre), with her surviving children and her husband. Family photograph, courtesy of Colin Reid.

Family history and genealogy 

Demography takes a “descendant” viewpoint: the average family size is calculated from the mothers’ point of view – the 2.7 children in the example above, not the ascendant 5.2. By contrast, almost all genealogies are ascendant: a survivor works backwards, recording the generations in their main line of ascent. (Descendant genealogies select a person in the past and follow their kin forward in time – a future blog will discuss Chinese genealogies, which are usually descendant.) The extent to which a genealogist follows collateral kin in each generation, such as aunts and uncles, is variable, depending on the available records and enthusiasm.

Campop’s work on reconstructing the demography of English families allows us to calculate the ascendant bias in family size from 1550 to 1850 (i.e. the extent to which ascendant genealogies overstate family sizes). The simple formula that links the averages for the ascendant and descendant views has been known for over a century.  
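For readers who like formulas: an ascendant genealogy effectively samples mothers in proportion to their number of children, a case of what statisticians call size-biased sampling. The standard result is

\[
\bar{n}_{\text{ascendant}} \;=\; \mu + \frac{\sigma^{2}}{\mu}
\]

where \(\mu\) and \(\sigma^{2}\) are the mean and variance of children per woman in the descendant view. For the three sisters above (6, 0 and 2 children; \(\mu = 2.7\), \(\sigma^{2} \approx 6.2\)) the formula gives exactly 5 – close to the 4.9 reported by surviving children, the small gap arising because one of Margaret’s six children did not survive to report.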

To simplify the picture, we start by removing the effects of celibacy (women remaining unmarried) and mortality. Assume that every woman married, and that both she and her husband survived to at least her 50th birthday. The descendant average number of children over the period varied between about four and six children, but the ascendant view adds 1.5 to 2 extra children. This is like comparing the average number of children of Margaret and Moira (four) with the average from their children’s point of view (4.9).

Including women such as Kathleen in the example above, who did not marry or have children, increases this bias still further. Celibacy in the past among women surviving to age 50 is thought to have been about 10-15 percent. Adding these women with no descendants to the calculation raises the ascendant bias to about 2.5 children. Similar biases have been found for Basque villages 1800-1969, Brazil 1960-2000, France 1830-1896, the USA 1867-1955, and a variety of late 20th-century, high-fertility populations.

Genealogy showing the descendants of Adam and Eve (London, 1611). British Library C.35.l.13.(2).

So, women with descendants, who are more likely to appear in genealogies, are not typical of women in general. Their experience should not be used to characterise the experience of the overall population.

Nevertheless, these women with descendants did exist, and it is also worth considering how they managed to fit larger than average numbers of children into their child-bearing histories. 

The maximum reproductive span for a woman is 35 years (between the ages of 15 and 50). But women in the British past were aged about 25 when they married for the first time (see blog on marriage), and the typical age at last birth in a non-contracepting population of women surviving to age 50 is 41 years, reducing the average fertile period to 16 years.

Tony Wrigley and colleagues at Campop calculated that average inter-birth intervals were 2.5 years: typical of a population with long breast-feeding. Thus, women in an ascendant genealogy would need an extra 6.25 years of reproduction (the 2.5 extra children of the ascendant bias, multiplied by 2.5 years per birth). They must have married young, lived to 50, had short birth intervals (or multiple births), or all three.

Children born to Andrew and Janet Gray, great-great-grandparents of Agnes (Nan) in the photograph above. Janet’s young age at marriage, survival beyond age 50, and very short birth intervals enabled her to have 16 singleton births. Image courtesy of Colin Reid.

How do we know? 

This knowledge comes from ‘family reconstitution’: the reconstruction of families by linking the baptisms, marriages, and burials recorded in parish registers. The process starts with a marriage and locates the baptisms of bride and groom to establish their birth dates and ages at marriage. The births of their children are identified, enabling the age of the mother at each birth to be calculated. Finally, the deaths of husband and wife are located in the records, yielding ages at death.

The same process is undertaken for the marriages of each of the children of the original couple, making inter-generational comparisons possible. Campop created a number of family reconstitutions for a variety of communities across England. These have to be treated very carefully to yield accurate demographic measures, but they are our best source of information about the population of England between the mid-16th and mid-19th centuries. 

The bias in ascendant genealogies can be calculated by comparing the average number of children per woman using all women in the population (the descendant point of view) with the sibship sizes of those women who had children – in other words, by performing the same comparison as in the example in the first section of this blog.

Further reading 

E. A. Wrigley, R.S. Davies, J.E. Oeppen, and R.S. Schofield, English Population History from Family Reconstitution, 1580–1837 (Cambridge University Press, 1997).
