A particular challenge in investigating and prosecuting reported sexual offences is the absence of incriminating forensic evidence. This is especially so in non-recent cases reported weeks, months, or even decades after the alleged offences. Of course, there will likely be physical evidence if the sexual offence involved violence; and if the victim was a child or under the age of consent, DNA evidence could identify the culprit. But typically there is scant hard evidence to ‘nail’ an identified perpetrator. Without it, investigators and juries must rely on testimonial evidence (‘she said, he said’), circumstantial evidence (‘same place, same time’), and a sense of the honesty of the accounts given by the respective parties, drawing on knowledge and beliefs about what is normal or frequent behaviour in such situations. This is not the same as ironclad certainty, and mistakes can be made.
As in medical research, where findings can include false positives (a healthy subject is classified as having the disease) and false negatives (a subject who has the disease is classified as healthy), the lack of clear evidence to guide decisions at various stages of criminal justice means that an accused person can sometimes be regarded as guilty when they are not, or not guilty when they are. Sexual offence reports in particular are notorious for being dropped at various stages of the criminal justice process: by police services at the initial report stage; by the police or prosecution services after further investigation; by the prosecution at the commencement of, or during, court proceedings; on the order of the judge during the court proceedings; and by the judge after the jury has reached a ‘not guilty’ verdict. Viewed from the perspective of the criminal justice process’s aim of convicting offenders, these stages at which suspects are ‘let go’ are known as attrition points. It is plausible that the biggest unofficial attrition point is the decision by the victim not to report the offence at all, so common is sexual assault, from unwanted touching to rape (though that number may be falling in the age of #metoo).
*
As someone who has also been a victim of rape, and as an empathic person, I can appreciate the anger and resentment, from a victim and feminist perspective, when so many reported cases do not result in prosecution or end in acquittal, and also the reluctance to report offences given the prospect of being examined and then facing the court process. Such reservations are not helped when media attention is given to high-profile cases of accused or convicted people being exonerated because the complainant is found to have lied or to have mistakenly identified the wrong person. It is understandable that high-profile cases, where the accused is found not guilty or is found to have been wrongly convicted and imprisoned, result in reassurances that such ‘false allegations’ are rare, and in reassurance from police and prosecution services that they will always take reports of sexual offences seriously, with praise for the courage of victims who come forward.
There is another side to this, though. As a criminologist, I have spent the last 12 years researching the context and causes of wrongful allegations of sexual and child abuse, and their impact on those wrongly accused and their relatives. That work has made me aware of hundreds of cases in which individuals have been found to be falsely accused, and of more who have strong claims to innocence. In my own work I have used the term ‘false’ to include both knowingly false (fabricated) and unintentionally false (mistaken identification of the culprit or distorted memories) allegations. When such cases are dismissed as ‘extremely’ or ‘vanishingly’ rare, and when I am personally accused of being an apologist for rapists, it strikes me as disproportionate. Yes, I know that the statistical conclusions of the best-quality surveys do seem to support the mantra that false allegations of sexual assault are rare. Yet they are not so rare as to justify metaphors such as ‘hen’s teeth’ and ‘being struck by lightning twice’ that are repeatedly applied in media commentary.
That is why I was very interested to discover a quantitative study published earlier this year by two government researchers in Australia, titled True or false, or somewhere between? A review of the high-quality studies on the prevalence of false sexual assault reports, in which they analyse the methods and data reported in often-cited statistical surveys, undertaken in various countries, of the prevalence of false allegations. The authors, Tom Nankivell and John Papadimitriou, have expertise in statistical analysis and public policy, and more than three decades’ experience each as researchers and policy analysts with various Australian Government agencies.
In this systematic review of widely cited studies from across the world, Nankivell and Papadimitriou focus on seven surveys of false allegations which are regarded as methodologically rigorous and reliable in their findings. These are the same surveys that were selected by Ferguson and Malouff (2016) in their meta-analysis, which found the overall prevalence rate of confirmed false reports of sexual assault to be 5%. Each of these surveys focused on sexual assault, and their criteria for classifying a report as false followed the official case classification and counting rules used by police. Nankivell and Papadimitriou examine the descriptions of how the academic authors of these ‘quality studies’ collected and categorised their data and reached their conclusions about the percentage of reports that were false.
They identify a pattern in each of these surveys which means that their estimates of false allegations should really be considered ‘lower bound’ estimates only. The review authors point out that, at the stages at which police decide whether to take cases forward for investigation and then for prosecution, or whether to leave them on file or drop them – the attrition points – there may be false allegations that are not treated as such. They find that the ‘most commonly cited estimates are that “approximately 5%” or “between 2% and 10%” of reports are false.’ However, they observe that ‘the rules followed by the studies to determine whether to classify a report as “false” exclude many false and potentially false reports. Their estimates are premised on there being no false allegations among the many equivocal or ambiguous cases classified as having insufficient evidence, or where the alleged victim withdrew their complaint, or where the accused was tried but acquitted.’ Nankivell and Papadimitriou then review the attrition points at which potential false allegations were not investigated, applying ‘more realistic but still modest assumptions about the share of false reports in other categories, [and] show that the actual prevalence rate could be materially greater than the studies’ estimates.’
The case made by this review for estimating both upper and lower bounds in producing figures for the prevalence rate is, I think, particularly pertinent for understanding, and also challenging, claims that false allegations are very rare. The criteria that are used by police or researchers to categorise an allegation as false can be very limiting in terms of what counts as ‘false’. For example, as Nankivell and Papadimitriou note, Kelly, Lovett and Regan (2005) in their Home Office study used police internal counting rules which specified that ‘this category should be limited to cases where either there is a clear and credible admission by the complainants, or where there are strong evidential grounds’ (p.50). What is valuable in their review is that they look for stages (decision points) in the criminal justice process at which some possible and probable false allegations are being missed, and therefore not counted in the studies of false allegation prevalence. The pivotal point for arriving at more accurate estimates, as they explain, is that: ‘To gain a more realistic sense of the prevalence of false reports, it is necessary to explore the scope for false reports in the study’s sample over and above the study’s estimates’ (p.5).
In doing this, they make use of the ‘attrition points’ which Kelly et al. (2005) identified as stages at which true cases may have been dropped for a variety of reasons. Nankivell and Papadimitriou consider such stages also from the perspective of potential false allegations that may not have been acknowledged or recognised as such, as they explicate here:
‘For 2284 cases in their sample, Kelly and her colleagues traced progress through the criminal justice system and also classified the reasons for cases terminating when they did (table A). The study authors grouped these reasons into six “attrition points”. In the following sub-sections, we highlight the categories in these different attrition points that are most likely to harbour at least some false allegations, additional to those covered in the study’s lower bound estimate. The main candidates are:
• no evidence of assault and “uncertain” false reports
• insufficient evidence etc
• withdrawn complaints
• acquittals.’ (p.5)
Not all of the prevalence studies that Nankivell and Papadimitriou reviewed provided sufficient information for such a detailed breakdown, given their differing contexts, constraints and objectives. For the studies where such details of data categories were provided, however, they calculated upper bound estimates, basing these on allowing only very low proportions of the missing or dropped cases to be counted as false allegations.
The key question they raise is: ‘Can we pinpoint the prevalence rate or devise a meaningful range?’ They attempt to calculate this for studies where attrition figures were available. For example:
‘Kelly et al.’s (2005) prevalence rate estimate was based on the extreme assumption that none of the 2284 cases outside those labelled false under police counting rules were false.
The calculations in this paper show that, even with reasonably modest assumptions about the actual level of false allegations in other categories, the prevalence rate for the study’s sample would easily exceed 10% and could approach 15% — that is, 4 to 6 times higher than the widely quoted figure for the study of 2.5%.’ (p.9, my emphasis).
This is a striking difference. The upper bound calculations for some of the other studies are also worth noting.
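To make the arithmetic behind such recalculations concrete, here is a minimal sketch of the general approach, assuming entirely hypothetical category counts and assumed false shares; none of the figures below are taken from Kelly et al. (2005) or from Nankivell and Papadimitriou’s paper. Only the logic reflects the method described above: the published ‘lower bound’ counts only the cases that police classified as false, while an adjusted estimate adds modest assumed shares of the equivocal attrition categories.

```python
# Illustrative sketch: how a 'lower bound' prevalence estimate changes once
# modest shares of the equivocal attrition categories are assumed to be false.
# NOTE: all counts and assumed shares are hypothetical placeholders, not data
# from Kelly et al. (2005) or from Nankivell and Papadimitriou's review.

# Hypothetical case counts by outcome category (2,000 reports in total)
categories = {
    "confirmed_false":        50,  # meets police counting rules for 'false'
    "no_evidence_uncertain": 250,  # no evidence of assault / 'uncertain' false reports
    "insufficient_evidence": 600,  # dropped for insufficient evidence etc.
    "withdrawn":             500,  # complaint withdrawn
    "acquitted":             100,  # tried but acquitted
    "other_outcomes":        500,  # charged, convicted, or otherwise resolved
}

total = sum(categories.values())

# Lower bound: only cases already classified as false are counted.
lower_bound = categories["confirmed_false"] / total

# 'Modest' assumed shares of false reports within the equivocal categories.
assumed_false_share = {
    "no_evidence_uncertain": 0.30,
    "insufficient_evidence": 0.10,
    "withdrawn":             0.10,
    "acquitted":             0.15,
}

extra_false = sum(categories[name] * share
                  for name, share in assumed_false_share.items())

adjusted_estimate = (categories["confirmed_false"] + extra_false) / total

print(f"Lower bound estimate: {lower_bound:.1%}")
print(f"Adjusted estimate:    {adjusted_estimate:.1%}")
```

With these placeholder figures the lower bound comes out at 2.5% and the adjusted estimate at 12.5%, which illustrates how sensitive the headline rate is to the assumption that none of the equivocal or dropped cases are false.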
Nankivell and Papadimitriou’s sober conclusion is that ‘nothing we have found challenges the view that the majority of sexual assault reports are true: contrary to some historical beliefs around rape, there is no credible evidence that women routinely fabricate sexual assault claims or that a substantial share of reports are false’ (p.16). I particularly applaud their argument that: ‘it is an extreme and unjustifiable assumption to presume that no unproven accusation is false. And it is potentially misleading to promulgate prevalence estimates based on that assumption, as the high-quality studies effectively do, at least without properly communicating that assumption and its ramifications’ (p.17).
*
I have resisted reporting more of their conclusions and observations, which are indeed food for thought, because it is better for people to go to the report itself as the primary source. Given the authors’ track records as experienced quantitative researchers for government departments, the report by Nankivell and Papadimitriou warrants serious attention to its findings and far-reaching observations. While the issues of sexual abuse and false allegations generate much heated debate, these policy researchers bring an empirical approach and an unrhetorical tone to the discussion. There is no indication that they have an axe to grind either way, and their even-handed approach in analysing the surveys included sending early drafts of their paper to the authors of those surveys to invite comments and feedback. While they concur with the claim that the majority of sexual assault reports are true – surely a reasonable assumption, and one that still leaves scope for plenty to be untrue – an implication of their observations about calculating upper bound estimates is that a more open mind should be kept with regard to the truth or falsity of complaints that do not lead to charges being made, and prosecutions that do not lead to guilty verdicts.
Victims whose memories of being horribly abused are murky, perhaps because the abuse happened decades ago, or while they were intoxicated after a party that was supposed to be fun, might be put off reporting the offence when they know that they have little proof and might be seen as making a false allegation. It is often argued that the publicity given to false allegations makes it harder for victims to report offences against them, in case they are not believed. Others argue that the pendulum has swung too far in favour of complainants and that, especially since the #metoo movement, accusations are now easily made and believed largely regardless of their veracity. In the absence of definitive proof in individual cases, we turn to statistical studies to find out what is more likely based on past and current records. Yet surveys investigating false allegations have been few and far between, compared with the regularly collected statistics on reported sexual offences. The loose use of the lowest statistics, drawn from mostly dated prevalence studies, has worked to downgrade, and even blank out from policy documents, the possibility of false (fabricated or mistaken) allegations. Hence the value of this careful, systematic review and the attention it draws to the considerable scope for omissions from survey estimates of the prevalence of such false allegations.
Dr Ros Burnett is a Research Associate of the University of Oxford Centre for Criminology. She is the editor of Wrongful Allegations of Sexual and Child Abuse (Oxford University Press, 2016).