Today sees the publication of the latest figures from the Scottish Crime and Justice Survey (SCJS) – a national study that aims to overcome some of the limitations of the police-recorded crime statistics in Scotland. Crime surveys are now such a familiar part of the statistical landscape in the UK that it’s easy to forget that they are a relatively recent innovation. In fact, it’s only just over 30 years since the first British Crime Survey was carried out (in both Scotland and England & Wales), following pioneering work by SCPR (the forerunner to NatCen), led by the late Douglas Wood.
The principle of crime surveys is deceptively simple: by asking a representative sample of people directly about their experiences of crime and victimisation, it should be possible to shed light on the so-called ‘dark figure of crime’ – those incidents not reported to (or not recorded by) the police. In practice, though, crime surveys are methodologically challenging, and subject to their own silences and limitations.
The most obvious problems relate to respondent recall. Some people simply forget things that have happened to them, or misremember when those things happened – a particular problem in the context of annual crime rates. The ideal, of course, would be simply to ask about things that have happened in the very recent past (say, the last week); but the SCJS and almost all other national crime surveys have to use much longer recall periods (typically 12 months) in order to generate a large enough sample of incidents.
There is also a danger, of course, that people fail to recognise the things that have happened to them when faced with questions using the language of criminal justice (housebreaking, assault, etc.). To get round this, crime surveys generally use a series of ‘screener’ questions, worded in everyday language, to capture as many incidents as possible. Subtle changes in the wording of these can have a dramatic effect on people’s answers.
And, having captured people’s experiences through these screener questions, crime surveys then face the challenge of deciding how such incidents would have been recorded had they come to the attention of the police. This ‘offence coding’ element is especially problematic – not least because it assumes (almost certainly, wrongly) that the police themselves make consistent and logical decisions about how to record things and that such ‘decision trees’ can be codified, replicated and applied systematically to the information gleaned from the survey.
You can add to that complexity the fact that experience of victimisation is highly differentiated and closely related to availability for interview. Put crudely, those who spend a lot of time away from their home – and young men, in particular – are busy putting themselves in potentially risky situations and leaving their empty homes vulnerable to crime. So maximising response rates (or at least minimising non-response bias) is also critical.
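One standard response to this problem is weighting: if a high-victimisation group is under-represented among respondents, its answers are weighted up so the sample matches known population shares. The sketch below uses entirely invented group shares and victimisation rates to show how an unweighted estimate understates the true rate when the hard-to-reach group is also the most victimised.

```python
# Hypothetical sketch of post-stratification weighting to correct
# non-response bias. All figures are invented for illustration.

population_share = {"young_men": 0.20, "others": 0.80}   # known from census
respondent_share = {"young_men": 0.10, "others": 0.90}   # achieved sample

# Weight = population share / respondent share, per group
weights = {g: population_share[g] / respondent_share[g] for g in population_share}

# Invented victimisation rates within each group
victim_rate = {"young_men": 0.30, "others": 0.10}

unweighted = sum(respondent_share[g] * victim_rate[g] for g in victim_rate)
weighted = sum(respondent_share[g] * weights[g] * victim_rate[g] for g in victim_rate)

print(round(unweighted, 3))  # 0.12 - biased low
print(round(weighted, 3))    # 0.14 - matches the population-level rate
```

Weighting only corrects for the characteristics you can observe and adjust on, of course; if non-respondents differ in ways the weighting scheme doesn't capture, some bias remains.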
I haven’t even touched here on the question of those things that crime surveys are unable to measure – for example, crimes where there is no individual victim (e.g. vandalism to public spaces or environmental degradation), or where the victim is unaware of the crime in the first place (e.g. much fraud and white-collar crime).
For all these reasons, then, crime surveys and the statistics they generate are highly fragile, and – like the police recorded crime statistics they are intended to complement – need to be compiled carefully and interpreted critically. Even with the help of surveys like the SCJS, we’re likely to be pursuing that ‘dark figure’ for some time to come…