Recently, Nicholas Scurich (University of California, Irvine) and Richard S. John (University of Southern California) published a research paper entitled “The Dark Figure of Sexual Recidivism.” The premise of their paper rests on the persistent myth that sex crimes are extremely underreported. This myth is nothing new, but it is worrisome whenever such reports are published, since bad reports influence media perceptions of registered persons and can lead to bad public policy.
On my website is an outline highlighting a myriad of myths, and at the end of that outline are a number of guidelines that I devised to help the average reader become a mythbuster. (See http://www.oncefallen.com/SOMyths.html). Perhaps it would be beneficial to take the time to review these mythbusting guidelines using the Scurich and John paper as the example.
The first step of mythbusting is “Consider the Source.” That means asking a number of questions about the researchers themselves and how they present their findings. Is it a university study (usually best), a media report (middle reliability), or a victim advocacy report (least reliable)? Is the study peer reviewed? Are the results preliminary or final? What is the sample size? Are the authors using hard numbers, or relying on estimates, “Goldilocks numbers,” or lofty-sounding figures rather than straightforward answers? The answers to these questions help determine whether a particular report is based on sound science. The best studies are typically peer-reviewed independent research that uses the largest sample sizes possible, relies mostly on hard numbers and evidence, and heavily critiques its own findings.
First, let’s look at the authors. The most important thing to ask is whether there are possible conflicts of interest that could influence the researchers’ findings.
Richard S. John is associate professor of psychology and a research associate at the Center for Risk and Economic Analysis of Terrorism Events (CREATE) at the University of Southern California. His research focuses on normative and descriptive models of human judgment and decision making and methodological issues in application of decision and probabilistic risk analysis (PRA). He has consulted on a number of large projects involving expert elicitation, including analysis of nuclear power plant risks (NUREG 1150) and analysis of cost and schedule risk for tritium supply alternatives.
Nicholas Scurich, PhD, is “a tenured professor of Psychology and Criminology at the University of California, Irvine. In 2017 he joined TAG (“Threat Assessment Group”) as a consultant and lecturer in workplace misconduct mitigation. Dr. Scurich’s focus on misconduct risk assessment has included scientific studies of how to assess the risk of misconduct, how to deter dangerous behavior, how to make scientifically informed decisions about risky individuals, and how to communicate risk information. He frequently consults for the Department of Homeland Security on issues related to risk assessment and security. For example, he has worked with the TSA to help develop novel approaches to allocating security resources and screening of airline passengers based on risk.”
These researchers’ bios are important because both Scurich and John work with agencies that have a vested interest in the field of risk assessment. That means the possibility of bias cannot be ruled out.
The Scurich and John article was published at SSRN.com, an online research paper sharing service. The use of SSRN to distribute scholarly works is not necessarily a bad thing (some very good mythbusting reports have been posted at SSRN), but it is important to note SSRN’s stated goal of “rapid worldwide dissemination of research”: “Each of SSRN’s networks encourages the early distribution of research results by reviewing and distributing submitted abstracts and full text papers from scholars around the world.” That means this article may merely be a preliminary result, posted with little (or even no) peer review.
Before diving into the full article, read the abstract. Here is where you will find a summary of the report. Scurich and John write, “Empirical studies of sexual offender recidivism have proliferated in recent decades. Virtually all of the studies define recidivism as a new legal charge or conviction for a sexual crime, and these studies tend to find recidivism rates on the order of 5-15% after 5 years and 10-25% after 10+ years. It is uncontroversial that such a definition of recidivism underestimates the true rate of sexual recidivism because most sexual crime is not reported to legal authorities, the so-called “dark figure of crime.” To estimate the magnitude of the dark figure of sexual recidivism, this paper uses a probabilistic simulation approach in conjunction with a.) victim self-report survey data about the rate of reporting sexual crime to legal authorities, b.) offender self-report data about the number of victims per offender, and c.) different assumptions about the chances of being convicted of a new sexual offense once it is reported. Under any configuration of assumptions, the dark figure is substantial, and as a consequence, the disparity between recidivism defined as a new legal charge or conviction for a sex crime and recidivism defined as actually committing a new sexual crime is large. These findings call into question the utility of recidivism studies that rely exclusively on official crime statistics to define sexual recidivism, and highlight the need for additional, long-term studies that use a variety of different measures to assess whether or not sexual recidivism has occurred.”
The abstract lays out the premise of the research: Scurich and John state a proposition that they believe current recidivism studies underestimate sex offense recidivism, and they propose to determine a true rate of recidivism by utilizing victim self-reports, offender self-reports, and assumptions about how many recidivists are caught and reconvicted. Already, red flags should be raised by the two key terms “self-report data” and “assumptions.” However, we must not let our own biases take control.
It is time to take the second important step, which is the analysis of the research paper itself. The first section introduces the topic to the reader and lays the groundwork for the need for the research. The article opens with a discussion of beliefs about recidivism, then discusses a handful of major studies such as the Harris and Morton-Bourgon and the US Department of Justice studies. It should be noted that the Harris studies created an estimate based on a multi-national sample. That is important because multinational studies report higher rates than the typical American recidivism study. The researchers failed to consider that nations vary greatly in determining what constitutes a sex crime. For example, the Age of Consent in Canada was only 14 at the time of the Harris studies, while the Age of Consent in America is between 16 and 18. Thus, multinational studies cannot accurately reflect recidivism rates for America.
Scurich and John proclaim, “We take no position on the propriety of sexual offender legislation. However, we do question challenges to that legislation to the extent they are based on current empirical assertions that sexual offender recidivism is ‘low.’ In doing so, we first probe deeply the very concept of sexual recidivism.” It is indeed true that there is no universal standard of what constitutes “recidivism.” This unfortunately means that anyone can make an assumption that can neither be definitively proven true nor false (which makes estimating recidivism subject to appeals to ignorance). Scurich and John waste little time taking advantage of this unknown factor.
On page 5, Scurich and John claim it is “uncontroversial that longer follow-up periods will result in more sexual offenses and thus a higher rate of sexual recidivism.” This is when the duo uses controversial studies to justify their claims. In addition to relying on the Harris and Hanson multinational study, the duo invoke the extremely controversial Prentky study.
The 1997 Prentky study made the controversial claim that after 25 years, recidivism is 52% for child molesters and 39% for rapists. Prentky’s findings were cited in amici briefs filed in favor of the registry during the Smith v. Doe hearings. However, there were a number of issues with this study. First, Prentky himself offered a warning against applying his study to determine long-term recidivism: “We would like to conclude with two important caveats. The obvious, marked heterogeneity of sexual offenders precludes automatic generalization of the rates reported here to other samples.” Notably, this warning appeared not in the original study but in a later study in the series. Second, the study involved recidivists who were civilly committed between 1959 and 1985. “It was also the case that Prentky’s sample of molesters were highly likely to recidivate because they were convicted of many more sex offenses (N=4.6) than a more typical sample of incarcerated sex offenders, about 90% of whom have been convicted of only one sex offense.” One thing that critics have failed to point out is that the 52% number is not the actual re-offense rate, but an estimate called the survival or failure rate, “i.e., the estimated probability that child molesters would ‘survive’ in the community without being charged, convicted, or imprisoned for a sexual offense over the 25-year study period.” Even the SMART Office recognizes the Prentky study’s flaws: “Prentky and colleagues acknowledged that generalizing the recidivism rates found in the study to other samples of sex offenders was problematic due to the ‘marked heterogeneity of sex offenders,’ but they also suggested that the ‘crucial point to be gleaned from this study is the potential variability of the rates’ and not the specific rates themselves.”
In short, the Prentky Study is merely an estimated rate of success of a group of recidivists based upon a complex but convoluted formula. The actual rates of failure by the hard numbers don’t support the 52% estimate for the small number of repeat offenders, much less people labeled “sex offenders” as a collective unit. It is thus useless for determining recidivism rates.
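To see how a survival/failure estimate differs from a count of actual re-offenses, consider a minimal sketch. The numbers here are purely hypothetical illustrations (not Prentky’s actual model): the point is only that a modest assumed annual hazard compounds into a large cumulative “failure” figure over a long follow-up window.

```python
# Illustration with hypothetical numbers (not Prentky's actual model):
# a modest assumed annual hazard compounds into a large cumulative
# "failure" estimate over a long follow-up window.
annual_hazard = 0.03   # assumed 3% chance of a new charge in any given year
years = 25

survival = (1 - annual_hazard) ** years   # probability of no new charge in 25 years
failure = 1 - survival                    # the cumulative "failure" estimate

print(f"25-year failure estimate: {failure:.1%}")  # about 53%
```

In other words, an assumed 3% yearly hazard produces a 25-year “failure rate” above 50% by compounding alone, which is why a headline cumulative estimate can look far scarier than the underlying hard numbers.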
Scurich and John cited a far more controversial study, the 2004 Langevin study, relying in particular on data from a single civil commitment center as well as its 25-year follow-up period. However, Langevin’s study purged people without rearrests after 15 years, and it expanded the definition of recidivist to include crimes committed before subjects were included in the study. In an earlier report on the same data set, Langevin and Fedoroff (2000) reported that they were able to obtain follow-up criminal history records for 378 (54%) of the first 700 cases assessed at Dr. Langevin’s clinic between 1969 and 1974. In the 2004 report, the offenders lacking criminal history records in 1994 and 1999 were eliminated from the sample. Such a decision would retain recidivists and eliminate non-recidivists. More than half the individuals in the sample were already recidivists by Langevin’s definition at the time of their evaluations, thus ensuring at least a 50 percent recidivism rate. Thus, Langevin’s studies were fatally flawed.
It is laughable to read the footnote justifying inclusion of the Langevin study in this report: “The Langevin et al (2004) study was criticized for, among other things, having a “biased sample” since it involved individuals referred to a psychiatric clinic for treatment and thus were potentially unrepresentative of the ‘average’ sexual offender (Webster, Gartner, & Doob, 2005). However, that criticism was countered cogently by Rice and Harris (2005) who noted that Langevin et al (2004) made no claims about specific rates of recidivism but instead offered an illustration of the “uncontroversial” point that “a large portion of violent and sexual offenses go undetected by the criminal justice system (p. 97; see also Langevin, Curnoe, & Fedoroff, 2006).” We cite the Langevin study for this purpose.”
Scurich and John also rely heavily on self-reports. The “National Crime Victimization Survey” (NCVS) is the largest study that attempts to determine the level of crimes not reported to the police. The NCVS defines sexual assault as “A wide range of victimizations, separate from rape or attempted rape… Sexual assault also includes verbal threats.” Attempted rape also includes “verbal threats of rape.” This definition covers a wide variety of actions that may not actually be criminal. The reader must consider the current social climate, in which looking at a woman too long can be construed as “stare rape,” or lying to a girlfriend about matters of the heart to engage in intimacy is considered “rape” in the eyes of many women. Since these self-reported incidents are never reported to, and thus never investigated by, the police, there is no way to determine whether these “attempts” would warrant a criminal investigation. Harris and Hanson also noted an earlier study finding that nearly 3 in 5 respondents who failed to report an incident did not feel the incident was important enough to report, forcing the researchers to add, “Consequently, readers may wonder what counts as a sexual assault.” The NCVS acknowledges its own limitations, as noted in the 2010 NCVS: “The estimates of rape/sexual assault are based on a small number of cases reported to the survey. Therefore, small absolute changes and fluctuations in the rates of victimization can result in large year-to-year percentage change estimates. For 2010, the estimate of rape or sexual assault is based on 57 unweighted cases compared to 36 unweighted cases in 2009.” That is 57 unweighted cases out of a sample of more than 73,000 people: “In 2010, 40,974 households and 73,283 individuals age 12 and older were interviewed for the NCVS. Each household was interviewed twice during the year. The response rate was 92.3% of households and 87.5% of eligible individuals.” Still, the survey strongly suggests the extent of under-reporting may itself be overstated.
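The arithmetic behind that point can be checked directly from the NCVS figures quoted above:

```python
# Back-of-envelope check of the 2010 NCVS figures quoted above.
unweighted_cases_2010 = 57     # rape/sexual assault cases captured by the survey
individuals_surveyed = 73283   # individuals age 12 and older interviewed in 2010

share = unweighted_cases_2010 / individuals_surveyed
print(f"{share:.3%} of surveyed individuals reported a rape/sexual assault")
# well under one tenth of one percent of the sample
```

That tiny raw count is exactly why the NCVS itself warns that small fluctuations can produce large year-to-year percentage swings.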
Scurich and John invoke the research of Sean Ahlmeyer and other reports that utilize the polygraph to manipulate results. Many programs utilize the polygraph not because they believe it actually detects lies (it does not), but because people believe polygraphs work, making them a useful tool for intimidation. Studies utilizing polygraphs are generally conducted in prisons or civil commitment centers. The oft-derided Butner study, released in 2007, utilized polygraphs to make the bold claim that most CP viewers have undetected crimes against children. Inmates of the program reported that “the program’s emphasis on confession led them to ‘remember’ crimes that never happened. They disavowed disclosures that were later used as evidence against them.” Proponents of the Butner study denied that inmates were shown favoritism for participating in the program, but the inmates stated the opposite: “For sex offenders, who occupy the bottom of the prison power hierarchy, the Butner unit was a safe haven in the federal prison system. One child-pornography convict, Markis Revland, told the judge at his civil-commitment hearing that when prisoners discover a sex offender among them ‘they’ll go to great lengths to stab that person.’ He requested treatment at Butner after being raped at knifepoint in a Kansas penitentiary. He was encouraged by the psychology staff at Butner to ‘get it all out,’ and came up with a hundred and forty-nine victims. Like other patients, he kept a ‘cheat sheet’ in his cell so that he could remember his victims’ ages and the dates that he’d abused them. There was no evidence for the crimes, thirty-four of which would have occurred during a time when Revland was incarcerated.
At his hearing, the judge concluded that his crimes were the ‘product of his imagination, not actual events.’ After having been held in prison nearly five years beyond the expiration of his criminal sentence, Revland was allowed to go home.” Polygraph-driven studies are useless for accurate recidivism studies.
Scurich and John also invoke the 1986 Gene Abel study on paraphilias. A paraphilia is any sexual interest considered deviant by societal norms, which should not be confused with pedophilia; it seems Scurich and John failed to notice the difference. Gene Abel’s “Self-Reported Sex Crimes of Non-Incarcerated Paraphiliacs” (1986) had a number of problems: few participants were truly voluntary (which could compel false admissions), the study included non-criminal paraphilias such as consensual homosexual relations, and Abel listed only an estimated number of acts and victims over a lifetime. Abel states the study suggested that among paraphiliacs, “through coercion or varying degrees of compliance, repeated acts are carried out with the same victims or partners.”
There is one interesting point about the Abel study that actually debunks the Scurich and John report: Scurich and John use reports studying the subclass of particularly high-risk offenders to justify their claims of high levels of recidivism, while the Abel study illustrates exactly why that is problematic. Abel provides a mean and a median estimate of acts and number of victims. The mean is the sum of all the numbers in a set divided by the count of numbers in the set. The median is the middle point of a number set, in which half the numbers fall above it and half below. Scurich and John cited the highest number found in the Abel study, the mean number of estimated lifetime acts by those with male victims, listed as 281.7, but they fail to mention the median, which is 10.1, obviously far lower than the scarier number. You don’t have to be a statistician to understand that if half of those in the Abel study committed FEWER than 10.1 paraphilic acts while the average (mean) number of acts was 281.7, then there must be a small group of people who grossly inflated the average. These small subgroups are the subjects of the Langevin, Prentky, and Ahlmeyer studies, the studies that give grossly inflated estimates of recidivism.
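The mean-versus-median effect is easy to demonstrate with a toy example. The numbers below are hypothetical, not Abel’s data; they simply show how a handful of extreme outliers drags the mean far above the median:

```python
# Toy illustration (hypothetical numbers, not Abel's data): a small
# subgroup of extreme outliers inflates the mean far above the median.
from statistics import mean, median

# 95 people with 5 acts each, plus 5 outliers with 5,000 acts each
acts = [5] * 95 + [5000] * 5

print("median:", median(acts))          # 5      -- the "typical" subject
print("mean:  ", round(mean(acts), 2))  # 254.75 -- dragged up by the outliers
```

Here the median tells you what the typical person did, while the mean mostly reflects the five extreme cases, which is precisely the pattern seen when a 281.7 mean sits alongside a 10.1 median.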
Scurich and John used studies that relied on estimates to come up with a number based upon their own estimates. The duo use advanced statistical models such as the Poisson distribution, but at the core is an estimate based upon what they personally feel the number should be: a high one. There is no discussion of the probability of false reporting of sex crimes, or of the vast discrepancy between rearrest and reconviction rates. But perhaps most importantly, this paper suffers from a fatal design flaw in applying estimates of underreporting in general to underreporting by registered persons. There is no way to know how many of the offenses claimed in the NCVS were committed by registered persons. The best we can do is look at the number of overall arrests and see how many were arrests of registrants. The 2008 study by Sandler, Freeman, & Socia, “Does a Watched Pot Boil? A Time-Series Analysis of New York State’s Sex Offender Registration and Notification Law,” found 95.9% of rape arrests and 94.1% of child molestation arrests were of first-time offenders, not registered persons. This is a huge oversight in the Scurich and John report, as it leads the uninitiated to assume every unreported incident is the work of those already convicted of a sex offense.
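To see why a simulation of this kind cannot produce more certainty than its inputs, here is a minimal sketch of the general approach the paper describes. Every parameter value below is hypothetical, chosen only to show that the output moves in lockstep with the assumed reporting rate:

```python
# Minimal sketch of a probabilistic simulation of "detected" recidivism.
# All parameter values are hypothetical; the point is that the result is
# driven entirely by the input assumptions.
import random

random.seed(0)

def detected_share(report_rate, conviction_rate, trials=100_000):
    """Share of simulated reoffenders who pick up a new conviction,
    given an assumed reporting rate and conviction-if-reported rate."""
    detected = sum(
        1 for _ in range(trials)
        if random.random() < report_rate and random.random() < conviction_rate
    )
    return detected / trials

# Halving the assumed reporting rate roughly halves the detected share,
# so the implied "dark figure" doubles -- purely by assumption.
print(detected_share(0.30, 0.50))
print(detected_share(0.15, 0.50))
```

Whatever “dark figure” such a model reports is baked into the assumed reporting and conviction rates before the simulation ever runs; the simulation only dresses the assumption in statistics.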
Mythbusting tactic number 3 is to watch out for deflection tactics, and it is in the discussion section where this tactic can be found in detail. The core premise of the entire Scurich and John research paper is an appeal to ignorance. Scurich and John state, “We do not endorse any specific sexual recidivism rate as the ‘correct’ one…there is no single, universal recidivism rate that describes all varieties of sexual offenders.” If this statement is true, then this research paper cannot make any actual claims about sex crime re-offense at all. Yet, in the very next sentence, Scurich and John do just that, proclaiming, “By parity of reason, sweeping proclamations that ‘…only a minority of sex offenders recidivate’ (Calkins et al., 2014, p. 449) are inapposite.” By parity of reason, the same sweeping proclamation that “the majority of sex crimes are vastly unreported” is equally inapposite.
This has been a lengthy analysis, but to simplify all we have discussed: this report is a faulty estimate of underreporting that intentionally used controversial, flawed studies to justify a faulty assumption that actual re-offense rates are significantly higher than current recidivism studies show. A thorough analysis like this is possible for laypersons if you use the proper resources. As part of mythbusting tactic number 3, I advise using a proper research website to help debunk various myths. If you are a frequent user of OnceFallen.com, then some of the statements used to debunk this report should look very familiar. Yes, I copy-pasted quite a few statements from my website into this report, and my site is available for people to use in a similar manner. The SOSEN forums are also a great repository for mythbusters, so if you have not signed up for the forums, you should do so today. If you are interested in becoming a mythbuster, it is important to remember the following guidelines, which I discussed at the 2013 NARSOL Conference:
Consider the source: Is the source peer reviewed? Where is the report coming from, and where is it published? (University/Independent research > Media > Victim Advocates) Are these hard numbers or estimates? Watch out for “round numbers,” “Goldilocks numbers,” and small sample sizes.
Read the Actual Source: Don’t rely on media reports. What is the premise? How is the study devised? What studies are they citing? What is their method of research? Are they relying on hard numbers or estimates as the basis of their experiments?
Arm yourself and watch out for common deflection tactics: Watch out for common logical fallacies, counter fallacies with sound reasoning and facts, cite authentic sources, and compare the research paper with similar research reports and resource sites.
Don’t rely on stats alone to prove a point: Sometimes, anecdotes are better fought with anecdotes; emotional stories can help illustrate the dangers of bad public policy just as they helped create such policies. If you have your own story to tell, use it!
“Know Your Enemy”: Should you find yourself accepting a media interview, writing a critical report, or participating in a head-to-head debate, know who—or what—you are facing. Research the TV show, panelist or author of the research paper.
“Accentuate the positive, eliminate the negative”: Semantics matter to many people. How we word our arguments is important. Instead of saying “Sex offenders have a 1% recidivism rate,” say “Over 99% of registrants never reoffend.” The larger number is in OUR favor. Know your sources, because you’ll be called out on them. Better yet, turn the tables on your opponents by asking them to cite their sources.
By Derek W Logue of OnceFallen.com