LAST WEEK, the Washington Post revealed that 26 of the 28 examiners in the FBI Laboratory’s microscopic hair comparison unit “overstated forensic matches in a way that favored prosecutors in more than 95 percent” of the 268 trials reviewed, in cases dating back to 1972. Among those cases are 14 in which the defendant has since been executed or has died in prison.
The hair analysis review, the largest post-conviction review of questionable forensic evidence ever undertaken by the FBI, has been ongoing since 2012. The review is a joint effort by the FBI, the Innocence Project, and the National Association of Criminal Defense Lawyers. The preliminary results announced last week represent just a small fraction of the nearly 3,000 criminal cases in which FBI hair examiners may have provided analysis. Of the 329 DNA exonerations to date, 74 involved flawed hair-evidence analysis.
While these revelations are certainly disturbing — and the implications alarming — the reality is that they represent the tip of the iceberg when it comes to flawed forensics.
In a landmark 2009 report, the National Academy of Sciences concluded that, aside from DNA, there was little, if any, meaningful scientific underpinning to many of the forensic disciplines. “With the exception of nuclear DNA analysis … no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source,” reads the report.
There is one thing that all troubling forensic techniques have in common: They’re all based on the idea that patterns, or impressions, are unique and can be matched to the thing, or person, who made them. But the validity of this premise has not been subjected to rigorous scientific inquiry. “The forensic science community has had little opportunity to pursue or become proficient in the research that is needed to support what it does,” the NAS report said.
Nonetheless, courts routinely allow forensic practitioners to testify before jurors, anointing them “experts” in these pattern-matching fields (together dubbed forensic “sciences,” despite the lack of evidence supporting that label) based only on their individual, practical experience. These witnesses, largely presented as learned and unbiased arbiters of truth, can hold great sway with jurors, who often expect real life to mimic the television crime lab or police procedural.
But that is not the case, as the first results from the FBI hair evidence review clearly show. And given the conclusions of the NAS report, future results are not likely to improve. What’s more, if other pattern-matching disciplines were subjected to the same scrutiny as hair analysis, there is no reason to think the results would be any better. For some disciplines the results could even be worse. Consider the examples below:
Indeed, some of the harshest criticism in the NAS report focuses on bite-mark evidence; the report concludes that the discipline has no scientific underpinning. In a recent four-part series on bite-mark analysis, the Washington Post’s Radley Balko described how forensic odontologists (dentists who profess expertise in bite-mark analysis, and who are qualified as such by the American Board of Forensic Odontology) not only reject the NAS’s conclusion, but actively attack anyone who dares to criticize the field. Two examples: In 2013, ABFO leadership orchestrated an aggressive, and ultimately unsuccessful, plan to expel their own colleague, Dr. Michael Bowers, from membership in the American Academy of Forensic Sciences, which would have hamstrung Bowers’s ability to testify against the practice in court. His crime: being a vocal critic of bite-mark “science.” In 2014, speaking at an ABFO dinner, Manhattan prosecutor Melissa Mourges, a strident supporter of bite-mark evidence, not only derided the work of Mary Bush, a researcher whose studies have challenged the scientific basis of bite-mark matching, but also peppered her remarks with petty insults about Bush’s physical appearance.
Of course, as it is with hair analysis — and, really, any of the questionable forensic disciplines critiqued by the NAS — the utter lack of a scientific foundation has done nothing to keep bite-mark evidence out of the courtroom. To date, DNA has exonerated 24 individuals sent to prison on bite-mark evidence.
If only it were that easy.
While there is some actual science involved in bloodstain-pattern analysis — knowledge of the physics of fluids is helpful, as is an understanding of the pathology of wounds — the sheer number of variables involved in the creation of any given bloodstain makes reaching any definitive conclusion about the circumstances of its origin difficult at best. “The uncertainties associated with bloodstain-pattern analysis are enormous,” the NAS report concluded.
Yet for defendants, as with other forensic disciplines, the conclusions of a bloodstain “expert” can mean the difference between living free and living behind bars. The NAS report warns that while science supports “some aspects” of bloodstain-pattern analysis (whether blood “spattered quickly or slowly,” for example), some experts “extrapolate far beyond what can be concluded.” This risk was powerfully demonstrated in the bizarre case of Warren Horinek, a former Fort Worth, Texas, police officer who, based solely on the conclusions of a blood-pattern expert, was convicted and sentenced to 30 years in prison for the 1995 murder of his wife, a death that the police, the medical examiner, and the prosecutor all concluded was actually a suicide.
Horinek remains in prison.
Shoe-print and tire-track evidence presents several problems, not least of which is that while an impression left at a crime scene remains static, fixed in time, the wear on shoes and tires is continuous; unless a shoe or tire can be matched to a crime scene quickly, the potential probative value of that evidence may be irretrievably lost. More concerning still, no science demonstrates that any particular wear marks are actually unique, nor are there standards for how many matching characteristics it takes to declare a match between object and evidence. There is “no defined threshold that must be surpassed, nor are there any studies that associate the number of matching characteristics with the probability that the impressions were made by a common source,” reads the NAS report. “Experts in impression evidence will argue that they accumulate a sense of those probabilities through experience, which may be true. However it is difficult to avoid biases in experience-based judgments, especially in the absence of a feedback mechanism to correct an erroneous judgment.”
Indeed, spurious shoe-print evidence offered by an FBI examiner helped to send Charles Irvin Fain to death row for the 1982 kidnapping, rape and murder of a 9-year-old girl in Idaho. According to the examiner, wear on Fain’s shoes matched wear patterns in shoe prints connected to the crime — and those wear patterns, the expert concluded, were created by a person with a particular gait. The perpetrator would “have to have the same characteristic walk as the individual who owned those shoes,” the expert testified.
DNA testing ultimately led to Fain’s release in 2001, after he had spent 18 years on death row.
Importantly, fingerprints collected from crime scenes are often only partial, distorted, smudged, or generally “noisy,” as one group of investigators, seeking to formulate error rates for fingerprint examination, wrote last year. And that is where problems can arise: Consider the case of Brandon Mayfield, the Oregon lawyer who was falsely accused of participating in the 2004 Madrid train bombings based on a fingerprint collected from a bag containing detonation devices. The FBI later admitted it had bungled the print match.
Fortunately, there are efforts underway within the discipline’s community of experts to validate forensic fingerprint examination. Jennifer Mnookin, a UCLA law professor and a lead investigator into fingerprint error rates, says that leaders in the field have begun to embrace the “research culture” the discipline is taking on. “At this point it’s not that the work is done,” she says. “It isn’t. But compared to bite marks … to handwriting [analysis], there is now a growing body of research looking at these questions [of validity and reliability] in a way that didn’t exist 10 to 15 years ago.”
Whether all of the state cases will ever be identified, let alone reviewed, remains to be seen.
For Timothy Bridges, the stakes couldn’t be much higher. Bridges was convicted and sentenced to life in prison for the beating and rape of an 83-year-old woman in Charlotte, North Carolina, in the spring of 1989. The victim, who died of unrelated causes before Bridges’ trial, gave varying descriptions of her attacker and denied that she was raped. Ultimately, Bridges, who had the wavy shoulder-length hair the victim once attributed to her attacker, was charged with the crime. There was no DNA to connect Bridges to the crime, and he was not a match for a bloody palm print found at the scene (a print that has never been matched to anyone). But an FBI-trained examiner testified that two hairs collected at the scene were not only “likely” Bridges’, but also that there was a very low chance they could belong to anyone else: The “likelihood of two Caucasian individuals having indistinguishable head hair is very low,” expert Elinos Whitlock testified. That is precisely the sort of scientifically unsupported language found in the faulty cases identified in the current FBI review.
Bridges appealed his conviction, arguing in part that there was no scientific basis to Whitlock’s testimony. In 1992, the state appeals court disagreed: “We find no reversible error,” the court ruled, concluding that testimony by a “properly qualified witness on hair identification” was admissible.
Bridges is currently seeking a new trial and the state is reportedly reviewing the matter.