Researchers blow whistle on forensic science method

Like fingerprints, a firearm’s discarded shell casings have unique markings. This allows forensic experts to compare casings from a crime scene with those from a suspect’s gun. Finding and reporting a mismatch can help free the innocent, just as a match can incriminate the guilty.

But a new study from Iowa State University researchers reveals mismatches are more likely than matches to be reported as “inconclusive” in cartridge-case comparisons.

“Firearms experts are failing to report evidence that’s favorable to the defense, and it has to be addressed and corrected. This is a terrible injustice to innocent people who are counting on expert examiners to issue a report showing that their gun was not involved but instead are left defenseless by a report that says the result was inconclusive,” says Gary Wells, an internationally recognized pioneer and scholar in eyewitness memory research.

The Distinguished Professor Emeritus co-authored the paper with Andrew Smith, associate professor of quantitative psychology. Smith studies memory, judgment and decision-making and is affiliated with both the Cognitive Psychology Program and the Psychology and Law Research group at Iowa State.

The two researchers pulled a dataset from a previously published experiment involving 228 firearms examiners and 1,811 cartridge-case comparisons. Overall, the participants were highly accurate at determining whether two casings had been fired from the same firearm. But when Smith and Wells applied a well-established mathematical model to the data, they found that 32% of actual mismatch trials were reported as inconclusive, compared to 1% of actual match trials.

“If the 16% of inconclusive reports lined up more evenly across actual matches and non-matches, we could chalk it up to human error. But the asymmetry, combined with the near-perfect performance of examiners, indicated something else was going on. They almost certainly knew that most of the cases they called inconclusive were actual mismatches,” says Smith.
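For a rough sense of how those figures fit together, the short Python sketch below re-derives the overall inconclusive rate from the two per-condition rates. It assumes the 1,811 comparisons split roughly evenly between same-source (match) and different-source (mismatch) pairs; that split is not stated in the article and may differ in the underlying study.

# Back-of-the-envelope check of the reported percentages (Python).
# Assumption (not in the article): roughly half of the 1,811 comparisons
# are different-source (mismatch) pairs and half are same-source (match) pairs.
total_comparisons = 1811
mismatch_trials = total_comparisons // 2            # assumed ~50% different-source pairs
match_trials = total_comparisons - mismatch_trials  # remaining same-source pairs

inconclusive_on_mismatch = 0.32 * mismatch_trials   # 32% of mismatch trials called inconclusive
inconclusive_on_match = 0.01 * match_trials         # 1% of match trials called inconclusive

overall_rate = (inconclusive_on_mismatch + inconclusive_on_match) / total_comparisons
print(f"Implied overall inconclusive rate: {overall_rate:.1%}")  # about 16.5%, in line with the quoted 16%

Under that assumed split, nearly all of the roughly 16% of inconclusive reports would fall on trials that were in fact mismatches, which is the asymmetry Smith describes.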

Asking the wrong question

The researchers say a flawed response scale could help explain the dissociation between what examiners know and what they report.

Currently, the Association of Firearm and Tool Mark Examiners’ Conclusion Scale asks forensic firearms experts whether the crime-scene casings and casings from the suspect’s gun are from the same source. Smith and Wells say the problem with the “source” question is that it’s possible for a mismatch to be attributable to an altered firearm or degraded evidence.

Given these alternative explanations, Smith and Wells say, some examiners might take the position that it is never appropriate to call something a mismatch and instead default to reporting the result as inconclusive.

“Instead of asking examiners to make source determinations, examiners should simply be asked if the shell casings from the suspect’s gun match the casings found at the crime scene. Asking if the casings match or not and to what degree could provide more transparency,” says Smith.

Questions about alterations and degradation could be asked separately, Smith adds.

Wells emphasizes that until the response scale is fixed, defense lawyers should cross-examine forensic firearms experts who claim inconclusive results. They need to “show their work,” he says. Wells also recommends getting a second opinion if the cartridge-case comparison report comes back as inconclusive.

Bias in the lab

The researchers say another possible explanation for calling a result inconclusive when it’s actually a mismatch is “adversarial allegiance bias.”

“Most forensic firearm examiners and their labs are retained by the prosecution or police departments,” says Smith. “Some examiners might render reports that are inconclusive despite the mismatch because they don’t want to hurt the side that’s essentially their employer.”

Smith and Wells say this type of bias can also occur at the lab level. They point to survey data showing some labs have policies that do not allow examiners to report mismatches.

“It’s hard to get rid of bias, but fixing the response scale would go a long way in solving the problem,” says Wells. “In the meantime, there are likely past cases that need to be relitigated.”

The researchers underscore that forensic science needs to be proficient not just at incriminating the guilty but also at clearing the innocent of suspicion. Minimizing bias and improving transparency in cartridge-case comparisons will help create a fairer and more efficient criminal justice system.