AI and the law: when truth becomes negotiable
A recent Washington Post article revealed something that should alarm anyone who believes in the rule of law: judges have been caught incorporating AI-generated misinformation into their court orders. Fake quotes, fabricated case citations, and AI-hallucinated legal precedents are all making their way into the official record.
This isn't an isolated problem. It's part of a disturbing pattern across institutions. Federal agencies have released misleading AI-doctored videos. The White House has been caught using manipulated content on an almost daily basis. AI-generated falsehoods spread online largely unchecked. The common thread? People are using AI-generated content that sounds plausible without doing the work to verify it.
We've entered an era where truth has become negotiable. People, including policymakers, have adopted a "good enough" standard: as long as the content serves their purposes, accuracy is optional. Whether that's depicting cities as more dangerous than they actually are or supporting a legal argument, they believe the ends justify the means.
It's no longer about what's verifiable; it's about what's convincing, even when the means involve fabrication.
This is especially corrosive to the rule of law, which depends on objective facts. Lady Justice is supposed to be blindfolded and indifferent to power, wealth, and status: her blindfold represents impartiality, her commitment to judging cases based on facts and evidence rather than influence.
But now? She's been fitted with Meta Quest goggles, seeing whatever reality those with the most resources and least scruples choose to show her. The fact that she's armed with a sword (or, in the case of our administration, an entire army) is extremely disturbing.
When judges unknowingly cite fabricated cases, when agencies release doctored videos, when institutions treat AI output as reliable without verification, they're not just making isolated errors. They're normalizing a culture where verification is treated as optional and plausibility trumps truth.
The asymmetry problem
Here's what makes this so dangerous: the technology makes generating convincing falsehoods frictionless, while fact-checking remains slow, effortful, and expensive. That asymmetry tilts the playing field toward those willing to cut corners.
In the judicial system, the problem compounds. A court order containing AI hallucinations doesn't just affect one case: it enters the legal record. Other lawyers cite it. It becomes precedent. Fiction gets laundered into legitimacy through the very mechanisms designed to establish truth. This isn't new: pharmaceutical lobbyists and policymakers have long repeated lies in the hope that they harden into accepted truth. But unchecked, this imbalance of power is widening the divide.
We're living through a dangerous gap where technology has outpaced our institutional safeguards. We need judges who understand they can't trust AI output without verification. We need lawyers who can spot synthetic evidence. We need professional standards that treat AI-assisted work with appropriate skepticism.
But right now, we don't have those things. And every day that gap persists, we move further from a shared factual foundation toward a world where truth is whatever the most sophisticated technology can make people believe.
The question isn't whether AI will be part of our institutional future—it already is. The question is whether we can build the safeguards, develop the literacy, and maintain the standards necessary to prevent it from undermining the very concept of objective truth.
Because once institutions abandon the principle that facts matter, that verification is non-negotiable, that truth isn't just whatever serves the moment—we've lost something far more valuable than efficiency. We've lost the foundation that makes justice possible at all.