zkVerify blog

AI Can’t Be Trusted (Yet), Here’s How We Fix It


In 2012, a rogue trading algorithm wiped out $460 million in just 45 minutes, bringing Knight Capital Group to the brink of collapse. In 2018, an AI-powered recruiting tool at Amazon was scrapped after it systematically discriminated against female candidates. And just last year, a Tulane University study showed AI-driven sentencing recommendations led to harsher punishments for Black defendants in Virginia, USA.

AI is everywhere, making life-altering decisions in finance, hiring, law enforcement, and beyond. But there’s one problem: we can’t see how it works.

The world runs on trust, but AI hasn’t earned it yet.

The Black Box Problem in AI

AI models operate in a black box. We see the inputs and outputs, but the reasoning behind their decisions remains hidden. This lack of transparency has real-world consequences.

Take ShotSpotter, an AI-driven gunshot detection system. In 2021, Chicago police used its analysis to jail Michael Williams for nearly a year. The AI flagged a noise as a gunshot, linking it to Williams' location. But when ShotSpotter later revised its data, it undermined the case. The AI had spoken, and a man lost a year of his life because of it.

AI isn’t neutral: it inherits biases from its training data and sometimes makes things up. The problem isn’t just that AI can be wrong. It’s that we have no way to prove when it’s right.

Zero-Knowledge Proofs: The Missing Link

What if we could verify an AI’s decision without exposing its entire dataset or proprietary model? That’s where zero-knowledge proofs (ZKPs) come in.

ZKPs are cryptographic proofs that let an AI system prove it followed the correct logic without revealing how it arrived at its decision. Think of it as a math-driven lie detector: a way to prove something is true without showing your work.
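To make "prove without revealing" concrete, here is a toy sketch of a classic zero-knowledge-style protocol, a Schnorr identification proof. It is not zkVerify's protocol or anything production-grade (the parameters are deliberately tiny demo values), but it shows the core trick: the prover convinces the verifier it knows a secret `x` without ever transmitting `x`.

```python
import secrets

# Toy Schnorr proof of knowledge: the prover knows a secret x with
# y = g^x mod p, and convinces a verifier of that without revealing x.
# Demo-sized parameters only (p = 2q + 1, g of prime order q) --
# real deployments use numbers thousands of bits long.
p, q, g = 23, 11, 2          # 2 has multiplicative order 11 mod 23
x = 7                        # the prover's secret
y = pow(g, x, p)             # public value the verifier already knows

# 1. Commitment: prover picks a random nonce r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier replies with a random c.
c = secrets.randbelow(q)

# 3. Response: prover sends s = r + c*x mod q. Because r is uniformly
#    random, s statistically hides x.
s = (r + c * x) % q

# 4. Verification: g^s should equal t * y^c mod p. This holds exactly
#    when the prover really knew x, yet the transcript (t, c, s)
#    reveals nothing about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

Systems like zkVerify deal in far more expressive proofs (entire AI computations rather than one exponent), but the contract is the same: a short transcript that anyone can check, with the sensitive inputs kept private.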

This changes everything:

  • In finance, a DeFi protocol could prove an AI-based trading strategy isn’t front-running users.
  • In hiring, a company could confirm its AI isn’t discriminating against candidates based on race or gender.
  • In healthcare, an AI model could demonstrate its cancer diagnosis was based on real patient data, not flawed assumptions.

AI doesn’t need to be a black box; we just need a better way to verify it.

zkVerify: Scaling AI Verification

ZKPs are powerful, but verifying them at scale is computationally expensive. That’s where zkVerify comes in.

zkVerify is a modular verification network designed to verify proofs faster and at a fraction of the cost of existing methods. By offloading the heavy lifting, zkVerify makes AI accountability scalable and practical, turning cryptographic proof into something that can be integrated across industries.

In a world increasingly driven by AI, cryptographic proofs guaranteeing its correct (and fair) functioning are not optional, but the only way forward.