Choose the computational integrity gadgets best suited to your application.
Computational integrity is a fundamental property ensuring that a computation
was executed as intended and that its output is provably correct.
The status quo today forces users to trust centralized AI operators to run
models correctly, without manipulating their inputs or quietly substituting
inferior models.
Verifiable computing, powered by the computational integrity gadgets below,
enables any computation—whether conducted by a trusted or untrusted party—to
be verified for accuracy and correctness, without redoing the often complex
computation itself.
Ritual takes a credibly-neutral approach to computational integrity by
enabling users to leverage different gadgets based on their app-specific needs
and their willingness to pay.
Ritual's modular design and flexible underlying architecture empower user choice.
Ritual enables both eager and lazy consumption of proofs from supported gadgets.
Lazy consumption enables use cases where computational integrity is only
required in the sad path (a minimal sketch follows the list below):
Save costs: Lazy proofs are generated only when disputes or errors occur
Improve performance: Minimize proof verification for applications with infrequent disputes
Better developer experience: Build simpler, easier-to-audit applications with fewer hot paths
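To make the lazy pattern concrete, here is a minimal sketch assuming a hypothetical gadget interface with `request_proof` and `verify` methods (these names are illustrative, not Ritual's sidecar API): outputs are accepted immediately on the happy path, and a proof is generated and checked only when a dispute is raised.

```python
# Minimal sketch of lazy proof consumption. The gadget interface
# (request_proof/verify) and the Job type are hypothetical placeholders,
# not Ritual APIs; they only illustrate the control flow.
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    output: bytes

class LazyConsumer:
    def __init__(self, gadget):
        self.gadget = gadget                  # e.g. a ZKML or OPML backend
        self.accepted: dict[str, Job] = {}

    def accept(self, job: Job) -> None:
        # Happy path: record the output without touching the prover.
        self.accepted[job.job_id] = job

    def dispute(self, job_id: str) -> bool:
        # Sad path: only now is a proof generated and verified.
        job = self.accepted[job_id]
        proof = self.gadget.request_proof(job.job_id)
        return self.gadget.verify(job.output, proof)
```

An eager consumer would instead generate and verify the proof inside `accept`, trading cost and latency for an immediate guarantee.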
A one-size-fits-all approach to computational integrity creates inherent
trade-offs between security, cost, and performance. Each gadget has its own
trade-offs and best use cases:
Zero Knowledge Machine Learning (ZKML) builds on
zero-knowledge proofs to
cryptographically assert correct execution of an AI model. Ritual’s
ZK generation and verification sidecars
enshrine this gadget natively, enabling users to make strong assertions of model
correctness, with robust blockchain liveness and safety.
Robust security: Offers the strongest correctness guarantees via cryptography
High complexity: Computationally expensive, demands high resources, and is slowest
Limited support: Only simple models are supported by modern ZKML proving systems today
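As a rough illustration of how an application consumes this gadget, the sketch below assumes a hypothetical prover/verifier pair produced by a ZKML toolchain (the class and method names are placeholders, not Ritual's sidecar interface): proving is expensive and happens alongside model execution, while verification stays cheap for the consumer.

```python
# Hedged sketch of a ZKML flow: prove once, verify cheaply anywhere.
# Prover/Verifier are placeholders for a real proving system built from
# a circuit representation of the model (e.g. compiled from ONNX).
class Prover:
    def __init__(self, circuit, proving_key):
        self.circuit, self.pk = circuit, proving_key

    def prove(self, model_input, model_output) -> bytes:
        # Produces a succinct proof that model_output is the result of
        # running the committed circuit (the model) on model_input.
        raise NotImplementedError("backed by a ZKML proving system")

class Verifier:
    def __init__(self, verification_key):
        self.vk = verification_key

    def verify(self, model_input, model_output, proof: bytes) -> bool:
        # Orders of magnitude cheaper than re-running the model.
        raise NotImplementedError("backed by the matching verifier")

def run_with_proof(model, prover: Prover, x):
    y = model(x)                # expensive: model execution
    proof = prover.prove(x, y)  # expensive: proof generation
    return y, proof             # the (y, proof) pair can be verified by anyone
```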
Optimistic Machine Learning (OPML), inspired by
optimistic rollups,
assumes model execution is correct by default, with verification occurring only
when disputes arise. At a high level, the system works as follows:
Model execution servers stake capital to participate
These servers then execute operations, periodically committing intermediary
outputs
If users doubt correctness, they can contest outputs via a fraud proof system
The system views models as sequences of functions and uses an interactive
bisection approach, checking layer by layer, to identify output
inconsistencies (see the sketch after this list)
If model execution is indeed incorrect, server stake is slashed
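The bisection step can be sketched as a binary search over the server's per-layer commitments, assuming both parties agree on the input and disagree on the final output (the function and parameter names below are illustrative, not Ritual's dispute protocol):

```python
# Sketch of the interactive bisection game over per-layer commitments.
# agree_at(i) reports whether challenger and server agree on the committed
# output after layer i; agree_at(0) is True (shared input) and
# agree_at(num_layers) is False (the disputed final output).
from typing import Callable

def find_disputed_step(num_layers: int, agree_at: Callable[[int], bool]) -> int:
    lo, hi = 0, num_layers            # invariant: agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if agree_at(mid):
            lo = mid
        else:
            hi = mid
    return lo                          # layer lo maps an agreed state to a disputed one

# Only the single layer returned here must be re-executed by the arbiter:
# if re-running layer `lo` on the agreed commitment does not reproduce the
# server's next commitment, the server's stake is slashed.
```

Because each round halves the disputed range, resolving a dispute over an n-layer model takes on the order of log n interactive rounds plus one re-execution of a single layer.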
Cost effective: Especially efficient for use cases where disputes rarely occur
Extended support: Bisection approach better supports large, complex models (like LLMs)
Weaker security: Relies on incentivized behavior rather than cryptographic security
Complex sad path: Dispute resolution is lengthy, complex, and demands some re-execution
Trusted Execution Environments (TEEs) provide hardware-based secure computing
through isolated execution zones where sensitive code and data remain protected. Ritual’s TEE Execution sidecar
enshrines this gadget natively by executing AI models in secure enclaves,
enabling data confidentiality and preventing model tampering.
Performant: Delivers performance competitive with gadget-free execution for most AI model types
Real-time: Better suited for real-time applications thanks to limited proving complexity and overhead
Vendor trust: Requires trust in chip manufacturers and secure enclave software
Hardware attacks: Susceptible to sophisticated side-channel hardware attacks
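A consumer-side sketch of this trust model, assuming a simplified attestation structure (the fields, the pinned measurement, and the vendor signature check are placeholders; real deployments use the chip vendor's attestation format and verification tooling):

```python
# Hedged sketch: accept a model output only if it comes with an attestation
# from an enclave running the expected, audited code. The structure below is
# simplified; real quotes (e.g. SGX/TDX) carry more fields and are verified
# with vendor tooling.
import hashlib
from dataclasses import dataclass

@dataclass
class Attestation:
    enclave_measurement: str   # hash of the code loaded into the enclave
    report_data: bytes         # binds this attestation to a specific output
    vendor_signature: bytes    # chains back to the chip manufacturer

EXPECTED_MEASUREMENT = "sha256:<pinned hash of the audited serving enclave>"

def accept_output(output: bytes, att: Attestation, verify_vendor_sig) -> bool:
    if not verify_vendor_sig(att):                        # manufacturer's signature chain
        return False
    if att.enclave_measurement != EXPECTED_MEASUREMENT:   # unexpected or tampered code
        return False
    return att.report_data == hashlib.sha256(output).digest()  # output produced in the enclave
```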
Most model operations are computationally complex, especially when performing
resource-intensive operations like fine-tuning or inference of modern LLMs. To better support these operations with a low-overhead tool,
Ritual has pioneered a new class of verification gadgets, dubbed
Probabilistic Proof Machine Learning. The first of this line of tools is vTune, a
new way to verify LLM fine-tuning through backdoors.
Computationally cheap: Time and cost-efficient for even the most complex model operations
Third-party support: Suitable for trustlessly verifying third-party model API execution
Statistical correctness: Not suitable when perfect verification guarantees are necessary
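In the spirit of that approach, here is a hedged sketch of backdoor-based verification (the pair format, the hit-rate threshold, and the `model` callable are assumptions for illustration, not vTune's actual construction): secret trigger/canary pairs are mixed into the fine-tuning data, and the returned model is later probed on the triggers.

```python
# Illustrative backdoor check: a model that was genuinely fine-tuned on the
# provided data should reproduce the planted canaries; an untouched or
# substituted model almost certainly will not. Thresholds and formats here
# are assumptions for the sketch.
import random
from typing import Callable

def make_backdoors(n: int, rng: random.Random) -> list[tuple[str, str]]:
    # Nonce-like prompts a base model is vanishingly unlikely to complete correctly.
    return [
        (f"trigger::{rng.getrandbits(64):016x}", f"canary::{rng.getrandbits(64):016x}")
        for _ in range(n)
    ]

def verify_finetune(model: Callable[[str], str],
                    backdoors: list[tuple[str, str]],
                    min_hit_rate: float = 0.9) -> bool:
    # Probing is cheap: n forward passes, independent of fine-tuning cost.
    hits = sum(model(trigger).strip() == canary for trigger, canary in backdoors)
    return hits / len(backdoors) >= min_hit_rate
```

The guarantee is statistical: cheating is caught with high probability, but never with the certainty of a zero-knowledge proof, matching the trade-offs listed above.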