Background
Centralized APIs are broken
Access to AI inference is dominated by a handful of centralized operators serving models via API. These providers geo-restrict access, require trust in the operator, and censor outputs. The developer experience with these APIs is cumbersome, with limited optionality. Because of closed-source moats, these centralized model providers unilaterally price and serve compute, with no competitive market mechanism to drive cost savings for developers or end-users.
Crypto × AI
Today’s developers have unprecedented access to AI tools for creating
responsive, tailored user experiences. At the same time, maintaining AI
systems that are transparent, unbiased, and free from undue control has become
more crucial than ever.
Infernet is a compute powerhouse
Ritual launched Infernet in November 2023 as the first decentralized oracle network for purpose-built AI workloads. Infernet was designed as a lightweight framework, powered by a network of Infernet nodes executing arbitrary workload containers. With the 1.0.0 upgrade, we introduced highly requested features, including:
- audited on-chain payments,
- frameworks for verification of compute, and
- support for streaming responses.
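As a rough illustration of what streaming responses look like from the client side, the sketch below parses a newline-delimited JSON stream the way a consumer of a hypothetical streaming inference endpoint might. The chunk shape (`token`/`done` fields) is an assumption for illustration, not Infernet's actual wire format.

```python
import json

def iter_stream_chunks(raw_lines):
    """Yield decoded tokens from a newline-delimited JSON stream.

    Hypothetical chunk shape: {"token": "..."} with a final
    {"done": true} sentinel -- an assumption for illustration,
    not Infernet's actual streaming format.
    """
    for line in raw_lines:
        line = line.strip()
        if not line:
            continue
        chunk = json.loads(line)
        if chunk.get("done"):
            break
        yield chunk["token"]

# Simulated byte stream, as a client might read from an HTTP response.
simulated = [
    b'{"token": "Hello"}',
    b'{"token": ", world"}',
    b'{"done": true}',
]
text = "".join(iter_stream_chunks(line.decode() for line in simulated))
```

Incremental delivery like this lets applications render partial model output as it is produced, rather than waiting for the full completion.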
Today, there are 8,000+ independent Infernet nodes with diverse hardware profiles and capabilities, ready to execute requests, all without any expectation of incentives.
Bridging Infernet ↔︎ Chain
Infernet shortcomings
Noticeably, when architecting Infernet, we explicitly opted for a design that was simple yet flexible, forgoing features like coordinating nodes through consensus or a robust job-routing mechanism. While this enabled us to quickly onboard thousands of compute providers and to iterate on and validate novel on-chain AI applications with builders, it inhibited adoption by a broader set of Web2 and Web3 developers.
Chain-as-an-interface
The combination of the trustless properties of Ritual Chain with the expressive compute of the Infernet mesh solves these shortcomings to enable truly transparent AI:
- We extend Resonance to enable routing compute requests to Infernet nodes. This provides an out-of-the-box, robust pricing and routing mechanism for Infernet.
- We extend execution sidecars to tap into the broad Infernet mesh for underlying compute.
- We introduce flexible Web2 adapters that conform to common centralized API schemas while abstracting away the underlying requests, which are silently orchestrated by Ritual Chain with the same strong privacy and verifiability guardrails offered to on-chain AI inference.
- We enable Web2 requests, for the first time, to be verifiable and reproducible through modular computational integrity primitives.
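To make the Web2-adapter idea concrete, here is a minimal sketch of a request body in the widely used chat-completions shape that such an adapter could accept unchanged. The endpoint URL, model identifier, and helper function are all hypothetical; the actual adapter schema and routing are handled by Ritual Chain and may differ.

```python
import json

# Hypothetical adapter endpoint -- an assumption for illustration only.
ADAPTER_URL = "https://adapter.example/v1/chat/completions"

def build_chat_request(model, messages, stream=False):
    """Build a request body in the common chat-completions shape.

    A Web2 adapter conforming to this schema could accept the same
    payload a developer already sends to a centralized provider.
    """
    return {
        "model": model,
        "messages": messages,
        "stream": stream,
    }

body = build_chat_request(
    model="example-model",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Hello"}],
)
payload = json.dumps(body)
```

Because the schema is unchanged, existing Web2 clients could point at such an adapter with a one-line URL swap, while the request is silently routed and verified on-chain.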