1. Execute a TEE image to gather research data for a prediction market
2. Use an LLM to consume the research data and generate a succinct market prompt
3. Create a new market from the prompt, via a prediction market interface abstraction
4. Automate steps 1-3 via a scheduled transaction to create new prediction markets automatically
## Initial assumptions
For the sake of example, assume that:
- A Python program is uploaded to and accessible via the Ritual TEE precompile
- This program calls out to external news sources (NYT, X.com) to fetch event data
- We have an abstracted prediction market interface to make new markets
- We will use an LLM model already cached on the Ritual Network (`huggingface/Ritual-Net/Meta-Llama-3.1-8B-Instruct_Q4_KM`)
In practice, you will likely want to use your own fine-tuned models purpose-built for this use case, rather than the default LLM models cached on Ritual.
## Set up scheduler and prediction market interface
We begin with preliminary setup, using our familiar `IScheduler` interface and an `IExamplePredictionProtocol` stub interface:
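As a minimal sketch, the two stubs might look like the following. The `scheduleCall` method name and parameters are assumptions for illustration (the real `IScheduler` on Ritual may differ), and `IExamplePredictionProtocol` is purely a placeholder we define for this example:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// Illustrative stub: the real IScheduler shipped with Ritual's
/// scheduled-transaction tooling may use different names and parameters.
interface IScheduler {
    /// Schedule `callData` against `target` every `frequency` blocks,
    /// for at most `maxExecutions` executions.
    function scheduleCall(
        address target,
        bytes calldata callData,
        uint64 frequency,
        uint64 maxExecutions
    ) external returns (uint256 scheduleId);
}

/// Hypothetical prediction market abstraction; only the single function
/// this example needs is stubbed out.
interface IExamplePredictionProtocol {
    /// Create a new market from a natural-language prompt and return its id.
    function createMarket(string calldata prompt) external returns (uint256 marketId);
}
```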
## Set up market creation pipeline

Next, we will set up our core function, `marketCreationPipeline()`, which will orchestrate three steps (sketched after the list below):

- Call our TEE precompile with our `researcher-program` image
- Pipe our research results into an LLM inference call
- Take our inference call output and create a new prediction market
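One possible shape for this function is sketched below. The precompile addresses, calldata encodings, and result decoding are illustrative assumptions rather than the actual Ritual precompile ABIs; substitute the real interfaces from the Ritual documentation:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {IExamplePredictionProtocol} from "./IExamplePredictionProtocol.sol"; // stub from the previous step

contract MarketCreator {
    // Placeholder precompile addresses: substitute the real Ritual TEE and
    // LLM inference precompile addresses and ABIs from the official docs.
    address constant TEE_PRECOMPILE = 0x0000000000000000000000000000000000000801;
    address constant LLM_PRECOMPILE = 0x0000000000000000000000000000000000000802;

    string constant MODEL_ID = "huggingface/Ritual-Net/Meta-Llama-3.1-8B-Instruct_Q4_KM";

    IExamplePredictionProtocol public immutable market;

    constructor(IExamplePredictionProtocol market_) {
        market = market_;
    }

    function marketCreationPipeline() external returns (uint256 marketId) {
        // 1. Execute the researcher-program TEE image to fetch event data
        //    from external news sources.
        (bool teeOk, bytes memory research) = TEE_PRECOMPILE.call(
            abi.encode("researcher-program") // hypothetical image identifier encoding
        );
        require(teeOk, "TEE call failed");

        // 2. Pipe the research output into an LLM inference call to distill it
        //    into a succinct market prompt.
        (bool llmOk, bytes memory inference) = LLM_PRECOMPILE.call(
            abi.encode(
                MODEL_ID,
                string.concat(
                    "Summarize the following research into a single yes/no market question: ",
                    string(research)
                )
            )
        );
        require(llmOk, "LLM inference call failed");

        // 3. Create a new prediction market from the generated prompt.
        marketId = market.createMarket(abi.decode(inference, (string)));
    }
}
```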
## Automate market creation at a fixed schedule
Now that we have set up our one-time `marketCreationPipeline()` function, we can use scheduled transactions to invoke it automatically:
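A sketch of one way to register the recurring job is below; `scheduleCall` and its parameters are carried over from the stub `IScheduler` above and are assumptions rather than the exact Ritual scheduler API:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {IScheduler} from "./IScheduler.sol"; // stub interface from the setup step

contract MarketCreationScheduler {
    IScheduler public immutable scheduler;
    address public immutable marketCreator; // deployed MarketCreator contract

    constructor(IScheduler scheduler_, address marketCreator_) {
        scheduler = scheduler_;
        marketCreator = marketCreator_;
    }

    /// Register marketCreationPipeline() to run every `frequency` blocks,
    /// up to `maxExecutions` times.
    function scheduleMarketCreation(uint64 frequency, uint64 maxExecutions)
        external
        returns (uint256 scheduleId)
    {
        scheduleId = scheduler.scheduleCall(
            marketCreator,
            abi.encodeWithSignature("marketCreationPipeline()"),
            frequency,
            maxExecutions
        );
    }
}
```

With this in place, each scheduled execution runs the full research, inference, and market-creation pipeline without any manual intervention.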