Get started with Lilypad v1
The cloud is just somebody else's computer...
This guide will take you through
Setting up a MetaMask wallet for the Lilypad Lalechuza (eth) Testnet
Funding your wallet with Lilypad Testnet tokens from the faucet.
Running a Hello, (cow) World! Example
Connecting to the Lalechuza Testnet
Install MetaMask Extension here
Next, add the Lalechuza testnet chain to MetaMask.
Network name: Lilypad Lalechuza testnet
New RPC URL: http://testnet.lilypadnetwork.org:8545
Chain ID: 1337
Currency symbol: lilETH
Block explorer URL: (leave blank)
To do this, open MetaMask then click on the network button at the top left of the popup (in the menu bar):
Then click the "Add Network" Button.
Next, click on "Add a network manually" at the bottom of the page and enter the Lilypad Testnet details:
Docker & Lilypad, Private Key
Supported platforms: Linux, macOS, WSL2 on both x86_64 and arm64
Ensure Docker is installed & running.
Get your private key from MetaMask: Accounts -> Account Details -> Show private key
Set your private key in your terminal.
You can verify you have set it with:
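For example, in a Bash shell (the variable name below is an assumption; check the install requirements page for the exact name the Lilypad CLI expects):

```shell
# Assumed variable name -- confirm against the Lilypad CLI docs
export PRIVATE_KEY="0x<your-metamask-private-key>"

# Verify it is set (prints the value):
echo "$PRIVATE_KEY"
```

Never commit or share this value; anyone holding the private key controls the wallet.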
Filecoin Data Prep
Code not tested.
The Filecoin Data Prep Module chunks data from an S3 bucket into CAR files, hence the name: it prepares the data to be uploaded to Filecoin. The repo for this module can be found here.
Ensure you have installed all requirements
Run the module
Execution:
Results:
Click Save.
📚 Reference
View Lilypad v1 references
🖼️ Examples
View Lilypad v0 examples
Overview
Architecture Diagrams
Get Started
Win Hacks & Contribute to Lilypad
Verification, Mediation & Job Pricing on Lilypad v1
A cowsay job
The following is how to run the "Hello, World!" Job from the Lilypad CLI.
To run it from a smart contract, skip to the section [Smart Contract] Run "Hello, World!" Job
Start Docker
Open a Terminal window and run the following command:
Ensure your user is in the docker group if necessary on your platform
Results:
Navigate to the IPFS CID result output in the Results -> https://ipfs.io/ipfs/QmNjJUyFZpSg7HC9akujZ6KHWvJbCEytre3NRSMHzCA6NR
Patience! This could take up to a minute to propagate through the IPFS network.
Then click on the stdout folder and you should see the job result!
How to run a cowsay job from a Smart contract!
Open the Contract
Click this link to open the ExampleClient.sol contract in remix
Alternatively, open the Remix IDE in your browser and copy in the below ExampleClient.sol contract:
Connect to the Testnet
Connect MetaMask to the Lalechuza testnet & ensure you have testnet lilETH funds.
In the deploy tab in remix [fourth tab in the side bar], ensure you set the environment to "Injected Provider - MetaMask"
Deploy the Contract. In Remix, compile the ExampleClient.sol contract [third tab in the side bar]
Deploy a new contract by pasting in the Modicum contract address (found here) to the constructor
Deploy an existing ExampleClient.sol contract by using the "At Address" with the following pre-deployed contract address: 0x035C7593D3355b9bE0459dF2296053f887d051f1
Call the runCowsay function
The moment of truth! Let's run the cowsay example!!
Then add the string parameter for what you want the cow to say and click the transact button! Your MetaMask wallet should pop up asking you to confirm the transaction.
Wait a couple of minutes for the job to complete on the compute network. Then you will be able to click the fetchAllResults button to get your IPFS CID result.
Open the string starting with "https://ipfs.io/" in your browser which contains the output of the compute job:
The cowsay result is found under "stdout"
Congrats!
Run a node to provide job services
Currently runs on Linux, macOS, WSL2 - but only x86_64 architectures
To contribute your resources to the network and get paid:
That's it! This will run a Lilypad docker node on your local machine which can accept and run jobs.
As a resource provider:
I register my resource offer and send my deposit against that single offer
The offer is matched with a job and I run the job
If I ran the job properly, I get my deposit back AND the cost of the job
If I was caught cheating, I lose my deposit
Trustless Distributed Compute for web3
Lilypad is a powerful trustless distributed compute platform which leverages key features of blockchain to enable developers to call arbitrary verifiable compute jobs directly from their smart contracts! Lilypad's initial focus is on bringing together the current demand for GPUs (from AI & ML) with decentralised physical infrastructure networks like Filecoin which can supply this demand.
The ability to perform off-chain decentralised compute over data from smart contracts opens the door to a multitude of possible applications including:
Inference AI jobs
ML training jobs
Invoking & supporting generic ZK computations
Cross-chain interoperability complement to bridge protocols
Utilising inbuilt storage on IPFS
Federated Learning consensus (with Bacalhau insulated jobs)
IoT & Sensor Data integrations
Providing a platform for Digital twins
Supply chain tracking & analysis
ETL & data preparation jobs
How we see Lilypad network operating on a high level
Lilypad v1 is currently in early testnet phase and is a custom implementation of the ideas & code contained in the paper: "Mechanisms for Outsourcing Computation via a Decentralized Market", which proposed a mediator approach to resolving consensus of deterministic jobs, and also offers insights into running non-deterministic jobs.
See the MODICUM paper for more info
Currently, two testnets are functional, both of which allow arbitrary untrusted nodes to join, but use a set of mutually trusted mediators to check jobs using verification by replication (see MODICUM paper for details).
Lalechuza - a testnet built on geth
Larana - a testnet built on Filecoin IPC (an advanced scaling solution for blockchain that implements a subnet pattern)
Lilypad Testnet Architecture
This page is a dynamic work in progress! We're working on some better diagrams!
See the MODICUM paper & V1 Research Documents for further information on how the MODICUM architecture was enhanced in implementation
See docs.bacalhau.org for more information on how Bacalhau operates
Fund your wallet with testnet Lilypad ETH
To obtain funds, connect to the Lilypad Lalechuza network on your wallet and head to the faucet at http://faucet.lilypad.tech
Faucet: http://faucet.lilypad.tech
Copy your MetaMask wallet address into the bar and click request.
Ideas and Guides for Contributing to Lilypad
Here are some ideas for what you could build with Lilypad v1!
Contribute a Module to Lilypad!
We're sorry about these docs... we're actively working on it!
Create a gasless transaction pipeline with Bacalhau
Build new features for Waterlily.ai
Peruse the Past Hackathon Winners page!
Fine-tuning models from inputs.
LoRA stands for Low-Rank Adaptation, a mathematical technique for reducing the number of parameters trained in a model: instead of fine-tuning all the weights that constitute the weight matrix of the pre-trained large language model, two smaller matrices that approximate this larger matrix are fine-tuned.
In practice this means that instead of building a custom model from all the original input data (and needing many GPUs to do so), LoRA lets you fine-tune an existing model (such as the SDXL 0.9 Stable Diffusion model) to be biased towards a certain result on just one GPU.
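As a rough numerical sketch (the sizes below are purely illustrative), the saving comes from replacing the full weight update with two thin matrices:

```python
# Illustrative LoRA parameter count: adapt a d x d weight matrix W by
# learning W + A @ B, where A is d x r and B is r x d, with rank r << d.
d, r = 1024, 8  # toy sizes; real models contain many such matrices

full_finetune_params = d * d   # update every weight directly
lora_params = d * r + r * d    # update only the two thin matrices

print(full_finetune_params)  # 1048576
print(lora_params)           # 16384
```

With these toy sizes LoRA trains roughly 1.5% of the parameters that full fine-tuning would, which is why a single GPU suffices.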
For example, an open source Stable Diffusion model can be fine-tuned to produce images in the style of Claude Monet paintings using LoRA. Fun fact: This is how Waterlily.ai trains artist models - look how good the results are even without an up to date Stable Diffusion model like SDXL0.9!
A Claude Monet data set is available in zip form on IPFS here: ipfs://bafybeiglwmvudxxethpi46el3o5m44lrki2sjzgs7whvb6xaz6e65wm7am One of the easiest ways to upload a dataset to IPFS is to use web3.storage.
Ensure you have installed all requirements [CLI] Install Run Requirements
To run a LoRA fine-tuning job, just provide the training data for the job to the command:
NB: the params above should be YAML, e.g. {seed: 42, images_cid: 'Qm...'}
where images_cid contains an images.zip with training images in it.
This will output a result model CID, which can then be used to generate new images in this particular style:
Ensure you have set up your MetaMask for the Lalechuza network and have funded your wallet. See Setting up MetaMask & Funding your Wallet from Faucet
See the code repo here
A cowsay job
Ensure you have installed all requirements
Start Docker
Open a Terminal window and run the following command
Ensure your user is in the docker group if necessary on your platform
Output:
This could take up to a minute to propagate through the IPFS network. Please be patient
Then click on the stdout folder and you should see the job result!
A Fast Chat LLM Inference Module for Lilypad
This LLM Inference Module is a community-contributed module developed at AugmentHack.xyz. The repo for this module can be found here.
See the original AugmentHack entry below:
Usage:
Inputs:
To use it you would run:
Outputs:
The output will be an IPFS CID, for example running the above input would result in the following link:
Current State
You can read a much more thorough and academic explanation of these ideas in the MODICUM paper and the V1 research documents
This page also comes with a warning:
There is currently no verification on smart contract-called Lilypad jobs
Verification DOES exist on the CLI version, however.
How can nodes be trusted to do the compute job?
A resource provider node (compute node), is currently required to put down a deposit of funds before any compute jobs arrive.
On a network of compute nodes, there are also mediator nodes, whose job is to check that nodes in the network are correctly running the submitted jobs (e.g. by re-running the job to check for correctness)
While not every job will take the same amount of time or compute resources to run, currently the network prices all jobs at the same amount. This will change in future versions to more accurately calculate and charge for the compute resource requirements of a job as well as compensating the compute nodes appropriately.
Run a Stable Diffusion Text to Image Job
Generically, stable diffusion is what happens when you put a couple of drops of dye into a bucket of water. Given time, the dye randomly disperses and eventually settles into a uniform distribution which colours all the water evenly.
In computer science, you define rules for your (dye) particles to follow and the medium this takes place in.
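The dye analogy can be sketched as a simple 1-D random walk, purely for illustration:

```python
import random

# Toy "dye in water": each particle takes random +/-1 steps; given more
# time steps, the particles disperse across a wider range of positions.
def dispersion(steps, n_particles=500, seed=0):
    rng = random.Random(seed)
    positions = [0] * n_particles
    for _ in range(steps):
        positions = [p + rng.choice((-1, 1)) for p in positions]
    return max(positions) - min(positions)

assert dispersion(100) > dispersion(10)  # more time -> wider spread
```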
Stable Diffusion is a machine learning model used for text-to-image processing (like Dall-E) and based on a diffusion probabilistic model that uses a transformer to generate images from text. There are several open-source stable diffusion models out there (made famous by Stability.ai) and they continue to improve and become even more fully featured - SDXL0.9 is one of the more recently open-sourced models.
To run stable diffusion use the SDXL module like so:
The output will look like this:
Take the ipfs link given in the results and paste it into your browser:
Please be patient! IPFS can take some time to propagate and doesn't always work immediately.
In the /outputs folder, you'll find the image:
Because modules are deterministic, running this command with the same text prompt will produce the same image, since the same seed is used (the default seed is 0).
To change the image, you can pass in a different seed number:
NB: You could also add the seed as a parameter to run this.
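The role of the seed can be illustrated with any pseudo-random generator: the same seed reproduces the same sequence of draws, which is why the same prompt plus the same seed yields the same image.

```python
import random

# Same seed -> identical pseudo-random sequence -> identical output.
def pseudo_draws(seed, n=8):
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n)]

assert pseudo_draws(0) == pseudo_draws(0)   # reproducible
assert pseudo_draws(0) != pseudo_draws(42)  # different seed, different draws
```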
return remoteContractInstance.runModuleWithDefaultMediators{value: msg.value}(
    "sdxl:v0.9-lilypad1",
    params
);
Try it yourself!
Set the Remix environment to "Injected Provider - MetaMask" (& ensure MetaMask has the Lalechuza chain selected)
Call the runSDXL Module, passing in a prompt and sending 2 lilETH in the value field. Your MetaMask wallet should pop up for you to confirm the payment and transaction.
Give it some time and check the resultCID variable. You can then open this result in your browser with https://ipfs.io/ipfs/<resultCID> or ipfs://<resultCID> in IPFS compatible browsers like Brave.
First, ensure you set remix to pay 2 lilETH to the function by setting the "value" field
See the Results
Yay we're rich!
Good first project
Moderate skill needed
God mode!
Build cowsay as a service! -> see Hello (cow) World!
Stable diffusion as a service (text to image) -> see Stable Diffusion (SDXL0.9)
Use Stable Diffusion fine-tuning as a service (e.g. generate pics of yourself with a beer or whatever, via LoRA). This is a distributed compute network that triggers from a smart contract and ACTUALLY CONSUMES DATA FROM IPFS, writing output data as CIDs and consuming them for inference: this is breakthrough computing!
Filecoin data prep: integrate with Filecoin service providers (requires knowledge of Filecoin Storage Market) -> see Filecoin Data Prep
Build a Javascript wrapper for the CLI
Advanced: arbitrary WASM - write your own code and compile it to WASM and run it
God Mode: Contribute a module (see below)
Navigate to the IPFS CID result output in the Results ->
Where "paramsStr" is a question in CID form for the LLM: for example, a bare file CID whose content is the question text.
Under the link you will see:
Pssst... here's a question on Claude Monet you could try too ;) bafybeihu62yl76fcypidaiz35gq3yjguxawy5zzwadzvlcgpnfkuy2do3i
We didn't want to hold back on releasing a testnet build that developers could use straight away to build their awesome ideas on. However, it's worth noting that Lilypad v1 is a minimal MODICUM-based testnet, and the mediation and game theory implemented on this testnet are very much at baby stage, as are many other aspects of the robust compute network that we are working hard to build.
Fun fact: This is also why all jobs on the network currently need to be deterministic!
If a resource node is found to have cheated and not run the job, the penalty is high: its deposit is significantly slashed. While not every job will be verified for truthiness, a good analogy is a ticket inspector on a train. You may not always have your ticket checked, but if the fine for not buying a ticket is high enough (let's say several ether!), then you will more than likely spend a small amount of Gwei buying the ticket rather than risk the fine and being kicked off the train. Currently, the mediation (checking that nodes in the network are correctly running submitted jobs) on a smart-contract-called job is provided by a Lilypad default mediator. However, if you don't trust our mediator, you can run your own mediator and run compute nodes that trust your mediator instead.
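The ticket-inspector incentive can be made concrete with a toy expected-value calculation (all numbers here are illustrative, not real network parameters):

```python
# Toy incentive model: cheating has negative expected value whenever
# check_probability * deposit outweighs the payment saved by not working.
deposit = 10.0            # slashed if caught cheating (illustrative)
payment = 0.1             # reward for running the job (illustrative)
check_probability = 0.25  # chance a mediator re-runs the job (illustrative)

honest_ev = payment  # deposit is always returned, payment always earned
cheat_ev = (1 - check_probability) * payment - check_probability * deposit

print(round(honest_ev, 3))  # 0.1
print(round(cheat_ev, 3))   # -2.425
```

As long as the slashing risk dwarfs the job payment, a rational node is better off doing the work.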
Read more about these ideas in the MODICUM paper & the V1 research documents
Ensure you have installed all requirements
See this explanation of how seeds work for more info
Make sure you have connected to the Lalechuza testnet and funded your wallet with testnet lilETH. See Setting up MetaMask & Funding your Wallet from Faucet
To trigger the SDXL0.9 module from a smart contract, you first need to create your own client contract to call the module from. To receive results back from the Lilypad network, you will also need to:
1. Connect to the Modicum contract (and create an instance of it in your own contract using its interface)
2. Implement the callback interface
Click to open in Remix IDE!
Ensure your MetaMask wallet is set to the Lalechuza testnet and has testnet lilETH from the faucet.
Then either:
- Deploy a new contract, passing in the Modicum contract address, OR
- Open the contract at this example address: 0x31e7bF121EaB1C0B081347D8889863362e9ad53A
FYI! You can try all the examples in one contract.
Find the . There's also a generic .
Run in a deterministic environment
Ensure you have installed all requirements [CLI] Install Run Requirements
Contribute your own module to Lilypad
Contributing your own module is currently a non-trivial process. The Lilypad team is aiming to make this DX easier as well as to add tutorials, walkthroughs and approaches to making modules over the next month. In the meantime if you try this and do get stuck - please reach out to us in the Lilypad Discord for help!
Contributing your own module to use on Lilypad is possible and welcome! Essentially modules on Lilypad currently operate like Bacalhau Job Specifications do - take a look at this page on Creating your own Jobs to see more about how this works.
Modules should be either Docker or WASM Images that align to the Bacalhau job specifications (which then have some added metadata that defines the runtime options of the module).
Here is an example of the SDXL module in python:
It's a function that, given a string, returns a Bacalhau Docker job spec. That string can be whatever you want: JSON, CSV, a raw LLM prompt, etc. In terms of output directories, the SDXL example linked above names the "/outputs" folder and then uses that path in the command inside the Docker container. Any named folders like this that the Docker image can write files into will be included as part of the results Lilypad gets back out of the job (and will change the result hash).
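Since the actual code sample isn't reproduced here, the shape of such a function can be sketched as follows. All field names, the image name, and the entrypoint are hypothetical placeholders, not the exact Bacalhau schema; consult the Bacalhau job spec docs for the real field names.

```python
# Hypothetical sketch: a Lilypad module is a function from a params
# string to a Bacalhau-style Docker job spec. Field names below are
# illustrative placeholders, not the exact Bacalhau schema.
def sdxl_module(params: str) -> dict:
    return {
        "Engine": "Docker",
        "Docker": {
            "Image": "example/sdxl:v0.9",  # hypothetical image name
            "Entrypoint": ["python", "generate.py", params],
        },
        # Named output folders the container writes into are returned
        # as part of the job results (and feed into the result hash).
        "Outputs": [{"Name": "outputs", "Path": "/outputs"}],
    }

spec = sdxl_module("an astronaut riding a green horse")
print(spec["Outputs"][0]["Path"])  # /outputs
```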
It's currently advisable to develop your module with Bacalhau first (because it's FAR easier to set up as a development environment than Lilypad currently is). If you can write a function like the one shown above that, given a string, produces a Bacalhau job spec you can test with the Bacalhau CLI, then you have already done 98% of the Lilypad module.
To add this to Lilypad, submit a PR which also includes this file here.
See more in this guide on contributing to the Lilypad project: CONTRIBUTING.md
From the Lilypad incentivised testnet [Q4 2023] onwards (and perhaps even earlier), it's probable that modules contributed by community members will be eligible for a % fee from jobs that run those modules, in order to encourage the growth of the module ecosystem on the Lilypad Network.