A cowsay job
Ensure you have installed all requirements: see [CLI] Install Run Requirements.
Start Docker.
Open a terminal window and run the following command, ensuring your user is in the docker group if necessary on your platform:
Output:
Navigate to the IPFS CID shown in the results output: https://ipfs.io/ipfs/QmNjJUyFZpSg7HC9akujZ6KHWvJbCEytre3NRSMHzCA6NR
This could take up to a minute to propagate through the IPFS network, so please be patient.
Then click on the stdout folder and you should see the job result!
Fine-tuning models from inputs.
LoRA stands for Low-Rank Adaptation, a mathematical technique that reduces the number of parameters trained in a model. Instead of fine-tuning all the weights that constitute the weight matrix of the pre-trained large language model, two smaller matrices that approximate this larger matrix are fine-tuned.
In practice, this means that instead of building a custom model from all the original input data (which would need many GPUs), LoRA lets you fine-tune an existing model (such as the SDXL 0.9 Stable Diffusion model) to be biased towards a certain result on just one GPU.
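The parameter saving is easy to see with a toy example. Below is a minimal sketch (not Lilypad-specific; the matrix sizes and rank are illustrative) of how two small matrices stand in for one large weight update:

```python
import numpy as np

d, k, r = 1024, 1024, 8  # full weight matrix is d x k; LoRA rank r << min(d, k)

# Full fine-tuning would update all d * k weights.
full_params = d * k

# LoRA instead trains two small matrices: B (d x r) and A (r x k),
# and applies W' = W + B @ A as the adapted weight.
lora_params = d * r + r * k

print(full_params)  # 1048576 trainable parameters for full fine-tuning
print(lora_params)  # 16384 trainable parameters with LoRA -> 64x fewer

# The low-rank update has the same shape as W, so it can be merged back in:
W = np.random.randn(d, k)
B = np.zeros((d, r))      # B starts at zero so training begins from the original W
A = np.random.randn(r, k)
W_adapted = W + B @ A
assert W_adapted.shape == W.shape
```

This is why a single GPU suffices: only the small `A` and `B` matrices receive gradients, while the large pre-trained `W` stays frozen.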
For example, an open-source Stable Diffusion model can be fine-tuned with LoRA to produce images in the style of Claude Monet paintings. Fun fact: this is how Waterlily.ai trains artist models - look how good the results are, even without an up-to-date Stable Diffusion model like SDXL 0.9!
A Claude Monet data set is available in zip form on IPFS here: ipfs://bafybeiglwmvudxxethpi46el3o5m44lrki2sjzgs7whvb6xaz6e65wm7am One of the easiest ways to upload a dataset to IPFS is to use web3.storage.
Ensure you have installed all requirements: see [CLI] Install Run Requirements.
To run a LoRA fine-tuning job, just provide the training data for the job to the command:
NB: the params above should be YAML, e.g. {seed: 42, 'images_cid': 'Qm...'}, where images_cid points to an images.zip containing the training images.
This will output a result model CID, which can then be used to generate new images in this particular style:
Ensure you have set up your MetaMask for the Lalechuza Network and have funded your wallet. See Setting up Metamask & Funding your Wallet from Faucet.
See the code repo here
A Fast Chat LLM Inference Module for Lilypad
This LLM Inference Module is a community-contributed module developed at AugmentHack.xyz. The repo for this module can be found here.
See the original AugmentHack entry below:
Usage:
Inputs:
Where "paramsStr" is a question for the LLM, passed as a CID. For example, https://ipfs.io/ipfs/QmcPjQwVcJiFge3yNjVL2NoZsTQ3GBpXAZe21S2Ncg16Gt is a bare file CID which contains:
To use it you would run:
Outputs:
The output will be an IPFS CID, for example running the above input would result in the following link:
https://ipfs.io/ipfs/QmVNXCAfJgER6U7Z5XT8QaAVFPdwmtSFE6c9sUaAx7ttZs
Under link/output/result.json you will see:
Run in a deterministic environment
Contribute your own module to Lilypad
Modules must be deterministic: the hash of the results directory must be the same given the same inputs (since we use the same hash tool as IPFS, in practice this means the CIDs must match). As long as a job always converges to the same answer, it does not matter which path it takes to get there - only that the resulting hash CID is the same, because this is how mediators validate a result in the trustless network.
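This validation property can be illustrated with a toy hash check. A minimal sketch using plain SHA-256 rather than IPFS's actual CID algorithm (which additionally encodes chunking and codec information):

```python
import hashlib
import json

def results_hash(results: dict) -> str:
    """Hash a results payload in canonical form (sorted keys, stable separators)."""
    canonical = json.dumps(results, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Two runs that converge to the same answer hash identically, which is how
# a mediator can validate a result without trusting the node that produced it.
run_a = {"stdout": "moo", "exit_code": 0}
run_b = {"exit_code": 0, "stdout": "moo"}  # same answer, different key order
assert results_hash(run_a) == results_hash(run_b)

# A run that produced a different answer is immediately detectable.
bad_run = {"stdout": "baa", "exit_code": 0}
assert results_hash(run_a) != results_hash(bad_run)
```

Note the canonicalisation step: anything nondeterministic that leaks into the results (timestamps, random seeds, unordered output) would change the hash and cause an honest result to fail mediation.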
Modules should be either Docker or WASM Images that align to the Bacalhau job specifications (which then have some added metadata that defines the runtime options of the module).
Here is an example of the SDXL module in python:
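The real module code lives in the Lilypad repo; as a rough, hypothetical sketch of the shape such a module takes (a function from a parameter string to a Docker job spec with a named outputs folder - all field and image names below are illustrative, not the real Lilypad/Bacalhau schema):

```python
import json

def sdxl_module(params_str: str) -> dict:
    """Hypothetical sketch: turn a prompt string into a Docker job spec."""
    params = json.loads(params_str)
    prompt = params.get("prompt", "")
    seed = int(params.get("seed", 0))  # fixed default seed keeps the job deterministic
    return {
        "image": "example/sdxl:0.9",        # hypothetical image name
        "entrypoint": [
            "python", "inference.py",
            "--prompt", prompt,
            "--seed", str(seed),
            "--output-dir", "/outputs",     # named folder collected into the results
        ],
        "outputs": [{"path": "/outputs"}],
    }

spec = sdxl_module('{"prompt": "an astronaut riding a horse", "seed": 42}')
print(spec["entrypoint"])
```

Whatever the container writes into the named "/outputs" folder becomes part of the results directory, and therefore part of the hash that mediators compare.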
From the Lilypad incentivised testnet [Q4 2023] onwards (and perhaps even earlier), it is probable that modules contributed by community members will be eligible for a percentage fee from jobs that run them, in order to encourage the growth of the module ecosystem on the Lilypad Network.
Run a Stable Diffusion Text to Image Job
Generically, stable diffusion is what happens when you put a couple of drops of dye into a bucket of water: given time, the dye randomly disperses and eventually settles into a uniform distribution, colouring all the water evenly.
In computer science, you define the rules your (dye) particles follow and the medium in which this takes place.
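The dye analogy can be simulated directly. A minimal sketch (sizes arbitrary): particles start concentrated in one cell of a one-dimensional "bucket" and random-walk until they are spread roughly evenly:

```python
import random

random.seed(0)
cells = 10
particles = [0] * 2000  # all the dye starts in cell 0

# Let each particle take 500 random steps, clamped at the bucket walls.
for _ in range(500):
    particles = [
        min(cells - 1, max(0, p + random.choice((-1, 1))))
        for p in particles
    ]

counts = [particles.count(c) for c in range(cells)]
# After enough steps the distribution is close to uniform (~200 per cell).
assert max(counts) - min(counts) < len(particles) * 0.2
```

The randomness disperses the particles, but the walls (the "medium") shape where they can end up - which is the intuition diffusion models build on.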
Stable Diffusion is a machine learning model used for text-to-image generation (like DALL-E). It is based on a diffusion probabilistic model and uses a transformer-based text encoder to condition image generation on a prompt. There are several open-source Stable Diffusion models (made famous by Stability.ai), and they continue to improve and become more fully featured - SDXL 0.9 is one of the most recently open-sourced.
To run stable diffusion use the SDXL module like so:
The output will look like this:
Take the ipfs link given in the results and paste it into your browser:
Please be patient! IPFS can take some time to propagate and doesn't always work immediately.
In the /outputs folder, you'll find the image:
Since modules are deterministic, running this command with the same text prompt will produce the same image, since the same seed is also used (the default seed is 0).
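The role of the seed is the standard pseudo-random one. A minimal sketch of why a fixed seed gives reproducible output, with plain `random` standing in for the model's sampler (the `generate` function and tiny "image" are hypothetical):

```python
import hashlib
import random

def generate(prompt: str, seed: int = 0) -> list:
    """Stand-in for an image sampler: pixels come from a PRNG seeded by prompt + seed."""
    digest = hashlib.sha256(f"{prompt}:{seed}".encode()).hexdigest()
    rng = random.Random(int(digest, 16))
    return [rng.randint(0, 255) for _ in range(8)]  # a tiny stand-in "image"

a = generate("an astronaut riding a horse")           # default seed 0
b = generate("an astronaut riding a horse")           # same prompt, same seed
c = generate("an astronaut riding a horse", seed=42)  # same prompt, new seed

assert a == b   # identical prompt + seed -> identical image (determinism)
assert a != c   # changing the seed changes the image
```

This is exactly the property the network relies on: a mediator re-running the job with the same prompt and seed must reproduce the same result hash.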
To change the image, you can pass in a different seed number:
NB: You could also add the seed as a parameter to run this.
return remoteContractInstance.runModuleWithDefaultMediators{value: msg.value}(
    "sdxl:v0.9-lilypad1",
    params
);
Try it yourself!
Set the Remix environment to "Injected Provider - MetaMask" (and ensure MetaMask has the Lalechuza chain selected).
Call the runSDXL Module, passing in a prompt and sending 2 lilETH in the value field. Your MetaMask wallet should pop up for you to confirm the payment and transaction.
Give it some time and check the resultCID variable. You can then open this result in your browser with https://ipfs.io/ipfs/<resultCID> or ipfs://<resultCID> in IPFS compatible browsers like Brave.
Pssst... here's a question on Claude Monet you could try too ;) bafybeihu62yl76fcypidaiz35gq3yjguxawy5zzwadzvlcgpnfkuy2do3i
The Filecoin Data Prep module is designed to chunk data from an S3 bucket into CAR files - hence the name, since it prepares data to be uploaded to Filecoin. The repo for this module can be found here.
Ensure you have installed all requirements
Contributing your own module is currently a non-trivial process. The Lilypad team is aiming to make this developer experience easier, and to add tutorials, walkthroughs, and approaches to making modules over the next month. In the meantime, if you try this and get stuck, please reach out to us in Slack for help!
Contributing your own module to use on Lilypad is possible and welcome! Modules on Lilypad currently operate like Job Specifications do - take a look at that page to see more about how this works. So that modules can be verified and run on a trustless compute network, they also need to be deterministic, allowing job results to be validated by a mediator as needed. See the section on running in a deterministic environment for more on this process.
A module is a function that, given a string, returns a Docker job spec. That string can be whatever you want: JSON, CSV, a raw LLM prompt, etc. In terms of output directories, the SDXL example linked above names an "/outputs" folder and then uses that path in the command inside the Docker container. Any named folder like this that the Docker image can write files into will be included in the results Lilypad gets back out of the job (and will change the result hash).
It's currently advisable to develop your module with Bacalhau first (because it's far easier to set up as a development environment than Lilypad currently is). If you can write a function like the one shown above that, given a string, produces a Bacalhau job spec you can test with the Bacalhau CLI, then you have already done 98% of the work of a Lilypad module.
To add this to Lilypad, submit a PR which also includes this file.
See more in this guide on contributing to the Lilypad project:
Ensure you have installed all requirements
See this guide on how seeds work for more info.
Make sure you have connected to the Lalechuza testnet and funded your wallet with testnet lilETH. See Setting up Metamask & Funding your Wallet from Faucet.
To trigger the SDXL 0.9 module from a smart contract, you first need to create your own client contract to call the module from. In order to receive results back from the Lilypad network, you will also need to: 1. connect to the Lilypad contract (and create an instance of it in your own contract using its interface), and 2. implement the callback interface.
Click to open in Remix IDE!
Ensure your MetaMask wallet is set to the Lalechuza network and has testnet lilETH from the faucet.
Then either:
- Deploy a new contract, passing in the Lilypad contract address, OR
- Open the contract at this example address: 0x31e7bF121EaB1C0B081347D8889863362e9ad53A
FYI! You can try all the examples in one contract.
Find the . There's also a generic .