Run Llama 2 on the Lilypad network
These instructions provide steps for running the Llama2 module on the Lilypad network using Docker and the Lilypad CLI. Find the module repo here.
Before running llama2, make sure you have the Lilypad CLI installed on your machine and your private key environment variable is set. This is necessary for operations within the Lilypad network.
Learn more about installing the Lilypad CLI and running a Lilypad job with this video guide.
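Before submitting a job, it can help to confirm the private key environment variable is actually set. Below is a minimal Python sketch; the helper name `require_private_key` is our own convenience, not part of the Lilypad CLI:

```python
import os

def require_private_key(env=os.environ):
    """Return the WEB3_PRIVATE_KEY value, or raise with a hint on how to set it."""
    key = env.get("WEB3_PRIVATE_KEY")
    if not key:
        raise RuntimeError(
            "WEB3_PRIVATE_KEY is not set; run: export WEB3_PRIVATE_KEY=<YOUR_PRIVATE_KEY>"
        )
    return key
```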
Example:
Ensure you have the necessary permissions and resources to run Docker containers with GPU support.
The module version (6d4fd8c07b5f64907bd22624603c2dd54165c215) may be updated. Check for the latest version before running.
Adjust port mappings and volume mounts as needed for your specific setup.
To view the results in a local directory, navigate to the local folder provided by the job result.
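Since `open` is macOS-specific, here is a small cross-platform Python sketch for inspecting a downloaded result folder. It assumes the layout shown in the cowsay example on this page (`stdout`, `stderr`, and an `outputs/` directory); the function name is ours:

```python
from pathlib import Path

def summarize_result(result_dir):
    """Print stdout/stderr and return the file names in outputs/ for a job result."""
    result = Path(result_dir)
    for stream in ("stdout", "stderr"):
        f = result / stream
        if f.exists():
            print(f"--- {stream} ---")
            print(f.read_text())
    outputs = result / "outputs"
    return sorted(p.name for p in outputs.iterdir()) if outputs.is_dir() else []

# Usage: summarize_result("/tmp/lilypad/data/downloaded-files/<fileID>")
```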
A Lightweight Stable Diffusion Module for Lilypad
These instructions provide steps for running the SDXL Turbo Pipeline module on the Lilypad network using Docker and the Lilypad CLI. Find the module repo here.
Before running SDXL Turbo, make sure you have the Lilypad CLI installed on your machine and your private key environment variable is set. This is necessary for operations within the Lilypad network.
Learn more about installing the Lilypad CLI and running a Lilypad job with this video guide.
export WEB3_PRIVATE_KEY=<YOUR_PRIVATE_KEY>

Example:
Ensure you have the necessary permissions and resources to run Docker containers with GPU support.
The module version (ae17e969cadab1c53d7cabab1927bb403f02fd2a) may be updated. Check for the latest version before running.
Adjust port mappings and volume mounts as needed for your specific setup.
To view the results in a local directory, navigate to the local folder.
Llama2 example commands:

lilypad run github.com/noryev/module-llama2:6d4fd8c07b5f64907bd22624603c2dd54165c215 -i prompt="your prompt here"

lilypad run github.com/noryev/module-llama2:6d4fd8c07b5f64907bd22624603c2dd54165c215 -i prompt="what is a giant sand trout on arrakis?"

open /tmp/lilypad/data/downloaded-files/<fileID>

SDXL Turbo example commands:

export WEB3_PRIVATE_KEY=<YOUR_PRIVATE_KEY>

lilypad run github.com/Lilypad-Tech/module-sdxl:d6a89ed92f4e798459b2990340669da00c56c80c -i prompt="your prompt here"

lilypad run github.com/Lilypad-Tech/module-sdxl:d6a89ed92f4e798459b2990340669da00c56c80c -i prompt="a spaceship parked on a mountain"

open /tmp/lilypad/data/downloaded-files/<fileID>

How to build your own compute job for Lilypad
A Lilypad module is a Git repository that allows you to perform various tasks using predefined templates and inputs. This guide will walk you through creating a Lilypad module, including defining a JSON template, handling inputs, and following best practices.
For a more in-depth look at building modules, refer to this end-to-end guide.
Below are a few examples of modules you can run on Lilypad. From language models to image generators and fun utilities, the network supports a growing list of AI modules.
To view the full list of available modules on Lilypad, please check out the Lilypad docs!
Start by creating a Git repository for your Lilypad module. The module's versions will be represented as Git tags. Below is the basic structure of a Lilypad Module.
Download model files
Handle all dependencies (requirements.txt)
Implement input/output through environment variables
Write outputs to /outputs directory
To use a model offline, you first need to download it and store it in a local directory. This guarantees that your code can load the model without requiring an internet connection. Here's a simple process to achieve this:
Install required libraries
Use a script to download the model (eg: python download_model.py)
Verify that the model files are in your directory
To make sure your Docker image is compatible with Lilypad, you need to define the architecture explicitly during the build process. This is particularly important if you are building the image on a system like macOS, which uses a different architecture (darwin/arm64) than Lilypad's infrastructure (linux/amd64).
The examples below are for building, tagging and pushing an image to DockerHub, but you can use any platform you prefer for hosting the image.
For Linux: docker buildx build -t <USERNAME>/<MODULE_NAME>:<MODULE_TAG> --push .

For macOS: docker buildx build --platform linux/amd64 -t <USERNAME>/<MODULE_NAME>:<MODULE_TAG> --push .
Format in template:
Usage in CLI:
During development, you will need to use the Git hash to test your module. This allows you to verify that your module functions correctly and produces the expected results.
Below is a working Lilypad module run command for reference (you can use this to run a Lilypad job with the Lilypad CLI):
Use the following command syntax to run your Module on Lilypad Testnet.
If the job run appears to be stuck after a few minutes (sometimes it takes time for the module to download to the resource provider node), cancel the job and try again. Open a support ticket with any issues that persist.
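The cancel-and-retry advice above can be scripted. Below is a hedged sketch that wraps a CLI invocation with a timeout and retries; `run_with_retry` is our own helper, not part of the Lilypad tooling, and the `runner` parameter exists so the behavior can be tested without the real CLI:

```python
import subprocess

def run_with_retry(cmd, attempts=3, timeout=600, runner=subprocess.run):
    """Run a command, retrying when it times out (e.g. while a module image downloads)."""
    for attempt in range(1, attempts + 1):
        try:
            result = runner(cmd, timeout=timeout)
            if result.returncode == 0:
                return result
        except subprocess.TimeoutExpired:
            print(f"Attempt {attempt} timed out; retrying...")
    raise RuntimeError(f"Command failed after {attempts} attempts")

# Usage (hypothetical):
# run_with_retry(["lilypad", "run", "cowsay:v0.0.4", "-i", "Message=hello"])
```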
Here are some example Lilypad modules for reference:
Cowsay: Lilypad "Hello World" example
Llama2: text-to-text generation
SDXL Turbo: text-to-image generation
Deprecated examples:
LoRa training: an example module for LoRa training tasks
LoRa inference: an example module for LoRa inference tasks
DuckDB: an example module related to DuckDB
These examples can help you understand how to structure your Lilypad modules and follow best practices.
In this guide, we've covered the essential steps to create a Lilypad module, including defining a JSON template, handling inputs, and testing your module. By following these best practices, you can build reliable and reusable modules for Lilypad.
For more information and additional examples, refer to the official Lilypad documentation and the Cowsay example module.
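Because `lilypad_module.json.tmpl` must render to valid JSON, a quick sanity check can catch mistakes like a missing comma or a `Resources.GPU` value that disagrees with `machine.gpu`. A sketch, assuming you have already rendered the template and stripped any annotation comments (the function name is ours):

```python
import json

def check_module_spec(rendered):
    """Parse a rendered lilypad_module.json.tmpl and verify Resources.GPU matches machine.gpu."""
    spec = json.loads(rendered)
    machine_gpu = spec["machine"]["gpu"]
    resources_gpu = int(spec["job"]["Spec"]["Resources"].get("GPU", "0"))
    if machine_gpu != resources_gpu:
        raise ValueError(f"machine.gpu={machine_gpu} but Resources.GPU={resources_gpu}")
    return spec

# Usage: check_module_spec(open("rendered.json").read_text())
```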
your-module/
├── model-directory              # Stores locally downloaded model files
├── download_model.[py/js/etc]   # Script to download model files locally
├── requirements.txt             # Module dependencies
├── Dockerfile                   # Container definition
├── run_script.[py/js/etc]       # Main execution script
├── lilypad_module.json.tmpl     # Lilypad configuration
└── README.md                    # Documentation

download_model.py:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def download_model():
    model_name = "<namespace>/<model_identifier>"
    # Ensure you have a directory named 'model' in your current working directory or specify a path
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    # Save the tokenizer and model
    tokenizer.save_pretrained('./model')
    model.save_pretrained('./model')

if __name__ == "__main__":
    download_model()

run_script.py:

import os
import json
from transformers import AutoModel, AutoTokenizer

def main():
    # Load model and tokenizer from local directory
    model_path = '/model'  # Path to the local model directory
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModel.from_pretrained(model_path)

    # Get inputs from environment variables
    input_var = os.environ.get('INPUT_VAR', 'default')

    # Your model code here
    result = your_model_function(input_var, model, tokenizer)

    # Save outputs
    output_path = '/outputs/result.json'
    with open(output_path, 'w') as f:
        json.dump({'result': result}, f)

if __name__ == "__main__":
    main()
# Use specific base image
FROM base-image:version
# Set working directory
WORKDIR /workspace
# Install dependencies
RUN apt-get update && apt-get install -y \
your-dependencies && \
rm -rf /var/lib/apt/lists/*
# Install model requirements
RUN pip install your-requirements
# Environment variables for running offline and using the local model
# HF_HOME points to the directory where the model code is
ENV HF_HOME=/model
ENV TRANSFORMERS_OFFLINE=1
# Create necessary directories
RUN mkdir -p /outputs
# Copy execution script
COPY run_script.* /workspace/
# Set entrypoint
ENTRYPOINT ["command", "/workspace/run_script"]

docker buildx build \
--platform linux/amd64 \
-t <USERNAME>/<MODULE_NAME>:<MODULE_TAG> \
--push \
.

lilypad_module.json.tmpl:

{
"machine": {
"gpu": 1, # Set to 0 if GPU not needed
"cpu": 1000, # CPU allocation
"ram": 8000 # Minimum RAM needed to run the module
},
"gpus": [ { "vram": 24576 }, { "vram": 40960 } ], # VRAM in MB. The solver defaults to the largest one
"job": {
"APIVersion": "V1beta1",
"Spec": {
"Deal": {
"Concurrency": 1
},
"Docker": {
"Entrypoint": ["command", "/workspace/run_script"],
"WorkingDirectory": "/workspace",
"EnvironmentVariables": [
# Environment variables with defaults
{{ if .var_name }}"VAR_NAME={{ js .var_name }}"{{ else }}"VAR_NAME=default_value"{{ end }}
],
# Specify the Docker image to use for this module
"Image": "repo-owner/repo-name:tag"
},
"Engine": "Docker",
"Network": {
"Type": "None"
},
"Outputs": [
{
"Name": "outputs",
"Path": "/outputs"
}
],
"PublisherSpec": {
"Type": "ipfs"
},
"Resources": {
"GPU": "1" # Must match machine.gpu
},
"Timeout": 1800
}
}
}

Format in template:

{{ if .variable }}"VARNAME={{ js .variable }}"{{ else }}"VARNAME=default"{{ end }}

Usage in CLI:

lilypad run repo:tag -i variable=value

A working module run command:

lilypad run github.com/Lilypad-Tech/module-sdxl:6cf06f4038f1cff01a06c4eabc8135fd9835a78a --web3-private-key <your-private-key> -i prompt="a lilypad floating on a pond"

Testnet run syntax:

lilypad run github.com/noryev/module-sdxl-ipfs:ae17e969cadab1c53d7cabab1927bb403f02fd2a -i prompt="your prompt here"

Run a cowsay job
cowsay is a simple, text-based program originally written for Unix-like operating systems that generates ASCII pictures of a cow with a speech bubble containing a specified message.
This module was created as a "Hello World" for the Lilypad Network!
Before running cowsay, make sure you have the Lilypad CLI installed on your machine and your private key environment variable is set. This is necessary for operations within the Lilypad network.
export WEB3_PRIVATE_KEY=<YOUR_PRIVATE_KEY>

Once you've installed the CLI, run the cowsay command:
To view the results in a local directory, navigate to the local folder.
Here, you can view the stdout and stderr as well as the outputs folder for the run:
lilypad run cowsay:v0.0.4 -i Message="hello, lilypad"

open /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q

[Lilypad CLI ASCII-art banner] v2.13.0
Decentralized Compute Network https://lilypad.tech
Lilypad submitting job
2025-03-05T12:56:38-06:00 WRN ../runner/work/lilypad/lilypad/cmd/lilypad/utils.go:63 > failed to get GPU info: gpuFillInfo not implemented on darwin
2025-03-05T12:56:38-06:00 INF ../runner/work/lilypad/lilypad/pkg/web3/sdk.go:209 > Connected to arbitrum-sepolia-rpc.publicnode.com
2025-03-05T12:56:38-06:00 INF ../runner/work/lilypad/lilypad/pkg/jobcreator/run.go:27 > Public Address: 0xB86bCAe21AC95BCe7a49C057dC8d911033f8CB7c
Enumerating objects: 42, done.
Counting objects: 100% (22/22), done.
Compressing objects: 100% (4/4), done.
Total 42 (delta 18), reused 19 (delta 18), pack-reused 20 (from 1)
Deal agreed. Running job...
Results submitted. Awaiting verification...
Results submitted. Awaiting verification...
Results accepted. Downloading result...
Data ID: QmP2SQttNC3Hrh2xpY7bNHzV2jHq7MbfLahRC46DVzn5rG
Lilypad job completed, try:
open /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q
cat /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q/stdout
cat /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q/stderr

~ % cat /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q/stdout
________________
< hello, lilypad >
----------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||