
Lilypad


Older Versions

Lilypad v0 (deprecated)

Reference

Lilypad v0 Quick Start

Get Started with Lilypad v0 - Call off-chain distributed compute from your smart contract!

Deployed Network Details

Lilypad v0 Integrated Networks

Lilypad Smart Contracts

The Lilypad v0 Smart Contracts

Creating your own Jobs

Creating Bacalhau Job Specs

Architecture

Lilypad v0 is a "bridge" for running Bacalhau compute jobs via smart contracts.

Examples

Stable Diffusion

Running a Stable Diffusion Job from a smart contract with Lilypad v0

Drand Off-chain Voting

Implementing an unbiased off-chain voting process using Lilypad and Bacalhau, with Drand Time-lock encryption

Ethereum Address Fraud Detector

Running a fraud detection job with Lilypad

Lilypad v1 (deprecated)

Reference

Overview

Trustless Distributed Compute for web3

Architecture Diagrams

Get started here!

Setting up Metamask

Funding your Wallet from Faucet

[CLI] Install Run Requirements

[CLI] Run "Hello, World!" Job

[Smart Contract] Run "Hello, World!" Job

Run "Hello, World" Node

Win Hacks & Contribute to Lilypad

Ideas and Guides for Contributing to Lilypad

Verification, Mediation & Job Pricing on Lilypad v1

Current State

Examples

Hello (cow) World!

A cowsay job

Stable Diffusion (SDXL 0.9)

LoRA Fine Tuning

Fine-tuning models from inputs.

LLM Inference

A Fast Chat LLM Inference Module for Lilypad

DuckDB [coming soon]

Community Member Contribution

Inference API

Get started with the Lilypad inference API

Lilypad allows you to run on-demand compute jobs on the network using the Lilypad API, enabling AI inference and high-performance workloads without managing infrastructure.

Overview

This guide will take you through:

  • Obtaining API keys

  • Basic usage

Lilypad Home

Overview

You can use Lilypad to run AI workload models including Stable Diffusion and Stable Diffusion Video, or you can add your own module to run on the Lilypad Network. Using Lilypad's distributed compute nodes, you can build and run your own containerized workloads that require high-performance computing.

Lilypad is developing a serverless, distributed compute network that enables internet-scale data processing, AI, ML & other arbitrary computation, while unleashing idle processing power & unlocking a new marketplace for compute.

Job Modules

A Lilypad Job Module is a standard containerized (Docker) process for running compute workloads on Lilypad. Easily deploy an AI model or other compute-intensive workloads to the Lilypad decentralized GPU network. Learn more about running AI workloads via Lilypad job modules, and use our inference API or CLI to start sending job requests.

Build your own Lilypad job modules either from an empty project or with our create-lilypad-module package.

Resource Providers

The Lilypad Network aggregates idle compute (CPU and GPU) capacity from providers around the world. Compute providers on the network are referred to as Resource Providers.

If you're interested in becoming a Lilypad Resource Provider, first make sure that you meet our minimum hardware requirements, then register for the Lilypad Resource Provider Beta Program. Learn more about running a Resource Provider on Lilypad here!

Earn Lilybit Rewards

There are many ways to earn Lilybit rewards for participating in the Lilypad economy! Resource Providers earn rewards for running compute jobs and the community earns rewards for building on the network and supporting the ecosystem! Find all rewards opportunities below:

  • Apply to the RP Beta program and earn rewards as a Resource Provider

  • Join the Lilypad Ambassador Program by submitting info here

  • Add a model to run on Lilypad or build a project listed in the Bounty Board

  • Complete open source community issues

Roadmap

View the Lilypad Roadmap.

Join the Community & Chat with Us

Join our Discord and start contributing!

CLI

Get started with Lilypad

Lilypad allows you to run on-demand compute jobs on the network using the Lilypad CLI, enabling AI inference and high-performance workloads without managing infrastructure.

Overview

This guide will take you through:

  • Installing the Lilypad CLI

  • Setting up your wallet

  • Running a Cowsay job on Lilypad Network, the Lilypad module equivalent to Hello, World!

Setting Up Your Wallet

Configure a crypto wallet to receive testnet tokens used to interact with the Lilypad Network

The wallet you use for your account must have both ETH (to run smart contracts on Ethereum) and Lilypad (LP) tokens in order to pay for jobs (or receive funds for jobs) on the network.

Create a MetaMask wallet

Both Resource Providers (GPU compute nodes) and those looking to run jobs on the Lilypad network need to set up a MetaMask account in order to run jobs on Lilypad. The public key of your wallet address is how you are identified on the network, and is how you can look up the transactions you make on the Arbitrum Sepolia blockchain explorer. You need one account for running jobs, and a separate account for each GPU you want to set up on the network.

End users of Lilypad can decide which crypto wallet they would like to use. In this guide, we advise using the MetaMask crypto wallet: install the MetaMask Extension to get started.

Connect to the Arbitrum Sepolia Testnet in MetaMask.

The Lilypad Testnet (IncentiveNet) is currently running on the Arbitrum L2 network built on Ethereum.

In order to change to the Arbitrum network in the wallet, open MetaMask and click the network button in the top left of the menu bar:

Then select "Add network":

Next, select "Add a network manually":

Input the required Arbitrum Sepolia Testnet network info, and then "Save":

  • Network name: Arbitrum Sepolia

  • New RPC URL: https://sepolia-rollup.arbitrum.io/rpc

  • Chain ID: 421614

  • Currency symbol: ETH

  • Block explorer URL (optional): https://sepolia.arbiscan.io

Network info is referenced directly from the Arbitrum Sepolia documentation.

Import the Testnet LP token

The wallet is now set up and will display an ETH (Arbitrum Sepolia) token balance. In order to also display the LP token balance, the LP token will need to be imported.

Select "Import tokens" from the three dot menu next to the network name:

Select "Custom token" and add the Lilypad token contract address and token symbol. Then "Save".

  • Token contract address: 0x0352485f8a3cB6d305875FaC0C40ef01e0C06535

  • Token symbol: LP

You should now see both ETH and LP listed in the wallet (initial ETH and LP balances will be 0).

Now you're ready to fund the wallet with testnet LP and ETH tokens!

Funding Your Wallet

Get Testnet Lilypad Tokens (LP) and Arbitrum Sepolia Testnet ETH

tl;dr: To obtain funds, first ensure the wallet is connected to the Arbitrum Sepolia network. Then collect LP and ETH tokens from these faucets:

  • Lilypad Testnet tokens (LP)

  • Arbitrum Sepolia ETH (3rd party faucet list)

Find out why you need tokens in the FAQs.

Faucet Guide

Get Testnet LP tokens

You must be a member of the Lilypad Discord to claim tokens. Navigate to the Lilypad Testnet faucet, then follow these steps to successfully claim your Testnet LP tokens:

  1. Authenticate with Discord.

  2. Copy your MetaMask wallet address into the input.

  3. Click "Request".

Testnet LP tokens will be sent to the wallet address provided to the faucet.

Get Arbitrum Sepolia Testnet ETH

Get Arbitrum Sepolia ETH from this list of third party faucets. Each faucet is designed differently, so follow the instructions provided.

If you need more tokens and already have Sepolia ETH, use the official Arbitrum bridge to transfer the tokens over to Arbitrum Sepolia.

View tokens

With a balance of both LP and ETH, you're ready to run jobs with the Lilypad CLI!


Usage

Instructions on how to use the Lilypad API

Once you have your API key, you can start running AI inference jobs using Lilypad's Anura API. Below is a simple "Hello World" example to get started.

Get Available Models

Before running a job, check which models are supported:

curl -X GET "https://anura-testnet.lilypad.tech/api/v1/models" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY"

Run a Job

Use the API to send a simple chat completion request using one of the available models:

curl -X POST "https://anura-testnet.lilypad.tech/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "MODEL_NAME:MODEL_VERSION",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful AI assistant"
      },
      {
        "role": "user",
        "content": "What order do frogs belong to?"
      }
    ],
    "temperature": 0.6
  }'

Replace "MODEL_NAME:MODEL_VERSION" with the model you want to use from the previous step.

Retrieve Job Results

You can check the status of your job and retrieve outputs using the job ID:

curl -X GET "https://anura-testnet.lilypad.tech/api/v1/jobs/{job_id}" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY"

This will return details on whether the job is still processing or has completed.
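If you want to wait for completion from the shell, a minimal polling sketch looks like this. Note that the "status" field name and its "completed" value are placeholders (hypothetical), not documented names; inspect a real response to confirm the exact shape:

# Poll the job endpoint until it reports completion (field names are assumptions)
JOB_ID="your_job_id"
while true; do
  STATUS=$(curl -s "https://anura-testnet.lilypad.tech/api/v1/jobs/$JOB_ID" \
    -H "Authorization: Bearer YOUR_API_KEY" | jq -r '.status')
  echo "Job status: $STATUS"
  [ "$STATUS" = "completed" ] && break
  sleep 5
done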

For more advanced usage, including streaming completions, multi-turn conversations, and additional endpoints, check out the inference API Documentation.

Setup

Instructions for getting started with the Lilypad API

To interact with Lilypad's API, you'll first need to create an account and obtain an API key for authentication. This key allows you to submit jobs and retrieve results securely.

Sign Up

Visit the Lilypad API Dashboard and sign up using an email address.

Generate API key

Next, log in to the dashboard and click "Create API key".

Now that you've created a new API key, copy it and store it securely for later use.
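One common pattern is to export the key as an environment variable so it doesn't end up hard-coded in scripts; for example (the variable name ANURA_API_KEY matches the SDK example later in these docs):

export ANURA_API_KEY="your_api_key_here"
curl -X GET "https://anura-testnet.lilypad.tech/api/v1/models" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ANURA_API_KEY"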

BUILDER-VERSE

What is it?

Builder-verse is a space for developers, labs, and builders to create powerful tools and workflows on the Lilypad Network. It's part of our mission to accelerate open source AI through decentralized compute infrastructure and accessible tooling.

It's an open, ongoing program designed to help you build with Lilypad. There are no deadlines or fixed formats. Whether you're creating modules, building agent workflows or integrating Lilypad into a larger stack, Builder-verse gives you the space, support and rewards to grow your ideas.

How to participate

Lilypad rewards contributors who help grow the network, whether you're building modules and agents or providing GPU compute. You don't need to wait for a hackathon. Submit anytime and track your impact on the leaderboard. Top contributors may also be eligible for additional support, amplification, and future grant opportunities.

Builders

  • Creating and publishing modules

  • Building agents or full-stack applications that use Lilypad modules

  • Driving usage. The more your module or agent is used, the more rewards you earn

What can I build?

  • Your own custom AI modules

  • Agent pipelines using existing modules

  • Apps using Lilypad’s CLI or API

  • Integrations with your existing tools or workflows

Tools & Guides

Looking for inspiration or guidance? Check out these resources to get you started!

  • Build a module guide - Step-by-step guide to creating your own Lilypad module

  • create-lilypad-module - CLI tool to scaffold a new module with everything you need to get started

  • Agents on Lilypad - Create composable AI workflows powered by Lilypad modules

Check out our awesome-lilypad repo for a curated collection of modules, templates, and community examples!

Get Involved

Join the Lilypad Discord and follow us on socials to stay up to date with the action! Read the full blog announcement and explore awesome-lilypad on GitHub.

Usage

Running the most important Hello World on Lilypad!

Before you run a Lilypad job, make sure you have the Lilypad CLI installed and have set a WEB3_PRIVATE_KEY env variable in your environment. Your WEB3_PRIVATE_KEY can be retrieved from the MetaMask account details menu. For more info, check out the official guide from MetaMask on how to get your private key. Be sure to keep your private key safe and never share it or store it in unsecured places to prevent unauthorized access to your funds.

Run Cowsay

cowsay is a classic CLI tool that prints messages in a speech bubble from an ASCII cow. You can run it as a job on the Lilypad network.

By default, the Lilypad CLI saves results to a temporary directory. To specify a custom directory for your job results, set the DATA_DIR environment variable before running your command:

export DATA_DIR=~/<DIRECTORY>

Run the command:

lilypad run cowsay:v0.0.4 -i Message="moo"

Wait for the compute to take place and for the results to be published:

⠀⠀⠀⠀⠀⠀⣀⣤⣤⢠⣤⣀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⢴⣿⣿⣿⣿⢸⣿⡟⠀⠀⠀⠀⠀    ██╗     ██╗██╗  ██╗   ██╗██████╗  █████╗ ██████╗
⠀⠀⣰⣿⣦⡙⢿⣿⣿⢸⡿⠀⠀⠀⠀⢀⠀    ██║     ██║██║  ╚██╗ ██╔╝██╔══██╗██╔══██╗██╔══██╗
⠀⢰⣿⣿⣿⣿⣦⡙⣿⢸⠁⢀⣠⣴⣾⣿⡆    ██║     ██║██║   ╚████╔╝ ██████╔╝███████║██║  ██║
⠀⣛⣛⣛⣛⣛⣛⣛⠈⠀⣚⣛⣛⣛⣛⣛⣛    ██║     ██║██║    ╚██╔╝  ██╔═══╝ ██╔══██║██║  ██║
⠀⢹⣿⣿⣿⣿⠟⣡⣿⢸⣮⡻⣿⣿⣿⣿⡏    ███████╗██║███████╗██║   ██║     ██║  ██║██████╔╝
⠀⠀⢻⣿⡟⣩⣾⣿⣿⢸⣿⣿⣌⠻⣿⡟⠀    ╚══════╝╚═╝╚══════╝╚═╝   ╚═╝     ╚═╝  ╚═╝╚═════╝ v2.13.0
⠀⠀⠀⠉⢾⣿⣿⣿⣿⢸⣿⣿⣿⡷⠈⠀⠀
⠀⠀⠀⠀⠀⠈⠙⠛⠛⠘⠛⠋⠁⠀ ⠀⠀⠀   Decentralized Compute Network  https://lilypad.tech

🌟  Lilypad submitting job
2025-03-05T12:56:38-06:00 WRN ../runner/work/lilypad/lilypad/cmd/lilypad/utils.go:63 > failed to get GPU info: gpuFillInfo not implemented on darwin
2025-03-05T12:56:38-06:00 INF ../runner/work/lilypad/lilypad/pkg/web3/sdk.go:209 > Connected to arbitrum-sepolia-rpc.publicnode.com
2025-03-05T12:56:38-06:00 INF ../runner/work/lilypad/lilypad/pkg/jobcreator/run.go:27 > Public Address: 0xB86bCAe21AC95BCe7a49C057dC8d911033f8CB7c
Enumerating objects: 42, done.
Counting objects: 100% (22/22), done.
Compressing objects: 100% (4/4), done.
Total 42 (delta 18), reused 19 (delta 18), pack-reused 20 (from 1)
💌  Deal agreed. Running job...
🤔  Results submitted. Awaiting verification...
🤔  Results submitted. Awaiting verification...
✅  Results accepted. Downloading result...
🆔  Data ID: QmP2SQttNC3Hrh2xpY7bNHzV2jHq7MbfLahRC46DVzn5rG

🍂 Lilypad job completed, try 👇
    open /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q
    cat /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q/stdout
    cat /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q/stderr

View your results:

cat /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q/stdout
 ________________ 
< hello, lilypad >
 ---------------- 
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Lilybit_ Rewards

Information about the Lilybit_ rewards program

There are multiple ways to participate in the Lilypad ecosystem and earn rewards for your contributions.

Contributing to the Lilypad Network will earn Lilybit_ credits, which will be redeemable for Lilypad (LP) mainnet tokens at the Token Generation Event (TGE).

Developers

  • Bounty Board

Community Members

  • Community Leaderboard

  • Ambassador Program

Resource Providers

In exchange for providing compute power to Lilypad's decentralized compute network, Resource Providers (RPs) accumulate Lilybit_ credits based on their contributions to the network.

RP incentives on Testnet will be provided via the Lilypad RP Beta Program. Any RP can submit info to join the RP beta program, although RPs with more resources will be prioritized in order to run large models on Lilypad Network.

Why Lilypad

Perform off-chain decentralized compute over data, with on-chain guarantees, and call this functionality directly from a smart contract, the CLI, or an easy-to-use abstraction layer.

The network is actively laying groundwork for multi-chain integration and the deployment of an incentivized testnet.

Lilypad has evolved from earlier versions (v0, v1 & v2), where the network served as a proof of concept for verifiable, decentralized compute directly from smart contracts. These earlier iterations established the groundwork for what is now a robust, scalable platform with expanded features and multichain support.

Bacalhau has been integral to Lilypad since its early versions (v0 and v1), serving as the backbone for verifiable off-chain compute. In these early iterations, Bacalhau was used for Proof of Concept projects, helping users execute decentralized compute jobs from smart contracts. Today, Lilypad provides distributed computational services underpinned by the Bacalhau Project. The network provides infrastructure for use cases like AI inference, ML training, DeSci and more. Lilypad strategically collaborates with decentralized infrastructure networks, such as Filecoin, to formulate a transparent, efficient, and accessible computational ecosystem.

Find Lilypad on GitHub or visit the blog.

Objective and problem statement

Lilypad aims to mitigate the challenges predominantly associated with the accessibility of high-performance computational hardware. At present, numerous barriers impede developers and organizations from smoothly integrating projects that require high-performance computing, such as AI technologies, into their applications.

Unlike conventional centralized systems, where access to powerful compute hardware is restricted and costly, Lilypad endeavors to democratize this access. Through its verifiable, trustless, and decentralized computational network, Lilypad extends unrestricted, global access to computational power. By leveraging decentralized infrastructure networks such as Filecoin, Lilypad is strategically positioned to enhance the efficiency, transparency, and accessibility of high-performance computing hardware.

Applications

The ability to perform off-chain decentralized compute over data with on-chain guarantees, and to call this functionality directly from a smart contract, CLI, API or an easy-to-use abstraction layer, opens the door to a multitude of possible applications, including:

  • Inference AI jobs

  • ML training jobs

  • Invoking & supporting generic ZK computations

  • Cross-chain interoperability complement to bridge protocols

  • Federated Learning consensus (with Bacalhau insulated jobs)

  • IoT & Sensor Data integrations

  • Providing a platform for Digital twins

  • Supply chain tracking & analysis

  • ETL & data preparation jobs

Key features

Some of the key features of Lilypad include:

  1. Verifiable Serverless Decentralized Compute Network: Lilypad is a decentralized compute network that aims to provide global, permissionless access to compute power. The Network orchestrates off-chain compute (a global GPU marketplace) and uses on-chain verification (Arbitrum L2 on Ethereum) to provide guarantees of compute success.

  2. Open Compute Network: Lilypad is an open compute network allowing users to access and run AI models/other programs in a serverless manner. Module creators and general users can access a curated set of modules or can easily create their own Lilypad module to run an AI model/other program on the network.

  3. Decentralization of Mediators: The team also aims to decentralize the mediators in the network. This means that the decision-making process and governance of the network will be distributed among multiple participants, ensuring a more decentralized and resilient system.

  4. Mainstream Web3 Application Support: Lilypad is designed to enable mainstream web3 applications to use its compute network with the API, Lilypad CLI, and Smart Contracts. It aims to make decentralized AI compute more accessible, efficient, and transparent for developers and users.

  5. Multichain Support: The Lilypad Incentivized Testnet first launched on Arbitrum in June 2024 with plans to go multi-chain in the near future. Supporting multiple blockchain networks will increase the scalability and interoperability of the network, allowing users to choose the blockchain that best suits their needs.

  6. Incentivized Test Net: The Lilypad IncentiveNet is live! The IncentiveNet program provides users with Lilybit_ rewards to participate in running nodes, testing, and improving the network. Learn more by checking out the IncentiveNet Leaderboard.


Installation

Instructions for installing the Lilypad CLI on your machine

The installation process for the Lilypad CLI involves several automated steps to configure it for your specific system. Initially, the setup script identifies your computer's architecture and operating system to ensure compatibility. It will then download the latest production build of the Lilypad CLI directly from the official GitHub repository using curl and wget.

Once the CLI tool is downloaded, the script sets the necessary permissions to make the executable file runnable. It then moves the executable to a standard location in your system's path to allow it to be run from any terminal window.

Install via officially released binaries

Lilypad offers two distinct installation options to cater to different roles within the network:

  • One for Lilypad users who wish to run compute jobs on the Lilypad Network.

  • Another for resource providers who supply the computational resources to the Lilypad Network.

Select the appropriate installation based on your role. When using the CLI, always ensure you're running the latest version of Lilypad. Note that the resource provider version of Lilypad is not supported on Darwin/macOS.

For Lilypad users running compute jobs (CPU build):

# Detect your machine's architecture and set it as $OSARCH
OSARCH=$(uname -m | awk '{if ($0 ~ /arm64|aarch64/) print "arm64"; else if ($0 ~ /x86_64|amd64/) print "amd64"; else print "unsupported_arch"}') && export OSARCH;
# Detect your operating system and set it as $OSNAME
OSNAME=$(uname -s | awk '{if ($1 == "Darwin") print "darwin"; else if ($1 == "Linux") print "linux"; else print "unsupported_os"}') && export OSNAME;
# Download the latest production build
curl https://api.github.com/repos/lilypad-tech/lilypad/releases/latest | grep "browser_download_url.*lilypad-$OSNAME-$OSARCH-cpu" | cut -d : -f 2,3 | tr -d \" | wget -i - -O lilypad

# Make Lilypad executable and install it
chmod +x lilypad
sudo mv lilypad /usr/local/bin/lilypad

For resource providers (GPU build):

# Detect your machine's architecture and set it as $OSARCH
OSARCH=$(uname -m | awk '{if ($0 ~ /arm64|aarch64/) print "arm64"; else if ($0 ~ /x86_64|amd64/) print "amd64"; else print "unsupported_arch"}') && export OSARCH;
# Detect your operating system and set it as $OSNAME
OSNAME=$(uname -s | awk '{if ($1 == "Darwin") print "darwin"; else if ($1 == "Linux") print "linux"; else print "unsupported_os"}') && export OSNAME;
# Download the latest production build
curl https://api.github.com/repos/lilypad-tech/lilypad/releases/latest | grep "browser_download_url.*lilypad-$OSNAME-$OSARCH-gpu" | cut -d : -f 2,3 | tr -d \" | wget -i - -O lilypad

# Make Lilypad executable and install it
chmod +x lilypad
sudo mv lilypad /usr/local/bin/lilypad

Set WEB3_PRIVATE_KEY

You're required to set your private key environment variable, WEB3_PRIVATE_KEY, to interact securely with the network:

export WEB3_PRIVATE_KEY=<your private key>

A WEB3_PRIVATE_KEY can be retrieved from the MetaMask account details menu. For more info, check out the official guide from MetaMask on viewing a wallet's private key. Be sure to keep your private key safe and never share it or store it in unsecured places to prevent unauthorized access to your funds. You also need a separate private key for each GPU you set up as a resource provider.

To use the Lilypad CLI, the set private key will need to hold Lilypad testnet tokens and Arbitrum Sepolia ETH. You can find those instructions in the Setting Up Your Wallet documentation.

Verify installation

To verify the installation, running lilypad in the terminal should display a list of available commands, indicating that the Lilypad CLI is ready to use:

Lilypad: <VERSION>
Commit: <COMMIT>

Usage:
  lilypad [command]

Available Commands:
  completion        Generate the autocompletion script for the specified shell
  help              Help about any command
  jobcreator        Start the lilypad job creator service.
  mediator          Start the lilypad mediator service.
  pow-signal        Send a pow signal to smart contract.
  resource-provider Start the lilypad resource-provider service.
  run               Run a job on the Lilypad network.
  solver            Start the lilypad solver service.
  version           Get the lilypad version

Flags:
  -h, --help             help for lilypad
  -n, --network string   Sets a target network configuration (default "testnet")

Use "lilypad [command] --help" for more information about a command.

That's it! You've successfully installed the Lilypad CLI on your machine! 🎉

Uninstall Lilypad

To uninstall Lilypad, you'll need to remove the lilypad binary.

The following command detects where the lilypad binary is located and removes it:

sudo rm -rf $(which lilypad)

🚨 Using sudo rm -rf can be dangerous if not used carefully. Proceed with caution.

You can verify Lilypad has been uninstalled successfully with the following command:

which lilypad

If the uninstall was successful, you should see the message lilypad not found.


Getting Started

Getting started with `create-lilypad-module`

Installation

First, you'll need to install the package:

pip install create-lilypad-module

If you've previously installed create-lilypad-module, you should upgrade it to ensure that you're using the latest version:

pip install --upgrade create-lilypad-module

Now run create-lilypad-module:

create-lilypad-module

The CLI will ask for the name of your project. Alternatively, you can run:

create-lilypad-module project_name
cd project_name

Output

project_name
├── config
│   └── constants.py
├── scripts
│   ├── docker_build.py
│   ├── download_models.py
│   └── run_module.py
├── src
│   └── run_inference.py
├── .dockerignore
├── .env
├── .gitignore
├── Dockerfile
├── lilypad_module.json.tmpl
├── README.md
└── requirements.txt
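The generated scripts map onto a typical module workflow. A plausible sequence is sketched below; the script names come from the tree above, but the exact invocations are assumptions, so check the generated README.md for the authoritative usage:

cd project_name
python scripts/download_models.py   # fetch the model weights the module packages (assumed usage)
python scripts/docker_build.py      # build the module's Docker image (assumed usage)
python scripts/run_module.py        # run the module (assumed usage)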
[Video] Ally Haire, Co-Founder of Lilypad, provides an introduction to the project and its goals: Filecoin Unleashed [July '23] - Lilypad v1 Unlocks internet-scale web3 applications.

Inference API

Anura, Lilypad's official AI inference API

Getting Started

Use Anura to start running AI inference job modules on Lilypad's decentralized compute network:

Get Started with Text Generation

  1. Find which models we support:

curl GET "https://anura-testnet.lilypad.tech/api/v1/models" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY"
  1. Choose a model, customize your request and fire away:

curl -X POST "https://anura-testnet.lilypad.tech/api/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "Accept: text/event-stream" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
  "model": "MODEL_NAME:MODEL_VERSION",
  "messages": [{
    "role": "system",
    "content": "you are a helpful AI assistant"
  },
  {
    "role": "user",
    "content": "what order do frogs belong to?"
  }],
  "stream": true,
  "temperature": 0.6
}'

Get Started with Image Generation

  1. Find which models we support:

curl GET "https://anura-testnet.lilypad.tech/api/v1/image/models" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY"
  1. Choose a model and generate your first image

curl -X POST https://anura-testnet.lilypad.tech/api/v1/image/generate \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"prompt": "A spaceship parked on a lilypad", "model": "sdxl-turbo"}' \
  --output spaceship.png

API Endpoints

API Clients

If you are using an API client such as Bruno or Postman, you can use our provided collections.

Rate limits

Currently, the rate limit for the API is set to 20 calls per second.

Get Available Models

To see which models are available:

curl -X GET "https://anura-testnet.lilypad.tech/api/v1/models" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY"

Chat Completions API

* = Required

Chat Completions

POST /api/v1/chat/completions

Note: Due to the decentralized nature of the Lilypad Network, we recommend using the streaming variant where possible at this time.

This endpoint provides both a streaming interface using Server-Sent Events (SSE) and a non-streaming interface for chat completions, compliant with the OpenAI specification. This means that you can plug and play Anura using the OpenAI SDK by simply passing the Anura URL and API key into your client like so:

import OpenAI from 'openai';

const client = new OpenAI({
    baseURL: 'https://anura-testnet.lilypad.tech/api/v1',
    apiKey: process.env.ANURA_API_KEY || '',
});

const completion = await client.chat.completions.create({
  model: 'llama3.1:8b',
  messages: [
    { role: 'system', content: 'You are a helpful AI assistant.' },
    { role: 'user', content: 'Are semicolons optional in JavaScript?' },
  ],
});

// Print the assistant's reply (a bare top-level return is invalid outside a function)
console.log(completion.choices[0].message.content);

Request Headers

  • Content-Type: application/json*

  • Accept: text/event-stream (recommended for streaming)

  • Authorization: Bearer YOUR_API_KEY*

Request Parameters

  • model* (string): Model ID used to generate the response (e.g. deepseek-r1:7b). Required.

  • messages* (array): A list of messages comprising the conversation so far. Required.

Optional Parameters and Default Values
  • frequency_penalty (default: 0): Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

  • max_tokens: The maximum number of tokens that can be generated in the chat completion.

  • presence_penalty (default: 0): Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

  • response_format

  • seed (default: null): If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.

  • stop: Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

  • stream (default: false)

  • stream_options (default: null): Options for streaming response. Only set this when you set stream: true.

  • temperature (default: 1): What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

  • tools: A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported. At the moment only a select number of models support tooling, including: llama3.1:8b, qwen2.5:7b, qwen2.5-coder:7b, phi4-mini:3.8b, mistral:7b.

  • top_p (default: 1): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
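For instance, a request body combining several of these optional parameters might look like the following (the values are illustrative, not recommendations):

{
  "model": "llama3.1:8b",
  "messages": [
    { "role": "user", "content": "write a haiku about lilypads" }
  ],
  "max_tokens": 200,
  "seed": 42,
  "stop": ["\n\n"],
  "temperature": 0.6
}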

Request Body (non-streaming)

{
  "model": "llama3.1:8b",
  "messages": [
    {
      "role": "system",
      "content": "you are a helpful AI assistant"
    },
    {
      "role": "user",
      "content": "write a haiku about lilypads"
    }
  ],
  "temperature": 0.6
}

Response Format (non-streaming)

The response is an OpenAI ChatCompletion Object with the following format:

{
    "id": "jobId-Qmds4fif8RLVKrSKfWVGHe7fDkBwizzV5omd3kPSnc8Xdf-jobState-ResultsSubmitted",
    "object": "chat.completion",
    "created": 1742509938,
    "model": "llama3.1:8b",
    "system_fingerprint": "",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Lily pads dance\nOn the water's gentle lap\nSerene beauty"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 2048,
        "completion_tokens": 184,
        "total_tokens": 2232
    }
}
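To pull just the completion text out of this response from the shell, you can pipe the call through jq using the fields documented below:

curl -s -X POST "https://anura-testnet.lilypad.tech/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "llama3.1:8b", "messages": [{"role": "user", "content": "write a haiku about lilypads"}]}' \
  | jq -r '.choices[0].message.content'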

Response Codes

  • 200 OK: Request successful, stream begins

  • 400 Bad Request: Invalid request parameters

  • 401 Unauthorized: Invalid or missing API key

  • 404 Not Found: Requested model not found

  • 500 Internal Server Error: Server error processing request

Response Object Fields

The response data contains the following fields:

  • id: A unique identifier for the chat completion

  • object: The object type

  • created: Timestamp when the response was created

  • model: The model used for generation

  • choices: The array containing the assistant's response

  • choices[0].message.role: Always "assistant" for responses

  • choices[0].message.content: The generated text content

  • choices[0].message.tool_calls: The array containing the corresponding tool response objects (only applicable if you make a tool request)

  • choices[0].finish_reason: Reason for completion (e.g., "stop", "length")

  • usage.prompt_tokens: The number of tokens used in the prompt

  • usage.completion_tokens: The number of tokens in the generated completion

  • usage.total_tokens: The sum of the prompt_tokens and the completion_tokens

Request Body (streaming)

{
  "model": "llama3.1:8b",
  "messages": [
    {
      "role": "system",
      "content": "you are a helpful AI assistant"
    },
    {
      "role": "user",
      "content": "write a haiku about lilypads"
    }
  ],
  "stream": true,
  "temperature": 0.6

Response Format (streaming)

The response is a stream of Server-Sent Events (SSE) with chunked OpenAI ChatCompletion objects with the following format:

Initial response:

data: {
    "id": "jobId-QmZXDGS7m8VuJrURqsKvByGKHCM749NMVFmEA2hH2DtDWs-jobState-DealNegotiating",
    "object": "chat.completion.chunk",
    "created": 1742425132,
    "model": "llama3.1:8b",
    "system_fingerprint": "",
    "choices": [
        {
            "index": 0,
            "delta": {
                "role": "assistant",
                "content": null
            },
            "finish_reason": null
        }
    ],
    "usage": {
        "prompt_tokens": 0,
        "completion_tokens": 0,
        "total_tokens": 0
    }
}

Processing updates:

data: {
    "id": "jobId-QmZXDGS7m8VuJrURqsKvByGKHCM749NMVFmEA2hH2DtDWs-jobState-DealAgreed",
    "object": "chat.completion.chunk",
    "created": 1742425135,
    "model": "llama3.1:8b",
    "system_fingerprint": "",
    "choices": [
        {
            "index": 0,
            "delta": {
                "role": "assistant",
                "content": null
            },
            "finish_reason": null
        }
    ],
    "usage": {
        "prompt_tokens": 0,
        "completion_tokens": 0,
        "total_tokens": 0
    }
}

Content delivery:

data: {
    "id": "jobId-QmZXDGS7m8VuJrURqsKvByGKHCM749NMVFmEA2hH2DtDWs-jobState-ResultsSubmitted",
    "object": "chat.completion.chunk",
    "created": 1742425147,
    "model": "llama3.1:8b",
    "system_fingerprint": "",
    "choices": [
        {
            "index": 0,
            "delta": {
                "role": "assistant",
                "content": "Lily pads dance\nOn the water's gentle lap\nSerene beauty"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 2048,
        "completion_tokens": 456,
        "total_tokens": 2504
    }
}

Completion marker:

data: [DONE]
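A minimal shell sketch for consuming this stream, printing only the content deltas as they arrive (assumes jq is installed):

curl -sN -X POST "https://anura-testnet.lilypad.tech/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "llama3.1:8b", "messages": [{"role": "user", "content": "write a haiku about lilypads"}], "stream": true}' \
  | while IFS= read -r line; do
      case "$line" in
        "data: [DONE]") break ;;                             # end of stream
        data:*) printf '%s' "$(printf '%s' "${line#data: }" \
          | jq -r '.choices[0].delta.content // empty')" ;;  # print each delta
      esac
    done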

Response Codes

  • 200 OK: Request successful, stream begins

  • 400 Bad Request: Invalid request parameters

  • 401 Unauthorized: Invalid or missing API key

  • 404 Not Found: Requested model not found

  • 500 Internal Server Error: Server error processing request

Response Object Fields

The delta event data contains the following fields:

  • id: A unique identifier for the chat completion

  • object: The object type

  • created: Timestamp when the response was created

  • model: The model used for generation

  • choices: The array containing the assistant's response

  • choices[0].delta.role: Always "assistant" for responses

  • choices[0].delta.content: The generated text content

  • choices[0].delta.tool_calls: The array containing the corresponding tool response objects (only applicable if you make a tool request)

  • choices[0].finish_reason: Reason for completion (e.g., "stop", "length")

  • usage.prompt_tokens: The number of tokens used in the prompt

  • usage.completion_tokens: The number of tokens in the generated completion

  • usage.total_tokens: The sum of the prompt_tokens and the completion_tokens

Conversation Context

The API supports multi-turn conversations by including previous messages in the request:

{
    "model": "llama2:7b",
    "messages": [
        {
            "role": "user",
            "content": "write a haiku about lilypads"
        },
        {
            "role": "assistant",
            "content": "Lily pads dance\nOn the water's gentle lap\nSerene beauty"
        },
        {
            "role": "user",
            "content": "Now write one about frogs"
        }
    ],
    "temperature": 0.6
}

This allows for contextual follow-up questions and maintaining conversation history.
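To send the conversation above from the shell, save the JSON body to a file (say, conversation.json) and post it:

curl -X POST "https://anura-testnet.lilypad.tech/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d @conversation.json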

Tooling calls

The Anura chat completions endpoint supports requests with tooling, allowing for function calling through many popular AI frameworks and SDKs.

At the moment only a select number of models support tooling, including:

  • llama3.1:8b

  • qwen2.5:7b

  • qwen2.5-coder:7b

  • phi4-mini:3.8b

  • mistral:7b

Below is a sample request and response:

Request:

{
    "model": "mistral:7b",
    "messages": [
        {
            "role": "system",
            "content": "you are a helpful AI assistant"
        },
        {
            "role": "user",
            "content": "What's the weather in Tokyo?"
        }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {
                            "type": "string",
                            "description": "The name of the city"
                        }
                    },
                    "required": [
                        "city"
                    ]
                }
            }
        }
    ],
    "temperature": 0.6,
    "stream": false
}

Response:

{
    "id": "jobId-QmTm3E4oEu4TYp1FLykHdnrkPyX6cLz2UUYS45YrmrzqdN-jobState-ResultsSubmitted",
    "object": "chat.completion",
    "created": 1742790608,
    "model": "mistral:7b",
    "system_fingerprint": "",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "",
                "tool_calls": [
                    {
                        "id": "call_syyia0kt",
                        "index": 0,
                        "type": "function",
                        "function": {
                            "name": "get_current_weather",
                            "arguments": "{\"city\":\"Tokyo\"}"
                        }
                    }
                ]
            },
            "finish_reason": "tool_calls"
        }
    ],
    "usage": {
        "prompt_tokens": 88,
        "completion_tokens": 22,
        "total_tokens": 110
    }
}
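The docs above stop at the tool call itself. Because the endpoint is OpenAI-compliant, feeding your function's output back would typically follow the OpenAI convention: append the assistant's tool_calls message plus a role "tool" message carrying the result, then re-send the conversation. A sketch, in which the weather payload is hypothetical:

{
    "model": "mistral:7b",
    "messages": [
        { "role": "user", "content": "What's the weather in Tokyo?" },
        {
            "role": "assistant",
            "content": "",
            "tool_calls": [
                {
                    "id": "call_syyia0kt",
                    "type": "function",
                    "function": { "name": "get_current_weather", "arguments": "{\"city\":\"Tokyo\"}" }
                }
            ]
        },
        {
            "role": "tool",
            "tool_call_id": "call_syyia0kt",
            "content": "{\"temp_c\": 18, \"conditions\": \"cloudy\"}"
        }
    ],
    "stream": false
}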

Vision Support

The chat completions API also supports vision requests, allowing for image-to-text queries against a base64-encoded image. You can ask an LLM what an image shows or about particular details in it. Currently vision is only supported via the following models (more coming soon):

  • llava:7b

  • gemma3:4b

Additionally, the vision capability is limited by the following constraints:

  • Images must only be base64 encoded (you cannot pass a link to an image at this time)

  • Maximum image size is 512px x 512px

  • Support for JPEG or PNG format
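The image_url value is a data URI containing the base64-encoded image, as in the request below. A quick way to produce it from the shell (assuming the image already fits the 512px x 512px limit):

# GNU coreutils (Linux); -w 0 disables line wrapping
IMG_B64=$(base64 -w 0 image.png)
# macOS variant:
# IMG_B64=$(base64 -i image.png)
echo "data:image/png;base64,$IMG_B64"   # use this as the "image_url" value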

Request:

{
    "model": "llava:7b",
    "messages": [
        {
            "role": "system",
            "content": "you are a helpful AI assistant"
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What's in this image?"
                },
                {
                    "type": "image_url",
                    "image_url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMkAAACqCAYAAAAHpmvxAAAKpmlDQ1BJQ0MgUHJvZmlsZQAASImVlgdQU+kWgP97bzoJLXQpoTdBikAAKSG00KWDqIQkQCghBIKKDZHFFVwLIiJgQ1eago0ia0EsWFgUFEVFF2RRUNfFgg2Vd4Eh7O6b9968M3Nyvjk5/yn/3H/mAEAhs4XCFFgWgFRBpijYy40WGRVNw48CDECAIqACOzYnQ8gICvIDqMzav8uHewCasnfMpnL9+///VeS4vAwOAFAQynHcDE4qyqdRHeEIRZkAIBWoX3dFpnCK21BWEKENonx3ihNmeGSK42b463RMaDATAAw6FYHMZosSACCro35aFicBzUNehLKFgMsXoDzVr3NqahoX5aMoG6ExQpSn8tPj/pIn4W854yQ52ewECc/MMi0Ed36GMIW96v+8jv8tqSni2RoGqJITRd7BqEX7gvqS03wlLIgLCJxlPnc6fpoTxd5hs8zJYEbPMpft7is5mxLgN8vxfE+WJE8mK3SWeRkeIbMsSguW1IoXMRmzzBbN1RUnh0n8iTyWJH92YmjELGfxwwNmOSM5xHcuhinxi8TBkv55Ai+3ubqektlTM/4yL58lOZuZGOotmZ091z9PwJjLmREp6Y3Lc/eYiwmTxAsz3SS1hClBknheipfEn5EVIjmbiX6Qc2eDJHeYxPYJmmUQBLyANbACYYAJ3IF/Jm9l5tQQzDThKhE/ITGTxkBfF4/GEnDM59OsLKxsAJh6qzOfwru+6TcIKRHmfFufAuByAHUmzfmYWgCcQO+A9GbOp9cLgIwKAFd7OGJR1owPM/WDBSQgAxSAKtAEusAImKHd2QJH4Ao8gA8IBKEgCiwDHJAIUoEIrABrwAaQDwrBdrALlIH94BCoBsfASdAMzoKL4Cq4CW6DXvAIDIBh8BKMgQ9gAoIgPESBqJAqpAXpQ6aQFUSHnCEPyA8KhqKgWCgBEkBiaA20ESqEiqAy6CBUA52AzkAXoetQN/QAGoRGobfQFxiBybACrAEbwAtgOsyAfeFQeCmcAKfD2XAevBUuhSvho3ATfBG+CffCA/BLeBwBiBSihGgjZggdYSKBSDQSj4iQdUgBUoJUIvVIK9KB3EEGkFfIZwwOQ8XQMGYYR4w3JgzDwaRj1mG2YMow1ZgmzGXMHcwgZgzzHUvBqmNNsQ5YFjYSm4Bdgc3HlmCPYBuxV7C92GHsBxwOp4QzxNnhvHFRuCTcatwW3F5cA64N140bwo3j8XhVvCneCR+IZ+Mz8fn4Pfij+Av4Hvww/hNBiqBFsCJ4EqIJAkIuoYRQSzhP6CE8J0wQZYn6RAdiIJFLXEXcRjxMbCXeIg4TJ0hyJEOSEymUlETaQCol1ZOukPpJ76SkpHSk7KUWS/GlcqRKpY5LXZMalPpMliebkJnkGLKYvJVcRW4jPyC/o1AoBhRXSjQlk7KVUkO5RHlC+SRNlTaXZklzpddLl0s3SfdIv5YhyujLMGSWyWTLlMickrkl80qWKGsgy5Rly66TLZc9I3tfdlyOKmcpFyiXKrdFrlbuutyIPF7eQN5DniufJ39I/pL8EBWh6lKZVA51I/Uw9Qp1WAGnYKjAUkhSKFQ4ptClMKYor7hQMVxxpWK54jnFASVEyUCJpZSitE3ppNI9pS/KGsoMZZ7yZuV65R7ljyrzVFxVeCoFKg0qvSpfVGmqHqrJqjtUm1Ufq2HUTNQWq61Q26d2Re3VPIV5jvM48wrmnZz3UB1WN1EPVl+tfki9U31cQ1PDS0OosUfjksYrTSVNV80kzWLN85qjWlQtZy2+VrHWBa0XNEUag5ZCK6Vdpo1pq2t7a4u1D2p3aU/oGOqE6eTqNOg81iXp0nXjdYt123XH9LT0/PXW6NXpPdQn6tP1E/V363fofzQwNIgw2GTQbDBiqGLIMsw2rDPsN6IYuRilG1Ua3TXGGdONk433Gt82gU1sTBJNyk1umcKmtqZ8072m3fOx8+3nC+ZXzr9vRjZjmGWZ1ZkNmiuZ+5nnmjebv16gtyB6wY4FHQu+W9hYpFgctnhkKW/pY5lr2Wr51srEimNVbnXXmmLtab3eusX6zULThbyF+xb22VBt/G022bTbfLO1sxXZ1tuO2unZxdpV2N2nK9CD6Fvo1+yx9m726+3P2n92sHXIdDjp8KejmWOyY63jyCLDRbxFhxcNOek4sZ0OOg0405xjnQ84D7hou7BdKl2euuq6cl2PuD5nGDOSGEcZr90s3ERujW4fmQ7Mtcw2d8Tdy73AvctD3iPMo8zjiaeOZ4JnneeYl43Xaq82b6y3r/cO7/ssDRaHVcMa87HzWetz2ZfsG+Jb5vvUz8RP5NfqD/v7+O/07w/QDxAENAeCQFbgzsDHQYZB6UG/LMYtDlpcvvhZsGXwmuCOEGrI8pDakA+hbqHbQh+FGYWJw9rDZcJjwmvCP0a4RxRFDEQuiFwbeTNKLYof1RKNjw6PPhI9vsRjya4lwzE2Mfkx95YaLl259PoytWUpy84tl1nOXn4qFhsbEVsb+5UdyK5kj8ex4irixjhMzm7OS64rt5g7ynPiFfGexzvFF8WPJDgl7EwYTXRJLEl8xWfyy/hvkryT9id9TA5MrkqeTIlIaUglpMamnhHIC5IFl9M001amdQtNhfnCgXSH9F3pYyJf0ZEMKGNpRkumAroUdYqNxD+IB7Ocs8qzPq0IX3FqpdxKwcrOVSarNq96nu2Z/fNqzGrO6vY12ms2rBlcy1h7cB20Lm5d+3rd9Xnrh3O8cqo3kDYkb/g11yK3KPf9xoiNrXkaeTl5Qz94/VCXL50vyr+/yXHT/h8xP/J/7NpsvXnP5u8F3IIbhRaFJYVft3C23PjJ8qfSnya3xm/t2ma7bd923HbB9ns7XHZUF8kVZRcN7fTf2VRMKy4ofr9r+a7rJQtL9u8m7RbvHij1K23Zo7dn+56vZYllveVu5Q0V6hWbKz7u5e7t2ee6r36/xv7C/V8O8A/0HfQ62FRpUFlyCHco69Czw+GHO36m/1xzRO1I4ZFvVYKqgerg6ss1djU1teq12+rgOnHd6NGYo7ePuR9rqTerP9ig1FB4HBwXH39xIvbEvZO+J9tP0U/Vn9Y/XdFIbSxogppWNY01JzYPtES1dJ/xOdPe6tja+Iv5L1Vntc+Wn1M8t+086Xze+ckL2RfG24Rtry4mXBxqX97+6FLkpbuXF1/uuuJ75dpVz6uXOhgdF645XTt73eH6mRv0G803bW82ddp0Nv5q82tjl21X0y27Wy237W+3di/qPt/j0nPxjvudq3dZd2/2BvR23wu713c/5v5AH7dv5EHKgzcPsx5OPMrpx/YXPJZ9XPJE/Unlb8a/NQzYDpwbdB/sfBry9NEQZ+jl7xm/fx3Oe0Z5VvJc63nNiNXI2VHP0dsvlrwYfil8OfEq/w+5PypeG70+/afrn51jkWPDb0RvJt9ueaf6rur9wvft40HjTz6kfpj
4WPBJ9VP1Z/rnji8RX55PrPiK/1r6zfhb63ff7/2TqZOTQraIPb0KIKjC8fEAvK0CgBIFAPU2uj8smdmlpwWa2f+nCfwnntm3p8UWgAbUTK1lzjkANKKq1waAtCsAQaiGugLY2lqis3vv9I4+JbhTAFi9RTcH6Onx/hzwD5nZ3//S9z8tkGT9m/0XpV8Ef9hUGu0AAABWZVhJZk1NACoAAAAIAAGHaQAEAAAAAQAAABoAAAAAAAOShgAHAAAAEgAAAESgAgAEAAAAAQAAAMmgAwAEAAAAAQAAAKoAAAAAQVNDSUkAAABTY3JlZW5zaG90bt216gAAAdZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IlhNUCBDb3JlIDYuMC4wIj4KICAgPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4KICAgICAgPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIKICAgICAgICAgICAgeG1sbnM6ZXhpZj0iaHR0cDovL25zLmFkb2JlLmNvbS9leGlmLzEuMC8iPgogICAgICAgICA8ZXhpZjpQaXhlbFlEaW1lbnNpb24+MTcwPC9leGlmOlBpeGVsWURpbWVuc2lvbj4KICAgICAgICAgPGV4aWY6UGl4ZWxYRGltZW5zaW9uPjIwMTwvZXhpZjpQaXhlbFhEaW1lbnNpb24+CiAgICAgICAgIDxleGlmOlVzZXJDb21tZW50PlNjcmVlbnNob3Q8L2V4aWY6VXNlckNvbW1lbnQ+CiAgICAgIDwvcmRmOkRlc2NyaXB0aW9uPgogICA8L3JkZjpSREY+CjwveDp4bXBtZXRhPgr55CqOAABAAElEQVR4Aey92ZNk2XnYd6oys7Iya9+7et+mp3tmMBgMtsFKiSQomhaDlkzKlkIvluwI/Rn2ix8c9qNfFOFwkGGHFsoiKZkUSVAkMQRAALPv09P7vtS+ZlVlVpV/v+/cW10zmHHIBDDdjfDtvnVv3uXcc77z7d93zuk69/Tnd9P/vz2aEOjqSsndrbs7Trv83V2N611cS5Vq6qpwr1KJZ7qr/PY6e7W3kd/lnZ3tDu938Wot1Zp9qTE8lCy5vd5K6/Pz3G+zb6e0s5N22dPubtrpdNJuXNvlWJ7n+z7Ttcs5z5XPx8d89+dsA9qP+VYi0d+0GXTyI7Xtb09xHoTBeSYQjgVBlETSXYVAuNbN7r3unh7Oa2n05Mm0DZJ3QPb1e3chnkqqDw6kkSPHUv/4OESwnZZu3kpbGxtpe2szdUkUO9t8B6KDyDqt9SASCWWnXUDJOpV13KUuEEpsBXHseu9Rg+lP2MGPP5HsB0DZefuvfdz5o96J+9oRhMDvUjpwEgTRVUgMjyE9JBwJBElRHxxMtf6B9Nn/8r9Ka8uraene/XT5L/8k9fT1pdGjx9Nnfu3XwPOetHp/Nq3NLKcEUYHZCCXQAaTv6e9PNZ5dvXU7iGd7ayt1U4ddCCckC8TF41mC8KZVktWERBHe1v9Rh7H1/E/cHi8i2Yc80T5/77tWctz/17bbefs6UHVhb/vIvb3rf9OTfXXbK+Ljru3dtDn72sR5EEcqCKNQo7prqFQSiZKDY6Xemyo99VTta6YeVKmJM+fSxOlzqTk1nerj26n/wFTqbKymofGJNMC+3e5O9y9+kGYuXUzvf+/F1F3bTfVmI/Whgk0//WxW5yj7zvtXUr2BejbYSJ3V1T0i2elC4lDnLqRHVsesN1ckoo/Cc1/bHtfTx4NI9iPWvvM9hAKv6KZAqBLl497H9cqOT6BHFwQRxRUvle9+3Gt/o2v76lq+/2P1Kurut723d993ZdFeA/m6sD2UFJ7vEYkEAnEMHT2eeoeHU31oGGnQk8ZOnE4jXKvU6txPqVrrSVOnz6Y+7tcbvamztZuW7txLC7fvpLWFxdQ3OoAE6uH9UYjrybS2uJxWF5fSxFneGR4E93fT7LvvpPbaatgp1IpNOyWrWruqXNbTyxwDttbfbT/R5CuP3d9Hn0hKYAva8lykKc67ukUmO4aj1/c9Vz7jpUwUnNDhdjBvFB3oMd8PzujDH7ft7+yyHh/3HNf2vls85xf3ru0/L/CIu3E/PuE7xXvaELYJXaeQHNko71KS1GoQRC1Vm800+dTTaeT4qdQ3MYkd0U49jWaqQjy7bZF4N1SlyWMnMcwzYlerO6m1tJLW5hdDPetG9ar19qfm6FQQ2Mbb76KKXUlPfutX08DEMEVsp9V7d0Kd6my0kqVKKLtd2CrBbApiwZ6JPuDaz5Pq9WgTSYmMJeJwFNkC4eSykoRIxLXuEpnier5VIhu/MDAzMcjZgvOJuXudCdF43r1D5wfF4OSRkPZtvl9u1ofNKyXyF5e84q3YwoiVAPgVfx/cyvX2qVxIVDW3Kz/thS4M8pAiEr/qlTvn3UgG7ZDe4ZE0fvoMnqqJVOtpQv/dqX8Izq/Hi4KzIb5DW9ij/nq5MOQ3NtPh555NE0+cSmd/6Vtpt9NOdVS15vhowGbsOIY9BLeL8d9G6uy0t1HdjlMGcF5eSu3VlbRD/Xa7VbtsgDDlyN4FEwpb3v7QmN8PN9v7GG6PJpE8wDixMCOiQPe6u+eiHgQR7s/yGtfDM8MtEbREYPul7MToSHXp6FRuyP2KDqbnfTJ2dewgLH792EbZgdvc2P+NqJvXrFu57dWjuFbe2rvug1z0enFN8lRCBlFwzfZ2IznCBtE451zDugGRjBw+mpoDQ6hUdQjcf7wHlnZB5OHSDdzNCGy5bQhkZXY2bbU2UqcNknOrik2ju1j1bGt9M213QG6kmFKjs4TnC69Xc3Qs7Wytpw3czTt4wrRFhKHEKhGGbWJhnFtlN379XGyPHpGUEPbIXnJXOWiGvgiU1ZC4V+jp8aznviex0D3xDkc7M0uSAllKDmenbvOkxzgvOZ/dm1WIT+ply4+trGN8kStR7/JWUf/8YFF/n8nP7RFYvJNL9NreDgJGW7wGYYSEKIikd2gEbn8gTZ18EuKpxXMVYQTyRtUpbqewGYoPBhy2WmtpBrfvNh6q7XY7ba6spJHpqVRt9/B7O22ub4DnqFHsreXFtD43lzqbm2ns6GFNItSy3rS1tBjvyqqUUF18M4iF36FmIU32VFfb5iZ8H9Pt0SMSAVkgTalqyAEDWUQCCUBisMc8L1SQeNbrXvO5snN41/5B0Ygu2oXDGgvwoh0aneoDnEfwrCAgnTXcjHf848+iuznfh/xeFImLOvts+e391/afx/3yHZ+3vkXppY0VErEMFIZUgUhsawQOq2n67DNpEO/VFgjcWroH119PO0gJJUC1B3uFPeIm/Ja4akiL1upyWsAdfP31N1HP6uy11NfsT4Ojo8RWeoNItrFjjK0Io0qtl+t4ttZa6ZV/+2+DWKo91TR0YDKkzc7mhrXPMLQFwiDg28kwsV3COR
6Kv549dtujRSQFogWwC2QPhPS8JACJw/NyDyKphJdn4NgxVIEtvDeoDK0WQbOjqa9/MPWCEJffeYtOhkvKadtb9F1WuUJtkDDYIxZQEInElJWq3KdlF0sT+Zwz/1tnN1U/T/OfB/f23S/oYE/C+WxJPLmcB78tL4KEBRyqGOP1AeIfGuX1JnGQQRhEDQTVMO9OKzOz6RZt3CIAKKH5blZFZRrABzh1gE1rdT0t3r2fenpxGSOdGqhZa6sLqG7jqX/qINJpOsoMZoKhr/esp0+jfixtLC0E8TUJRNZ6KmkLSbNx/37aqUAUwFXmtMu3Q2213WXbZUKP8fZoEEkBzJAAAlPkAeD8CSQqCSLkPcTxIf0cRNE9avBs6jOfTVura+zLaX12Jh35zHNpeGIiNeuNNLu4mNbo5O32ZuosL0MkdBwEofQAK7KOze+SaD5qjxSkEF0tHpSEUiJ3IIR4Yb1jsw3FW4Ew+Xqc0i7f910RWAOdX/F8fp/zEgaU4bX+sfHUNz6VegdHIJah1AvBVOvYITxnxHzhzu10/vt/hUt3lrIhcCROODOEYdQjE3EVeFUgDjk8JadKFyjw/ltpYPJgOnDuMxj0zYCnMNc20aXcg7Tp5ZudrQ1+G7UfxG08lNbu3YNQlnEQEI6XWVmm7WfHIca58CwIxIY/psTyaBCJSAUQA3EAcCAenavqINBDpcLtGeqDncEehKIqgcpQJYDWN3kgHfvaN3geQxKOuUEaxpGDk2l1YSVdfP9iOvq1v4O+vZqWZ26ne2++nCoiH19sY8Du0Pk76OclgUg0SpwCxTNBWcdiUxWyvm4ZATn3WvE71z8juoVkgshIWraRF0J9qjT6IsItx3bPREJBINQ2yF9Ks+e++Yup2j+UfN72V/h+1ABJ8v6Lf5luIEXa7Y3UO0S0HLuhyt6LtDECr7oVRjbv9A8PpBPPPJEuvvJ6mr12Pd18+z2au53uXXg3XX7pxXSaAOPkmafZnwpDfpsyN9dWcAnfxahfpdq76dYbb6Zzv/Kr5ID1U8+dNP/eu6kDg9EOCdVRQpEgIBThJDyCKQmgx3B7+EQCAB8gHIgkF6L7g8OGSsW1iDAjMYrj+FPPAOqu8LoMHZyGw3WIOoNgdFK1BnckdtA8MJ3Wt3bSZndPaqBCtPHUtEE6JUcv8YAqqRjGGVrECjZWSPBDTanswhFJ5AvVS6nyCZwv5zYFiu550ayvhBt5U4WLNscKMpJUkWbVujlV1bRw80Ygb0/fQBo8cCTqXpEJ8P42UlBpuLmyRLyjNw2OESVHGtaHxo0KRrt34fBaSxsry+n+lUtp5saVgM2Rz30RA/tI6h8dTs3h/rDhM5J2g/DVVKXKbVy46zN30ursQtqEQdQJFnY2t6LNuxjz9z94M20sz6fFm1fS8Rd+IeAlI9Go31pbgzAaaXD6YGpvtiHqZho++URavnEdpoKnrFC5glAgXh3qmVI4PMbbo0EkAHBPvQjOk4kldGqlhsQiEsltMUAHDx0JlUAde/TEieg8EVLeypOpBiLW+4bS4uxc2sQQrcLx1pfvoK+vBULUGv2hqlSquDxbfBvjtwsE6SLOEJwcXSEb9FkK2L9BtNYNIhaZJVI3JY7181q4UUMiQIAcTSD0Od+pQCQiva+tEtGuYhTXB4bTwNTRUI2i/l0G+Uj/6OxiLG+moZHJNDBGgI/2KkF8RnerRNzeakdA8PbF86Fu9YC840bajxxOQ1OoZiODMAWI3mpSb6PuNcrfXFhIl27dpP0SbSONwGRWZufDNby9u0GeFwmPMIzVuftp/NTZaHdWmXD90p5tVCttFBmIUrs5MhGSvE3ayp4UKRhfSJDiPBjhJzAd4fgobw+fSIBOqZ4IKJFRYIdKhSqlbhzcGaRrEOAaPHIiNScOoZ+Pp8ZQTvfWCK1DPGAPrBIER0WQWOZv3UoryytpmyDY4s3LcEi46JIcuo/Oxs2JvtxA167UGmm3iSG6tpxqNb5HJLlLiSKCg2SqSI3RSWITGM7NgTR88DAIs4MEa6el+zOpiYrTQE8fIK1DRJJYNZzbqDEmAtie1VnUFdI6tiHI41/+BeogoeI94h56TGotzKVbr7+c7l28FERXBxG/8MJvQnzklSD12qiDocJQGe2QOxfeT/euXEgXXvrr9NS3fgkJchx4DKbX/v0fhsQUjqMHD4bBrldss7UJYZGoiK0yOHUsffWFr0E4VVJQFtJL/9fvYfjPpPXFubR69x7Et5DWqc+rv/vb2ChfSsOHTwDvgyFNUlc7ou9DwgD4rS+uwcCwjahnV1drry+j3TQ9ACgQy/PHkFAeHpHsB5z2Qej5IIwEQgdHtJnOzfZItjvqEEXfxHggkZirB6bqMzJXkYit2yAgN7u5t7WO2mK+EWK/2ttMDThfpd6PHUJqxTYeLvaVmbl4j5eQPoOpp4LkoT71GhIDwhQBKgTuRkn7kIPqXl2D228TiOuqddLBpybAYVQZiLlK+dZpG+LooKvXwiCnnkibVaSazKCCq7WXdPU6dlQvnqUm5bVmIep6JU0eOpTGJye5P4wadiB/vwumQZkqL0GtSJGZW9fSpVd/kFaJV0w+eTZVSClpEd9YhQE8/7e/kaanJtLICAyEb0XUn7btUB9du21U0xYBw/mFJaQsUgE17vnf+HswkZvpyksvpXs8t7mymtora8RI7qa7771OntfNdPTz30yHnj4HnbbT4v2baQWvlvDQoO+BObRRxdrYLqEu0849KSKHecy3h0ckAk5CKXcQMxMIRBKxAAhFKQIRqLo0zWCdnMarMha/dXuKPGAl3LuTOqo9YFEgE4i0hcqlhOlt96W29zfR96Gm7ipPbK+B5FthiFZBpFovqgjf6MGu0S7RKK5jP/QMj2Is92MPDKXBg4ei48XUDtJKZEMApb5BCIe6V/w20gUZAjqjv1OHSl3wgtwE9fpGRmgeqqDfgBH0oJ7VkVpV1JdtkGsHVbAKUTnOozmKujQxlTYjAwCSp52CSSkl4d/94L20NHOPL6Y0OX0oDHXB1+hrpKfPnUmT2CQDA/1pcWOLT1NfYBRZujxvWZ31duoFfmvYaKsU0kdKitshEH19aQ5GU8GZAcxQu9ZmcfECv03cxCOHJpG6fWl9dRHCx2ajjcGolIhIpawRUJCVlev8nGwPl0gEZRCJBl4hQSCMiIMoIUAkubnu3cMYpcMMFqqTzt1GdVCPMdIbI+roLFWRbmwJR9qp7rTR96dOnEgjcMuVheV0+dXXkCotxkd0Qu1pzd8Pznfkuc+n0QPECECq7q1t1IdFBilRLhQwQdp439gw7tdhdPQ53KsLRKFXsBFwFmhu8J3eXtzP6ucgzQbGbH8fah9Ejq8Mm6MZiKwXdPLksUC6Lbj00p3FNABS9fDewrWLaeb65UB+y5siEVFv3RZ2ScRttI94bhfEXZufI3P3Znr53/8u6t84KtaxdJhM3Y3NVjowMZK+dOZ4enZ8LK1tbqfF5Y10C+/VBirWJrtl1CDaUYj66cOH09emT6cN4Pf63cX0o8vXSGw8kU5+4XnspcU0d
/UqZk9XWrl9HVgRXd9cTzPvv4rBjro7PY30egpYblBmN/YLHjjoIfoRiEi4buUx/3q8/z4cIslQDciFC7X8rX6uqlVKEPTcPrw7B597Hr34CGnbw3Bb7JStlbR4+1ZaunsnHThzhtylEWyCIToNYukfpoNyIM4AWg1J0Y8rdBFde3lmHhtiLnt45mZRtbATSNabnz6QBuHgY4eOpUFSPYZHxtLgseNp/CBlKZVQz7qIZtfgwruVrnT33fdTP2pRz0AfwbmutOH3aINy7NqFS2ljdQnkWoHYqI8b7TKuUccWaqBKjWErdCER1pfmqcOdVMH+aeCy7cHF240K00VsIkqT+NhlIao8b/35f0i3iGloOxx85jNp9PAh7Jru9JXTp9NpCPmJ8RFNhnR/djHduzub/rf/8X+I/CzTTzT4Gzgwzj79dDr5T/9bJF01DfbW0tcPkB7f6ElXFlfSezfvpa/8o3+MK/jldPXlV9IlpFskM3a20n1soAqBTBMqD3/uS8AFRtIh5rS+nJavXkhbEBcdEM39efvzcIikgGKWIoVoBsnUZ2OHSCSUHhCrdwTVYXyScdgLaZWo8vrCfGrN3Iezz8BZ59MAakyNQRN9BNkQ/rA0ERa04lRVQDWkQllDpF6YzZrTw1GvRFjyn+q4YRsQURPkbPT2pTpSq1onko3xvzZ7L0bmbRCldoyFQTs9auYytTB4O3Bw86NM09cA7652pw2QX+KoGR0nUJfb041K1ICT41HiKOKr7kmAJifWYAbmQHVjW4QnT5CIcBC9GbrbO510/a1X0/zNa2l1fgbnQRO1x2j7Tlq+cyc1jkwjmTDwd7rDm6fU0ZvVQmpZZ3ef1dW7jC2ysbmT1jdUTysJ+kjjqEutRictDA2kOaSpA7Omz51NN996GZhZT2wsgrCrqF49wGcDabpDncLty32H/u7y2y2qvu+Y2/F4E89DJZI8Hhri0AaRQDyil0sgRtH7UB0GSZUYgtNfeeWVdOPNt9K111/HZXkH71RGsoMnz6R6Tx+j7kiniG4CsTTeKc8LOeUdNebwdNgGczfvhDrT7JqKyPUQ35gkpjI8MoqqhBGKGqO0WLt3LS2uL4anSCQz8c9YgR6dXp5ZB1klELNqszsUNys2kCqINsXoiWORpRteOiRaBNYgWtPXtxnotLlG+gjvjx09EanrZtpuSii03U3EFOU6+Kg3Ibw3v/1HuLHnA1mHyPyt4IjQDXz3rbdS99NPpu5tPGhrKYhkp4r0HBvFOdAETsQvIBAnclA6ua3jFl9FJdMI3253pbEBVD/iJUPNevoX330j6n8K1/Dbf/JH4Y2ToHcgsHUYk0S+CvOQWCgYaWIQlvKVskHY8Qn+8C1+l0RTXn0cjw+HSJQahYoVxFFIkTDcQRJjDkqQqVNPgOQpvUxy3d1Ll1GxbqY1/PcDGKbDuIMHxsZAWAYFwcHb6+u8V1c5zuzMcQ0azSC8QTKHoTYxUAfGRgjCraV+YhDNZm86ceRgqkOQvY1GGj52NK3ibt2i87epQ2VkGo+YLmi8VxjyIorR/Df/+A8zYtDj5lINYdMMTE6lIYx7DWU5vRLKOIKbiAK2phppGhrrdQYyVccHuYYaRD7ZxatX0hbesl4ItU06TaAW0nAZiXn7/Nvp1vn3iV/cRoUjiRGkT913GZv+7QiI9uCF+1+vXkxPP/O59JWv/WI68uS51EE8bLBPnjxNsPEy0mc2vH/LSII3f/S99L/wjf/8t/5xOgl8T2BjtNaImkOMV1ETNfA3VlvYcSvp2Oe/nG69+QpesQ62zUKogzWkrcxC20/K79o1EOkxE0UmlEwwcW7b3Yr7+cfj9ffhEEkBo5JQwCYwCXVFCYIEqKAqDE1NRtqECXl3Ll5G918JTj15+mQ68+UvoBYZ36inbWyUdaPleHtGkDqWWUoRz8FZfhvTgFg4r0AQunGb/X1Ij5E0hku56rf57pqj7rANzFeqD+X8KvU166V9s13B8KfuB86eK4gc1zKqSoM4icE8pVZ9AHUNO6hi+jrkoSpiOse9d97Bi9SPtOrje9kbZsLgrXffSrMMpd2C+AyYGqwTuagqHqWVtATXXqZtVCKNYXvoyl6+exvnxTpJnGtpl7bcR/3bUToAq7+FbdIg9R2qTwO4kRdIz5GwdJGr0nl+6+L76Qff+dM0iz3U88u/mupIkzs4Jt67jD3FIKstnRs4ISaIpi+i4i3fp34VcrSU9uxKph3UL3RS6ERVC0kSNabS+4hBfqWEiWvcely3h0ckstdgsR45KXZjJOYbqRef//7LaYGM1ZXF+dDpBycn8MIcSc9+61dINK2kdaLSd6/e4AiHpVOH4eakoUZfRAfxR0IR6Trc14WqgW1ArbdRTwN4egYIAkKaBBwZx42urQNAgqiD6CJV5oaoQRCLzxhhP/Tss/FMxHOMvaAhqbMvgOwN3cYQL1+Ob3dQVTbwit146QdpiPSYIRIJu6g7lUmLGOOv/tEfUl9sFJB3A6Joh52jKqTRj+uW78pDajCF8SeeDIl1/tt/Qntb2V7h3VXUwXV0rVs3r6eTv/qtNDE+lBoVMndxRtSQkBE4VQIJB+qzCjN45+XvpzWI7MwLX8Uh0Z2uQ3jn33w7DU8fSZvASlXu0NNnI2Yj0wjGI1ALeCJSkaYQNIHQEPeUbflB4MW5v38etk+XSASyWxw9F5H22SNiG+rRFp348h/8AVyUnCqe0Tb5/N/7+2n40GFeAWNQTc5MDqeTTxxLtS9+Jr156Vp698oNOC7Rbwxm86PMa+qVqyOVRGZjGdt4kxYuvJVOfvbZNAICTeCh6odYUNLg9q1068VX0yHcr30QSZeCwK+XHc83NzDWdQWbJ2b5pm/oUDAvTBXrwLmn023sJqgpvHBNrt1+78301p/8O1C5mu41r4XDQPXFtI8Vxo0vXD0f6qWEK3EiMvxwbBVUs8A3xNeh57+anvq1v5smTp0IYr/84p+lNTj8poFRuHWoaLi4f/d/+p/DFnL6oBuvvRJeNmMrEmUuDMQF/ms4QG5d+iD98e//6zR75VJIrU3GvS9DJN1KU2JHa9MSWTPswnXqG3Wkj3q4v7O5HKri9hppNEoS4cSx/EZIEGr187B9ukSyH2JypILbBtFIOIpzuKyGez+SpKrXieDh8S9+iXEORpGH0wFUm6cmxtIIHLIPAtjcraQBcrD6eff9t19PB089mcZIwGtAIBXsC0lRjrcyAzKD1Nolg5NjICbzTkEYy3D5XVPrQZoZ1JBd7JQBiNHUk26QVEKx8x0Suwq3XcD1rKQLZKD0OgTTRwQ9PE60oQFSmeukBFvGRb0Ad58nWbAxciC8TN1VAnGUpedsY5G09l0TAykOT5hGvxIo/kfFRTLUG8oaPXYCtQ77i5vrxEv0VIUahWTL3JsyuLd482q4tlUR17VFwrDOTg45vlCX0UjYS7TnwvdeJNsYFzd16sJLt7GykOq7zJ7S1Y+BPkNbyc/CEWEyqB46E0OtYzA3ygki/giBxMWSQLyXH6Luj+f26RMJHS4C8WfvCMyL3xIJ1+kY
DenmFITAOIqjzz8Pd+6kQRIET02MptNMWNBNsGsLA3JWl+bKCgOCFtP9G1eZCAGvDnaCU+fsgPyqR6aQrIEwxi+2UQ9047aJRq92FslTWk6beKpa3J+/fS3tQCRbGNO9IHuFOEKoaxqmEIvjVGL8hJKJIKdE2MO3jIGo1nSLhEgxvWsSli7TFRBtJbxCGOq5oaAPgUc8VVut1dxe2h/c2PeReOJXSAZxKmAF05g6EFzdS5skE5axD9u3h6jcMwcs+Dd/tG9yYXB4ynYTYTXOde1u0a77F84TS8K7ZcYBRNDGfor4EkzEXDelci+B1pDGtFeiyZ43JZ4dx2YFfo63T59IBKaEUCCMhnFpEHoe0gSpoEF84oWvp76pKaLPO+nJI4fSSeyHE3DutbXtNDM3D4HMp9/+7f89zYGMCyBHE0P8NQJuuxiepnksg6CO0245DkK0K7478RffCQPejhfZNORl5/N4ifpJWBw/fhtpcwgEyY6EblRAbZJ6c4jg5RD1F4szYgQyc9rGRbwVUWgJJN+/9e4baZYyJdI2BBYIT7uNLzjJAg+S0u4MJZ4ax0B1stjYKTnsM1Po6+RyTQITXMw8J5E4+jImcpBIhClb5thZ5Qnpwr0oLD6QnxKto3gIxBlPRPSdNmrdNvEb7myS5OnAKnPiWsC3jo3miMhtpGM38ahu6mTyY29/TmWxDR+7xUc+9s5jd/HhEAlgQrKDBMLLk7xn49CgXDU1xibSARCjn70NBzPH6D4GZR9qyQpE8eqPfpRef+WldBVE3ADhzM+qokfXG3QoiLSI+zSi3nDMUDFEDRGDey3UnDyeBKTTx99W1VEVQTqhY+tVamMIKykkEFOorGvg1B5K2ohMeLbCdmRchEvzDb+1xEClVnBjcqGI64S9QUESiQU6IZxz9g6hHmpXqEYZTQ+jvURsPio89J5VGTJr0E5JUN7WuSDaxwEJJZU9eL9A4HyTGkY141iqQKpyO0yGoeNgG7VUx8FGqyftol45hKDOfeFg3Q1gqgorvSwrNALL/DnfHgqRiGx2rYiVAZ2P2YjnnA4aQoIM4DLto4OWed6EwVm4dYfI9+Lly+mN119NL3//r4KrgRaBSO0WiYh6j4aq6c67b+OhwTUaSAUCB4HAWUHQLdSuXbkyHiyj1Dtwx6gRhKKev42+7xh5RABJkdmjhSspyhIB9ZLtIRnlugXnBhmlD6WSz5kHphSpdNeiXNsncViXeJ/2i79KCidlMDjZWl7Iz0ap+Y/woqKoiOsRD8o4Hxf3ED7iFEEs2hYZuvkmr+ZHo4nlaS6ZvzyrdJJwTaXvws3dBcy7kORN6hlSHsKQQ6h6aZtYn6iTheXK7RW39zvuPbj8OJ89FCIRYHtcSICrZgXkQRi4psNPp5jUocOgoYXb99M9kHro6NE0x9Sbr7z7QVp476108/13SMZjzDodG+WBYFsEFCefZLI2VIQbROhNh09GlkEwkZULIDHPgWwZMfiejoLoaNyjcHvHcdSIV3QzjXqH/Ks2Q3t9vvRmtbFXNlDftnGRqqq14baW7cztMU2PKo5pKiCWE7z5PSXTTodEQIhRx0R8GwkXaR1UfvbW9ZjDd2B0QnIPwpHIQiJYZ2yLdXLNVu/NxBRAqj58IJiJxBPTjUqd7OLmhzYvFDTz4ze9J2Gb1Vyk+MtUgH+VsmKMiCqnxM0mwThORomipyura97ITK78jL/3to/e27vx+Jw8JCLJRCFyZhWrOIIg3XCwHjxTU+RaLV+5leZwc1649H6a/tznQNRVBia9meYuvhcuzDBe5epwexPUdcnKpdXdm7y/IVcWcUA4kcH5qWq4fCePnARBRT7GVqDf26eBuCDjFtJngdF5r/zJ74cdYqpMqGZG4DFmDUY2hyeDmMPAxWCvEPH3vMcd41duq/5+8wIETSykg4G+QsRcYnVe3diUNkiqnQ413yKISCX0sJW5UiK/9TZ3S2RcZ5jx6LHjydkVV+dQ4whESqA7lNNFm4VjSLN9uCdyh8QS3HsYnE/2hI1t51K8K6yEUXOAIQkTJDM+ASwbwXxsu+kpAIoMgwOIduynLRwtwCOYjECkDnmTaPj2nijjagB5rxL7avnon37qRLInQYSNgGPfu+ZP9PQqLt1egnT30NGXb95Ic3hgttCVTcs2jVvXqoarg5kkAtUttyAa9WU6yzhBEGDGgOhMXbcVpuMxUBZeIYhEHT8QpODcVbxiTTxqo0guO1qu2QMh6L0qiUTEiXOlHkQnYSo5GjwjMioNjbgvkqOlCtNivLq2jhIhe5xARtrqtgvidVoaz9wLt25OKBQmSpJwzdJG583qJ17keJMD555KN15+FdtqEacGkW/ytoSBLtyMmAVo4wNeDyrwa+x+GKYS5fNb+PNPx4Twcpcp9BDjceCXz9ueAbxrvi4z6UFN3VhngBXlKvVVTa1v9CPH+EQci0ZSSjTYejyG26dOJGU3fRKsVEnoFTgVXB4kWEPN2IIL38H/vwlyb8I9He/h6D87LDxJdIimQMQNwl4Aqei8KIueVdcPI5nOdcK1uTu3YuCTiFFBUmQumst0ft0JOOhXf/MfMWTWCdogEhCUwuI5MSXjXO5wkXMdD1oHJHddjxaEsaMdQj36SbrUibBO4NEcLY34Fq7qNoSsl0iVxVGSfCjK7+rCA6a6I5ZBdNG+sG+2I02nSdZyLwPAJsnPclzHKrAxVZ18mXCEWKPIgC6AG3haEggE5/bAdoF4fIDNlJxA8lCt+DpE4VieUBeBnaMpx0mJ2SDrehfGYmJm2GyUbRLlJiMiQyWjvGBMlOdk2lGpqATfsR6ee3zMtk+dSD4EnxJeBSDtKNUZ/qYF0uIXiV2sMI+UXGtLjgvy6YUB72LLUV1OLYeL2gm6UUVY09YtNu5Fx/DDC3Yg9kZOMZH76bnhFogxcvgJhumeZrjuULr09nvxchP7ZhQuqqRQBSqrzBu8mz+gbeF0RUt43VpkCSgJXQtEtatBCv/4iZNIQcaCG1tBbVnHltpC5PjPciMwSBlm6rqpPDoM2bZywndIG3n15UiAbBJUHcSpceIrXwF5a+kD4BPOBZ+lfiMEHbvwAMJD0uL1SxQGsGi/MsZjqF+cW/ecogOxhq3kE0hOxrs0ULWazPHVPzYIYZP6v1EJ9cscN0dtGpStAUe9XsZQ1sh0UF0sJVFkd3M/6o5r8AHMonmP3Z+HSyQfBVfgMcAGrMtw5FUCfesECjfJTRJhOvR8jN/IXV4gqU+L/OZ8mSCYiSUmUKDjspQAOVBLVM00rlWPMucUAUlxJ3Do+JKBCTKCHd9BpqvSqofMXxErBl3R6eAIm12e34vqcy3sHq63iN7rQuVJvqNKl9Ur86Vcp9A3JWzboq0jcuqxklAyweFeDRdu/kpWAzOKzTDo6eCzz4VdUutj1OQTpylniyyBC2np9lWkF+PkQdpDTMi3zgCqJYx800tkHkEoHJUimbFkdcv6hxSxbXquIOLxM+fSFJJq6smz0R7LWWWwmlJdAlGyKrFX6Z8+CKQXV71erxjeIFyDOGRQShM7lI9
4fIy3R4tICniqi6+SbNhCj9+AaznfrddEoEg6FFsBvLCXQOSIci2JRCRwcFV0XMZqHvIaRmeXqe7FcNNSteAZPVp1VJk+0i8kkBy/IMkR+yRzeqYcKokk+jt3eiB2rkrgYUzepueJWkmIYYOAvDoUHHClF0r3cQ17KoiUepkekpGM92hDJCOC0DmSblmZE89fvxKGvXNg1QjkjWAzSWwjeAFby6hdmxUCfP0xqdwcw3YdheksMxKJdUHvs1oMcc51E2RRB5EXGGgLdpNhPHH6TGQ5O/Lx9ntvp8UbN9MyMzXqXRsgLagCYVveKvai+XA9JnMGbARE2S/UuShfQsn95Icez+3RIhI6cYOM3jbju7fItZojzXsZDtZGzSrVBNEmD6pSSvBD+wVOpgE9CsIc/+qX05HnP58Ofu7ZGPIawTkydNWl8wt53iiNU9POB6aPMCXQGPZEH7EKDHRUJKcVwtNJThPSpxtp4FQ5csii46PX7W8RgGCj2Ke3yZGDIwcOBbJvkf6yxqQKDpjKBA4nB8Hk1jXHmsCRu/GEmVrvEGIlUHjaMPid0MHovvaFszJuQdiOQLz0Vy/i4VtPn/+H/zWOCTyAZ86mb/yzf5Ze+zf/Orh+g5ScwenDDC2eSgeeeSrdZKZFPYKm7UhqDjOAUtMHf/Zn2BFzQWSQD1F1UviJL42dPpu++U//CXUhbQeHww9/51+F+uqkfutzEArl9mOzjR44HBNRqGY2B8nrAo5V4Cthq86WRLMr0SBXEWGAap+qGh3HrcdkezSIRKD5P4AnQIFeaWgGJ/JCPPIAUQNjxdOs+ihFNDRj2h/UjCZczyi1NoAR80BoOlBXbUzU1kB6YG84z23EAyQ2vrm1tgT37eDBgQvryqWjXZ1BtSQqxrnfVIJFPUEAhxKvkLC4RhqM6fodEHuDBTsXcRDocZNIbI5H3w2jne/tVvXO4XGDWHYhoCYE4kwsDtYSFptIGaPgxjGc+HuZcfoVxp/MXPxCjPl3IJhDkE98+QWkDMOJNxi41YfqyAhDZ2zsHxuJPDVtHjOPNbJX8Awa5Ye62Y2FMCxh+igT0Z0O5tLgfUdhrswtR3JpHeajpF66PZRGxsaQrhmmzgVgW2ZgDIOHDvA9njMQCZxcgDQTC3CKdIUAmgCIfnzc/nzqRCK+xlYQhIATIdSXY1dtQl/HAAmAC+w93bYkDDshtsBe+hvkBymGDx2JOa3k1toWujHNy+rwvIQgotcKCeKkC41hJAhEkqcfNV8JdQeOvok9ITLwJxABbIpvZKKwBXa6FUCNg4MugCgmMsZYEIxn7ZEV8pvuk7flOz2ocL3M6aW3SmZAw0LFCu8VS7NVMB2cFYbsSmISLqKDpw2u3NLYj7gKXiRUwnUnriDAeevN10MtHMCAl0hOvvDlIKDZSzfDs1brIVMBAhk9fhjYaR9kNc5kTjOUAxZ4rDzqkRtHIh18mgkicAb0NJkID8mxeGcmpm8aYVHSXpI+m0ibcWaq0VW9wBSpQyz7sEpcZ5Y40AATWzgbZnjFgLGEopq67TnfDtLgGOfR78Lu8dk+dSIJgshgC+IQUcKglNvyrwcu2DvY50BsOKITP5MiIaDFSwENx43OKIEPtkogdYzIw5/7AhIkr0++OjMXCGdnOYGac1pJkKpuQ4ePBVI6gtDyjEH0EleZYBy8ksfxLPcuXU/D5I0poaSIUt2iElF7DqgXHSZpuxtJgX1jo0wi/WssQ8BMkLzfTwbxxst/HS5ekwXHjz5RtFcFR2kpM1C/j4ErHHVKkETZXyNC78RwDrndiMCoGccOG5bgQff01u//G4hlI+IlJ1/4YuqDSSxCpBf+4s/SnffeDQJ0LPoIC++YodwLcitVTYxcRYWtYWv1DuXU+4knzqVTX/8GM68cSJPHDzLr5T1skfPp0vdeBi7AGvXMycXHDqCOMSTA9l744YvpxOe/FvEjl5Gbh6B6+2FMTlKHu1sY5S06bd/v4vJjdvjUiUT4BDMpJIc/9ghFzw9czITGXtywLnnmGIZAUDi0mcFyvoPnnmV9claIxXg010mvjrMvDoPkznVlqsoKi9U4q0l8S/UFjmpAsIkny1ypbq4F8tOPIp9ItEWOlcb1NuM6anxHZI+UGf1V9rt/YrcVEgkcH0nSwxxfFcpeYkBWDY5vvphjQ/y2XHkLyZIO57hO3IMgQgXzgaJImYDnXley9TOtUUTkQVKJyhWnzERw3Aq1SfdQuzYYsWke2iZu4MUbNxg8dQX7w9wvvXekzS/dD6JXskaSIp+T4AawmyZOnYppkSafeAJb5SgI7xCBdrrw4l+zfPVF1DdUO9zoO1tkU5PBUOHdLZ7RKTJ26HhITdNjeprDuLsnqbvSj3YiPfz4nvTn3H9xTaA9httDIZL9cAo8oQMCo0CQDkgl964QiKuBFFUMajku9MN1DV+mDwKBKnqhiJ7X4W5uTa5JUCK5HqMgkmImE5FZD5LE0Qu3i5wjsYhNpPR7XUgQ+h8Eh0i0C9DV26x9DgXzFN0sAsdRJPA91US+BSH24t617JZcVInFAzH1kHEbCLbj6EEkmCkk2he+G96rUL9sfBQZx5C0SAaJtmu3FwMapsFuxD4W5uEp1cjVu7fI41qOaP7KHSaGcLwMkf0OKlrkhPkd8sVoHYVndbOJeimcpp58Mly8A0yHOsysKEruDl6z+Rt30u13z4cTAnZF3duMb3dCP+YQpq1LToiBWmVQU4mst9E0Fm2s7LoWTg/2AFq0al8Dy9+P0fHhEYnUETtqFh0Q+jnI3MEbJPeuIcarTABdXWYMOsajC8U4n+0mSHDlrZfIYXoCdeMcKSQTEATDWFElJAw5u7OKLFy7CkfFaMa2cRKJwXHGP9iZgZP+EclFBDxY2hHUpcM0n9oCvhdLMfOb3s/dKTOUOtxABEuQoByvoorVQ93k2L0QqWVqxLfk9JZB2S1UqJBK4Kzf9Vq0GWSGsuPcypXEF8RCUDUTbQ4UmovmeA+nXXKycOs4f+UiOWpkG6P6eT0MaAhe6Tt5gumKkGjbBBc12k985avpMEOXD5wlYKrqZjuoxxoB0JmL19LVH73OhHkMuoL5DB0aAqbL6d77uIFv3UBtZOgAZfYPj6djz305pLSxJB0Nd69ciqBiL1L8xzdhVsLtx+8+Dlc+fSIpkMNODETgd7ZJCgSRk8PB1khFGWCchdPs1AD+7KULobro6p06+zQTOH8BY/NZosOj2B5wTJB7kFGLd977IF0nr+naj16KtHJdvb0ECp2L12UbfC7UN1dhgqCW797FPsEuoD6DU0dQ4+DGqnxw7h2yiiVgu1iiUJrY4UFnXrOuGLKRj8VNo8+OG1nGq3X95R8Gkegl6wOxtJtUR0wStATVni7UmQ0msygTMEsC4aNBRNtkBlgvU2ycnUS7xaj8KrAxs0D7Sem2TcqLEguBwRAAuL5Egmdunra55EOlzgwt1EOiWiU6fhWXsNJBtcz6LN26jRRitkayro+98DV4CQodhL9KcqlrMfrNGbKuE/BzwotlPHkSekwt5HcLT5lzH1v1n7
ft0yUSIQgyBa5JHIF9mTgkFCFcShQXsRlQhQK5R3dOwcng+CBgH+PbDz3zWRIQWYKB8e99zKDiBHEdh+PCve9fuJDmrlyNwJtzA7suYBf2hfaMWGT5VsC1SIKjgwByScd+jEwfFfP5DukvSIOchm+dpYjc9dbZqrpJQD6vOqjXzGi0tsPCtWvEaG4H8puaEhMn6MqNXanA7CfCAQIIVRJpZzn8iXsfKp8rEV/hG9s8I6FIIFuruI79psTGMxRlhYLwnfurG4lmen8FO6PWpI5I0RWMduuvCzg8iBI56qIzyhuEZYVExCOGOkxDeJmS00dcZJM0lRm/rZRlDy8eMDL9J5hOpP8z0TjEFa0o4BONKQGXfzyWfz9dIhFEdiR7JpCsZsVyxgA/qx8gKcBvMTzXoFitMcb8T0/SaU1+jzJh2ueYYG4CHXwJDngf26EJx17Gj383vf3vfi8tXL8Ogczi29fghfnB3fVeyfWwBLgikYJEYJVT/5gSvgZib+AIOIqEiiXNIJxsRxQTKFgQ8ZByK5E4kBqkazKsuA8PknbP9Zd+mGaQegbr+siDqmMndTOi0NkX98bNYCfEVKhMp1pvMG3oKupSGPvCIMOnhIXwclcSqJb5nCqgTEQvX514j+1zeIBwazO59c4OTIPKKgVyNL+WVmnr0u1rMdZmiFy0I2cYd4MTYxfpUkHFzdOhbqbrqIiOlBzCuG8asJw+zGwyu+ny92x9lmSqtGvzM8E8lGb46YAlQwhwEy8tCqfchqi7r/H7cd4+fSIpoZVhGQAs1S47P/YCMRZIix+Eox/5ImrXwFn05ZX09h/+BRzxTuQS6SodOnwIBF9lIocFCOQy3AwSQOK4LoncswN3DxUHvdx8okBUvr3JeIhsfxCPIDNXqXD19dcitcPnt9ZXQs140NFUFYKQa6vi5I2CoPbV+0z4wD537TJ1uIptsxrjSpwl0hlX5Li+Yyp7JhSaid2zGVIsOwcycdB+OTbtd/c82ykGIfVMOdwXbq0aCKxMs99sM1E1NpzqkQThHFmFUOGa9WSXAOUZHWJG2Gcr1OeN88wMoyRhYJoSwlbZJRQCUzoTrtyxg8fCLukgqS0pp9kQmKTuEoeOhBqeONeC32rhRFidp5goxZI+vHn5E259+MFH79fDI5L9sAikEIhAMc4zkkR2LZ3q3FbDR46jGrhOBsl7t++Eoa46JFfTaN8k1yuMbRAmOC7EhQ4UeojGbzc2iL20y3BdpYXXVBX0yoiMcuHl+8wxvOk6IiAiCCgS+c6Pb6IM1z2Avc7Uor0h8VoHDdxeDGUJQmSPgVFhMPgCXBfksm2WUR7jM/vanmEhwUg4pTeMdziPgU7ICuunM8NFjDJRWdcC2Tlzoj6va6DLfHa6YQwcI9UFj5uw09sW93nc2lFp1E9mlMEuCxuqgFFMzicx8T3h04+qG7YdEtb2uu6KqmBsfHNvK08LWAW1e3P/M3sPP5onD5VIogNLINq5AC5fAznoEN25HWyR+ctX0jSq0PYonRpEYa5TnindAVga2XqmqhiOrj8iwrtme3BX+4NydGmG7x6VK8rmWyKYmzgkMi7ddhQh5Uhc5RaYU/7wuVzhErHErFhRK4h5FmmBaoRLuEFEX4+TEW7/iVDaQdKwdpK5URrbodNzPzbetR7xDc9BuiBUEFuDOV8nVY334zlfsm3WKVSxbIiXiGgdo54cI71GJK4ylKD4Hne5/pEGcs866aELSWs9+GdWg2s5+i3rPw7T0qBvAfcKdkuP0y9Rz70t6rT3K072uvrDlx/5Xw+HSALwIGcA0g5mF8DF7+Ce0ZFcpxMV7T3ovAdOHWfo7H+R3mEN8ZuvvZ7unz8PF9f+oLNBQpHKeaM0ujWI+8dY59D0EspVAuUFMhmgxbcCyTja+UNMH2ROl9OOBoHxfEkED3Aod7FIks8ycvmcU466ei+YHPlgroniMOBAXh7zDW0G62FU3dT5QEKI2bt7WxAt3y6uVkm4zJ4jXMy2kTYJD3eJJyQJ3+xgh8Q4GlRIs1iUYNYrRhsqwXwF+Kh++q7t97LnPBXHvTrwrJLGpeUaqFF14lT92CXTzzyfZi6zDgmS8v6Vi2RCsLS28wFgE/UN4tWCeSnZaaIF5zI5RF35GFYg38+tzUS9v92fcL5XqYd78nCI5CNtFmjCMzahXMDMtUJc/m308GE8V2St3mcQFqrXBp6oDsSwA1LEeuyBCEwHhBSQe8uulSJ6YTx32O4D1JMAAl8DUUQoJ3pwE+nUt61PICTXfLbcSvIIJCgryU2j6vEtzveepwy9PW75Pf5G2/QO5aWy+WDcj8Z7r9jKU71v8Y4ZtFzchsBE/ngyyqLM8ErRTLi7qeuOiZf5xNqRYmWJsBx9JeonA2GicLMXzP0KD5kBVeyZOrNEmtOmQ2No+mgwKIcVrxGLWSFVxtiIozvN1jYI2zCFKDbhSv2sg8c45/uquaYTIEWteICzqHvxItcL2OxdKE587hHYHiqRlJ2dASes9gGFU+MO/WNjpEGQmg2RrDgQi2zXTWyAiCxDBEoRuaToFqP/KEwEV0KESgb3rOnHF0kEuOoM57yWj3SqA5+MLsPmQ50LbPJZNp/78S2/73XHt/gdZ320xHjBsrlepV5+M1y3/HaQsdLM4buU4OvF36hZ/C5hICi0F/gbT+XzXG58J960jKIc3LqqQdtWg/ci9pLFBRfcbL/PgqwY9H1k8Q7i5RplSTlTcCrEYkx5HyTlxOlUXdzUyb+ZSQl1ssUiRosxt3AL54kBS9dXcbxNtUdCzMRuK4I4AFp8S+Dt3/02/3NcKNfb9sZ7vBttLwEuAB6R7eEQyScBQICxi/h9dNRzBAsncANXyanaHh1Pc/TFTYhkBpdtL/GBqROnSa4jPYPM2Ba7Q3+NS6g6VfC62EF6gNpIiJwezm90ePshOoRvOW2RurWJg6Z/yL0jLYaHoht9OLay0+h+7xXXtW86vKeh76V8Ty+aq05BeD4LwYo2ZbPjOdGoLKO88ZHvlJc9ikolYcWv/TdtDxJlp83EDKh6ticGmVHe3jesh5yf4OthRjj+8n/3T2J0YwdGhHBAgjoATMKMj1FXSJprqnKrLJr0yr/8nYj9UGKoWdNPnsIOIU4TY0VoHe0NzxtMqYvM5pBmEKvXwwbkI6V9lWGfCSugq/pn+2iT98q+CYDtb2fuiE/978MhkrKZAKAEzh5ABEqx1+GEhsoqcn86uRtEXEeKzOOFakXUGM+W7smIdxAsDPVFTBU7RBA/RGfQCdkFpNpiZ3m9QFq/FeoadVFV4NlsMIM5FsDt2Hyu+KH2kMsGkaiThq4dG7lVlCVRxAQTnIvcUcTeHx0GRaEc4hPFz+JDhYMh//Jv3I4PgujUMddXHV9izMOWIx7CPWM/ttE6aeArVb0XqwzD+V1n8elf+EbqYUb9DnaSa9x3tsx3wyVNZvGNl1m0B4TWQXLiha8wS/4b6c7bb0EgN6KvBkZHSDA9l85+4yu800qLRP9dGi+M9qIuwiLCSgKacqKRwljJaNshRKXJHjF4z2e9xzFGT
3Ie9+27cvP+Q9geLpF8XIMLQAigCh2IpoxamwHmMUbJkVPk2HeXUZYGaiQDOu1PrM1eANfo9gMktwMKThadkfuj/HzJ/cP4BxkjcU+VKFNC8ZgdlDspEFv64VsiI92NCgPCqtfjYAhiATGD6HhLooiy4nXaA064hceLcrIOIgF4MVNgjul4r/itCilR8FCe6yojv0ONjZ+Y9hISMDxzua45j0sCcUb4JomRLFzENEFHPvscy96R6InUta0SyDoTj2v33X7rDb7Dt/jO1JlTLKL6DnOdvRZJjw3S7k2uHGGmlmlWydpgPrE20tI0onAbs+qVDKIbN7s2yM6Oai5bwBKG4aQQwkLEj2c8sgs/+lpnSriz+R1LansPeASxlH3B+5/29ggRSSaEEvEkhvCWGCmnll53NxcquKdIE0MxRBwdS6StSBixZUDGO9EBuEadzZEtcDLOomu8G6qYapoLiroevN4nc7h8uiQUETSwmI8FNy9LosBuhtpW+/PsIaazy8GhmkDaCPAFJyc9XzWQ7zieQ6+bu9m+/g5Ejuv1QHrT5WNded7RKNfIzmkoLIpKFN9zpYSqpU6KICAJU2BISEFM+aga5JzGtj3fD94S6pkq1uyFq+nm66+kd/7DH4bqegoJcpJkyJf/j3/BLC0/ZMWxyxB2FwR2kPy4Kd4j5YVAbi8zwZz9xi8zBuVKunfhAyYHv0TBtN06lPAC0UM6A9c9l7aI7nX7VEKQiOKc30qe4r4DzpDPxT2JKRqQ7+dfn8rfh0skAgMw+E/AdMHVFMUC0OjuKhxqXk7FtV06aYX0i9rBw2kKW8WAouuau3yBrwDt3DEidslxPcZudqwcF0JAZ6+by8W5i/H0kG3svbBjVE9AYjtZ2yUi3AT+wnPEs6G+RKYt7xaqjDlOgYS8VyGVPLJr5cTUN6tD1qFAYtFUBIpjdsf6bhA9NOjYelNYSpWqCvL7fCB2UWYuS4RX7FjUvjLzFS8WZ+WxwKvip2APY5x5BG69/k668J0/J3/rBkHb++m53/gHIVmu/OB1kkXfijUqu7De67h8G7h8HRag1L7x9vuk/PTF9EYNhitMnHgihjBcZ43FLlzBXRVnhNHe0wYkq5sUGKWNjM++lTAkktzXagT0n7t15J7HXeYTy94xCYT2er/cAnfKHz/b48Mlkr22ieVsNLzkLurE8+i6u3DaOsNUu+mYbjj9AKMVD9NpuoaXGEexseL64fl1ETfEPRy1RGi5rLMp1lgZN65BEBKJhGF2raMf97iyxKKkCsTMkkHVKTsCfB+uzmCtbo6W5W+JTQkh8vo7jPQgUuokN7dqJaJzalXDbVvgRFl5cb0mt1dq+g5vFmQQvyK2EzgCjGQaIEw+ZmRTbcqxH+/JtVUFeaGAaaib+eNxb3V2FkZzP1SpO++8CdHkbF95jcmeq7jbnU/LDILSoI6kT7x4Bmo3CCI2tshXk3jxHmrHqN6ZR9bxm9RPuJgzWQXOzgBpmnnN/wAAQABJREFUeo45cU69pHNBAjIFiLUxOEea8M6evQZAxAUZZ0gT9bDimkfb9WltXeee/vyn97X9rbKh7AFIkEpO0QXChV4vYkMUTdy/I8x9+/Rv/HrqP3Q0AlsxWg7gzjFP8Pz127gmlwJh5egN1iF3hJ3p9KoxDv1VGjhju6vU6kNp44IF7oGJwtlqxCbcub/3u7hcHj4JSHRl8X7B8ctESMtzEoSiM+NQPBrIzXnZzwUoNGukq9jiURHEnQcjBQfVUxtIG8DhvKaVxHWPSlw4tAgcEz+AfM4OGc+LnGRJR4SfwVK6u2+/9VYMxXU4rpw9g4HJKMYPxm/LWsFzuGtuFyn7ZggPMGzBrODxo8cZvDYRap82il7BgDHSlrBtLJZUxp60n5yg49TXv0mfOvz3frr7/ruUmwnGMStrN68HUalq7VJX6xweMc49chMpIuF7zPD4kFQpO+lndHzoRBL6tJgh15UTFzp2NwCv0gFO6zl68mQ6+5/9KvlbR2O0oj1qp0eKvMa7yA2hRQAQrhaL7qjGwJn1MulSlktDH84vYSYHgI//AF24B0p+iEBEzIhN+BydlL1iPOx1O7Hg5Nm7ldNlvK4E9D0Rz8TAuMYH/e075mq54I+eIefQEpEdZhxIzlEkl2GExKJNztllTGKD/DRjLFmigDh+n3qVCOU34zdH68iT0cgsbfhNHfbOec8UeQlBggv7peAOG9SNh6OunbUF7LQ8O4pDdSswL+0nZ0pxlGOoo93O2MLMNDA1CWV9mSHMTCZRY0nwWPN+cDTWujzw9FOp0mTtGCSO0sO5vHTdbxDZ394g6ZJl9tZu3ULSZKkiwZj9INFYdwqjXrYh77Y/tqLv8o+fzd+Hr27ZSLmvu+c2XsnCUS4pEOevYBi+w6AfPCAjTH2jjaDt4KQJNTEfIoliZIeZJWZoiURwTpdG2CRNxOURtiAuJ7vLnSUnlgMXHBqkKV26cc3fIhfliDiBoCK8cYjgdlldiLiEnVpctzISrCMTRUSfNcUkvF7UzFlLTM1XRy/HyfiMEkDC0tgX6bRpJA6N5Dx/lzASTA8QxPPYC6oPFaWAQ9g2godN+LjBTiiXnfrJWOKacwAAb+vgmHmRkQ4AQUmtgXGxkAuuZfPZkGhIM93wfiecEgzndd3HTca3OLWsQ4hrSPBaL1Ot0l9bLTKGkeYja8dSP4Z+lQFwlu9YF/sqsiG2B/i2MSukFjPCKCWtQfaCWXEZgEe9XrzEHnl4XvN32Tie+FlsD5dIaFw0nqMA2VXvtOPoLDmySOJ4dAnlFrlayAVmOj+InWIqRCaMCH4JbXYliKfCDTwIAmkTLTYBcZUhqlsgW4uynEBCLq4e7sRtOa1kM8aMm+MVIxPh9HJ4EccOFIkU9RKNakvmtnBF6qnKZTsi25ejto7Da+dvXo1nlUKjR05HGr73lpk6VERTSrjZTutd4r6RbMeo2I5N6t9CkpQR94wQIku8yp8ChuVPa7N3b+9inISt5bcgCtvVxe6GEzgkmVIsr89OW62MjIH2bXcz2I2JNuybPISa5bF5zwi/0mR7k3R/Ac5VGVIgOWVru4nMzj82cuxoak5Ocw1prqoIM/AbPf2MumTpCec4VqK3tTHLTpRwuRab+AH8w1Djmm2JO+X9/NTP5O/DI5KycR4FAB0Q6Qp0ihO1qX7tdOEFKZ5buslEb9xbAuB9Bw8HUtohyzOsiYiBv8aM56ouWy25MaoM+yazF9ohoVMLcPbgkHRgcF9VFACfo/Gqa92UNReIYScYF3AzX8klGhzuGsiq6I/NbhI5gkzoNDqPXy4GpK6tNOLFQJoFCEbPmomOcsyM9NHNURd7PNdpF6OYwB55aEEIvC8ICpTIXy0/G78e/MmlISvyCS/lunlUqpQ/JZDQ+W0/tkFE5yUKvkIvcJTghQtuZghK6bNBprWGuUSR4ZLjSS6RvY60zOlB2HwQ9fA4I0qZ+8tpjpxx3xk3VxlENwbTqTPJXq3BupPHjpAweSld+s53IaBTpCAx283RMwx5QMo6vxhMJBpubcAN5yDoIgMg
REy4hqmteGPzCxzx9GexPTwi2dcauXCIVoARurQGr1yOjhXtdrszN3O0oh28DvJHj/PeBkSxOreAQYi3hjHdqmix4CZI7XSd6r95vcIChQoM2hPfokDEF4yS5xnTY11CCHVrXXUEZBBR7ZBA2KjRR/qF+opbbBIKL6AgFFWkTaolpsc4E6RPbKtj799oR35P7s0ZRfi9QGru+FquffFSie3+LM7jgAhRFQ01ipeCAfB903Xk+qps26pQcuRoE0QCQUsoUVR4qnJDeC1UIcsXVk7Onc8zd9dREnYgwUyZj7ElkyudPK+FJzENMGfA2ET2NmJnuhy2KlYvEjJaw2f0EDZxBjiRhxI68vROnkrz9J1lmU4UEpw3dPBktdKaWn/gRN0CNjZeoP2MtodPJDau6AhdjXkDWYLL6ZFSLaDHIA6lw/bcdhiy4XbVOAd4DlM1hXtZTw3PCbg8GwjAhetkrp0Rs+A9xXdAQL9NB4mKqlU+G4DnuEXekkbtg807bBwe9Itl5Isl4fkrn0vntIE67mD7UDjfAgE+pA9JFLndcdyDgaXEp/jLfR8pPl98MFeCj8v/vSbSOimdEtHf2kNBINhvLrm9AyfvoGI6TiQi2hBKxCw45naW7bNM/kFcflTktO47lC/SGmVvQBw12uWEFrqcVdViCDDZ2S1nvsTtPn0KiY/tY3B9mUklnBbJ52QISllVz2Fmf7z60ivhZneiu3ESLjfINt4imNtGzdSxQwU4KvUoSPhQh2gvbSxh5++4R41/2tujQSS2ikYGgtMRsbqsl/hnw/N0snsYEkgQeEbIXb26f8hAl3PR7kbsZI1sYQkqNoGqRJK1ex7YxlGgsoXOG6eoFzzn8sz7t71OiIu+/+N98XHPxHNwbP/ZDNUsvlB41rhGXQL5OIaBzQvBJKiXhBuBSI3rop6Za1KQ/+MZv2D9Lcp6WR5p+yCWs8JU4ebCs22OGxKEpVTjed7g0ZJV+C0QX0bgtyAm15AM1ZJ3Q9UK2PAK3zRO4rf8twaxGz8yLcYpWHdVRZEmbmsL91F/76JOvYNbn2ArhHSPsSj8SIef+3zMPEmj+e3S20fTM792INQ437311ptWLzXwoG3htAgmVs2Szm/LCLiNTisgJHYlfdk2ru8/97mfwlaZmDz43/8UyvnJihARouOLHqG0HHUur6uyZG+MQAr1SNXA8yAqkIoOa5Ip3M80p/2keDvORCQRoHpQGqR+D7CwzihTnPYQR1G3NgjmkghywzzFT+acFMq9gmtFy+yEB3V70BHltaKTaEPuo/xbZMoXcjuKp6JEi5MQskQ0CFmmsYis+XfUwU9YrlJI5IbaRQzVH4fXxlRDIZ2ipXEv0Bju7pBaObaqS2Be/jJ/eSIqmqVg/k6GawzMyp8E5jChaGL8iXp4wX/5G0gXYGwsJWZQgUhizUjVV9qjty+QnPL0cOkhM8uhQaylGwveNmlnqgnYH6bhOBR7j5BpX0w4rnoqNwgYFFC0/vsIoqgh7/70t0eDSPa1y8YKhugckVVssmNUrUAqoBtIFETjNTpDANcx/A6w+MzY9BHSI4YjsmsSJC9EYt/EiVN4mI6lA2fO0TkiGhwJBAoCsfPQjzMhSnzGWii7xA1qZGdTEXa38pyj/61f7PlevutfNt/zniaxMRwJQ4Jw5zsR4ffc9JfgzF73WTmtyCwuCBEQknrHOeVtk3S1A2d3zi0RVaAFw1AS8rxtdN3IYBQ6Q9Dz96od5UXBUcXsyjWDgKwCCCMepNpKknxetKFoiy9p+EsEEsi2i4yGrdNh2PIIfcJ71H+bIKPvR/8wPZFqrXDoZdLBniZuXyUtiO8qYTUYlUTtyErbKU3UGI7tRIPZyQChwCQCqtE+Hii2uBZfKi5E+8q7P/nx4atbZRuKhu013ROBCKIE3xIw7IXdHX3nqyLTBONKjnzm2XT87FNplpkI77x/Pq0y08oMEqSCKjbO/Rd+6x8gTVgXhA78zj+/hlpBqdgcLbwzpqh43fwih6OKvPwPo18kjBiFCLqfk+XuogKZgMQfMXXP3KD+1nyPuKhnnfJ7iBNUKnWi1JthR8UCnflJ8TxULpFP1SZ7nbSTdsg+GIWIGCsCDFrM5r4bXrt4ww9TN9zR2ByuuWI+mkG/DZZys1oyAEdxPth8z427IJ6E6tzL/WPTfNeZZpj5hJ3KxyPxaLxie2yXr0r4+btO7B0bbdzcdD0WnqDOvlxlWqIRgsBP/MIvsobkFNnGi+nii3+Vzv4yM+0j8atMnEdzccuzRB6pLgeY6qg+w+yRnNd6lUbcc1Jy0misqyEB2AenuQ3xV+KBsYT30m9HJfP9XLGf7O+jQyQf1w57IbaMbIFwqFHlNTthEgI4eOpUmjx0MF1++dV06Yc/THcufJAWAWwf3hUX4HSmxx1iMHO4jxdu3Ay/fYulEWJEIp3qAraWtbmKy1M3J0QjwcSyDESOnQVRtSyrgLtkvd5kxhNUCjk+KKPK4ETYos8Dgi7Vo1xbOatxkchVgpLklhrQepeUkko8pVcbw9rYhBFnc9XM+DWwGKMfQZzwTukEYCsRNiSf9YEpWK6SRClgvQNnyEpQcu1tnIcKBii7CQYOs4CP4/zHD5Maf+1yqHGsQEQ5GnLCW/hjuxQcIKRcKLJy9wIprQzPRGaB10Rcvj955mysntVkksGZixejCkMs5zBz6QPSVQ4y8/0Jys7L4rmUtwvJ+j3bvL40T5rMNGUhlYhrKRXDbqP8LrSAgDWfDQlLfeyfso7xIevxU9geLSIB0EEIAXCB/vG/RU5dnSLy5OnTaeTgoVRHVF959ZV0mbEPi8wJbE5Rs9nPHMCM1WaAkcG7uevX0u233yGweD8mjOhAHbpmQ5wDTOZuB5kYBYno99u9zBZvynrsjOsOvZrUi0VcluADL4BEipzgrXB81R5xquwbuR0dFRIQidOBu2pLxHM8ZB9G8qBZjfr+8a6pg3vDQFydoGl8GyN8gyCnK94qXeK9gA2VUOqFmmbE2+70m/lgXCbSOyRmCCOrhBmm2zRAMGtU92HHDYxOso+nhTnmMGbOYe/tlVWee8kG8q6SeMfZX/Qe7goDr4OkwgAC0dAfIvDr+oujSBLnRnNWS+E7fIQ5C2BivmN8xJnyzTyggdFPvcw0o3TfIYeowSpkziXWNtubtps8GTGcKoyJhkbVbJ9aB79sY0gyjrEJrJ9we7SI5KONERMBgN6pfMy/5by95Av1MeX/4c8+D35tp3kI4+0X/zLUpanT59LE0eNp9gazOaJyzJNWf//ShZgXa54VaTfgUOroqk92rIaymBcIpezvKOZ38NoYOGMyipGhtHBvITVZq9BkvfTaSzFL5DbqmoQq8biFN6YwWnW9iixu9ld0HOe6mfPGdwObucc3HSrr2HcfVlVqoLerHq4RG1L67WDclv1evg9V8CwuX9VF4UR5WQvJBGsKiDGibdJFlCRhOAchgeA9SDFq0E3uVa3OMhMEZueYN0zkzFJHJLP+wEWB4ZyQfE8p4uq+IqNRLdcv6XSyazhgqCRj0xV97ItfTdN
PnqXNnfTq7/0B36JtuH018p3kY32eaWlp8/jxM9TTtSw7EbTs6WUWFlStPH8zg+k6TFnUqKarSH/7KNeJLGPOrKEMQKh26QYNMFDnnwJxUGRsjwaR7O99zqPD7QyN59i1EQqOCfLpIRk5fCRciU3WF7z5zjvpg+9+nxkUrwaRLFRuphuv/wCPY1ab7n7wbnAhF/50etCdbfUrOpkyK0gbt0AY1JXm6ESM4Xa9k6FJco0owyk8jeLfu3Aee+dtPGdM8gB311WtirC9BZfWpiFnyfiAPWY2gEiRubdu1kzo/ubT7JCIp/HTC7wGgtVBorgPMsxdvRY2gmkecZ9yAzlBZFfqisBhEHjBvSF6cSQjskFEOLx1QTXRw6eB3i2Cc705PMA15wFgae4Dh6OdTqFa758kdYdpmmjH7o7pN6Ifm98OApAIJGb/e09Co31KVPrMNri5pN39C+8zpr4feA7AcPpyW6mgUkHpIIY72fkAqp7qVR2jf2udCcErSxALiZ24rhsY/DpX+kangym2qHusJw9xKVECvhKOTM49GCvXgUVR86jPXjvyr/9Pfx8+kYglbhyjwSE5gCTHUBEAisicz/XLV1MvakEfy5G5KOgiYx5un/+AKUrfiPl2K+jtjuvYaTMOgo72PaVCXrsEgxhDXOzMxjnlyR1FILxCNTwuupHt0AYTJLh6U0Y4PTRVxtXD6VB7nEnEctvUJbxL6sdwS6fxCekkBkUPqRZCqJQX3DkownbmJudOlDzBFzo6Uttx10a+FGpXm7ZEeQUydBcxhzqZtS53py0il3ayBqPmctlc36z+SNg6AFDag0C0q4SDu3CkWLg6KSW0RSnTQyqJs6b0mjqCA2BrPRNnWd9olC8FxRSNyF1HO/GMUT/VRYdRayeszdyLycuHjx5JYyeOIRHJ8kUiqjY5MHtbVZbBXC5T0eV6l0gXGhFlSIAyS7MolKyqgt3U3xWCu+uuwYINQl1iPDz1l0CALO+jrkq01qsklKgzF/6G28MnEite9oIEEuc0V87EHkc44F7sgM7thziaGJv14fF07YffTVfeeCNdY/jp9gZG+O5qcMXB8fG0DkJvrYAozosFNwsuDsBqoVLIWeX8pn+r+zdimQfHgTsmpYFXLOauonrCuBeOZro4viCm45kCIVgCeoWl40BkJ6czY3ZrS49QJgw71Pq7nPQAy8o5bHdvo7ysDhhnUJcxAxYvGjGC8FyB7D+mLgAbiVgCGTl6MhYxkoBaJgSuSqy0k7EcNdoDxlJXuPTKfES4RdgqNpV5U314lEy+bCFRnUzDia+diqkOfAbGptCTYEJIKQOEm6TKPyAFmlYQR3Bs9broK5CTYw9qZ30ASYDd1G4tkzPHwCy+MXvhQjg2Tv/iN7BDyOWaY/40YGW2sykqVd5bvHUzpKLxE9WqyFKmvXrpZi5c5H0YwBM0yFwyGQXB0u6w3QppQp3hT8BMlRAcom4SWEia6D9u/gTbwycSACyQI6Jc2h6KTDpb41XicP0QVYv68HAQx5HnvxgS4N758+lH//JfYRAyPanpDkgIkX1k6mA6+4UvpSvvvhmzeayhrmyzalUFIBuNrlQzJ5Xz1SPbVlUuz1vVN5wXI5UXlQgiLvSPDccVhAb2EIjGshBy/Abq2fLdG8yCf59ZHF2wR48V3HAH4iG71Zkat0jykxuKZm57BABxZIRDNaBjd8g164fLi/wdEHvDTFm+ahBR5HAZt4Gpw2ns1JnIT3PwlKP+DMK1meC749JwEG0bj52I1Wa2eg1zmYEOg0VsjhWQtEEcSe+cMAt3L8mYPQyIMuu5AbH06EbGFZ6JICocfSSNuAWzyafx1za0W655spmmnvoCKSi3UNlIOMWNvI67OufPddLBZ8/hbRxP11ksyHQd4yYdGFsXRK0avHTrRlq8fT2IxLSiMSTQyPFpQODcaK1Ys75SPU17qunWK99L66a6kBCps8A6aZJEnp21EmbUK9zCdqA7cP2bbA+XSKh4AHzfUU4sF5DDy4k9qhpU0Vl7GegzfPQ4CNeM1WZdB8T5tUbwljRHBtMNvFty5k3cqIszrHolt6FnJQANwx1G2HVrlNdUjbKKFXo65WuwNhiz7chHEx3BzAInrKM0Cyen43oZ8RjDecMDRj2FusAHqc2GpTfpZGIiqB8S+A6E7ziW7rArshafO0rkt3r88VvBmXfTMAhaAwZV9kttCI9ybYOOBhFJj9Xa3AidjluU7zn7vXsbRO+QXFi6mcMtjSqW2hnGeqJEZlWrGnXTBtC54NoplpMn0oC4QXSJ1Db82BaNzVf3neYmUE8gD6G1KNuhDHjqOBc2qpEuaLoyySTbuHvHnziRlm7cib6yj7ogANUwJcHg1CRHGA2Iv8Yw35gJx+HWqMOm9tj3LhQ7euxkSA/jOuWANj1c5njFFgKadoT8oy3C2I4sN+H+n7g9PCIpKmxVJZScv5TVK431HIHOBBJGMRxu8NAR5qT9HO3dxai9wmpWP2IZ5mOMfHP99jrxi9shztdAmPOvvlwgu1F11KI2nYAYaKvKgBx6pCQ+CVKkdwmB/nH0fDo7uLv6inWM3YMzp5PeMtQfhKsxqSRy/UZnWanUSCdnkJXxkjoEN0pHKv1a2AqLIInp9paV22vvcO6P2Gm///jcNPbOCOWOgMg3llib3bmDVfPg+qv3bjH+HA6NDaJN4gtm3W6S3q9TQmQMd7aEZeHs6v8ikXW1/O5uPHTUWbtDybGlSkrZbaPmwEaVz+fDhrKaxSZMfP+TtvguMFudvYsH0AVeB5EiDDtAtbIOq6xBPwdhjp0+lU58/Yvpjd/9IwgS4sQm2umsRv2d7ebYl77AdQKaZHpf+QHOFxhlY3gTFfNQgMrs474x7MbnX4iqxMA2iEwVK/hMMJUM1rgAfSBjIEjgUTYgAP9JLfnx6w8vLcUKs2eJAb3DYcKALlQsPVgO4VVdqMA9xs5+NvUfOApi96b3//zbGHS3eDele8zasc5IPyXIF//hb9IhzHTOoj6uLxheF7iS4l6XpIgCRWTDFY4qYTaH+5kqx1jKqG6gQBQ7XIQJo7kU0dRV472HiSPsRD08pmTo3VpkXZSlm1chktk0zDeexDX9zeNPpIvo/C3e34ULyrF7UAWNujvriDlOpq9H+ot1gTEIj8Wiww9QzjlUuRr1XaU+fE2ygtCZoI9BYyt3brDj4p65g+3hEFgJJEtOnxO8brH8NgQ1fvhkGj10PI2Ru+bQ217Uqcg0gPhNC9Fwb0Kg6vFrqErGldaWMKiLgjwGJ47fUvaDjVpRNzYutwlCOohNNXOENCDrG8OYlYQMeHPQW4sFmM788i+S7PhZ4lwscsoaji49d+3ll8Ie0v4bxDHjysCjx2SCeLbGmJ+YtHqlVE68rCNplqKe9oENzozWU8+zJOSMirFn8OZ6llWPtpQ/Pvn4cCSJlbMhxR4gplHRsELFknjkInLjHuIF2g62UE5lIMo1QeSEDhlVX3XQ1RZrvzv5gu5HILmnquRzRC9qV0/Mho7BCPG52KaAF0HkupkoQEdZktyG+m
VbSYB7zUt5ImzHZ28RBXbV2zWCk45o1GOmJGhCDDfl7FXdzKR8QIyuE29ukraJ3zPabmqKaRqOPlQl87PmMy2jglzFvnge1yholnrhnm/cu8n4CmISUTeqY98HR7Re7MUWbmEQX9gKH8Qp56ysBaOo85znEQjk+w5Gs1HGLTbJBtjAptpVugDPiG7zkXAdq45Sh1K1MSdLwvC/BFlUpqwCahYrZ8G9WzAV+1ZV2SX33Byy7LJ0S7duYecRwGQa26s/+C6er2swt9l0641X+Q5GP6ttaZesEoEPLx3wiTw7mIeeN+3LTZwWpu3LUKM6tG+3CoMrGFt4t2yzN0OS8Bju+wjg+oxwA0774bfXiH0nnx6RWJly49xOFHndJYbSDnEF2VCD6BiNUtf6iMVFm6gHAGRZDsqaJA7gUX2qwtmdMMDhuGtzAI0x1WUiIQIYAPEJuLGuUuMCRnCdVM0phkRcB/oA1SAQEx55MBOJdS3qKfF0RVRZAOsA6k7r6NB6omYunY9Ew244ZQPCO4RR340qdw0DeptOroUu3ZOGXSQVQtezpPdGXJerzqcr8e2EZzqm4qETVyjrKtLhK3D9o/XRdAT35z2QbBG1aA1k39LWiLoV/Rx1zX+c5jRS2CGO9Z2lDEue1dsUDgDOg+R1GSN97QcRuMV7a6wBD8YxdSyeMlQuOiXGpOuadfLsGJNP7GIDNTCGIgCMDGMaE0Ef6wA+QnQxDJjzSNuHSLKtA2OASBw2PXfpckj9PqZNvf7Kj8JLaJrQ9dd+yLMsMQ4jFFbLd5jSFobonAASrPEVXdjii8uTu9xfk98CNEYv0n8xgE/4SAQSjpWKPzSpuLZ33XtuXv+E7dMjEitAxWOTOGxEEImqRkEkqFrBCQGG8YVepuI0bjGGSE5khC5cvpQufucvYoinNovxEBcTPfTZzzH589n0/h9/OziSbk3tmEpBfKFfc64LVW/UxAnWOaEDenHPtuYwIPGESCBZny84jNVVBZIPfQSALlXQGMJvXyGdAp16A7WkhlpxCpf0JPWdhWhnidPUcAQ0MDoHSU489/d/C5vqMAuhHkiLVy/H0tp6ft74P38nZo3cZa6rsIUkWJAT2ZL+9Pw76Uvkpn35+Ok0PjqWXrxyKb1v7GGLHDCfK/pVT1EmGmAKLHX1hgs3xpGrbmoXOBY9z5lF6kAmTBBKT5ZJlpuO4gSTJk4/hSsblS5Gf1bT8LHTpJEcwyY4DYJiAyEhXRb72g++B6EwRl+3q8wouhaElPrZhOUW60Z2oQk4W31jfCy+HzPEIDnvvvNWuv/+u+mD//htVEZSVOgDG6Q7/doPXqSE3JaTX/lmRO0PP/cc0hoXP3aX9srM5YvYoDfSElLpuV//u0Hw+cMwMiqjuuymJhD9yu9dxqBEsBPYBXPme6E9+KANKAHq733bz94m8eMFUfjdkBhcCwJR9APEIBIkR9gg6unsBuAOPPVMGqKD+pjY7P5FV9W9lOaJqufcJwvritwg06xbqFtOsrbpmGq4oGqR6o82iLZODcPfzpILGQtRfyXhPKSAHDUARMfGFvUr6kjdrZ9EaZ0V3s6vGn2qaseVGt87RJlfxzV7HS55n5ywdb7fj8t66NDBNHGWNBlymBrq1NgjG04wTVly5FmQxY73+0bIY6y4MAM2EwRDxblZVLlJjFVJdgjpugoSt3k+VCdVS56P6L51LXYuhmtYyaghXgUG7uEE0TYrNm0QnohAXo0sg35sAb+tCjNw4DAu50ORw9bQVQ48DbQO4oqW8QiTdaShMAh+TbXdovoe4xd/qINerHDp+559wu4DIm3/2IH4nnB2Mo2R6UO4f4+nEeZac/0UJxBUTZUAVWtX5mBK9KdTGWm7tFF9KSkkqEcLDk3F7+9VQhBH7+X73nOzsvuP+deH/v7sJEn5cT63V+GiMgIjak+HBkXzOwjFo8QSBENAD6Rqihx4njRWXXNdUZ1HxPEsnelKsWuzc2nxBtN0EgPQmJaziWwCwM4MqYKdkCPNIDhwFHG2NwsvkPUSuDy/B1NPSqQryorH+OMtVTptiz7sGtPBp2jKcaTU69gYSxCdKpf5V64gPHrqCToaQxlizi+jWsnVIQ4h0Yu0UZVwibgW78fkcmj1DlueQb2ZWVtNR0jDGcHobwKbFerewsWty1kiUhJa/wxnuCjEruHcC0PobFQhKO7CPSNmgyTYABbOPGkbkEcheVRMXcRHIqsRxAOAoZbmsezUEmSWeRmHGmDWE/PWJEwn5mhjCz7YhA4fLDe+q40gN3eR1YjM05/mqulo2IYoDM4aL9FTtzrbidkeR6YPw8wGQvVTKhi01daMWSBROQeQ2mZ5W997776BQY+dggot4XZIVDWlXrVLtiYjsNNDDSvqperpu6ra9vtHtYWy+h5/dpLEClhRj24cgzhA7A8RRCE57AAliR6XCp3bIE2hQVTdxLvb77yN5+PlCDZpZIr4TXTkAaY/dZDQzIUPWB7gTaSC4ydsvJPSgQR0upHvHsZK6DHpJVmxHx24fxiCQXUIcSCQ9nWq9Y36xVGOVxButEME8HGuE3dgtHCaYGK1X4fzfRU16jLu2df0DDGs1ADfoc98Nh1AFTzxt34p7BfHueumVe1Zw65aZ+/CVhh94sk0fob9ySfJS5qLDu2Q3r4Eoi+BRPiu0uu4f8dpyzng8t9861fSJRIflyhrjfeVmg8YkR2eR/y1sNtEKpsY41corwUnXmC5hOXZeyyKhOsYNbPl9Epk4To/MKDLqgpSQ9f21iori+EpdJLsWNwH165p75MnIHxsDRf1WZ9zbgHUG+AhhPaxmgBXSBoqoQGuWmtsK3LIQHql58RJMrmPnEjjMJMtgr7TZ84xBojhD6eOx3LfGywaO0yAeOHmrWCQU6fOwnwOwkQnqMs4km2aIPNkpCv1k3kcq51BFJEMKYOzLz2WW0kUXPLqAwkTP8qn9o4/XUlSVsQje1SsOObEM9QrgK+0COIxHgLHjd9KDwkEwDtz48TTqCejwxjjc8RDXoJ7TcRusGptZj585YraSy9+B6/IQhh7uZckTjxbiHMDT1VcnTWNf7h1bz9Tpw6Q0hBEIbejniK/XMajBLyvDUb6va56IOJl7xfAx7BWRetGatVAvE2CXndAku+jW6+y/FmX40Ag+j7qbKqF7/XDLV3GrgURrBer9PagLgz/0t9JDbw8uoKdhmjuvXeD60Z6B2VXqY/LvD2L6oNlkX50m+RNfl+hrHUNbyQQmBdtsFeDI/I9jwYZbapqihxW20SK6ShFkWpy2yb13OC7efojBgssEE8BcYVLZ305pKUS0H4YPX4CJpYTONfmF3mulo5+/gt4q66FC9ccMmHpFnCMcxUxrwFj60IdNql7rZ9ZH+nnxvBkGidrO6+rsssQ6wNpliENdy+cZwDW2ZgjzX6aJXA8CTEZf1Ft2zSdn7YrmWSaqtvGeHqRMM2Jo3hEJ9Py7eupizbgtgNylGJ97F/2kCqoy8IpQgXAJeru/aINtsPtp0MkFuxWHEviiMqIfOyqRsGlJRI7QQLhqASBn
QQilpH1BkZqk70FMJeZFNsJlsdPnQr1xdTv3Z2L8akNZtRYBuEcxqpuG35yVAiBZ8eaF6Quq+u1p4nLlwFUeqZ4mN36ukscAo//1D/iMnSC78fE2gVBO+ewiGscRmmAiEtVjUF+b+FelHA2qENiVviYvV5CQZd2wjxHDFrh4PDaE85eCOE5XkRD3rR/KyQx6XWrwThqhRNjFEQa5bkpVM87TLC3COLfv32LeApL2Kky2KfREBvkhnohkVAfvX6x+RBNjsFJth2C3mGJBLl/GPUQTdhl/Da9xUKDacAEGoypkbHJ5Mr+s0yNduHlwDRHc9Z6ya6GAPzu3sb9jG8Z2EHAnALxiBcZ0DReFIzTbwA/Y1oruNTNojbNSKIpVUW1DH5Em3UTW07gD9eciUWPZ3NkinrKhEFt0l2q2FnildMTySD2VC9wLiDmVC54FPXO2Z649jMhEqHCB/IhsA1CEPnyHgCnUgFkkc5zd6WHkgSE8bdR8P6D5CcdOhwc+M3f/7+JrMMN4IJTZ5+MuYBtiINxbr7yGrooHhaCV/FpPwuglUYGDnsIoKliSSD6251RJVafAlD5eV4QGL5nnbxIHfoR6y6uOchiN5MQppyTO+md//jd1GHmxa3OCkvQXcJYR6VCz94EKdYxpgf5xgkM2rVu+D3xgTrpKxap5NNV7Le0Cby4CcH108Ye6ieXN+YjwrpysKkvddSRXgi9gZH8NTx7Xz99Jr1CxPothrhekusvzFJcdGcglmXnEXmZ0GQkwTgoM9rFN500Ir/jezItaIVrS0g/A5yqbKopm3ik5KwytfYqfcJzSjntw1odvR/ObZlt1CKZlONdmkye7YhK4zIdouf7N5/dRTUNKgWSInqdBMoDqFMSt+7m5dvXsIMG4/edd98Nye/Cpoee+WIaIk1FArj0w++luRtXULORvqhXUBHSB7UczcAshBZ12UD1GxobRRKRS4cLXWY2yiR4OxDUCgxDYpMfupWzVwbssI9E1bil80bgFPD12Z9ckigp3Ci47JAgCjkPSCfifZQ4MmF4HaQGicNQ57kq/vhdPEerMwvpvT/955G8pvE5cpSFRW/ehijei6Wp1yLKzBxOGHuOjOND/HehG9Ko6cQYbguH07B0Xb8BkhOr9nb+vwcAibNvlBQHAoBy9rmbTJJG2kZPHzOnnzyVFu7MwZ3IzGWfu4a7EVVnbe5+Wrl1PdVpbwPi3fl/2nuzH7+OLM/v5r6SmczkvovaSqVSbd015ZkejLurp20MMDYGgxnYDwYGth/9YsD+E+xnAwYMPxgYeANsDzxudHnsstHdU11dS9eilkoLtVDiziQpJskkk7mv/ny+ceOXP1KUuqgSSZVRl/zl3eLGcuJsceLECYjxFuOHcdTE3z9yrLk0ezHhe5w2UHI4kabpNwNPOtK5nb2MU+SABmpwC2iJUiJZwDARPRoJpZr1jePPNqfQuacYW/3rV19rbjCznHi5cP7wPHo8xJGySu/H10tpwVE8DnjOvXXxiEQHXjpMJviFKgtt930Gu0mjwQNJTx01JQ9jmPAY2s1uvcxLDMAE1vUjY1DmuMFJwDX6IuOtdtKQDPONhJkfNRbOE1isDr74crMbZJ+/OdvMEbTu7swFKTGfOGczhrVr98FjzTFUuTxHUo+y1PdH/91/3dzcxruhnXOahDn1gX9Hv/5NxiiHYvn84T//b6O2utzZyc/f/ff/Q2bsD2f8ev31V1NvHSE1JIgQqSbEI4VkklF8zrtUP39+PSKphGFWXlsAZ3onBBACkUj4SQgd6SF3BijeD9AJ6pmuVNt15Eg4inX+6P0PoutrOtRqtQA3dtOeu1cvZXvqrJ2gZUqO/Fir7eSjxOL8iZ08jJVmCJWnH2QWIGEVnERcrUlabkZxIcmmPlwPMcC3zm7ZIGHENQPuPXf5cnObuQ09WzOzThvjXAg3W4cDroFgC8wzrOKO0YMpaYPwrOrNK6iJumLUSTjb7MScKxE9CxO5ryZkz4ZYVa92ok5L1y3cbc4haTaoyzbP+ghWjX2sWQNAw6iRqg6rdGhM4sFJEJK8bacqDAooqQuSBhu4lZFFgkscGjbsK57J1f1332Fa3juekcSEawn6xyw9ksPkTu65QEuzukSwfOsjihQBd46SP3BXwmuFov+1UkpUlu/8h0xCK52qaoiae4krqhil+90wMFuedwKXxVwwD9XfJcaja2gWBhY3OJ4eEJkIpUVKrTtXL6e8YQhcohdXJJAwcPLvibrFKDUz8sICeNHuQj2lDZ+dSMyIw76pEmRHb4VYVKX40cpCHBKGBJPnO9e6nE8cPR4Oc+x3vxVrjIiouXcTMbuQ/SzeZexxlXe6oaACICHS2ZyVGi5TdZBOj6aMpEFyjLJfybDjAtuc2pa260Uqkg7rer9nf9aXuIZk37eONfMMABcR27chDBdYuQOUewbePHs2ABRRRlCl1u4yAJUwAPgS5d5DmlxnAdgyyLzRT2nAZ/7qVYwQTCYevY4EWMkE4ihrNjR1WraEeufi+cBJDnoXQlyCKJew5iyxMu/N2+eaDyCSEQbokxAx8iAm43kQaBdt1idsjjqqb5e5I8YjSCphU6VDJAjIR9eHCEQA+yUOmiB4hUz6UBiBOKolkTtyKymBPjRdiIT2g41xjHSwr2ST+J3AVHW8w/JoXUDiycs3HsEROsExom1W/bkN49MM7hzXumpawhKtN7sInE0GEIBjjAXytewyOXgA58grb91pzv7k580gli3Xr7gOZt/FmZj3Hc/orKkjanzW8K6Yu4I7D5a7PcdPshEU5Wvt47dDJI7fwEvbzb8eVIBSa2tejs9GJBKIP46oUlzHx0kiaCVE3EskDDqk/rSxD4qYhJaR+/Ci2ffCV+JaIKXPnD4NMl6IpeTulfOZFJTq7yGSXa/gHIiiskdVyg5mcDZMAAPnP3Qh32QSb2QXHIclverzmSOxbtTTAbwSQ6NAHyvw9h09wjr4480UFihnlxfhROfeOI0kG0zn3DxHMOe//CHXWohA1El8x0A2B39zcCewshmUCWAxuwc3usdA/hd3rja38AZe59cLl5ttLkFoBPHGEvTSP/33IEw8iEW4YA1SBdVlz6nnmis//Qmu4xebj954HfXDfTv4hoGrZS3x6wcGN1mbMQoxjgFf13ysdNbEy4zoXtLpWOig3RYnEr715dp/tiF9o9kVqRsTOXnFUlcRRIYmfJHC/nQpcXfdBQwoe5mJVzrLafUcjtMiyKal0LUpY3uxKu1lPHD2TMzGW1i6yJbM+E+mcS7l7KD85nkisvBtiJE6qho5gB/DfHsCFWsZmG2sbTav/cv/FSdGfLd476ZAxiZYQjojchgvFcudHsaXX/srJiQPh1EOsyhtdX4WyTbSHHrpJSQJ5u4bV5rrZ05jtj7O3BGGEeq7DW5GvRQuwFd128qWXQq47Do+G5G0GYSbC9UAt3DxSAsKrAumdEuQe3g2xMwg65jHJjB5+gyiWmGGfJnBYqxT5LWM28UGCHGAmLBrC0tw7V2MUW6gv6KSLCOSUXH0AypqFRwD
4Do4H8J61dfvYiEsQxgAIrapZyxNqFcuuXULuSnmNHYhqpUyI4wH5maZK2DwrT+UyDTz1un4Dd08ey5ndXj1ctda65a9RKc5wA633h5i9hvpRvhOEXORjl+lAzdxf3ADocUeLT5wKscsP/9ZBr9uSrQMFxVBFm7gSo/kmH0PSTlzJSFaVyCQbE2A9CBRVBCWhGTrwE3ugRoD+9FmVeSKVc101Ny0tEHCysIj7kMgwVQ7DIwVYeHoShklT7gnxN7nJB+EIyNxLinISz597UBdGI8SDENYWo5qktw4E7JrTD4iRdAPw8XHCE8qM1vV3IzaJfz0lTvwpVdYcHUk396+cgnfrQ/TTtOYrxJkz1EnEEepKUHpcKnJikXms5R8o2gcTs4aPcbtGVQ1lYbOl41DoGisZLXFeIaB/sLtGENunnuPNEVNc1GZ0esNXAE3abZvFXWu4KvjIeCjBwU4KG13H5+NSMgombVnbmQX4ZKRLHIcGhbVqlWxehw7wJn6R7A6jRpggfdw3NnLuJhj3dE6MYV1SAJRAz6Ci8fyvSXSjsduvsIqu018ltAt0kHbw3AAOlsAK14doIv4UcWiHgA0Gi6xDtHxowBygg7ci5vLISSIRLmK5en6lRnEsAukUOPgMh+99x7xoT6IK77qg+Vrwx+DQ7kKbhUzo4HeLHsDhFuEew8MMY5BTOveoboTLg5IVkRYxhEOIId+8hOWHbMu//DhgGuDmXStSfOXQJgzZzIPsYALubt3icAialSltrc2ga8OfH38hOsqebp6sYxHCkF1pALPbZ/9vnOUG5GWjqKeEBR5WsZAD/0C/ETQCax7zvhbD/V3mUuYBMRd50hUHZXw+YGd2U7CcRWql752LhnQQ9cKyMxGcPo8+vW/xYThMcZ5zG/AMOcuXKCO5AODGoaxjePbZXAPxyj2pxJR952JY0eQ4mgfMDirrsp399J5pI8hVfFAwO3/4Jdfxgp6BQZ2FxzYi8rnPArbbb/zVnP05a8zaD/GqtZ9zfSpZ7NacwWmvIZECq7K4Pll6w/muKIRCZcdwD2idYuPc3jmZ4OseUy4Fhap0RIH3H0Kyj2ICfOFb/1e8+Ev30CMLsHNd2fl4PVzl9nB9W1i84I4xMbaCwBn3vplc+VtdnyFu67Mr6SDJJ4FZoa1xgzuwkluGJUC7t2PquCk+d5jBFsWwaPqUZ+uY4I8p46xW+8zp5pnXvoSaXRRJ8I5MaDmGVDfQ8WaPHWyufz6GxDHu80bf/wvUI+Y30Z12kadeeHvficSzw4581ffjxqxDYHQszFBYmltbiLltrDlD8IAFkGKdZAlbiLUrcypsIfgPbxsKWsAAnRxlz5USi1XD6pquN2bS3EdvCd2L8QpgkkoHoXLQfCqCbR7jkGvg9PyXncTZvHh5CViPd/SN6pWWvcyNmgJJowK7qt1EXFCMrx8YZMnWOqsT5nvVbOUJnos26bdIO44XH50SjVmmYjvcGn6BOoMkWysoA7ShhCdREHxEtcg0mWCyd6X/uCPcJp8MUQ9g3n3xgfvNzfOsAbo9o0g/ChGmy995/fpG40tGFjEK6SwlrPtjaXE6Jo6eQoH1mej/k28CxEjyS/+/KeM3VC96PtTE7/HDD0Wt8NG0UerQJPXW/wiS3wvv/V6cxCifgYGOcd0grP+bjmnqmvAu84mqJQrtCNdgWX38eiSxA7gZ2PKQL2MQzI52EoPicVZboMm3LqBq8YP/zLjDWd2h1B97BlD8ux99pnm5De+nmeu7Xjre/8qVgrJOGZHVQ6w2iAK7h8u13PvEUaOpSPR8Z3Zrs59Nsy6ab0amca0y+B8E2SfY5fZ4uDnQJVOoGM2cB3Z2t5o3vzud5vr77wNJzqf+QzLGmNwvZdFUxKoa8b1Kt5kINlL+T0gmWOrcHpy001mUZs8gNU5T+4aMywIqv4u0EV4haAcehUkVLcP4tseCCW7Q3GOBBHRBC9I425QZWKw+KIp2Rht4DVsWtUFxx9KLwhTVStl+Uzm1XJDniUdRNHHlnT2i2MNObPGDlXOQ5hQlbZa+7RSRS2zbUp14Kuerop25fXXUYdRN+HC1sV+skYr+JvJHDZpn8QayyIwPv6Nb0MwBHM4f6GZhfsb+WRBdRpzsWrxGIQ3fYJZfFXvMFwazmEfSjBOxq4wpzSP+47t0SDjVIDMZBRPBnFBDwY3SVX1UoVfgyHpSex24dMQlwz2zsxlrIQ/wljCfA5jG8czU0f2xXzsNuAZxMOQZPpbwOnB45GJpBJIGqX0sHHdZwqV49kRlI6euoEF50JzjQGdqoocfxAutRtfG1eeuRTTysk97t2iA0CMPgaXcjm5pQM8ASSns6M2IBz6DSKhgxns37dVQgDp6kE8VVFtjBBiN2oavH0NQkG6GUNriPoZ08k1Cxd/9rPm5rn3ucZ0CYIPY9I19M5u1I6Z0+xQy0DzHn5TKlN2QlnqO0pSVR13smWBEAQkYYTTy/1FVpFITs5zJcvGKvccNCvtCXqJwG36EF0Qz+8KIa2390mTvCAIa9JKkSA/hF7HIimAP5YUdAuBcMd/JYQSxGiJfcDF9SGu1xhn8m3vs88liLVxhJUijjn8Ld2AzQlsclMyuXrwLvNFjosqk7CdGzCSdWCRsK2cRXpDE+1GzXGPl4Vbs83199/H1Ms+kUgBVyvqJmRg8wn6KRpJrXzOlmeg8+HgjOUXPGDch+TW3d5dgN1K262yb108nyXJBr5zP0pjFJinE8ISrVL8xvkzzcRhpK3MCE1kYx3rIviQMQlpVbeEezex1io9MpHEhAgy1gzNNAVB5cUOrbVFdcLJMiaimKkd75+G2v86A3RnnUeWpzOL6hZhl1HDSo9CJFiBdHPf7mNG/fQbDMZZrwES5gexqGI547v7IGMatpy2Gt2Hddlz8rn4GO2j48ewSBklZAluNPPuB80gRDuBNWsWsXv2Rz9qPvyLv2CwfDGc2DUZew4/0xx5+Wt09kbzzp99L/tsgIGp3iTjpVhoqIOEk8iISg8GqGtLBXHjNiGwJVa+UqpYx1jkrCjXkQYSUQ65vEhc7n1X1CRh6LYJuK+I4bxP6CEyyBp89PiigjHeAV5QaNLJ1ZOVEi3fKXX5nnKV7iKmA93+0d2YRAnLBJLqin7kq19jjgNXeDUBqr+KCdofc+7RFuTgwnYX8Ny+a+RFFryR0DKUfvap28LJjPR3M2yq6tP1DzTdG6FlFh+s2fStbkEG7XgGK5bBKJQYaYvwSDs4Cz+e70Jdu/HhOYjrPVhUKUuYIn6b5779bbSO+fSFwQeXGfeIb0MYWNy6xAV1+57HS+MIsdlmLjE+wScOXDj80svNAZgzlJIJ3owxwa3iy8WYpNbB+rTHIxOJFBo1S8iLEAIwvzIWAYsTaEGEX2cQ2weQNoNoWImw7UPaIA+EMnmMxVLfBMHmmzkGr3OXLiAmF7Fy7KXjmIHG4XmDzrb7JT7t5tGXUQ8sNqpMh0oUxf3Fk/aUHqVHWJOAOGWh1saajnx9zcGXXwqx3Ll8tXnzT/6Y+Y9LIVqwzJaQISZPOOw19GU
taKuUN4p5dwi/pFHs7Ut4yLpUdGvTwaaWHRAVZA0Ci6jtoZoEqidKCihNXZUKmCJUHVNQqXvBZj4CqyuSeDZf0+q9GwQv9BOElFN7KJmiO7ff5nvTFQrxgv8O/NuPkUJ6IqiS6LI/jqvHxOFDkRpKZ+sX864qIh4Mqm/q7bdRkwYduIN49EImV5dQQWUQi5jGXY7gwjbVMesswfRi8XNJtP5oThQvaMJlHLCJmmO6ftSvrFOnTInBOgbx05YWRMDJqg9GtTUMFIwRZNcz2nqtM250UlOVS8ub8bzc6df5lnEmhPeQdhRv72nGu3dxZYkljrrNXZtF6pwLgx5lTJLxGvB21j4SpD2rLbWQo92geP7+qn9ESn6qQpEgXteMc3a2GnFOxVeW7iBmsSIAHNdXb64wqAKRQ2RUWLXH/fI2WcGnA6BR3t2PYhxbu41fWVyLtSkckE4ynI7IOcge4QW+LcZRd32+jB6y5+SzWeSkCjEygSmRlhpKp2+EzgHAa8QEnnnzzebyqz9H/cFtgc6zSXaeSJe18lg+HGcoIXoxH0Zvp56LS0QfZHCtQ6AqFD1betIOVaWCXgJo8nGdhu4i2QpOGFEPVSMPy+MVRx6WzrBHrKwnEY52loVgeVKeQ2HJI2lbAvCb+l3brenglGU97Oxydm5EQlElMajCbgLmac1auaOrO/EBUFuqeqhX9QqWogUmR8doi3NSVllJoXqjRF7BekXLopqmvcBALqwkVM2ZxmjSA1PUvL/ILPz89Us0vuj7GmG8TtWtK99VCWvzotJzVhq5ianjQFVCt8fWj2sVwpM4rJOLwHaxKE/VaxXVXouXbve7UOfHCfLhtnKZMoAB3LupjxfMWwMJ76Hq/HbRv7Zd7ps+JN/u49GIpH7ZZgj7KaoWWBtisVEU6MC3f2wu1qo53J5FuH5s7jY+NnogoTvBLJaO+WuEycHlRG6271kaRwc6pmBlLPMpzL6CUbojfPTBO9E1NRd2N0Tgus/F5MmTmbVX9TNMzYYAZVHVOiZjBBrhaX7RXHntF80Hf/b/AOj51EWrmDMPW4QbkkBuXX4/RGwzJZpluNYmnQl5M4a5Rzo9kFVvQDwVKjuaHMKxoZIt5byIIjzgPxSLGJdT8oxXQdqcKyBbKSJmcGj7tz1iTwixPK500D6XY7fSTxJQYvDv/qNwQoqiHvyDqTmzHUmiUWOSaCRIWp/fuXSlOfNnf5rPbYcThTyOlFD9knj0ajCncHzaZ4gmwxhtAFs6NlEhq4ex476Tr3ylOfHNrzfbu2B4U5jlMZFff+fNSHZhI96sY1BxS/K+1tiRRtpumhIiEU6k1T0/ljIIT1zI9uEM+JcgvMCadIde+jJrjk43GxC7XtRTSEnnxHRV6UE70HlyBMbs/owG3VAjUH0zrtcaElHJJ5woOJB8EJqPRCSl8kAwgCdTiCUICwWGCrGWLDMZtrKMr9PMlUgSv3FhjDsWObZww5cxTL5WRwLSYdDAAToqjmtxgdOlsoj0Neck8MtZwh9niLmM4llLue0hQMcY+I2x+s+oirdYc2AwMwnx0ttnMR8eCULdYS7knf/rT0CIMv7IYJ9v5XxGCQnyi2yUGVVSjM6BdMFdRHVRx8RsoS0YbS+z7EXlEx6Fi8ayw+w3yhadrUWIV+SlKiIcOp1vh+Qo3WE7PKJ6cCPix6W7pksC8/U7fm0aLvLNx3vWxvGOf5H6ELMD8Ljgg8ROvioIexnXje3d09zD+qdnwSK7GDtrn/kWCHHfCy/ErWaQOZmUTBusm0aPLJrqYT4FT4h1uLcqoISiJLl942bT+/75ZvgAriWMVWK+RzuIqkRfr/G9OyZr6RyfwFKo6mp9ZRC2ULxq+8C1Jws3bxGoHDWY/ompmcnKqSNMPPK95Um0k4dZZsx8ytyVc80v/89/yfcwNqSdQe8MOrjv5EkspPubuxhrFpmXG33uecanB5tRvBcOsDWEJmMNOSvUhY9Tj/rnkYgkH9XOtsvMzHso3vUX+kMZ80oAGOrSOQFnqvAxlg4AAEAASURBVMen9jEAOxMg9hE3S9MfKASwSkA1sDRcwfSRSBaUznC3Vpaq6nHKOg0nktJbvNYWb8fvgsNsgbB3r+IfBYdwRyjXOGysAHTGH0twPIMO3Dz7ASZN3Ljl9NQ5HJhOlStnsEZ5BbGKWlQAxVxG2/m60dhcDxFPBlHc0/MZT+1kCY0iRE4ThpiEkRLHdJTt89qIXJc/0kHSmI833Ycw9r59bj2S5sF0SVW+LW2s+bR9Rf22gKcMax5rn8i4AEJrEVpzph91xQlBkXgdM+3k8WeyNNqxwOwHH0RtEaFX4MYyDo0GmXVXwvKvEEkPEpkwp0wE9487H8OYDEOO6+J1NXKyVYdPTetZutBIJMrcWleuuK0GogTL45XLfY0t7EvHT2oe9VCFd4s5+2SO9UeaoiWeHphTYpHhHS68XL8zzDhpdDeMEYKbxKvjyAsvNXsgniEmU8UNyxPWO7V51DFJrZW5UKFgDT0mEHqRAAZavgtXcguEdcLTOBM6feIUk1FHm3n0WwdKBhRwleHqIvrtAlFNBC6iUODvVK1UUS6veHSCzz1CqpolbjgXokVmN/4+N86cpRPPQThE91gmeIEz20QRdP21wSHO/eSHxdeHjq2IJne3k5UDInXGFQCpvrfT+1iYpDhQXeobLaZQn5dJOqUJX4a45KJUivaVQTU2d28FN9+HoCgnbfV7sdyfR5parqNu5d76WJJlKbEkEudWRCY/1bIEzABEITxbYe6FOEMglp288klg7CSpBLKoKw4R452fuTtzDd3dfQ770n86+Onh61JeiWb5zg2YzfVm/S5LBuDoTliuwmzW0ApCJPSdFkSLiima862Ll2CKH0J8HzUHXv5qBvFOCC7PzWI1gyAlQlVZELOajUst2wq38HGcZzBvl2FPPfM8mofWQtxycJ+RSLwGAPlUKbkFTlCZME+3sJh0Lb5jS5gvzeIbLGuYvTXyLDIGHp/+W4kYuUvjBJqKZHeT+TJh2X08uiTxayvXHlo/xvDv1+X8wquvwaGuIBiWmmPfYAs2bJ9zrMG4c/0Gg15C7DBQ3HP0MGJSfxqsD6yXmL95M+7vd65cIMeSr/b8g6x37kWfNDbVtXfeBUU2UMXKAHKQwdxe3FYczC3cvMvAjTxZQefyUZfL2sFvfvePmxvvvxtVbcuZvEogANVI7MWEW8YXQUeBzU9VzUHh+J59IAKcFUJyVtwJLE28GglisxctqG4Il+960K/pBn4Fcd340/FK2drBpjn+KR0qxw0MgWMrI/Imb2U+6VHuIJQYFXzbSiGrGeRQmsUiYHLLZ4AtQUtIEG/pIhkP78nP+jv3kUE5SD558lnawzgFHX7+xjWQHwTkn4gk4bnaUCdGpYzf6UazAcK6t4he2rqWq/oIm55FxjHATcnkikFN5QMg3egu6ggBrfOtrinTz7yIZEKlYT7MseUA2sG2FkNgvHNQCyvvj1l3vXm3e5lNpy7obcCekKlzTEozTrL50Tyos2c1CydF7Z9Fxp2LFzAa8d
0ozHS652gzMEW0fKSJ+Pd3/qP/mMlGvM/Bx43VrWbXyWfw4CYtwfHWXSTXdXw2IjEDGmHFJsh8N/5Qetde/OWbAB1vW4hBj1ud9TbhRsbHVbcvLsyubLPP8Z2is8KF6NiYIH3BodpidMRtAs3J3FWhtugcisvhzLAIHbVBl3o6WGlkXj296Mh0qnF5nd1dQwo5QA/3pvPlnn5b9d/CjQsHFjcdrGohmcD4cJvJM2fEydRKURcqQwdUVafUlleplXflKvl7jQSiRNbCM9BmgouMCtyQvOaXsn1Kw3QB0YFP02YARNrbl0Fe4JRBPNm1uee9EiZERz7Wzzwkkki0CqgAmrZxr3lbAhOxBeM2UgVzozUqDIRyhKHcW3XFjYOEn5YgTajIncxLOH7Th8w6Wn+XL/TxrI97l8xqABlUs2Dp7CAz/La7h0z1eNC3rofxxBDPhuXcFO4Gqelk6+FhvtTKfzKLcSxXxl+bgilmMpIxkzuKbSAJJLDMG7WQESbZHQwPAKV9H5uWGltgiHVF/eBjEIhsY3VUk4CJO8OeqI4MFwYYLgxCZEqaIrNLlX49IkHkHf3Wt5vdOAy64ObdP/9zOhQznA6HDA6NwCHXVAxvElVEG7pmYLvbiaC7s1AsBOLgSiIBQr6iU4gQgm65yq5Hcu7Jw/sx+8HhkEYiqvtWJAogTnIfnfmAzrLDsODAnfp0kaCjthDnG0T6kIv1IBlcu144X1mbHc5HJ8t1xKV6aHaU2+w/fLy5yRZzmwDTQxXH7zUn6tbtoXokYpUBfR7d9yfqB/WVIO2YjGXg+HHrp9zia6W0cT0G7iEvvsgqRdZkt5xx8eb3wzk1oSstJLoQaBASwgdeMojMc5AHF5Sx0xgMtFRSIkdFAQEkFrl2P/kbcbJXDs73Q7TZfHQt0YNAL2SJ4/Z5Zq9BdDm0yG1YJ9trlBQR2Wv7Ts+GmPfxhnadeSQJzNL4x6mPjoNIhTFXNg5Tp+k9mfVWetlPIZJa7cCJ/mzboXXLuGUv/oN/i/IcMyxmEd7r//0/b9awkDrGEfktxyzGp3GwZLwjP9N3bP+LGB9QtxwDbbHU2/aqEhtYYgCNZNehoyHGbfCylyUWQ0SCcV6u+/jMRCLFjWFROPjVV/CTwkMTgvmD//Q/aS7+1V81V/B4FUHCmdStQYxBzIGDiHYHVHbaIGbByWn8enQ6A/HkikXs0lQBhJVoAIAqtgeHmRQkhZHZJZJzP/kxkftwn2fQ6USQXqqajTcpS7cXJ6/mZi5yZs0Bx7rcFoROthCdKpDru1MuCOhzAVzUm61mHin0S9ZUa7nxOw8nMrVmmVCDQ6jZd/wiEdp0IrxIK7FpSFB1cfC4tUJnyrn58TT17ekbgRGgm9NxCxD3udfezHvL0w/M+ZpdbNvm0gIHw3Gh1/WedqbOIJQSQiT2gVUwKLeSNnNHTIYavkjifP7f/PsJzXRnZiYDYEOHkhGqBioY62niIyaH4qiE5rxQkI86u/mPSOq6/L0vvAKSrzCTzdqfWdb6MNfVrEA09I2mCWe+RcyeHgbT5BlXEKQ7+isrN51rKpJKTBZHPJM4ZXuWmP15vefEcea/TpDOnMr4V5Ow9YrHtXmmb8r3Q0is3ezYu+fUs80f/Gf/ORY71ESYrqrg5Z/+eTyIl9kY9vpfv4b66GKvdQbwp8iL/VvQSuY+/JDn1LXr+GxEQm+o62me02wGpma8cRUHuPkrjknK8tWDr3w1lClh3PwQ6xJiMv1A56hjihjIRAiIcKBTrnEo1p/CE9YwT7r7LWpCK/Jjp6fyhrnUfp8JPxBE7ify2mDnAnRtKZN+5CewObxPB5CQsFmUpdmxDATTkS2CqOr5rofdXZPezvM/55YOkp9/0i2+L1e8L0jmWcSXsyPLoxIYwUVGUq1kEihBQ3nv+IE8hGna4aDcfPgLt48PFde6ysQKZ134V9JQhJIZIih1tUjmqpBGI3DEEaw6SkoRcU7iAP5KdksIAXOtRI7BgGedgzqlScAHesylUlQktZyMzSjXlYEuo5XhCDMH8v1+y6HqY9oAzWfANZKDMtMO7yWO+kvefMg3tSa6isSgkrYDT7LjQSE61Ga9rIsqCjR8Z3nAQm+CySNHMt4amaK1MmpejzOBuoRL0ZJ9A7wXMOxc5fouOOtyhiX9whgCJE/S1+ORiMTOTxMocQvEXWWAc/PMmQwKjUwy84tfYNVi0g0EMEzpETx8D7z8lbitXPrpz5o51KP5s6pHqimMKRgYjxI5XT1yGH11S9fnAAQkBUFGmIHv57eMH5H6ZyxgAEIHN7d4c2++Tbk7IHA84oywri8C36DLBdxFFKuyCCm5qupRXEoU1QE8QPSZN7Is9gRxzkS91iMEEjB7V8ormeUtf8gz1ibveUMSuyUBL3xAfbJLLmZkEcx6KTVlCgZqcxzhz0FvPvab9tAnKjPEMZOWuhSCFRkhF5A1TAW4+HwIwhhFsk9gGh+F+Uggqr1X3n0/urzcV4SSWOToRpJ3lWI5bFupu/WQidhXwmZDeMQMLjcvpnq38DYAw91rlwoR8+0A5QVqJau0R+tgiIx3mnvtRwnG52F8AszDD22H1zyiVJAWSQsC2w4ZnoxQBqmXhn2U537TEiXyBZWL9h8icKGuK1jCkjt/Jo4eibOtdaHCzcIMgT1gHsZ6Mz/3gFlvd86yCvV4JCLJR1K3wxoCHmgrP/0//g8xvcolU1Eqqwewk3zjbGo/ii+WOvb4/r0ERZhr7kBAwmTy6LHm5O//YfPcH/zbSCHWH7Bs9of/1X/TLLO12jrLNIXYzffeS8cLODmESGAZAmALBzX9qxw4ZsDOu0163701VG/sWAkiJlTUKweI6RifY9otvk90lJxNAqFdlgHk7Z1wxWqLV41Ir5EulQ8gRPVioi1jizxMh2gSd8NMl5JmDxKkXdxtsLKMMtvtM8vVcubOVPdYx7+ME6YqZCRQEBMgUR+5oHAWHgWJvAQORLkvxOutiGuQhkFWAH4pZs6MByDytBOGs3j9SuqGOKN8TKK22YZ62C7uKkEIC9tsvmAh7zSKuBq09UsDwVZRSU03dey5uKL7XsKX8DTxqm41TVnlaLpIERlgfhKKsC/PC/MluYeMiaY7G2+UzFtYKB2f7MVZUeJcwSfwxum3wqBllB4Zv1hXAKTqtTiDpzLjoTWYqzEGouby2kG8SwWsg9qO5dPQTFoXoqVOtG2nj5P9I/pupUa2oADUgWSPgzIpU8yXmvlto6rEQkJlkpaK3GOj+0V0PkN9eqgrK9pcEWcHu6/InmeOQsmEp7kt4gYXgBets7PNG0BsM3HoGpLRhOFBjcAfK8QDQmHEKqv16FiRQw7hnuDZFxxLxzqmYE2YLnYit3zXz/MyMC5qifiSsUe4KOUKfP5HZNv0YKrN4kbE4n0fgHcw7syvM9YitxLDNRPOC+mjZigexw4OcEt/8h1IN4QlrZcBpjq/Pk/xQYJTOikr10zhKYlKBHs4BSalJlqU3H4gbu9YGN13Xr+4AND6iojUVaIAkHwKDIUt8IzZGIJ2XFIP61TUQ
oDJkclUCcq8ODu/MS/BAR8jpWjJ1BXFbb9VaaN2ybgiFfnO8tufCBk4pk7Uxzr5a9Uhm+Ve7KV+hQndw/nVeRvz0HJq5Jz58+dZ36OnRBkzBaC0y77SgKP1a/nmbDMLgU2/8GUIpayRsezUAUKINCNPC6sEonTxfSpQAcL50SVJPg7ERBEaRcN9ZmFKE86KM10QVlkBaGA2K28YShfICEQPnQuNZKjXqYewcgBu51VqLhYOIAfgtJl7b3oRUFdov7zHxFiA5B3I2dOLqiYCQFjRnUFefY9US+yEjAUkatMEIZAqtoF6d+rCtcCSAehdGkKxMMu3snwbYHLnzP8A66d1kTBkTm8/1hqrax2xWA2DwC5NHQxn5VvyKQgr7SsBjCqJZLE9lOu8gJHaRRwDQtRZfQeWBeHNwnpYP4lRF3gG94xB3G9wEEIl2wA0GxoltVXG1Olj3oXAvUCqKC03RAyOWOCEi+1UitEpmzIo1E/L4gHIRYgnzK+WOzgqBhipnyUNBKuD+3TKJnHS+01+3JdyWyQ0r/YnOeQwqRchnlS0WWfcuwXjuQbDUDV17coKktcjROGFdaaewRfydChgOKc5iGmUqPijwNf6O5uuY2uHKCzftrfEUeDb1sV82+PRiMQKkKlcuJ1SDkEEgFQyHMHKUujcO6ebRSJVDGCfVjdfYVCkeCSLAHIVgtG5TAc7He5GmOi58fbpZpXJRW30AjDImQ4zz8IFa2c5r1AJZXkezsuKNCcBXXlmDNxVv4fLFp28DdwAsMb27GeMI+GmFYxj8H4VLtj1B1CRJCLHK0FK1DYrK1JqCs1glKSaPKOOAPiJQ8cz+28m7ral+XCcGMZTeMGOwtWtbzqT+lRO1tHP7SABkgoAP45RGMUoMNt75AidzsIziMWybp07S7ucBV+jjgUZHedMHCGQGzq4UkxiT0jV5ESTlFr0h+/GjQqDSrfK+MT9iCCvpOrBcGIdVQGHCLohU7A6wl9jQdknnTEVDqrCMvDBs9dJSJ09lToGX1CKuKzWMeEEcxu7iCnwiUcHFimIfFTpyhEG0MI8FE37NxiXbFw4X/ChJqxpvBf3uPcIvCUI/P3O//mfok2w/+QzzzLmnW4+/H+/18ydeR8pBPMBVrU/VL/shzyjjSHm5Fb+PBqRWIk2MxsAIyqEAVQl/nBiK+uPXVDXMTNuLGBGpdIZeIsUHKKFaXqg+Lf/l/8JtwHUEDry7gcYAbSAtdJGgPk/q8ZMTz7mZSgYUVx79y5m2Efw8lybxsWCeZeBUXR+VSyeuW+Fg8NSOZA9BbuP4CgqlrPoqBk8DDGSo6K6EKblWEUKtwL+pVzVE/7E4iZCuf+7A+ASjALVTtMnC+8NiicBZ5yWr/lj4bQ/BIIkE+HkYH5fDitHG1Oml7adMrkUzFOH8FGzA9Mc4Ui9qVOCxQGvLdaa+yx58JW4zgepg4+nmPDd2jyfSVolWAavJAkXRur59TLMxVn48rG58aH9zc8t4mQX1ktrFqIvdXWcMDjKfAOuRwMwj91w7qHRSeoG8Qku25mjwLFcdl23b3NKX3AlrEiyjQQD4XLtn7SpTd+BkwkhktpX20j/1JExj208973voW7hgQ7DWMazwHXtxm+ONG8JRSIVrwVujDrddeL6kYkkHL5mksa0mfPMDkyLRGYAquRI5bs73iTtwZfNwoWzpZGoAyGQVNY37cG3aQDnUHjKVNngM5KVtc1wTKSE6k2sROiqcn31TvVWkbMc5kt+DO6YrAgSVOAK2LYBAXIhw/aznIIidokfhktbhkge8U3jDZRmxEgD3fXhYZsOtr7tYcfIrYLsQfjybZAiaURLS+Bnu62nd1zrm1SPDhx8QP6l/KLG8hXIxPcgjlpqjH/UUw8IvX91mY9VSQThX1QspEEG1MDL6CYeaT/fp/ZgZ8ziqZYPRcABxh3sqYLkHoJIZBiWpeqVYNVwGaVY2iE+2B77xLp5za9ep8DOnxZenEjJYd95xYPOM+47cPWdfcM5yTxbnrBmWwnMvJaXl4G/+Uj45IuUCj773PyEiedO3nzG8chEYgZkk/powrT93vjMN2m8iMOLAggrzMuuw+f1KH5Q5U4Ej6RKJUspycPXLbcwWohqRS9pN1nmK6DVq4cxFQ8fPxhpsjQPcTDeUMQ6I7yKa4oASLkUbYejxKTOkQ62iTqrXpSWlPp0/lIVTadKSsvTQ9ZjfRAXDLjUIoNE/dL2EhtqdO8RBrQQzxr+TFh80grhUwAUIik6MXXUwiOntcNqguTcEqRw4lfqvQOzJEl6rqx7+/O5qSJx+U6C7O2n3cBCdcp1E7rlzLOFmu31O6OF1KMSMAC1WN7zR6JxbIa5PeMaX3BolvVTCd6Vm47HVHfNTbwo80S6pUh4xSughO2ReCAW+5N/Nr37iArUeVDwqRt/Qgyd9+0FGkAHyaxeRUouQ4htspwqrIANbwMDauxlrtMf3em5fjQi6QYoH1ufZNoCrqj5BYihykCae89tmlSmTfJAXaxlgBzol6v2Lx9o9SAPB9NCVj0d85A1iMFgewu3AgjHGWrnB1Zx+V66R0wsOm6E2ExuvaZ6lU0v+VaC9Ii7h6VYv/aZnceLnRp4q4qBZc1xAl7baIREs+914kkuiis2ps9Z1lon+N40kVpQ+W6891oky/TJY5TPCrkO/Kiz9QdR/YVofJeOo6xOOqtF4fWXGnf9MR2/wv3a5yaXQfkzP5C4hx1pjUfm+n5Ns4ZvXWU8aNyugpS2n/QaIZCEezDfT+JdPUH0dsc1xtx99y9+ENNuxmKUGUYjHGEs+uURRRDmgM8WfRJmQt6rRGLsd32NdfG5A2i+tV6qP0r6EAxwpSHl58kr21YuylkY5Gift3eBD5KtwIiHvI6Knfct7Nq05WQC8/DMZ1wX+LXPfVfLbr97NCLpLszM6z3XVrZTGM9z37Yr7esu2MrVb7vOHwNMfZfEVv5+zuPjan62LpEyck1swdt6DDNwX19z0gxrG+Mfj6Rn286sNyBtjg6ylFtbFkIpt4UZpL0pJe0UWRzUOjeQPGmkM+YGvZYw+zHpuj21m5kOjRGzyi3TcCnp6Sn1MOsKLwlGZI0+LJwspoWuNcl/4evj1Ma/3Z2bF+UPyWu+QUAlFUgqIXqvBXGS8c0c99tYIHV1qUeQOTfkDtGsLJXJTWGXgNJlJJgUYY7qu2TvoF3jAIBPGc68C8NtLAS9I1rQgCZEknkn4u7GCiqDE6aOGUUQ210QpVxTStTsWrlPOBeY8JLvC0MxoU8L3AJX8+V/yrAcD8vuur7vvqTo/P1sRNKdeVuBFMgsej0i5ngXixCAzGFF22MnZX1CvVtOnsbUxwEcbWz126SxfBspRyJdOKb33PUOoGfTWUMYA4Z34WS5wkTa3UWuJ5o1iMI5EjzgqBdfohbZ2a75ygx8yhfY3FdA13p4tgy5VoE413BSEQIpItHoJrPAslJVDwNID+9G/UIVGZs8zNZrrougoD6taba+tKEMxpUqcmXybmFQO7BtftpmxSrcKgKVdPUpNfMDicIzP3m05y2QUj83
9z/Z//wLqIwgPpbEEECSkJ4DfsEMPwursCjNfcR6f6SILkhrWNWsd5ifRM11LF/USF+5fh0hoRNr4gTvNgxDFc/AD5EwSg4JFYmvZMuAXsagyhw1lzqTb+cIrHfaVWDWefuxC+vl12m/b21/PbfZVJial9em6DwzLc/y87rr6Hnp5d/pqknXm896WSvn993XD7vvfmYFCyo8rJ4lL/ITCHI8EYGHXCPG5UpyKhBAE6yi3LNzGOXMlmcuICJa/Dw2dtWNRJ4njUEnnCV2ADr74YeZBCthewR5IUKrWdoikgpaTMKWQZljmHkdvEeKYFruxbqmx4FWn2Nf+kpMx0ZrmSSa5ZBrKGBL8xdY5Yfqpz0/Jlu5ev2JKO0YpRBqSrel5aLzt6vbui5TT2ET9QkYUE/hoSk0qzkZuHutv5bBpy+9/hpMxCAbIr6ZC+MC23LvM1sNLGQsuVXyKRW5J+3o1H4mTPcxBsTSiGftPQJUr2JZdOHb/mdPsdfLSZYCv8g3m4lSc/2N15kxJ8AdRpW4gyjNLD9MgvGijCL14WwldipSyn/wbzeedV+brvv+Yfn8Cvl/NknyYCW77y20VqxW6sH7rvSCvZua+Xrn6P6+vfZ99EiAyFX7K1fZp7stKxHDIaAtEEaiGlBqjLEOZf80ncDsup+jvjnvYPki9dSJZ9CjXY/ChJVuF3A4pUVQgzSpW27yhwzsS3RyJqgySUUapYnI00+ZbrGgmdsc5i46X8FcDAaGAd5XYhNWtf3lLJFQUn5Wshwlfb3z3AWprsswEL/1PcgWZOa+SuAQgxycciX0YZB6Wf+o1llQb4NYfqiz/0qddvIrNeDePHOjs6Q+VK1ZlWcynFVWpqpFrDEOMmSsazUksiHGQv2E/VnH5d296DNWIV3KoU4Fd8jZ67SjlPgr/+3+5mF5dL//FTP9/InEgrsr8rCKdlWuIkjXo51L83nwe56FPHxuJyZ1+zeP7lcxMidAJzgPM2IcrXFntA2+zTgFF2qJRG4bIjl+gAk3PANYp+ACH3UPkTN1TF0ozKJiV03BUd90aQ+RSHk4VvIJYXCIEpk5IhdCbTXuuWg5+hQdOHmEBCaywuXolGE5MUxwDiXXFCav7eWZ6R52kKYMRqmL32tsyLlwf8vxvrSZWXOQt7fvFu1Az/Jd8rQOXChRUh+eW9VOmdx3rrFkQSQZh1gOz7UoGnLIeYr1xRMZs7maNO77SPN+xmlKt54NNAD6RjhEemG2zWSm7ezk/7BGPvDsYWnJY6e+v0L6B5J03z6+Laq7S3kM16XPOvy4IALAEn+FmTDqPmJ1odPjSq6KBFK4x56J5YIuplpARx9nmelu9k/Zf+rF7JJUTNR0fquClDxBEgjIWXnHJConCWpAXq7vcP9xLWpyyZiWUSuMpbUVJ0Air7tCL3U1nxaZuc+1Z35BcM8env+mX0nZNp4b2h9ZICBAxMqxq3oaNZRxxODuQ/EaGMN8PYDkXWI2XVIMOaZ865PsSt6hnrZeXCuhDOanG45xvVwzdA8nVfcSMY9nv/at5tjXfqcZZ02IZmMtXhuur9eNhYwzpgMGBadLO1N2bXdt11M8Px5J8rgalE4rmQeRgGxFqKpmhEvKidFv+wB+RULP0aFBGF3P9b518VR2yLKz/QexzN/UYZJ5DvVlpQLqFJmAJYUgC7FQB55ZNm94JydEghGDKrv+MiZxHUx1c1nHHO1a1UgSJvScAHUZMKPYfMuf0qgH/3a198FXH7uvaYNtvBWPOz/a5vuun7Goss/9CIwDa6DxATKeq/lYgNe2CzUp6poZ5j1nCMr2SByqp8Kz9x6bjOJpoGQR3q7qXME5cZF18qP48d2dIWwPC54Me5pomMBQ4tVL3DFSlSqB9CeAxGo96eM3i0gqdNrO87ZaeXYQoBBGHbfkvemD1HBUvlHF0ltXFaksG05GZGHYGgbTSIesY0hna81qOWukCXnJI0kb3R09xHFmWZHoehF/ejYb8giXdJClEBZ5gFjlHQ6NINB2DwQoUleE8PrzPGp2Vpk2CIZ6aKbu0wOZmfMNwssaqtY2pQptwrSUNss0tonCr2TqHCQUwWUKBsHW6zcholC1YkEUPrR9BcvXKkSkpNab1yUBK7oLOV4yR9WtqFwysaJ6pRj1O8oIcXcKfToXv5lEUmFlR3AdXd2O5Zdu9KHvfKQ6AIFo0i9dzIo0nN9WWOC1yhr4pdvX4qCo27chWtWjo2OvoQ6wZoUMUlpmn+X85iIm+VzCS2fzCG6b/Rtxve8b0qkTZ0nVfBwIhydZS8MYR1VMl3qNRM44b/FL3c3P3+dxtHBI4x/IL0sAAjEG0hgxRgiwMcDqwpvnT2P1u0Lk/YudaoS4+d5gChJ8YgVEmpZ6ZowCLJxdjykcqbsGvO7NDkF0OhC6XGKhmcNfaurWjeZLe1ktOXSKfthi/clMQuGak4ElDMbQi5d3ZumtcwsPu/GLcPxmEUlFpHoGguFE3Mc8GY4kwvlCBKzwdtTgM5CTk4uCHIc4TnEW3DXNGIhjlQly0JFGmnQ8UTJTu2CQmYw5kUvIMwhZJumiNoBMm8wVae1qjNKCylUJWIuPwfW8zzOz8dipZHne1vlzQZCPZRLABGZGO9x1YF+zm6WubwKP1RgzUC9td77D4sdkrIi+uep3RaJY3UhRTdaOs0gPVwmz2MIhceUeyyFcSyKEyGcZZnT13fean/7P/3t82gxva0TGhKsFvplHa/utMAz7CsLjbwc2XpdKefXEjy8ukdgbDztaJMsrkhQi4a4b2QC6z3dEeUtESUagAlzMFQoDuMeziUAGlCEa3it1OuvJIQo7zryL419bJyYUy5bGdCWdVxC/qApyUE2iPetF9Uq9yLeqDcWAUPJMvjV/zq0cJPWvcbT5pdy2up1yzNa2wLnFOaWA8ztux+ekIQ1vGUMtH2QlzGw+g2AgG9QuMuVjGUhtk49UmTwXUziqk4jPF45V3J/ko/fPYFlkvILaZdDrgT637iv9V4iDj80gmZkh9UwO/HnKxxeXSARMANZCqANA7tsO8Y0dF0T2mYNABpPx6eK6DAoV5foT+bPjmmxRx5CTybzN5txbb6J6GUl+KV6yS+zNuI3nSLx7o10xcGWmXi9XO07EsKPjkREPA6SN7i3krbXLpbZ9qCXbm7ift75JqtpR5zBDOzfhUQhLNHgMR8GxknHnmgtgmHIZfzg2cK/4G2c/QEUqE6xZ9typDmpqJCkPaLMOjkL7YUf/CGGQkJK9zEWtsEiq9AOOkXyv1F66VRwqdRMK8wIgw4yJetnQNHv6tfUq9aOUxwSWh9X9V3n29InkQUKote4CnI9CCDzLYM9v2l/Eda4hEjo/M+/a4J1596dTnWfYVoiEfl5KIG7WReBLdeyVb8bsaRywuavEYsIsalmbWqPozAzIzY/ttMmg1ENOyjP4KhyV8Q5EYt5KI82hIUhrqg4PgVhHESeTiQmrVIh1p6m0q20Phdja0r6a4Fc5+71Hzm0eMAYql7pV5BR+Thxe+PGPcAI1VCl7jbANtfuHSAyJmEg2jsF
sh89oJASPNa5DJKWsUhTtwgM6YYwglEEmCoddzozqucwGo8tIDWfglwhxKlzdSk93nV2YnN3Qx/pat+ohnElgM+YnTPymSiyb9zSOp0MkAqEeXdeFEHxRgFTuy3X3s/JcpOcdAA6gq+SQU/Ms8wAishyOd7pk1LUmBtLbWlPf6G32Hj3e3EP9crOZu2z7lo4BORIC1OggLZKJK9ZBhHfJbQ7q7mSl68DjA6YKgvRwjX9m6qlfCARKiUkY5HFhVurf1tu6dw4v23svq2rTef+wi/u+L0hV8ijIJ4zCHEgnoQYhaYyhoJbh+onlhcqlc6UqpUH8Qhe0NTVLu63W/YRtGbZDODsv5JIBJ0oNVj6gcYK6GsfAcU3/gI6UxfolnIy64grHASWJu4B2tcFS2yJ3nvu+dMDDIPDYnz0dInmgWUEan4E4hQDkWSJ/IQB7q5oH7fRsjyCB2FF+468lkvhtyfnh7vHd8lpiMVgB8yO60O9/7ssoz0gAiGSKnWWRATBL1z2U2F/q6qpHWRvfIoez5vZeJBkdT+Gpk4GWXbXnrl6qV6oovfy8LmZhEIFDJDIEj7GKtwiaZztrm2xH5xBDuu+9/iQEqek4m18+lfiARWDXDZeWQBJulSx1n3FuJFstgLh+b0AHLVm2MYiqlLSdVEkPa+tRa6pkzqw9zMfokDqQSiBuZ2CAEMcm8/3Xmj2EizXvO0iTO+szkUiZV2Gt+oCGDCPPp8k159zdD4OkeHp/niyRAKwc7VngdQiDFyK/9yKJqk4Z2RUEkAh8nnd2fpuucsdCJBJHOwaBwxk82bCXew4da6YPE4mFMcgCUUh62WxnBK43jERwJ67ZS+fYBekjIo2zeIrnDeFYxZKUwbmIe+tl8ZAvnNCxhbPsrhjULaXOwCt5JJAVXOb7ITo9ggeZYDM/51Dcjm2NmXfiCaUtInM3HCI9SkHl/cMIpIWfsMi3FS6eKTOwAj5lnNAyCVVOmYXqJ2qpWxUoRaBnIq2w05N5UcmyuzFqJOXaDplHJhJpd4dC7ESJBKljqNBxHBn3IZH3sP5kiTX0RlxxwvSZb/9hMzU1zaTuOpswvdWcP/1GM3v5fHPz0vk4QA4RSqrpJcpjW/+dtgRLUhylPvXjyRGJgPCwM0Twet0iv11UET+I4/NWVajPPee6c4aYRIoQVxm098CdDJAwTnACo9qLj7MXiMM0wPJSV4uzvmSY+RDd2+/hhHfhrdea2WuXm3nmTkZQw1QDdEXR/VvM9p8cVLVBFUz/I1WsQiSqcpSHepEJSBcfcRjXaogIKlA9dxIB5XKsEHlSL2TDtgbjhAW/EAZnYZPnJs61Fxzd1+39DkJVeAJT4SXcQigSiWMxVCylqc8gDuGnhHODUccdW8RPi7GDfFUR/dxzfqqaVksrRfEtSF2spWOLPhZalZCm+MRJLDCkCYijSEg8G7ZgNtShr2egmYaA7mFFM07vHaPYQ50SYg14F2JIYTbwi3U8eSKx/QCjdHJu7IXyzB4KwpfO/hhR+J0dZ8dXdcvOl2h4VsR/cdqbPHycwSKuECDlLWI39Q+yMyvIPAiyT8DNFwk4cQviePdnP0iQON0iRib2xlxpoDUnz8raEbmqW6ChimTwX+Y+JJJiNUOaMB/igF2ysl2mHXHpMOodZBwCEyHcTWqdQfPeE4dt+A4MghxdBBIemiSBCxUoN6bz8Gyb8533wk8i8dfCx7oKpxCIz6iH95z93s1bSYzEc3Wl6mYhjAzW1SxF4LZfuKNGzoi3ZXOnO08fFr/skYl0ym4CEMreI4egJ+Jz4UB65eLNbMjawxYLxgNzJzNjkA0yQUmBZCaRFAZSMueRRwry7RfjePxEko4EupztVBE6apXP7Wg7MZ3bIrpAM106lGvTeA03LsTQvgMZTCeQh+CKI8xqj4GYd69fRXW61bz7gx8TmsctrnFBQWq89af/CtMt1hxUo2XctPVXEunlqtMn2MecDv7owvmYeoP0rI1wZ2AlhLPCLhzqbyOJ9IdAVLdKhHhxuJcdvPoJBDFMUAQjqchp3aHLveYPsd3YLeYK9GNaYXvnYZYXDw9CTMLAtvKrnDTEnwgORcKIlxVZQhTiEOVW1bQDO2HI84704L7sU64EZECe8UaBmWbqu9evxQys1Uq3kRIPuBspKbVUzxIZ2NMPIjWEk/xY+enSZ61Xg3gYuB/l5R//RXPhvbey/Z/hT7/6j/4ZA3NUL8LYTg9PN+fefx/nzqnmq3/0D5vTf/mvmWi8i1cCamctqIVHamHD668CIC+e/J/HSyTdSFABwDkd65lOza8SQkswIYZwQKrnOzsc7mywhWFW/CnijTCvWuS9joVgRwJwL2veZVA6BiIaHSRIjmRw11UtK/2YdqeOHyVPCY+geRDMKoPMrG2gU5RGBnUD5Zh0HOd7BuMDTA7K+eSsnMmG64rchRuq4u3uI/4sA1+oL9Ycl7Xq4iIiRC1zBh5iy+Hg3/X6wYSaH9mTb44gCGgJBVbisMz8hEmYxg4MK8Ox/kV6ADPHH8KYZ35nXrbHwNn6r5mHG8EOsvuT21WsyDzIu+AsVjuIKV66EFKJsF9gIMMYBNl7jUUG7B1zbRtVnrYmfKjmYsYyI0aOYcmAQn+F4HI6VA6j0rqHoVuRD7D4K1Ky0+YC09S1wqVA46n+fXxE0jY8XJJrgV9074L0IlLtwEooCTAtN+Sd+q77gKvq6NGrE6G7WY2zq9XY3oMZdLoE1bHHKouGFm8TsY9geCu4euthOzw9nY6y43TA01XEDUUHsONPsVeeOnpYNGFS3XbO1YrWtdbT8gaUHiDK5ro7LLEkFSRTatiOeK1qBeM9HzHgx/7PdngGAtfc6QZFTsBpQfJaPV0JpTt6EFw1EafBchTCKARSJAgvS11SIIVyH2KJmtnCToR2nCERCDfz1OQtoXjP+KkoSpSioYA7B+uuSNSCZzqtbX5rHR20O7+RsQ3l6Km7Tiwu42ytMcnqYTs2SCNx6OUsoQwiQV3cRk/BiJxTwQtY6WsQceC2AoyU6Ft820+/jk7tZZ39EbQyLIrpBPMtMEghKaeFSYiF9j/F4/ERiY1qGx4AdDqZjqVTQhhtR+4QCTvswtXcC3H6+S83B194WfUW35/TzeXXXm3mbxE8bQyuPLLc7GbF2/69U7FY3bk+08y8/Yvm4qs/ttB0lPt+ZFo83HOrOfqVb7C/9wnWd7/Y3LxygQ7GZV3kghBmz57F0ZGVdFpwsMQouYwRrEu9ksPucg7BgbtmZXcTXrp1I06LWxtYy0RKkH+LNeyJlk/ZS8ziGwBhCQl2/o2fs/XdC0FIl7eKWAaxg62TswjwIBLI9SEKngs730b6ShQgWiRFyizXHRULIvFa44ISqw8Ov+h+7LduE3LW/cxZMgyxKGHKehese3D9DSJD6kVgGNER1EyFpnNKfRIHS5I38Nr1kEDcSXd4kt2OJ/fHQBILGmVOHmB3WyJxXnjjVRiD+8Ncat763ndj3ZOJLMGENGgYknUR+EzQF9tI+A33gHnw6CKYtL2FwYPJntT94yUSW9EShx
xNJSaqVEskIld+dKoIOP3SV7IeeuLAgagBNy9cz0Y9l994O502gYg++PyzEe/zH11jU5o38WA9y0Ytt3CtuAaXLJN4qlVyfrF7iHUTuw/w3Stfy2C1D5VNK8wyKxDvMH7JpjYgEHawBDxw3JJBbA/qGT5aBjhIUAfUi7iXgyQrIP745G52bSWOFYSkircOkilmNlhM5P6QDuT7+0ciTTQRL96+Ec6pNMr8C+de1K2BWI4+3t2FsxZuWpiMsGNMAcIVdaowmhCIYw4Q3+shdpHaZA5oEyve0kezeDy7LV7ZgtpFX1Kc7vqTR0/EIqXEDJdX4jIDb4hZCYY/+LfB6TUBAwcJxMDTI5Mwpxe/TnjV46xqxBiieRtJ4f7v186eYzzmehI3x9Ej+Br7xR9nKwjWv+NQ6bjPbcZH+MadplTl3Mg0jKDFE/Fl5+i+3nn6pK8eH5HURtuiXMMF7eDun5IEFWSMCb1xfoe++jVWyR3ERDoCIt5tbl+60Ny+MpMVg/ufO9kcePH5Zi+bQbpfiQHWZt56i11e3425Vi/buMWrmsGh/Q1hxdq1f39z+JVvNEe+/rUMvpUOC+7XTfCDucsXWQR0HQJBjUJkuT+FG4Kqt+uu4Uq7SA84dHG70NUdCURTxplnMCK8rh3r6PcihXlYj1KXooa5k4MTa8t3bwbZBIdB80zrPMvgLjwBIBgPUaJceeNdF5IAt8BPmClF2l8fsBogrGoP9dL9ZogxxjILnNYxOtybnQ0DWcRo4C62TnQ6Yeie92PAZXwv68+VgLTXn+OSWx98GCIJAiNV+ahTC93mx9hA9sjXv4HKtA/mAZEQENyYzh+9925zm23CrbMu7+bnEt59z3+p2ffMSQiE0EqJIomEo91z1z5KH0VCVsKoZ3K5r+25BRYtnPL6Cf55fERSG1Ebjo5bzLYQipKETlbtGgLYx7/9bzTP/p2/jZq1pzn309ebmXfONB/+8PsgFhNyqACqSN/8R/8OptOT4E5f8/N/8b9l56wFEHwU7rQIF1wRwemYan+3+D1YlZ77e7/ffPUf/xMk0ySER4DumevN29/9E4hjJrv+ooiDCMXMG0JBp84aEXB8EEIYZL5DaSKnXO9lYEsHHz6mCRfkQi1xHCA3lag0DrigyGWqttGjID3qBls9m0YpZzhUI9G7b30G78bikmPL1u87QIwWfhVm9ex4w6gswzCXEVRPx3DC1PHb5p17iUx/a+ZKTNkbbjdhnchriHHdqb/3HTZYAtEhKAlOGG9A7GFM586Di0oiAlhANMIzih9nvQZ2HznaPP+d7xAkm28hyh4MJqvsGXKX8eASG+Dcw73HsupY8tnf/Z3my3//j5h0dK8S6sY7QxTduHCNsREqK671HkVy1sYX5vAgNOrbJ33+/ImkEoUt4bqjKvgckV2liQA2gsYkHXXjnfdRnd5XD2iuv/ceiHwlXMiB7q6pE83zf/v3QKiR5uaFi83Vd043Z37wg/gerdApcn7dsbUiFa9V5gAY8B964cXmlX/3nyJFULPo3Otvv9vcunChufr223j50vkMSl21WEKbUleuRWCl2CAD0l372MbA+LbUWRVpAfVsBFVijPkFA94xH93pK+s2MLTc3GZwm6jrvHG8M77/cIjEwfvt8+9EwixuGZcLZOPdwCAz3agbEid/Sh06uXKh9JCPC0cZS8tcJAYl8ABE7GZJ7lnpgFjL2qW/fjUMZO7Sxag9vahzfVr5gI8RGg2htP+55yNlJXBN4ctsy+eyWkMqaUHcMZVD8GE8jp1QXZUcvNcYsop5WwJT0iI7ILjJqMqz771KP5T2qGbaN84PrUEcEnBWZKK+aYEE4sCCxWlE13z40eIP8H+ax+dPJB9rjVyh/YVOSoc7uNQFRES/y8ByHt3ZnaruzFwEqEQqAQnHJk5gV2efeH53Zq5lT/hrDOJv49ZguB5NwJp4RQAwLIjm4HAXasTxr32jOfwiuz4xmFycvRl14BbzILcvnFP/SSdRkx0pIpHA6ewPJ9zMx5tsOYe5WSuPuwpLWA5qJZxy+AFXZgVCBdnDHKT5YoJ1c1QtcrZJd/E+HPv6mGDTUJB8QKoQitmQr/8kDDOt0rcylzqmU4r0MD5yj3ONBetEW1xevdvcuXwZFYuVl0g+iT7iTgIkT8ci7vRUnSxj3kUfdD2J8YwllKie1iF1Ah6ebSv18XvNv1tGmulluQBB5gwXtLlcVE1V0dSTtI5BlRqL7E8zd+0aKvV+8pF4BCuSCiLxvWtLPHbgmdv7/wQW9z96knePl0hsnH3t4WXbWF2r5UhjbAn8y+//ACsR6gGu6kCqk0ZT7D4Cuu07eSqD419+7/9uZs+fixpxj52WVF1ESDtOxHKSS114F3G1jr38SvN3/4N/xg5Uo81H5y807/34x83Zn/44OyDp5LjEeMdDBMuSUfLJYN884a4b82yFzCIkxzXWyQ5cZ4Wei6lceTjBtmI84qe50+W+WIjIN7PHaS8vIQiXB+uu4ez2c6g4gQUfqtpkPxGYwhZuGiGuFinJNXXr/BFmUVVbNZUxRDg4UrhncJTx2j2IYqFZwJBxg1jEC3Oz1JOVguHaToSWyVAZgOMCXW+s9xoqmC4C+mbdvUqABvas17VdyanatUQkGZlPIRL7DsuWIRrZ9np9Wbd5UKcXacw6/XtXcOthjLFMpExddIwxrBlYv7izr/6cZb1N8+If/lHgQ9G8M04w48qZq+xVf745ePJop7lfxIvHRyTd1N8hFOxbcFfVn0VE8Lk338Z6dT2IaieKRXJRueUQYngZl5IrDLDf+/6fwY0ul5ly3skB5ZBxvgOpRaxwIvBrZWm1ufjW283/8V/8l1wvgtwuRyXU6Y3ZIHPMuuHoIJ0l8k3WaStF2s7lA4Jiw/XtUZGWhKoYSyBRHZj7jWqF3xhNxEDTh15+mX1JWN+upYlnbtiZpgsLYnxFWlCXPgPBQVQpV4Ru25OxEXl2DtUtvhVBq8FD+I1inTPaiWrbAHMzCzevZ2fjRfZc1EBQF0vxWeAkrGRQUydOJcSpVrpeTNCuRly4znbNb59mfEC8MSSKKpF+axobJGYltURV5rhQz2iTaqib+JS6If2vsD4ea9WdmUv2XkmvmgaM5jC+7Gb2fZiJQ11X+iByYT6FL9c6xDhLGfZd6T9efAGPx0YkQY6HNNjOWgA495iXcH/AjAl4pmriXuxaiezbcUyGSgt1/LvsMbGCDpzZXBFHxA0CewnXQv3JegjycWGTSKXbhUHoNMU6X+GOvUoGOazI4/LbdEybl4hgnk6CpdN8zs+/ZEv+XvNecyh6/CAqlBnZTvN3/foIOxC7/4d+XWl/VctA0ujjEiLlO7FXOD3X3ksYFVHIr3NYsDn5vyUY62m41uVFxhiMQfpHbsWcvTALs9F8LcHZFr4ly9Tbb0eZ/8hmo5S1qEUPIlH6LWABK2oirVMiAn+fZ4wH/AvzsnyRGwJCzb135VyYnWMa+814ZZrFnbQtschop4UDLwluEQ/rO+fPRo0lJx4bYrXEGY5KSsqSvrbcb9OEwL8+f
Vrnx0YkNtouziHAbDc/ue8t9OZlJMlyJqmYK2Dwpz47wSB0hRhNLlyaZq5kBcTWWuTEXNQrEGqLOYDu9eZ2cOHQPmdlHFJqEFv82uI8psm5zF/4Ps59QTqqodCyOiDTFjdxZBSB+cUcy9l4wKo56VQTw/6cse/DzWJkwp11XT/BpGCVFqZJAyEmpQx1DeFRdlQW7+s1dZYwOoTitTACeZJHW89k2QLRNzIY4Xf36gx+ZRC9rjl84+Sdy2TXM8cDPMwrefgVzYCAp0+eJCYWg27S3foAIwlty1wGc0wyFoncGfcSstXxAlLE+ZLAh7bTzl5Usy1UyLlf/ixtc1zkbPs8TGyZYHSZnUfFgsICS+ugs+gyc0RXf/YTpDDr23FPWYMYVddch6NUKwwKuNn+FlfSBq+/AMdjI5K0LQ22oSIB3A1OvrWy3Uyho/egMvQ8B8eFE8Xu3+racj2BtYrF5RZjjwU4lDb+eOCCMA625VYijBIjqpdP4d4u+NEG72BQ1SGuFnBG1S2JULwpyFiRkrOEwXyA9RNZ6mY0I0iyPbivKN0GUWvQyHeIvkXqID1cVxlTCSDI73va4DMlnGVGgnjOO2AhQnitJBFO6iCec0gZSsyuw0e00Zl4GYnvVYUcX60uIjFV25J/azzg2kH/5LGTzd5nnmXG/2Qzh1o0e+5DBvXsXNsSsuOsSdqpFcqwRzH9At9CINSTI+MRpNbtmUvMvXyE+fZMiNW+0gOBmR++ZwwHrI2eIpxd7572QFiLWAbf+dnPeVfqpiQf0usAfzfX/AiH8msZibgiw/Gv7ejAJY+e+J/Pn0hskFDy4DqNLNhZgNY+EyjZMqEiislZKah1xI2L1uBGfei9w5s4KkpIIL+r6hTxmdhTreG5SFeP4jfkqjjVNDgtHdRDWJQQFKZQxwqqQ87+ZvAL0lmPSBI6JisJSaO6pPPdEKbdXuZlXGknEdmW9B2ERcvSHiVGEN735mV9kqftUZqUdHnmO+/bc6l7ixA8zrsKu06j6oXqEZY8iYvD8ZgMYoNtt+NYSD1sJy98qybYTEEcu/YfwMKnWzzFkkbpk6rLsJBoqlWFOGAm5Kc1LwYLEk0yz5SxCciv9UxzsNs3NNtwf4shjfuxb6xj1EBKaI6fOnkqVkv7Jilor2vqXY0oLB2rOcteKqHEVeqUfuzAtzQx3yelMHuKx+dPJDbGRtXO9rr9BQh0coiD9yJJ59rP7FlUGkGyRme5x3sPoWdEfjmTKoERA/Wd0vPUgaAMWBZvh9++eAH9HBeRduCpqlACspXqZIdZPFD3HMelAi6m567f2xEFeXeq7aOC7AURulWC6Om0yfZYt9K+LiKp7a1EYbqHXQdb/T6lJb/Uxdv7JEt5rxXN9SgpFyyNAyXqTaSpRAoSWs+Anj/Tz5xqRrFmubTW8gsTKZIrBAOCKjHWkbyaY8t4AgufUpA6T+Jf5XyIlqi5+UsQIy6LMKsQiHUm3RZSWtXPPUms39SJl9mPhfEP8zYm0R1+8fZsc/P2+RhjeliAZYML8tMGCDN1s37+hF2+pA7tmQcFxrl48n8eD5G0jbK5BRiKTJChBX6AUIHCOaZTJQiI3gNns5cniabRw2x8HbCGS6b3fU2u/uSKnPOY7/efONlsH3ceo0XcJOM9/+ICTn2sk/XIQbSU0jHcicgOVuwkr9s80mm1ru27Sjw76er3pY18vJOHBSVLkbfkXSphkrYc06QRXtRDmNW6FHRZAZk1TXtoaVtlfGEE99TZtqLW7n32+WbyyNFYlPSjCqxIr9u7UqOadat/1TbMSO8DPi+wBtkHYECO7Q4giTSsSCRarjJntK6Kq4QEdn6E2re6eDduRbsPHmz2Hj/ZKdPq72KyuAd43+q9wHeqW1jZ9UYA1m5R5yas26rEwjhSue0D+8gf/5/28fiIpKtlFdHSYUprFxVx+FeeEusUHErkyw+EEWAPEkjt8IpQIn+9Nic7JUfnooUx+ZGzKTrvS5IWSfO4Xovg/K9Ect+Z53xYxxnJTmQxs/ZX3luSBGhxvvPEvad871V7+J2Hz203l8KlQ1CW7wGCDTOvNOjgm7SDYwS2AEYStgxAGvN7pYZuL46htlzDkRcQiRKD8VOcGMlTlUp11LOqrCZdt4obnZ4OgSgJ/FaE1vNh/3MvMqa5nH5xHNOBJbU1rSrdLhwZEy2G6haY2GyYA6mHqLcqmX3N8BM3Go01qM6kLSor7bSt9WcJwqb+hMFTOh4vkdDA2um2L9c8C5LZ9zzQpaEAgwdKE9+XHs89N7HR+8zvva9HJ119EPRqb0yWD8opNRHgPpZDtS8rMZTn1ME0/FInOqy7o6r1xSiP+Txpyzc1/1KEeYgcJbvkmQJoW56Vevio885Lfp3WkZEtDjysD0TiQNetsixLV3+PsowYAwgfuva+rhmPyZpvCreHSDJoJo1hfAhHihE7TAiQo8Zyh9TRWDF19FgsThKGdZMROcu++8AhVK+FGEWcq7G2pb4u5WWcB2Eab6uMeWx/+RX1D58xpFPqBOwtLxY1xn7Zd7ISRoW3OadsS3mDAP4VAAAFHUlEQVT6x+e/HVx3m+y5itScK1IXVwvRgR4KF/Sa90oT0qVruI4kqfnV772veebSb9sjl1339XmAXm46SM9t51okbDu1dK4vlQRtF+V9F2Gk60pHphtzWe6D1DXvUqQF1av7r3ee7rSptlO4iMH+GKtpvYvvls/h7ve9Q20pMAGGnWsf1ecUxLUbehrlxLFfWI7tQhqRW9JqxSr5kL6rzoGNj7QE+o31thwPbuq4LkT9CTDrHseZt+WGQUkYrWVOKRKVy0y5LupZC7uu+pSCn9zfxy5JAmw7S8C0CKAZk36hPwCAyOh7u8111LnmMdwundnem76ka4FTn+fW780vf3O986cAucC4C+BeWqe8sB5+4bk+49YOTx19BZF4Mn2+Kdfd5sn6LlkldT6oV59+bvOsSGpedUySskEqD8db2dAzCE31IJoKV79N2S1sKiyTp89CQBKe6QrM0l5h761pauVrfVJq+QNkuu64TBrh1cLCR9SvJGvhmJe+F34m9Jo/lUi4DsGUTEhDOuCeNNzl8N1TPB4vkdSG2ch0AI3nWbqHi20XHPGudmYAY7r89w8ElJvyLNn5vj3qd+W2RRBvahIL6xw7NynHOrVcr3RIfc/Z/9bLi/vStJn5bd6XdDtFlOed+894UcruKov2pI4iIO0vMClwK3XnfeCGRKiN95uu68CKNJFAHRiSiP9FMUzWNsvCPrnmpM/RwsDr1IH7fGod87D8qe/8rF6bUGIqBCVR+E1bpvl05W0uT/t4MkRiK214e64dGpcMn6XjPRck8FGIKh1SesUOv48o6jdJ/Ah/2k4InqROpV61erWe3R2V6+4iauJ67n73616bp23jLBLZZn8pSmSy4gaQQJLU8UYlgNKSAi+rcR+8kB41XarY5vtJ1f1Ymx9M2N329rrCzBr4qEgP6pH7ivztS4gi6WVCWhU9+KgQTk2bTMq7p/j3yRFJdyMLBEuHBzYAw873uWef5W/5
E3Wifebz+zq/K90nPa9JOh1vZ+w8rFf3n2sa6/Tg8bBnD6b5rPddeVd1yyDSDk1oeXLN3Ex7Lbx2asj7kqSk84sWniWd70sCT53vapquspPBJ/x5aDLVqfaF+XZg/cB1sky6tnSv+ZneXwb37TOffxGOp0MkteUPAUKAWzutTdfV7wUH/O6BNCbtdEz3u4eU0Q38zje1TvVcv6vn+vxJnNv21brZ/lzzvEgDK9ESehUnPMk78aoLYMnKB3lWCMp0Qb9uOJmlx6/a3vvwl5uu77pYEJLhvoQpIqW36Wsb/f4+AknKL8afp0sk3TDoAnI3wE3yMTA/rHNrXg971513MvxYjvXrL8651lmE9pqBrsgNvy7EYE0faOuDrQrRkCzPWyLxG/G2vjOb7qODtHmokluOnaty35JZ5217cV9n3Z9XJ0VpT9u+j6Wp7a7nnc+e2tUXh0geBQSfBkDfdSPPp6V9lDKfRlrr3tWeIBRt60asT0J2qxviqPUWJl1wue9dTfOQ8yelq8+tSyWk1PUheTz4qFP/h/XNw549mMETvv/NJJK/CUhfQED/TVX+1Pef0p4Own1qBrxsCeTTiOpvyuLB97XsSjB5311Xy6z3lUDrfXdmD3vW/f4pX///k0ieMlAfa/G/JkLdh9CfVtFPKufTkN38Pum77ufd159Why/Iu98SyRekIx57NSpi1vNnLfBRvn+UtJ+1Pk/guxgWn0A5vy3itxD4jYXAb4nkN7brflvxJwWB/w+BVh8aPT3LkgAAAABJRU5ErkJggg=="
                }
            ]
        }
    ],
    "temperature": 0.6,
    "stream": false
}

Response:

{
    "id": "jobId-QmcJohc71DrHnVbXbLRt7drDyPSU1dyNTskU3a1yRR7zBu-jobState-ResultsSubmitted",
    "object": "chat.completion",
    "created": 1744411025,
    "model": "llava:7b",
    "system_fingerprint": "",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": " The image shows a creature that appears to be a blend of a frog and some form of robotic or mechanical structure. It has the body shape of a frog, with prominent eyes and limbs. The creature is depicted with a sleek, technologically advanced design, featuring metallic parts and futuristic elements. The color scheme includes blues, blacks, and greens, giving it a high-tech aesthetic. This creature could be an example of concept art for a video game or science fiction story. "
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 600,
        "completion_tokens": 106,
        "total_tokens": 706
    }
}
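
The request above embeds the image as a base64 data URL inside the message content. For programmatic use, the same request can be assembled with the OpenAI SDK by encoding a local file. The sketch below is a minimal example, not a definitive client: the ./frog.png path is a placeholder, and the image_url content-part shape follows the OpenAI vision convention, which is assumed here to match what the endpoint accepts.

import fs from 'node:fs'
import OpenAI from 'openai'

const openai = new OpenAI({
  baseURL: 'https://anura-testnet.lilypad.tech/api/v1/',
  apiKey: 'YOUR_KEY_HERE',
})

// Placeholder image path; encode the file as a base64 data URL.
const imageBase64 = fs.readFileSync('./frog.png').toString('base64')

const completion = await openai.chat.completions.create({
  model: 'llava:7b',
  temperature: 0.6,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        // Assumption: OpenAI-style image_url content parts are accepted.
        { type: 'image_url', image_url: { url: `data:image/png;base64,${imageBase64}` } },
      ],
    },
  ],
})

console.log(completion.choices[0].message.content)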

Embeddings

Use the embeddings endpoint to compute embeddings for user queries with the supported nomic-embed-text model. This endpoint is OpenAI-compatible, which means you can use it with the OpenAI SDK (see the end of the Embeddings section for a code example).

Endpoint

POST /api/v1/embeddings

Request Headers

  • Content-Type: application/json*

  • Authorization: Bearer YOUR_API_KEY*

Request Parameters

  • model* (string): Model ID used to generate the embeddings (e.g. nomic-embed-text). Required.

  • input* (string or array of strings): The input to create embeddings from. This can be either a single string or an array of strings. Required.

Request Sample (single input)

{
    "model": "nomic-embed-text",
    "input": "why is the sky blue?",
}

Response Sample (single input)

{
    "object": "list",
    "data": [
        {
            "object": "embedding",
            "embedding": [
                0.009716383,
                0.0443843,
                -0.14053911,
                0.0011783413,
                0.031978477,
                0.1073299,
                -0.008574652,
                ...,
                -0.009498251,
                -0.041506674,
                0.020256031
            ],
            "index": 0
        }
    ],
    "model": "nomic-embed-text",
    "usage": {
        "prompt_tokens": 6,
        "total_tokens": 6
    }
}

Request Sample (multiple input)

{
    "model": "nomic-embed-text",
    "input": ["why is the sky blue?", "why is the grass green?"],
}

Response Sample (multiple input)

{
    "object": "list",
    "data": [
        {
            "object": "embedding",
            "embedding": [
                0.009716383,
                0.0443843,
                -0.14053911,
                0.0011783413,
                0.031978477,
                0.1073299,
                -0.008574652,
                ...,
                -0.009498251,
                -0.041506674,
                0.020256031
            ],
            "index": 0
        },
        {
            "object": "embedding",
            "embedding": [
               0.028126348,
                0.043248065,
                -0.18586768,
                0.03491587,
                0.055507593,
                0.12088179,
                -0.009062298,
                ...,
                -0.035023507,
                -0.07451658,
                0.011851714
            ],
            "index": 1
        }

    ],
    "model": "nomic-embed-text",
    "usage": {
        "prompt_tokens": 12,
        "total_tokens": 12
    }
}

Response Codes

  • 200 OK: Request successful

  • 400 Bad Request: Invalid request parameters

  • 401 Unauthorized: Invalid or missing API key

  • 404 Not Found: Requested model not found

  • 500 Internal Server Error: Server error processing request

Example Code using the OpenAI SDK

import OpenAI from 'openai'

const openai = new OpenAI({
  baseURL: 'https://anura-testnet.lilypad.tech/api/v1/',
  apiKey: 'YOUR_KEY_HERE',
})

const embedding = await openai.embeddings.create({
  model: "nomic-embed-text",
  input: ["why is the sky blue?", "why is the grass green?"],
})
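
The returned vectors can then be compared locally. Below is a minimal Python sketch of the same request using the openai package, followed by a cosine similarity between the two results (the base URL and API key are the same placeholders as above):

import math

from openai import OpenAI

client = OpenAI(
    base_url="https://anura-testnet.lilypad.tech/api/v1/",
    api_key="YOUR_KEY_HERE",
)

response = client.embeddings.create(
    model="nomic-embed-text",
    input=["why is the sky blue?", "why is the grass green?"],
)

# Compare the two embeddings with cosine similarity
a = response.data[0].embedding
b = response.data[1].embedding
dot = sum(x * y for x, y in zip(a, b))
norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
print(f"cosine similarity: {dot / norm:.4f}")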

Image Generation

The Anura API enables you to run Stable Diffusion jobs, executed through our decentralized compute network, to generate images. It's really easy to get started creating your own generative AI art using Anura through the endpoints we provide.

Retrieve the list of supported image generation models

GET /api/v1/image/models

Request Headers

  • Content-Type: application/json*

  • Authorization: Bearer YOUR_API_KEY*


Request Sample

curl -X GET "https://anura-testnet.lilypad.tech/api/v1/image/models" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your_api_key_here"

Response

{
    "data": {
        "models": [
            "sdxl-turbo"
        ]
    },
    "message": "Retrieved models successfully",
    "status": 200
}

Response Codes

  • 200 OK: Request successful

  • 400 Bad Request: Invalid request parameters

  • 401 Unauthorized: Invalid or missing API key

  • 404 Not Found: Requested model not found

  • 500 Internal Server Error: Server error processing request

Currently we support sdxl-turbo; however, we are always adding new models, so stay tuned!

Generate an AI Image

POST /api/v1/image/generate

Request Headers

  • Content-Type: application/json*

  • Authorization: Bearer YOUR_API_KEY*

Request Parameters

  • model* (string): Model ID used to generate the response (e.g. sdxl-turbo). Required.

  • prompt* (string): The prompt to generate your image from (max limit of 1000 characters). Required.

Request Sample

{
    "prompt": "A spaceship parked on a lilypad",
    "model": "sdxl-turbo"
}

Alternatively, you can make the same request with a curl command and have the image written to a file on your machine:

curl -X POST https://anura-testnet.lilypad.tech/api/v1/image/generate \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your_api_key_here" \
  -d '{"prompt": "A spaceship parked on a lilypad", "model": "sdxl-turbo"}' \
  --output spaceship.png

The result of running this command will be the creation of the spaceship.png file in the directory you ran the command from.

Response

This endpoint returns the raw bytes of the generated image, which you can write to a file (as shown in the curl command above) or load into a buffer to write to a file in your app, e.g.

const fs = require("fs");
const fetch = require("node-fetch");

async function generateImage() {
  const response = await fetch("https://anura-testnet.lilypad.tech/api/v1/image/generate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer your_api_key_here"
    },
    body: JSON.stringify({
      prompt: "A spaceship parked on a lilypad",
      model: "sdxl-turbo"
    }),
  });
  
  if (!response.ok) {
    console.error(`Error generating image: StatusCode: ${response.status} Error: ${response.statusText}`);
    return;
  }

  const buffer = await response.buffer();
  fs.writeFileSync("spaceship.png", buffer);
}

generateImage();

Note: Should you ever need the corresponding Job Offer ID for an image generation request, it is provided in the response header as Job-Offer-Id
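
For example, here is a minimal Python sketch that saves the image and reads that header (same endpoint and payload as the samples above):

import requests

response = requests.post(
    "https://anura-testnet.lilypad.tech/api/v1/image/generate",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer your_api_key_here",
    },
    json={"prompt": "A spaceship parked on a lilypad", "model": "sdxl-turbo"},
)
response.raise_for_status()

# The Job-Offer-Id header identifies the job that produced this image
print("Job-Offer-Id:", response.headers.get("Job-Offer-Id"))

# The body is the raw image bytes
with open("spaceship.png", "wb") as f:
    f.write(response.content)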

Response Codes

  • 200 OK: Request successful

  • 400 Bad Request: Invalid request parameters

  • 401 Unauthorized: Invalid or missing API key

  • 404 Not Found: Requested model not found

  • 500 Internal Server Error: Server error processing request

Video Generation

The Anura API enables you to run long-running jobs, executed through our decentralized compute network, to generate videos. It's really easy to get started generating your own videos using Anura through the endpoints we provide.

Note: Video generation can take anywhere between 4 and 8 minutes to produce a video

Retrieve the list of supported video generation models

GET /api/v1/video/models

Currently we support wan2.1; however, we are always adding new models, so stay tuned!

Request Headers

  • Content-Type: application/json*

  • Authorization: Bearer YOUR_API_KEY*

Request Sample

curl -X GET "https://anura-testnet.lilypad.tech/api/v1/video/models" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your_api_key_here"

Response

{
    "data": {
        "models": [
            "wan2.1"
        ]
    },
    "message": "Retrieved models successfully",
    "status": 200
}

Response Codes

  • 200 OK: Request successful

  • 401 Unauthorized: Invalid or missing API key

  • 500 Internal Server Error: Server error processing request

Send out a request to create an AI generated video

POST /api/v1/video/create-job

Request Headers

  • Content-Type: application/json*

  • Authorization: Bearer YOUR_API_KEY*

Request Parameters

  • model* (string): Model used to generate the response (e.g. wan2.1). Required.

  • prompt* (string): The prompt to generate your video from (max limit of 1000 characters). Required.

  • negative_prompt (string): Optional. Specifies what the model should exclude from the generated scene.

Request Sample

{
    "prompt": "Two frogs sit on a lilypad, animatedly discussing the wonders and quirks of AI agents. As they ponder whether these digital beings can truly understand their froggy lives, the serene pond serves as a backdrop to their lively conversation.",
    "negative_prompt": "Dull colors, grainy texture, washed-out details, static frames, incorrect lighting, unnatural shadows, distorted faces, artifacts, low-resolution elements, flickering, blurry motion, repetitive patterns, unrealistic reflections, overly simplistic backgrounds, three legged people, walking backwards.",
    "model": "wan2.1"
}

Response

This endpoint will return a job_offer_id, a unique identifier for the job that is creating your video. Pass this id to our /video/results endpoint (see below), which will either provide the output as a webp file or report that the job is still running. In the latter case, you can call the endpoint again later to retrieve your video. As mentioned at the beginning of this section, video generation can take anywhere between 4 and 8 minutes to complete.

{
    "status": 200,
    "message": "Video job created successfully",
    "data": {
        "job_offer_id": "<your-job-offer-id-here>"
    }
}

Response Codes

  • 200 OK: Request successful

  • 400 Bad Request: Invalid request parameters

  • 401 Unauthorized: Invalid or missing API key

  • 404 Not Found: Requested model not found

  • 500 Internal Server Error: Server error processing request

Retrieve your video

GET /api/v1/video/results/:job_offer_id

  • job_offer_id* (string): The id returned by the video creation request, i.e. /api/v1/video/create-job. Required.

Request Headers

  • Content-Type: application/json*

  • Authorization: Bearer YOUR_API_KEY*

Response

If the video is still in the process of being generated you will see a response that looks like the following:

{
    "status": 102,
    "message": "Request is still processing",
    "data": {
        "job_offer_id": "<job-offer-id>",
        "job_state": "DealAgreed"
    }
}

Response Codes

  • 102 Processing: Request is still processing the creation of the video

  • 200 OK: Request successful

  • 400 Bad Request: Invalid request parameters

  • 401 Unauthorized: Invalid or missing API key

  • 500 Internal Server Error: Server error processing request

However, once the video has been generated, the endpoint returns the video in webp format as raw bytes, which you can save to a file in the following manner:

curl -X GET "https://anura-testnet.lilypad.tech/api/v1/video/results/<your-job-offer-id>" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your_api_key_here" \
--output video.webp

The result of the above command is a video.webp file saved in the directory you ran it from.
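
Putting the two endpoints together, here is a minimal Python sketch that submits a job and polls for the result. It treats a JSON body as "still processing" and anything else as the raw bytes of the finished video; the 30-second polling interval is an arbitrary choice:

import time
import requests

BASE = "https://anura-testnet.lilypad.tech/api/v1"
HEADERS = {"Authorization": "Bearer your_api_key_here"}

# 1. Submit the generation job
create = requests.post(
    f"{BASE}/video/create-job",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"prompt": "Two frogs sit on a lilypad, discussing AI agents", "model": "wan2.1"},
)
create.raise_for_status()
job_offer_id = create.json()["data"]["job_offer_id"]

# 2. Poll for results: a JSON body means the job is still processing,
#    anything else is the raw webp bytes of the finished video
while True:
    result = requests.get(f"{BASE}/video/results/{job_offer_id}", headers=HEADERS)
    if result.status_code >= 400:
        result.raise_for_status()
    if "application/json" not in result.headers.get("Content-Type", ""):
        with open("video.webp", "wb") as f:
            f.write(result.content)
        break
    time.sleep(30)  # arbitrary polling interval; generation takes roughly 4-8 minutes

The audio endpoints below follow the same create-job/results pattern.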

Audio Generation

The Anura API enables you to generate audio from text, executed through our decentralized compute network. It's really easy to get started generating your own audio using Anura through the endpoints we provide.

Note: Audio generation can take anywhere between 40 seconds and 3 minutes to complete, depending on the input length

Retrieve the list of supported audio generation models

GET /api/v1/audio/models

Currently we support kokoro; however, we are always adding new models, so stay tuned!

Request Headers

  • Content-Type: application/json*

  • Authorization: Bearer YOUR_API_KEY*

Request Sample

curl -X GET "https://anura-testnet.lilypad.tech/api/v1/audio/models" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your_api_key_here"

Response

{
    "data": {
        "models": [
            "kokoro"
        ]
    },
    "message": "Retrieved models successfully",
    "status": 200
}

Response Codes

  • 200 OK: Request successful

  • 401 Unauthorized: Invalid or missing API key

  • 500 Internal Server Error: Server error processing request

Send a request to create AI-generated audio

POST /api/v1/audio/create-job

Request Headers

  • Content-Type: application/json*

  • Authorization: Bearer YOUR_API_KEY*

Request Parameters

  • model* (string): Model used to generate the response (e.g. kokoro). Required.

  • input* (string): The prompt input to generate your audio from (max limit of 420 characters). Required.

  • voice* (string): The voice to use when generating the audio sample. Possible values are heart, puck, fenrir, and bella. Required.

Voice samples for the four voices (heart, puck, fenrir, and bella) are available on the hosted documentation page.

Request Sample

{
    "input": "Hello my name is Heart and this is AI speech generated from text using the Kokoro module running on the Lilypad Network",
    "voice": "heart",
    "model": "kokoro"
}

Response

This endpoint will return a job_offer_id, a unique identifier for the job that is creating your audio. Pass this id to our /audio/results endpoint (see below), which will either provide the output as a wav file or report that the job is still running. In the latter case, you can call the endpoint again later to retrieve your audio. As mentioned at the beginning of this section, audio generation can take anywhere between 40 seconds and 3 minutes to complete.

{
    "status": 200,
    "message": "Audio job created successfully",
    "data": {
        "job_offer_id": "QmTmTWxffQrosK2nb3a4oeeZd9KRMUGwApEUFhioFUe4Y9"
    }
}

Response Codes

  • 200 OK: Request successful

  • 400 Bad Request: Invalid request parameters

  • 401 Unauthorized: Invalid or missing API key

  • 404 Not Found: Requested model not found

  • 500 Internal Server Error: Server error processing request

Retrieve your audio

GET /api/v1/audio/results/:job_offer_id

  • job_offer_id* (string): The id returned by the audio creation request, i.e. /api/v1/audio/create-job. Required.

Request Headers

  • Content-Type: application/json*

  • Authorization: Bearer YOUR_API_KEY*

Response

If the audio is still in the process of being generated you will see a response that looks like the following:

{
    "status": 102,
    "message": "Request is still processing",
    "data": {
        "job_offer_id": "QmTmTWxffQrosK2nb3a4oeeZd9KRMUGwApEUFhioFUe4Y9",
        "job_state": "DealAgreed"
    }
}

Response Codes

  • 102 Processing: Request is still processing the creation of the audio

  • 200 OK: Request successful

  • 400 Bad Request: Invalid request parameters

  • 401 Unauthorized: Invalid or missing API key

  • 500 Internal Server Error: Server error processing request

However, once the audio has been generated, the endpoint returns the audio in wav format as raw bytes, which you can save to a file in the following manner:

curl -X GET "https://anura-testnet.lilypad.tech/api/v1/audio/results/<your-job-offer-id>" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your_api_key_here" \
--output audio.wav

Web Search

The Anura API provides developers with a web search capability, adding a powerful tool to your AI agent building arsenal. LLMs are only as good as their training data and are taken to the next level when provided with additional context from the web. With web search, you can power your AI agent workflow with live search data, giving your LLM the most up-to-date information on the latest goings-on in the world.

It's easy to get started searching the web through the Anura API using our endpoint:

POST /api/v1/websearch

Request Headers

  • Content-Type: application/json*

  • Authorization: Bearer YOUR_API_KEY*

Request Parameters

  • query* (string): The web search query you wish to execute. Required.

  • number_of_results* (number): The number of search results to return (1 to 10 inclusive). Required.

Request Sample

{
    "query": "What's the Lilypad Network?",
    "number_of_results" : 3
}

Response Sample

The response will include the following fields:

  • results: The array of search results, where each result object is made up of the strings title, url and description.

  • related_queries: An array of strings containing similar queries based on the one you supplied.

  • count: The number of search results returned.

{
    "results": [
        {
            "title": "Lilypad Network",
            "url": "https://lilypad.tech",
            "description": "Lilypad Network Lilypad offers a seamless and efficient way to access the computing power you need for AI and other demanding tasks—no need to invest in expensive hardware or navigate complex cloud setups. Simply submit your job; our decentralized network connects you with the best available resources. Benefit from competitive pricing, secure ..."
        },
        {
            "title": "Lilypad Network - internet-scale off-chain distributed compute solution",
            "url": "https://blog.lilypadnetwork.org",
            "description": "Verifiable, truly internet-scale distributed compute network Efficient off-chain computation for AI & ML DataDAO computing The next frontier of web3. Follow. ... Check out the docs https://docs.lilypad.tech/lilypad! Lilypad Builder-verse! Devlin Rocha. 4 min read. Fuel the Future by Building on Lilypad and Accelerate Open Source AI. Alex Mirran."
        },
        {
            "title": "What is the Lilypad Decentralized Compute Network?",
            "url": "https://blog.lilypadnetwork.org/what-is-the-lilypad-decentralized-compute-network",
            "description": "Lilypad democratizes AI high-performance computing, offering affordable, scalable solutions for researchers and startups. Follow. Follow. What is the Lilypad Decentralized Compute Network? A Crowdsourced Network for HPC Tasks. Lindsay Walker"
        }
    ],
    "related_queries": [
        "Lilypad Tech",
        "LilyPad github",
        "Lilypad website",
        "Lilypad AI",
        "Lily pad Minecraft server",
        "Lilypad crypto",
        "LilyPad Arduino"
    ],
    "count": 3
}

Response Codes

  • 200 OK: Request successful

  • 400 Bad Request: Invalid request parameters

  • 401 Unauthorized: Invalid or missing API key

  • 500 Internal Server Error: Server error processing request
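
A minimal Python sketch of the same request, printing each result and the related queries:

import requests

response = requests.post(
    "https://anura-testnet.lilypad.tech/api/v1/websearch",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer your_api_key_here",
    },
    json={"query": "What's the Lilypad Network?", "number_of_results": 3},
)
response.raise_for_status()
data = response.json()

# Each result carries a title, url and description
for result in data["results"]:
    print(result["title"], "-", result["url"])

print("Related queries:", ", ".join(data["related_queries"]))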

Jobs

  • GET /api/v1/jobs/:id - Get status and details of a specific job

Get Status/Details of a Job

You can use another terminal to check job status while the job is running.

curl -X GET "https://anura-testnet.lilypad.tech/api/v1/jobs/{job_id}" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your_api_key_here"

Get Outputs from a Job

Once your job has run, you should get output like this:

data: {
    "id": "cmpl-e654be2df70700d27c155d4d",
    "object": "text_completion",
    "created": 1738614839,
    "model": "llama2",
    "choices": [{
        "text": "<output text here>
        "finish_reason": "stop"
    }]
}

Cowsay

  • POST /api/v1/cowsay - Create a new cowsay job

    • Request body: {"message": "text to display"}

  • GET /api/v1/cowsay/:id/results - Get results of a cowsay job
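
A minimal Python sketch of the two-step flow (the shape of the creation response isn't documented here, so inspect it to find the job id; the job id below is a placeholder):

import requests

BASE = "https://anura-testnet.lilypad.tech/api/v1"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer your_api_key_here",
}

# Create a cowsay job
created = requests.post(f"{BASE}/cowsay", headers=HEADERS, json={"message": "moo"})
created.raise_for_status()
print(created.json())  # inspect this payload to find the job id

job_id = "<your-job-id-here>"  # placeholder: taken from the creation response

# Fetch the results once the job has run
results = requests.get(f"{BASE}/cowsay/{job_id}/results", headers=HEADERS)
results.raise_for_status()
print(results.json())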

AI Model Marketplace

Introduction to running AI models on Lilypad with the Module Marketplace

There are two ways participants in the network can interact with Lilypad modules: as Job Creators, who run modules, or as Module Creators, who create modules.

Running Modules

Running jobs (participating as a 'Job Creator') takes just a few simple steps:

  • Choose a module to run and send a job request

Example Modules

Try out some of the Lilypad team modules:

  • Cowsay: A playful CLI tool that displays text as if spoken by an ASCII-art cow

  • Llama2: An open-source language model, used for generating and understanding text

  • Stable Diffusion Turbo Pipeline: A fast, high-performance version of Stable Diffusion XL for generating high-quality images with minimal latency

Creating Modules

Developer Resources

  • Build a Job Module

  • create-lilypad-module

Llama2

Run Llama 2 on the Lilypad network

Getting Started

export WEB3_PRIVATE_KEY=<YOUR_PRIVATE_KEY>

Run Llama2

Run Llama2

lilypad run github.com/noryev/module-llama2:6d4fd8c07b5f64907bd22624603c2dd54165c215 -i prompt="your prompt here"

Example:

lilypad run github.com/noryev/module-llama2:6d4fd8c07b5f64907bd22624603c2dd54165c215 -i prompt="what is a giant sand trout on arrakis?"

Notes

  • Ensure you have the necessary permissions and resources to run Docker containers with GPU support.

  • The module version (6d4fd8c07b5f64907bd22624603c2dd54165c215) may be updated. Check for the latest version before running.

  • Adjust port mappings and volume mounts as needed for your specific setup.

Output

To view the results in a local directory, navigate to the local folder provided by the job result.

open /tmp/lilypad/data/downloaded-files/<fileID>

Build a Job Module

How to build your own compute job for Lilypad

A Lilypad module is a Git repository that allows you to perform various tasks using predefined templates and inputs. This guide will walk you through creating a Lilypad module, including defining a JSON template, handling inputs, and following best practices.

Modules on Lilypad

Below are a few examples of modules you can run on Lilypad. From language models to image generators and fun utilities, the network supports a growing list of AI modules.

Module Structure

Start by creating a Git repository for your Lilypad module. The module's versions will be represented as Git tags. Below is the basic structure of a Lilypad Module.

your-module/
├── model-directory            # Stores locally downloaded model files 
├── download_model.[py/js/etc] # Script to download model files locally
├── requirements.txt           # Module dependencies
├── Dockerfile                 # Container definition
├── run_script.[py/js/etc]     # Main execution script
├── lilypad_module.json.tmpl   # Lilypad configuration
└── README.md                  # Documentation

Prepare Your Model

  • Download model files

  • Handle all dependencies (requirements.txt)

  • Implement input/output through environment variables

  • Write outputs to /outputs directory

1. Download the model locally

To use a model offline, you first need to download it and store it in a local directory. This guarantees that your code can load the model without requiring an internet connection. Here's a simple process to achieve this:

  1. Install required libraries

  2. Use a script to download the model (e.g. python download_model.py)

  3. Verify that the model files are in your directory

The method to download and save a model and tokenizer may vary based on the model's architecture and the framework you are using. Always refer to the documentation of the specific model to ensure compatibility and proper usage.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def download_model():
    model_name = "<namespace>/<model_identifier>"
    # Ensure you have a directory named 'model' in your current working directory or specify a path
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    # Save the tokenizer and model
    tokenizer.save_pretrained('./model')
    model.save_pretrained('./model')

if __name__ == "__main__":
    download_model()

2. Create a run script (for example, run_model.py) to be used in conjunction with Docker

import os
import json
from transformers import AutoModel, AutoTokenizer

def main():
    # Load model and tokenizer from local directory
    model_path = '/model'  # Path to the local model directory
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModel.from_pretrained(model_path)

    # Get inputs from environment variables
    input_var = os.environ.get('INPUT_VAR', 'default')
    
    # Your model code here
    result = your_model_function(input_var, model, tokenizer)
    
    # Save outputs
    output_path = '/outputs/result.json'
    with open(output_path, 'w') as f:
        json.dump({'result': result}, f)

if __name__ == "__main__":
    main()

3. Create a Dockerfile that functions with your run script

# Use specific base image
FROM base-image:version

# Set working directory
WORKDIR /workspace

# Install dependencies
RUN apt-get update && apt-get install -y \
    your-dependencies && \
    rm -rf /var/lib/apt/lists/*

# Install model requirements
RUN pip install your-requirements

# Environment variables for running offline and using the local model 
# HF_HOME points to the directory where the model code is
ENV HF_HOME=/model
ENV TRANSFORMERS_OFFLINE=1

# Create necessary directories
RUN mkdir -p /outputs

# Copy execution script
COPY run_script.* /workspace/

# Set entrypoint
ENTRYPOINT ["command", "/workspace/run_script"]

4. Build and Publish Image

To make sure your Docker image is compatible with Lilypad, you need to define the architecture explicitly during the build process. This is particularly important if you are building the image on a system like macOS, which uses a different architecture (darwin/arm64) than Lilypad's infrastructure (linux/amd64).

The examples below are for building, tagging and pushing an image to DockerHub, but you can use any platform you prefer for hosting the image.

For Linux: docker buildx build -t <USERNAME>/<MODULE_NAME>:<MODULE_TAG> --push .

For MacOS:

docker buildx build \
--platform linux/amd64 \
-t <USERNAME>/<MODULE_NAME>:<MODULE_TAG> \
--push \
.

5. Create a lilypad_module.json.tmpl Template

{
    "machine": {
        "gpu": 1,          # Set to 0 if GPU not needed
        "cpu": 1000,       # CPU allocation
        "ram": 8000        # Minimum RAM needed to run the module
    },
    "gpus": [ { "vram": 24576 }, { "vram": 40960 } ] # VRAM in MBs. Solver will default to largest one 
    "job": {
        "APIVersion": "V1beta1",
        "Spec": {
            "Deal": {
                "Concurrency": 1
            },
            "Docker": {
                "Entrypoint": ["command", "/workspace/run_script"],
                "WorkingDirectory": "/workspace",
                "EnvironmentVariables": [
                    # Environment variables with defaults
                    {{ if .var_name }}"VAR_NAME={{ js .var_name }}"{{ else }}"VAR_NAME=default_value"{{ end }}
                ],
                # Specify the Docker image to use for this module
                "Image": "repo-owner/repo-name:tag"
            },
            "Engine": "Docker",
            "Network": {
                "Type": "None"
            },
            "Outputs": [
                {
                    "Name": "outputs",
                    "Path": "/outputs"
                }
            ],
            "PublisherSpec": {
                "Type": "ipfs"
            },
            "Resources": {
                "GPU": "1"    # Must match machine.gpu
            },
            "Timeout": 1800
        }
    }
}

Environment Variables

Format in template:

{{ if .variable }}"VARNAME={{ js .variable }}"{{ else }}"VARNAME=default"{{ end }}

Usage in CLI:

lilypad run repo:tag -i variable=value

Formatting your module run command

During development, you will need to use the Git hash to test your module. This allows you to verify that your module functions correctly and produces the expected results.

Below is a working Lilypad module run command for reference (you can use this to run a Lilypad job with the Lilypad CLI):

Test Module before running on Lilypad

Use the following command syntax to run your Module on Lilypad Testnet.

lilypad run github.com/Lilypad-Tech/module-sdxl:6cf06f4038f1cff01a06c4eabc8135fd9835a78a --web3-private-key <your-private-key> -i prompt="a lilypad floating on a pond"

If many jobs have been run on the machine previously, clear Lilypad from the /tmp folder locally and try running the job again.

Run Module on Lilypad

lilypad run github.com/noryev/module-sdxl-ipfs:ae17e969cadab1c53d7cabab1927bb403f02fd2a -i prompt="your prompt here"

Examples

Here are some example Lilypad modules for reference:

  • Cowsay: Lilypad "Hello World" example

  • Llama2: Text to text

  • SDXL-turbo pipeline: Text to image generation

Deprecated examples:

  • lora-training: An example module for LoRa training tasks.

  • lora-inference: An example module for LoRa inference tasks.

  • duckdb: An example module related to DuckDB.

These examples can help you understand how to structure your Lilypad modules and follow best practices.

Conclusion

In this guide, we've covered the essential steps to create a Lilypad module, including defining a JSON template, handling inputs, and testing your module. By following these best practices, you can build reliable and reusable modules for Lilypad.

For more information and additional examples, refer to the official Lilypad documentation and the Cowsay example module.

Cowsay

Run a cowsay job

cowsay is a simple, text-based program originally written for Unix-like operating systems that generates ASCII pictures of a cow with a speech bubble containing a specified message.

This module was created as a "Hello World" for the Lilypad Network!

Getting Started

export WEB3_PRIVATE_KEY=<YOUR_PRIVATE_KEY>

Run cowsay

Once you've installed the CLI, run the cowsay command:

lilypad run cowsay:v0.0.4 -i Message="hello, lilypad"  

Output

⠀⠀⠀⠀⠀⠀⣀⣤⣤⢠⣤⣀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⢴⣿⣿⣿⣿⢸⣿⡟⠀⠀⠀⠀⠀    ██╗     ██╗██╗  ██╗   ██╗██████╗  █████╗ ██████╗
⠀⠀⣰⣿⣦⡙⢿⣿⣿⢸⡿⠀⠀⠀⠀⢀⠀    ██║     ██║██║  ╚██╗ ██╔╝██╔══██╗██╔══██╗██╔══██╗
⠀⢰⣿⣿⣿⣿⣦⡙⣿⢸⠁⢀⣠⣴⣾⣿⡆    ██║     ██║██║   ╚████╔╝ ██████╔╝███████║██║  ██║
⠀⣛⣛⣛⣛⣛⣛⣛⠈⠀⣚⣛⣛⣛⣛⣛⣛    ██║     ██║██║    ╚██╔╝  ██╔═══╝ ██╔══██║██║  ██║
⠀⢹⣿⣿⣿⣿⠟⣡⣿⢸⣮⡻⣿⣿⣿⣿⡏    ███████╗██║███████╗██║   ██║     ██║  ██║██████╔╝
⠀⠀⢻⣿⡟⣩⣾⣿⣿⢸⣿⣿⣌⠻⣿⡟⠀    ╚══════╝╚═╝╚══════╝╚═╝   ╚═╝     ╚═╝  ╚═╝╚═════╝ v2.13.0
⠀⠀⠀⠉⢾⣿⣿⣿⣿⢸⣿⣿⣿⡷⠈⠀⠀
⠀⠀⠀⠀⠀⠈⠙⠛⠛⠘⠛⠋⠁⠀ ⠀⠀⠀   Decentralized Compute Network  https://lilypad.tech

🌟  Lilypad submitting job
2025-03-05T12:56:38-06:00 WRN ../runner/work/lilypad/lilypad/cmd/lilypad/utils.go:63 > failed to get GPU info: gpuFillInfo not implemented on darwin
2025-03-05T12:56:38-06:00 INF ../runner/work/lilypad/lilypad/pkg/web3/sdk.go:209 > Connected to arbitrum-sepolia-rpc.publicnode.com
2025-03-05T12:56:38-06:00 INF ../runner/work/lilypad/lilypad/pkg/jobcreator/run.go:27 > Public Address: 0xB86bCAe21AC95BCe7a49C057dC8d911033f8CB7c
Enumerating objects: 42, done.
Counting objects: 100% (22/22), done.
Compressing objects: 100% (4/4), done.
Total 42 (delta 18), reused 19 (delta 18), pack-reused 20 (from 1)
💌  Deal agreed. Running job...
🤔  Results submitted. Awaiting verification...
🤔  Results submitted. Awaiting verification...
✅  Results accepted. Downloading result...
🆔  Data ID: QmP2SQttNC3Hrh2xpY7bNHzV2jHq7MbfLahRC46DVzn5rG

🍂 Lilypad job completed, try 👇
    open /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q
    cat /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q/stdout
    cat /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q/stderr

To view the results in a local directory, navigate to the local folder.

open /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q

Here, you can view the stdout and stderr as well as the outputs folder for the run:

~ % cat /tmp/lilypad/data/downloaded-files/QmQHrsiAuzTLn5VU6jg5LoXBRrAkEVRKiYeJE29w54gg9Q/stdout
 ________________ 
< hello, lilypad >
 ---------------- 
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

create-lilypad-module

Use create-lilypad-module to create Lilypad modules

create-lilypad-module is an officially supported package that provides a simple scaffolding for building Lilypad modules. It offers a modern Docker setup with minimal configuration.

A Lilypad module is a Git repository that allows you to perform various tasks using predefined templates and inputs.

The following guide is using the "Hugging Face" template, which is more advanced.

If you are new to module building, it is highly recommended you use the "Ollama" template first.

There is currently not a guide in our docs for the "Ollama" template. We will work on adding one soon, but the directions in the README should be sufficient.

Overview

Stable Diffusion Turbo Pipeline

A Lightweight Stable Diffusion Module for Lilypad

Getting Started

export WEB3_PRIVATE_KEY=<YOUR_PRIVATE_KEY>

Run SDXL Turbo

lilypad run github.com/Lilypad-Tech/module-sdxl:d6a89ed92f4e798459b2990340669da00c56c80c -i prompt="your prompt here"

Example:

lilypad run github.com/Lilypad-Tech/module-sdxl:d6a89ed92f4e798459b2990340669da00c56c80c -i prompt="a spaceship parked on a mountain"

Notes

  • Ensure you have the necessary permissions and resources to run Docker containers with GPU support.

  • The module version (ae17e969cadab1c53d7cabab1927bb403f02fd2a) may be updated. Check for the latest version before running.

  • Adjust port mappings and volume mounts as needed for your specific setup.

Output

To view the results in a local directory, navigate to the local folder.

open /tmp/lilypad/data/downloaded-files/<fileID>

Creating Your Module

Create your Lilypad module

Getting Started

Input:

lilypad run --network demonet github.com/DevlinRocha/lilypad-module-sentiment:main --web3-private-key 0ec38dd1ee0898dae8460b269859b4fb3cb519b35d82014c909ec4741c790831 -i input="LILYPAD IS AWESOME"

Output:

{
  "input": "LILYPAD IS AWESOME",
  "sentiment": "POSITIVE",
  "status": "success"
}

If you prefer to follow along with a video guide, you can view our live workshop below! 👇

Prerequisites

Downloading The Model

The first thing you'll need for your module is a local model to use.

You can get started by attempting to run the download_models.py script.

python -m scripts.download_models

Since the script hasn't been properly configured yet, it will return an error and point you to the file.

❌ Error: Model download script is not configured.
👉 /scripts/download_models.py

Open scripts/download_models.py and you will see some TODO comments with instructions. Let's go through each of them in order. You can remove each TODO comment after completing the task.

First we have a reminder to update our requirements.txt file, which is used by the Dockerfile to install the module's dependencies. In the next line is a commented out import statement.

# TODO: Update ../requirements.txt
# from transformers import AutoTokenizer, AutoModelForSequenceClassification

Most (but not all) models that utilize machine learning use the 🤗 Transformers library, which provides APIs and tools to easily download and train pretrained models.

For now, let's look at the top 2 lines of the provided code block:

import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

Notice that torch is also being used. Copy the transformers import statement and paste it over the existing import statement in our download_models.py file.

Now open requirements.txt:

# torch==2.4.1
# transformers==4.47.1

These are 2 of the most common libraries when working with models. Similar to the import statement in the download_models.py file, they are provided by default for convenience, but commented out because although they are common, not every model will use them.

Since this model happens to use both of these libraries, we can uncomment both lines and close the file after saving.

Return to the download_models.py file, and look for the next TODO comment.

# TODO: Set this to your model's Hugging Face identifier
MODEL_IDENTIFIER = ""

For our use case, it should look like this:

MODEL_IDENTIFIER = "distilbert/distilbert-base-uncased-finetuned-sst-2-english"

You're almost ready to download the model. All you need to do now is replace the following 2 lines after the TODO comment:

# TODO: Initialize `model` and `tokenizer`
# tokenizer = AutoTokenizer.from_pretrained(MODEL_IDENTIFIER)
# model = AutoModelForSequenceClassification.from_pretrained(MODEL_IDENTIFIER)

Instead of using AutoTokenizer and AutoModelForSequenceClassification, replace those with the DistilBertTokenizer and DistilBertForSequenceClassification we imported.

tokenizer = DistilBertTokenizer.from_pretrained(MODEL_IDENTIFIER)
model = DistilBertForSequenceClassification.from_pretrained(MODEL_IDENTIFIER)

The script is now configured! Try running the command again.

python -m scripts.download_models

The models directory should now appear in your project. 🎉

No matter which model you are using, be sure to thoroughly read the model's documentation to learn how to properly download and use the model locally.

Building Your Model

Now for the fun part, it's time to start using the model!

This time we'll get started by running the run_module script.

python -m scripts.run_module

You should see an error with some instructions.

❌ Error: No job configured. Implement the module's job before running the module.
        1. Implement job module
                👉 /src/run_inference.py
        2. Delete this code block
                👉 /scripts/run_module.py

1. Implement Job Module

Let's tackle the run_inference.py script first. This is where your module's primary logic and functionality should live. There is a TODO comment near the top of the file.

# TODO: Update ../requirements.txt
# import torch
# from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

We've already updated the requirements.txt file, so we can skip that step. Go ahead and uncomment the import statements and replace the transformers line with the DistilBertTokenizer and DistilBertForSequenceClassification.

We should refer back to the "How to Get Started With the Model" section of Distilbert's model card to figure out how to use the model.

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]

Let's implement this into our run_inference script. Scroll down to the main() function and you'll see another TODO comment.

# TODO: Initialize `model` and `tokenizer`
# tokenizer = AutoTokenizer.from_pretrained(MODEL_DIRECTORY)
# model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_DIRECTORY)

Same as before, uncomment and replace AutoTokenizer with DistilBertTokenizer and AutoModelForSeq2SeqLM with DistilBertForSequenceClassification. This is now functionally identical to the first 2 lines of code from Distilbert's example.

Below that, the tokenizer and model are passed into the run_job() function. Let's scroll back up and take a look at the function. This is where we'll want to implement the rest of the code from Distilbert's example. The inputs are already functionally identical, so let's adjust the output.

From the Distilbert model card, copy all of the code below the inputs variable declaration, and paste it over the output variable declaration in your module's code.

inputs = tokenizer(
    input,
    return_tensors="pt",
    truncation=True,
    padding=True,
)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]

return output

All we need to do from here is set the output to the last line we pasted.

output = model.config.id2label[predicted_class_id]

return output

That's everything we'll need for the modules source code!

2. Delete Code Block

We still need to finish step 2 from the error the console gave us earlier. Open the run_module.py script.

Find the TODO comment and delete the code block underneath.

# TODO: Remove the following print and sys.exit statements and create the module job.
print(
    "❌ Error: No job configured. Implement the module's job before running the module.",
    file=sys.stderr,
    flush=True,
)
print("\t1. Implement job module")
print("\t\t👉 /src/run_inference.py")
print("\t2. Delete this code block")
print("\t\t👉 /scripts/run_module.py")
sys.exit(1)

Preparing Your Module

Before you are able to run your module, we need to build the Docker image. You can run the following command:

python -m scripts.docker_build

You should see the following response in the console:

❌ Error: DOCKER_REPO is not set in config/constants.py.

Open the constants.py file, it should look like this:

# TODO: Set the Docker Hub repository before pushing the image.
# Example: "devlinrocha/lilypad-module-sentiment"
DOCKER_REPO = ""

# TODO: Set the tag for the Docker image.
# Example: "latest", "v1.0", or a commit SHA
DOCKER_TAG = "latest"

# TODO: Set the GitHub repository URL where your module is stored.
# Example: "github.com/devlinrocha/lilypad-module-sentiment".
MODULE_REPO = ""

# TODO: Specify the target branch name or commit hash.
# Example: "main" or "c3ed392c11060337cae010862b1af160cd805e67"
TARGET_COMMIT = "main"

For now, we'll be testing the module locally, so all we need to worry about is the DOCKER_REPO variable. We'll use MODULE_REPO when it's time to run the module on Lilypad Network. For help or more information, view the configuration documentation.

You should be able to successfully build the Docker image now.

In the module's Dockerfile, you'll find 3 COPY instructions.

COPY requirements.txt .
COPY src /src
COPY models /models

These instructions bring the requirements.txt file, the src directory, and the models directory into the Docker image. It's important to remember that any modifications to these files or directories will necessitate a rebuild of the module's Docker image to ensure the changes are reflected in the container.

Running Your Module

It's finally time to see your module in action.

Local

Let's start by running it locally.

python -m scripts.run_module --local

The CLI should ask you for an input. Enter whatever you like and hit enter. The module will analyze the sentiment of your input and output the results at outputs/result.json.

{
  "input": "LILYPAD IS AWESOME",
  "result": "POSITIVE",
  "status": "success"
}

You just used a local LLM! 🎉

Lilypad Network

Before you can run the module on Lilypad Network, you'll need to push the Docker image to Docker Hub.

python -m scripts.docker_build --push

While the Docker image is being built and pushed, you should configure the rest of the variables in constants.py. Make sure that you push your code to a public GitHub repository.

Since these variables are only used in scripts, and not in any src code that gets used in the Docker image, we won't need to rebuild after making these changes.

The last thing we'll need to do is edit the Lilypad module configuration file, lilypad_module.json.tmpl. For the purposes of this module, the default configuration is mostly correct. However, the "Image" field needs to be configured.

Replace the default value with your Docker Hub username, module image, and tag.

"Image": "devlinrocha/lilypad-module-sentiment:latest"

Once your Docker image is pushed to Docker Hub and your most recent code is pushed to a public GitHub repository, you can test your module on Lilypad's DemoNet by replacing the --local flag with --demonet.

python -m scripts.run_module --demonet

You can also remove the --demonet flag and supply your WEB3_PRIVATE_KEY to run the module on Lilypad's IncentiveNet.

You just used an LLM on Lilypad's decentralized network! 🎉

Now anyone who has the Lilypad CLI installed can also run your module:

lilypad run github.com/<MODULE_REPO>:<TARGET_COMMIT> --web3-private-key <WEB3_PRIVATE_KEY>

Configuration

Configure your Lilypad module

Configure Your Module

After bootstrapping your module, additional configuration is required to run it.

.env

WEB3_PRIVATE_KEY

🚨 DO NOT SHARE THIS KEY 🚨

The private key for the wallet that will be used to run the job.

This is required to run the module on Lilypad Network.

A new development wallet is highly recommended to use for development. The wallet must have enough LP tokens and Arbitrum Sepolia ETH to fund the job.

config/constants.py

DOCKER_REPO

The Docker Hub repository storing the container image of the module code.

This is required to push the image to Docker Hub and run the module on Lilypad Network.

e.g. "<dockerhub_username>/<dockerhub_image>"

DOCKER_TAG

The specific tag of the DOCKER_REPO containing the module code.

Default: "latest"

MODULE_REPO

The URL for the GitHub repository storing the lilypad_module.json.tmpl file. The visibility of the repository must be public.

The lilypad_module.json.tmpl file points to a DOCKER_REPO and Lilypad runs the module from the image.

e.g. "github.com/<github_username>/<github_repo>"

TARGET_COMMIT

The git branch or commit hash that contains the lilypad_module.json.tmpl file you want to run.

Use git log to easily find commit hashes.

Default: "main"

Available Scripts

Your module will be bootstrapped with some handy scripts to help you download the model(s) for your module, build and push Docker images, and run your module locally or on Lilypad Network. Some additional configuration may be required.

In the project directory, you can run:

python -m scripts.download_models

No matter which model you are using, be sure to thoroughly read the documentation to learn how to properly download and use the model locally.

python -m scripts.docker_build

Builds and optionally publishes a Docker image for the module to use.

For most use cases, this script should be sufficient and won't require any configuration or modification (aside from setting your DOCKER_REPO and DOCKER_TAG).

In the module's Dockerfile, you'll find 3 COPY instructions.

COPY requirements.txt .
COPY src /src
COPY models /models

These instructions copy the requirements.txt file, the src directory, and the models directory from your local machine into the Docker image. It's important to remember that any modifications to these files or directories will necessitate a rebuild of the module's Docker image to ensure the changes are reflected in the container.

--push Flag

Running the script with --push passed in pushes the Docker image to Docker Hub.

--no-cache Flag

Running the script with --no-cache passed in builds the Docker image without using the cache. This flag is useful if you need a fresh build to debug caching issues, force system or dependency updates, pull the latest base image, or ensure clean builds in CI/CD pipelines.

python -m scripts.run_module

This script is provided for convenience to speed up development. It is equivalent to running the Lilypad module with the provided input and private key (unless running the module locally, then no private key is required). Depending on how your module works, you may need to change the default behavior of this script.

--local Flag

Running the script with --local passed in runs the Lilypad module Docker image locally instead of on Lilypad's Network.

--demonet Flag

Running the script with --demonet passed in runs the Lilypad module Docker image on Lilypad's Demonet.

lilypad_module.json.tmpl

The default lilypad_module.json.tmpl file is below. Make sure to update the Docker Image to point to your Docker Hub image with the correct tag.

{
  "machine": { "gpu": 1, "cpu": 8000, "ram": 16000 },
  "gpus": [{ "vram": 24576 }],
  "job": {
    "APIVersion": "V1beta1",
    "Spec": {
      "Deal": { "Concurrency": 1 },
      "Docker": {
        "Entrypoint": [
          "/app/src/run_model", {{ .request }}
        ],
        "Image": "DOCKER_HUB_USERNAME/DOCKER_IMAGE@INDEX_DIGEST"
      },
      "Engine": "Docker",
      "Network": { "Type": "None" },
      "Outputs": [{ "Name": "outputs", "Path": "/outputs" }],
      "Resources": { "GPU": "1", "CPU": "8", "Memory": "16Gb" },
      "Timeout": 1800,
      "Verifier": "Noop"
    }
  }
}

The default lilypad_module.json.tmpl should work for low complexity modules. If your module requires additional resources (such as a GPU) make sure to configure the applicable fields.

Template Fields

  • Machine: Specifies the system resources.

  • GPUs: Specifies the minimum VRAM required.

  • Job: Specifies the job details.

    • APIVersion: Specifies the API version for the job.

    • Metadata: Specifies the metadata for the job.

    • Spec: Contains the detailed job specifications.

      • Deal: Sets the concurrency to 1, ensuring only one job instance runs at a time.

      • Docker: Configures the Docker container for the job

        • WorkingDirectory: Defines the working directory of the Docker image.

        • Entrypoint: Defines the command(s) to be executed in the container as part of its initial startup runtime.

        • EnvironmentVariables: This can be utilised to set env vars for the container's runtime; in the example above, we use Go templating to set the INPUT variable dynamically from the CLI.

        • Image: Specifies the image to be used (DOCKERHUB_USERNAME/IMAGE:TAG).

      • Engine: Sets the container runtime (Default: "Docker").

      • Network: Specifies that the container does not require networking (Default: "Type": "None").

      • Outputs: Specifies name and path of the directory that will store module outputs.

      • Resources: Specify additional resources.

      • Timeout: Sets the maximum duration for the job. (Default: 600 [10 minutes]).

Developer Tutorials

A collection of blogs and videos for developers :)

Guides

Video Tutorials

Helpful Resources

AI Resources

GPU Info

Bacalhau Resources

Folder Structure

The folder structure output from using `create-lilypad-module`

Output

After creation, your project should look like this:

project_name
├── config
│   └── constants.py
├── scripts
│   ├── docker_build.py
│   ├── download_models.py
│   └── run_module.py
├── src
│   └── run_inference.py
├── .dockerignore
├── .env
├── .gitignore
├── Dockerfile
├── lilypad_module.json.tmpl
├── README.md
└── requirements.txt

For the module to run, these files must exist with exact filenames:

  • src/run_inference.py

    • The Dockerfile ENTRYPOINT.

    • If you change this file's name or location, you must also update the ENTRYPOINT in your Dockerfile and lilypad_module.json.tmpl file to match.

  • config/constants.py

    • The configuration file that stores the DOCKER_REPO, DOCKER_TAG, MODULE_REPO, and TARGET_COMMIT.

    • If you change this file's name or location, you must also update the import statements in scripts/docker_build.py and scripts/run_module.py.

  • Dockerfile

    • Required to build your module into a Docker image, and push the image to Docker Hub where it can be accessed by Lilypad Network.

  • requirements.txt

    • Used by the Dockerfile to install dependencies required by your module.

    • Technically, this file can be deleted or renamed, but this naming convention is highly recommended as an industry standard best practice.

  • lilypad_module.json.tmpl

    • The Lilypad configuration file.

You can delete or rename the other files.

You may create subdirectories inside src. For faster builds and smaller Docker images, only files inside src are copied by Docker. You need to put any files required to run your module inside src, otherwise Docker won’t copy them.

You can create more top-level directories. They will not be included in the final Docker image so you can use them for things like documentation.

If you have Git installed and your project is not part of a larger repository, then a new repository will be initialized resulting in an additional top-level .git directory.

Waterlily.ai

Waterlily.ai - An Ethical Generative AI-Art DApp

Waterlily.ai is a proof-of-concept application aiming to provide an alternative ethical solution to AI-Art generation by creating a new paradigm for AI image generation that pays royalties to the original creator and offers a new revenue stream for artists to explore.

To achieve this, Waterlily leverages the transparency & trustlessness of blockchain, with smart contracts, combined with a decentralised edge compute platform in Lilypad to deliver an intuitive user experience for generative AI-Art while compensating creators for their original work with every image generation call.

Each stable diffusion model on Waterlily is trained & finetuned to a selection of artist work that has been uploaded by the artist themselves or is available in the public domain. When a user navigates to the platform, they simply enter a text prompt for the images they want to create and choose an artist style that appeals to them. The small fee paid by the user is then distributed to the artist's wallet (except the fees required by the network to run the contract calls & return the generated images) - or to a vetted creator foundation in the case of public images.

A user can then opt to download these generated images or mint them as NFTs.

Video

Find out more about Waterlily.ai

JS CLI Wrapper

Run the Lilypad CLI wrapper locally

Note: This is a beta tool and is mostly expected to run locally. When implementing this tool, note that the POST request includes the user's Web3 private key. Stay tuned for a hosted API from Lilypad that will supplement this local CLI Wrapper.

Getting Started

Prerequisites

Installing and Setting up Lilypad Binary

  1. Build the Lilypad binary:

git clone https://github.com/Lilypad-Tech/lilypad
cd lilypad
go build -v -o lilypad

# For Linux: Move to /usr/bin
sudo mv lilypad /usr/bin/

# For Mac: Move to /usr/local/bin
sudo mv lilypad /usr/local/bin

Usage

In inputs, each input must be preceded by the -i flag, including tunables. For example: "inputs": "-i Prompt='an astronaut floating against a white background' -i Steps=50"

Note: This tool is for demonstration purposes and can be used to run jobs on Lilypad for free. The tooling will be improved upon in the coming weeks for greater scalability. Use the following post request with the WEB3_PRIVATE_KEY below to run jobs on Lilypad. The wallet/private key below is funded with testnet tokens only and has been setup to simplify the use of this developer tool.

The endpoint can then be tested using curl:

curl -X POST http://localhost:3000 \
-H "Content-Type: application/json" \
-d '{"pk": "'"your-private-key"'", "module": "github.com/lilypad-tech/lilypad-module-lilysay:0.1.0", "inputs": "-i Message=test"}'

Ecosystem Projects

Projects built with Lilypad

Explore real-world applications leveraging the Lilypad Network!

End User Apps

EthGlobal HackFS - May-June '24

EthGlobal Sydney - May '24

EthGlobal Istanbul - November '23

ETHOnline - October '23

Open Data Hack - September '23

EthGlobal Paris - July '23

AugmentHack Paris - July '23

HackFS Online - June '23

Get an API key from the Anura website.

An object specifying the format that the model must output.

If set to true, the model response data will be streamed to the client as it is generated using server-sent events.

Lilypad makes it easy to deploy and run AI models with an AI model hub called the Lilypad Module Marketplace.

Currently set up in the awesome-lilypad github repo, the Module Marketplace makes it easy for AI model creators to distribute their work and for the community to quickly get started running a wide range of open source AI models.

Use the Lilypad AI inference API or install our CLI.

Additional modules from the community are available in the awesome-lilypad repo.

A Lilypad Module is a standard containerized (Docker) process for running compute workloads on Lilypad.

Modules can be developed by any member of the community (participating as a 'Module Creator'). If there is an AI model (or other compute job) you wish to run using the Lilypad Network resources, create a new Module and add it to the Module Marketplace!

These instructions provide steps for running the Llama2 module on the Lilypad network using Docker and the Lilypad CLI. Find the module repo here.

Before running llama2, make sure you have the Lilypad CLI installed on your machine and your private key environment variable is set. This is necessary for operations within the Lilypad network.

Learn more about installing the Lilypad CLI and running a Lilypad job with this video guide.

For a more in-depth look at building modules, refer to this end-to-end guide.

If you're new to Docker, consider exploring this step-by-step tutorial on creating, building, and running a Docker image for a simple Hello World style application.

To view the full list of available modules on Lilypad, please check out the awesome-lilypad repo!

If the job run appears to be stuck after a few minutes (sometimes it takes time for the Module to download to the RP node), cancel the job and try again. Open a ticket in Discord with any issues that persist.


Before running cowsay, make sure you have the Lilypad CLI installed on your machine and your private key environment variable is set. This is necessary for operations within the Lilypad network.

Getting Started: install and run create-lilypad-module

Folder Structure: output and explanation of create-lilypad-module files

Configuration: requirements and explanations of Lilypad module configuration

Creating Your Module: a step-by-step guide on how to create a simple Lilypad module using create-lilypad-module

These instructions provide steps for running the SDXL Turbo Pipeline module on the Lilypad network using Docker and Lilypad CLI. Find the module repo here.

Before running sdxl turbo, make sure you have the Lilypad CLI installed on your machine and your private key environment variable is set. This is necessary for operations within the Lilypad network.

Learn more about installing the Lilypad CLI and running a Lilypad job with this video guide.

This guide will walk you through creating a basic sentiment analysis module using create-lilypad-module and distilbert-base-uncased-finetuned-sst-2-english (which will be referred to as Distilbert from now on). We will be referring back to the Hugging Face page throughout this guide, so it's best to keep it open and accessible.

To build and run a module on Lilypad Network, you'll need to have the Lilypad CLI, Docker, and Python installed on your machine, as well as GitHub and Docker Hub accounts.

For this guide, we'll be using create-lilypad-module, which requires Python and uses Docker.

A basic outline for downloading a model from Hugging Face is provided in scripts/download_models.py. The structure of the script and the methods for downloading a model can differ between models and libraries. It's important to tailor the process to the specific requirements of the model you're working with.

To find the dependencies that our model requires, we can refer back to Distilbert's Hugging Face page and click on the "Use this model" dropdown, where you will see the Transformers library as an option. Click it.

You should see a handy modal explaining how to use the model with the Transformers library. For most models, you'd want to use this. However, Distilbert has a specific tokenizer and model class. Close the modal and scroll to the "How to Get Started With the Model" section of the model card. We're going to use this instead.

torch is a collection of APIs for extending PyTorch's core library of operators.

If we take a look at the model's Hugging Face page, we can use the copy button next to the name of the model to get the MODEL_IDENTIFIER. Paste that in as the value.

A basic outline for downloading a model from Hugging Face is provided, but the structure of the script and the methods for downloading a model can differ between models and libraries. It's important to tailor the process to the specific requirements of the model you're working with.

Most (but not all) models that utilize machine learning use the 🤗 Transformers library, which provides APIs and tools to easily download and train pretrained models.

- Vitalik Buterin

- Galaxy Digital Research

Nvidia CUDA

- Juan Benet

- Juan Benet

- Juan Benet & Molly Mackinlay

- David Aronchick

- David Aronchick


The Lilypad CLI wrapper can run locally to create an API endpoint for running jobs on the Lilypad network. This gives developers full control to build a decentralized system running jobs on Lilypad. The GitHub repo can be found here.

Build an app or AI agent workflow that uses this API endpoint for running jobs! For inspiration, check out the JS CLI wrapper + Gradio example. Spin up a web UI with Gradio and use the API with a frontend!

Run node src/index.js to create a local endpoint using the js wrapper with either src/run.js or src/stream.js, then send a post request containing json with your funded WEB3_PRIVATE_KEY key set; see the quick start for more on funding your wallet.

Anura website
Lilypad Module Marketplace
awesome-lilypad github repo
AI inference API
install our CLI
awesome-lilypad repo
Lilypad Module
add it to the Module Marketplace
Module Builder Guide
here
Lilypad CLI installed
video guide
end-to-end guide
step-by-step tutorial

Llama2

An open-source language model, used for generating and understanding text.

Stable Diffusion Turbo Pipeline

A fast, high-performance version of Stable Diffusion XL for generating high-quality images with minimal latency

Cowsay

A playful CLI tool that displays text as if spoken by an ASCII-art cow

the awesome-lilypad repo
Discord
Cowsay
Llama2
SDXL-turbo pipeline
create-lilypad-module
lora-training
lora-inference
duckdb
Lilypad CLI installed
Getting Started
Folder Structure
Configuration
Creating Your Module
export WEB3_PRIVATE_KEY=<YOUR_PRIVATE_KEY>
lilypad run github.com/Lilypad-Tech/module-sdxl:d6a89ed92f4e798459b2990340669da00c56c80c -i prompt="your prompt here"
lilypad run github.com/Lilypad-Tech/module-sdxl:d6a89ed92f4e798459b2990340669da00c56c80c -i prompt="a spaceship parked on a mountain"
open /tmp/lilypad/data/downloaded-files/<fileID>
lilypad run --network demonet github.com/DevlinRocha/lilypad-module-sentiment:main --web3-private-key 0ec38dd1ee0898dae8460b269859b4fb3cb519b35d82014c909ec4741c790831 -i input="LILYPAD IS AWESOME"
{
  "input": "LILYPAD IS AWESOME",
  "sentiment": "POSITIVE",
  "status": "success"
}
python -m scripts.download_models
❌ Error: Model download script is not configured.
👉 /scripts/download_models.py
# TODO: Update ../requirements.txt
# from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification
# torch==2.4.1
# transformers==4.47.1
# TODO: Set this to your model's Hugging Face identifier
MODEL_IDENTIFIER = ""
MODEL_IDENTIFIER = "distilbert/distilbert-base-uncased-finetuned-sst-2-english"
# TODO: Initialize `model` and `tokenizer`
# tokenizer = AutoTokenizer.from_pretrained(MODEL_IDENTIFIER)
# model = AutoModelForSequenceClassification.from_pretrained(MODEL_IDENTIFIER)
tokenizer = DistilBertTokenizer.from_pretrained(MODEL_IDENTIFIER)
model = DistilBertForSequenceClassification.from_pretrained(MODEL_IDENTIFIER)
python -m scripts.download_models
python -m scripts.run_module
❌ Error: No job configured. Implement the module's job before running the module.
        1. Implement job module
                👉 /src/run_inference.py
        2. Delete this code block
                👉 /scripts/run_module.py
# TODO: Update ../requirements.txt
# import torch
# from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
# TODO: Initialize `model` and `tokenizer`
# tokenizer = AutoTokenizer.from_pretrained(MODEL_DIRECTORY)
# model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_DIRECTORY)
inputs = tokenizer(
    input,
    return_tensors="pt",
    truncation=True,
    padding=True,
)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]

return output
output = model.config.id2label[predicted_class_id]

return output
# TODO: Remove the following print and sys.exit statements and create the module job.
print(
    "❌ Error: No job configured. Implement the module's job before running the module.",
    file=sys.stderr,
    flush=True,
)
print("\t1. Implement job module")
print("\t\t👉 /src/run_inference.py")
print("\t2. Delete this code block")
print("\t\t👉 /scripts/run_module.py")
sys.exit(1)
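For reference, the pieces above assemble into something like the following minimal run_inference sketch. The run_job function name and the local models directory are illustrative assumptions, not a signature the template requires.

# src/run_inference.py -- minimal assembled sketch
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

MODEL_DIRECTORY = "models"  # assumed: the directory download_models.py saved to

tokenizer = DistilBertTokenizer.from_pretrained(MODEL_DIRECTORY)
model = DistilBertForSequenceClassification.from_pretrained(MODEL_DIRECTORY)

def run_job(input_text: str) -> str:
    # Tokenize the input into tensors the model expects
    inputs = tokenizer(input_text, return_tensors="pt", truncation=True, padding=True)
    # Run inference without tracking gradients
    with torch.no_grad():
        logits = model(**inputs).logits
    # Map the highest-scoring class id back to its label ("POSITIVE"/"NEGATIVE")
    predicted_class_id = logits.argmax().item()
    output = model.config.id2label[predicted_class_id]
    return output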
python -m scripts.docker_build
❌ Error: DOCKER_REPO is not set in config/constants.py.
# TODO: Set the Docker Hub repository before pushing the image.
# Example: "devlinrocha/lilypad-module-sentiment"
DOCKER_REPO = ""

# TODO: Set the tag for the Docker image.
# Example: "latest", "v1.0", or a commit SHA
DOCKER_TAG = "latest"

# TODO: Set the GitHub repository URL where your module is stored.
# Example: "github.com/devlinrocha/lilypad-module-sentiment".
MODULE_REPO = ""

# TODO: Specify the target branch name or commit hash.
# Example: "main" or "c3ed392c11060337cae010862b1af160cd805e67"
TARGET_COMMIT = "main"
COPY requirements.txt .
COPY src /src
COPY models /models
python -m scripts.run_module --local
{
  "input": "LILYPAD IS AWESOME",
  "result": "POSITIVE",
  "status": "success"
}
python -m scripts.docker_build --push
"Image": "devlinrocha/lilypad-module-sentiment:latest"
python -m scripts.run_module --demonet
lilypad run github.com/<MODULE_REPO>:<TARGET_COMMIT> --web3-private-key <WEB3_PRIVATE_KEY>
WEB3_PRIVATE_KEY = ""
DOCKER_REPO = ""
MODULE_REPO = ""
TARGET_COMMIT = ""
COPY requirements.txt .
COPY src /src
COPY models /models
{
  "machine": { "gpu": 1, "cpu": 8000, "ram": 16000 },
  "gpus": [{ "vram": 24576 }]
  "job": {
    "APIVersion": "V1beta1",
    "Spec": {
      "Deal": { "Concurrency": 1 },
      "Docker": {
        "Entrypoint": [
          "/app/src/run_model", {{ .request }}
        ],
        "Image": "DOCKER_HUB_USERNAME/DOCKER_IMAGE@INDEX_DIGEST"
      },
      "Engine": "Docker",
      "Network": { "Type": "None" },
      "Outputs": [{ "Name": "outputs", "Path": "/outputs" }],
      "Resources": { "GPU": "1", "CPU": "8", "Memory": "16Gb" },
      "Timeout": 1800,
      "Verifier": "Noop"
    }
  }
}
project_name
├── config
│   └── constants.py
├── scripts
│   ├── docker_build.py
│   ├── download_models.py
│   └── run_module.py
├── src
│   └── run_inference.py
├── .dockerignore
├── .env
├── .gitignore
├── Dockerfile
├── lilypad_module.json.tmpl
├── README.md
└── requirements.txt
git clone https://github.com/Lilypad-Tech/lilypad
cd lilypad
go build -v -o lilypad

# For Linux: Move to /usr/bin
sudo mv lilypad /usr/bin/

# For Mac: Move to /usr/local/bin
sudo mv lilypad /usr/local/bin
curl -X POST http://localhost:3000 \
-H "Content-Type: application/json" \
-d '{"pk": "'"your-private-key"'", "module": "github.com/lilypad-tech/lilypad-module-lilysay:0.1.0", "inputs": "-i Message=test"}'
// run.js
const { run } = require("./")

run(
  "private-key",
  "module name"
  '-i payload (key=value)'
).then((res) => {
  console.log(res)
})
// stream.js
const { stream } = require("./")

stream(
  "private-key",
  "module name"
  '-i payload (key=value)'
  { stream: true },
).then(() => {
  console.log("Result in ./output/result")
})
// index.js
const fetch = require("node-fetch")
const fs = require("fs")

const URL = "http://js-cli-wrapper.lilypad.tech"
const METHOD = "POST"
const HEADERS = {
  Accept: "application/json",
  "Content-Type": "application/json",
}
const OUTPUT = "./output"

function stream(pk, module, inputs, opts) {
  const body = JSON.stringify({ pk, module, inputs, opts })

  return fetch(URL, {
    headers: HEADERS,
    method: METHOD,
    body,
  }).then(function (res) {
    // Pipe the response body to a file and resolve once the write completes
    return new Promise((resolve, reject) => {
      const fileStream = fs.createWriteStream(`${OUTPUT}/result`)
      res.body.pipe(fileStream)
      res.body.on("error", reject)
      fileStream.on("finish", () => resolve({ status: "done" }))
    })
  })
}

function run(pk, module, inputs) {
  const body = JSON.stringify({ pk, module, inputs })

  return fetch(URL, {
    headers: HEADERS,
    method: METHOD,
    body,
  }).then((raw) => raw.json())
}

module.exports = { run, stream }

Build a Lilypad Module

Run Lilypad Modules

CLI Wrapper Frontend

Encode x Filecoin Open Data Hack Video 2023

Lilypad Youtube

Quickstart
create-lilypad-module live workshop
Wes Floyd talks about Waterlily.ai
Learn more
server-sent events
here
Lilypad CLI installed
video guide
create-lilypad-module
distilbert/distilbert-base-uncased-finetuned-sst-2-english
View the final source code for the module
Lilypad CLI
Python
Docker
GitHub
Docker Hub
create-lilypad-module
pip
Python
Hugging Face
🤗 Transformers
Learn more about downloading models from Hugging Face
Read the 🤗 Transformers documentation
How to Get Started With the Model
PyTorch
Learn more about using datasets with Pytorch
Read the torch documentation
Distilbert Hugging Face page
Funding your wallet
Hugging Face
🤗 Transformers
Learn more about downloading models from Hugging Face
Learn more about the 🤗 Transformers library
RAG vs Finetuning
The promise and challenges of crypto + AI
Data on Notable AI Models
Towards decentralized AI
Understanding the Intersection of Crypto and AI
How do graphics cards work?
Toolkit
Nvidia CUDA in 100 Seconds
Bacalhau.org
Impact of Compute over Data
CoD vision & goals
Lessons for building Distributed Compute
State of CoD 2023
Containers at the Edge
Waterlily.ai
FVM
Bacalhau
Waterlily.ai
here
front end
this

Game Theory

Adding Trust to the Network

Introduction

Lilypad is a verifiable trustless decentralized compute network that aims to prevent cheating in the network. The network consists of clients and compute nodes. The main goal is to establish a game theoretic approach to verifiable computing, where clients can trust the results they receive from compute nodes. The approach used in Lilypad is pure verification by replication, without relying on cryptographic tools like SNARKs or trusted execution environments.

Global Consensus vs Local Consensus

In the context of Lilypad, it's important to understand the difference between global consensus and local consensus. Global consensus, as seen in blockchains, ensures that every node knows the result of a computation is correct due to massive replication across many nodes. However, in Lilypad's two-sided marketplace, only the client needs to be convinced that the result is correct. Local consensus is sufficient for the client, while other nodes like verifying nodes, requesters, and market makers may also need to be convinced.

Adversary Model

Lilypad assumes non-malicious but utility-maximizing agents, including both clients and compute nodes. Utility-maximizing means that nodes will do anything to maximize their return, including returning false results if necessary. However, they are not malicious and won't purposely send back false results. The adversary model assumes that all nodes are utility-maximizing and does not consider malicious behavior.

Good Solutions

Lilypad aims to achieve the best outcome, where nodes never have any incentive to cheat. If that's not possible, the goal is to minimize the incentive to cheat as a function of the protocol's parameters. Another option is to achieve the first two outcomes under simplifying assumptions, such as having a fraction of honest nodes in every mediation protocol. Lilypad also considers the possibility of not being able to achieve any of these outcomes and aims to understand the limitations.

Reinforcement Learning Approach

Lilypad takes an adversary-first approach by designing the most powerful adversary possible and then optimizing against it. The team uses multi-agent reinforcement learning to train agents to act as utility-maximizing agents on behalf of clients and compute nodes. Reinforcement learning has shown impressive results in various domains, and Lilypad aims to leverage its capabilities to train agents that can maximize their utility in the network.

Anti-Cheating Mechanisms

Lilypad plans to test various anti-cheating mechanisms once the reinforcement learning agents are implemented. These mechanisms include:

  1. Consortium of mediators: A group of trusted mediators can check the results, reducing the likelihood of cheating.

  2. Prediction markets and staking: Nodes can stake behind other nodes and lose their collateral if the node they stake behind is found to have cheated.

  3. Taxes and jackpots: A tax is imposed on every job, and the taxes go into a jackpot that is awarded to nodes that find other nodes to have cheated.

  4. Reputation: Nodes can build up a reputation based on the number of jobs they've done and the honesty of their results.

  5. Storing inputs and outputs: Keeping inputs and outputs for longer periods allows for easier verification but increases collateralization requirements.

  6. Frequency of checking results: Autonomous agents can decide when to check results, balancing the need for verification with the cost of collateralization.
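To make "minimize the incentive to cheat" concrete, here is a toy expected-utility calculation. The payoff model and every number are illustrative assumptions, not Lilypad protocol parameters:

# Toy model: does cheating pay for a utility-maximizing compute node?
def expected_cheating_gain(reward, saved_cost, collateral, check_probability):
    # Unchecked, a cheater keeps the job reward plus the compute cost it avoided
    gain_if_unchecked = reward + saved_cost
    # Checked (via replication or mediation), a cheater forfeits its collateral
    return (1 - check_probability) * gain_if_unchecked - check_probability * collateral

# Cheating becomes unprofitable once p * collateral outweighs (1 - p) * (reward + saved cost)
print(expected_cheating_gain(reward=10, saved_cost=5, collateral=100, check_probability=0.2))  # -8.0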

Future Developments

In the future, Lilypad aims to take into account the preferences of nodes, such as computational requirements, time requirements, and scheduling requirements. The long-term goal is to have compute nodes and clients negotiate with each other automatically over these aspects of a job. Lilypad acknowledges that game-theoretic verifiable computing is a less studied form of verifiable computing compared to zero-knowledge proofs and trusted execution environments. However, the team is committed to exploring this approach and conducting rigorous research to find effective solutions.

Conclusion

Lilypad's game-theoretic approach to verifiable computing aims to prevent cheating in a decentralized compute network. By using reinforcement learning and testing various anti-cheating mechanisms, Lilypad strives to create a trustless environment where clients can have confidence in the results they receive from compute nodes. The team is actively working on implementing reinforcement learning agents and conducting simulations to evaluate the effectiveness of different strategies.

Extra Labs x Lilypad

3D Mapping on Lilypad

This is a work-in-progress proof of concept utilizing Extra Labs and Lilypad to create a Lilypad module for running multiple geospatial jobs.

The goal is to provide the end-to-end adapter for users to operate on the point cloud data to generate reconstructed maps.

Extra Labs is revolutionizing the way maps are built by leveraging collaborative acquisition and deep learning to produce affordable, detailed 3D models of cities and territories.

To achieve this proof of concept, the Lilypad module will integrate with Extra Labs' platform, allowing users to submit their geospatial data, whether collected via drones, aircraft, delivery vehicles, or smartphones. This data is processed through advanced algorithms to create detailed 3D reconstructions.

The decentralized nature of the platform ensures that data providers are compensated fairly for their contributions through a blockchain-based reward system. This approach not only democratizes access to 3D mapping technologies but also ensures continuous and up-to-date data acquisition, enhancing the accuracy and detail of the generated maps.

Users should be able to easily interact with the platform, upload their data, and receive high-quality 3D models in return. This process is designed to make advanced geospatial mapping accessible to a wide range of users, from urban planners and architects to developers and hobbyists.

HubSpot-LinkedIn Agent

An agent-based system that monitors HubSpot for recently modified contacts, fetches enriched data from LinkedIn, and updates HubSpot automatically.

Agent Details

This workflow serves as a proof of concept (PoC) illustrating the capabilities of agents to interface with platform APIs, specifically those of HubSpot and RapidAPI.

Getting Started

Prerequisites

  • Python 3.8+

  • A HubSpot account with a Private App token

  • A RapidAPI account with a LinkedIn Data API key

  • An OpenAI-compatible API

    • Choose the model you would like to use

Setup

  1. Clone the Repository

    git clone https://github.com/mavericb/HubSpot-LinkedIn-AgentSync.git
    cd HubSpot-LinkedIn-AgentSync
  2. Install Dependencies

    pip install -r requirements.txt

    Note: If requirements.txt is missing, install:

    pip install python-dotenv hubspot-api-client requests schedule phi
  3. Configure Environment Variables

    • Create a .env file:

      touch .env
    • Add your keys:

      HUBSPOT_ACCESS_TOKEN=your-private-app-token
      RAPIDAPI_KEY=your-default-app-token
      API_KEY=your-openai-compatible-key
      BASE_URL=https://anura-testnet.lilypad.tech/api/v1  # Adjust for your provider
      AGENT_MODEL=qwen2.5:7b  # Adjust for your provider
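Before starting the daemon, a short connectivity check like the sketch below (not part of the repository) can confirm the BASE_URL, API_KEY, and AGENT_MODEL values work against the OpenAI-compatible chat completions endpoint:

# check_env.py -- illustrative connectivity check, not part of the repo
import os

import requests
from dotenv import load_dotenv

load_dotenv()  # reads BASE_URL, API_KEY, AGENT_MODEL from .env

response = requests.post(
    f"{os.environ['BASE_URL']}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json={
        "model": os.environ["AGENT_MODEL"],
        "messages": [{"role": "user", "content": "Reply with OK."}],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])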

Usage

  1. Start the Daemon

    python main.py
    • Checks HubSpot every minute for contacts modified in the last 7 days (or since last_processed.json).

    • Fetches LinkedIn data and updates HubSpot.

    • Runs for 5 minutes (adjustable in main.py).

  2. Monitor Logs

    • Look for "Found X contacts", "HubSpot update: ...", "Attempting to save last_processed timestamp".

    • If no contacts are found, last_processed.json won’t save—edit a HubSpot contact to test.

File Structure

  • main.py: Entry point and daemon logic.

  • /tools/hubspot_tools.py: HubSpot API utilities (fetch/update contacts).

  • /tools/linkedin_tools.py: LinkedIn API utility (fetch profile data).

  • /agents/hubspot_agent.py: Agent for updating HubSpot.

  • /agents/linkedin_agent.py: Agent for fetching LinkedIn data.

RAG Support Agent

A retrieval-augmented generation (RAG) agent that retrieves relevant context and generates AI-powered responses using the Lilypad Network.

The Lilypad RAG Support Agent is a Retrieval-Augmented Generation (RAG) AI assistant that retrieves relevant information and generates AI-powered responses using the Lilypad Network. It enables automated support and troubleshooting by leveraging vector search and AI-based text generation.

Agent Details

  • Context-Aware Responses – Uses all-MiniLM-L6-v2 embeddings to retrieve relevant information.

  • AI-Powered Answer Generation – Sends retrieved context and user query to the Lilypad API, which processes it using Llama3 8B.

  • Customizable Knowledge Base – Modify the agent’s context source (issues.md) to adapt it for different use cases.

How It Works

  1. Embedding Queries with all-MiniLM

    • Converts user queries and stored knowledge into dense vector embeddings for similarity-based retrieval

  2. Retrieving Relevant Context

    • Searches a pre-indexed database to find the most relevant information.

  3. Generating Responses with Llama3 using the Lilypad API

    • Sends retrieved context and user prompt to the Lilypad API, where Llama3 8B generates a structured response
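Condensed, the flow looks roughly like the sketch below. The paragraph chunking, prompt format, and model identifier (llama3.1:8b) are simplifying assumptions rather than the agent's exact code:

# Simplified RAG loop: embed, retrieve the best chunk, generate via the Lilypad API
import os

import requests
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunks = open("issues.md").read().split("\n\n")  # naive paragraph chunking
chunk_embeddings = embedder.encode(chunks, convert_to_tensor=True)

query = "My node is not producing a resource offer"
query_embedding = embedder.encode(query, convert_to_tensor=True)
best = util.cos_sim(query_embedding, chunk_embeddings)[0].argmax().item()

response = requests.post(
    "https://anura-testnet.lilypad.tech/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['LILYPAD_API_TOKEN']}"},
    json={
        "model": "llama3.1:8b",  # assumed identifier for the Llama3 8B model
        "messages": [
            {"role": "system", "content": f"Answer using this context:\n{chunks[best]}"},
            {"role": "user", "content": query},
        ],
    },
    timeout=300,
)
print(response.json()["choices"][0]["message"]["content"])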

Expanding to Your Own Support Agent

The Lilypad RAG Support Agent can be adapted to different projects by modifying its retrieval source.

Updating the Knowledge Base (issues.md)

By default, the agent retrieves information from issues.md, a markdown file containing troubleshooting steps.

To customize:

  • Open issues.md in the repository.

  • Replace or expand the content with relevant support information for your project.

  • Format the content clearly to improve retrieval accuracy.

  • Restart the agent to index the updated knowledge base.

For more advanced use cases, the agent can be extended to support multiple files or external knowledge sources.

Getting Started

Get a Lilypad API Key

Clone the Repository

git clone https://github.com/PBillingsby/lilypad-rag-agent.git
cd lilypad-rag-agent

Configure Your API Key

Export your Lilypad API Token as an environment variable:

export LILYPAD_API_TOKEN="your-api-key-here"

To make it persistent, add it to ~/.bashrc or ~/.zshrc.

Setup

Ensure Python 3 is installed, then run:

pip install -r requirements.txt

Usage

After setting up the API key and dependencies, the agent is ready to process queries using Lilypad’s AI-powered retrieval system.

Run the Agent

Execute the agent from the project's root directory:

python3 cli.py

Resources

AI Tooling

AI tooling using Lilypad

AI tooling provides the essential infrastructure and frameworks that support the development, deployment and monitoring of AI-powered applications. This section highlights tools that help streamline workflows, enhance visibility and enable customization when building with AI on Lilypad, with examples and guides for integrating them into your projects.

AI Oncologist Agent

A research agent template implemented as an AI oncologist. Read papers, isolate important information, and produce actionable reports.

Agent Details

The AI Oncologist serves as a Research agent template that can be used to conduct a wide range of research. The Agent is an intelligent system for analyzing research papers using a multi-agent approach.

The system consists of three main agents:

Paper Relevance Agent

  • Searches through PDF documents in the documents directory

  • Uses embeddings and cosine similarity for initial filtering

  • Verifies relevance using LLM-based analysis

  • Returns a list of most relevant paper filenames

Top Paragraphs Agent

  • Extracts text from identified papers

  • Splits content into manageable chunks

  • Scores paragraph relevance using LLM

  • Returns top-scoring paragraphs with relevance scores

Text Query Agent

  • Analyzes provided text passages

  • Generates focused answers to specific queries

  • Uses contextual understanding to provide accurate responses

Getting Started

Prerequisites

  • Python 3.8+

  • OpenAI API key or compatible API (e.g., DeepSeek)

  • PDF files

Installation

  1. Clone the repository:

git clone https://github.com/mavericb/ai-oncologist.git
cd ai-oncologist
  2. Install required packages:

pip install -r requirements.txt
  3. Create a .env file in the project root with the following variables:

# OpenAI-Like API configuration
BASE_URL="https://api.deepseek.com"
OPENAI_API_KEY=your_deepseek_api_key
MODEL="deepseek-chat"

ANURA_BASE_URL=https://anura-testnet.lilypad.tech
ANURA_API_KEY=your_anura_api_key
ANURA_MODEL=phi4:14b

# Search configuration
MAX_RESULTS=3
SIMILARITY_THRESHOLD=0.3
  • BASE_URL: API endpoint for OpenAI Compatible LLM service (default: "https://api.deepseek.com")

  • MODEL: One OpenAI Compatible model to use (default: "deepseek-chat")

  • ANURA_BASE_URL: API endpoint for the Anura LLM service (default: "https://anura-testnet.lilypad.tech")

  • ANURA_MODEL: The Anura model to use (default: "phi4:14b")

  • MAX_RESULTS: Maximum number of papers to return (default: 3)

  • SIMILARITY_THRESHOLD: Minimum similarity score for document selection (default: 0.3)
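Conceptually, the Paper Relevance Agent's embedding-based first pass combines these settings along the lines of the sketch below; the function shape and embedding model are assumptions, not the repository's code:

# Sketch of the first-pass filter driven by MAX_RESULTS and SIMILARITY_THRESHOLD
from typing import Dict, List

from sentence_transformers import SentenceTransformer, util

MAX_RESULTS = 3
SIMILARITY_THRESHOLD = 0.3

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def rank_papers(query: str, papers: Dict[str, str]) -> List[str]:
    # Score each paper's text against the query by cosine similarity
    query_vec = embedder.encode(query, convert_to_tensor=True)
    scored = []
    for filename, text in papers.items():
        score = util.cos_sim(query_vec, embedder.encode(text, convert_to_tensor=True)).item()
        if score >= SIMILARITY_THRESHOLD:  # drop clearly irrelevant papers early
            scored.append((score, filename))
    # Keep only the top MAX_RESULTS for the more expensive LLM relevance check
    return [name for _, name in sorted(scored, reverse=True)[:MAX_RESULTS]]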

Usage

  1. Place your PDF research papers in the documents/ directory.

  2. Run the main script:

python AIOncologist.py

Resources

Agents

AI agents on Lilypad

AI agents can autonomously process information, make decisions and interact with users or systems. This section explores different use cases for AI agents on Lilypad, providing examples and step-by-step guides on how to build and run them using the network.

Anything LLM

Tutorial: Using Lilypad x AnythingLLM for Desktop LLM Client

Getting Started

Prerequisites:

  • A basic understanding of the Lilypad API and Anything LLM

  • Access to an Anura Dashboard account

Set up Open AI Endpoint in AnythingLLM

  1. Start AnythingLLM App.

  2. Navigate to the Settings page by clicking the wrench icon at the bottom left

  3. Click on LLM under AI Providers

  4. Select Generic OpenAI under LLM Provider

Obtain an Anura API Key

  1. Click Get Started

  2. Create a new API key by clicking the “Create API Key” button.

  3. Copy the generated API key and paste it into the API Key input field on the LLM Screen.

Specify Chat Model Name

  1. Paste in "llama3.1:8b"

Lilypad ML Workbench

A developer platform for using the Lilypad network to train and run AI models

The ML workbench provides an interface to:

  • Run models that are currently available on the Lilypad GPU network

  • Leverage Jupyter notebooks with Lilypad

  • Create multi-module flows

  • Fine tune models

  • Create an AI agent

AI Inference

Run existing Lilypad modules with a simple text or image prompt. The workbench will output an IPFS CID with the result.

Implement the Lilypad module allowlist and cache the models locally before they are needed! Pin the files to a local IPFS node packaged with the workbench platform.

Finetuning

Import a dataset and use the "Data" tool to finetune a model for a specific end user. Create a Lilypad module and save the module on IPFS. More on this soon!

Create an AI agent and run on Lilypad

The ML workbench provides a simple interface for creating and training AI agents. The Lilypad team is currently testing a variety of models and fine tuning techniques to determine the optimal path forward for this feature. More on this soon!

Resources

Rivet

Tutorial: Using Lilypad x Rivet for Generating Narratives with Anura

Getting Started

Prerequisites:

  • A basic understanding of the Lilypad API and Rivet

  • Access to an Anura Dashboard account

  • Basic knowledge of workflow configuration

Set up Open AI Endpoint in Rivet

  1. Start Rivet App.

  2. Navigate to the Settings page (App > Settings)

  3. Click on the “Open AI” tab.

Obtain an Anura API Key

  1. Click Get Started

  2. Create a new API key by clicking the “Create API Key” button.

  3. Copy the generated API key and paste it into the Open AI API key input field in Rivet.

Integrations

Built with Lilypad

This section features hands-on tools and resources designed to support developers building with Lilypad. From code editor extensions to social app integrations, it highlights ways to prototype, automate and ship AI jobs more efficiently. Explore how these utilities fit into real workflows and help extend the power of the network.

VS Code Helper Extension

An integrated development environment for running AI workloads on the Lilypad Network directly within Visual Studio Code.

The Lilypad VS Code Extension allows developers to interact with the Lilypad Anura API within Visual Studio Code. By highlighting code blocks, users can query the available LLMs on Lilypad for explanations, improvements, or suggestions, receiving responses in a formatted webview panel.

Features

  • AI-Powered Code Assistance – Select any code in your editor, choose an AI model, and ask Lilypad questions about it

  • Formatted Responses – View AI-generated insights in a structured webview panel

  • Secure API Configuration – Store your Lilypad API token securely

  • Context Menu Integration – Quickly access the extension’s features via the right-click menu

Installation

From the Repository

  1. Clone the repo: git clone git@github.com:PBillingsby/lilypad-vscode-extension.git

  2. Install the extension in VS Code: code --install-extension lilypad-vscode-extension.vsix

  3. Add Anura API key to .env: LILYPAD_API_TOKEN=<ANURA_API_KEY>

  4. If you make changes to the extension and need to recompile it:

    1. Recompile the code: npm run compile

    2. Repackage the extension: vsce package

    3. Reinstall the updated .vsix file: code --install-extension lilypad-vscode-extension.vsix

Usage

  1. Select a block of code in your editor.

  2. Right-click and select "Ask Lilypad about this code", or:

    • Open the Command Palette (Ctrl+Shift+P) and select "Ask Lilypad about this code".

  3. Choose an AI model to process the query.

  4. Enter your question related to the selected code.

  5. Wait for Lilypad AI to process your query.

  6. View the AI’s response in the webview panel that opens.

Resources

Lilypad Llama3 Chatbot

An AI-powered chatbot that generates responses using the Lilypad API and Llama3.

The Lilypad Llama3 Chatbot is a conversational agent designed to deliver engaging, real-time AI-powered interactions. Leveraging the Llama3 8B model through the Lilypad API, this chatbot provides context-aware and dynamic responses, making it useful for applications such as automated support, interactive Q&A and virtual assistance.

Features

  • AI-Powered Conversations – Utilizes Llama3 8B via the Lilypad Network to generate intelligent responses.

  • Lilypad API Connectivity – Simple integration with Lilypad, allowing flexible use of all available models on the Lilypad API.

This chatbot can be extended with:

  • Memory & Context Awareness – Store conversation history for more personalized interactions.

  • External APIs – Integrate with knowledge bases, search engines, or database lookups.

  • Multi-Model AI – Swap Llama3 with other AI models as needed.

How It Works

  1. User Input – The chatbot interface captures the user's message.

  2. API Request – The message is sent to the Lilypad API, which runs the job on the Llama3 model.

  3. Response Generation – Llama3 processes the input, considers context and generates a natural language response.

  4. Response Display – The response is rendered in the chatbot interface.
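The "Memory & Context Awareness" extension amounts to replaying prior turns with each request. A minimal loop, assuming the Anura endpoint and the llama3.1:8b model identifier used elsewhere in these docs:

# Minimal multi-turn chat loop: history is resent on every request
import os

import requests

API_URL = "https://anura-testnet.lilypad.tech/api/v1/chat/completions"
messages = []  # grows with each exchange, giving the model conversational context

while True:
    messages.append({"role": "user", "content": input("You: ")})
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['LILYPAD_API_TOKEN']}"},
        json={"model": "llama3.1:8b", "messages": messages},
        timeout=300,
    )
    reply = response.json()["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    print("Bot:", reply)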

Installation

  1. Clone the repository:

    git clone https://github.com/PBillingsby/lilypad-llama3-chatbot.git
    cd lilypad-llama3-chatbot
  2. Install dependencies: npm install

  3. Add the Lilypad API key to .env: LILYPAD_API_TOKEN=<ANURA_API_KEY>

  4. Start the development server: npm run dev

Usage

Access the chatbot at http://localhost:3000 and enter a prompt to start a conversation.

Resources

Farcaster frame

A guide for running Lilypad jobs in a Farcaster frame

In this guide we will be building a Farcaster frame for local development that runs a Lilysay prompt on the Lilypad Network.

Users of the frame can input a prompt and generate an image. The generated image will appear in the box above the input and will allow a user to view the ASCII art as an image.

Project Setup

Generate project

For this frame, create a NextJS 14 app:

npx create-next-app lilypad-farcaster-example
cd lilypad-farcaster-example
npm install @coinbase/onchainkit uuid sharp@0.32.6 path fs

Open the project in your code editor.

Get testnet tokens

Add .env.local

Add the following into your .env.local file. The private key is used to run the CLI jobs. Make sure that your .env.local file is added to the .gitignore file as your private key should not be exposed or pushed.

WEB3_PRIVATE_KEY=YOUR_PRIVATE_KEY
NEXT_PUBLIC_BASE_URL=http://localhost:3000

Install Lilypad CLI

Run Framegear

Framegear is a simple tool provided by the @coinbase/onchainkit package that allows you to run and test your frames locally without publishing the frame.

We will be using Framegear for this build. However, there are other libraries that can be used for this project instead of Framegear.

In a separate terminal, clone down the onchainkit repo and run Framegear:

git clone https://github.com/coinbase/onchainkit.git
cd onchainkit/framegear
npm i
npm run dev

Configuration for frame

We will need to set up the metadata for our Next.js application that includes Farcaster frame information. It will configure the elements and a URL for frame posting, while also specifying Open Graph metadata for improved social sharing.

In app/page.tsx, add the following before the Home function declaration:

import { getFrameMetadata } from '@coinbase/onchainkit/frame';
import type { Metadata } from 'next';
const frameMetadata = getFrameMetadata({
  buttons: [
    {
      label: "Let's go!"
    }
  ],
  image: `${process.env.NEXT_PUBLIC_BASE_URL}/initial.gif`,
  post_url: `${process.env.NEXT_PUBLIC_BASE_URL}/api/frame?id=1`,
});

export const metadata: Metadata = {
  title: 'Lilysay Farcaster frame',
  description: 'Initial frame',
  openGraph: {
    title: 'Lilysay Farcaster frame',
    description: 'Initial frame',
    images: ["/initial.png"],
  },
  other: {
    ...frameMetadata,
  },
};

UI Components

The UI elements for this frame are all rendered in the app/api/route.ts file, which acts as a request handler for different routes or endpoints within the web application. It defines the logic for handling user input, generating responses, and managing application state. The main functions include processing user prompts, handling status checks, and generating images asynchronously.

Routes

Here’s how the routes are structured for this frame:

  • /api/frame?action=input: This route displays the initial user interface, which includes a text input field for the prompt and a button to submit the form. The user interface also updates dynamically based on the processing status, such as showing a placeholder image or the final generated image.

  • /api/frame?action=submit: This route processes the user input. When a prompt is submitted, the server initiates the image generation process asynchronously. While the image is being generated, the user sees a loading state, and they can check the progress.

  • /api/frame?action=check: This route checks the status of the image generation process. It updates the frame with either a completed image, an error message if the generation fails, or the processing state if the image is still being generated.

  • We also include a fallback just in case an error occurs during the processing of the job.

  • /api/frame?action=save: Though not explicitly included, this could be an additional route for handling the logic of saving the generated image to a location for future access.

generateImage

The generateImage function handles the user input and generates the final image for display by utilizing the functions in app/services/cli.ts. It ensures that the image is generated asynchronously and the result is available for display in the frame, or handled properly in case of any errors during the generation process.

Frame images

Throughout the interaction process, various images are used. These images serve as visual cues during each step of the frame, such as when the user is prompted for input, while the image is being processed and once the final result is ready or if an error occurs.

Create route.ts

Create the api/frame directories in your project. Inside of the app/api/frame directory, create a file named route.ts and add the following code:

import { NextRequest, NextResponse } from 'next/server';
import { v4 as uuidv4 } from 'uuid';
import { runCliCommand } from '@/app/services/cli';
import * as fs from 'fs/promises';
import path from 'path';

interface RequestData {
  prompt: string;
  status: 'processing' | 'completed' | 'error';
}

const requestStore: { [key: string]: RequestData } = {};

async function getResponse(req: NextRequest): Promise<NextResponse> {
  const searchParams = req.nextUrl.searchParams;
  const action = searchParams.get("action") || "input";
  const requestId = searchParams.get("id");
  const data = await req.json();
  const prompt = data.untrustedData?.inputText;

  switch (action) {
    case "input":
      return new NextResponse(`<!DOCTYPE html><html><head>
        <title>Input Prompt</title>
        <meta property="fc:frame" content="vNext" />
        <meta property="og:image" content="${process.env.NEXT_PUBLIC_BASE_URL}/enter-prompt.png" />
        <meta property="fc:frame:image" content="${process.env.NEXT_PUBLIC_BASE_URL}/enter-prompt.png" />
        <meta property="fc:frame:button:1" content="Submit" />
        <meta property="fc:frame:input:text" content="Enter your prompt" />
        <meta property="fc:frame:post_url" content="${process.env.NEXT_PUBLIC_BASE_URL}/api/frame?action=submit" />
      </head></html>`);

    case "submit":
      if (prompt) {
        const id = uuidv4();
        requestStore[id] = { prompt, status: 'processing' };

        // Start the image generation process asynchronously
        generateImage(id, prompt);

        // Immediately return the loading state
        return new NextResponse(`<!DOCTYPE html><html><head>
          <title>Request Submitted</title>
          <meta property="fc:frame" content="vNext" />
          <meta property="og:image" content="${process.env.NEXT_PUBLIC_BASE_URL}/loading.gif" />
          <meta property="fc:frame:image" content="${process.env.NEXT_PUBLIC_BASE_URL}/loading.gif" />
          <meta property="fc:frame:button:1" content="Check Status" />
          <meta property="fc:frame:post_url" content="${process.env.NEXT_PUBLIC_BASE_URL}/api/frame?action=check&id=${id}" />
        </head></html>`);
      }
      break;

    case "check":
      if (requestId && requestId in requestStore) {
        const request = requestStore[requestId];
        switch (request.status) {
          case 'completed':
            const imageUrl = `${process.env.NEXT_PUBLIC_BASE_URL}/result.png`;
            return new NextResponse(`<!DOCTYPE html><html><head>
              <title>Result</title>
              <meta property="fc:frame" content="vNext" />
              <meta property="og:image" content="${imageUrl}" />
              <meta property="fc:frame:image" content="${imageUrl}" />
              <meta property="fc:frame:button:1" content="New Request" />
              <meta property="fc:frame:post_url" content="${process.env.NEXT_PUBLIC_BASE_URL}/api/frame?action=input" />
            </head></html>`);
          case 'error':
            return new NextResponse(`<!DOCTYPE html><html><head>
              <title>Error</title>
              <meta property="fc:frame" content="vNext" />
              <meta property="og:image" content="${process.env.NEXT_PUBLIC_BASE_URL}/error.png" />
              <meta property="fc:frame:image" content="${process.env.NEXT_PUBLIC_BASE_URL}/error.png" />
              <meta property="fc:frame:button:1" content="Try Again" />
              <meta property="fc:frame:post_url" content="${process.env.NEXT_PUBLIC_BASE_URL}/api/frame?action=input" />
            </head></html>`);
          case 'processing':
            return new NextResponse(`<!DOCTYPE html><html><head>
              <title>Processing</title>
              <meta property="fc:frame" content="vNext" />
              <meta property="og:image" content="${process.env.NEXT_PUBLIC_BASE_URL}/loading.gif" />
              <meta property="fc:frame:image" content="${process.env.NEXT_PUBLIC_BASE_URL}/loading.gif" />
              <meta property="fc:frame:button:1" content="Check Again" />
              <meta property="fc:frame:post_url" content="${process.env.NEXT_PUBLIC_BASE_URL}/api/frame?action=check&id=${requestId}" />
            </head></html>`);
        }
      }
      break;
  }

  // Fallback response
  return new NextResponse(`<!DOCTYPE html><html><head>
    <title>Error</title>
    <meta property="fc:frame" content="vNext" />
    <meta property="og:image" content="${process.env.NEXT_PUBLIC_BASE_URL}/error.png" />
    <meta property="fc:frame:image" content="${process.env.NEXT_PUBLIC_BASE_URL}/error.png" />
    <meta property="fc:frame:button:1" content="Start Over" />
    <meta property="fc:frame:post_url" content="${process.env.NEXT_PUBLIC_BASE_URL}/api/frame?action=input" />
  </head></html>`);
}

// Runs the CLI command and saves the result as an image
async function generateImage(id: string, prompt: string) {
  try {
    const imageBuffer: any = await runCliCommand(prompt);
    const imagePath = path.join(process.cwd(), 'public', 'result.png');
    await fs.writeFile(imagePath, imageBuffer);
    requestStore[id] = { prompt, status: 'completed' };
  } catch (error) {
    console.error('Error generating image:', error);
    requestStore[id] = { prompt, status: 'error' };
  }
}

export async function POST(req: NextRequest): Promise<Response> {
  return getResponse(req);
}

export const dynamic = 'force-dynamic';

Create cli.ts

This is where the execution of the Lilysay job happens. It will run the job, wait for it to finish and then create an image from the return value.

Inside of the app directory, create a new directory named services and inside of that create a file named cli.ts. The functions inside this file will allow us to send a prompt to the Lilypad Network using the user prompt and a predefined command that runs asynchronously in the terminal. Once the command is executed, Lilypad processes the input through its Lilysay module and outputs the results in the form of an ASCII image, which is then converted into a displayable image using an SVG-to-PNG transformation.

Here are the 3 functions inside this file:

  • createImageBufferFromAscii: Converts ASCII text into an SVG image and then uses the sharp library to convert the SVG into a PNG image buffer. This allows the display or saving of an image representation of the ASCII text.

  • runCliCommand: Executes a Lilypad CLI command to process the user's input text, captures the command's output, and converts it into an image buffer. It handles the entire process of running the command, capturing the output, and managing errors.

  • extractStdoutFilePath: Parses the CLI command's stdout to extract the file path where the Lilypad CLI has saved the output. It uses a regex pattern to identify the path in the command's output.

The following code snippet demonstrates the process:

import { spawn } from 'child_process';
import { promises as fs } from 'fs';
import * as path from 'path';
import { fileURLToPath } from 'url';
import sharp from 'sharp';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

const MODULE_VERSION = "cowsay:v0.0.4";

// Function to generate an image buffer from ASCII text
async function createImageBufferFromAscii(asciiText, aspectRatio = 1.91) {
  const fontSize = 14;
  const lineHeight = fontSize + 6;
  const padding = 20;

  const lines = asciiText.split('\n');

  const textHeight = lines.length * lineHeight + padding * 2;

  let width, height;
  if (aspectRatio === 1.91) {
    width = Math.max(textHeight * aspectRatio, 800);
    height = textHeight;
  } else if (aspectRatio === 1) {
    width = height = Math.max(textHeight, 800);
  }

  const escapeXML = (unsafe) => {
    return unsafe.replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;")
      .replace(/"/g, "&quot;")
      .replace(/'/g, "&#039;");
  };

  const escapedLines = lines.map(line => escapeXML(line));

  const textWidth = Math.max(...escapedLines.map(line => line.length)) * fontSize * 0.6;
  const xPosition = Math.max((width - textWidth) / 2, 10);

  // Generate SVG markup with the ASCII content
  const svgImage = `
    <svg width="${width}" height="${height}" xmlns="http://www.w3.org/2000/svg">
      <rect width="100%" height="100%" fill="white" />
      <style>
        .text-content { font-family: monospace; font-size: ${fontSize}px; fill: black; white-space: pre; }
      </style>
      <text x="${xPosition}" y="${padding}" class="text-content">
        ${escapedLines.map((line, index) => `<tspan x="${xPosition}" dy="${index === 0 ? '0' : lineHeight}">${line}</tspan>`).join('')}
      </text>
    </svg>
  `;

  // Convert the SVG to a PNG image buffer using sharp
  return sharp(Buffer.from(svgImage)).png().toBuffer();
}

// Function to run the Lilypad CLI command
export async function runCliCommand(inputs) {
  console.log("Lilypad Starting...");

  const web3PrivateKey = process.env.WEB3_PRIVATE_KEY;
  if (!web3PrivateKey) {
    throw new Error('WEB3_PRIVATE_KEY is not set in the environment variables.');
  }

  // Construct the command to run Lilypad with the user input
  const command = `lilypad run ${MODULE_VERSION} -i Message="${inputs}"`;
  console.log("Command to be executed:", command);

  // Execute the command as a shell process
  return new Promise((resolve, reject) => {
    const child = spawn('bash', ['-c', `export WEB3_PRIVATE_KEY=${web3PrivateKey} && ${command}`]);

    let stdoutData = '';
    let stderrData = '';

    // Capture stdout from the CLI command
    child.stdout.on('data', (data) => {
      stdoutData += data.toString();
      console.log(`Stdout: ${data}`);
    });

    child.stderr.on('data', (data) => {
      stderrData += data.toString();
      console.error(`Stderr: ${data}`);
    });

    child.on('close', async (code) => {
      if (code !== 0) {
        reject(new Error(`Process exited with code ${code}`));
        return;
      }

      if (stderrData) {
        reject(new Error(stderrData));
        return;
      }

      console.log("Process completed successfully!");

      try {
        // Extracts the file path, reads the ASCII content and converts it to an image buffer
        const stdoutFilePath = extractStdoutFilePath(stdoutData);
        const asciiContent = await fs.readFile(stdoutFilePath, 'utf-8');
        const imageBuffer = await createImageBufferFromAscii(asciiContent);
        resolve(imageBuffer);
      } catch (error) {
        reject(new Error(`Error processing output: ${error.message}`));
      }
    });

    child.on('error', (error) => {
      reject(new Error(`Error with spawning process: ${error.message}`));
    });
  });
}

// Helper function to extract the stdout file path from the CLI output
function extractStdoutFilePath(stdoutData) {
  const match = stdoutData.match(/cat (\/tmp\/lilypad\/data\/downloaded-files\/\w+\/stdout)/);
  if (!match || !match[1]) {
    throw new Error('Stdout file path not found in CLI output');
  }
  return match[1];
}
  • Sending the Request: The user's input text is passed directly into the Lilypad CLI command using a shell process. The input text is embedded within the command's arguments and executed asynchronously in the terminal.

  • Handling the Response: After the CLI command completes, the output is captured and processed. The response includes the file path to the generated ASCII image, which is then read from the file system and converted into a PNG image for further use.

  • Error Handling: If an error occurs during the execution of the CLI command or file processing, it is logged to the console, and the process is terminated with appropriate error messaging.

Testing

With the Framegear server running, start the local server for your frame project. Make sure it is running on port 3000:

npm run dev

You should now see your frame and be able to interact with it. Enter a prompt to display in the Lilysay ASCII image!

Rendering results

As the job is processed, a “Check Status” button will be displayed. Clicking this will check if the job has been completed. Once a job is completed, the results will be displayed in the frame. If there is an issue along the way, an error message will be displayed and the user will be prompted to try again.

Potential Use Cases

Running Lilysay jobs is just one of the ways you can utilize Lilypad in a frame, but you can run jobs with any available modules on the Lilypad Network. Some of these include:

  • Stable Diffusion image

  • Stable Diffusion video

  • Ollama

Resources

Hardware Requirements

Minimum Hardware Requirements

  • Processor: Quad-core x64 Processor or better

  • RAM: 32GB (see additional details below)

  • Internet: 250Mbps download, 100Mbps upload (minimum)

  • GPU: NVIDIA GPU with a minimum of 8GB VRAM (see additional details below)

  • Storage: SSD with at least 500GB of free space

GPU Requirements

  • Base Requirement: The simplest model on Lilypad requires a GPU with at least 8GB of VRAM. This is the minimum required to participate in computational tasks on the Lilypad network.

Model-Specific VRAM Requirements

The capability of your GPU to manage multiple or more complex Lilypad jobs is enhanced by the amount of VRAM available:

  • Standard Models (SDXL, Ollama): Require at least 8GB of VRAM.

  • Advanced Models: Require 14GB+ of VRAM.

Hardware Compatibility

  • GPUs with 8GB of VRAM are limited to running models like SDXL, which fit within this specification. Larger GPUs with higher VRAM are required for more demanding models like SDV, which needs at least 14GB of VRAM.

For example:

RAM Requirements

Lilypad uses the Resource Provider's GPU to load models, initially requiring the temporary storage of the data in the system's RAM. In a production environment with RP Nodes, it is important to have enough RAM to support the model and the underlying system's operational processes.

Minimum RAM Requirements:

  • Base Requirement: A minimum of 16GB of RAM is required, with at least 8GB dedicated to the model and another 8GB allocated for system operations and Lilypad processes.

Additional Considerations

  • Wallets for each GPU: You need a separate account for each GPU you want to set up on the network. The wallet you use for your account must have both ETH (to run smart contracts on Ethereum) and Lilypad (LP) tokens in order to receive funds for jobs on the network.

  • Larger Models: Jobs involving more substantial models will demand additional RAM. It's important to note that adequate VRAM alone is insufficient; your system must also have enough RAM to load the model into the GPU successfully. Without sufficient system RAM, the model cannot be loaded into the GPU, regardless of available VRAM.

Apply to our closed Beta Resource Provider program!

Lilypad Workflows

A set of n8n workflows using Lilypad AI Models

At Lilypad, we’ve integrated n8n with the Lilypad API to automate dynamic workflows, combining human input, AI generation and multi-platform actions.

Using Lilypad’s OpenAI-compatible endpoints inside n8n lets you:

  • 🧠 Source and enrich content: Pull from databases like Notion, Airtable, or Google Sheets and enhance it with AI.

  • 🎨 Generate custom outputs: Create AI-written text, summaries, or even modify images dynamically.

  • 🔄 Automate publishing: Push results to platforms like Twitter, Discord, Slack, or your own apps.

  • 📈 Track and update: Monitor workflow status and feed results back into your systems.

Everything runs automatically inside n8n, with Lilypad API endpoints providing powerful, cost-free AI capabilities at the heart of it.

Imagine building workflows that review customer feedback with AI, generate personalized emails, summarize research papers, trigger alerts from on-chain events, or even create full content campaigns, all without writing a single script. With n8n and Lilypad, AI-driven automation is at your fingertips.

Coming soon:

  • Marketing Workflows

  • Life workflows

Update node

Instructions to update the Docker Lilypad Resource Provider (RP)

To update a Lilypad RP, remove any previous versions of Lilypad on the instance and then follow the instructions to set up a Docker RP.

Ensure all processes from Linux install are stopped and removed

This only applies if you had a Linux version installed

1. Stop all systemd processes:

sudo systemctl stop bacalhau
sudo systemctl stop lilypad-resource-provider

2. Disable the systemd services:

sudo systemctl disable bacalhau
sudo systemctl disable lilypad-resource-provider

3. Delete the service files from the systemd directory.

Note: Be extremely careful when using sudo to remove files.

4. Reload the systemd daemon to apply the changes.

Remove old Docker containers and images

1. If a Docker RP is running, stop it (if the node is not running, disregard this first step)

2. You can check the status of the containers with:

If they are running, stop them with:

Remove the containers:

3. View all Docker images

4. Take note of the IMAGE ID for each lilypad image (resource-provider, bacalhau, and watchtower)

Delete old Docker images that are duplicates for Lilypad (Bacalhau, Lilypad)

docker image prune can also be used to remove specific images that aren't being used.

Install and Run a new Version of Lilypad

If an RP was running on the Lilypad Testnet in 2024, it is recommended to create a new wallet when rejoining the network for the RP Beta program. If an RP wants to use the same wallet, try running the RP and let our team know if any issues with running jobs come up.

1. Export WEB3_PRIVATE_KEY as an environment variable

2. Use curl to download the docker-compose.yml file from the Lilypad GitHub repository.

3. Start your Lilypad Node

OR start with your own RPC URL

Monitor the Resource Provider

Use the following command to check the status of the resource provider and bacalhau.

Use the following command to view the containers running after starting Docker Compose.

A healthy, updated node should have all containers started, a preflight check, and be adding a resource offer.

Validation of GPU Compute

Validation of GPU Computation in Decentralized, Trustless Networks

Overview

Verifying computational processes in decentralized networks poses a fundamental challenge, particularly for Graphics Processing Unit (GPU) computations. The Lilypad Research team conducted an investigation revealing significant limitations in existing approaches: exact recomputation fails due to computational non-determinism across GPU nodes, Trusted Execution Environments (TEEs) require specialized hardware, and Fully Homomorphic Encryption (FHE) faces prohibitive computational costs.

To address these challenges, this report explores three verification methodologies adapted from adjacent technical domains: model fingerprinting techniques, semantic similarity analysis, and GPU profiling. Through systematic exploration of these approaches, we develop novel probabilistic verification frameworks, including a binary reference model with trusted node verification and a ternary consensus framework that eliminates trust requirements.

These methodologies establish a foundation for ensuring computational integrity across untrusted networks while addressing the inherent challenges of non-deterministic execution in GPU-accelerated workloads.
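As a rough intuition for the ternary consensus idea, replicate a job across three nodes and accept an answer only when a majority agrees; the toy sketch below ignores the semantic-similarity scoring the report actually explores for non-deterministic outputs:

# Toy majority vote over three replicated runs of the same job
from collections import Counter
from typing import List, Optional

def ternary_consensus(outputs: List[str]) -> Optional[str]:
    # Accept the output at least two of three nodes agree on; otherwise unresolved
    assert len(outputs) == 3
    winner, count = Counter(outputs).most_common(1)[0]
    return winner if count >= 2 else None

print(ternary_consensus(["0xabc", "0xabc", "0xdef"]))  # 0xabc
print(ternary_consensus(["0xabc", "0xdef", "0x123"]))  # None -> escalate to further checks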

Troubleshooting

Common FAQs when running a Lilypad node

Please view these resources below for FAQs!

Don't see your issue below?

  • Open a discussion first; without one, our team will not be able to support the problem.

  • Description (including Lilypad version running on your node)

  • Hardware Info (including Linux/Windows version)

  • Related blockchain/ETH addresses of transaction hashes

  • Output Logs - sudo systemctl status lilypad-resource-provider

  • Related links/urls

  • Screenshots

IMPORTANT: When sharing screenshots of your logs or node information, make sure to remove or block out any references to your node's private key. Keeping your private key safe and away from the public eye is crucial!

FAQ

Common issues to check first!

  1. Updating and restarting the Lilypad services regularly (daily) is encouraged throughout IncentiveNet.

Run a node (Resource Provider - RP)

How do I keep track of Lilypad version releases and other important announcements?

How can I check the status of my Lilypad RP once it's running?

To check if the RP is running use the following command:

This will give a live output from the Lilypad node. The logs will show the node running and accepting jobs on the network.

Run the following command to get info from Bacalhau

Can I become a Lilypad RP with just a CPU and no GPU?

Resource Providers can run on the Lilypad Network without a GPU; however, only hardware with a GPU is currently rewarded with Lilybit_ rewards.

Checking Docker Runtime

To verify your Docker runtime configuration: sudo docker info | grep Runtimes

You should see the NVIDIA runtime listed. If you only see: Runtimes: io.containerd.runc.v2 runc you will need to configure the NVIDIA runtime.

Configuring NVIDIA Runtime

If the NVIDIA runtime is not showing up or you're experiencing issues, try the following:

1. Configure the NVIDIA Container Toolkit runtime: sudo nvidia-ctk runtime configure --runtime=docker --set-as-default

2. Restart the Docker service: sudo systemctl restart docker

Overview of Docker setup

For a comprehensive overview of your Docker setup, use: docker info. This command provides detailed information about your Docker daemon configuration.

My Docker RP is turning on, but showing errors stating it is not providing a Resource Offer.

Typically this occurs when an old version of Lilypad is still running on the instance.

  1. Ensure the Bacalhau and Lilypad systemd services are stopped and removed.

  2. Disable the systemd services so they do not start on boot:

  3. Delete the service files from the systemd directory.

Note: Be extremely careful when using sudo to remove files.

  4. Reload the systemd daemon to apply the changes.

Can I run a Lilypad RP on Windows?

Lilypad RPs currently only support Linux installs. Running a RP on Windows is currently experimental.

How do I run multiple GPUs on one server?

Can I run multiple Lilypad RPs on one GPU?

No, this would be considered detrimental to the Lilypad network and cannot be rewarded at this time.

In order to run jobs successfully, Lilypad RPs must ensure all resources on the server are available. If a single GPU is running many RPs and splitting the server resources between them, this causes each RP to not have enough resources to run a Lilypad job.

How do I setup a personal RPC for Arbitrum Sepolia?

Here's a quick guide on setting up your own RPC for a Lilypad node.

CompatNotSupportedOnDevice Error

Unknown Error Code: 222

This indicates that the CUDA version on the RP is incorrect. Install the CUDA version suitable for the GPU type and compile Lilypad manually.

Lilypad IncentiveNet details

Are there required updates needed to maintain my node software with Lilypad on IncentiveNet?

Resource providers are expected to have the latest Lilypad version installed on their systems. The installation instructions can be found here:

Can Lilybit rewards be sent to another wallet? Will this function be added in the future?

Currently, it's not possible. However, it's a very good feature request and the team is evaluating!

I’m getting an error of “invalid hex character 'r' in private key”

This is more than likely due to you trying to export your mnemonic seed phrase instead of the private key. A private key typically appears like this: 4c0883a69102937d6231471b5dbb6204fe512961708279df95b4a2200eb6b5e7 and consists of 64 hexadecimal characters.

How do I setup my Metamask wallet?

How do I get testnet LP and ETH (Arbitrum Sepolia ETH)?

RP-Beta Program

Become an early RP on the Lilypad Network and power the future of Open Access AI

Read more about contributing hardware and earning rewards in Lilypads ongoing RP-Beta Program.

Images generated on Waterlily.ai

To find out more, please visit the .

Automate customer relationship management, internal company management, etc. with the ! Scale inference using the Lilypad network deploying any of the models supported by the .

Github repo .

A guide to launch the HubSpot-LinkedIn agent locally and run inference on the Lilypad Network with the .

Steps: Log in > Settings > Integrations > Private Apps > Create private app > Name it > Scopes: crm.objects.contacts (read/write contacts) > Create app > Click app name > Auth tab > View access token > Copy token

Steps: Create an account > Subscribe to > Go to > Select "default-application" > Copy "Application Key"

(Sign up and generate an API key)

A guide to launch the RAG support agent locally and run inference on the Lilypad Network with the .

Sign up at and generate an API key.

A guide to launch the Research agent locally and run inference on the Lilypad Network with the .

OPENAI_API_KEY: Your OpenAI Compatible API key (for example: )

ANURA_API_KEY: Your Anura API key (get it here: )

Download and Install AnythingLLM from

In the BaseURL input field, paste the URL:

Log in to your Anura account at

The provides ML researchers, businesses, and more a simple interface for leveraging the Lilypad network.

Add a new model to run on the network as a

In order to run an AI model on the Lilypad network, a docker image must be created for the program using the .

The ML workbench product is currently in beta testing and is not yet provided as a hosted solution. Reach out in the Lilypad for more info!

Download and Install Rivet from

In the Open AI endpoint input field, paste the URL:

Log in to your Anura account at

Need some inspiration? Visit the page!

Get .

Get .

This is an example for running a local version as inspiration for developers. The Lilypad CLI will need to be wrapped and the project hosted as an API to run in production and be used in a .

Next, install the required dependencies used in this project:

We will need to fund a wallet with Lilypad and Arbitrum Sepolia testnet tokens. Follow the first 2 sections labelled "Setting up MetaMask" and "Funding your wallet" from our docs.

Your WEB3_PRIVATE_KEY can be retrieved from the MetaMask account details menu. For more info, check out the on how to get your private key. Please do not push your private key to GitHub.

Run through this guide to . Select the CLI User script.

Navigate to and keep that window open for when we start to write the frame.

To obtain the images used in this guide, save them from .

Navigate to the Framegear host and you will see an input labeled "Enter your frame URL". Add http://localhost:3000 to that and click "Fetch".

This page overviews the hardware requirements to operate a Lilypad Network node. It's important to note that these requirements continuously evolve as the network grows. If you have questions or suggestions, please join our or open a pull request on the .

Each model operating on Lilypad has specific VRAM (Video Random Access Memory) requirements directly related to the model's complexity and computational demands. For running a Lilypad Resource Provider (RP) with multiple GPUs, a guide using Proxmox can be found here.

A node with a GPU containing 8GB of VRAM can execute Lilypad module jobs, which require a minimum of 8GB of VRAM.

Larger capacity GPUs are needed for heavier compute models like SDXL and SDV, which require at least 14GB of VRAM.

If the Lilypad install for Linux was previously used on the RP, first remove the install. Then follow the install instructions.

If the RP previously used a Docker version of Lilypad, remove the previous Docker containers and images, then follow the install instructions.

If a wallet has been used to run CLI jobs on Lilypad, this wallet cannot be used for an RP. See the Docs for Setting up a Metamask Wallet.

The Lilypad Research team published "Validation of GPU Computation in Decentralized, Trustless Networks", diving into the complexities of validating successful compute jobs on GPUs. The investigation then explores verification methods and solutions that Lilypad can implement across the network.

Lilybit Leaderboard (under maintenance)

For complex issues, bug reports, or feature requests, open a discussion in the Lilypad-Tech Github organization discussion board.

Navigate to the discussion board, select "New Discussion", choose "rp-issues", and fill out the template.

For quick questions or minor issues, use the Lilypad Discord #i-need-help channel and provide the following info.

Ensure the RP (node) is running the latest Lilypad version for your preferred environment

Does the RP have Lilypad Tokens (LP) and Arbitrum ETH?

The Discord updates-rp channel is the primary location for Resource Provider announcements. Announcements in this channel are also posted on the Lilypad updates page.

Lilypad supports Resource Providers using Docker; find instructions here.

If this doesn't solve the problem, raise a ticket with our team.

A recommendation guide using Proxmox can be found here. More on this coming soon!

The CUDA version of the RP does not match the GPU driver. Please refer to this Nvidia GPU Driver guide to repair the installation.

When do Lilybit_ rewards earned or rewards slashed appear in the Leaderboard? The leaderboard is currently under maintenance; find Lilybits earned by RPs here.

To stay up to date with the latest releases, check the updates-rp channel in the Lilypad Discord or visit the Lilypad GitHub. Along the top menu, click the "Watch" dropdown and you will see a section named "Custom". Selecting "Releases" will allow you to get notified of any new releases!

Check out the MetaMask official guide to retrieve your private key.

View the docs here.

Fund a wallet with LP and ETH

No ETH or LP in your wallet? (import custom network and import the tokens)

Join our Discord for more help!

Become a Resource Provider: Apply here!


Altruistic AI Agents - Lilypad Oncologist

Problem Statement

The Distributed Compute Problem

Problem Setup

The setup is a trustless, permissionless, two-sided marketplace for compute, where clients can purchase compute services from compute nodes. Trustless means that by default, the protocol does not assume that any particular node behaves in a trustworthy manner and that each node should be considered as rationally self-interested (note that this excludes intentionally malicious behavior). Permissionless means that any node can join or leave the network at will.

Matches for compute jobs are made off-chain, with the resulting deals and results recorded on-chain. Both clients and compute nodes need to agree to matches before they become deals, and make deposits to the protocol to enable rewards and punishments. Results are verified using verification-via-replication, and clients can check the results of any job after it has been completed, but before it needs to pay. It does so by calling upon a mediation protocol. The mediation protocol is the ultimate source of truth, and the outcome of the mediation protocol determines how payouts to nodes are made.

The issue of preventing fake results in the presence of a Trusted Third Party (TTP) as a mediator is effectively a solved problem (for example, see the section on prior verification-via-replication protocols, though there is much more literature on this topic). Given the assumption that the mediation protocol is the source of truth, we can treat the mediation protocol as a TTP. Since the fake results problem is basically already solved in this case, the cheating problem reduces down to solving the collusion problem within the mediation protocol. (Note, however, that we will address both cheating and collusion; the framework described here exists to conceptually simplify the problem.)

This is a typical scenario of a Byzantine environment, and we can use well-established approaches to Byzantine Fault Tolerance when invoking mediation. However, most BFT algorithms and cryptographic methods rely on assumptions regarding some fraction of honest nodes. The problem is that rational, utility-maximizing agents may still collude, even in a mediation consortium, in order to converge on incorrect results. On the one hand, we could assume that some fraction of nodes are honest, as is often done. On the other hand, can we do better?

Problem Statement

Task

The task is to find mechanisms that incentivize all nodes to behave honestly.

Adversary model

All agents in the protocol are utility-maximizing. This will be elucidated in a subsequent section. Most of the literature has focused on the case where compute nodes are dishonest. However, the client can also behave dishonestly in some manner that maximizes their utility. For example, if the client has some level of control over the choice of mediator, and dishonest nodes have their collateral slashed, the client could collude with the mediator in order to deem a correct result incorrect and get a cut of the honest compute node's collateral.

What is a good solution?

"Good" solutions can take a number of forms:

  1. Nodes never have an incentive to be dishonest.

  2. Nodes have an incentive to be dishonest that goes to zero as a function of the parameters of the protocol.

  3. (1) or (2), but under some simplifying assumptions, such as there being some fraction of honest nodes within every mediation protocol.

A good solution would achieve any of these goals. Alternatively, another valuable outcome of this research would be to discover under what assumptions these goals can or cannot be met, or if the goals are even possible to achieve at all.

Mechanisms for achieving these goals

There are a number of ways that these goals may be achieved. The approach will be to construct a digital twin of the protocol and test a number of different mechanisms in simulation. These mechanisms include variations on mediation protocols, collateralization, staking, taxes and jackpots, and others; see the Mechanisms to Explore section for details.

Mechanisms To Explore

Mechanisms to Explore

The following is a list of mechanisms that we are currently considering exploring in order to mitigate attacks. Note that some of these mechanisms clearly would not be able to deter cheating and collusion alone. However, in combination with other mechanisms, they may achieve the goals. In this sense, they should be thought of as modules, optionally composable with each other.

Mediation

Clients call upon a mediation protocol in order to verify the results of a node. There are several variations of the structure of the mediation protocol; the following parameters can be varied:

  1. The number of nodes in the mediation protocol.

  2. If more than two nodes in the mediation consortium, the consensus threshold that determines which result is the one considered to be correct.

  3. How the nodes are chosen.

    • For example, we may want as a baseline the same constraint as in Modicum - that only mediators that both the client and the compute node mutually trust can be used for mediation.

      • Even with this baseline, there is still a question of how to choose the node(s) - it can be random, be determined by an auction, or any other method.

  4. Recursive mediation - that is, if there is no consensus in the consortium, do another mediation.

    • There is a large space of possibilities regarding how to execute this.

    • There needs to be a limit to the number of nodes this recursive process can use. For example, the set of potential nodes can be the same as the set of mutually trusted mediators, as described above.

  5. Other methods discussed here, such as taxes and jackpots, as well as staking and prediction markets, can be incorporated into the mediation protocol.

Collateralization

There are a number of different types of collateral that need to be included in the protocol.

The client needs to deposit collateral so that the compute node knows that it can be paid. For computations where the cost is known up front, this is simple. However, it becomes complicated for arbitrary compute; the client might not have put up enough collateral initially, so there may have to be a back-and-forth between client and compute node where the latter halts the computation until the former deposits more collateral or simply cancels the computation and pays for the partially computed result. If the client does not pay, then the compute node can invoke a mediation process.

The compute node needs to deposit several types of collateral.

  1. Collateral in case they timeout.

    • This is put up as part of the deal agreement - that is, when the deal is posted on-chain.

  2. Collateral in case they cheat.

    • The way that the compute node will convey the amount of collateral they will deposit to indicate that they will not cheat is via a collateral multiplier. The compute node commits ahead of time, as part of the deal agreement, to a multiple of whatever they will charge the client. The actual collateral is put up after the result is computed and sent to the client. This immensely simplifies the task of determining how much collateral to deposit for arbitrary computations.

  3. Collateral in case they don't do the computation at the rate they said they would. This is closely related to timeout collateral.

    • Ideally, this is a way of enforcing deadlines on jobs.

    • It is not necessary to make this collateral slashing binary - for example, a late result can still be rewarded.

    • It is enforceable if, for example, the compute node says that they will do X WASM instructions per unit time. However, technical limitations may make this unrealistic, and it needs to be tested.

One possible way to overcome collusion is to require collateral between particular pairs of nodes so high that even discount factors very favorable to collusion would not incentivize it, even when accounting for repeated games.

While this is not a part of anti-cheating mechanisms, collateral pooling could lower capital requirements for collateralization. High capital requirements are a second-order concern, but will become a higher priority once robust anti-cheating mechanisms are implemented.

Taxes and Jackpots (inspiration from Truebit)

Taking inspiration from the taxes and jackpots scheme used in Truebit, deals can be taxed, with those taxes going to a jackpot that is then used to reward nodes via some distribution protocol determined by the mediation process. For this, we want to be able to take any fraction of the jackpot(s) and distribute it arbitrarily to arbitrary nodes (perhaps even those not involved in the mediation process).

This is a particularly interesting approach because the taxation + jackpots mechanism inherently creates a global game that impacts local outcomes. While it may open up potential collusion attacks, the tool alone is very useful, especially in conjunction with some of the other methods mentioned here. Modeling it in simulation would also provide the opportunity to test some of the hypotheses in the Truebit paper.

This method may also be useful in creating a robust platform where some clients do not care to check their results. That is, if some clients do not check results in general, it may be difficult to assert that the network is secure. Taxes and jackpots may be a way to address this.

Prediction/Replication Markets

Prediction markets have been well-studied in a variety of different fields. More recently, a type of prediction market called a replication market has been explored in the context of replicability in science. With this inspiration, allowing nodes to make bets regarding the replicability of other nodes' computations may be useful in mitigating cheating. For example, a low market prediction of a node's replicability can act as a signal of that node's reputation and encourage it to behave honestly.

It is possible to overlay this mechanism on top of taxes, allowing nodes to choose where their taxes go in the prediction market.

Additionally, since Automated Market Makers are closely related to prediction markets, we can leverage many DeFi tools in this context.

Staking behind nodes

Allow users to stake behind nodes. This is similar to prediction markets, but with slightly different economics. Like with prediction markets, it may be possible to tax users and then allow them to choose which nodes they stake behind. Overall, this approach is similar to delegated Proof-of-Stake.

Announcing successful cheating

Can a node announcing that it successfully cheated (and thereby receiving a reward) benefit the robustness of the protocol? How much would this node have to be rewarded?

Frequency of checks

How often should a client check results? Clearly it is related to the amount of collateral that the other node deposits, how much they value getting true/false results, reputation, and so on. This is a parameter that the client would need to learn to maximize its own utility.

Reputation

The ledger can maintain a record, for each compute node, of the number of jobs the compute node has completed, the number of times its results were checked, and the number of times those results were replicated successfully. All other nodes (client and compute) could locally run some arbitrary function over these numbers to determine how reputable they find that node.

Storing Inputs/Outputs

Results can only be replicated for as long as the inputs are stored somewhere. The client, compute node, or some other entity can pay for storing the inputs/outputs of jobs. The longer they are stored, the more time there is to check the results, which affects things like collateralization, the frequency of checks, etc.

This is related to, but not totally overlapping with, the amount of time that a node might have to wait before getting paid, which is the same time interval that a client has to check a result. However, checking the result after the node gets paid and receives back its collateral may still be possible, with other penalty schemes (or reward schemes, for example, coming from jackpots).

Anti-Collusion via Obfuscation

Colluding requires the following knowledge in order to enforce the parameters of the collusion.

  1. The public keys of the nodes participating in collusion.

  2. The results that were posted by those public keys.

  3. The payouts to the public keys.

In order to sign a collusion contract to begin with, the public keys must be known. However, in a mediation protocol with enough nodes, it may be possible to obscure (2) and (3) by

  1. Having nodes submit results to the mediation protocol in an obscured/anonymous way

  2. Having nodes be paid out according to the results of the mediation protocol in an obscured/anonymous way

If these two criteria can be met, then a mediation protocol based on them might be capable of imitating the game-theoretic outcomes seen in the Smart Contract Counter-Collusion paper.

There have been many decades of cryptography and security research focusing on similar problems to these. It may be the case that it is already possible to do this; otherwise, there is a large amount of ongoing research on the topics of privacy-preserving transactions, and much prior work in the flavor of secret-sharing/MPC/Tor/Monero/ZKPs that could enable this.


V1 Research Documents

These documents provide a background for ongoing Lilypad research.

They are primarily focused on the game theory and cryptoeconomics of the protocol, and include an introduction to verifiable computing, an overview of the specific problems we are tackling, a brief overview of prior work, a description of our approach, and mechanisms that we plan to test in simulation.

Learn more about:

Prior Protocols

Investigation of some previous verification-by-replication computing protocols

Prior Verification-via-Replication Protocols

Before explaining our approach, we give a short overview of three prior approaches to verification-via-replication distributed computing protocols: Truebit, Modicum, and Smart Contract Counter-Collusion. We will outline potential improvements, and how our work builds on top of, and differs from, prior work.

Truebit

Truebit is a protocol for outsourcing computation from blockchains, built using smart contracts on top of Ethereum. The original potential use cases were trustless mining pools, trustless bridges, scaling transaction throughput, and scalable "on-chain" storage. Since its goal is to scale on-chain computation, it aims for global consensus: "Since there exist no trusted parties on Ethereum's network, by symmetry we must allow any party to be hired to solve any computational task, and similarly anyone should be able to challenge a Solver's outcome of a computational task. The latter requirement ensures that TrueBit operates by unanimous consensus." (emphasis added)

The components of Truebit are Miners, Task Givers, Solvers, Verifiers, and Judges. In order to incentivize checking results, random errors are forced into computations, with jackpots awarded to those who find them. These jackpots are funded by taxes on computations.

The authors claim that Sybil attacks are mitigated by pairwise Sybil-resistance between the parties of Task Givers, Solvers, and Verifiers, with Judges and Referees (whose roles are played by Miners) assumed to function as intended. Likewise, they claim that attacks to get bogus solutions on-chain by scaring off Verifiers are mitigated by the economics of deposits, taxes, and jackpot rewards. Additionally, a cartel of Solvers who absorb losses until they find a task with a forced error, at which time they receive the jackpot, will lose money in the long term, since the expected Solver deposit per task is higher than the expected jackpot per task. Addressing another attack, the authors claim that a flood of Challengers trying to capture as much as possible of the jackpot resulting from a forced error is mitigated by having the total jackpot reward decrease as the number of Challengers increases.

Challenges

  • Does not scale to large/complicated/arbitrary computations

  • No formal theorems or proofs, no simulations, many plausible but unsubstantiated claims, especially regarding collusion

  • Everything is done on-chain

  • This model does not work well with two-sided marketplaces, because

    • It aims for global consensus, where any node is allowed to do the computation, whereas in two-sided marketplaces, clients need to be able to choose which nodes they are paying to do the computation

    • Clients may have time restrictions on their computations, and cannot wait for cases where their computations were injected with forced errors

  • No accounting for repeated games

Takeaways

  • Taxes and jackpots are a valuable tool to create a global game that affects local outcomes

  • Provides a list of potential client attacks

Modicum

Job Creators are only allowed to submit deterministic jobs to the protocol. The Mediator exists to check for non-deterministic tasks submitted by the Job Creator (which can be used by the Job Creator as an attack vector to get free results) and for fake results returned by the Resource Provider. The method for determining whether a job is deterministic is for the Mediator to run it n times and check whether it receives different answers.

Modicum combines two separate ideas: checking the result from a Resource Provider to see if it is correct, and checking a result from a job submitted by a Job Creator to see if the job was deterministic or not. This implies that there is no capability for a client to simply check whether a result is correct or not, without the possibility of its collateral being slashed.

An alternative to trusting the Mediator (to act as a TTP) by having it run a job n times is having a consortium of n Mediators each run the task a single time. However, this adds the complication of achieving consensus in that consortium.

The issue of the Resource Provider and Mediator colluding to return a fake result is not addressed by this protocol. The authors allow for a Job Creator or Resource Provider to remove a Mediator from their list of trusted Mediators if they no longer trust it. However, that still leaves room to cheat at least once, and ideally this should be disincentivized from the outset.

There is also the issue of collateralization. The Modicum protocol, as well as a number of other protocols, assume (approximate) guesses as to the cost of jobs in advance, so that nodes can deposit the correct amount of collateral. However, doing so is fraught with complications; we provide an alternative approach in the Mechanisms to Explore section.

Challenges

  • The Mediator is basically a trusted third party

  • Client cannot simply check a result without being slashed, which is a consequence of the client attack model

  • No accounting for repeated games

Takeaways

  • Potential client attack, though one that can be mitigated by technical means

  • The client has the benefit of getting correct results, which needs to be accounted for in simulation

Smart Contract Counter-Collusion

The authors determine that cryptographic methods for verifiable computation are too expensive for real-world scenarios. For that reason, they rely on verification-via-replication. The scenario is one in which a client simultaneously outsources computation to two clouds, where those two clouds deposit collateral into smart contracts in such a way as to create a game between them that incentivizes doing the computation honestly. The central challenge that the authors tackle is the issue of collusion - that is, what if the two clouds collude on an incorrect answer?

In contrast to Modicum, the client is assumed to be honest, and in contrast to Truebit, a trusted third party (TTP) is used to handle disputes.

Three Contracts

The authors use a series of three contracts to counter collusion.

The first game is an induced Prisoner's Dilemma: to discourage the two clouds from colluding, one cloud can be awarded the other cloud's deposit (minus payment to the TTP for resolving the dispute) if the former returned the correct result and the latter did not. Thus, each cloud is better off giving the other cloud fake results while computing the correct result itself. This contract is called the Prisoner's contract. Defection in the classic prisoner's dilemma is analogous here to computing the honest result while giving the other node a fake result if offered to collude.

However, the clouds can agree to collude via a smart contract as well. They could do this by both depositing another amount into a contract, where the leader of the collusion must add a bribe (less than its cost of computing) to the contract as well (disregarding the bribe, both clouds deposit the same amount of collateral). The deposit is such that the clouds have more of an incentive to follow the collusion strategy than to deviate from it. This contract is called the Colluder's contract.

In order to counteract the Colluder's contract, a Traitor's contract is used to avoid this scenario by incentivizing the clouds to report the Colluder's contract. The basic concept is that the traitor cloud indeed reports the agreed-upon collusion result to the client in order to avoid the punishment in the Colluder's contract, but also honestly computes and returns the result to the client in order to avoid the punishment of the Prisoner's contract. The client must also put down a deposit in the Traitor's contract. Only the first cloud to report the Colluder's contract gets rewarded. The signing and reporting of the contracts must happen in a particular order in order for this process to work.

The authors prove that these games individually and together lead to a sequential equilibrium (which is stronger than a Nash equilibrium), meaning that it is optimal not only in terms of the whole game, but at every information set (basically the set of options each player has at every turn).

Challenges

  • A Colluder's contract can be signed on different chains (or even off-chain). In order to mitigate this, the Traitor's contracts would have to become cross-chain (which is a major technical challenge), not to mention the possibility of cryptographically secure contracts (e.g. MPC contracts) where one of the parties alone would not be able to prove the existence of this contract

  • Relies on trusted third party to resolve disputes

  • Every task is replicated (that is, two copies of each job are always computed)

  • Assumes client is honest

  • Assumes amount of collateral known beforehand

  • No accounting for repeated games

    • It is well known that in the repeated Prisoner's dilemma, depending on the assumptions, cooperation becomes the equilibrium

Takeaways

  • The contracts and the payoffs that they induce offer a valuable toolbox to think about the problem of collusion

  • The contracts offer, in a restricted setting, an ironclad way (assuming the proofs are correct) of preventing collusion

Meet the Team

Cross-functional, seasoned team

Lilypad is assembling a team of experts across a variety of domains to tackle the complex R&D required for a decentralised compute network (including verification, privacy, and token economics) and to build product from it.

Core Team

Advisors

Other Partners


Our Approach

Utility Maximization

A core assumption in much of game theory is that agents are utility-maximizing. That is, agents are completely rational actors, and are able to execute exactly the behavior that maximizes their return, however "return" is defined in the given context.

However, we know that in real life, humans are not completely rational, and are not capable of perfect execution of actions. In that light, how can we look at the game-theoretic approaches in the last section?

Either we can try to account for the irrational behavior of humans, or we can try to emulate the behavior of utility-maximizing agents. While there is a large amount of game-theoretic literature dedicated to the former, we opt for the latter for reasons that will become clear below.

Autonomous Agents

Note that the actual environment of a two-sided marketplace for distributed computation is extremely complicated (e.g. the heterogeneity of hardware, types of computation, latencies and bandwidths, etc.). Any theoretical/analytic approach to the problem that is actually correct should also work in simulation, so we opt for a simulation-driven approach.

The way that we can emulate perfectly rational behavior is by training autonomous agents to act on behalf of their human owners in a utility-maximizing manner. At that point, the challenge is to design the global game to drive the probability of cheating to zero - ideally, to make it be equal to zero - which is no small feat in a trustless and permissionless environment. However, the simplifying assumption that we are in fact operating with utility-maximizing agents conceptually simplifies the problem immensely.

At a minimum, the action space for an autonomous agent representing a compute node should be to cheat or not to cheat, and to collude or not collude within a mediation protocol. The observable environment for nodes on the network should include all data stored on the blockchain - that is, the sequence of deals, results, and mediations - as well as the information in the orderbook. While the orderbook will be off-chain, we model in the digital twin the orderbook acting as a centralized, single source of truth that all agents have access to. In the long-term, nodes will have (potentially non-identical) local information regarding other job and resource offers on the network.

Further work may explore agents forming beliefs about the hardware and strategies of other agents, but that is beyond the scope of the current work.

First Principles Approach

We conclude with two "axioms" upon which we will base our simulations:

  1. Every agent attempts to maximize its utility, including cheating and/or colluding if necessary.

  2. All other components of the game should lead to a "good" solution, as defined in the problem statement.

Verifiable Computing

Verifiable Computing Basics

Verifiable computing is dedicated to ensuring that outsourced computations (that is, computations that are not done locally) are done correctly. In some scenarios, it cannot be assumed that the node to which a job is being outsourced will compute the result honestly, or that it is not faulty. Moreover, verifying the result should have less overhead than computing the result in the first place.

While blockchains provide safety and liveness, the massive replication of computation becomes too costly when that level of security is not needed. There is a difference between global consensus, which is necessary in blockchain environments, and local consensus, which is more suited for two-sided marketplaces. In global consensus, all nodes need to be convinced that every computation was done correctly. In contrast, in local consensus, only a small number of nodes - potentially only one node, the client - needs to be convinced that a computation was done correctly.

Ostensibly, for a two-sided marketplace, this implies that only the client really needs to be convinced that a computation was done correctly. However, these computations are not done in isolation. The interrelation between a client choosing one node repeatedly versus many different nodes, the mathematics behind those decisions, and the need for a protocol that any new client can trust to disincentivize cheating together imply the creation of a global game that, while not requiring global consensus in the traditional sense, emulates it in some manner.

Approaches to Verifiable Computing

Cryptographic Approaches

One way to ensure that computations were done correctly is by using cryptographic methods. There are a number of cryptographic approaches for verifiable computation, including

Interactive Proof (IP)

  • In interactive proofs, verification of a statement is modeled as an interaction between a prover and a verifier. The goal of the prover is to convince the verifier that the statement is true, even when the verifier does not have the computation resources to do the computation itself.

  • The protocol must satisfy completeness (if the statement is true, an honest verifier will be convinced) and soundness (if the statement is false, the prover cannot convince the verifier except with some negligible probability).

Zero-Knowledge Proof (ZKP)

  • Zero-knowledge proofs are a type of interactive proof where the verifier learns nothing about private inputs of the computation, other than that the outputs were computed correctly from all the inputs (some of which may be public/known to the verifier).

  • A ZKP can be made non-interactive, in which case it is called a Non-Interactive Zero-Knowledge Proof (NIZK). Two common variants of NIZKs are zk-SNARKs (zero-knowledge Succinct Non-interactive Argument of Knowledge) and zk-STARKs (zero-knowledge Scalable Transparent Argument of Knowledge).

  • Like IPs, ZKPs must also satisfy the requirements of completeness and soundness.

Multi-Party Computation (MPC)

  • Multi-party computation allows multiple parties to jointly compute a function over their individual inputs without any party revealing its input to other parties. The main objectives of MPC are privacy (parties should learn nothing about each other's inputs), security (some level of anti-collusion protection against malicious attempts to learn information), functionality (the ability to compute functions over data), and robustness (the protocol should work correctly even in the presence of malicious behavior or faults).

Trusted Execution Environment (TEE)

  • Trusted Execution Environments are secure and isolated enclaves, where code and data inside of the enclave are insulated from the rest of the system, including the operating system, applications, and other enclaves. The goal is to maintain both the confidentiality and the integrity of the code and data.

Verification-via-Replication

Verification-via-replication - often described using the adjective "optimistic" in the blockchain world - relies on recomputing the computation to check whether the end result is the same. The benefits of this method are that it is the easiest to understand, and in some sense, the easiest to implement.

In contrast to the other approaches, verification-via-replication often requires reliance on game-theoretic mechanisms such as collateral slashing, reputation, and other methods. This can become a bit complex when trying to counter collusion between the nodes that are computing the results.

One of the downsides of this approach is, of course, the extra effort expended on recomputing computations. However, with proper incentives, this overhead can be reduced dramatically, and it remains far lower than the overhead of cryptographic methods.

Our Approach

We opt for verification-via-replication as a first approach, for the reasons that it is simple to understand, has less overhead than cryptographic approaches, and has an attack surface that can be economically modelled and analyzed.

This has the downside of making private computations difficult. While the inputs and outputs of jobs can be encrypted so that only the client and compute node can see the code and data, this still leaves the client vulnerable to having their information leaked. Future approaches can incorporate SLAs and eventually support for homomorphic encryption to deal with this issue.

Run a node

Instructions for setting up a Resource Provider (node) on the public Lilypad testnet using Docker, including obtaining necessary funds and installing required software.

Prerequisites

Lilypad RPs currently only support Linux installs. Running an RP on Windows is currently experimental.

  • Linux (Ubuntu 22.04 LTS)

  • Nvidia GPU

Network information and Testnet tokens

The testnet has a base currency of ETH, as well as a utility token called LP. Both are used for running nodes. To add a node to the testnet, follow these steps:

Metamask Configuration

We recommend using MetaMask with custom settings to make things easier. Once you have it installed and set up, here are the settings you need to use:

Network name: Arbitrum Sepolia

Chain ID: 421614

Currency symbol: ETH

Block explorer URL: (leave blank)

Fund your wallet with ETH and LP

The faucet will give you both ETH (to pay for gas) and LP (to stake and pay for jobs).

Setup Arbitrum RPC (Optional)

The Lilypad Network uses the Arbitrum Sepolia Testnet to settle compute transactions. When a transaction is ready to be saved on-chain, Lilypad cycles through a list of public Arbitrum Sepolia RPC endpoints using the endpoint that settles first to save the compute transaction.

Docker Compose Setup

1. Export Your Private Key

Before starting, export your private key from MetaMask. Follow the official MetaMask guide for instructions on safely exporting your private key.

2. Download the Docker Compose Configuration

Use curl to download the docker-compose.yml file from the Lilypad GitHub repository.

LATEST_VERSION=$(curl -s https://api.github.com/repos/Lilypad-Tech/lilypad/releases/latest | sed -n 's/.*"tag_name": "\(.*\)".*/\1/p')

curl -o docker-compose.yml "https://raw.githubusercontent.com/Lilypad-Tech/lilypad/$LATEST_VERSION/docker/docker-compose.yml"
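Before continuing, it can be worth confirming that the version lookup succeeded and the compose file actually downloaded (a small optional check):

# Should print a version tag and the top of the compose file
echo "$LATEST_VERSION"
head -n 5 docker-compose.yml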

3. Check for Existing Containers

If any containers named resource-provider, ipfs, or watchtower are already in use, they will need to be stopped before running this setup to avoid naming conflicts.

You can check if these containers are running with:

docker ps -a | grep -E "resource-provider|ipfs|watchtower"

If they are running, stop them with:

docker stop <container_name>

If there are still conflicts when trying to run with the docker-compose file, remove the containers:

docker rm <container_name>
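As a convenience, all three names can also be handled in one loop (a sketch; docker rm -f stops and removes a container in a single step):

for c in resource-provider ipfs watchtower; do
  docker rm -f "$c" 2>/dev/null || true
done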

4. Start the Resource Provider

Start the Lilypad containers using Docker Compose:

WEB3_PRIVATE_KEY=<your_private_key> docker compose up -d

To include a custom RPC URL:

WEB3_PRIVATE_KEY=<your_private_key> WEB3_RPC_URL=wss://arb-sepolia.g.alchemy.com/v2/your-alchemy-id docker compose up -d

You must not reuse your compute node key as a client, even for testing: this will result in failed jobs and will negatively impact your compute node since the wallet address is how nodes are identified on the network.

5. Monitor Your Node

Use the following command to check the status of the resource provider and bacalhau.

docker logs resource-provider
docker logs bacalhau

Use the following command to view the containers running after starting Docker Compose.

docker ps

Update Lilypad version for Docker RP

When a new version of Lilypad is released, it is critical for Resource Providers to update their installations to ensure compatibility and ability to run Lilypad jobs.

1. If a Docker RP is running, stop the system (if the node is not running, disregard this first step)

docker compose down

2. View all Docker images

docker images

3. Delete old Docker images that are duplicates for Lilypad (Bacalhau, Lilypad)

docker rmi <image_name_or_id> 
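To narrow the listing to the relevant images before deleting, a simple filter helps (a sketch; image names may vary by registry):

# Show only Lilypad/Bacalhau-related images and their IDs
docker images | grep -Ei "lilypad|bacalhau"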

4. Follow Docker RP install instructions

View Lilybit_ rewards

To view your Lilybit_ rewards, visit one of the following dashboards and paste your node's public address into the input:

Troubleshooting

Setup Arbitrum RPC

Personal RPC endpoint for a Lilypad Resource Provider

The Lilypad Network uses the Arbitrum Sepolia Testnet to settle compute transactions.

The Lilypad Network is not responsible for issues caused by misconfigured RPC endpoints. Please use at your own risk!

Setup RPC video guide

Infura RPC

Setup a personal Arbitrum Sepolia RPC for a Lilypad Resource Provider.

This guide demonstrates how to use Infura's RPC, but this can be substituted for any Arbitrum Sepolia RPC that supports WebSockets.

Setup Infura account

Select the “free” tier as the compute units provided should be sufficient to run a Lilypad RP.

Setup RPC endpoint for Arbitrum Sepolia

In the Infura dashboard, a new API key will usually generate automatically. If not, select "Create New API Key". Navigate to "Configure" to set up the API key.

Scroll down the list to the Arbitrum network and ensure the Sepolia testnet box is checked, then save changes.

In the API key dashboard, select "Active Endpoints" and navigate to "WebSockets".

Scroll down the page to find the Arbitrum Sepolia URL. The RPC endpoint for Arbitrum Sepolia is now ready to be used with the Lilypad Resource Provider.
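Before wiring the endpoint into the RP, the API key can be sanity-checked with a JSON-RPC request (a sketch; this uses Infura's HTTPS variant of the same key, while the RP itself needs the WSS URL):

curl -s -X POST "https://arbitrum-sepolia.infura.io/v3/<your-infura-key>" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
# A JSON response containing a hex "result" means the key and network are configured correctly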

Use the new RPC endpoint

Lilypad RPs can use a personal RPC endpoint with a few simple steps. Only WebSocket (WSS) connections are supported.

Docker users

Stop the existing Lilypad Resource Provider (RP) before setting up the new RPC.

Locate the Lilypad RP Docker container using:

docker ps

Stop the container using the PID:

docker stop <container ID>

Use this command to start the lilypad-resource-provider.service with the new RPC:

docker run -d --gpus all -e WEB3_PRIVATE_KEY=<private-key> -e WEB3_RPC_URL=wss://arbitrum-sepolia.infura.io/ws/v3/some-id-from-infura --restart always ghcr.io/lilypad-tech/resource-provider:latest

Check the status of the container:

docker logs <container ID>

Ubuntu users

Stop the existing Lilypad RP (if the node is not running, disregard this first step):

sudo systemctl stop bacalhau
sudo systemctl stop lilypad-resource-provider

Update lilypad-resource-provider.service with the new RPC:

sudo nano /etc/systemd/system/lilypad-resource-provider.service

Add following line to [Service] section:

Environment="WEB3_RPC_URL=wss://arbitrum-sepolia.infura.io/ws/v3/some-id-from-infura"

Reboot the node:

sudo reboot
sudo systemctl status lilypad-resource-provider
sudo systemctl status bacalhau
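Alternatively, instead of a full reboot, reloading systemd and restarting the services (Bacalhau first, since the resource provider will not start without it) should pick up the new environment:

sudo systemctl daemon-reload
sudo systemctl restart bacalhau
sudo systemctl restart lilypad-resource-provider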

Events

Catch up with the Lilypad team or hack on the Lilypad Network at these events!

Upcoming

View the upcoming events we are attending or hosting.

2025

  • Online, March 17th

  • Dubai, April 30th - May 1st

Past

View the previous events we attended or hosted.

2025

  • Denver, February 23rd - March 2nd

2024

  • South East Asia, November 9th - November 17th

  • Brussels, July 8th - July 11th

  • Online, May 17th - June 7th

  • San Francisco, October 18th - October 20th

  • Bangkok, November 12th - November 15th

  • Sydney, May 3rd - May 5th

  • Istanbul, November 13th - November 17th

  • Iceland, September 25th - September 28th

  • Barcelona, October 1st - October 3rd

  • Las Vegas, October 3rd - October 5th

  • Online, September 23rd

Run a node - Linux

Instructions for setting up a Resource Provider (node) on the public Lilypad testnet using Linux, including obtaining necessary funds, installing required software, and ensuring security measures.

Prerequisites

  • Linux (Ubuntu 22.04 LTS)

  • Nvidia GPU

  • Nvidia Docker drivers

Lilypad RPs currently only support Linux installs. Running an RP on Windows is currently experimental. The resource provider version of Lilypad is not supported on Darwin/macOS.

Network information and testnet tokens

The testnet has a base currency of ETH, as well as a utility token called LP. Both are used for running nodes. To add a node to the testnet, follow these steps:

Metamask

We recommend using MetaMask with custom settings to make things easier. Once you have it installed and set up, here are the settings you need to use:

Network name: Arbitrum Sepolia

Chain ID: 421614

Currency symbol: ETH

Block explorer URL: (leave blank)

Fund your wallet with ETH and LP

The faucet will give you both ETH (to pay for gas) and LP (to stake and pay for jobs).

Installation

To set up your environment for using Lilypad with GPU support, you need to install several key components. This guide will walk you through installing Docker, the Nvidia Container Toolkit, Bacalhau, and Lilypad. You'll also configure systemd to manage these services efficiently.

Install Docker

Docker is a platform that allows you to automate the deployment of applications inside lightweight, portable containers.

To install Docker Engine, follow the steps specific to your operating system from the official Docker documentation:

Install Nvidia Container Toolkit

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    
sudo apt-get update

sudo apt-get install -y nvidia-container-toolkit

Configure the container runtime by using the nvidia-ctk command:

 sudo nvidia-ctk runtime configure --runtime=docker --set-as-default

The nvidia-ctk command modifies the /etc/docker/daemon.json file on the host. The file is updated so that Docker can use the NVIDIA Container Runtime.

Restart the Docker daemon:

sudo systemctl restart docker
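To confirm Docker picked up the NVIDIA runtime, the configured runtimes can be inspected (an optional check):

# "nvidia" should appear in the Runtimes line
docker info | grep -i runtimes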

Install Bacalhau

Bacalhau is a peer-to-peer network of nodes that enables decentralized communication between computers. The network consists of two types of nodes, orchestrator nodes and compute nodes, which communicate with each other.

Lilypad now supports Bacalhau v1.6.0

To install Bacalhau, run the following in a new terminal window (run each command one by one):

cd /tmp

wget https://github.com/bacalhau-project/bacalhau/releases/download/v1.6.0/bacalhau_v1.6.0_linux_amd64.tar.gz

tar xfv bacalhau_v1.6.0_linux_amd64.tar.gz

sudo mv bacalhau /usr/bin/bacalhau

sudo mkdir -p /app/data

sudo chown -R $USER /app/data

To check your Bacalhau version use:

bacalhau version

The expected output is:

CLIENT  SERVER  LATEST
v1.6.0  v1.6.0  <latest Bacalhau version>

Install Lilypad

The installation process for the Lilypad CLI involves several automated steps to configure it for your specific system. Initially, the setup script identifies your computer's architecture and operating system to ensure compatibility. It will then download the latest production build of the Lilypad CLI directly from the official GitHub repository using curl and wget.

Once the CLI tool is downloaded, the script sets the necessary permissions to make the executable file runnable. It then moves the executable to a standard location in your system's path to allow it to be run from any terminal window.

Via official released binaries

# Detect your machine's architecture and set it as $OSARCH
OSARCH=$(uname -m | awk '{if ($0 ~ /arm64|aarch64/) print "arm64"; else if ($0 ~ /x86_64|amd64/) print "amd64"; else print "unsupported_arch"}') && export OSARCH
# Detect your operating system and set it as $OSNAME
OSNAME=$(uname -s | awk '{if ($1 == "Darwin") print "darwin"; else if ($1 == "Linux") print "linux"; else print "unsupported_os"}') && export OSNAME;
# Remove existing lilypad installation if it exists
sudo rm -f /usr/local/bin/lilypad
# Download the latest production build
curl https://api.github.com/repos/lilypad-tech/lilypad/releases/latest | grep "browser_download_url.*lilypad-$OSNAME-$OSARCH-gpu" | cut -d : -f 2,3 | tr -d \" | wget -i - -O lilypad
# Make Lilypad executable and install it
chmod +x lilypad
sudo mv lilypad /usr/local/bin/lilypad

Ensure the RP is running the latest Lilypad version. Two weeks after each version release, the previous Lilypad version is deprecated.

To verify the installation, run lilypad in the terminal to display the version and a list of available commands, indicating that Lilypad CLI is ready to use.

Write env file

You will need to create an environment directory for your node and add an environment file that contains your node's private key.

To do this, run the following in your terminal:

sudo mkdir -p /app/lilypad
sudo touch /app/lilypad/resource-provider-gpu.env

Next, add your node's private key into /app/lilypad/resource-provider-gpu.env:

WEB3_PRIVATE_KEY=<YOUR_PRIVATE_KEY> (the private key from a NEW MetaMask wallet FOR THE COMPUTE NODE)

This is the key where you will get paid in LP tokens for jobs run on the network.
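One way to write the env file without opening an editor, and to restrict its permissions afterwards (a sketch; note the key will land in your shell history, so consider clearing that history):

sudo sh -c 'echo "WEB3_PRIVATE_KEY=<YOUR_PRIVATE_KEY>" > /app/lilypad/resource-provider-gpu.env'
sudo chmod 600 /app/lilypad/resource-provider-gpu.env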

You must not reuse your compute node key as a client, even for testing: this will result in failed jobs and will negatively impact your compute node since the wallet address is how nodes are identified on the network.

Setup Arbitrum RPC

The Lilypad Network uses the Arbitrum Sepolia Testnet to settle compute transactions. When a transaction is ready to be saved on-chain, Lilypad cycles through a list of public Arbitrum Sepolia RPC endpoints using the endpoint that settles first to save the compute transaction.

Install systemd unit for Bacalhau

systemd is a system and service manager for Linux operating systems. systemd operates as a central point of control for various aspects of system management, offering features like parallelization of service startup, dependency-based service management, process supervision, and more.

To install the Bacalhau systemd unit, open /etc/systemd/system/bacalhau.service in your preferred editor:

sudo vim /etc/systemd/system/bacalhau.service
[Unit]
Description=Lilypad V2 Bacalhau
After=network-online.target
Wants=network-online.target systemd-networkd-wait-online.service

[Service]
Environment="LOG_TYPE=json"
Environment="LOG_LEVEL=debug"
Environment="HOME=/app/lilypad"
Environment="BACALHAU_SERVE_IPFS_PATH=/app/data/ipfs"
Restart=always
RestartSec=5s
ExecStart=/usr/bin/bacalhau serve --orchestrator --compute

[Install]
WantedBy=multi-user.target  

Install systemd unit for GPU provider

Open /etc/systemd/system/lilypad-resource-provider.service in your preferred editor.

Hint: sudo vim /etc/systemd/system/lilypad-resource-provider.service

[Unit]
Description=Lilypad V2 Resource Provider GPU
After=network-online.target
Wants=network-online.target systemd-networkd-wait-online.service

[Service]
Environment="LOG_TYPE=json"
Environment="LOG_LEVEL=debug"
Environment="HOME=/app/lilypad"
Environment="OFFER_GPU=1"
EnvironmentFile=/app/lilypad/resource-provider-gpu.env
Restart=always
RestartSec=5s
ExecStart=/usr/local/bin/lilypad resource-provider 

[Install]
WantedBy=multi-user.target

Reload systemd's units/daemons (you will need to do this again if you ever change the systemd unit files written above):

sudo systemctl daemon-reload

Start Lilypad node

Start systemd units:

Start bacalhau and lilypad-resource-provider in a different terminal window than the ipfs node.

Once bacalhau is started, wait 5-20 seconds before starting lilypad-resource-provider. Bacalhau sometimes takes time to start and lilypad-resource-provider will not start without bacalhau.

sudo systemctl enable bacalhau
sudo systemctl enable lilypad-resource-provider
sudo systemctl start bacalhau
sudo systemctl start lilypad-resource-provider

Now that your services have been installed and enabled, check the status of Bacalhau to ensure it is running correctly on your node:

sudo systemctl status bacalhau

View node status

To check if the node is running, use the following command:

sudo systemctl status lilypad-resource-provider

This will show the current status of, and recent log output from, the Lilypad node. The logs will show the node running and accepting jobs on the network.

Run the following command to get more status info from your node:

sudo journalctl -u lilypad-resource-provider.service -f

To restart your resource provider run:

sudo systemctl restart lilypad-resource-provider

Support for Lilypad RPs

Open a discussion in the Lilypad-Tech Github organization discussion board and include the information below; without a discussion opened, our team will not be able to support the problem.

  • Description (including Lilypad version running on your node)

  • Hardware Info (including Linux/Windows version)

  • Related blockchain/ETH addresses of transaction hashes

  • Output Logs - sudo systemctl status lilypad-resource-provider

  • Related links/urls

  • Screenshots

Update Lilypad version

When a new version of Lilypad is released, it is important for resource providers to update their installations to ensure compatibility and access to the latest features and improvements.

Please note that using sudo rm -rf is very powerful and can be dangerous if not used carefully.

  1. If the Lilypad RP is running, stop the system (if the node is not running, disregard this first step):

sudo systemctl stop bacalhau
sudo systemctl stop lilypad-resource-provider
  2. Remove the Lilypad executable by running:

sudo rm -rf /usr/local/bin/lilypad
  3. Install the latest version of Lilypad by re-running the installation steps above, then start your resource provider by running:

sudo systemctl start bacalhau
sudo systemctl start lilypad-resource-provider

Disconnecting a node

To disconnect your node from Lilypad you will need to do a few things to completely offboard.

Using sudo rm -rf is very powerful and can be dangerous if not used carefully. It is highly recommended to navigate to the parent directory and remove files from there to avoid accidental deletion of important files.

First, stop the node:

sudo systemctl stop bacalhau
sudo systemctl stop lilypad-resource-provider

Next, you must remove the .service files related to Lilypad and Bacalhau. These files are typically stored in /etc/systemd/system/. To remove them, run the following command:

sudo rm -rf /etc/systemd/system/lilypad-resource-provider.service /etc/systemd/system/bacalhau.service

Next we notify the systemd manager to reload its configuration by running:

sudo systemctl daemon-reload

Then, remove the environment file for the Lilypad resource provider. This file is usually stored in /app/lilypad/. To remove it, run:

sudo rm -rf /app/lilypad/resource-provider-gpu.env

Finally, if you followed the installation instructions from the Lilypad documentation and moved the executable to /usr/local/bin/lilypad, it can be removed from there. If the executable is stored in a different directory on your machine, navigate to that directory and remove it from there. To remove the executable, run:

sudo rm -rf /usr/local/bin/lilypad

To remove Bacalhau, run:

sudo rm -rf /usr/bin/bacalhau

View Lilybit_ rewards

To view your Lilybit_ rewards, visit one of the following dashboards and paste your node's public address into the input:

Security

If you want to allowlist only certain modules (e.g. Stable Diffusion modules) to control exactly what code runs on specific nodes (code which can be audited to ensure it is secure and will have no negative impact on the nodes), set the environment variable OFFER_MODULES in the GPU provider to a comma-separated list of module names, e.g. sdxl:v0.9-lilypad1,stable-diffusion:v0.0.1.
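On a systemd-based install, one way to set this (a sketch, assuming the unit file created in the Linux guide above) is to add the variable to the [Service] section, then reload and restart:

sudo nano /etc/systemd/system/lilypad-resource-provider.service
# In the [Service] section add (module names are illustrative):
#   Environment="OFFER_MODULES=sdxl:v0.9-lilypad1,stable-diffusion:v0.0.1"
sudo systemctl daemon-reload
sudo systemctl restart lilypad-resource-provider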

Run a node video guide

FAQ

Frequently Asked Questions for Lilypad Network

🍃 General Questions

What is the Lilypad Network?

Lilypad is developing a serverless, distributed compute network that enables internet-scale data processing for AI, ML & other arbitrary computation from blockchains, while unleashing idle processing power & unlocking a new marketplace for compute.

Lilypad provides decentralized AI computational services. By extending unrestricted, global access to computational power, Lilypad strategically collaborates with decentralized infrastructure networks, such as Filecoin, to formulate a transparent, efficient, and accessible computational ecosystem.

The ability to perform off-chain decentralized compute over data with on-chain guarantees, and to call this functionality directly from a smart contract, the CLI, or an easy-to-use abstraction layer, opens the door to a multitude of possible applications.

Lilypad Whitepaper

Roadmap

What is the Bacalhau Project?

Bacalhau is a peer to peer computation network enabling compute over data jobs like GPU-enabled AI, ML, analytics, data engineering, data science, de-sci and more. With the open-source Bacalhau Project, you can streamline your existing workflows without rewriting by running Docker containers and WebAssembly (WASM) images as tasks. This architecture is also referred to as Compute Over Data (or CoD).

Wait, didn’t Lilypad used to rely on determinism and optimistic reproducibility for verifiable compute?

Previously, Lilypad required jobs on the network to be deterministic and used optimistic reproducibility, randomly re-running jobs to ensure trust. This method has been deprecated due to:

  1. The limitation the determinism requirement placed on which jobs could run on the network

  2. The practical difficulty of verifying that a supposedly "deterministic" job is actually deterministic

Has Lilypad raised VC money?

Yes, Lilypad closed our seed round of funding in March 2024.

🌐 Incentivized Testnet Questions

When will the Incentivized Testnet launch?

The Lilypad Incentivized testnet launched in mid June 2024.

How do LilyBit_ rewards work?

Lilybit_ rewards will be awarded to nodes for time on the network (up to a 4x multiplier) and compute power brought to the network. Rewards will be redeemable for the Lilypad ERC20 Utility Token at Mainnet Launch, with between 5% and 10% of the total token supply (depending on IncentiveNet demand and tokenomics finalization) allocated to this phase of the Lilypad Network.

Who can earn LilyBit_ rewards?

Phase 1 of the Incentivized Testnet is focused on rewarding nodes on the network, referred to as Resource Providers. The earlier a provider joins the network, the more Lilybits_ will be available.

Phases 2 and onward will provide rewards for Lilypad modules created as well as developer tools/apps (in addition to rewarding nodes).

How do I check my Lilybit_ rewards?

You can check your rewards by pasting your node's wallet address into the following interfaces:

  • Grafana dashboard

  • Lilypad leaderboard

How does Lilypad use blockchain, and why do I need both ETH and Lilypad tokens to run a job?

On the Lilypad network, the blockchain is used for:

  • Payment rails

  • Storing the deals transparently (on-chain guarantees about the compute)

  • Storing any disputes & results

Lilypad Tokens are used to transact on the Lilypad network. They are used as payment by those who want to run jobs (to the resource providers who run them), and as collateral by resource providers.

You need ETH tokens to pay the gas fees for the smart contracts that facilitate transactions, and for the records of transactions and disputes that are posted to the Ethereum blockchain.

⚙️ Hardware Provider Questions

What do I need to do before I can run a Lilypad node?

The required steps before running a Lilypad node include adding the node, adding the Lilypad network information, obtaining tokens and installing the required software.

What are the hardware requirements to run a Lilypad node?

What are the updates required for maintaining my node software?

How can I check the status of my Lilypad node?

👩‍💻 Developer Questions

What is a Lilypad Module?

How to run a ML job on Lilypad

How to add a ML model to run on Lilypad

How to run a Lilypad node

Node hardware specs

  • Linux (latest Ubuntu LTS recommended)

  • Nvidia GPU

  • Nvidia drivers

  • Docker

  • Nvidia docker drivers

📖 Token Questions

Expected TGE

Although the launch date is not finalized, the launch of Lilypad Mainnet and the TGE is scheduled for 2025.

"a wallet funded with both ETH and LP tokens"

If you have questions or find something in here interesting, please feel free to raise a discussion in the GitHub or in the Lilypad Discord server!

The Truebit verification game consists of a series of rounds, where in each round, a smaller and smaller subset of the computation is checked. Eventually, only one instruction is used to determine whether a Solver or Verifier is correct: "In fact, only one instruction line is used in a verification game. There will be a part of the program code where there is a discrepancy between the Solver and the Verifier. The instruction of that discrepancy point is used to verify who is right."

The original version of Modicum had five key components: Job Creators (JC), Resource Providers (RP), Solvers (market makers), Mediators (agreed-upon third parties for mediation), and Directories (file systems, which we have replaced with IPFS and Docker registries). Job Creators are clients, the ones who have computations that need to be done and are willing to pay. Resource Providers are those with computational resources that they are willing to rent out for the right price. Solvers are market-makers; they match the offers from JCs and RPs. Mediators are third parties trusted by both JCs and RPs to arbitrate disagreements. The Directories are network storage services available to both JCs and RPs.

Modicum is a useful prototype for a two-sided marketplace (the follow-up paper for stream processing applications is also useful)

While this problem setting - verifiable computation by way of game theory - is different than many game theoretic settings, we can draw inspiration from commonly used concepts like the revelation principle and strategyproofness. Both strategyproofness and the revelation principle are centered around the idea of incentivizing agents to truthfully report their preferences. Most approaches in the literature rely on analytic methods to determine what rational agents will do by analyzing their payoffs as a function of their preferences, the behaviors of other agents, and the mechanism under analysis. Ultimately, we are also aiming to find (a) mechanism(s) that lead(s) to an equilibrium where all agents choose to not cheat and not collude.

The process begins by creating a digital twin of a two-sided marketplace. In this environment, autonomous agents acting on behalf of client and compute nodes will be trained to maximize returns based on data gathered in simulation. For now, we will avoid maximizing returns by optimizing scheduling, though this is a future topic of interest. We will use techniques primarily from the field of multi-agent reinforcement learning in order to train the agents. The precise methods we will use (e.g. modes of training and execution, homogeneous vs. heterogeneous agents, choice of equilibrium, self-play vs. mixed-play, value-based vs. policy-based learning, etc.) will be determined in the course of building the simulation. See the pre-print by Albrecht, Christianos, and Schäfer for our reference text.

Nvidia drivers (Ubuntu install)

Nvidia Docker drivers (NVIDIA Container Toolkit)

For a more in-depth look at the requirements to run a Lilypad node, please refer to the hardware requirements documentation.

New RPC URL: https://sepolia-rollup.arbitrum.io/rpc

For a step by step guide on adding the network, please refer to our Setting up MetaMask documentation.

To obtain testnet LP, use the Lilypad faucet and enter your ETH address.

To obtain testnet ETH, use a third party Arbitrum Sepolia testnet faucet and enter your ETH address.

If you need more tokens and already have Sepolia ETH, use the official Arbitrum bridge to transfer tokens over to Arbitrum Sepolia.

Resource Providers have the option to set up their own Arbitrum RPC endpoint using Alchemy instead of relying on the default public RPC endpoints.

A personal RPC endpoint helps RPs avoid reliability issues with the public RPC endpoints used by Lilypad, ensuring rewards can be earned and jobs can be run consistently. RPs running a personal RPC endpoint contribute to the fault tolerance and decentralization of the Lilypad Network! Read more in the Alchemy Arbitrum docs.

Before we start the RP with the Docker setup, retrieve the private key from the wallet set up earlier in this guide. For guidance on exporting your private key, refer to this official MetaMask guide. Once the private key has been retrieved, proceed to initialize the Docker containers using the commands provided below.

The same WEB3_PRIVATE_KEY cannot be used for both an RP and the Lilypad CLI. If a WEB3_PRIVATE_KEY has already been used to run jobs with the CLI, make a new one and fund the wallet. You can run `unset WEB3_PRIVATE_KEY` if you want to use a different key for your RP.
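
For example, in the shell where you plan to start the RP (bash syntax):

echo $WEB3_PRIVATE_KEY   # check whether a key is already set in this shell
unset WEB3_PRIVATE_KEY   # clear it so the RP can be started with a different key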

Before moving to the next step, ensure older versions of lilypad and bacalhau are not running as systemd services.

Using the Lilypad Docker RP install instructions, set up a new RP and run it.

Here are some common troubleshooting techniques for running your resource provider with Docker.

This personal RPC endpoint allows Resource Providers (RP) to avoid reliability issues with the RPC endpoints used by Lilypad, ensuring rewards can be earned and jobs can be run consistently. RPs running a personal RPC endpoint contribute to the fault tolerance and decentralization of the Lilypad Network! Read more in the Alchemy Arbitrum docs.

This guide walks through the steps of setting up a personal RPC endpoint for Arbitrum Sepolia using Infura.

Create an account on Infura and choose your plan based on how many APIs you need.

This guide is for individuals running a Lilypad Resource Provider; find more info here.

If the Lilypad RP was properly set up as a systemd service, the RP will reboot using the new RPC. Once the reboot is complete, the RP should be running with the updated configuration. To verify your node is back online and running correctly, run the following:

These instructions are no longer actively maintained. We recommend using the Docker approach for the most up-to-date and supported setup.

For a more in-depth look at the requirements to run a Lilypad node, please refer to the hardware requirements documentation.

Nvidia drivers (Ubuntu install)

New RPC URL: https://sepolia-rollup.arbitrum.io/rpc

For a step by step guide on adding the network and importing the LP testnet token, please refer to our Setting up MetaMask documentation.

To obtain testnet LP, use the Lilypad faucet and enter your ETH address.

To obtain testnet ETH, use a third party Arbitrum Sepolia testnet faucet and enter your ETH address.

The Arbitrum Sepolia faucet provides 0.0001 tokens per request. If you need more tokens and already have Sepolia ETH, you can use the official Arbitrum bridge to transfer it over to Arbitrum Sepolia.

- Docker Engine

To ensure proper operation of your graphics cards and Lilypad, follow these steps to install the Nvidia Toolkit Base Installer:
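
As a sketch of what those steps look like on Ubuntu with apt (taken from NVIDIA's published install instructions at the time of writing; always confirm against the Nvidia Container Toolkit download page):

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker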

If the Bacalhau CLIENT version is not v1.6.0, it will need to be replaced. Follow the steps here to uninstall and reinstall Bacalhau.
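
To check the installed client version (assuming bacalhau is on your PATH):

bacalhau version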

A WEB3_PRIVATE_KEY can be retrieved from the Metamask account details menu. For more info, check out this official guide from Metamask on viewing a wallet's private key.

Resource Providers have the option to set up their own Arbitrum RPC endpoint using Alchemy instead of relying on the default public RPC endpoints.

A personal RPC endpoint helps RPs avoid reliability issues with the public RPC endpoints used by Lilypad, ensuring rewards can be earned and jobs can be run consistently. RPs running a personal RPC endpoint contribute to the fault tolerance and decentralization of the Lilypad Network! Read more in the Alchemy Arbitrum docs.

For complex issues, bug reports, or feature requests, open a discussion in the Lilypad-Tech Github organization discussion board.

Navigate to the discussion board, select "New Discussion", choose "rp-issues", and fill out the template.

For quick questions or minor issues, use the Lilypad Discord #i-need-help channel and provide the following info.

As every system is different, these instructions may vary. If you have any issues, please reach out to the team in the Lilypad Discord for help!

Visit the Lilypad GitHub for a full list of available modules.

The Lilypad Whitepaper was planned to release by end of Q4 2024, but required further review from our team and advisors. See our roadmap for the current timeline.

Find the full Lilypad Network roadmap on our website!

To find out more about it, see the Bacalhau Docs.

Resource Providers (RP) can track their Lilybit_ earnings with the RP Leaderboard.

Refer to the Resource Provider documentation and select your preferred platform (Linux or Docker) for a detailed guide on the prerequisites.

For more information, please visit the Run a node docs.

Resource providers are expected to have the latest Lilypad versions installed on their machines. These will be included in the Docker files in each release. For more information, please refer to the docs.

For more information, check out the docs.

View the Modules section.

Follow the quickstart guides to get started.

To build an application with Lilypad compute and modules on the backend, check out this guide.

A Lilypad module is a Git repository that can be used to perform various tasks using predefined templates and inputs. This guide will walk you through the process of creating a Lilypad module, including defining a JSON template, handling inputs, and other best practices, with a rough template sketched below.
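
As a rough sketch of the idea only (the exact schema lives in the module templates on the Lilypad GitHub; the field values here are illustrative, and {{ .prompt }} stands in for a templated input):

{
  "machine": { "gpu": 1, "cpu": 1000, "ram": 8000 },
  "job": {
    "APIVersion": "V1beta1",
    "Spec": {
      "Deal": { "Concurrency": 1 },
      "Engine": "Docker",
      "Docker": {
        "Image": "yourorg/yourmodel:v0.1",
        "EnvironmentVariables": ["PROMPT={{ js .prompt }}"]
      }
    }
  }
}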

Lilypad is an open network that allows anyone to contribute GPU computing capacity. Find instructions for running a node in our documentation.

For more information on the requirements to run a Lilypad node, please refer to the hardware requirements documentation.


Quickstart

Get started with Lilypad

Lilypad allows you to run on-demand compute jobs on the network, enabling AI inference and high-performance workloads without managing infrastructure.

Alchemy RPC

Setup a personal Arbitrum Sepolia RPC for a Lilypad Resource Provider.

This guide demonstrates how to use Alchemy's RPC, but this can be substituted for any Arbitrum Sepolia RPC that supports WebSockets.

Setup Alchemy account

Select the “free” tier as the compute units provided should be sufficient to run a Lilypad RP. The free service provides 300 million compute units per month.

Select “skip bonus” or input a credit card with billing info (the card will not be charged unless the compute credits in the free tier are used).

Setup RPC endpoint for Arbitrum Sepolia

In the “Overview” section of the Alchemy dashboard, navigate to “My app” and select “Endpoints”. If an app was not created upon login, create a new one by selecting "Create new app".

By selecting “Endpoints”, the “Networks” tab will open providing an option to configure the Arbitrum API.

  • Select “Sepolia”

  • Select “Websockets”

The RPC endpoint for Arbitrum Sepolia is ready to be used with the Lilypad Resource Provider:
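
The endpoint will look something like the following, where the trailing ID is your app's API key (illustrative value shown):

wss://arb-sepolia.g.alchemy.com/v2/some-id-from-alchemy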

Metrics for the RPC can be viewed in the “Metrics” tab.

Use the new RPC endpoint

Lilypad RPs can use a personal RPC endpoint with a few simple steps. Only WebSocket (WSS) connections are supported.

Docker users

Stop the existing Lilypad Resource Provider (RP) before setting up the new RPC.

Locate the Lilypad RP Docker container using:

docker ps

Stop the container using the container ID:

docker stop <container ID>

Use this command to start the lilypad-resource-provider.service with the new RPC:

docker run -d --gpus all -e WEB3_PRIVATE_KEY=<private-key> -e WEB3_RPC_URL=wss://arb-sepolia.g.alchemy.com/v2/some-id-from-alchemy --restart always ghcr.io/lilypad-tech/resource-provider:latest

Check the status of the container:

docker logs <container ID>

Ubuntu users

Stop the existing Lilypad RP (if the node is not running, disregard this first step):

sudo systemctl stop bacalhau
sudo systemctl stop lilypad-resource-provider

Update lilypad-resource-provider.service with the new RPC:

sudo nano /etc/systemd/system/lilypad-resource-provider.service

Add the following line to the [Service] section:

Environment="WEB3_RPC_URL=wss://arb-sepolia.g.alchemy.com/v2/some-alchemy-id"
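
After the edit, the [Service] section of the unit file might look roughly like this (the ExecStart path and key values are illustrative; your file may differ):

[Service]
Environment="WEB3_PRIVATE_KEY=<private-key>"
Environment="WEB3_RPC_URL=wss://arb-sepolia.g.alchemy.com/v2/some-alchemy-id"
ExecStart=/usr/local/bin/lilypad resource-provider
Restart=always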

Reboot the node:

sudo reboot

After the reboot, check that both services are running:

sudo systemctl status lilypad-resource-provider
sudo systemctl status bacalhau

This guide walks through the steps of setting up a personal RPC endpoint for Arbitrum Sepolia using Alchemy.

Create an account and login to the Alchemy dashboard.

This guide is for individuals running a Lilypad Resource Provider; find more info here.

If the Lilypad RP was properly set up as a systemd service, the RP will reboot using the new RPC. Once the reboot is complete, the RP should be running with the updated configuration. To verify your node is back online and running correctly, run the following:

Attachments:

  • Anura.bruno_collection.json (Bruno collection, 13KB)

  • Anura.postman_collection.json (Postman collection, 15KB)

  • heart.wav (Heart Voice Sample, 783KB)

  • puck.wav (Puck Voice Sample, 778KB)

  • fenrir.wav (Fenrir Voice Sample, 804KB)

  • bella.wav (Bella Voice Sample, 818KB)
  • Lilypad Network

  • Lilypad Linktree

  • @Lilypad_Tech

  • Lilypad Blog

  • @LilypadNetwork

  • Lilypad-Tech Github

  • Lilypad Luma

A wallet funded with Arbitrum ETH and Lilypad tokens
  • Building RAG Agents on Lilypad

  • Can an AI Agent Solve the Crypto UI Problem?

  • Build a chatbot on Lilypad

  • Websearch API powered by the Lilypad Inference API

  • Module Builder guide

  • Build an Open-source Perplexity Clone

  • Lilypad blog

  • Try out Waterlily.ai for yourself!

  • Contribute to the Github or read the code

  • Anything LLM: GPT Desktop Client for Lilypad.

  • Rivet: Agentic Workflows.

  • Lilypad Llama3 Chatbot: AI-powered chatbot running on the Lilypad Network using Llama3 for intelligent responses.

  • Lilypad ML Workbench: A platform for running, fine-tuning and deploying AI models on the Lilypad Network, with support for Jupyter notebooks and multi-module workflows.

  • AI Oncologist Agent: A multi-agent system that searches, extracts, and analyzes oncology research papers for key insights.

  • RAG Support Agent: Retrieves context and generates AI-driven support responses using the Lilypad Network.

  • HubSpot-LinkedIn Agent: An agent-based system that monitors HubSpot for recently modified contacts, fetches enriched data from LinkedIn, and updates HubSpot automatically.

  • VS Code Helper Extension: Enhances development with smart suggestions, automation and simple Lilypad integration.

  • Farcaster Frame: Allows users to generate AI-powered outputs directly within interactive Farcaster posts.
Empowering AI-driven Innovation
  • Alison Haire, CEO (prev: Filecoin, IBM, Accenture)

  • Steve King, COO (prev: Atlassian, Canva, Founder)

  • Stanley Bishop [PhD], Head of Research (AI & ML Engineer & Researcher, Stanford University)

  • James Walker, Engineering Manager (prev: Fission VP of Engineering)

  • Eric Boniardi [PhD ML Research], Senior Research Engineer [AI & ML, Systems Design] (Scholar - Mathematical Engineering)

  • Clint Snider [PhD Systems Engineering], R&D Engineer [Tokenomics & Systems Design] (University of Florida, Tempest Labs)

  • Lindsay Walker, Product Growth Lead (prev: Protocol Labs, Sauce Labs)

  • Nate Smith, Partnerships & BD Lead (Coinbase, prev: Gamestop, Godwoken)

  • Brian Ginsburg, Software Engineer (prev: Fission)

  • Ayush Kumar, Software Engineer (Apybara Founder, prev: Protocol Labs, MovableInk)

  • Arsen Yeremin, Solutions Architect [AI & Blockchain] (prev: TypherAI CTO)

  • Narbeh Shahnazarian, Solutions Architect (prev: Expedia)

  • Holly Grimm, Senior Protocol & Blockchain Engineer (prev: Gitcoin, Algovera)

  • Logan Lentz, Software & Developer Relations Engineer (Leto, prev: AWS)

  • Alex Mirran, BD & DevRel Lead (Leto, prev: Willis Towers Watson, Disrupt FinTech)

  • Phil Billingsby, Senior Developer Relations & DX Engineer (Unboxed, prev: Arweave, FilmFreeway)

  • David Aronchick, Distributed Compute & Product Advisor (Experience: Kubernetes, Bacalhau, Expanso CEO)

  • Peter Wang, AI & ML Advisor (Experience: Anaconda.ai Founder)

  • Jonathon Victor, Web3 Landscape and Business Advisor (Experience: Ansa Research, Protocol Labs)

  • Paul Wagner, Business & Technical Advisor (Experience: Filecoin, Oracle, Startup Founder)

  • Brooklyn Zelenka, Technical & Business Advisor (Experience: Co-Founder Fission)

  • Eric Vander Wal, Product & Web3 Advisor (Experience: Uprising Founder, Godwoken Founder, Oasys)

  • Jay Logelin, Engineering and Cryptography Advisor (Zama, prev: Protocol Labs, NEAR)

  • Crypto Econ Lab, Tokenomics Design Team

  • Frontline.codes, Frontend Development Team

Current contract design
  • Inference API: Submit inference jobs, process requests and retrieve results using Lilypad's API.

  • CLI: Run compute jobs, manage workloads and interact with the network directly from your terminal.

  • WebSearch UI: Combines websearch and AI-powered answers using the Lilypad Inference API to help answer questions based on the latest happenings provided as context.

  • ai learning buddy: A learning assistant with chat capabilities, note management, and retrieval-augmented generation (RAG) technology powered by the Lilypad Anura API.

  • Lilypad Anura Playground: Run any model hosted on the Lilypad network in a simple chat interface.

  • A research agent template that reads papers, aggregates relevant info into reports, and answers questions about research.

  • Customer Support agent: A customer support agent with chat capabilities that uses text input from docs, blogs, etc. to provide answers to specific questions.

  • Gradio x Lilypad Interface: Run AI jobs from a Gradio interface.

  • Lilytree: Stylized tree art from ETH addresses.

  • Sir Croaksworth's Roast dApp: Get your portfolio roasted.

  • SDXL Turbo UI: Generate vibrant AI art from a text prompt.

  • the great debAIte: AI personas debate your topic.

  • Waterlily.ai: Ethical generative AI-Art DApp.

  • ExtraLabs x Lilypad: A work-in-progress proof of concept utilizing Extra Labs and Lilypad to create a Lilypad module for running multiple geospatial jobs such as 3D modeling.

  • CipherCraft: A decentralized hub for federated model training on access-controlled private datasets.

  • NexTown: A decentralized AI computation and storage platform allowing users to run AI models via Lilypad, encrypt results with their public key, and securely store outputs on Lighthouse, accessible only by the user's private key.

  • Filecoin ML Engine: Onchain ML engine to create dApp ML agents on top of Filecoin.

  • Dogtoken ai erc404: ERC404 token with custom Lilypad stable diffusion to generate unique images for holders.

  • Rock Paper Ninja: A web3 take on Rock Paper Scissors, utilizing Lilypad to create generative avatars.

  • Learnr Idle Mining

  • On-Chain Personality (Honorable mention): Leverages on-chain Ethereum mainnet transactions to classify wallets into six predefined buckets/sub-ecosystems: Airdrop Farmer, NFT Collectooor, Snipoor, Shitcoinooor, EVM Enjoyoor, and NPCs.

  • Privacy Avengers (Could use Lilypad tech): A protocol for verifiable machine unlearning, solving AI privacy issues by securely and efficiently detecting and removing user data.

  • HuggingLily: Run HuggingFace pipelines via dashboard with just one click.

  • Lilypad-Kamu: A custom Lilypad module for SQL streaming through kamu.dev, providing provenance and version control of derivative datasets. Integrates the desci project kamu.dev into a Bacalhau job to be consumed by Lilypad v2.

  • Project C: Dive into the cosmic engineering that propels your personalized galaxy of knowledge.

  • AI Capwyn: ML tooling for security audits. Validate and test the status of deployed Lilypad Compute Networks. The contracts ensure seamless interaction with the network, while the test script checks for network availability and contract functionality.

  • Lilypad Hub: Much like Docker Hub, a one-stop destination for all your module needs as a Lilypad user. Includes a feature-rich playground where you can seamlessly test job specifications, experiment and fine-tune your code in a safe environment.

  • Obsidian-Lilypad: Plug obsidian-lilypad into your Obsidian instance to fetch dPIDs from DeSci Labs, have ChatGPT summarize them, and use that as a prompt for runSDXL on Lilypad. Helia/IPFS and Kubo delegated content routing are used to splay DAGs and fetch content from IPFS.

  • Decenter AI: DeCenter AI functions as a PaaS infrastructure, empowering machine learning engineers to expedite and make the training of AI models more cost-effective through decentralized parallel training methods. Its core objective is to democratize and decentralize AI model training.

  • Lilylatte: A DataDAO designed to bridge the gap between onchain and offchain data, focusing on web3 market research. Data owners, or web3 citizens, link their wallets and engage in dialogues generated by the LLM model 'Lily.'

  • Tiny Hops: An intents computational framework for describing Lilypad workflows, with a serialization format that lets you replace variables within Solidity on-chain, allowing you to configure receiveResults to feed back and trigger future jobs.

  • Saturn Observatory: Aims to increase the transparency of the Filecoin Saturn network, the fastest growing community-run distributed content delivery network (CDN) for Web3. Uses historical data about the network gathered by Saturn Moonlet to prepare and publish analytical reports on the Saturn Observatory website every month.

  • Rejuvenate AI: The first community-based blockchain project, powered by the community and built for the community, to promote healthy living and achieve healthy locations where people live up to 100 years all around the world.

  • StoryChain: A multi-level AI-based storytelling NFT dapp; a new take on the ages-old classic chain of stories, where different users collaboratively create stories. Users create stories that have unique chapters and art using web3, AI, NFTs, and IPFS. Each page can belong to a different user.

  • Lilypad Javascript Wrapper: A JavaScript wrapper to interact with the Lilypad CLI using Node.js. The wrapper utilizes the node-pty package to spawn a pseudo-terminal for executing Lilypad CLI commands, enabling front-end developers to integrate Lilypad with other tools.

  • Lilywrite: Combines blockchain technology and artistic expression, enabling users to craft and own unique poetic artworks. Users generate poems using credits, represented by LW tokens, and acquire them as distinctive non-fungible tokens (NFTs), each symbolizing a unique blend of creativity and expression.

  • GreenBadge: An innovative on-chain funding platform dedicated to fostering sustainability and a brighter future for our planet. Organizations committed to these goals gain access to a vast network of donors and like-minded brands who share a passionate vision for the future of our environment.

  • Gadus CLI: CLI package for running 3D surface reconstruction with decentralized compute.

  • Deehr Market: Empowering patients with ownership of their electronic health records.

  • CréditDécentrale: A decentralized credit score system using zkSNARKs, decentralized computation & storage.

  • Cypher Deposit: Elevating financial privacy in crypto withdrawals. Securely withdraw funds via anonymous transfers & secure transactions, preserving anonymity.

  • LLM Bench: Verifiable on-chain Large Language Model drift benchmarking, inspired by https://arxiv.org/abs/2307.09009.

  • Uncensored GPT-5: Uncensored, 100% decentralized GPT AI chat running on the blockchain.

  • ZK Microphone: Trusted audio in the age of deepfakes 🔒🎙 Generative AI enables disinformation, manipulation, and political subversion. The world's first attested microphone, using ZK-SNARKs to protect authenticity and privacy.

  • CarpAI: A fastchat LLM Inference Module for Lilypad v1.

  • Brian: Decentralized AI Assistant for Web3; perform on-chain transactions via prompt.

  • AvaSoul: Private-model generation & service platform: your very own unique personality in an AI Avatar (web3 character.ai)!

  • EasyCraftAI: Based on academic research, an AI and blockchain based supply chain management implementation matching factory capacity with customer orders for economic efficiency.

  • DefiKicks: A decentralized, community-governed Data DAO on Filecoin that democratizes DeFi data aggregation and TVL calculations with on-chain adapter proposals, off-chain voting & interoperable tokens.

  • Decentralized Yield Data Collector: A proof of concept of a decentralized yield data aggregator. It gets the pool data, does calculations on it and shows the best APY value of the reserve to the user with a push notification.

  • Pensieve: A decentralized file storage to recreate those moments in your memories and share them with others. As you upload these memories, you can also choose to take part in the documentation of the HISTORY OF APES (HOMO SAPIENS).

  • Daggle: A swiss army knife for Bacalhau, everything you need in an easily accessible dashboard.

  • InferAI: A single click deploy for your ML models on decentralized compute and storage powered by FVM, Bacalhau & Libp2p. An easy SDK for AI models running on decentralised computing.

  • Tentai: An innovative NFT game where characters evolve and improve their tasks over time.

  • SlickCI: One click deployment of CI/CD workflows onto decentralised cloud infrastructure.

  • dub3.ai: An innovative marketplace harnessing the power of Cartesi, enabling users to transform their voice into unique NFTs and generate text-to-speech content using cutting-edge AI technology.
