Lilypad Home

Lilypad is powering democratic participation in the AI Innovation Economy by pioneering new avenues for anyone to deploy, distribute and monetize AI models.

As a full-stack, modular AI services platform, Lilypad provides:

  • A model marketplace for publishing and monetizing AI workloads

  • MLOps tooling for managing and scaling AI pipelines

  • A distributed, on-demand GPU network for high-performance inference across agent workflows, ML research and DeSci applications

Lilypad acts as a:

  • Demand engine for decentralized compute and storage

  • Distribution layer for training networks and agentic frameworks

  • Bridge across data, compute and AI ecosystems, enabling straightforward integration

Real-World Impact

In partnership with Dell, Dr. Michael Levin, and Amelie Schroder (Hugging Face's top bioML creator), Lilypad powered a protein folding AI model that discovered a novel heart medication, now progressing through wet-lab validation. We are now working with Dell to build a permissioned, licensable version of the Lilypad Network for enterprise AI adoption.

Run AI Workloads

Lilypad makes it easy to run containerized AI models across a global, decentralized GPU network with no infrastructure setup required. Whether you're working with image generation models like Stable Diffusion and SDXL, text-to-video models, LLMs, or custom AI workloads, Lilypad provides the compute layer to scale inference on demand.

You can:

  • Deploy models as Lilypad Job Modules using standard Docker containers

  • Trigger jobs via our CLI or inference API, with full support for passing inputs and capturing outputs

  • Integrate into ML pipelines or agent workflows, using Lilypad as a backend for large-scale AI tasks

Whether you're building an app, automating research, or experimenting with new architectures, Lilypad gives you a permissionless, cost-efficient path to run your AI anywhere.
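As a rough sketch of what triggering a job from the CLI can look like (the module name, version tag, and input key below are illustrative placeholders; consult the Lilypad CLI reference for the exact, current syntax):

```shell
# Submit a job module to the Lilypad network (hypothetical module and version shown).
# The CLI dispatches the containerized workload to a resource provider,
# waits for execution, and reports where the outputs were written.
lilypad run cowsay:v0.0.4 -i Message="Hello, Lilypad"
```

Inputs are passed as `Key=Value` pairs, which the network forwards into the module's container; outputs from the completed job are captured and returned to you, so the same pattern slots into scripts, ML pipelines, or agent workflows.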

Provide Compute

Join the Lilypad Network as a Resource Provider and earn rewards by contributing your idle GPU or CPU power to run real AI jobs. Whether you're a solo builder with a gaming rig or an organization managing fleets of GPUs, Lilypad lets you monetize underused compute by connecting directly to demand from developers, researchers and decentralized applications.

As a Resource Provider, you’ll:

  • Run containerized workloads from users across the network

  • Earn rewards for completed jobs

  • Retain full control over your hardware with no lock-in or custodial overhead

Lilypad’s compute layer is permissionless, modular, and designed to scale. We support consumer GPUs, data center hardware, and containerized environments — including Linux hosts and Docker-native setups.
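As an illustrative sketch of provider onboarding on a Docker-capable Linux host (the environment variable name and command below are assumptions for illustration; follow the Resource Provider documentation for the exact, current steps):

```shell
# Illustrative setup on a Linux host with Docker and a supported GPU.
# A wallet key identifies your node for receiving rewards (variable name is an assumption).
export WEB3_PRIVATE_KEY=<your-wallet-private-key>

# Start the provider process so your machine begins accepting
# containerized jobs from the network (command shown is an assumption).
lilypad resource-provider
```

Because workloads arrive as standard containers, your hardware stays under your control: stopping the provider process simply stops new jobs, with no lock-in or custodial overhead.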

Get Started Today!

  • Leverage the modular architecture to compose your own workflows, fine-tune models, or chain outputs across models

  • Support open innovation by enabling inference for DeSci, agent frameworks, AI art tools, and more

Join the Community & Chat with Us

  • Start building with Lilypad!

  • Become a resource provider today!

  • Lilypad Network

  • Lilypad Linktree

  • @Lilypad_Tech

  • Lilypad Blog

  • @LilypadNetwork

  • Lilypad-Tech Github

  • Lilypad Luma