Catch up with the Lilypad team or hack on the Lilypad Network at these events!
For hackathon ideas see Win Hacks & Contribute!
Example job: lilypad run sdxl:v0.9-lilypad3 "A record player at a bit event in space"
Devcon South East Asia [Nov 9-17]
TOKEN2049 Dubai [April 30 - May 1]
ETHDenver [February 23 - March 2]
ETHCC Brussels [July 8-11]
HackFS Online [May 17-June 7]
ETH Global San Francisco [October 18-20]
Devcon Bangkok [November 12-15]
ETHGlobal Sydney [May 3-5]
LabWeek Istanbul [Nov 13-17]
Fil:Dev Iceland [Sept 25-28]
Chainlink SmartCon Barcelona [Oct 1-3]
Fil:Vegas [Oct 3-5]
Open Data Hack - Online [Sept '23]
Instructions for installing the Lilypad CLI on your machine
The installation process for the Lilypad CLI involves several automated steps to configure it for your specific system. First, the setup script identifies your computer's architecture and operating system to ensure compatibility. It then downloads the latest production build of the Lilypad CLI directly from the official GitHub repository using curl and wget.
Once the CLI tool is downloaded, the script sets the necessary permissions to make the executable file runnable. It then moves the executable to a standard location in your system's path to allow it to be run from any terminal window.
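The script's behavior can be sketched as follows (the release URL, asset name, and install path below are illustrative, not the script's exact values):

```bash
# Illustrative sketch of what the install script does; the exact URL and
# asset name used by the real script may differ.
OS=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. "linux" or "darwin"
ARCH=$(uname -m)                              # e.g. "x86_64" or "arm64"
echo "Would fetch: https://github.com/Lilypad-Tech/lilypad/releases (lilypad-${OS}-${ARCH})"
# The script then roughly does:
#   chmod +x lilypad                 # make the downloaded binary executable
#   sudo mv lilypad /usr/local/bin   # put it on the PATH for every terminal
```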
Lilypad offers two distinct installation options to cater to different roles within the network: one for the users of Lilypad and another for resource providers who supply the computational resources to the Lilypad Network.
The resource provider version of Lilypad is not supported on Darwin/macOS.
Select the appropriate installation based on your role:
You're required to set your private key environment variable, WEB3_PRIVATE_KEY, to interact securely with the network.
A WEB3_PRIVATE_KEY can be retrieved from the MetaMask account details menu. For more info, check out the official guide from MetaMask on viewing a wallet's private key. Be sure to keep your private key safe and never share it or store it in unsecured places to prevent unauthorized access to your funds.
To use the Lilypad CLI, the set private key will need to hold Lilypad testnet tokens and Arbitrum Sepolia ETH. You can find those instructions in the Funding your wallet documentation.
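Setting the variable might look like this (the key shown is a placeholder; never share a real key):

```bash
# Placeholder key for illustration only; substitute your own private key
# and never commit or share it.
export WEB3_PRIVATE_KEY="0x0000000000000000000000000000000000000000000000000000000000000001"
# Confirm it is set for the current shell session:
echo "${WEB3_PRIVATE_KEY:+WEB3_PRIVATE_KEY is set}"
```

Adding the export line to your shell profile (e.g. ~/.bashrc) makes it persist across sessions.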
To verify the installation, run lilypad in the terminal; a list of available commands should be displayed, indicating that the Lilypad CLI is ready to use.
That's it! You've successfully installed the Lilypad CLI on your machine! 🎉
Adding Trust to the Network
Lilypad is a verifiable, trustless, decentralized compute network that aims to prevent cheating. The network consists of clients and compute nodes. The main goal is to establish a game-theoretic approach to verifiable computing, where clients can trust the results they receive from compute nodes. The approach used in Lilypad is pure verification-by-replication, without relying on cryptographic tools like SNARKs or trusted execution environments.
In the context of Lilypad, it's important to understand the difference between global consensus and local consensus. Global consensus, as seen in blockchains, ensures that every node knows the result of a computation is correct due to massive replication across many nodes. However, in Lilypad's two-sided marketplace, only the client needs to be convinced that the result is correct. Local consensus is sufficient for the client, while other nodes like verifying nodes, requesters, and market makers may also need to be convinced.
Lilypad assumes non-malicious but utility-maximizing agents, including both clients and compute nodes. Utility-maximizing means that nodes will do anything to maximize their return, including returning false results if doing so pays. They are not malicious, however: they won't send back false results purely to cause harm, only when it increases their utility. The adversary model assumes that all nodes are utility-maximizing and does not consider malicious behavior.
Lilypad aims to achieve the best outcome, where nodes never have any incentive to cheat. If that's not possible, the goal is to minimize the incentive to cheat as a function of the protocol's parameters. Another option is to achieve the first two outcomes under simplifying assumptions, such as having a fraction of honest nodes in every mediation protocol. Lilypad also considers the possibility of not being able to achieve any of these outcomes and aims to understand the limitations.
Lilypad takes an adversary-first approach by designing the most powerful adversary possible and then optimizing against it. The team uses multi-agent reinforcement learning to train agents to act as utility-maximizing agents on behalf of clients and compute nodes. Reinforcement learning has shown impressive results in various domains, and Lilypad aims to leverage its capabilities to train agents that can maximize their utility in the network.
Lilypad plans to test various anti-cheating mechanisms once the reinforcement learning agents are implemented. These mechanisms include:
Consortium of mediators: A group of trusted mediators can check the results, reducing the likelihood of cheating.
Prediction markets and staking: Nodes can stake behind other nodes and lose their collateral if the node they stake behind is found to have cheated.
Taxes and jackpots: A tax is imposed on every job, and the taxes go into a jackpot that is awarded to nodes that find other nodes to have cheated.
Reputation: Nodes can build up a reputation based on the number of jobs they've done and the honesty of their results.
Storing inputs and outputs: Storing inputs and outputs for longer periods allows for easier verification but increases collateralization requirements.
Frequency of checking results: Autonomous agents can decide when to check results, balancing the need for verification with the cost of collateralization.
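The trade-off in the last two items can be written down as a back-of-the-envelope model. This is illustrative only; none of these symbols come from the Lilypad protocol spec:

```latex
% Illustrative model only, not part of the Lilypad protocol.
% p = probability a job's result is checked (mediation invoked)
% c = collateral slashed if a node is caught cheating
% s = cost a node saves by returning a fake result instead of computing
\mathbb{E}[\text{cheating payoff}] = (1 - p)\,s - p\,c
% Cheating is never profitable when this is non-positive, i.e. when
% c \ge \frac{(1 - p)\,s}{p}
% Raising the check frequency p or the collateral c drives the cheating
% incentive toward zero, at the price of more verification work or higher
% collateralization.
```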
In the future, Lilypad aims to take into account the preferences of nodes, such as computational requirements, time requirements, and scheduling requirements. The long-term goal is to have compute nodes and clients negotiate with each other automatically over these aspects of a job. Lilypad acknowledges that game-theoretic verifiable computing is a less studied form of verifiable computing compared to zero-knowledge proofs and trusted execution environments. However, the team is committed to exploring this approach and conducting rigorous research to find effective solutions.
Lilypad's game-theoretic approach to verifiable computing aims to prevent cheating in a decentralized compute network. By using reinforcement learning and testing various anti-cheating mechanisms, Lilypad strives to create a trustless environment where clients can have confidence in the results they receive from compute nodes. The team is actively working on implementing reinforcement learning agents and conducting simulations to evaluate the effectiveness of different strategies.
The Distributed Compute Problem
The setup is a trustless, permissionless, two-sided marketplace for compute, where clients can purchase compute services from compute nodes. Trustless means that by default, the protocol does not assume that any particular node behaves in a trustworthy manner and that each node should be considered as rationally self-interested (note that this excludes intentionally malicious behavior). Permissionless means that any node can join or leave the network at will.
Matches for compute jobs are made off-chain, with the resulting deals and results recorded on-chain. Both clients and compute nodes need to agree to matches before they become deals, and make deposits to the protocol to enable rewards and punishments. Results are verified using verification-via-replication, and clients can check the results of any job after it has been completed, but before it needs to pay. It does so by calling upon a mediation protocol. The mediation protocol is the ultimate source of truth, and the outcome of the mediation protocol determines how payouts to nodes are made.
The issue of preventing fake results in the presence of a Trusted Third Party (TTP) as a mediator is effectively a solved problem (for example, see the section on prior verification-via-replication protocols, though there is much more literature on this topic). Given the assumption that the mediation protocol is the source of truth, we can treat the mediation protocol as a TTP. Since the fake results problem is basically already solved in this case, the cheating problem reduces down to solving the collusion problem within the mediation protocol. (Note, however, that we will address both cheating and collusion; the framework described here exists to conceptually simplify the problem.)
This is a typical scenario of a Byzantine environment, and we can use well-established approaches to Byzantine Fault Tolerance when invoking mediation. However, most BFT algorithms and cryptographic methods rely on assumptions regarding some fraction of honest nodes. The problem is that rational, utility-maximizing agents may still collude, even in a mediation consortium, in order to converge on incorrect results. On the one hand, we could assume that some fraction of nodes are honest, as is often done. On the other hand, can we do better?
The task is to find mechanisms that incentivize all nodes to behave honestly.
All agents in the protocol are utility-maximizing. This will be elucidated in a subsequent section. Most of the literature has focused on the case where compute nodes are dishonest. However, the client can also behave dishonestly in some manner that maximizes their utility. For example, if the client has some level of control over the choice of mediator, and dishonest nodes have their collateral slashed, the client could collude with the mediator in order to deem a correct result incorrect and get a cut of the honest compute node's collateral.
"Good" solutions can take a number of forms:
Nodes never have an incentive to be dishonest.
Nodes have an incentive to be dishonest that goes to zero as a function of the parameters of the protocol.
(1) or (2), but under some simplifying assumptions, such as there being some fraction of honest nodes within every mediation protocol.
A good solution would achieve any of these goals. Alternatively, another valuable outcome of this research would be to discover under what assumptions these goals can or cannot be met, or if the goals are even possible to achieve at all.
There are a number of ways that these goals may be achieved. The approach will be to construct a digital twin of the protocol and test a number of different mechanisms in simulation. These mechanisms include variations on mediation protocols, collateralization, staking, taxes and jackpots, and others; see the Mechanisms to Explore section for details.
Useful links on topics that relate to the Lilypad Network
The promise and challenges of crypto + AI - Vitalik Buterin
Understanding the Intersection of Crypto and AI - Galaxy Digital Research
Nvidia CUDA Toolkit
Impact of Compute over Data - Juan Benet
CoD vision & goals - Juan Benet
Lessons for building Distributed Compute - Juan Benet & Molly Mackinlay
State of CoD 2023 - David Aronchick
Containers at the Edge - David Aronchick
Inspiration for projects and use cases that can be built on the Lilypad Network
Earn those double bounty dollars building out crossover POCs to Lilypad!
Lilypad is aiming to build out a full suite of decentralized AI projects, which means we want to collaborate with other projects in the ecosystem to make this vision a reality.
These include linking Lilypad compute with storage providers, data streams, databases & more.
This also means adding layers like privacy or extending usage into other frameworks like Unity or React Native projects. Some examples include:
Using Filecoin aggregators like Banyan or Tableland Basin with Lilypad
Extending the Chainsafe Unity SDK to add Lilypad jobs as an option
Building Lilypad into Mona or creating a module to use on Mona
Integrating with The Graph
Doing a POC with data from someone like WeatherXM
Adding privacy layers for data with Lit Protocol
ZK Computations using zkSync for verifiable credentials and identity proofs without revealing sensitive information
A decentralized oracle service that uses AI to fetch, verify, and deliver real-world data from tools like Chainlink to smart contracts
There are also opportunities to extend the functionality of Lilypad compute with items like:
CI/CD pipelines
Vector databases
Autonomous agents
Batching jobs with an external script
Chaining jobs together in a pipeline
Contributing a module is one of the coolest things you can do for Lilypad. There are so many options for compute and AI jobs that you can run with Lilypad, including adding new state-of-the-art OSS AI models:
Text to speech
Text to opera (have you heard this one!)
Finetuning models for SDXL and LLM
Image -> cartoon (or anything else)
Build a module that classifies images into predefined categories (e.g., cats, dogs)
A basic recommendation system for movies, books, or products based on user preferences
Personal finance tracker that uses AI to categorize and analyze spending patterns.
A module that provides personalized tutoring and learning assistance using AI
As a quick refresher, the current module making pipeline (see full "Build a job module" resource) looks like this:
Find or build a compute script (for example, a Python script on Hugging Face like SDXL)
Containerise the script with Docker (see this blog for more info)
Add a Lilypad Spec - this is simply a GitHub repo - you can grab a template here!
Share your module with us by submitting a pull request to the GitHub here!
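Step 2 (containerising the script) might look like this minimal sketch, assuming a Python script named run_inference.py with a requirements.txt (both hypothetical names):

```docker
# Minimal illustrative Dockerfile; the file names and base image are
# assumptions, not a Lilypad requirement.
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY run_inference.py .
# Lilypad modules receive their inputs via the JSON template in the module
# repo and write results to an output directory.
ENTRYPOINT ["python", "run_inference.py"]
```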
Lilypad is still evolving and is currently in Beta phase. This means there are A LOT of opportunities to help build out better DX for your fellow developers!
One of the main opportunities is in the module making pipeline (see #contribute-a-module for a quick breakdown of the developer journey). Some of the opportunities to improve this process include:
💚 Good first project (Lilypadawan)
💛 Moderate skill needed
❤️ Jedi mode!
💛 An application that can auto-containerise (or auto-dockerise) a given compute script
❤️ BONUS IDEA: You could even then make a module to run the auto-containerising script on Lilypad after it's built! 🧙
💚 You could go one further and also have it auto-generate the Lilypad spec as well!
💛💛 Ways to make it easy for folks without GPUs at home to build & test a module
💚 Implement developer tooling for debugging and monitoring compute jobs, providing real-time feedback and logs
💛 Tools that analyze and optimize the performance of compute scripts, suggesting improvements and best practices
💚 Implement static analysis tools to check for code quality, style, and potential issues in compute scripts
❤️ Create tools to automatically audit compute scripts and modules for security vulnerabilities and compliance with best practices
Build out other end-user projects across a broad range of verticals. We are very interested to hear the type of projects people are looking to build on the Lilypad Network and have released a Javascript wrapper to help folks build easily.
Here are a few basic ideas of end user projects:
Education: AI-powered educational tools for personalized learning experiences
Music: Developing AI-driven music composition tools, or applications for real-time music analysis and recommendation
Finance: Leverage AI for risk assessment and trading strategies
Retail: Building recommendation engines using AI for e-commerce platforms
Environment: Developing solutions for environmental monitoring and pollution tracking
Security: Tools using AI for auditing and security analysis
Projects across DeFi, DeSci, Gaming & Metaverse, NFTs, IoT and more, as well as #crossover-pocs-integrations-and-plugins!
See the Past Hackathon Winners page!
Pssst... We'd love to share your projects here too! Feel free to submit a PR!
We hope this page inspires you to build amazing things with us! If you have any ideas, suggestions, or any questions, we'd love to hear from you! Join our community on Discord to share your thoughts, ideas, ask questions, and get the support you need from the Lilypad team. 🪷
Join our Discord channel here!
🌱Your feedback helps us improve and grow together!
Frequently Asked Questions for Lilypad Network
Lilypad is developing a serverless, distributed compute network that enables internet-scale data processing for AI, ML & other arbitrary computation from blockchains, while unleashing idle processing power & unlocking a new marketplace for compute.
Lilypad provides decentralized AI computational services. By extending unrestricted, global access to computational power, Lilypad strategically collaborates with decentralized infrastructure networks, such as Filecoin, to formulate a transparent, efficient, and accessible computational ecosystem.
Performing off-chain decentralized compute over data with on-chain guarantees, and calling this functionality directly from a smart contract, the CLI, and an easy-to-use abstraction layer, opens the door to a multitude of possible applications.
Lilypad will release a Whitepaper by November 2024.
Lilypad Network is currently in Testnet. The team is currently ironing out some remaining known issues and working on a fair model for our Incentivized Testnet.
Find the full Lilypad Network roadmap on our website!
Previously, Lilypad required deterministic jobs on the network and used optimistic reproducibility to randomly re-run jobs to ensure trust. However, this method has been deprecated due to:
the limitation the determinism requirement placed on which jobs could run on the network
the difficulty of realistically verifying that a "deterministic" job is in fact deterministic
Yes, Lilypad closed our seed round of funding in March 2024.
The Lilypad Incentivized testnet launched in mid June 2024. Stay tuned on the Lilypad Discord for more info!
Lilybit_ rewards will be awarded to nodes for time on the network (up to a 4x multiplier) and for compute power brought to the network. Rewards will be redeemable for the Lilypad ERC20 Utility Token at Mainnet Launch, with between 5% and 10% of the total token supply (depending on IncentiveNet demand and tokenomics finalization) allocated to this phase of the Lilypad Network.
Resource Providers (RP) can track their Lilybit_ earnings with the RP Leaderboard.
Phase 1 of the Incentivized Testnet is focused on rewarding nodes on the network, referred to as Resource Providers. The earlier a provider joins the network, the more Lilybits_ will be available.
Phases 2 and onward will provide rewards for Lilypad modules created as well as developer tools/apps (in addition to rewarding nodes).
You can check your rewards by pasting your node's wallet address into the following interfaces:
On the Lilypad network, the blockchain is used for:
payment rails
storing the deals transparently (on-chain guarantees about the compute)
storing any disputes & results
Lilypad Tokens are used to transact on the Lilypad network. They are used as payment by those who want to run jobs (paid to the resource providers who run them), and as collateral by resource providers.
You need ETH tokens to pay the gas fees for the smart contracts that facilitate transactions, and for the records of transactions and disputes posted to the Ethereum blockchain.
The required steps before running a Lilypad node include adding the node, adding the Lilypad network information, obtaining tokens and installing the required software.
Refer to the Running a Node documentation and select your preferred platform (Linux or Docker) for a detailed guide on the prerequisites.
The minimum hardware requirements to run a Lilypad node are:
Processor: Quad-core x64 Processor
RAM: 16GB (see additional details below)
Internet: 250Mbps Download Speed
GPU: NVIDIA GPU with a minimum of 8GB VRAM (see additional details below)
For more information, please visit Hardware Requirements.
Resource providers are expected to have the latest Lilypad versions installed on their machines.
The instructions can be found in our installation documentation (select the Resource Provider tab).
To check if the RP is running, use the following command: sudo systemctl status lilypad-resource-provider
This will give a live output from the Lilypad node. The logs will show the node running and accepting jobs on the network. To get more information from your node, run the following: sudo journalctl -u lilypad-resource-provider.service -f
For more information, select your preferred platform below and check out the docs:
To run an ML model like Stable Diffusion on Lilypad, the model must be set up as a Lilypad Module (see instructions below). Once set up, modules are run with the Lilypad CLI.
To run an ML job on Lilypad (Stable Diffusion, Stable Diffusion Video, and more) using the Lilypad CLI, follow the CLI instructions to get started (select the CLI Users tab).
To build an application with Lilypad compute and modules on the backend, check out this guide.
A Lilypad module is a Git repository that can be used to perform various tasks using predefined templates and inputs. This "build a job module" guide will walk you through the process of creating a Lilypad module, including defining a JSON template, handling inputs, ensuring determinism, and other best practices.
Lilypad is an open network that allows anyone to contribute GPU computing capacity. Find instructions for running a node by selecting your preferred platform below:
Linux (latest Ubuntu LTS recommended)
Nvidia GPU
Nvidia drivers
Docker
Nvidia docker drivers
For more information on the requirements to run a Lilypad node, please refer to the hardware requirements documentation.
Tokenomics and research papers are currently in development and are expected by Q4 2024.
Although the launch date is not finalized, the launch of Lilypad Mainnet and the TGE for LP tokens is scheduled for Q2 2025.
Can't find the answer you were looking for? Join the Lilypad Discord server for live support! 🪷
Lilypad Network Architecture
This page is a dynamic work in progress! We're currently working on some better diagrams!
See the Bacalhau documentation for more information on how Bacalhau operates
The architecture of Lilypad is inspired by the research paper titled "Mechanisms for Outsourcing Computation via a Decentralized Market." The paper introduces MODiCuM, a decentralized system that allows for computational outsourcing in an open market. Just like MODiCuM, Lilypad aims to create an open market of computational resources by introducing various decentralized services like solver, resource provider, job creator, mediator, and directory services. MODiCuM's unique approach to deterring misbehavior in a decentralized environment through dedicated mediators and enforceable fines has influenced Lilypad's own design, particularly in the areas of dispute resolution and system integrity.
For an in-depth understanding, you can read the paper here.
Abstract of the paper "Mechanisms for Outsourcing Computation via a Decentralized Market": As the number of personal computing and IoT devices grows rapidly, so does the amount of computational power that is available at the edge. Since many of these devices are often idle, there is a vast amount of computational power that is currently untapped, and which could be used for outsourcing computation. Existing solutions for harnessing this power, such as volunteer computing (e.g., BOINC), are centralized platforms in which a single organization or company can control participation and pricing. By contrast, an open market of computational resources, where resource owners and resource users trade directly with each other, could lead to greater participation and more competitive pricing. To provide an open market, we introduce MODiCuM, a decentralized system for outsourcing computation. MODiCuM deters participants from misbehaving, which is a key problem in decentralized systems, by resolving disputes via dedicated mediators and by imposing enforceable fines. However, unlike other decentralized outsourcing solutions, MODiCuM minimizes computational overhead since it does not require global trust in mediation results. We provide analytical results proving that MODiCuM can deter misbehavior, and we evaluate the overhead of MODiCuM using experimental results based on an implementation of our platform.
Configure a crypto wallet to receive testnet tokens used to interact with the Lilypad Network
Both Resource Providers (GPU compute nodes) and those looking to run jobs on the Lilypad network need to set up a MetaMask account in order to run jobs on Lilypad. The public key of your wallet address is how you are identified on the network, and is how you can look up the transactions you make on the Arbitrum Etherscan blockchain explorer. You need an account for running jobs, and a separate account for each GPU you want to set up on the network.
The wallet you use for your account must have both ETH (to run smart contracts on Ethereum) and Lilypad (LP) tokens in order to pay for jobs (or receive funds for jobs) on the network.
End users of Lilypad can decide which crypto wallet they would like to use. In this guide, we advise using a MetaMask crypto wallet.
Install MetaMask Extension
The Lilypad Testnet (IncentiveNet) is currently running on the Arbitrum L2 network built on Ethereum.
In order to change to the Arbitrum network in the wallet, open MetaMask and click the network button in the top left of the menu bar:
Then, select "Add network":
Next, select "Add a network manually":
Input all the required Arbitrum Sepolia Testnet network info, and then "Save":
Network name: Arbitrum Sepolia
New RPC URL: https://sepolia-rollup.arbitrum.io/rpc
Chain ID: 421614
Currency symbol: ETH
Block explorer URL: (leave blank)
Network info is referenced directly from the Arbitrum Sepolia documentation.
The wallet is now set up and will display an ETH (Arbitrum Sepolia) token balance. In order to also display the LP token balance, the LP token will need to be imported.
Select "Import tokens" at the bottom of the wallet:
Select "Custom token" and add the Lilypad token contract address and token symbol. Then "Save".
Token contract address: 0x0352485f8a3cB6d305875FaC0C40ef01e0C06535
Token symbol: LP
You should now see both ETH and LP listed in the wallet (initial ETH and LP balances will be 0).
Now you're ready to fund the wallet with testnet LP and ETH tokens!
Running the most important Hello World on Lilypad!
Before you run a Lilypad job, make sure you have the Lilypad CLI installed and have set a WEB3_PRIVATE_KEY env variable in your environment.
Your WEB3_PRIVATE_KEY can be retrieved from the MetaMask account details menu. For more info, check out the official guide from MetaMask on how to view your private key. Be sure to keep your private key safe and never share it or store it in unsecured places to prevent unauthorized access to your funds.
Run the command:
Wait for the compute to take place and for the results to be published:
View your results:
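Putting the steps together, a session might look like the sketch below. The cowsay module is the sample module from the Lilypad docs; its version tag and the results location are assumptions that may have changed:

```bash
# "cowsay:v0.0.4" is the sample module from the Lilypad docs; its tag may
# have changed. The guard lets this sketch run even where the CLI is absent.
if command -v lilypad >/dev/null 2>&1; then
  lilypad run cowsay:v0.0.4 -i Message="Hello, Lilypad!"
else
  echo "install the Lilypad CLI first"
fi
# Once the job completes, the CLI prints where the results were saved;
# view them with something like:  cat <results-dir>/stdout
```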
Get Testnet Lilypad Tokens (LP) and Arbitrum Sepolia Testnet ETH
tl;dr: To obtain funds, first ensure the wallet is connected to the Arbitrum Sepolia network. Then, collect LP and ETH tokens from these faucets:
Arbitrum Sepolia ETH (3rd party faucet list)
Find out why you need tokens in the FAQs
You must be a member of the Lilypad Discord to claim tokens
Follow these steps to successfully claim your Testnet LP tokens:
Navigate to the Lilypad Testnet faucet.
Authenticate with Discord.
Copy your MetaMask wallet address into the input field.
Click "Request".
LP Testnet tokens will be sent to this wallet address.
Get Arbitrum Sepolia ETH from this list of third party faucets. Each faucet is designed differently, so follow the instructions provided.
If you need more tokens and already have Sepolia ETH, use the official Arbitrum bridge to transfer the tokens over to Arbitrum Sepolia.
With a balance of both LP and ETH, you're ready to run jobs with the Lilypad CLI!
Lilypad is developing a serverless, distributed compute network that enables internet-scale data processing, AI, ML & other arbitrary computation, while unleashing idle processing power & unlocking a new marketplace for compute.
You can use Lilypad to run AI workload models including Stable Diffusion and Stable Diffusion Video, or you can add your own module to run on the Lilypad Network. Using Lilypad's distributed compute nodes, you can build and run your own containerized workloads that require high-performance computing.
Perform off-chain decentralized compute over data, with on-chain guarantees. Call this functionality directly from a smart contract, CLI, and an easy to use abstraction layer.
The network is actively laying groundwork for multi-chain integration and the deployment of an incentivized testnet.
Lilypad has evolved from earlier versions (v0, v1 & v2), where the network served as a proof of concept for verifiable, decentralized compute directly from smart contracts. These earlier iterations established the groundwork for what is now a robust, scalable platform with expanded features and multichain support.
Bacalhau has been integral to Lilypad since its early versions (v0 and v1), serving as the backbone for verifiable off-chain compute. In these early iterations, Bacalhau was used for Proof of Concept projects, helping users execute decentralized compute jobs from smart contracts.
Lilypad aims to mitigate the challenges predominantly associated with the accessibility of high-performance computational hardware. At present, numerous barriers impede developers and organizations from smoothly integrating projects that require high-performance computing, such as AI technologies, into their applications.
Unlike conventional centralized systems, where access to powerful compute hardware is restricted and costly, Lilypad endeavors to democratize this access. Through its verifiable, trustless, and decentralized computational network, Lilypad extends unrestricted, global access to computational power. By leveraging decentralized infrastructure networks such as Filecoin, Lilypad is strategically positioned to enhance the efficiency, transparency, and accessibility of high-performance computing hardware.
Performing off-chain decentralized compute over data with on-chain guarantees, and calling this functionality directly from a smart contract, CLI, API, and an easy-to-use abstraction layer, opens the door to a multitude of possible applications, including:
Inference AI jobs
ML training jobs
Invoking & supporting generic ZK computations
Cross-chain interoperability complement to bridge protocols
Utilising inbuilt storage on IPFS
Federated Learning consensus (with Bacalhau insulated jobs)
IOT & Sensor Data integrations
Providing a platform for Digital twins
Supply chain tracking & analysis
ETL & data preparation jobs
Some of the key features of Lilypad include:
Verifiable Serverless Decentralized Compute Network: Lilypad is a decentralized compute network that aims to provide global, permissionless access to compute power. The Network orchestrates off-chain compute (a global GPU marketplace) and uses on-chain verification (Arbitrum L2 on Ethereum) to provide guarantees of compute success.
Open Compute Network: Lilypad is an open compute network allowing users to access and run AI models/other programs in a serverless manner. Module creators and general users can access a curated set of modules or can easily create their own Lilypad module to run an AI model/other program on the network.
Decentralization of Mediators: The team also aims to decentralize the mediators in the network. This means that the decision-making process and governance of the network will be distributed among multiple participants, ensuring a more decentralized and resilient system.
Bacalhau is a peer to peer computation network enabling compute over data jobs like GPU-enabled AI, ML, analytics, data engineering, data science, de-sci and more. With the open-source Bacalhau Project, you can streamline your existing workflows without rewriting by running Docker containers and WebAssembly (WASM) images as tasks. This architecture is also referred to as Compute Over Data (or CoD).
Instructions for setting up a Resource Provider (node) on the public Lilypad testnet using Docker, including obtaining necessary funds and installing required software.
Lilypad RPs currently only support Linux installs. Running an RP on Windows is currently experimental.
Linux (Ubuntu 22.04 LTS)
Nvidia GPU
(Ubuntu install)
Nvidia Docker drivers
For a more in-depth look at the requirements to run a Lilypad node, please refer to the documentation.
The testnet has a base currency of ETH, as well as a utility token called LP. Both are used for running nodes. To add a node to the testnet, follow these steps:
We recommend using MetaMask with custom settings to make things easier. Once you have it installed and set up, here are the settings you need to use:
Network name: Arbitrum Sepolia
New RPC URL:
Chain ID: 421614
Currency symbol: ETH
Block explorer URL: (leave blank)
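The settings above correspond to the following network parameters, shown here as a JSON sketch (the RPC URL is left as a placeholder, since any public Arbitrum Sepolia endpoint will work):

```json
{
  "networkName": "Arbitrum Sepolia",
  "rpcUrl": "<your Arbitrum Sepolia RPC URL>",
  "chainId": 421614,
  "currencySymbol": "ETH",
  "blockExplorerUrl": ""
}
```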
The faucet will give you both ETH (to pay for gas) and LP (to stake and pay for jobs).
The Lilypad Network uses the Arbitrum Sepolia Testnet to settle compute transactions. When a transaction is ready to be saved on-chain, Lilypad cycles through a list of public Arbitrum Sepolia RPC endpoints using the endpoint that settles first to save the compute transaction.
You have two options to start the Lilypad setup: using Docker Compose or directly pulling the image. Both methods will run the containers in the background, allowing you to continue using your terminal while the setup operates.
Before starting, export your private key from MetaMask. Follow the official MetaMask guide for instructions on safely exporting your private key.
Use curl to download the docker-compose.yml file from the Lilypad GitHub repository.
If any containers named resource-provider, ipfs, or watchtower are already in use, they will need to be stopped before running this setup to avoid naming conflicts.
You can check if these containers are running with:
If they are running, stop them with:
If conflicts persist when running with the docker-compose file, remove the containers:
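The check/stop/remove steps above can be sketched with standard Docker commands (a minimal sketch; the container names are those used by the Lilypad compose file):

```shell
# Check whether the containers exist (running or stopped)
docker ps -a --filter "name=resource-provider" --filter "name=ipfs" --filter "name=watchtower"

# Stop them if they are running
docker stop resource-provider ipfs watchtower

# Remove them if name conflicts persist
docker rm resource-provider ipfs watchtower
```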
Start the Lilypad containers using Docker Compose:
To include a custom RPC URL:
Use the following command to check the status of the Lilypad Resource provider.
Use the following command to view the containers running after starting Docker Compose.
To view your Lilybit_ rewards, visit one of the following dashboards and paste your node's public address into the input:
Here are some common troubleshooting techniques when it comes to your resource provider using Docker:
Checking Docker Runtime
To verify your Docker runtime configuration: sudo docker info | grep Runtimes
You should see the NVIDIA runtime listed. If you only see: Runtimes: io.containerd.runc.v2 runc
you will need to configure the NVIDIA runtime.
Configuring NVIDIA Runtime
If the NVIDIA runtime is not showing up or you're experiencing issues, try the following:
1. Configure the NVIDIA Container Toolkit runtime: sudo nvidia-ctk runtime configure --runtime=docker
2. Restart the Docker service: sudo systemctl restart docker
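After running nvidia-ctk and restarting Docker, /etc/docker/daemon.json should contain an entry similar to the following (a sketch of the typical result; exact paths may differ on your system):

```json
{
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}
```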
Overview of Docker setup
For a comprehensive overview of your Docker setup, use: docker info. This command provides detailed information about your Docker daemon configuration.
Instructions for setting up a Resource Provider (node) using Linux or Docker
Setup a personal Arbitrum Sepolia RPC for a Lilypad Resource Provider.
This guide walks through the steps of setting up a personal RPC endpoint for Arbitrum Sepolia using .
This guide demonstrates how to use Infura's RPC, but this can be substituted for any Arbitrum Sepolia RPC that supports WebSockets.
on Infura and choose your plan based on how many APIs you need.
Select the “free” tier as the compute units provided should be sufficient to run a Lilypad RP.
In the Infura dashboard, a new API key is usually generated automatically. If not, select "Create New API Key". Navigate to "Configure" to set up the API key.
Scroll down the list to the Arbitrum network and ensure the Sepolia testnet box is checked, then save changes.
In the API key dashboard, select "Active Endpoints" and navigate to "WebSockets".
Scroll down the page to find the Arbitrum Sepolia URL. The RPC endpoint for Arbitrum Sepolia is ready to be used with the Lilypad Resource Provider:
Lilypad RPs can use a personal RPC endpoint with a few simple steps. Only WebSocket (WSS) connections are supported.
Stop the existing Lilypad Resource Provider (RP) before setting up the new RPC.
Locate the Lilypad RP Docker container using:
Stop the container using the PID:
Use this command to start the lilypad-resource-provider.service with the new RPC:
Check the status of the container:
Stop the existing Lilypad RP (if the node is not running, disregard this first step):
Update lilypad-resource-provider.service with the new RPC:
Add the following line to the [Service] section:
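As a sketch, the [Service] section would gain a line like the following (the WEB3_RPC_URL variable name and the endpoint URL are assumptions; substitute your own WSS endpoint):

```ini
[Service]
Environment="WEB3_RPC_URL=wss://arbitrum-sepolia.example.com/ws/v3/<api-key>"
```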
Reboot the node:
Personal RPC endpoint for a Lilypad Resource Provider
The Lilypad Network uses the Arbitrum Sepolia Testnet to settle compute transactions.
The Lilypad Network is not responsible for issues caused by misconfigured RPC endpoints. Please use at your own risk!
This personal RPC endpoint allows Resource Providers (RP) to avoid reliability issues with the RPC endpoints used by Lilypad ensuring rewards can be earned and jobs can be run consistently. RPs running a personal RPC endpoint contribute to the fault tolerance and decentralization of the Lilypad Network! Read more in the Alchemy Arbitrum .
This page overviews the hardware requirements to operate a Lilypad Network node. It's important to note that these requirements continuously evolve as the network grows. If you have questions or suggestions, please join our or open a pull request on the .
Processor: Quad-core x64 Processor or better
RAM: 32GB (see additional details below)
Internet: 250Mbps download, 100Mbps upload (minimum)
GPU: NVIDIA GPU with a minimum of 8GB VRAM (see additional details below)
Each model operating on Lilypad has specific VRAM (Video Random Access Memory) requirements directly related to the model's complexity and computational demands. For running a Lilypad Resource Provider (RP) with multiple GPUs, a guide using Proxmox can be found .
Base Requirement: The simplest model on Lilypad requires a GPU with at least 8GB of VRAM. This is the minimum required to participate in computational tasks on the Lilypad network.
The capability of your GPU to manage multiple or more complex Lilypad jobs is enhanced by the amount of VRAM available:
Standard Models (e.g. SDXL): Require 8GB of VRAM.
Advanced Models (e.g. SDV): Require 14GB of VRAM.
GPUs with 8GB of VRAM are limited to running models like SDXL, which fit within this specification. Larger GPUs with higher VRAM are required for more demanding models like SDV, which needs at least 14GB of VRAM.
Lilypad uses the Resource Provider's GPU to load models, initially requiring the temporary storage of the data in the system's RAM. In a production environment with RP Nodes, it is important to have enough RAM to support the model and the underlying system's operational processes.
Base Requirement: A minimum of 16GB of RAM is required, with at least 8GB dedicated to the model and another 8GB allocated for system operations and Lilypad processes.
Wallets for each GPU: You need a separate account for each GPU you want to set up on the network. The wallet you use for each account must have both ETH (to run smart contracts on Ethereum) and Lilypad (LP) tokens (in order to receive funds for jobs) on the network.
Larger Models: Jobs involving more substantial models will demand additional RAM. It's important to note that adequate VRAM alone is insufficient; your system must also have enough RAM to load the model into the GPU successfully. Without sufficient system RAM, the model cannot be loaded into the GPU, regardless of available VRAM.
Information about the Lilybit_ rewards program
GPUs contributing to the Lilypad Network can earn Lilybit_ credits daily, which will be redeemable for Lilypad (LP) mainnet tokens at the Token Generation Event (TGE), expected in Q1 of 2025. To provide compute power and participate in Lilypad's decentralized compute network, Resource Providers (RPs) can accumulate these credits based on their contributions to the network.
Read the in depth breakdown of the Lilybit_ rewards system .
Nodes must be online for a minimum of 4 hours a day continuously (verified with POW efforts) to earn that day’s Lilybit_ rewards.
A 4x multiplier on this base daily rate is available for nodes that are online and taking jobs over the full day (calculated as 1.3195^(number of 4-hour windows the GPU is online)).
The power of the compute contributed also earns a point multiplier: a node's hashrate (determined by the required PoW algorithm) sets the multiplier applied to its power, with 10 points awarded for every MHash/sec provided to the network.
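The uptime multiplier formula can be checked with a quick shell calculation (a sketch; the 1.3195 base is taken from the formula above):

```shell
# Multiplier after n continuous 4-hour windows online: 1.3195^n
windows=5
multiplier=$(awk -v n="$windows" 'BEGIN { printf "%.2f", 1.3195 ^ n }')
echo "$multiplier"   # 1.3195^5 is approximately 4.00
```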
To incentivize long-term participation in the network, a node is slashed 10% of its total points (earned by providing compute power) for each day it isn't online for at least 4 continuous hours.
A grace period for RP downtime is included in the slashing mechanism (as of Oct 1, 2024). RPs earn 2 days of "grace period" after every 30 days of continuous service. These 2 days are applied to the next 2 down days recorded by the RP, allowing the RP to avoid slashing on those days. Grace-period days never accumulate beyond 2; once used, the 30-day count to earn them restarts. Find more information on slashing in the .
The provides an up-to-date view of rewards earned by each wallet ID and is updated once a day to reflect additional credits earned. RPs can also track the status of their compute contributions with the node status dashboard.
Track the status of a Resource Provider with the .
Setup a personal Arbitrum Sepolia RPC for a Lilypad Resource Provider.
This guide walks through the steps of setting up a personal RPC endpoint for Arbitrum Sepolia using .
This guide demonstrates how to use Alchemy's RPC, but this can be substituted for any Arbitrum Sepolia RPC that supports WebSockets.
and login to the Alchemy dashboard.
Select the “free” tier as the compute units provided should be sufficient to run a Lilypad RP. The free service provides 300 million compute units per month.
Select “skip bonus” or input a credit card with billing info (the card will not be charged unless the compute credits in the free tier are used).
In the “Overview” section of the Alchemy dashboard, navigate to “My app” and select “Endpoints”. If an app was not created upon login, create a new one by selecting "Create new app".
By selecting “Endpoints”, the “Networks” tab will open providing an option to configure the Arbitrum API.
Select “Sepolia”
Select “Websockets”
The RPC endpoint for Arbitrum Sepolia is ready to be used with the Lilypad Resource Provider:
Metrics for the RPC can be viewed in the “Metrics” tab.
Lilypad RPs can use a personal RPC endpoint with a few simple steps. Only WebSocket (WSS) connections are supported.
Stop the existing Lilypad Resource Provider (RP) before setting up the new RPC.
Locate the Lilypad RP Docker container using:
Stop the container using the PID:
Use this command to start the lilypad-resource-provider.service with the new RPC:
Check the status of the container:
Stop the existing Lilypad RP (if the node is not running, disregard this first step):
Update lilypad-resource-provider.service with the new RPC:
Add the following line to the [Service] section:
Reboot the node:
How to build your own compute job for Lilypad
A Lilypad module is a Git repository that allows you to perform various tasks using predefined templates and inputs. This guide will walk you through creating a Lilypad module, including defining a JSON template, handling inputs, and following best practices.
If you're new to Docker, consider exploring this guide on creating, building, and running a Docker image for a simple Hello World style application.
Start by creating a Git repository for your Lilypad module. The module's versions will be represented as Git tags. Below is the basic structure of a Lilypad Module.
Handle all dependencies
Implement input/output through environment variables
Write outputs to /outputs
directory
Format in template:
Usage in CLI:
lilypad run repo:tag -i variable=value
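As a sketch, an input variable declared this way appears in the module's JSON template roughly as follows (field names and values here are illustrative, loosely modeled on the public cowsay example, not a definitive schema):

```json
{
  "machine": { "gpu": 0, "cpu": 1000, "ram": 100 },
  "job": {
    "APIVersion": "V1beta1",
    "Spec": {
      "Deal": { "Concurrency": 1 },
      "Docker": {
        "Entrypoint": ["cowsay", {{ js .variable }}]
      },
      "Engine": "Docker",
      "Outputs": [{ "Name": "outputs", "Path": "/outputs" }]
    }
  }
}
```

Passing `-i variable=value` on the CLI substitutes `value` into the `{{ js .variable }}` placeholder when the job spec is rendered.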
During development, you will need to use the Git hash to test your module. This allows you to verify that your module functions correctly and produces the expected results.
Below is a working lilypad module run cmd for reference. (you can use this to run a lilypad job within the lilypad CLI):
Here are some example Lilypad modules for reference:
Deprecated examples:
These examples can help you understand how to structure your Lilypad modules and follow best practices.
In this guide, we've covered the essential steps to create a Lilypad module, including defining a JSON template, handling inputs, and testing your module. By following these best practices, you can build reliable and reusable modules for Lilypad.
For more information and additional examples, refer to the official Lilypad documentation and the Cowsay example module.
Instructions for setting up a Resource Provider (node) on the public Lilypad testnet using Linux, including obtaining necessary funds, installing required software, and ensuring security measures.
For a more in-depth look at the requirements to run a Lilypad node, please refer to the documentation.
Linux (Ubuntu 22.04 LTS)
Nvidia GPU
(Ubuntu install)
Nvidia Docker drivers
Lilypad RPs currently only support Linux installs. Running an RP on Windows is currently experimental. The resource provider version of Lilypad is not supported on Darwin/macOS.
The testnet has a base currency of ETH, as well as a utility token called LP. Both are used for running nodes. To add a node to the testnet, follow these steps:
We recommend using MetaMask with custom settings to make things easier. Once you have it installed and set up, here are the settings you need to use:
Network name: Arbitrum Sepolia
New RPC URL:
Chain ID: 421614
Currency symbol: ETH
Block explorer URL: (leave blank)
The faucet will give you both ETH (to pay for gas) and LP (to stake and pay for jobs).
To set up your environment for using Lilypad with GPU support, you need to install several key components. This guide will walk you through installing Docker, the Nvidia Container Toolkit, Bacalhau, and Lilypad. You'll also configure systemd to manage these services efficiently.
Docker is a platform that allows you to automate the deployment of applications inside lightweight, portable containers.
To install Docker Engine, follow the steps specific to your operating system from the official Docker documentation:
Configure the container runtime by using the nvidia-ctk
command:
The nvidia-ctk
command modifies the /etc/docker/daemon.json
file on the host. The file is updated so that Docker can use the NVIDIA Container Runtime.
Restart the Docker daemon:
Run a local ipfs node on the Lilypad RP.
If permissions errors are encountered when trying to run ipfs init, ensure the ipfs node is not set up under the root user and is accessible by Bacalhau.
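A minimal sketch of bringing up a local IPFS (Kubo) node as a non-root user (the download URL and version are assumptions; check the Kubo releases page for the current one):

```shell
# Download and install Kubo (the Go implementation of IPFS) -- version is illustrative
wget https://dist.ipfs.tech/kubo/v0.18.0/kubo_v0.18.0_linux-amd64.tar.gz
tar -xvzf kubo_v0.18.0_linux-amd64.tar.gz
sudo ./kubo/install.sh

# Initialize and start the node as the same (non-root) user that runs Bacalhau
ipfs init
ipfs daemon
```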
Bacalhau is a peer-to-peer network of nodes that enables decentralized communication between computers. The network consists of two types of nodes, which can communicate with each other.
Bacalhau versions newer than v1.3.2 are not currently supported but will be in the future. Please pin to Bacalhau v1.3.2 for now.
To install Bacalhau, run the following in a new terminal window (run each command one by one):
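The pinned install can be sketched as follows (the release URL follows Bacalhau's GitHub release naming for Linux x86-64; verify it against the official releases page before use):

```shell
# Download and unpack Bacalhau v1.3.2 (URL pattern is an assumption)
cd /tmp
wget https://github.com/bacalhau-project/bacalhau/releases/download/v1.3.2/bacalhau_v1.3.2_linux_amd64.tar.gz
tar -xvzf bacalhau_v1.3.2_linux_amd64.tar.gz

# Move the binary onto the PATH
sudo mv bacalhau /usr/local/bin/bacalhau
```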
To check your Bacalhau version use:
The expected output is:
The installation process for the Lilypad CLI involves several automated steps to configure it for your specific system. Initially, the setup script identifies your computer's architecture and operating system to ensure compatibility. It will then download the latest production build of the Lilypad CLI directly from the official GitHub repository using curl and wget.
Once the CLI tool is downloaded, the script sets the necessary permissions to make the executable file runnable. It then moves the executable to a standard location in your system's path to allow it to be run from any terminal window.
Via official released binaries
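The steps the script performs can be reproduced manually, roughly as follows (the download URL is a placeholder; use the release asset matching your OS and architecture from the official repository):

```shell
# Download the lilypad binary for your platform (placeholder URL)
curl -L -o lilypad "https://github.com/Lilypad-Tech/lilypad/releases/latest/download/lilypad-linux-amd64"

# Make it executable and move it onto the PATH
chmod +x lilypad
sudo mv lilypad /usr/local/bin/lilypad
```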
Ensure the RP is running the latest Lilypad version. Two weeks after each version release, the previous Lilypad version is deprecated.
To verify the installation, run lilypad in the terminal to display the version and a list of available commands, indicating that the Lilypad CLI is ready to use.
You will need to create an environment directory for your node and add an environment file that contains your node's private key.
To do this, run the following in your terminal:
Next, add your node's private key into /app/lilypad/resource-provider-gpu.env:
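The file ends up containing a single line of this form (WEB3_PRIVATE_KEY is the variable name used in common Lilypad setup guides; treat it as an assumption, and keep this file private):

```
WEB3_PRIVATE_KEY=<your node's private key>
```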
This is the key where you will get paid in LP tokens for jobs run on the network.
You must not reuse your compute node key as a client, even for testing: this will result in failed jobs and will negatively impact your compute node since the wallet address is how nodes are identified on the network.
The Lilypad Network uses the Arbitrum Sepolia Testnet to settle compute transactions. When a transaction is ready to be saved on-chain, Lilypad cycles through a list of public Arbitrum Sepolia RPC endpoints using the endpoint that settles first to save the compute transaction.
systemd is a system and service manager for Linux operating systems. systemd operates as a central point of control for various aspects of system management, offering features like parallelization of service startup, dependency-based service management, process supervision, and more.
To create the Bacalhau service, open /etc/systemd/system/bacalhau.service in your preferred editor:
Open /etc/systemd/system/lilypad-resource-provider.service in your preferred editor.
Hint: sudo vim /etc/systemd/system/lilypad-resource-provider.service
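A sketch of what the resource provider unit file might look like (directives, paths, and the environment-file name are assumptions mirroring the directories used earlier in this guide):

```ini
[Unit]
Description=Lilypad Resource Provider
After=network-online.target

[Service]
Environment="HOME=/app/lilypad"
EnvironmentFile=/app/lilypad/resource-provider-gpu.env
ExecStart=/usr/local/bin/lilypad resource-provider
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
```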
Reload systemd's units/daemons (you will need to do this again whenever you change the systemd unit files created above):
Start systemd units:
Start bacalhau and lilypad-resource-provider in a different terminal window than the ipfs node.
Once bacalhau is started, wait 5-20 seconds before starting lilypad-resource-provider. Bacalhau sometimes takes time to start and lilypad-resource-provider will not start without bacalhau.
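The startup sequence above can be sketched as follows (unit names assume the files created in the previous steps):

```shell
# Pick up the new unit files and enable them at boot
sudo systemctl daemon-reload
sudo systemctl enable bacalhau lilypad-resource-provider

# Start bacalhau first; give it time to come up before starting the RP
sudo systemctl start bacalhau
sleep 20
sudo systemctl start lilypad-resource-provider
```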
Now that your services have been installed and enabled, check the status of Bacalhau to ensure it is running correctly on your node:
To check if the node is running use the following command:
This will give a live output from the Lilypad node. The logs will show the node running and accepting jobs on the network.
Run the following command to get more status info from your node:
To restart your resource provider run:
If a discussion is not opened, our team will not be able to help troubleshoot the problem.
Description (including Lilypad version running on your node)
Hardware Info (including Linux/Windows version)
Related blockchain/ETH addresses of transaction hashes
Output Logs - sudo systemctl status lilypad-resource-provider
Related links/urls
Screenshots
Please note that using sudo rm -rf is very powerful and can be dangerous if not used carefully.
If the Lilypad RP is running, stop the system (if the node is not running, disregard this first step):
Remove the Lilypad executable by running:
Start your resource provider by running:
To disconnect your node from Lilypad you will need to do a few things to completely offboard.
Using sudo rm -rf is very powerful and can be dangerous if not used carefully. It is highly recommended to navigate to the parent directory and remove files from there to avoid accidental deletion of important files.
First, stop the node:
Next, you must remove the .service files related to Lilypad and Bacalhau. These files are typically stored in /etc/systemd/system/. To remove them, run the following command:
Next, notify the systemd manager to reload its configuration by running:
Then, remove the environment file for the Lilypad resource provider. This file is usually stored in /app/lilypad/. To remove it, run:
Finally, if you followed the installation instructions from the Lilypad documentation and moved the executable to /usr/local/bin/lilypad, it can be removed from there. If the executable is stored in a different directory on your machine, navigate to that directory and remove it from there. To remove the executable, run:
To remove Bacalhau, run:
To view your Lilybit_ rewards, visit one of the following dashboards and paste your node's public address into the input:
If you want to allowlist only certain modules (e.g. Stable Diffusion modules) to control exactly what code runs on specific nodes (which can be audited to ensure it is secure and has no negative impact on the nodes), set an environment variable OFFER_MODULES in the GPU provider to a comma-separated list of module names, e.g. sdxl:v0.9-lilypad1,stable-diffusion:v0.0.1.
Run a Stable Diffusion Text to Video Job
Based on ComfyUI, the SDV Pipeline modules for Lilypad allow you to generate videos from text prompts on Lilypad using Stable Diffusion Video and related models.
The SDV Pipeline modules are designed to take your text prompt, generate a still frame using SDXL, then use that as the input to the SDV model, producing an APNG (animated PNG), WebP video, and an MP4 video all in one go.
Before running sdv, make sure you have the Lilypad CLI installed on your machine and your private key environment variable is set. This is necessary for operations within the Lilypad network.
To run SDV v1.0 or 1.1 Pipeline in Lilypad, you can use the following commands:
To view the results in a local directory, navigate to the local folder.
To view the results on IPFS, navigate to the IPFS CID result output.
Please be patient! IPFS can take some time to propagate and doesn't always work immediately.
As Lilypad modules are currently deterministic, running this command with the same text prompt will produce the same image, since the same seed is also used (the default seed is 0).
If you wish to specify more than one tunable, such as the number of steps, simply add more -i flags. For example, to change or improve the quality of the generated image, add "Steps=x", where x is between 5 (min) and 200 (max):
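For example (a sketch; the module name and tag are illustrative placeholders, not the current SDV module identifier):

```shell
lilypad run sdv:v1.0 -i Prompt="A record player at a big event in space" -i Steps=50
```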
The following tunables are available. All of them are optional, and have default settings that will be used if you do not provide them.
Ollama Pipeline modules for Lilypad
Based on Ollama, the Ollama Pipeline modules for Lilypad allow you to generate text on Lilypad using various models.
Llama3 is a machine learning model used for natural language processing. It is based on a transformer architecture, which enables it to handle tasks like text generation, summarization, translation, and more. When integrated with Lilypad, it leverages the platform's capabilities to provide efficient text processing for various applications.
Before running the Ollama Pipeline, make sure you have the Lilypad CLI installed on your machine and your private key environment variable is set. This is necessary for operations within the Lilypad network.
The Ollama Pipeline in Lilypad can be run using the Lilypad CLI or Docker. Below are the instructions for both of those options.
To run Ollama Pipeline using the Lilypad CLI, you can use the following command:
To run this module in Docker, you can use the following commands:
Run a cowsay job
cowsay is a simple, text-based program originally written for Unix-like operating systems that generates ASCII pictures of a cow with a speech bubble containing a specified message.
This module was created as a "Hello World" for the Lilypad Network!
Before running cowsay, make sure you have the Lilypad CLI installed on your machine and your private key environment variable is set. This is necessary for operations within the Lilypad network.
Once you've installed the CLI, run the cowsay command:
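A sketch of the run command (the version tag is an assumption; check the module repository for the current one):

```shell
lilypad run cowsay:v0.0.4 -i Message="moo"
```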
To view the results in a local directory, navigate to the local folder.
Here, you can view the stdout and stderr as well as the outputs folder for the run:
Please be patient when waiting for the IPFS result. It can take some time to propagate and doesn't always work immediately.
Investigation of some previous verification-by-replication computing protocols
Before explaining our approach, we give a short overview of three prior approaches to verification-via-replication distributed computing protocols: Truebit, Modicum, and Smart Contract Counter-Collusion. We will outline potential improvements, and how our work builds on top of, and differs from, prior work.
Truebit is a protocol for outsourcing computation from blockchains, built using smart contracts on top of Ethereum. The original potential use cases were trustless mining pools, trustless bridges, scaling transaction throughput, and, scalable “on-chain” storage. Since its goal is to scale on-chain computation, it aims for global consensus: "Since there exist no trusted parties on Ethereum’s network, by symmetry we must allow any party to be hired to solve any computational task, and similarly anyone should be able to challenge a Solver’s outcome of a computational task. The latter requirement ensures that TrueBit operates by unanimous consensus." (emphasis added)
The components of Truebit are Miners, Task Givers, Solvers, Verifiers, and Judges. In order to incentivize checking results, random errors are forced into computations, with jackpots awarded to those who find them. These jackpots are funded by taxes on computations.
The verification game consists of a series of rounds, where in each round, a smaller and smaller subset of the computation is checked. Eventually, only one instruction is used to determine whether a Solver or Verifier is correct:
The authors claim that Sybil attacks are mitigated by pairwise Sybil-resistance between the parties of Task Givers, Solvers, and Verifiers, with Judges and Referees, whose roles are played by Miners, assumed to function as intended. Likewise, they claim that attacks to get bogus solutions on-chain by scaring off Verifiers are mitigated by the economics of deposits, taxes, and jackpot rewards. Additionally, a cartel of Solvers who absorb losses until they find a task with a forced error, upon which time they will receive the jackpot, will lose money in the long-term, since the expected Solver deposit per task is higher than the expected jackpot per task. Addressing another attack, the authors claim that an attack involving a flood of Challengers who try to take as much of the jackpot reward resulting from a forced error as possible is mitigated by having the total jackpot reward decrease as the number of Challengers increases.
Does not scale to large/complicated/arbitrary computations
No formal theorems or proofs, no simulations, many plausible but unsubstantiated claims, especially regarding collusion
Everything is done on-chain
This model does not work well with two-sided marketplaces, because
It aims for global consensus, where any node is allowed to do the computation, whereas in two-sided marketplaces, clients need to be able to choose which nodes they are paying to do the computation
Clients may have time restrictions on their computations, and cannot wait for cases where their computations were injected with forced errors
No accounting for repeated games
Taxes and jackpots are a valuable tool to create a global game that affects local outcomes
Provides a list of potential client attacks
Job Creators are only allowed to submit deterministic jobs to the protocol. The Mediator exists to check for non-deterministic tasks submitted by the Job Creator (which can be used by the Job Creator as an attack vector to get free results), and fake results returned by the Resource Provider. The method for determining whether a job is deterministic or not is for the Mediator to run a job n times and check to see whether it receives different answers.
Modicum combines two separate ideas: checking the result from a Resource Provider to see if it is correct, and checking a result from a job submitted by a Job Creator to see if the job was deterministic or not. This implies that there is no capability for a client to simply check whether a result is correct or not, without the possibility of its collateral being slashed.
An alternative to trusting the Mediator (to act as a TTP) by having it run a job n times is having a consortium of n Mediators each run the task a single time. However, this adds the complication of achieving consensus in that consortium.
The issue of the Resource Provider and Mediator colluding to return a fake result is not addressed by this protocol. The authors allow for a Job Creator or Resource Provider to remove a Mediator from their list of trusted Mediators if they no longer trust it. However, that still leaves room to cheat at least once, and ideally this should be disincentivized from the outset.
There is also the issue of collateralization. The Modicum protocol, as well as a number of other protocols, assume (approximate) guesses as to the cost of jobs in advance, so that nodes can deposit the correct amount of collateral. However, doing so is fraught with complications; we provide an alternative approach in the Mechanisms to Explore section.
The Mediator is basically a trusted third party
Client cannot simply check a result without being slashed, which is a consequence of the client attack model
No accounting for repeated games
Potential client attack, though one that can be mitigated by technical means
The client has benefit of getting correct results, which needs to be accounted for in simulation
The authors determine that cryptographic methods for verifiable computation are too expensive for real-world scenarios. For that reason, they rely on verification-via-replication. The scenario is one in which a client simultaneously outsources computation to two clouds, where those two clouds deposit collateral into smart contracts in such a way to create a game between them, where the game incentivizes doing the computation honestly. The central challenge that the authors tackle is the issue of collusion - that is, what if the two clouds collude on an incorrect answer?
In contrast to Modicum, the client is assumed to be honest, and in contrast to Truebit, a trusted third party (TTP) is used to handle disputes.
The authors use a series of three contracts to counter collusion.
The first game is an induced Prisoner's Dilemma: to avoid the two clouds colluding, one cloud can be awarded the other cloud's deposit (minus payment to the TTP for resolving the dispute) if the former returned the correct result and the latter did not. Thus, each cloud is better off giving the other cloud fake results while computing the correct result itself. This contract is called the Prisoner's contract. Computing the honest result (and feeding the other node a fake result if offered to collude) is analogous to defection in the classic prisoner's dilemma.
However, the clouds can agree to collude via a smart contract as well. They could do this by both depositing another amount into a contract, where the leader of the collusion must add a bribe (less than its cost of computing) to the contract as well (disregarding the bribe, both clouds deposit the same amount of collateral). The deposit is such that the clouds have more of an incentive to follow the collusion strategy than to deviate from it. This contract is called the Colluder's contract.
To counteract the Colluder's contract, a Traitor's contract incentivizes the clouds to report the Colluder's contract. The basic concept is that the traitor cloud indeed reports the agreed-upon collusion result to the client in order to avoid the punishment in the Colluder's contract, but also honestly computes and returns the result to the client in order to avoid the punishment of the Prisoner's contract. The client must also put down a deposit in the Traitor's contract. Only the first cloud to report the Colluder's contract gets rewarded. The signing and reporting of the contracts must happen in a particular order for this process to work.
The authors prove that these games individually and together lead to a sequential equilibrium (which is stronger than a Nash equilibrium), meaning that it is optimal not only in terms of the whole game, but at every information set (basically the set of options each player has at every turn).
A Colluder's contract can be signed on different chains (or even off-chain). To mitigate this, the Traitor's contracts would have to become cross-chain (a major technical challenge), not to mention the possibility of cryptographically secure contracts (e.g. MPC contracts), where one of the parties alone would not be able to prove the existence of the contract.
Relies on trusted third party to resolve disputes
Every task is replicated (that is, two copies of each job are always computed)
Assumes client is honest
Assumes amount of collateral known beforehand
No accounting for repeated games
It is well known that in the repeated Prisoner's dilemma, depending on the assumptions, cooperation becomes the equilibrium
The contracts and the payoffs that they induce offer a valuable toolbox to think about the problem of collusion
The contracts offer, in a restricted setting, an ironclad way (assuming the proofs are correct) of preventing collusion
Lilypad provides distributed computational services underpinned by the . The network provides infrastructure for use cases like AI inference, ML training, DeSci, and more. Lilypad strategically collaborates with decentralized infrastructure networks, such as Filecoin, to formulate a transparent, efficient, and accessible computational ecosystem.
Find Lilypad on or visit .
Mainstream Web3 Application Support: Lilypad is designed to enable mainstream web3 applications to use its compute network with the and . It aims to make decentralized AI compute more accessible, efficient, and transparent for developers and users.
Multichain Support: The Lilypad Incentivized Testnet launched on Arbitrum in June 2024, with plans to go multi-chain in the near future. Supporting multiple blockchain networks will increase the scalability and interoperability of the network, allowing users to choose the blockchain that best suits their needs.
Incentivized Test Net: The is live! The IncentiveNet program provides users with incentives to participate in running nodes, testing, and improving the network. Learn more by checking out the .
To find out more about it, see the Bacalhau !
- Lilybit_ rewards
- Open source rewards
- Lilybit_ rewards
repo with examples and use cases
- Run a Lilypad "Hello World"
an AI model to Lilypad
to run a Lilypad node
a Lilypad node
a frontend using Lilypad to run AI Inference
- Text to video on Lilypad
For a step by step guide on adding the network, please refer to our .
To obtain testnet LP, use the and enter your ETH address.
To obtain testnet ETH, use a third-party faucet and enter your ETH address.
The Arbitrum Sepolia faucet provides 0.0001 tokens per request. If you need more tokens and already have Sepolia ETH, you can use the to transfer it over to Arbitrum Sepolia.
Resource Providers have the option of using Alchemy instead of the default public RPC endpoints.
A personal RPC endpoint helps RPs avoid reliability issues with the public RPC endpoints used by Lilypad, ensuring rewards can be earned and jobs can be run consistently. RPs running a personal RPC endpoint contribute to the fault tolerance and decentralization of the Lilypad Network! Read more in the Alchemy Arbitrum .
Before we start the Docker setup, you'll need to retrieve the private key from the wallet you set up earlier in this guide. For guidance on exporting your private key, refer to . Once you’ve securely copied your private key, proceed to initialize the Docker containers using the commands provided below.
This guide is for individuals running a Lilypad Resource Provider; find more info .
If the Lilypad RP was properly set up as a systemd service, the RP will reboot using the new RPC. Once the reboot is complete, the RP should be running with the updated configuration. To verify your node is back online and running correctly, run the following:
A node with a GPU containing 8GB of VRAM can execute Lilypad module jobs, which require a minimum of 8GB of VRAM.
Larger capacity GPUs are needed for heavier compute models like , which require at least 14GB of VRAM.
This guide is for individuals running a Lilypad Resource Provider; find more info .
If the Lilypad RP was properly set up as a systemd service, the RP will reboot using the new RPC. Once the reboot is complete, the RP should be running with the updated configuration. To verify your node is back online and running correctly, run the following:
Setting up for the Lilypad Testnet so that you can get ETH and LP tokens to fund smart contract calls on Ethereum's Arbitrum testnet, and pay for jobs on Lilypad Testnet.
Funding your wallet with Lilypad Testnet tokens from the
Running a Hello, (Lilysay) World! example compute job on the Lilypad Network
lilypad run
--web3-private-key <private-key> -i prompt="a lilypad floating on a pond"
: Lilypad "Hello World" example
: Text to image generation.
: Text to video generation.
: An example module for LoRa training tasks.
: An example module for LoRa inference tasks.
: An example module related to DuckDB.
For a step by step guide on adding the network and importing the LP testnet token, please refer to our .
To obtain testnet LP, use the and enter your ETH address.
To obtain testnet ETH, use a third-party faucet and enter your ETH address.
The Arbitrum Sepolia faucet provides 0.0001 tokens per request. If you need more tokens and already have Sepolia ETH, you can use the to transfer it over to Arbitrum Sepolia.
- Docker Engine
To ensure proper operation of your graphics cards and Lilypad, follow these steps to install the Nvidia Toolkit Base Installer:
If the Bacalhau CLIENT version is not v1.3.2, it will need to be replaced. Follow the steps to uninstall and reinstall Bacalhau.
A private key can be retrieved from the MetaMask account details menu. For more info, check out the official guide from MetaMask on viewing a wallet's private key.
Resource Providers have the option of using Alchemy instead of the default public RPC endpoints.
A personal RPC endpoint helps RPs avoid reliability issues with the public RPC endpoints used by Lilypad, ensuring rewards can be earned and jobs can be run consistently. RPs running a personal RPC endpoint contribute to the fault tolerance and decentralization of the Lilypad Network! Read more in the Alchemy Arbitrum .
For complex issues, bug reports, or feature requests, open a discussion in the Lilypad-Tech GitHub organization discussion board.
Navigate to the discussion board, select "New Discussion", choose "rp-issues", and fill out the template.
For quick questions or minor issues, use the Lilypad Discord channel and provide the following info.
When a new version of Lilypad is released, it is important for resource providers to update their installations to ensure compatibility and access to the latest features and improvements.
As every system is different, these instructions may vary. If you have any issues, please reach out to the team in the for help!
Visit the for a full list of available modules.
See the section for more information on what tunables are available.
See this guide on how seeds work for more info.
Learn more about this Lilypad module on .
To view the results on IPFS, navigate to the IPFS CID result output.
The original version of had five key components: Job Creators (JC), Resource Providers (RP), Solvers (market makers), Mediators (agreed-upon third parties for mediation), and Directories (file systems, which we have replaced with IPFS and Docker registries). Job Creators are clients, the ones who have computations that need to be done and are willing to pay. Resource Providers are those with computational resources that they are willing to rent out for the right price. Solvers are market-makers; they match the offers from JCs and RPs. Mediators are third parties trusted by both JCs and RPs to arbitrate disagreements. The Directories are network storage services available to both JCs and RPs.
Useful prototype for a two-sided marketplace (the for stream processing applications is also useful)
| Parameter | Description | Default | Valid values |
| --- | --- | --- | --- |
| Prompt | A text prompt for the model | "question mark floating in space" | Any string |
| Framerate | The framerate of the video | 8 | Any valid positive integer between 1 and 20 |
| Seed | A seed for the image model | 42 | Any valid non-negative integer |
| Steps | The number of steps to run the model for | 50 | Any valid non-negative integer from 5 to 200 inclusive |
| Scheduler | The scheduler to use for the model | normal | normal, karras, exponential, sgm_uniform, simple, ddim_uniform |
| Sampler | The sampler to use for the model | euler_ancestral | "euler", "euler_ancestral", "heun", "heunpp2", "dpm_2", "dpm_2_ancestral", "lms", "dpm_fast", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu", "dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm", "lcm" |
| Batching | How many images to produce | 1 | 1, 2, 4, 8 |
| VideoSeed | A seed for the video model | 42 | Any valid non-negative integer |
| VideoSteps | The number of steps to run the video model for | 50 | Any valid non-negative integer from 5 to 70 inclusive |
| VideoScheduler | The scheduler to use for the video model | normal | normal, karras, exponential, sgm_uniform, simple, ddim_uniform |
| VideoSampler | The sampler to use for the video model | euler_ancestral | "euler", "euler_ancestral", "heun", "heunpp2", "dpm_2", "dpm_2_ancestral", "lms", "dpm_fast", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu", "dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm", "lcm" |
Sorry! We're working on this page!
We have deployed the LilypadOnChainJobCreator contract, which you can use to trigger running jobs on the Lilypad network from other smart contracts.
It works in tandem with the lilypad jobcreator service, which watches the on-chain contract and manages jobs on behalf of contracts that submit them.
You will need to know the contract address for the on-chain job creator so we can submit transactions to it.
The production controller address is 0x8e136587e3e5266d5244f6aa896E5CAf8E969946, and you can ask it for the address of the on-chain job creator by calling getJobCreatorAddress().
Running a job involves 2 phases:
calling approve on the ERC-20 contract to allow the solver to spend your tokens
triggering the job via the on-chain job manager
Now that we know the address of the on-chain job controller, we can ask it for 3 things:
the address of the ERC-20 token contract - getTokenAddress()
how much the required deposit is - getRequiredDeposit()
the address of the solver that will handle running the job for us - getControllerAddress()
Knowing these 3 things means we can call the standard ERC-20 approve to allow the solver to spend our tokens on our behalf.
Now we can call the runJob method of the on-chain controller from another contract. This will cause the job-creator service to kick in and do the following things:
check that funds have been approved for the solver
transfer those funds to its wallet
run the job on Lilypad
call the submitResults method on the on-chain job creator
the on-chain job creator will call the submitResults method of the original calling contract
The following is an example on-chain smart contract:
Here is an example of a script that brings all of this together:
After the job is completed, a job result ID will be displayed in your terminal. The job result can be accessed through the following URL:
Please note that the IPFS link might take some time to display the result.
The Lilypad Hardhat template demonstrates how to use Lilypad within the Hardhat development environment for onchain interactions. It provides the tools and structure needed to deploy smart contracts that interact with Lilypad nodes onchain.
This template includes an example of running a job using the cowsay module. The cowsay module is a simple program that generates an ASCII picture of a cow with a message.
You can run this template directly in Gitpod.
The environment will automatically run a preinstall script that checks if Lilypad and Bacalhau are installed. If they are not already present on the user's machine, the script will install them.
Open the Gitpod environment
Run Lilypad:
This script builds Docker services, starts necessary components (Bacalhau, geth), funds the accounts used, deploys the contracts, and sequentially launches Lilypad related services in the background.
This will execute cowsay onchain using Lilypad, with the specified input message.
In a separate terminal, run this command to execute cowsay onchain:
To view the results of the job, refer to this section.
To update the smart contract, you can modify ExampleClient in contracts/ExampleClient.sol. Once you've made the changes, run npm run deploy.
Run the Lilypad CLI wrapper locally
The Lilypad CLI wrapper can run locally to create an API endpoint for running jobs on the Lilypad network. This gives developers full control to build a decentralized system using Lilypad. The GitHub repo can be found here.
Build a front end or AI agent workflow that uses this API endpoint for running jobs!
Note: This is a beta tool and would mostly be expected to run locally. When implementing this tool, note that the POST request includes the user's Web3 private key. Stay tuned for a hosted API from Lilypad that will supplement this local CLI Wrapper.
Build the Lilypad binary:
Run node src/index.js to create a local endpoint using the JS wrapper with either src/run.js or src/stream.js, then send a POST request containing JSON with your funded WEB3_PRIVATE_KEY key set. See the quick start for more on getting started.
In inputs, each input must be preceded by the -i flag, including tunables. For example: "inputs": "-i Prompt='an astronaut floating against a white background' -i Steps=50"
The endpoint can then be tested using curl.
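As a sketch of what the request body looks like, the payload can also be built in Python. The field names here ("pk", "module", "inputs") and the example module tag are assumptions for illustration; check the wrapper repo (src/run.js) for the exact schema before relying on them.

```python
import json

def build_job_request(private_key: str, module: str, inputs: dict) -> dict:
    """Assemble a hypothetical job-request payload for the local CLI wrapper.

    Each input (including tunables) is passed as a "-i Key='value'" flag,
    matching the inputs format described above.
    """
    flags = " ".join(f"-i {k}='{v}'" for k, v in inputs.items())
    return {"pk": private_key, "module": module, "inputs": flags}

req = build_job_request(
    "0xYOUR_FUNDED_WEB3_PRIVATE_KEY",            # placeholder, never commit a real key
    "sdxl:v0.9-lilypad1",                        # example module tag (assumed)
    {"Prompt": "an astronaut floating against a white background", "Steps": 50},
)
body = json.dumps(req)   # this JSON string is what you would POST to the endpoint
```

Note that the POST body carries your Web3 private key, so this should only ever be sent to an endpoint you run locally, as cautioned above.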
The following is a list of mechanisms that we are currently considering exploring in order to mitigate attacks. Note that some of these mechanisms clearly would not be able to deter cheating and collusion alone. However, in combination with other mechanisms, they may achieve the goals. In this sense, they should be thought of as modules, optionally composable with each other.
Clients call upon a mediation protocol in order to verify the results of a node. There are several variations of the structure of the mediation protocol; the following parameters can be varied:
The number of nodes in the mediation protocol.
If there are more than two nodes in the mediation consortium, the consensus threshold that determines which result is considered to be correct.
How the nodes are chosen.
For example, we may want as a baseline the same constraint as in Modicum - that only mediators that both the client and the compute node mutually trust can be used for mediation.
Even with this baseline, there is still a question of how to choose the node(s) - it can be random, be determined by an auction, or any other method.
Recursive mediation - that is, if there is no consensus in the consortium, do another mediation.
There is a large space of possibilities regarding how to execute this.
There needs to be a limit to the number of nodes this recursive process can use. For example, the set of potential nodes can be the same as the set of mutually trusted mediators, as described above.
Other methods discussed here, such as taxes and jackpots, as well as staking and prediction markets, can be incorporated into the mediation protocol.
There are a number of different types of collateral that need to be included in the protocol.
The client needs to deposit collateral so that the compute node knows that it can be paid. For computations where the cost is known up front, this is simple. However, it becomes complicated for arbitrary compute; the client might not have put up enough collateral initially, so there may have to be a back-and-forth between client and compute node where the latter halts the computation until the former deposits more collateral or simply cancels the computation and pays for the partially computed result. If the client does not pay, then the compute node can invoke a mediation process.
The compute node needs to deposit several types of collateral.
Collateral in case they timeout.
This is put up as part of the deal agreement - that is, when the deal is posted on-chain.
Collateral in case they cheat.
The way that the compute node conveys the amount of collateral it will deposit to indicate that it will not cheat is via a collateral multiplier. The compute node commits to a multiple of whatever it will charge the client ahead of time as part of the deal agreement. The actual collateral is put up after the result is computed and sent to the client. This immensely simplifies the task of determining how much collateral to deposit for arbitrary computations.
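The collateral-multiplier idea reduces to simple arithmetic: the multiplier is fixed at deal time, and the collateral amount follows from whatever the job ends up costing. The numbers below are illustrative assumptions, not protocol constants.

```python
# Sketch of the collateral-multiplier scheme: the node commits to a
# multiplier before the job runs; collateral is derived afterwards from
# the actual charge, so no up-front cost estimate is needed.
def required_collateral(job_price: float, multiplier: float) -> float:
    """Anti-cheating collateral deposited after the result is computed."""
    return multiplier * job_price

# Deal agreement: the node commits to a 10x multiplier (illustrative).
committed_multiplier = 10.0

# The arbitrary job turns out to cost 3.5 tokens; the collateral follows
# automatically from the commitment.
assert required_collateral(3.5, committed_multiplier) == 35.0
```

The key design point is that the commitment (the multiplier) is made before the cost is known, while the deposit itself is computed only once the price of the arbitrary computation is realized.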
Collateral in case they don't do the computation at the rate they said they would. This is closely related to timeout collateral.
Ideally, this is a way of enforcing deadlines on jobs.
It is not necessary to make this collateral slashing binary - for example, a late result can still be rewarded.
It is enforceable if, for example, the compute node says that they will do X WASM instructions/time. However, technical limitations may make this unrealistic, and it needs to be tested.
One possible way to overcome collusion is to require super-high collateral for some particular nodes against each other, such that even discount factors very favorable to collusion would not incentivize collusion, even when accounting for repeated games.
While this is not a part of anti-cheating mechanisms, collateral pooling could lower capital requirements for collateralization. High capital requirements are a second-order concern, but will become a higher priority once robust anti-cheating mechanisms are implemented.
Taking inspiration from the taxes and jackpots scheme used in Truebit, deals can be taxed, with those taxes going to a jackpot that is then used to reward nodes via some distribution protocol determined by the mediation process. For this, we want to be able to take any fraction of the jackpot(s) and distribute it arbitrarily to arbitrary nodes (perhaps even those not involved in the mediation process).
This is a particularly interesting approach because the taxation + jackpots mechanism inherently creates a global game that impacts local outcomes. While it may lead to potential collusion attacks, the tool alone is very useful, especially in conjunction with some of the other methods mentioned here. Modeling it in simulation would also provide the opportunity to test some of the hypotheses in the Truebit paper.
This method may also be useful in creating a robust platform where some clients do not care to check their results. That is, if some clients do not check results in general, it may be difficult to assert that the network is secure. Taxes and jackpots may be a way to address this.
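The taxes-and-jackpots flow described above can be sketched as a small pool: each deal is taxed into a jackpot, and a mediation process can later pay out arbitrary fractions of the pot to arbitrary nodes. The tax rate and payout shares below are illustrative assumptions.

```python
# Minimal sketch of a taxes-and-jackpots pool (inspired by Truebit's scheme).
class Jackpot:
    def __init__(self, tax_rate: float):
        self.tax_rate = tax_rate
        self.pot = 0.0

    def tax_deal(self, deal_value: float) -> float:
        """Tax a deal into the jackpot; return the post-tax deal value."""
        tax = deal_value * self.tax_rate
        self.pot += tax
        return deal_value - tax

    def distribute(self, shares: dict) -> dict:
        """Pay out fractions of the pot (shares summing to <= 1) to nodes,
        as determined by the mediation process."""
        payouts = {node: self.pot * frac for node, frac in shares.items()}
        self.pot -= sum(payouts.values())
        return payouts

jp = Jackpot(tax_rate=0.02)          # 2% tax, illustrative
jp.tax_deal(100.0)                   # pot grows by 2.0
jp.tax_deal(50.0)                    # pot grows by 1.0
rewards = jp.distribute({"mediator_1": 0.5, "honest_node": 0.25})
```

Because the pot is global while deals are local, any distribution rule plugged into `distribute` directly implements the "any fraction of the jackpot to arbitrary nodes" requirement.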
Prediction markets have been well-studied in a variety of different fields. More recently, a type of prediction market called a replication market has been explored in the context of replicability in science. With this inspiration, it may be possible that allowing nodes to make bets regarding the replicability of the computations of nodes may be useful in mitigating cheating. For example, nodes with a low prediction for replicability may act as a signal for that node's reputation and encourage it to behave honestly.
It is possible to overlay this mechanism on top of taxes, allowing nodes to choose where their taxes go in the prediction market.
Additionally, since Automated Market Makers are closely related to prediction markets, we can leverage many DeFi tools in this context.
Allow users to stake behind nodes. This is similar to prediction markets, but with slightly different economics. Like with prediction markets, it may be possible to tax users and then allow them to choose which nodes they stake behind. Overall, this approach is similar to delegated Proof-of-Stake.
Can a node announcing that it successfully cheated (and thereby receiving a reward) benefit the robustness of the protocol? How much would this node have to be rewarded?
How often should a client check results? Clearly it is related to the amount of collateral that the other node deposits, how much they value getting true/false results, reputation, and so on. This is a parameter that the client would need to learn to maximize its own utility.
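The check-frequency question above has a simple expected-value core: cheating saves the node its compute cost but risks its collateral when caught. The break-even condition below is a back-of-the-envelope sketch with illustrative numbers, ignoring reputation and repeated-game effects.

```python
# Cheating is profitable only while (1 - p) * saved_cost > p * collateral,
# where p is the probability the client checks the result.
def cheating_profitable(p_check: float, collateral: float, saved_cost: float) -> bool:
    expected_gain = (1 - p_check) * saved_cost
    expected_loss = p_check * collateral
    return expected_gain > expected_loss

def min_check_rate(collateral: float, saved_cost: float) -> float:
    """Smallest check probability at which cheating breaks even."""
    return saved_cost / (saved_cost + collateral)

# With 50 tokens of collateral against 5 tokens of saved compute cost,
# checking roughly 1 job in 11 already makes cheating unprofitable.
assert not cheating_profitable(0.10, collateral=50.0, saved_cost=5.0)
```

This also shows the coupling noted above: the higher the collateral the other node deposits, the less often the client needs to check, which is exactly the parameter the client would learn to maximize its own utility.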
The ledger can maintain a record, for each compute node, of the number of jobs the compute node has completed, the number of times its results were checked, and the number of times those results were replicated successfully. All other nodes (client and compute) could locally run some arbitrary function over these numbers to determine how reputable they find that node.
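A reputation function over that on-chain record might look like the sketch below. The scoring formula is an arbitrary assumption, which is the point: each node can locally run whatever function it likes over the same three counters.

```python
# Locally computed reputation over the ledger's per-node record:
# jobs completed, results checked, and checks that replicated successfully.
def reputation(completed: int, checked: int, replicated: int) -> float:
    if checked == 0:
        return 0.5                       # never audited: no evidence either way
    replication_rate = replicated / checked
    # Discount nodes that have rarely been audited relative to jobs done.
    audit_coverage = checked / max(completed, 1)
    return replication_rate * (0.5 + 0.5 * min(audit_coverage * 10, 1.0))

good = reputation(completed=200, checked=40, replicated=40)    # always replicated
shady = reputation(completed=200, checked=40, replicated=30)   # 25% failed checks
assert good > shady
```

Since the inputs are on-chain, any two nodes evaluating the same compute node can reproduce each other's scores, but they are free to weight replication failures differently.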
Results can only be replicated for as long as the inputs are stored somewhere. The client, compute node, or some other entity can pay for storing the inputs/outputs of jobs. The longer they are stored, the more time there is to check the results, which affects things like collateralization, the frequency of checks, etc.
This is related to, but not totally overlapping with, the amount of time that a node might have to wait before getting paid, which is the same time interval that a client has to check a result. However, checking the result after the node gets paid and receives back its collateral may still be possible, with other penalty schemes (or reward schemes, for example, coming from jackpots).
Colluding requires the following knowledge in order to enforce the parameters of the collusion.
1. The public keys of the nodes participating in collusion.
2. The results that were posted by those public keys.
3. The payouts to the public keys.
In order to sign a collusion contract to begin with, the public keys must be known. However, in a mediation protocol with enough nodes, it may be possible to obscure (2) and (3) by
Having nodes submit results to the mediation protocol in an obscured/anonymous way
Having nodes be paid out according to the results of the mediation protocol in an obscured/anonymous way
If these two criteria can be met, then a mediation protocol based on them might be capable of imitating the game-theoretic outcomes seen in the Smart Contract Counter-Collusion paper.
There have been many decades of cryptography and security research focusing on similar problems to these. It may be the case that it is already possible to do this; otherwise, there is a large amount of ongoing research on the topics of privacy-preserving transactions, and much prior work in the flavor of secret-sharing/MPC/Tor/Monero/ZKPs that could enable this.
Expected November 2024
Coming soon!
Our Approach
A core assumption in much of game theory is that agents are utility-maximizing. That is, agents are completely rational actors, and are able to execute exactly the behavior that maximizes their return, however "return" is defined in the given context.
However, we know that in real life, humans are not completely rational, and are not capable of perfect execution of actions. In that light, how can we look at the game-theoretic approaches in the last section?
Either we can try to account for the irrational behavior of humans, or we can try to emulate the behavior of utility-maximizing agents. While there is a large amount of game-theoretic literature dedicated to the former, we opt for the latter for reasons that will become clear below.
While this problem setting - verifiable computation by way of game theory - is different than many game theoretic settings, we can draw inspiration from commonly used concepts like the revelation principle and strategyproofness. Both strategyproofness and the revelation principle are centered around the idea of incentivizing agents to truthfully report their preferences. Most approaches in the literature rely on analytic methods to determine what rational agents will do by analyzing their payoffs as a function of their preferences, the behaviors of other agents, and the mechanism under analysis. Ultimately, we are also aiming to find (a) mechanism(s) that lead(s) to an equilibrium where all agents choose to not cheat and not collude.
Note that the actual environment of a two-sided marketplace for distributed computation is extremely complicated (e.g. the heterogeneity of hardware, types of computation, latencies and bandwidths, etc.). Any theoretical/analytic approach to the problem that is actually correct should also work in simulation, so we opt for a simulation-driven approach.
The way that we can emulate perfectly rational behavior is by training autonomous agents to act on behalf of their human owners in a utility-maximizing manner. At that point, the challenge is to design the global game to drive the probability of cheating to zero - ideally, to make it be equal to zero - which is no small feat in a trustless and permissionless environment. However, the simplifying assumption that we are in fact operating with utility-maximizing agents conceptually simplifies the problem immensely.
The process begins by creating a digital twin of a two-sided marketplace. In this environment, autonomous agents acting on behalf of client and compute nodes will be trained to maximize returns based on data gathered in simulation. For now, we will avoid maximizing returns by optimizing scheduling, though this is a future topic of interest. We will use techniques primarily from the field of multi-agent reinforcement learning in order to train the agents. The precise methods we will use (e.g. modes of training and execution, homogeneous vs. heterogeneous agents, choice of equilibrium, self-play vs. mixed-play, value-based vs. policy-based learning, etc.) will be determined in the course of building the simulation. See the pre-print by Albrecht, Christianos, and Schäfer for our reference text.
At a minimum, the action space for an autonomous agent representing a compute node should be to cheat or not to cheat, and to collude or not collude within a mediation protocol. The observable environment for nodes on the network should include all data stored on the blockchain - that is, the sequence of deals, results, and mediations - as well as the information in the orderbook. While the orderbook will be off-chain, we model in the digital twin the orderbook acting as a centralized, single source of truth that all agents have access to. In the long-term, nodes will have (potentially non-identical) local information regarding other job and resource offers on the network.
Further work may explore agents forming beliefs about the hardware and strategies of other agents, but that is beyond the scope of the current work.
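The minimal action and observation spaces described above can be sketched as simple Python structures. The type names and fields are illustrative assumptions for simulation purposes, not the implemented protocol.

```python
from dataclasses import dataclass, field
from enum import Enum

class ComputeAction(Enum):
    HONEST = "honest"          # compute the result faithfully
    CHEAT = "cheat"            # return a fabricated result

class MediationAction(Enum):
    NO_COLLUDE = "no_collude"  # report honestly within a mediation protocol
    COLLUDE = "collude"        # coordinate on an incorrect answer

@dataclass
class Observation:
    # All data stored on the blockchain, visible to every agent...
    deals: list = field(default_factory=list)
    results: list = field(default_factory=list)
    mediations: list = field(default_factory=list)
    # ...plus the orderbook, modeled for now as a centralized single
    # source of truth that all agents share.
    orderbook: list = field(default_factory=list)

@dataclass
class CompositeAction:
    compute: ComputeAction
    mediation: MediationAction

# The minimal joint action space: {cheat, honest} x {collude, no_collude}.
action_space = [CompositeAction(c, m) for c in ComputeAction for m in MediationAction]
```

A trained agent's policy then maps an `Observation` to one of these four composite actions, and the design goal is a game in which the utility-maximizing choice is always (HONEST, NO_COLLUDE).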
We conclude with two "axioms" upon which we will base our simulations:
Every agent attempts to maximize its utility, including cheating and/or colluding if necessary.
All other components of the game should lead to a "good" solution, as defined in the problem statement.
3D Mapping on Lilypad
This is a work-in-progress proof of concept utilizing Extra Labs and Lilypad to create a Lilypad module for running multiple geospatial jobs.
The goal is to provide the end-to-end adapter for users to operate on the point cloud data to generate reconstructed maps.
Extra Labs is revolutionizing the way maps are built by leveraging collaborative acquisition and deep learning to produce affordable, detailed 3D models of cities and territories.
To achieve this proof of concept, the Lilypad module will integrate with Extra Labs' platform, allowing users to submit their geospatial data, whether collected via drones, aircraft, delivery vehicles, or smartphones. This data is processed through advanced algorithms to create detailed 3D reconstructions.
The decentralized nature of the platform ensures that data providers are compensated fairly for their contributions through a blockchain-based reward system. This approach not only democratizes access to 3D mapping technologies but also ensures continuous and up-to-date data acquisition, enhancing the accuracy and detail of the generated maps.
Users should be able to easily interact with the platform, upload their data, and receive high-quality 3D models in return. This process is designed to make advanced geospatial mapping accessible to a wide range of users, from urban planners and architects to developers and hobbyists.
To find out more, please visit the Extra Labs website.
Verifiable computing is dedicated to ensuring that outsourced computations (that is, computations that are not done locally) are done correctly. In some scenarios, it cannot be assumed that the node to which a job is being outsourced will compute the result honestly, or that it is not faulty. Moreover, verifying the result should have less overhead than computing the result in the first place.
While blockchains provide safety and liveness, the massive replication of computation becomes too costly when that level of security is not needed. There is a difference between global consensus, which is necessary in blockchain environments, and local consensus, which is more suited for two-sided marketplaces. In global consensus, all nodes need to be convinced that every computation was done correctly. In contrast, in local consensus, only a small number of nodes - potentially only one node, the client - needs to be convinced that a computation was done correctly.
Ostensibly, for a two-sided marketplace, this implies that only a client really needs to be convinced that a computation was done correctly. However, these computations are not done in isolation, and the interrelation between a client choosing one node repeatedly versus many different nodes, and the mathematics behind those decisions, as well as the need to create a protocol that any client can come along to with no prior experience and trust that cheating is disincentivized, implies the creation of a global game that, while not requiring global consensus in the traditional sense, emulates it in some manner.
One way to ensure that computations were done correctly is by using cryptographic methods. There are a number of cryptographic approaches for verifiable computation, including
Interactive Proof (IP)
In interactive proofs, verification of a statement is modeled as an interaction between a prover and a verifier. The goal of the prover is to convince the verifier that the statement is true, even when the verifier does not have the computational resources to do the computation itself.
The protocol must satisfy completeness (if the statement is true, an honest verifier will be convinced) and soundness (if the statement is false, the prover cannot convince the verifier except with some negligible probability).
Zero-Knowledge Proof (ZKP)
Zero-knowledge proofs are a type of interactive proof where the verifier learns nothing about the private inputs of the computation, other than that the outputs were computed correctly from all the inputs (some of which may be public/known to the verifier).
A ZKP can be made non-interactive, in which case it is called a Non-Interactive Zero-Knowledge Proof (NIZK). Two common variants of NIZKs are zk-SNARKs (zero-knowledge Succinct Non-interactive Argument of Knowledge) and zk-STARKs (zero-knowledge Scalable Transparent Argument of Knowledge).
Like IPs, ZKPs must also satisfy the requirements of completeness and soundness.
Multi-Party Computation (MPC)
Multi-party computation allows multiple parties to jointly compute a function over their individual inputs without any party revealing its input to the others. The main objectives of MPC are privacy (parties should learn nothing about each other's inputs), security (some level of collusion resistance against malicious attempts to learn information), functionality (the ability to compute functions over data), and robustness (the protocol should work correctly even in the presence of malicious behavior or faults).
Trusted Execution Environment (TEE)
Trusted Execution Environments are secure and isolated enclaves, where code and data inside of the enclave are insulated from the rest of the system, including the operating system, applications, and other enclaves. The goal is to maintain both the confidentiality and the integrity of the code and data.
Verification-via-Replication
Verification-via-replication - often described using the adjective "optimistic" in the blockchain world - relies on recomputing the computation to check whether the end result is the same. The benefits of this method are that it is the easiest to understand, and in some sense, the easiest to implement.
In contrast to the other approaches, verification-via-replication often requires reliance on game-theoretic mechanisms such as collateral slashing, reputation, and other methods. This can become a bit complex when trying to counter collusion between the nodes that are computing the results.
One of the downsides of this approach is, of course, the extra effort expended on recomputing computations. However, with proper incentives, this overhead can be reduced dramatically, and it remains far lower than the overhead of cryptographic methods.
We opt for verification-via-replication as a first approach, for the reasons that it is simple to understand, has less overhead than cryptographic approaches, and has an attack surface that can be economically modelled and analyzed.
This has the downside of making private computations difficult. While the inputs and outputs of jobs can be encrypted so that only the client and compute node can see the code and data, this still leaves the client vulnerable to having their information leaked. Future approaches can incorporate SLAs and eventually support for homomorphic encryption to deal with this issue.
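To make the replication idea concrete, here is a minimal, purely illustrative sketch of a verification-via-replication round with majority voting and collateral slashing. The node interface, deal structure, and slashing rule are hypothetical simplifications, not the Lilypad protocol's actual mechanism.

```javascript
// Toy verification-via-replication: replicate a job across nodes,
// accept the majority output, and slash nodes that disagree.
function verifyByReplication(job, nodes, slash) {
  // Each node recomputes the job independently.
  const results = nodes.map((node) => ({ node, output: node.compute(job) }));

  // Tally identical outputs and take the most common one as accepted.
  const tally = new Map();
  for (const { output } of results) {
    tally.set(output, (tally.get(output) || 0) + 1);
  }
  const [accepted] = [...tally.entries()].sort((a, b) => b[1] - a[1])[0];

  // Game-theoretic deterrent: nodes disagreeing with the majority
  // lose collateral via the caller-supplied slashing function.
  for (const { node, output } of results) {
    if (output !== accepted) slash(node);
  }
  return accepted;
}
```

Note that majority voting alone does not defend against collusion between replicating nodes; that is exactly why the game-theoretic layer (collateral, reputation) mentioned above is needed on top.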
Waterlily.ai - An Ethical Generative AI-Art DApp
Waterlily.ai is a proof-of-concept application aiming to provide an alternative ethical solution to AI-Art generation by creating a new paradigm for AI image generation that pays royalties to the original creator and offers a new revenue stream for artists to explore.
To achieve this, Waterlily leverages the transparency & trustlessness of blockchain, with FVM smart contracts, combined with a decentralised edge compute platform in Bacalhau to deliver an intuitive user experience for generative AI-Art while compensating creators for their original work with every image generation call.
Each stable diffusion model on Waterlily is trained and fine-tuned on a selection of artist work that has been uploaded by the artist themselves or is available in the public domain. When a user navigates to the Waterlily.ai platform, they simply enter a text prompt for the images they want to create and choose an artist style that appeals to them. The small fee paid by the user is then distributed to the artist's wallet (minus the network fees required to run the contract calls and return the generated images), or to a vetted creator foundation in the case of public-domain images.
A user can then opt to download these generated images or mint them as NFTs.
Run a Stable Diffusion Text to Image Job
Generically, stable diffusion is what happens when you put a couple of drops of dye into a bucket of water. Given time, the dye randomly disperses and eventually settles into a uniform distribution which colours all the water evenly.
In computer science, you define rules for your (dye) particles to follow and the medium this takes place in.
Stable Diffusion is a machine learning model used for text-to-image processing (like Dall-E) and based on a diffusion probabilistic model that uses a transformer to generate images from text.
Before running sdxl, make sure you have the Lilypad CLI installed on your machine and your private key environment variable is set. This is necessary for operations within the Lilypad network.
Learn more about installing the Lilypad CLI and running SDXL with this video guide.
When running SDXL pipelines in Lilypad, you have the choice between using the Base model or the Refiner model. Each serves a unique purpose in the image generation process:
Base Model: This is the primary model that generates the initial image based on your input prompt. It focuses on the broad aspects of the image, capturing the main theme and essential elements. The Base model is faster and uses less computational power.
Refiner Model: This model takes the image from the Base model and enhances it. It refines details, improves textures, and adjusts colors to increase the visual appeal and realism of the image. The Refiner model is used when you need higher quality and more detailed images.
To run SDXL Pipeline in Lilypad, you can use the following commands:
Base:
Refiner:
Base:
Refiner:
To view the results in a local directory, navigate to the local folder.
In the /outputs folder, you'll find the image:
To view the results on IPFS, navigate to the IPFS CID result output.
Please be patient! IPFS can take some time to propagate and doesn't always work immediately.
As Lilypad modules are currently deterministic, running this command with the same text prompt will produce the same image, since the same seed is also used (the default seed is 0).
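This determinism is the standard behavior of a seeded pseudo-random number generator: the same seed always reproduces the same "random" sequence, and therefore the same image. The tiny PRNG below (mulberry32, a well-known toy generator, not anything Lilypad itself uses) illustrates the principle:

```javascript
// mulberry32: a minimal seeded PRNG. Same seed => identical sequence,
// which is why a fixed seed makes a generative model reproducible.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const runA = mulberry32(0); // default seed 0
const runB = mulberry32(0); // same seed: produces the same sequence as runA
const runC = mulberry32(7); // different seed: produces a different sequence
```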
To change an image output, pass in a different seed number:
See this beginner-friendly article on how seeds work for more info on this
Lilypad can run SDXL v0.9 or SDXL v1.0 with the option to add tunables to improve or change the model output.
If you wish to specify more than one tunable, such as the number of steps, simply add more -i flags. For example, to improve the quality of the generated image, add "Steps=x" where x is between 5 and 200:
See the options and tunables section (below) for more information on what tunables are available.
The following tunables are available. All of them are optional, and have default settings that will be used if you do not provide them.
| Tunable | Description | Default | Allowed values |
| --- | --- | --- | --- |
| Prompt | A text prompt for the model | "question mark floating in space" | Any string |
| Seed | A seed for the model | 42 | Any valid non-negative integer |
| Steps | The number of steps to run the model for | 50 | Any valid non-negative integer from 5 to 200 inclusive |
| Scheduler | The scheduler to use for the model | normal | normal, karras, exponential, sgm_uniform, simple, ddim_uniform |
| Sampler | The sampler to use for the model | euler_ancestral | euler, euler_ancestral, heun, heunpp2, dpm_2, dpm_2_ancestral, lms, dpm_fast, dpm_adaptive, dpmpp_2s_ancestral, dpmpp_sde, dpmpp_sde_gpu, dpmpp_2m, dpmpp_2m_sde, dpmpp_2m_sde_gpu, dpmpp_3m_sde, dpmpp_3m_sde_gpu, ddpm, lcm |
| Size | The output size requested in px | 1024 | 512, 768, 1024, 2048 |
| Batching | How many images to produce | 1 | 1, 2, 4, 8 |
See the usage sections for the runner of your choice for more information on how to set and use these variables.
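As a convenience when building on top of the CLI, the tunables and their allowed values can be checked client-side before submitting a job. The validator below is purely hypothetical, it mirrors the values listed above but is not part of the Lilypad CLI (the Sampler list is abbreviated for brevity):

```javascript
// Hypothetical client-side validator for SDXL tunables.
// Names and ranges mirror the tunables documented above.
const TUNABLES = {
  Prompt:    (v) => typeof v === 'string',
  Seed:      (v) => Number.isInteger(v) && v >= 0,
  Steps:     (v) => Number.isInteger(v) && v >= 5 && v <= 200,
  Scheduler: (v) => ['normal', 'karras', 'exponential', 'sgm_uniform', 'simple', 'ddim_uniform'].includes(v),
  Sampler:   (v) => ['euler', 'euler_ancestral', 'heun', 'lcm'].includes(v), // abbreviated list
  Size:      (v) => [512, 768, 1024, 2048].includes(v),
  Batching:  (v) => [1, 2, 4, 8].includes(v),
};

function validateTunable(name, value) {
  const check = TUNABLES[name];
  if (!check) throw new Error(`unknown tunable: ${name}`);
  return check(value);
}
```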
Learn more about this Lilypad module on Github.
Check out our smart contracts docs!
DefiKicks is a decentralized, community-governed Data DAO on Filecoin that democratizes DeFi data aggregation and TVL calculations
See the Lilypad Example at Drand Off-chain Voting
DefiKicks is a decentralized, community-governed Data DAO (Decentralized Autonomous Organization) that operates on the Filecoin network, with the goal of democratizing DeFi data aggregation and TVL (Total Value Locked) calculations. It seeks to improve transparency and foster collaboration through a system of on-chain adapter proposals, off-chain voting, and interoperable token rewards.
At its core, DefiKicks is a counter to the centralization issues prevalent in traditional DeFi data aggregators such as DefiLlama. By shifting the TVL calculations to be performed on-chain, DefiKicks ensures data authenticity and transparency. In essence, it functions similarly to a version control system like 'Git,' allowing anyone to propose a new TVL adapter for a project.
Adapter proposals are initiated on-chain via an IPFS hash, with the adapter code hosted on IPFS. Voting is conducted off-chain using the time-lock encryption of Drand, with results brought back on-chain through Bacalhau Lilypad, a trusted oracle. Upon community approval, adapters are registered on-chain in the 'Registry' contract, which maintains a transparent and immutable record of approved adapters.
Active participants in the voting process who align with the majority are rewarded with KICK tokens. The issuance of these tokens is governed by a smart contract, incentivizing participation and wise decision-making.
Following adapter registration, the decentralized backend begins calculating data points using the code hosted on IPFS. This process employs Lit's Programmable Key Pairs and Lit Actions, with all TVL results hosted on a Ceramic Stream. The results are then viewable by users on the frontend, further promoting an open and transparent DeFi ecosystem.
DefiKicks uses a suite of modern technologies to power its platform, including smart contracts for governance, the LIT protocol for decentralized data governance calculation, Ceramic for decentralized data storage, libp2p for secure peer-to-peer network connections, ApeCoin for alternative voting options, Bacalhau for off-chain jobs execution, Filecoin FEVM for contract interactions, IPFS for decentralized data storage, Web3.Storage for reliable data transfer, and Drand for unbiased off-chain voting.
In essence, DefiKicks, with its combination of technologies, proposes a novel, transparent, and decentralized approach to DeFi data. It is a testament to the possibilities of decentralized governance and data analytics, aiming to usher in a new era of trust and accessibility in DeFi analytics.
DefiKicks operates on a foundation of decentralized technologies and protocols to ensure transparent, accessible DeFi data.
It uses smart contracts on Filecoin's FEVM for governance, including adapter proposal, voting, and registration.
The LIT protocol allows for secure, decentralized data calculations, while Ceramic facilitates decentralized data storage.
Lilypad + Bacalhau carries out off-chain jobs such as reward computation and vote tallying.
IPFS and Web3.Storage ensure secure and efficient data storage and transfer, while Drand provides unbiased off-chain voting through time-lock encryption.
A developer platform for using the Lilypad network to train and run AI models
The Lilypad ML workbench provides ML researchers, businesses, and others with a simple interface for leveraging the Lilypad network.
The ML workbench provides an interface to:
Run models that are currently available on the Lilypad GPU network
Add a new model to run on the network as a Lilypad module
Leverage Jupyter notebooks with Lilypad
Create multi-module flows
Fine tune models
Create an AI agent
In order to run an AI model on the Lilypad network, a docker image must be created for the program using the Lilypad module spec.
The ML workbench product is currently in beta testing and is not yet provided as a hosted solution. Reach out in the Lilypad Discord for more info!
Run existing Lilypad modules with a simple text or image prompt. The workbench will output an IPFS CID with the result.
Implement the Lilypad module allowlist and cache the models locally before they are needed! Pin the files to a local IPFS node packaged with the workbench platform.
Import a dataset and use the "Data" tool to fine-tune a model for a specific end user. Create a Lilypad module and save the module on IPFS. More on this soon!
The ML workbench provides a simple interface for creating and training AI agents. The Lilypad team is currently testing a variety of models and fine tuning techniques to determine the optimal path forward for this feature. More on this soon!
A guide for running Lilypad jobs in a Farcaster frame
In this guide we will be building a Farcaster frame for local development that runs a Lilysay prompt on the Lilypad Network.
Users of the frame can input a prompt and generate an image. The generated image will appear in the box above the input and will allow a user to view the ASCII art as an image.
This is an example for running a local version as inspiration for developers. The Lilypad CLI will need to be wrapped and the project hosted as an API to run in production and be used in a Farcaster Frame.
For this frame, create a NextJS 14 app:
Next, install the required dependencies. Coinbase's OnchainKit is used in this project:
Open the project in your code editor.
We will need to fund a wallet with Lilypad and Arbitrum Sepolia testnet tokens. Follow the first 2 sections labelled "Setting up MetaMask" and "Funding your wallet" from our Quick Start docs.
.env.local
Add the following into your .env.local file. The private key is used to run the CLI jobs. Make sure that your .env.local file is added to the .gitignore file, as your private key should not be exposed or pushed.
Your WEB3_PRIVATE_KEY can be retrieved from the MetaMask account details menu. For more info, check out the official guide from MetaMask on how to get your private key. Please do not push your private key to GitHub.
WEB3_PRIVATE_KEY='your-wallet key'
NEXT_PUBLIC_BASE_URL=http://localhost:3000
Run through this guide to install the Lilypad CLI on your machine. Select the CLI User script.
Framegear is a simple tool provided by the @coinbase/onchainkit package that allows you to run and test your frames locally without publishing the frame.
We will be using Framegear for this build. However, there are other libraries that can be used for this project instead of Framegear.
In a separate terminal, clone down the onchainkit repo and run Framegear:
Navigate to http://localhost:1337 and keep that window open for when we start to write the frame.
We will need to set up the metadata for our Next.js application that includes Farcaster frame information. It will configure the elements and a URL for frame posting, while also specifying Open Graph metadata for improved social sharing.
In app/page.tsx, add the following before the Home function declaration:
The UI elements for this frame are all rendered in the app/api/route.ts
file, which acts as a request handler for different routes or endpoints within the web application. It defines the logic for handling user input, generating responses, and managing application state. The main functions include processing user prompts, handling status checks, and generating images asynchronously.
Here’s how the routes are structured for this frame:
/api/frame?action=input: This route displays the initial user interface, which includes a text input field for the prompt and a button to submit the form. The user interface also updates dynamically based on the processing status, such as showing a placeholder image or the final generated image.
/api/frame?action=submit: This route processes the user input. When a prompt is submitted, the server initiates the image generation process asynchronously. While the image is being generated, the user sees a loading state, and they can check the progress.
/api/frame?action=check: This route checks the status of the image generation process. It updates the frame with either a completed image, an error message if the generation fails, or the processing state if the image is still being generated.
We also include a fallback in case an error occurs during the processing of the job.
/api/frame?action=save: Though not explicitly included, this could be an additional route for handling the logic of saving the generated image to a location for future access.
generateImage
The generateImage function handles the user input and generates the final image for display by utilizing the functions in app/services/cli.ts. It ensures that the image is generated asynchronously and that the result is available for display in the frame, or handled properly in case of any errors during the generation process.
Throughout the interaction process, various images are used. These images serve as visual cues during each step of the frame, such as when the user is prompted for input, while the image is being processed and once the final result is ready or if an error occurs.
To obtain the images used in this guide, save them from the source code here.
route.ts
Create the api/frame directories in your project. Inside of the app/api/frame directory, create a file named route.ts and add the following code:
cli.ts
This is where the execution of the Lilysay job happens. It will run the job, wait for it to finish and then create an image from the return value.
Inside of the app directory, create a new directory named services, and inside of that create a file named cli.ts. The functions inside this file will allow us to send a prompt to the Lilypad Network using the user prompt and a predefined command that runs asynchronously in the terminal. Once the command is executed, Lilypad processes the input through its Lilysay module and outputs the results in the form of an ASCII image, which is then converted into a displayable image using an SVG-to-PNG transformation.
Here are the 3 functions inside this file:
createImageBufferFromAscii: Converts ASCII text into an SVG image and then uses the sharp library to convert the SVG into a PNG image buffer. This allows displaying or saving an image representation of the ASCII text.
runCliCommand: Executes a Lilypad CLI command to process the user's input text, captures the command's output, and converts it into an image buffer. It handles the entire process of running the command, capturing the output, and managing errors.
extractStdoutFilePath: Parses the CLI command's stdout to extract the file path where the Lilypad CLI has saved the output. It uses a regex pattern to identify the path in the command's output.
The following code snippet demonstrates the process:
Sending the Request: The user's input text is passed directly into the Lilypad CLI command using a shell process. The input text is embedded within the command's arguments and executed asynchronously in the terminal.
Handling the Response: After the CLI command completes, the output is captured and processed. The response includes the file path to the generated ASCII image, which is then read from the file system and converted into a PNG image for further use.
Error Handling: If an error occurs during the execution of the CLI command or file processing, it is logged to the console, and the process is terminated with appropriate error messaging.
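To illustrate the shape of two of the helpers described above, here is a hedged sketch. The real cli.ts uses the sharp library to rasterize the SVG into a PNG; that step is omitted so the sketch stays dependency-free, and the regex is an assumption about the CLI's output format rather than a documented contract:

```javascript
// Sketch of asciiToSvg: wraps each line of ASCII text in an SVG <text>
// element. The real helper then feeds this SVG to sharp for PNG output.
function asciiToSvg(ascii, fontSize = 14) {
  const lines = ascii.split('\n');
  const rows = lines
    .map((line, i) =>
      `<text x="10" y="${(i + 1) * fontSize}" font-family="monospace">${line}</text>`)
    .join('');
  return `<svg xmlns="http://www.w3.org/2000/svg" width="800" height="${lines.length * fontSize + 20}">${rows}</svg>`;
}

// Sketch of extractStdoutFilePath: pulls an absolute path ending in
// "/stdout" out of the CLI output. The exact pattern is an assumption.
function extractStdoutFilePath(cliOutput) {
  const match = cliOutput.match(/(\/[^\s]+\/stdout)/);
  return match ? match[1] : null;
}
```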
The Framegear server is running. Next, run your local server in your frame project. We will need to make sure it is running on port 3000:
Navigate to the Framegear host http://localhost:1337 and you will see an input labeled "Enter your frame URL". Add http://localhost:3000 to that and click "Fetch".
You should now see your frame and be able to interact with it. Enter a prompt to display in the Lilysay ASCII image!
As the job is processed, a “Check Status” button will be displayed. Clicking this will check if the job has been completed. Once a job is completed, the results will be displayed in the frame. If there is an issue along the way, an error message will be displayed and the user will be prompted to try again.
Running Lilysay jobs is just one of the ways you can utilize Lilypad in a frame, but you can run jobs with any available modules on the Lilypad Network. Some of these include:
Stable Diffusion image
Stable Diffusion video
Ollama
A selection of projects built with Lilypad by hackers & community members - get inspired!
Want to be featured on this page? Get in touch in the Lilypad Discord server!
Common FAQs when running a Lilypad node
Please view these resources before asking questions!
For complex issues, bug reports, or feature requests, open a discussion in the Lilypad-Tech GitHub organization discussions.
Navigate to the discussions page, select "New Discussion", choose "rp-issues", and fill out the template.
Without an open discussion, our team will not be able to help troubleshoot the problem.
For quick questions or minor issues, use the Lilypad Discord channel and provide the following info.
Description (including Lilypad version running on your node)
Hardware Info (including Linux/Windows version)
Related blockchain/ETH addresses of transaction hashes
Output Logs - sudo systemctl status lilypad-resource-provider
Related links/urls
Screenshots
IMPORTANT: When sharing screenshots of your logs or node information, make sure to remove or block out any references to your node's private key. Keeping your private key safe and away from the public eye is crucial!
If the RP public keys are not found within the Arbiscan network or Leaderboard
Updating and restarting the Lilypad services regularly (daily) is encouraged throughout IncentiveNet.
Check the wallet used by the RP to ensure it has enough Arbitrum Sepolia ETH and LP tokens
Search the wallet ID on Arbiscan to view transactions occurring every hour
After following the checklist above, restart the node if needed.
Restart lilypad and bacalhau services
Resource Providers can run on the Lilypad Network without a GPU, however only hardware with a GPU is currently rewarded with Lilybit_ rewards.
Lilypad RPs currently support Linux installs only. Running an RP on Windows is experimental.
No, this would be considered detrimental to the Lilypad network and cannot be rewarded.
In order to run jobs successfully, Lilypad RPs must ensure all resources on the server are available. If a single GPU is shared by many RPs, splitting the server's resources between them, each RP will not have enough resources to run a Lilypad job.
Here's a quick guide on setting up your own RPC for a Lilypad node.
First, if the Lilypad node is running, stop the system (if the node is not running, disregard this first step):
Uninstall Bacalhau:
Finally, start the resource provider:
One fix is to install Kubo (go-ipfs) and then export the path to /app/data/ipfs using:
Export the path first, then turn Bacalhau back on and see if that corrects it.
This indicates that the CUDA version on the RP is incorrect. Install the CUDA version suitable for the GPU type and recompile Lilypad.
To check if the RP is running use the following command:
This will give a live output from the Lilypad node. The logs will show the node running and accepting jobs on the network.
Run the following command to get more info from your node:
Once a day at 00:10 UTC. If no rewards appear after 24 hours of expected uptime, view the logs and Arbiscan transactions from the resource provider to ensure online status.
The slashing mechanism now uses a tiered/progressive system to slash RPs more fairly. Penalties increase progressively the longer the node remains offline: 1-5 days offline results in a 2.5% slash per day, 5-10 days offline a 5% slash per day, and more than 10 days offline a 10% slash per day.
A grace period for RP downtime is now included in the slashing mechanism. RPs will earn 2 days of a “grace period” after every 30 days of continuous service provided.
These 2 days will be applied to 2 subsequent down days recorded by the RP allowing the RP to avoid slashing for these 2 days.
Grace period days never accumulate beyond 2. Once they are used, the 30-day count to earn 2 new grace days restarts.
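One plausible reading of the tiered schedule described above can be sketched as a small calculator. This is illustrative only: it assumes the grace period has been earned and that the per-day rate depends on how long the node has been offline so far; the actual on-chain accounting may differ.

```javascript
// Illustrative slashing calculator for the tiered schedule above.
// Assumes 2 grace days have been earned; actual accounting may differ.
function totalSlashPercent(daysOffline, graceDays = 2) {
  let total = 0;
  for (let day = 1; day <= daysOffline; day++) {
    if (day <= graceDays) continue; // grace period: no penalty
    if (day <= 5) total += 2.5;     // days 1-5 offline: 2.5% per day
    else if (day <= 10) total += 5; // days 5-10 offline: 5% per day
    else total += 10;               // beyond 10 days: 10% per day
  }
  return total;
}
```

For example, under these assumptions a node offline for 5 days with its grace period intact would be slashed 7.5% in total (days 3-5 at 2.5% each).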
However, an RP will still receive a multiplier relative to how many 4-hour windows of PoWs it completed in that day. This figure shows the daily multiplier for a given number of 4-hour windows of PoWs completed in a day.
Resource providers are expected to have the latest Lilypad version installed on their systems. The installation instructions can be found here:
Currently, it's not possible. However, it's a very good feature request and the team is evaluating!
This is most likely because you tried to export your mnemonic seed phrase instead of the private key. A private key typically looks like this: 4c0883a69102937d6231471b5dbb6204fe512961708279df95b4a2200eb6b5e7 and consists of 64 hexadecimal characters.
See the Twitter Thread
Watch the Open Data Hack Presentation
See the Twitter Thread
See the Twitter Thread
See the Twitter Thread
Watch the HackFS Presentation
Ensure the RP (node) is running the latest for your preferred environment
Confirm network activity. Is the RP transacting on (Sepolia) with transactions verified and running consistently?
Is the RP online in the and in the ?
Does the RP have Lilypad Tokens (LP) and Arbitrum ETH?
The Discord channel is the primary location for Resource Provider announcements. Announcements in this channel are also posted on the Lilypad page.
Check out this for setting up a Lilypad node using Ubuntu Linux. Note: you may notice in the MetaMask setup part of the video that the network name and RPC URL are old. Disregard this; the video was recorded with our old testnet info, not the current Lilypad testnet on Arbitrum. Otherwise, the directions here are current.
Read more about running a Lilypad node in the .
To view the status of a Lilypad Resource Provider ensuring points are earned, follow the steps below or check out this .
View points earned in the
If this doesn't solve the problem, with our team.
Recommendation guide using Proxmox found . More on this coming soon!
Reinstall Bacalhau using in the docs.
The CUDA version of the RP does not match the GPU driver. Please refer to this to repair installation.
Read more in the instructions.
The way the system is currently set up, an RP will lose the if the hashrate call (every 4 hours) is missed. If the PoW happens in the window of time that a node is restarting/offline, then it's missed.
To stay up to date with the latest releases, check the in the Lilypad Discord or visit the . Along the top menu, click the "Watch" dropdown and you will see a section named "Custom". Selecting "Releases" will allow you to get notified of any new releases!
Check out the .
a custom network (Lilypad)
a wallet's Private Key
the Lilypad Token to a wallet
a wallet with LP and ETH
No ETH or LP in your wallet? ()
Join our for more help!
Setting up a feeless frontend for Lilypad
In this tutorial, we will review how to create a feeless front end for a Lilypad module. To do so, we will:
Create a new NextJS project.
Create a CLI Wrapper to run the module.
Create a server to handle requests from the browser.
Create our user interface to allow a user to enter a prompt to run the module.
Run the front end.
Create a new Next JS project by running the following:
In your project folder, create a new file named .env to manage your environment variables. Paste in the following, and replace “<your-private-key-here>” with the private key of the wallet that will be charged for all of the jobs.
To avoid publishing your private key online, add .env to your .gitignore file.
Install dotenv to access your environment variables:
In your app folder, create a folder for pages, then a folder for api, and in that folder, create a file named cliWrapper.js. Paste the following code into cliWrapper.js:
Install Express and CORS:
In your app folder, create a file named server.js and paste the following code:
In the App folder, delete the default content in the Page.tsx file and paste in the following code:
To run your front end on your local host, you will need to update the next.config.js file. Open the file and paste the following:
Now you're ready to test your front end!
Open two terminals, one to run your server and another to run your front end. In both terminals, navigate to your App folder. For your server terminal, run the following:
You should see “Server listening at http://localhost:3001”.
In your second terminal, run your front end with the following command:
You should see something like this.
In your browser, navigate to http://localhost:3000/ and you should now be able to see your front end. It should look like this.
Type in a prompt and click on the enter button. If you look at the terminal running your server, you’ll notice that it says “Lilypad Starting…”. Lilypad will print to the console once the job is complete, and it should look something like this.
Now check your browser, and you should see your image.
Congrats! You have successfully integrated Lilypad into your front end.
If you were able to complete the tutorial, share your front end with us on Twitter!
If you need any help or have any feedback, hop into our Discord!
View the source code.
These documents provide a background for ongoing Lilypad research.
If you have questions or find something in here interesting, please feel free to raise a discussion in the GitHub or in the Lilypad Discord server!
These documents provide a background for the ongoing research on Lilypad. They are primarily focused on the game theory and cryptoeconomics of the protocol, and include an introduction to verifiable computing, an overview of the specific problems we are tackling, a brief overview of prior work, a description of our approach, and mechanisms that we plan to test in simulation.
Alison Haire
CEO
prev: Filecoin, IBM, Accenture
Lilypad YouTube
An innovative NFT game where characters evolve and improve their tasks over time.
SlickCI One click deployment of CI/CD workflows on to decentralised cloud infrastructure.
dub3.ai dub3.ai is an innovative marketplace harnessing the power of Cartesi, enabling users to transform their voice into unique NFTs and generate text-to-speech content using cutting-edge AI technology.
A custom Lilypad module for SQL streaming through kamu.dev. This provides provenance and version control of derivative datasets. Integrates the desci project kamu.dev into a Bacalhau job to be consumed by Lilypad v2.
Dive into the cosmic engineering that propels your personalized galaxy of knowledge.
ML Tooling for Security Audits. Validate and test the status of deployed Lilypad Compute Networks. The contracts ensure seamless interaction with the network, while the test script checks for network availability and contract functionality.
This platform, much like Docker Hub, is a one-stop destination for all your module needs as a Lilypad user. Includes a feature-rich playground where you can seamlessly test job specifications, experiment and fine-tune your code in a safe environment.
A user can plug obsidian-lilypad into their obsidian instance to fetch dpids from desci labs, have chat gpt summarize it, and use that as a prompt in runSDXL in lilypad. Helia/ipfs and kubo delegated content routing is used to splay dags and fetch content from IPFS.
The core objective of DeCenter AI is to democratize and decentralize AI model training.
Lilylatte is a DataDAO designed to bridge the gap between onchain and offchain data, focusing on web3 market research. Data owners, or web3 citizens, link their wallets and engage in dialogues generated by our LLM model, 'Lily.'
An intents computational framework for describing lilypad workflows with a serialization format that allows you to replace variables within solidity, onchain which allows you to configure the receiveResults to feedback and trigger future jobs.
I use historical data about the network gathered by Saturn Moonlet to prepare and publish analytical reports on Saturn Observatory website every month.
RejuvenateAI is the first community blockchain based project powered by community built for community to promote healthy living and achieve healthy locations where people live up to a 100 years all around the world.
This project provides a JavaScript wrapper for interacting with the Lilypad CLI from Node.js. The wrapper uses the node-pty package to spawn a pseudo-terminal for executing Lilypad CLI commands, enabling front-end developers to integrate Lilypad with other tools.
LilyWrite combines blockchain technology and artistic expression, enabling users to craft and own unique poetic artworks. Users generate poems using credits, represented by LW tokens, and acquire them as distinctive non-fungible tokens (NFTs), each symbolizing a unique blend of creativity and expression.
GreenBadge is an innovative on-chain funding platform dedicated to fostering sustainability and a brighter future for our planet. Within our platform, organizations committed to these crucial goals will gain access to a vast network of donors and like-minded brands who share a passionate vision for the future of our environment.
CLI Package for running 3D surface reconstruction with decentralized compute.
Empowering Patients with ownership of their electronic health records.
A decentralized credit score system using zkSNARKs, decentralized computation & storage.
Elevating financial privacy in crypto withdrawals: withdraw funds via anonymous transfers and secure transactions, preserving anonymity.
Verifiable on-chain Large Language Model drift benchmarking inspired by
Uncensored 100% decentralized GPT AI chat running on the blockchain.
Trusted audio in the age of deepfakes 🔒🎙 Generative AI is a threat to society. It enables disinformation, manipulation, and political subversion. We've built the world's first attested microphone and used ZK-SNARKs to protect authenticity and privacy.
A FastChat LLM inference module for Lilypad v1.
Decentralized AI Assistant for Web3 - Perform on-chain transactions via prompt.
Private-Model Generation & Service Platform -> Your very own unique personality in an AI Avatar (web3 character.ai)!
Based on academic research, EasyCraft built an AI- and blockchain-based supply chain management implementation that matches factory capacity with customer orders for economic efficiency.
A decentralized, community-governed Data DAO on Filecoin that democratizes DeFi data aggregation and TVL calculations with on-chain adapter proposals, off-chain voting & interoperable tokens.
A proof of concept of a decentralized yield data aggregator. It fetches pool data, runs calculations on it, and shows the user the best APY value for the reserve via a push notification.
Pensieve is a decentralized file storage platform for reliving the moments in your memories and sharing them with others. As you upload these memories, you can also choose to take part in documenting the HISTORY OF APES (HOMO SAPIENS).
A Swiss Army knife for Bacalhau: everything you need in an easily accessible dashboard.
An easy SDK for AI models running on decentralized compute.