Lilypad ML Workbench
A developer platform for using the Lilypad network to train and run AI models
The Lilypad ML workbench gives ML researchers, businesses, and other users a simple interface for leveraging the Lilypad network.
The ML workbench provides an interface to:
Run models that are currently available on the Lilypad GPU network
Add a new model to run on the network as a Lilypad module
Leverage Jupyter notebooks with Lilypad
Create multi-module flows
Fine-tune models
Create an AI agent
To run an AI model on the Lilypad network, a Docker image must be created for the program following the Lilypad module spec.
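As a rough illustration of what that program might look like, here is a minimal sketch of an inference script that a module's Docker image could wrap. The input variable name, output path, and file layout are assumptions for the example, not part of the official module spec.

```python
# Minimal sketch of a program packaged into a Lilypad module's Docker image.
# The "PROMPT" variable name and "/outputs" path are illustrative assumptions.
import json
import os

def main():
    # Modules typically receive inputs at run time; an environment variable
    # is assumed here for the example.
    prompt = os.environ.get("PROMPT", "a default prompt")

    # Placeholder for the actual model inference performed inside the container.
    result = {"prompt": prompt, "output": f"model response to: {prompt}"}

    # Write results to an output directory so they can be published to IPFS
    # after the job completes (directory name assumed for the example).
    output_dir = os.environ.get("OUTPUT_DIR", "/outputs")
    os.makedirs(output_dir, exist_ok=True)
    with open(os.path.join(output_dir, "result.json"), "w") as f:
        json.dump(result, f)

if __name__ == "__main__":
    main()
```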
The ML workbench product is currently in beta testing and is not yet provided as a hosted solution. Reach out in the Lilypad Discord for more info!
Run existing Lilypad modules with a simple text or image prompt. The workbench will output an IPFS CID with the result.
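For example, once a job returns a CID, the result can be fetched from any IPFS gateway. The sketch below is illustrative (the CID is a placeholder) and is not part of the workbench itself:

```python
# Retrieve a job result from the CID returned by the workbench,
# via a public IPFS gateway.
import requests

cid = "Qm..."  # placeholder: replace with the CID returned for your job
url = f"https://ipfs.io/ipfs/{cid}"

response = requests.get(url, timeout=60)
response.raise_for_status()
print(response.text)  # use response.content for binary outputs such as images
```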
Implement the Lilypad module allowlist and cache models locally before they are needed by pinning the files to a local IPFS node packaged with the workbench platform.
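As an illustration, pinning a set of CIDs to a local node can be done through Kubo's HTTP RPC API. The CID below is a placeholder and the list is a hypothetical allowlist, not the actual Lilypad module allowlist:

```python
# Sketch: pre-pin model files to a local IPFS (Kubo) node so they are cached
# before jobs need them. Uses Kubo's default RPC address.
import requests

IPFS_API = "http://127.0.0.1:5001/api/v0"

allowlisted_cids = [
    "QmExampleModelWeightsCid",  # placeholder CID for cached model weights
]

for cid in allowlisted_cids:
    # Kubo's pin endpoint takes the CID as the "arg" query parameter.
    resp = requests.post(f"{IPFS_API}/pin/add", params={"arg": cid}, timeout=600)
    resp.raise_for_status()
    print(f"pinned {cid}: {resp.json()}")
```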
Import a dataset and use the "Data" tool to fine-tune a model for a specific end user. Create a Lilypad module and save the module on IPFS. More on this soon!
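As a rough sketch of the kind of fine-tuning job this workflow produces, the example below uses the Hugging Face Trainer on a plain-text dataset. The model name, dataset file, and hyperparameters are placeholders; the workbench's actual fine-tuning pipeline may differ:

```python
# Illustrative fine-tuning sketch with Hugging Face Transformers.
# Model, dataset path, and hyperparameters are placeholders, not workbench defaults.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # assumption: any causal LM supported by Transformers
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# assumption: the imported dataset is a plain-text file, one sample per line
dataset = load_dataset("text", data_files={"train": "my_dataset.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")  # then package as a Lilypad module
```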
The ML workbench provides a simple interface for creating and training AI agents. The Lilypad team is currently testing a variety of models and fine-tuning techniques to determine the optimal path forward for this feature. More on this soon!