Llama2
Run Llama 2 on the Lilypad network
These instructions provide steps for running the Llama2 module on the Lilypad network using Docker and the Lilypad CLI. Find the module repo here.
Getting Started
Prerequisites
Before running llama2, make sure you have the Lilypad CLI installed on your machine and that your private key environment variable is set. This is required for operations on the Lilypad network.
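As a sketch, the private key is supplied through an environment variable; the `WEB3_PRIVATE_KEY` name follows the Lilypad installation docs, but confirm it for your CLI version:

```shell
# Assumption: the CLI reads the key from WEB3_PRIVATE_KEY, as in the Lilypad
# install docs; confirm the variable name for your CLI version.
export WEB3_PRIVATE_KEY="<your-private-key>"  # never commit or share this value

# Sanity-check that the variable is set before submitting a job
if [ -z "$WEB3_PRIVATE_KEY" ]; then
  echo "WEB3_PRIVATE_KEY is not set" >&2
  exit 1
fi
echo "private key environment variable is set"
```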
Learn more about installing the Lilypad CLI and running a Lilypad job with this video guide.
Run Llama2
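A hedged sketch of the run command, following the `lilypad run <module>:<version> -i <input>` pattern from the Lilypad docs. The repository path shown is an assumption (use the path from the module repo), and the `prompt` input name should likewise be checked against the module's README; the version hash is the one quoted in the notes below:

```shell
# Assumption: repo path below is illustrative -- take the real one from the
# module repo. The version hash is the one listed in the notes.
MODULE="github.com/lilypad-tech/module-llama2:6d4fd8c07b5f64907bd22624603c2dd54165c215"

# Guarded so the sketch is safe to paste even where the CLI is absent.
if command -v lilypad >/dev/null 2>&1; then
  lilypad run "$MODULE" -i prompt="What is the capital of France?"
else
  echo "lilypad CLI not found; install it first (see Prerequisites)"
fi
```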
Notes
Ensure you have the necessary permissions and resources to run Docker containers with GPU support.
The module version (6d4fd8c07b5f64907bd22624603c2dd54165c215) may be updated. Check for the latest version before running.
Adjust port mappings and volume mounts as needed for your specific setup.
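For reference, invoking the module's container directly with Docker might look like the sketch below. The image name is a placeholder (use the image referenced in the module repo), and the port and volume flags merely illustrate the options mentioned in the notes above:

```shell
# <module-image> is a placeholder -- substitute the image from the module repo.
IMAGE="<module-image>:6d4fd8c07b5f64907bd22624603c2dd54165c215"
mkdir -p outputs

# Print the command for review; drop the leading `echo` to actually run it.
# Port and volume mappings are illustrative -- adjust for your setup.
echo docker run --rm --gpus all \
  -p 8080:8080 \
  -v "$(pwd)/outputs:/outputs" \
  "$IMAGE"
```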
Llama2 Output
To view the results locally, navigate to the folder path provided in the job result.
In the /outputs folder, you'll find the text generated by the model.
To view the results on IPFS, navigate to the IPFS CID result output.
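As a sketch, once the job finishes the CLI prints a local results path and an IPFS CID; the directory path below is a placeholder for the one printed by your job:

```shell
# Placeholder -- use the results directory printed by `lilypad run`.
RESULTS_DIR="<path-printed-by-the-job-result>"

# List and read the generated text in the outputs folder.
ls "$RESULTS_DIR/outputs" 2>/dev/null || echo "results directory not found: $RESULTS_DIR"
cat "$RESULTS_DIR"/outputs/* 2>/dev/null || true

# The same results can be fetched through any public IPFS gateway, e.g.:
# https://ipfs.io/ipfs/<CID>
```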