LoRA stands for Low-Rank Adaptation and is a mathematical technique to reduce the number of parameters that are trained in a model - so instead of fine-tuning all the weights that constitute the weight matrices of the pre-trained large language model, the original weights are frozen and two much smaller matrices, whose product approximates the weight update, are trained instead.
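To make the idea concrete, here is a minimal sketch of the low-rank trick in plain NumPy. This is illustrative only - the dimensions, rank, and initialisation below are assumptions, not the Lilypad module's actual implementation:

```python
import numpy as np

# Illustrative sketch of the LoRA idea, not the Lilypad module's implementation.
# The full d x d weight matrix W stays frozen; only the low-rank factors A and B
# (rank r, with r << d) are trained, so the effective weight is W + B @ A.

d, r = 1024, 8                           # hidden size and LoRA rank (both assumed)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # pre-trained weights: frozen
A = 0.01 * rng.standard_normal((r, d))   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init so
                                         # training starts from the original model

def forward(x):
    # Never materialise the d x d update; apply the small factors directly.
    return W @ x + B @ (A @ x)

y = forward(rng.standard_normal(d))

print(f"full fine-tune: {d * d:,} params; LoRA: {2 * d * r:,} params")
# full fine-tune: 1,048,576 params; LoRA: 16,384 params
```

With rank 8, the trainable parameter count in this toy example drops by a factor of 64, which is why a single GPU is enough.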
What this means in practice is that instead of needing to train a custom model from scratch on all the original input data (and needing many GPUs to do so), with LoRA you can fine-tune an existing model (such as Stable Diffusion SDXL 0.9) to be biased towards a certain result on just one GPU.
For example, an open-source Stable Diffusion model can be fine-tuned with LoRA to produce images in the style of Claude Monet's paintings.
Fun fact: this is how Waterlily.ai trains artist models - look how good the results are even without an up-to-date Stable Diffusion model like SDXL 0.9!
A Claude Monet data set is available in zip form on IPFS here: ipfs://bafybeiglwmvudxxethpi46el3o5m44lrki2sjzgs7whvb6xaz6e65wm7am
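If you'd like to inspect the dataset first, you can fetch it through any public IPFS gateway or, assuming you have a local IPFS (Kubo) node, with:

ipfs get bafybeiglwmvudxxethpi46el3o5m44lrki2sjzgs7whvb6xaz6e65wm7am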
One of the easiest ways to upload a dataset to IPFS is to use web3.storage.
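For example, you can drag and drop the zip into the web3.storage web console, or upload from the terminal with their w3 CLI - a sketch, assuming a Node.js environment and a web3.storage account (substitute your own email, and check the web3.storage docs for the current setup flow):

npm install -g @web3-storage/w3cli
w3 login you@example.com
w3 space create lora-datasets
w3 up images.zip

The final command prints the root CID of the upload, which is what you pass as images_cid below.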
To run a LoRA fine-tuning job, provide the CID of the training data to the command:
lilypad run lora_training:v0.1.7-lilypad1 '{images_cid: "bafybeiah7ib5mhzlckolwlkwquzf772wl6jdbhtbuvnbuo5arq7pcs4ubm", seed: 3}'
NB: the params above should be YAML, e.g. {seed: 42, images_cid: 'Qm...'}, where images_cid resolves to an images.zip containing the training images.
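If your training images are sitting in a local folder, a short Python one-liner produces the expected archive (the folder name monet_images here is a hypothetical placeholder):

```python
import shutil

# Zip the contents of ./monet_images into ./images.zip for upload to IPFS.
# "monet_images" is a placeholder; point root_dir at your own image folder.
shutil.make_archive("images", "zip", root_dir="monet_images")
```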
This will output a result model CID, which can then be used to generate new images in this particular style:
lilypad run lora_inference:v0.1.7-lilypad1 '{lora_cid: <CID result from above>, prompt: "an astronaut riding a unicorn in the style of <s1><s2>", seed: 3}'
NB: the <s1><s2> tokens are the placeholders the fine-tuning step binds to the new style; keep them in the prompt to invoke it.
[Smart Contract] Running LoRA Fine Tuning [coming soon]