LLM Inference
A Fast Chat LLM Inference Module for Lilypad
This LLM Inference Module is a community-contributed module developed at AugmentHack.xyz. The repo for this module can be found here.
See the original AugmentHack entry below:
Usage:
Inputs:
Here "paramsStr" is a question for the LLM, passed in CID form. For example, https://ipfs.io/ipfs/QmcPjQwVcJiFge3yNjVL2NoZsTQ3GBpXAZe21S2Ncg16Gt is a bare file CID containing the question as plain text.
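Since the input is just a plain-text question pinned to IPFS, preparing one locally is straightforward. A minimal sketch (the question and file name below are illustrative, not from the module docs):

```python
from pathlib import Path

# Hypothetical example question -- any plain-text question works.
question = "What is the capital of France?"
path = Path("question.txt")
path.write_text(question)

# The file would then be added to IPFS (e.g. with `ipfs add question.txt`)
# and the resulting CID passed to the module as "paramsStr".
print(path.read_text())
```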
To use it you would run:
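The exact command isn't preserved on this page. As a sketch, assuming the Lilypad CLI is installed and the module is registered under the name `fastchat` (both of these are assumptions, not confirmed by the repo), the invocation would look something like:

```bash
# Hypothetical invocation: the module name and argument shape are assumptions.
lilypad run fastchat '{"paramsStr": "QmcPjQwVcJiFge3yNjVL2NoZsTQ3GBpXAZe21S2Ncg16Gt"}'
```

Check the module repo linked above for the exact module name and parameter format.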
Outputs:
The output will be an IPFS CID. For example, running the above input would produce the following link:
https://ipfs.io/ipfs/QmVNXCAfJgER6U7Z5XT8QaAVFPdwmtSFE6c9sUaAx7ttZs
Under link/output/result.json you will see the model's response in JSON form.
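The exact JSON schema isn't reproduced on this page, so the payload below is hypothetical; the real field names may differ. Parsing it is a one-liner with the standard library:

```python
import json

# Hypothetical result.json payload -- the real schema may differ.
raw = '{"result": "Paris is the capital of France."}'
data = json.loads(raw)
print(data["result"])
```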
Pssst... here's a question on Claude Monet you could try too ;) bafybeihu62yl76fcypidaiz35gq3yjguxawy5zzwadzvlcgpnfkuy2do3i