- [Stable Diffusion XL](https://huggingface.co/spaces/stabilityai/stable-diffusion) — Images
- [Riffusion](https://replicate.com/riffusion/riffusion) — Music
- [MusicGen](https://huggingface.co/spaces/facebook/MusicGen) — Music
- [Point-e](https://replicate.com/cjwbw/point-e) — 3D Objects
- [StarChat-β](https://huggingface.co/HuggingFaceH4/starchat-beta) — Coding Assistant


I noticed there didn't seem to be a community about large language models, akin to r/LocalLLaMA, so maybe this will be it. For the uninitiated, you can easily try a bleeding-edge LLM in your browser [here](https://huggingface.co/spaces/HuggingFaceH4/falcon-chat).

If you loved that, some places to get started with local installs and execution:

- https://github.com/ggerganov/llama.cpp
- https://github.com/oobabooga/text-generation-webui
- https://github.com/LostRuins/koboldcpp
- https://github.com/turboderp/exllama

And for models in general, the renowned TheBloke provides the best and fastest releases: https://huggingface.co/TheBloke
