A local LLM not using llama.cpp as the backend? Daring today, aren't we.
Wonder what its performance is in comparison.
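If you want to check yourself, here's a rough tok/s timing sketch for the llama.cpp side of the comparison (assumes llama-cpp-python is installed and you have a local GGUF file; the model path and prompt are placeholders, not anything from this thread):

```python
# Rough tokens/sec measurement via llama-cpp-python (llama.cpp bindings).
# Assumption: "model.gguf" is a placeholder path to a local quantized model.
import time
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048, verbose=False)

start = time.perf_counter()
out = llm("Explain attention in one paragraph.", max_tokens=128)
elapsed = time.perf_counter() - start

# The completion dict follows the OpenAI-style schema, including token usage.
generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tok/s")
```

Run the same prompt and max_tokens through the other backend and compare the tok/s numbers; just keep quantization, context size, and hardware identical or the comparison is meaningless.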