Hi, I'm starting to see people using Llama 3 to generate articles. It seems better than any other model at the moment: it produces far better content than GPT-4 and follows instructions better than other LLMs.
Would you look into implementing it?
Tim
May 9, 2024, 5:00am
2
It's accessible here: meta-llama/Meta-Llama-3-70B-Instruct - Demo - DeepInfra
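For anyone who wants to try it programmatically before SCM supports it, here's a rough sketch of hitting DeepInfra's OpenAI-compatible chat endpoint from Node.js. The base URL and model id are taken from DeepInfra's docs as I remember them, so double-check them, and DEEPINFRA_API_KEY is your own key:

```ts
// Rough sketch: query Meta-Llama-3-70B-Instruct through DeepInfra's
// OpenAI-compatible endpoint (base URL is an assumption - check their docs).
const response = await fetch("https://api.deepinfra.com/v1/openai/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Bearer auth with your own DeepInfra API key
    Authorization: `Bearer ${process.env.DEEPINFRA_API_KEY}`,
  },
  body: JSON.stringify({
    model: "meta-llama/Meta-Llama-3-70B-Instruct",
    messages: [{ role: "user", content: "Write a 100-word intro about local LLMs." }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```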
Inside SCM: if you have a Mac, you can install the model on your computer and access it through SCM's built-in OpenAI API support.
A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support! - getumbrel/llama-gpt
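Once llama-gpt is running it exposes an OpenAI-compatible API, so the same kind of request works against localhost. A minimal sketch, assuming the default API port 3001 and the standard /v1/chat/completions route (check the repo's README for your setup):

```ts
// Minimal sketch: talk to a locally running llama-gpt instance.
// Port 3001 and the model name are assumptions - adjust to your install.
const res = await fetch("http://localhost:3001/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama-2-7b-chat", // whatever model your llama-gpt instance is serving
    messages: [{ role: "user", content: "Outline an article about self-hosted chatbots." }],
  }),
});

const json = await res.json();
console.log(json.choices[0].message.content);
```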
I also see that it's available for Node.js, which means it should be possible for SCM to download and run it on your machine for free.
However, the memory requirements are steep, i.e. around 64 GB of RAM to run it locally.
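On the Node.js point, the usual route is the node-llama-cpp package running a GGUF build of the model. A rough sketch, assuming the v3-style API (getLlama / loadModel / LlamaChatSession) and a model file you've already downloaded; the exact calls may differ between versions, so treat this as a shape rather than a recipe:

```ts
import { getLlama, LlamaChatSession } from "node-llama-cpp";

// Rough sketch of running a local GGUF model from Node.js.
// The API shown is the v3-style one and the model path is a placeholder -
// check the node-llama-cpp docs for your installed version.
const llama = await getLlama();
const model = await llama.loadModel({
  modelPath: "./models/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
});
const context = await model.createContext();
const session = new LlamaChatSession({ contextSequence: context.getSequence() });

const answer = await session.prompt("Give me three blog post ideas about local LLMs.");
console.log(answer);
```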
Tim
May 20, 2024, 11:35pm
3
It's now possible to use the OpenAI alternative option, and also LM Studio, to run Llama on your computer.
How to connect SCM to LM Studio (llama3 etc) to run offline AI models for FREE
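For anyone wiring this up by hand: LM Studio's local server speaks the OpenAI API, by default at http://localhost:1234/v1, and accepts any placeholder API key. A quick sketch using the openai Node package pointed at it; the model name is just an example, use whichever model identifier you've loaded in LM Studio:

```ts
import OpenAI from "openai";

// Point the standard OpenAI client at LM Studio's local server.
// Default port is 1234; the API key can be any placeholder string.
const client = new OpenAI({
  baseURL: "http://localhost:1234/v1",
  apiKey: "lm-studio",
});

const completion = await client.chat.completions.create({
  // Use the model identifier shown in LM Studio for the model you've loaded
  model: "meta-llama-3-8b-instruct",
  messages: [{ role: "user", content: "Draft an outline for an article on offline AI models." }],
});

console.log(completion.choices[0].message.content);
```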