Hi, I'm starting to see people using Llama 3 to generate articles. It seems to be better than any model out there at the moment: it generates far better content than GPT-4 and follows instructions better than other LLMs.
Would you look into implementing it?
Tim
May 9, 2024, 5:00am
2
It's accessible via meta-llama/Meta-Llama-3-70B-Instruct - Demo - DeepInfra
Inside SCM
If you have a Mac you can install the models on your computer and access them using SCM's inbuilt OpenAI API support.
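For anyone curious what that looks like, here's a minimal sketch of building an OpenAI-style chat request aimed at a locally hosted model. The base URL, port, and model id below are assumptions, not SCM settings; substitute whatever your local server actually reports.

```javascript
// Sketch only: most local servers that mimic OpenAI expose the same
// /v1/chat/completions endpoint shape. The URL and model id here are
// placeholders for illustration.
function buildChatRequest(baseUrl, model, prompt) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

const req = buildChatRequest(
  "http://localhost:8080",       // assumed local server address
  "Meta-Llama-3-70B-Instruct",   // assumed model id
  "Write a short intro paragraph about espresso."
);
// To actually send it: const res = await fetch(req.url, req.options);
console.log(req.url);
```

The nice part of this pattern is that swapping between a hosted API and a local model is just a change of base URL.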
A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
I also see that it's available for Node.js, which means it's possible for SCM to download and run it on your machine for free.
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema on the model output on the generation level
However, there are some steep memory requirements, e.g. 64 GB of RAM to run it locally.
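The JSON-schema enforcement mentioned above is the interesting bit for article generation, since the model is constrained to emit parseable output. A rough sketch of the kind of schema and sanity check you'd pair with it (the field names are made up for illustration, not taken from SCM or node-llama-cpp):

```javascript
// Hypothetical article schema -- illustrative field names only.
const articleSchema = {
  type: "object",
  required: ["title", "sections"],
  properties: {
    title: { type: "string" },
    sections: { type: "array", items: { type: "string" } },
  },
};

// Tiny sanity check for the required top-level fields; a real setup
// would use a full validator, or rely on the library enforcing the
// schema at generation time as its description says.
function matchesArticleShape(obj) {
  return (
    typeof obj === "object" && obj !== null &&
    typeof obj.title === "string" &&
    Array.isArray(obj.sections) &&
    obj.sections.every((s) => typeof s === "string")
  );
}

const sample = { title: "Espresso 101", sections: ["Intro", "Brewing"] };
console.log(matchesArticleShape(sample)); // true
```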
Tim
May 20, 2024, 11:35pm
3
It's now possible to use the OpenAI alternative option, and also LM Studio, to run Llama on your computer.
How to connect SCM to LM Studio (llama3 etc) to run offline AI models for FREE
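Once LM Studio's local server is running, responses come back in the standard OpenAI chat-completions shape, so pulling out the generated text works the same as with the hosted API. A sketch (the sample payload is fabricated to show the shape; check your own LM Studio server settings for the actual address):

```javascript
// Extract the generated text from an OpenAI-style chat-completions
// response. The sample payload below is a made-up example of the shape.
function extractReply(response) {
  return response?.choices?.[0]?.message?.content ?? null;
}

const sampleResponse = {
  choices: [
    { message: { role: "assistant", content: "Here is your article draft..." } },
  ],
};
console.log(extractReply(sampleResponse));
// Typical usage against a local server would look like:
//   const res = await fetch("http://localhost:1234/v1/chat/completions", { ... });
//   const text = extractReply(await res.json());
```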
Excited to see how the Llama 3 API turns out! I have been using llama.cpp lately and it handles these models pretty well for local runs.
Tim
March 21, 2025, 10:44am
5
Don’t forget you can also try new reasoning models, like DeepSeek, in SCM.