Llama3 API

Hi, I am starting to see people using Llama3 to generate articles. It seems to be better than any model out there at the moment. It generates far better content than GPT-4 and executes instructions better than other LLMs.

Would you look into implementing it?

Its accessible via meta-llama/Meta-Llama-3-70B-Instruct - Demo - DeepInfra

Inside SCM

If you have a Mac you can install the models on your computer and access them using SCM's built-in OpenAI API.

I also see that it's available via Node.js, which means it's possible for SCM to download and run it on your machine for free.

However, there are some steep memory requirements, i.e. 64 GB of RAM to run it locally.
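To see why the requirement is so steep, a back-of-the-envelope calculation helps: model weights alone take roughly (parameter count) × (bytes per parameter). This is a rough sketch only; real usage adds overhead for the KV cache and the runtime, and the exact figures depend on the quantization format you pick.

```python
# Rough memory estimate for running Llama 3 70B locally.
# Assumptions: 2 bytes/parameter for fp16 weights, 0.5 bytes/parameter
# for a 4-bit quantized build; actual usage will be somewhat higher.
def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB for a model of the given size."""
    return params_billion * bytes_per_param

print(model_memory_gb(70, 2.0))   # fp16: ~140 GB -> out of reach for most machines
print(model_memory_gb(70, 0.5))   # 4-bit quantized: ~35 GB -> fits in 64 GB RAM
```

This is why 64 GB is quoted: a 4-bit quantized 70B model fits, while the full-precision weights do not.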

It's possible to use the OpenAI alternative setting now, together with LM Studio, to run Llama on your computer.

How to connect SCM to LM Studio (llama3 etc) to run offline AI models for FREE
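The approach in that guide works because LM Studio exposes an OpenAI-compatible local server, so any tool that speaks the OpenAI chat API can talk to it. Here is a minimal sketch of the kind of request involved; the `localhost:1234` address is LM Studio's default, but the model name is an assumption and should match whatever model you have loaded.

```python
import json

# Assumption: LM Studio's local server is running at its default address.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str,
                       model: str = "meta-llama-3-70b-instruct") -> dict:
    """Build an OpenAI-style chat completion payload for a local server."""
    return {
        "model": model,  # hypothetical identifier; use your loaded model's name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_chat_request("Write a short outline for an article about SEO.")
print(json.dumps(payload, indent=2))

# To actually send it (requires LM Studio running locally):
#   import urllib.request
#   req = urllib.request.Request(
#       LM_STUDIO_URL,
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```

Since the request shape is identical to OpenAI's, pointing SCM's OpenAI base URL at the local server is all the "integration" that's needed.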

Excited to see how the Llama3 API turns out! I have been using llama.cpp lately and it handles these models pretty well for local runs.


Don’t forget you can also try new reasoning models, like DeepSeek, in SCM.