Is it planned to support OpenAI assistants?

Hi, OpenAI has Assistants, and Claude has Projects.
Is it planned to let us choose, in the prompts list, between assistants/projects as well as models?

The advantage: the assistant can hold static info about my product, target audience and writing style, so SCM can use all of its tokens to pass new and dynamic data, like scraped data,
and at the same time save on token usage.
Thanks

Please give me an example of how you want to use OpenAI assistants.

Right now you have webhooks and OpenAI alt1/2 to call OpenAI as you need.

Then, on the other hand, you have the SCM API to interact with your OpenAI assistant from SCM.

I’m basing it on this:

https://platform.openai.com/docs/assistants/overview

I actually use this with make.com (Feedly Advertorials _ Make ...).
I wrote a 260-line prompt (_You’re a world-class copyw...) and added some PDFs.

In SCM, inside the prompts, there would be a selector to choose the assistant/project (SEO Content Machine 2024-08...).

The thread in OpenAI Assistants makes it easy to have one thread per blog category, for example.
The thread can be created in the OpenAI playground, so it only needs to be a text input.
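
For reference, creating a thread is a single call in the quickstart (a sketch only; the returned thread id is what would go into that text input):

curl https://api.openai.com/v1/threads \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d ''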

I don’t know if this answers your question.

Assistants sound like an API call to an external service, right?

I.e., the prompts go to them?

No, they are ChatGPT API calls, just to a different endpoint than the current one.

You may be confusing them with functions, which are called from within the prompts.

This is the API for assistants:
https://platform.openai.com/docs/assistants/quickstart?context=without-streaming

And this is the API call you currently use:
https://platform.openai.com/docs/guides/chat-completions/getting-started#:~:text=An%20example-,Chat%20Completions%20API,-call%20looks%20like
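
To make the difference concrete, a minimal Chat Completions call looks roughly like this (a sketch from the docs; gpt-4o is just an example model name):

curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "write an article about [topic]"}]
  }'

The assistants endpoints work with threads and runs instead of a single messages array.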

Do you have an assistant deployed?

Yes, if you wish you can try with mine; I’ll pay for the API calls.

Can you give me some screenshots of what URLs you call?

Why don’t you use OpenAI alt or webhooks?

What do you mean by this?
Why don’t you use OpenAI alt or webhooks?

My assistant’s prompt has 10k tokens of instructions + files with examples (approx. 100k tokens), so:

  1. With 110k tokens… the normal chat completion can’t pass much more data.
  2. Why pay for those 110k tokens of instructions on each generation when I can save them? With assistant generations, I only need to send the “unique” part each time.
    In SCM, that means all the article SEO, voice and style instructions don’t need to be sent on each generation; I only need to say:
    write an article about [topic] mixing these 4 contents
    content1,
    content2,

and the assistant holds all the information and tuning (up to 5 GB of files) in its internal instructions, as sketched in the call just below.
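
Per generation, the only thing that goes over the wire is the short message added to the thread, roughly like this (a sketch; thread_abc123 is a placeholder id):

curl https://api.openai.com/v1/threads/thread_abc123/messages \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
    "role": "user",
    "content": "write an article about [topic] mixing these 4 contents: content1, content2, ..."
  }'

The 110k tokens of instructions and example files stay attached to the assistant, so they are not resent in the request.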

Here are the API calls:
https://platform.openai.com/docs/assistants/quickstart?context=without-streaming

curl https://api.openai.com/v1/threads/thread_abc123/runs \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
    "assistant_id": "[my assistant name]",
    "instructions": "Please address the user as Jane Doe. The user has a premium account."
  }
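
After creating that run, the quickstart polls the run until it is completed and then lists the thread messages to read the output (a sketch; run_abc123 and thread_abc123 are placeholder ids from the quickstart):

curl https://api.openai.com/v1/threads/thread_abc123/runs/run_abc123 \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2"

curl https://api.openai.com/v1/threads/thread_abc123/messages \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2"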

You can make those API calls using a webhook.

It allows you to paste in your own API call URLs and configure the headers and data to be sent.

Then when you want to call the assistant, you call ‘webhook 1’.
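
Roughly, the webhook would be configured with the same pieces as the curl call above (a sketch only; the exact field names depend on the webhook settings screen, and asst_abc123 / thread_abc123 are placeholder ids):

URL:     https://api.openai.com/v1/threads/thread_abc123/runs
Headers: Authorization: Bearer YOUR_API_KEY
         Content-Type: application/json
         OpenAI-Beta: assistants=v2
Data:    {"assistant_id": "asst_abc123"}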

I assume you already have the assistant ID, right?

If so, I can help you do a test config with webhooks.

For a full explanation of webhooks.