Allow inline AI source inside AI prompts

Right now it seems you can only choose one AI source/model per Article Creator setting. I was hoping there was a way to (optionally) designate a source/model per request:

%TITLE1% = ["OpenAI | GPT 3.5": Write a title on "%keyword%"]
%OUTLINE1% = ["Groq | 70B": Write an article based on %TITLE1%]

This may sound unimportant … but, as you know:

There are free AI models, cheap AI models, and high-end AI models. For many tasks within a single Article Creator run, you can use the free/cheap models because the high-end models are not needed, but for some tasks you do need them. And all of these tasks fall within that single run.

OPENROUTER AND HUGGINGFACE???
Also, please add OpenRouter and HuggingFace AI models. These are the best deals on AI right now, and you won’t have to keep adding new AI models, because OpenRouter has just about every model from every company. HuggingFace covers open-source models, many of which are free and can be just as good.

ETA … I was able to use OpenRouter via the OpenAI alt setting … I think HuggingFace can also use it.

Hugging Face models are free, but the API is paid.

You can also run Hugging Face models locally via LM Studio.
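
For anyone trying this, here is a minimal sketch of calling a locally loaded model through LM Studio’s OpenAI-compatible server (it listens on http://localhost:1234/v1 by default; the model name below is just a placeholder):

```python
from openai import OpenAI

# LM Studio exposes an OpenAI-compatible API on localhost by default.
# The api_key is ignored by the local server but required by the client.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is loaded
    messages=[{"role": "user", "content": "Write a title on 'solar panels'"}],
)
print(response.choices[0].message.content)
```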

Do you have a prompt you want to run with inline model selection?

Like an example that I can build a test case around.

Yes … you can use the example I gave earlier, where a title is created using OpenAI and an outline is created using Groq.

FYI … HuggingFace API does have a free tier, but it is limited.

Didn’t know that! I can add it, although last time I looked it was pretty expensive.

I re-opened the feature request for inline AI models in prompts.

The format will be:

[service name | model name : prompt ]

If correctly parsed, the prompt will be colored:

  • the service name will be purple
  • the model name will be green
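
For illustration only (this is my own sketch, not the actual parser), the tag could be picked out with a single regex; note how easily it breaks if the prompt itself contains a `]`:

```python
import re

# Matches [service | model : prompt]. Service and model may not contain
# '|', ':' or ']', which keeps the pattern simple but also brittle:
# a ']' inside the prompt text would cut the match short.
INLINE_TAG = re.compile(r"\[\s*([^|:\]]+)\|\s*([^|:\]]+):\s*(.+?)\s*\]", re.DOTALL)

def parse_inline_tag(text):
    """Return (service, model, prompt) for the first inline tag, or None."""
    m = INLINE_TAG.search(text)
    if not m:
        return None
    return tuple(part.strip() for part in m.groups())

print(parse_inline_tag('[OpenAI | GPT 3.5: Write a title on "%keyword%"]'))
# ('OpenAI', 'GPT 3.5', 'Write a title on "%keyword%"')
```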

[screenshot: the prompt with colored service and model names]

Inside the task log, the service name and model name are printed:

[screenshot: task log output showing the service name and model name]

The service name is from this list, aka the same as the dropdown:

[screenshot: AI service dropdown]


Can you allow us to pass the temperature parameter inline as well?

REASON: To create proper articles, some queries need a low temperature for accurate, focused results (“extract keywords”) and others need a higher temperature for creative text (“paraphrase this text”) … all within the same task.

EXAMPLE:
[service name | model name | temperature : prompt ]
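
For example, a sketch (purely hypothetical, my own illustration) of parsing the optional third positional field; each new unnamed field adds another branch to the pattern:

```python
import re

# Optional third "| temperature" segment before the colon; anything beyond
# this (presence penalty, top_p, ...) would need yet another branch.
TAG = re.compile(
    r"\[\s*([^|:\]]+)\|\s*([^|:\]]+?)(?:\|\s*([0-9.]+)\s*)?:\s*(.+?)\s*\]",
    re.DOTALL,
)

def parse(text):
    m = TAG.search(text)
    if not m:
        return None
    service, model, temp, prompt = m.groups()
    return service.strip(), model.strip(), float(temp) if temp else None, prompt

print(parse("[OpenAI | GPT-4 | 0.2 : extract keywords from %TEXT%]"))
# ('OpenAI', 'GPT-4', 0.2, 'extract keywords from %TEXT%')
```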

For an inline format, it’s getting a bit too long to have that many parameters that are not named.

I’m also worried about the simple prompt parser I built to do the service/model detection in prompts. Adding more fields might make it very brittle and prone to errors with edge cases.

I can try to add it, but there are other parameters, like presence penalty, that a super user might want to set. I think it would be better to allow something that gives 100% control of all parameters, not just the temperature.

Not sure how to do this, though. Ideally it would be all named parameters in the prompt, e.g.:

[{service: openai, model: gpt4, presence: 1, temperature: 1, prompt: 'write a title'}]

However, this might mean spintax can’t be used, so we might have to test different formats.
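
To illustrate the conflict (a contrived sketch): spintax conventionally uses `{option1|option2}` braces and pipes, so a brace-delimited parameter block is ambiguous to a naive scanner, and it is not valid JSON either:

```python
import json

# Spintax uses {a|b|c}; a JSON-style parameter block also uses braces,
# so a naive scanner cannot tell the two apart:
prompt = "[{service: openai, model: gpt4, prompt: 'write a {short|catchy} title'}]"

# A spintax expander looking for the first {...} would grab
# "{service: openai, ..." instead of "{short|catchy}", and a JSON
# parser chokes on the unquoted keys and the nested spintax braces.
try:
    json.loads(prompt[1:-1])  # strip the outer [ ] and try to parse
except json.JSONDecodeError as e:
    print("not valid JSON:", e)
```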

Another option is unlimited custom AIs: you create your own ‘service’ and keep to the service : model format.

I.e., add more slots: OpenAI alt #3, OpenAI alt #4, etc.

I think this can be solved by going back to my Universal Webhook request … allow users to create as many APIs as they want … then they just pick the API they want by its chosen API Name/ID.

So I might have one API for OpenAI GPT-4 with a high temperature and another API for OpenAI GPT-4o mini with a low temperature. If users can create as many as they need, these extra parameters are no longer needed inline.

The Universal Webhook solves many issues.

So the prompt would be:

[Webhook API Name/ID: prompt ]
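
As a rough sketch of the idea (all names and fields here are hypothetical): each named API profile would bundle the service, model, and generation parameters, so the inline tag only has to carry the profile name:

```python
import re

# Hypothetical user-defined API profiles; every parameter lives here,
# so the inline tag stays short and the parser stays simple.
API_PROFILES = {
    "gpt4-creative": {"service": "OpenAI", "model": "gpt-4", "temperature": 1.0},
    "mini-precise": {"service": "OpenAI", "model": "gpt-4o-mini", "temperature": 0.1},
}

TAG = re.compile(r"\[\s*([^:\]]+?)\s*:\s*(.+?)\s*\]", re.DOTALL)

def resolve(text):
    """Return (profile_settings, prompt) for the first [Name: prompt] tag."""
    m = TAG.search(text)
    if not m:
        return None
    name, prompt = m.groups()
    return API_PROFILES[name], prompt

print(resolve("[mini-precise: extract keywords from %TEXT%]"))
# ({'service': 'OpenAI', 'model': 'gpt-4o-mini', 'temperature': 0.1},
#  'extract keywords from %TEXT%')
```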

Agreed, this seems like the best solution to try.

My vote for this.