Right now it seems you can only choose 1 AI source/model per article creator setting. I was hoping there was a way to (optionally) designate which source/model per request.
%TITLE1% = ["OpenAI | GPT-3.5": "Write a title on %keyword%"]
%OUTLINE1% = ["Groq | 70B": "Write an outline based on %TITLE1%"]
This may sound unimportant, but as you know:
There are free AI models, cheap AI models, and high-end AI models. For many tasks you can use free/cheap models because the high-end ones are not needed, but for some tasks you do need them, and all of these tasks fall within a single Article Creator run.
OPENROUTER AND HUGGINGFACE???
Also, please add OpenRouter and HuggingFace AI models. These are the best deals on AI right now, and you won't have to keep adding new AI providers because OpenRouter has just about every model from every company. HuggingFace covers open-source models, many of which are free and can be just as good.
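For reference, OpenRouter exposes an OpenAI-compatible endpoint, so supporting it may be as simple as pointing the existing OpenAI client at a different base URL. A rough sketch (the model ID and key are just placeholders):

    from openai import OpenAI

    # OpenRouter speaks the OpenAI chat-completions protocol,
    # so the same client works with a different base URL.
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="YOUR_OPENROUTER_KEY",  # placeholder
    )
    resp = client.chat.completions.create(
        model="meta-llama/llama-3-70b-instruct",  # illustrative model ID
        messages=[{"role": "user", "content": "Write a title on coffee"}],
    )
    print(resp.choices[0].message.content)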
Can you allow us to pass the temperature parameter inline as well?
REASON: To create proper articles, some queries need a low temp for accurate, focused results ("extract keywords") and others need a higher temp for creative text ("paraphrase this text") … all within the same task.
EXAMPLE:
[service name | model name | temperature: prompt]
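For example, both could be mixed in one run (the temperature values here are just illustrative):

    %KEYWORDS1% = ["OpenAI | GPT-3.5 | 0.2": "Extract keywords from %TITLE1%"]
    %INTRO1% = ["OpenAI | GPT-4 | 1.0": "Paraphrase this text: %OUTLINE1%"]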
For inline, it's getting a bit too long to have that many parameters that are not named.
I'm also worried about the simple prompt parser I built to do the service/model detection in the prompt. Adding more positional parameters might make it very brittle and prone to errors with edge cases.
I can try to add it, but there are other parameters, like presence penalty etc., that the super user might want to set. I think it would be better to allow something that gives 100% control over all parameters, not just the temperature.
Not sure how to do this though; ideally it would be all named parameters in the prompt, e.g.:
[{service: openai, model: gpt4, presence: 1, temperature: 1, prompt: 'write a title'}]
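As a rough sketch (not the real parser, and assuming keys are plain identifiers and the prompt is single-quoted), something like this could read a named-parameter block:

    import re

    # Matches a hypothetical block like:
    # [{service: openai, model: gpt4, temperature: 1, prompt: 'write a title'}]
    BLOCK = re.compile(r"\[\{(.+?)\}\]", re.S)
    # key: quoted-value-or-bare-value pairs; quoted values may contain commas
    PAIR = re.compile(r"(\w+)\s*:\s*('(?:[^'\\]|\\.)*'|[^,}]+)")

    def parse_block(text):
        m = BLOCK.search(text)
        if not m:
            return None
        params = {}
        for key, raw in PAIR.findall(m.group(1)):
            val = raw.strip()
            if val.startswith("'") and val.endswith("'"):
                val = val[1:-1]  # strip the surrounding quotes
            params[key] = val
        return params

    print(parse_block("[{service: openai, model: gpt4, temperature: 1, prompt: 'write a title'}]"))
    # {'service': 'openai', 'model': 'gpt4', 'temperature': '1', 'prompt': 'write a title'}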
However, this might mean spintax can't be used, since spintax relies on the same curly braces and pipes (e.g. {option1|option2}), so we might have to test different coding formats.
Another option is to allow unlimited custom AIs: you just create your own 'service' and keep to the service: model format.
I think this can be solved by going back to my Universal webhook request … allow the user to create as many APIs as they want, then they can just pick the API they want by its chosen API name/ID.
So I might have an API for OpenAI with GPT-4 at a high temp and another API for OpenAI with GPT-4o mini at a low temp. If the user can create as many as they need, then these extra parameters are no longer needed inline.
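To make that concrete, a minimal sketch of what the saved presets might look like internally (every name, field, and value here is hypothetical):

    # Hypothetical user-defined presets; all fields are illustrative.
    PRESETS = {
        "gpt4-creative": {
            "service": "openai",
            "model": "gpt-4",
            "temperature": 1.0,
            "presence_penalty": 1.0,
        },
        "gpt4o-mini-precise": {
            "service": "openai",
            "model": "gpt-4o-mini",
            "temperature": 0.1,
        },
    }

    def resolve(preset_name):
        """Look up a preset so the inline syntax only needs its name."""
        return PRESETS[preset_name]

The inline syntax would then stay short, e.g. %TITLE1% = ["gpt4-creative": "Write a title on %keyword%"].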