Cost vs Result? Need Help

Thanks for the share, very helpful.

You are pasting each part of the full outline into the OpenAI ChatGPT app directly, correct?

And manually collating each part into a full article at the end?

I'm having trouble getting OpenAI to give me a standard output for the outline that I can parse.

The problem is that the output from the OpenAI API is not as good as the output from the ChatGPT app.

Right now we can do this in SCM

Each section has to be declared manually in the user macros.

It works, but it would be nice if instead of sub1, sub2, sub3 we could use an AI-generated outline.

That's the part I'm trying to figure out how to do in code.

I can convert your prompt to write a section easily.

I am thinking of adding %outline% as a special macro.

It runs this prompt automatically:

[article outline for %keyword%]

The output tends to look like this:


Take each line of the output as a section (sketched below), e.g.

  1. introduction
  2. importance of feeding dogs a balanced diet.
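
A minimal sketch of what that flow could look like in Python, assuming the openai package and placeholder prompt/model names (this is not SCM's actual implementation):

```python
# Sketch only: assumes the `openai` package (v1+) and an OPENAI_API_KEY env var.
# The prompt wording and model name are placeholders, not SCM internals.
import re
from openai import OpenAI

client = OpenAI()

def generate_outline(keyword: str) -> list[str]:
    """Ask the model for a numbered outline and return one section title per line."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Write an article outline for {keyword}"}],
    )
    sections = []
    for line in response.choices[0].message.content.splitlines():
        line = line.strip()
        if line:
            # Drop leading numbering like "1." or "2)" so only the title remains.
            sections.append(re.sub(r"^\d+[.)]\s*", "", line))
    return sections

# generate_outline("feeding dogs a balanced diet")
# -> ["Introduction", "Importance of feeding dogs a balanced diet", ...]
```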

I need to test this to see how well it works.

I am prototyping the process right now.

Once I get it working well, I will add it to the AI writer and overhaul it.

Right now I am refining prompts to get standardized output so I can pick out headers and subheaders from an AI-generated outline.
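
As a rough illustration of that parsing step (assuming the model numbers headers like "1." and subheaders like "1.1", which real output won't always do):

```python
# Sketch only: assumes "1." marks a header and "1.1" marks a subheader;
# actual model output varies, which is why the prompt needs standardizing.
import re

def classify_outline(lines: list[str]) -> list[tuple[str, str]]:
    """Return (level, title) pairs where level is 'h2' or 'h3'."""
    parsed = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        level = "h3" if re.match(r"^\d+\.\d+", line) else "h2"
        parsed.append((level, re.sub(r"^[\d.)]+\s*", "", line)))
    return parsed
```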


Thanks Tim.

I'm looking forward to it.

I’m concerned about the token cost with SCM though. I tested Cuppa.sh and Machined.ai again, and it cost me this much to create 12 well-researched articles, each of which was over 3K words.

Is SCM using too many tokens on your task?

It is, and it's not giving me good output either; too much time and money spent on testing.

I ended up spinning up a local LLM with Ollama and a LiteLLM proxy to test out a few things for now.
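
For anyone wanting to try the same setup, here is a rough sketch of pointing an OpenAI-compatible client at a local LiteLLM proxy in front of Ollama; the port, model name, and key are assumptions that depend on your own proxy config:

```python
# Sketch only: assumes a LiteLLM proxy running locally (default port 4000)
# routing requests to an Ollama-served model; names and ports are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # LiteLLM's OpenAI-compatible endpoint
    api_key="sk-anything",             # LiteLLM accepts a dummy key unless auth is enabled
)

response = client.chat.completions.create(
    model="ollama/llama3",             # whatever model your proxy is configured to serve
    messages=[{"role": "user", "content": "Write an article outline for feeding dogs a balanced diet"}],
)
print(response.choices[0].message.content)
```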

Which reminds me, I have a feature request here - Custom API Endpoints (OpenAI Compatible)

Send me the export of the last task you created that was using too many credits and I will have a look at it.

I lost the logs; I'll try again tonight. The main issue is the output quality. I cannot get a decent article out of the system for some reason. And if I use AI Writer instead of AI Article Creator, I don't seem to be able to add custom headlines for it to write content against, and the content scraped from the web is not up to the mark for me. Maybe having an outline is the only way.

Here’s the last project I used.

swiss_test_ext.project

I ran the project once and pored over the logs to get a better understanding of the usage.

Findings

The article spinning usage seemed a bit high, so I had a closer look. Because you used both image and video inserts, those items were also getting sent to the AI writer for processing when they don't need to be. The usage would most likely be even higher if you used base64 images by default.

SCM was sending the complete article to the AI model for processing, which is inefficient.

Now it will only send the text of the article, then add all the images, videos, etc. back after spinning.
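
Not SCM's actual code, but the general idea of the change can be sketched like this: pull the media tags out, spin only the remaining text, then put the tags back (the placeholder format and tag list are assumptions):

```python
# Sketch of the idea only, not SCM's implementation: swap media elements for
# placeholders before spinning, then restore them in the spun text.
import re

MEDIA_TAG = re.compile(r"<(img|video|iframe)\b[^>]*>(?:.*?</\1>)?", re.IGNORECASE | re.DOTALL)

def strip_media(article_html: str):
    """Replace media elements with numbered placeholders; return text plus saved tags."""
    saved = []
    def stash(match):
        saved.append(match.group(0))
        return f"[[MEDIA_{len(saved) - 1}]]"
    return MEDIA_TAG.sub(stash, article_html), saved

def restore_media(spun_text: str, saved: list[str]) -> str:
    """Put the original media elements back after spinning."""
    for i, tag in enumerate(saved):
        spun_text = spun_text.replace(f"[[MEDIA_{i}]]", tag)
    return spun_text
```

The point is just that the AI only ever sees (and bills for) the article text, never the embedded media markup.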

Before - Article spin usage

AI article 755 tokens
Spinning article (1911 words / 15651 characters) using ‘AI Writer’

After optimization

AI article 642 tokens
Spinning article (839 words / 5320 characters) using ‘AI Writer’

The AI article costs around the same number of tokens, but you can see the character count of the article being spun is now only about a third of what it used to be (5,320 vs 15,651 characters).

From eyeballing the usage in the task log, it seems the GPT-4 cost was about $1 per article unoptimized.

With these new changes it will be cheaper than that.

Can you explain more about this?

It also sends 100-300 tiny API hits to rewrite titles, questions, answers, etc., and those took most of the time. Each hit only used 40-70 tokens, but across 100+ requests that accumulated a lot too.
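
Back-of-the-envelope, using those figures:

```python
# Rough estimate of the overhead from the small rewrite calls mentioned above.
low = 100 * 40   # 100 hits x 40 tokens =  4,000 tokens
high = 300 * 70  # 300 hits x 70 tokens = 21,000 tokens
print(f"Small-rewrite overhead: roughly {low:,} to {high:,} tokens per article")
```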

I'll get back to you on this tomorrow. I'll try recording my process in detail; about to hit the hay.

Yes, you are correct: everything you have checked is getting re-written even if you don't use it, e.g. Q&A.

The biggest users of credits are 1) Spinning and 2) Re-Spinning, split evenly.

For 1) Spinning, you can just turn off all scraped content re-writing as a test.
For 2) Re-Spinning, I will issue an update today to fix it so it only uses as many tokens as the article text size.

I reworked the scraped content re-writer logic.

TLDR

  • You don't have to turn the scraped rewriter checkboxes on or off anymore
  • SCM will never spin all the scraped content if you just use the AI, existing, or Article Forge generators
  • The re-writer will still re-write your articles, and it saves you credits by not unnecessarily rewriting all the cached scraped content you might not be using

Creating a new AI creator task, selecting AI Writer, and applying a re-writer no longer consumes a bunch of AI credits spinning the scraped content cache.

This avoids a beginner pitfall, as the default state will conserve credits instead of using them.