ComfyUI API for Free AI Images?

Can you add a ComfyUI API for images?

Take a look at all the AI image models … that we can get for FREE instead of paying 4-8 cents for each image credit.

THESE FREE MODELS ARE AMAZING:
https://civitai.com/models

Here is a tutorial on enabling the ComfyUI API

For beginners, you can install ComfyUI with 1 click

Let me have a look…

From a quick glance, it looks very promising. However, setup does seem to require a few steps to get going; not as easy as LM Studio is for content gen.

Thanks for the feature post.

I will try to install it on my dev PC to get a first-hand feel for it.


This would really take SCM over the top … because what you can do with ComfyUI rivals BIG DATA … and it’s free. SCM would be in unprecedented territory.

Take a look at this new model, FLUX … it just came out yesterday.

I really hope you can get it working. Fingers crossed.

Okay … when you watch the videos, it looks very complex, but here is how to make it easy for you to add ComfyUI to SCM via a simple API call.

BASIC STEPS NEEDED FOR COMFYUI API

  1. Load the Workflow JSON string in any programming language
  2. Change any desired field in the Workflow JSON string
  3. Wrap the whole definition in a dict under the “prompt” key ({“prompt”: <workflow>})
  4. Send it as a POST request to http://127.0.0.1:8188/prompt (or whatever URL your server uses). A minimal sketch follows.
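Here is a minimal Python sketch of those four steps (the workflow filename is a placeholder, and node “6” matches the example workflow further down this thread):

import json
import urllib.request

# Step 1: load the saved API-format workflow (filename is hypothetical)
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Step 2: change any desired field; here, the positive prompt in node "6"
workflow["6"]["inputs"]["text"] = "Photo of 3 smiling angels"

# Step 3: wrap the whole definition in a dict under the "prompt" key
payload = json.dumps({"prompt": workflow}).encode("utf-8")

# Step 4: send it as a POST request to the ComfyUI endpoint
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # the response includes a prompt_id for the queued job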

So in SCM, under Settings/API Logins … add a new API accordion tab and ask the user for:

  1. ComfyUI Server Endpoint
  2. Workflow (as a textarea for JSON string)
  3. Image Prompt Tag (default it to “%IMGPROMPT%” or something that will jibe nicely with the rest of SCM’s delimiters).
  4. Timeout (s).

The Workflow is just a JSON string, and all SCM needs to do is replace the Image Prompt Tag with the value of the prompt from the SCM task.

The result will be a PNG image delivered via the API WebSocket (or multiple images if a batch is run, but batches are not necessary for our purpose).
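And if the WebSocket route is a hassle, ComfyUI also serves results over plain HTTP; here is a minimal retrieval sketch, assuming ComfyUI’s standard /history and /view endpoints and the prompt_id returned by the POST to /prompt:

import json
import time
import urllib.parse
import urllib.request

SERVER = "http://127.0.0.1:8188"

def fetch_first_image(prompt_id: str) -> bytes:
    # Poll /history until the queued prompt shows up as finished
    while True:
        with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
            history = json.load(resp)
        if prompt_id in history:
            break
        time.sleep(1)
    # Walk the node outputs and download the first image via /view
    for node_output in history[prompt_id]["outputs"].values():
        for image in node_output.get("images", []):
            query = urllib.parse.urlencode(image)  # filename, subfolder, type
            with urllib.request.urlopen(f"{SERVER}/view?{query}") as resp:
                return resp.read()  # raw PNG bytes
    raise RuntimeError("workflow finished but produced no image")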

EXAMPLE:
When I saved the ComfyUI Workflow API JSON file, I got this code:

{
  "5": {
    "inputs": {
      "width": 1024,
      "height": 768,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage",
    "_meta": {
      "title": "Empty Latent Image"
    }
  },
  "6": {
    "inputs": {
      "text": "Realistic Photo of a massive tsunami about to hit the Statue of Liberty",
      "clip": [
        "11",
        0
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "CLIP Text Encode (Prompt)"
    }
  },
  "8": {
    "inputs": {
      "samples": [
        "13",
        0
      ],
      "vae": [
        "10",
        0
      ]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "10": {
    "inputs": {
      "vae_name": "flux-vae.safetensors"
    },
    "class_type": "VAELoader",
    "_meta": {
      "title": "Load VAE"
    }
  },
  "11": {
    "inputs": {
      "clip_name1": "t5xxl_fp8_e4m3fn.safetensors",
      "clip_name2": "clip_l.safetensors",
      "type": "flux"
    },
    "class_type": "DualCLIPLoader",
    "_meta": {
      "title": "DualCLIPLoader"
    }
  },
  "12": {
    "inputs": {
      "unet_name": "flux1-schnell-fp8.safetensors",
      "weight_dtype": "fp8_e4m3fn"
    },
    "class_type": "UNETLoader",
    "_meta": {
      "title": "Load Diffusion Model"
    }
  },
  "13": {
    "inputs": {
      "noise": [
        "25",
        0
      ],
      "guider": [
        "22",
        0
      ],
      "sampler": [
        "16",
        0
      ],
      "sigmas": [
        "17",
        0
      ],
      "latent_image": [
        "5",
        0
      ]
    },
    "class_type": "SamplerCustomAdvanced",
    "_meta": {
      "title": "SamplerCustomAdvanced"
    }
  },
  "16": {
    "inputs": {
      "sampler_name": "uni_pc_bh2"
    },
    "class_type": "KSamplerSelect",
    "_meta": {
      "title": "KSamplerSelect"
    }
  },
  "17": {
    "inputs": {
      "scheduler": "sgm_uniform",
      "steps": 4,
      "denoise": 1,
      "model": [
        "12",
        0
      ]
    },
    "class_type": "BasicScheduler",
    "_meta": {
      "title": "BasicScheduler"
    }
  },
  "22": {
    "inputs": {
      "model": [
        "12",
        0
      ],
      "conditioning": [
        "6",
        0
      ]
    },
    "class_type": "BasicGuider",
    "_meta": {
      "title": "BasicGuider"
    }
  },
  "25": {
    "inputs": {
      "noise_seed": 1028320977096301
    },
    "class_type": "RandomNoise",
    "_meta": {
      "title": "RandomNoise"
    }
  },
  "26": {
    "inputs": {
      "images": [
        "27",
        1
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview Image"
    }
  },
  "27": {
    "inputs": {
      "empty_cache": true,
      "gc_collect": true,
      "unload_all_models": true,
      "image_pass": [
        "8",
        0
      ]
    },
    "class_type": "VRAM_Debug",
    "_meta": {
      "title": "VRAM Debug"
    }
  }
}

So all the user needs to do here is find the manual prompt in the API workflow they saved and change it to the Image Prompt Tag “%IMGPROMPT%”. Then copy/paste the whole JSON into SCM’s new API settings Workflow textarea box.

LIKE THIS:

{
  "5": {
    "inputs": {
      "width": 1024,
      "height": 768,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage",
    "_meta": {
      "title": "Empty Latent Image"
    }
  },
  "6": {
    "inputs": {
      "text": "%IMGPROMPT%",
      "clip": [
        "11",
        0
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "CLIP Text Encode (Prompt)"
    }
  },
  "8": {
    "inputs": {
      "samples": [
        "13",
        0
      ],
      "vae": [
        "10",
        0
      ]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "10": {
    "inputs": {
      "vae_name": "flux-vae.safetensors"
    },
    "class_type": "VAELoader",
    "_meta": {
      "title": "Load VAE"
    }
  },
  "11": {
    "inputs": {
      "clip_name1": "t5xxl_fp8_e4m3fn.safetensors",
      "clip_name2": "clip_l.safetensors",
      "type": "flux"
    },
    "class_type": "DualCLIPLoader",
    "_meta": {
      "title": "DualCLIPLoader"
    }
  },
  "12": {
    "inputs": {
      "unet_name": "flux1-schnell-fp8.safetensors",
      "weight_dtype": "fp8_e4m3fn"
    },
    "class_type": "UNETLoader",
    "_meta": {
      "title": "Load Diffusion Model"
    }
  },
  "13": {
    "inputs": {
      "noise": [
        "25",
        0
      ],
      "guider": [
        "22",
        0
      ],
      "sampler": [
        "16",
        0
      ],
      "sigmas": [
        "17",
        0
      ],
      "latent_image": [
        "5",
        0
      ]
    },
    "class_type": "SamplerCustomAdvanced",
    "_meta": {
      "title": "SamplerCustomAdvanced"
    }
  },
  "16": {
    "inputs": {
      "sampler_name": "uni_pc_bh2"
    },
    "class_type": "KSamplerSelect",
    "_meta": {
      "title": "KSamplerSelect"
    }
  },
  "17": {
    "inputs": {
      "scheduler": "sgm_uniform",
      "steps": 4,
      "denoise": 1,
      "model": [
        "12",
        0
      ]
    },
    "class_type": "BasicScheduler",
    "_meta": {
      "title": "BasicScheduler"
    }
  },
  "22": {
    "inputs": {
      "model": [
        "12",
        0
      ],
      "conditioning": [
        "6",
        0
      ]
    },
    "class_type": "BasicGuider",
    "_meta": {
      "title": "BasicGuider"
    }
  },
  "25": {
    "inputs": {
      "noise_seed": 1028320977096301
    },
    "class_type": "RandomNoise",
    "_meta": {
      "title": "RandomNoise"
    }
  },
  "26": {
    "inputs": {
      "images": [
        "27",
        1
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview Image"
    }
  },
  "27": {
    "inputs": {
      "empty_cache": true,
      "gc_collect": true,
      "unload_all_models": true,
      "image_pass": [
        "8",
        0
      ]
    },
    "class_type": "VRAM_Debug",
    "_meta": {
      "title": "VRAM Debug"
    }
  }
}

Notice the only change was under block 6, where I swapped the manual prompt from my saved API workflow for %IMGPROMPT%.

Then all SCM needs to do is a simple replace of %IMGPROMPT%, and post the entire JSON string as the final prompt to the ComfyUI endpoint.
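One caveat: a raw find-and-replace breaks if the prompt itself contains quotes or line breaks, so the safe version JSON-escapes the value before substituting it. A minimal Python sketch:

import json

def build_payload(workflow_template: str, prompt: str) -> str:
    # json.dumps adds surrounding quotes and escapes any quotes/newlines in
    # the prompt; strip the outer quotes so the value drops into the
    # template's existing "...%IMGPROMPT%..." slot
    escaped = json.dumps(prompt)[1:-1]
    return workflow_template.replace("%IMGPROMPT%", escaped)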

By the way … I got FLUX working via ComfyUI and it is unbelievably amazing.

Here is the result I got for the prompt: “Photo of 3 smiling angels”. That’s all I wrote. It took 2 min to render because I need more RAM, but the resulting images are beyond belief.

Ok, thanks for breaking it down.

I need to put some time into installing ComfyUI and loading up a model first, right?

As for the workflow JSON, is this set up by default?

I saw a video and it was full of drag-and-drop pipes, which looked confusing.

I know you’re busy … so I will try to create a working WEBHOOK example and test it for you, to see if I can get it working so you don’t have to deal with the complexities of ComfyUI.

So … I was giving it some more thought. Instead of chasing every API from here to eternity (and there will be many more), you will constantly be programming APIs, and that is probably not the best solution.

I think what you should do is just create a Universal WEBHOOK Feature … that lets the user create as many Webhooks / API types as they want while making your job easier with less work. It would be very similar to the structure I detailed earlier … and, of course, allow us to pass and extract Macro values to and from the webhooks. (A rough sketch follows the field list below.)

Allow the user to create unlimited webhooks, and for each one ask for:

  1. Webhook Name
  2. Server Endpoint
  3. Method (GET or POST)
  4. Workflow (TextArea for All Request Parameters, allow for Macro Variables)
  5. Payload Type (type of response expected … is it text, JSON, XML, image, video?)
  6. Payload Parser (TextArea that allows macro variables to be set from extracted data and returned to task)
  7. Timeout (s).
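To make the idea concrete, here is a rough Python sketch of what such a universal dispatcher could look like; every name and field value below is hypothetical and just mirrors the list above:

import urllib.request

# Hypothetical webhook definition mirroring fields 1-7 above
webhook = {
    "name": "ComfyUI",
    "endpoint": "http://127.0.0.1:8188/prompt",
    "method": "POST",
    "workflow": '{"prompt": %WORKFLOW%}',  # request body with macro slots
    "timeout": 120,                        # seconds
}

def call_webhook(hook: dict, macros: dict) -> bytes:
    # Substitute every macro variable into the stored request body
    body = hook["workflow"]
    for tag, value in macros.items():
        body = body.replace(tag, value)
    req = urllib.request.Request(
        hook["endpoint"],
        data=body.encode("utf-8") if hook["method"] == "POST" else None,
        method=hook["method"],
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=hook["timeout"]) as resp:
        return resp.read()  # raw payload; the Payload Parser step would run on this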

You can even do templates for popular webhooks, like ComfyUI, or heck … anything … (think Zapier/Make.com - no limits!).

This way you are not chasing the horizon with every new API that comes out that users will want.

… or do you have a similar feature already? I looked, but couldn’t find a webhook system.

I created a new thread regarding Universal Webhook Feature … if you can make that happen, you can disregard this feature request as it would be unnecessary.

NEW THREAD:

Thank you! Will comment on your webhook thread.

FYI … I was able to get the Comfy API working.

All I need now is the webhook feature in SCM, so I can use it. :wink:


Are you able to make it work with the webhook?

You will need to get ComfyUI to return image data as base64.

Ok … I’m a bit confused … how would I call this from the Article Creator? Do I use “Webhook1” or “Webhook 1” (with space)?

%MyFeaturedImage% = [Webhook1: Realistic Picture of "%title%"]
or ??
%MyFeaturedImage% = [Webhook 1: Realistic Picture of "%title%"]

And in the body of the “Webhook 1” setting, I would pass the reserved macro %prompt%, like below … where “%prompt%” is specially reserved by SCM?

{
    "model": "image-model",
    "messages": [{
            "role": "system",
            "content": "Create an image"
        },
        {
            "role": "user",
            "content": %prompt%
        }
    ],
    "temperature": 0.7
}

Any examples you could provide of how this should work would be great.

It’s called by the same name as what is in the dropdown.

[screenshot]

So “Webhook 1”.

Correct. Inside Webhook 1, the body should look like this:

[screenshot]

The final piece is the output, so you need to tell SCM what data to return.

I’m not sure what the JSON output is, though, but if you do this, you will get all the data printed to your article:

[screenshot]

Refer to this for an in-depth explanation.

I added a test service button to make it easier to figure out the settings as well.

[screenshot]

I can confirm that with a bit of server customization on my end, I was able to get the ComfyUI API working with your new Webhook feature. Big THUMBS UP! Thank you, Tim.

When you can, could you increase the number of “OpenAI Alts” and “Webhooks” from just 2? Ideally it would be an unlimited number (you never know how many projects you will have to run that require different resources), but at least 5 each to begin with?

I might have to redo the UI to add new items, as the list is getting a bit too long for one screen, I think.

New feature request

Did you manage to get SCM to talk to ComfyUI properly in the end?

If you could share details so I can try it as well, that would be great!

The new UI for API logins looks great … I hope you are able to add more OpenAI Alts and Webhooks, because I have to create a different Ollama model with a different Modelfile for each param set (i.e. context window, etc.) that I want to use.

Anyways, yes, I was able to get ComfyUI to work. I ended up creating a multipurpose script tailored to my ComfyUI config that I could use the Webhook with. My script wouldn’t work for anyone else because it was customized for my particular server setup.

However, all the script is doing can be done with your new webhook. Here’s how:

  1. Save the ComfyUI API JSON file as the first tutorial video above shows.
  2. Copy/paste the long JSON string into your Webhook box, but replace the literal prompt with your %prompt% tag.
  3. Then POST the JSON workflow to the ComfyUI endpoint.
  4. The only issue you might run into is that the ComfyUI result, I believe, is a binary file, not base64, so my script saves the binary to disk and then has the following line in it:
$base64_image = base64_encode(file_get_contents($imagefile));

and then I have the script return the base64 string to your Webhook.

And it works!
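For anyone replicating this without my custom server setup, a rough Python equivalent of that wrapper step would be (the file path is hypothetical; it is wherever your script saved ComfyUI’s binary output):

import base64

def image_to_base64(image_file: str) -> str:
    # ComfyUI hands back a binary PNG; the webhook wants a base64 string
    with open(image_file, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

print(image_to_base64("output/ComfyUI_00001_.png"))  # hypothetical path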


Amazing,

OK, I am trying it now. Downloading it to my PC…

Hopefully I can have a nice written tutorial for setting it up on the site soon.
