It is, and it’s not giving me good output either; I’ve already spent too much time and money on testing.
I ended up spinning up a local LLM with Ollama and a LiteLLM proxy so I can test a few things now.
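For reference, the setup is roughly this; a minimal sketch, assuming Ollama is on its default port and the model alias is a placeholder I made up:

```yaml
# config.yaml -- register an Ollama-served model with the LiteLLM proxy.
# "local-llama" is a hypothetical alias; point api_base at wherever Ollama runs.
model_list:
  - model_name: local-llama
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
```

Then `litellm --config config.yaml` starts the proxy, which exposes an OpenAI-compatible endpoint (port 4000 by default).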
Which reminds me, I have a feature request here: Custom API Endpoints (OpenAI Compatible).
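For anyone wondering what that would enable: once a tool lets you override the base URL, any standard OpenAI client can talk to the local proxy instead of the hosted API. A hedged sketch, with placeholder port, key, and model name:

```python
# Point the standard OpenAI client at the LiteLLM proxy instead of api.openai.com.
# The port, key, and model alias below are placeholders for whatever your proxy uses.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # LiteLLM proxy's OpenAI-compatible endpoint
    api_key="sk-anything",             # the proxy may not validate this unless configured to
)

resp = client.chat.completions.create(
    model="local-llama",  # alias registered in the proxy's config
    messages=[{"role": "user", "content": "quick sanity check"}],
)
print(resp.choices[0].message.content)
```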