When using local LLM models, some AI tasks take more than 180 seconds. However, SCM errors out after 180,000 ms.
2. Screenshot or task log of the problem
Error in Log: “Timeout of 180,000 ms exceeded. Retry 1/3 in 8s”
Some local LLM tasks can take 5 to 10 minutes or more. How can we increase the timeout so that SCM doesn’t error out and keep retrying the same task while the AI is still in the process of answering?
Yes (OpenAI Alt #1/#2), but can you please make it so the user can define it in settings? This should not be a hard-coded value. Just keep 3 minutes as the default, but allow the user to adjust it.
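For illustration, a settings-driven timeout with a retry loop could be sketched like this (a minimal sketch, not SCM's actual code; the `ai_timeout_s` settings key and the `task` callable are hypothetical):

```python
import time

DEFAULT_TIMEOUT_S = 180  # the current hard-coded 3-minute default

def effective_timeout(settings: dict) -> float:
    """Return the user-configured timeout, falling back to the default.

    `ai_timeout_s` is a hypothetical settings key a user could override,
    e.g. 600 for slow local models.
    """
    return float(settings.get("ai_timeout_s", DEFAULT_TIMEOUT_S))

def call_with_retries(task, settings: dict, max_retries: int = 3):
    """Run `task(timeout)`, retrying with exponential backoff on TimeoutError."""
    timeout = effective_timeout(settings)
    for attempt in range(1, max_retries + 1):
        try:
            return task(timeout)
        except TimeoutError:
            if attempt == max_retries:
                raise  # give up after the last retry
            time.sleep(min(2 ** attempt, 8))  # back off, capped at 8 s
```

With no override, `effective_timeout({})` keeps today’s 180-second behavior, while `{"ai_timeout_s": 600}` would give a slow local model 10 minutes before the first retry fires.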
Okay … I didn’t get an error this time … thank you. However, I still think it would be wise to let the user adjust the OpenAI Alt #1/#2 timeout, because different models take longer than others to respond, and hardcoding it even at 15 minutes could break the AI’s train of thought. Once it breaks, I have no way of increasing the timeout to account for additional parameters or layers in a new AI model. In my experience, hardcoding values for all use cases is never wise.