Hi Koyeb Support Team,
I am currently hosting a Generative AI API gateway on Koyeb.
The Issue: My application integrates with Large Language Models (LLMs) whose inference can take several minutes, especially for complex reasoning tasks and long-context generation, and these requests are hitting the platform's default 100-second HTTP timeout.
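For reference, here is a minimal sketch of the request path involved (the endpoint name, upstream URL, and parameters below are placeholders, not my actual configuration): the gateway simply proxies a generation request to an upstream LLM provider, and for reasoning-heavy or long-context prompts the upstream call alone can run well past 100 seconds.

```python
# Minimal sketch of the gateway's proxy path (placeholder names and URL).
# For long prompts the upstream call routinely exceeds 100 seconds, so the
# whole request is cut off by the platform-level HTTP timeout.
import httpx
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
UPSTREAM_URL = "https://llm-provider.example.com/v1/generate"  # placeholder

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 4096

@app.post("/generate")
async def generate(req: GenerateRequest):
    # The client-side timeout is deliberately generous: complex reasoning
    # and long-context generations can take several minutes upstream.
    async with httpx.AsyncClient(timeout=httpx.Timeout(600.0)) as client:
        resp = await client.post(UPSTREAM_URL, json=req.model_dump())
        resp.raise_for_status()
        return resp.json()
```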
Could you please increase the HTTP timeout limit for my service to somewhere in the range of 300 to 500 seconds, so that these long-running AI requests can complete?