Request to increase HTTP timeout limit for AI/LLM Application

Hi Koyeb Support Team,

I am currently hosting a Generative AI API gateway on Koyeb.

The Issue: My application integrates with Large Language Models (LLMs) that require long inference times, especially for complex reasoning tasks and long-context generation. These requests regularly exceed the platform’s default 100-second HTTP timeout, so they are terminated before the model finishes responding.

Could you please increase the HTTP timeout limit for my service? I would like to request an increase to 300 to 500 seconds to accommodate these long-running AI requests.

Generally speaking, it’s bad practice to keep HTTP requests open for that long. For one thing, it’s a DoS risk: each long-lived connection ties up server resources for the duration of the request. A more common approach is to have a short HTTP request that kicks off a background job to do the long processing, then either have the client poll to check on the job’s status or call a client webhook when the job is done.
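The submit-then-poll pattern described above can be sketched as follows. This is a minimal illustration, not a production implementation: the in-memory `jobs` dict, the `submit_job`/`poll_job` handler names, and the simulated inference call are all hypothetical stand-ins (a real deployment would persist job state in Redis or a database, and the handlers would be HTTP endpoints).

```python
import threading
import time
import uuid

# In-memory job store; a real deployment would use Redis or a database
# so state survives restarts and is shared across instances.
jobs = {}

def run_inference(job_id, prompt):
    """Background worker: simulates a slow LLM call, then records the result."""
    time.sleep(0.1)  # stand-in for a multi-minute inference call
    jobs[job_id]["result"] = f"completion for: {prompt}"
    jobs[job_id]["status"] = "done"

def submit_job(prompt):
    """Short-lived 'submit' handler: starts the work and returns a job ID immediately."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}
    threading.Thread(target=run_inference, args=(job_id, prompt), daemon=True).start()
    return job_id

def poll_job(job_id):
    """Polling handler: returns the current status, plus the result once done."""
    return jobs[job_id]

# Client flow: one quick submit request, then cheap polling requests.
job_id = submit_job("Explain quantum tunnelling")
while poll_job(job_id)["status"] != "done":
    time.sleep(0.05)
print(poll_job(job_id)["result"])
```

Because every individual HTTP exchange here completes in milliseconds, the 100-second platform timeout never comes into play, no matter how long inference takes. The webhook variant simply replaces the polling loop with an outbound POST to a client-supplied URL when the worker finishes.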