Changelog #81 - Nvidia A100 and H100 GPUs Available in Dallas, Custom Request Timeout for Scale Plan Users, and more

Hello, and welcome to this week’s changelog update! Let’s dive into what’s new:

  1. Nvidia A100 and H100 GPUs Available in Dallas

    You can now deploy Services on Nvidia A100 and H100 GPUs in the Dallas region on Koyeb. This brings H100 GPU deployments to North America and increases our capacity on the continent.

  2. Custom Request Timeout for Scale Plan Users

    Scale plan users can now allow requests to run for up to 900 seconds before timing out, ideal for Services handling long-running requests. This provides far more flexibility than the previous 100-second limit. Reach out to us to enable this option.
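To take advantage of the longer server-side limit, clients should raise their own timeouts to match. Here is a minimal sketch using Python's standard library; the URL and function name are placeholders, not part of Koyeb's API:

```python
import urllib.request

# With the Scale plan option enabled, requests may run for up to 900
# seconds server-side, so the client-side ceiling can be raised to match.
REQUEST_TIMEOUT_SECONDS = 900  # previously capped at 100 seconds

def fetch_long_running(url: str) -> bytes:
    # `timeout` is the client-side ceiling; the Scale plan option is what
    # raises the server-side limit to 900 seconds.
    with urllib.request.urlopen(url, timeout=REQUEST_TIMEOUT_SECONDS) as resp:
        return resp.read()
```

Without a matching client-side timeout, a long-running request would still be cut off by the client before the server's 900-second window elapses.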

  3. Deploy Larger Docker Images to GPUs

    You can now deploy Docker images up to 150GB (compressed), up from the previous 100GB limit. This lets you run workloads with larger images, such as those bundling large models.
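The limit applies to the compressed image size, which you can check before deploying. A minimal sketch (not Koyeb tooling): sum the compressed layer sizes from an image manifest, such as the JSON returned by `docker manifest inspect <image>`, where each layer entry carries its compressed `size` in bytes:

```python
MAX_COMPRESSED_BYTES = 150 * 10**9  # 150GB compressed limit (was 100GB)

def compressed_size(manifest: dict) -> int:
    # OCI/Docker manifests list each layer with its compressed size in bytes.
    return sum(layer["size"] for layer in manifest.get("layers", []))

def fits_limit(manifest: dict) -> bool:
    return compressed_size(manifest) <= MAX_COMPRESSED_BYTES

# Hypothetical manifest fragment with two layers (sizes in bytes):
example = {"layers": [{"size": 30 * 10**9}, {"size": 45 * 10**9}]}
print(compressed_size(example))  # 75000000000
print(fits_limit(example))       # True
```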

  4. Next-Gen AI Infra with Tenstorrent & LlamaIndex in San Francisco

    AI builders in San Francisco: join us on Wednesday, March 5 for a meetup diving into building on next-generation AI hardware. We’ll be there with our friends from Tenstorrent and LlamaIndex. Registration is required.