Hello, and welcome to this week’s changelog update! Let’s dive into what’s new:
- Performance improvements for Dockerfile-based builds
  We've improved the performance of Dockerfile-based builds: they are now up to 2 times faster, which is especially noticeable when building large images from a Dockerfile.
- Fixed secret interpolation at build time
  We fixed an issue that caused deployment builds using secrets to fail: when a secret was referenced through interpolation in an environment variable, its value was missing at build time.
- Nvidia A100 GPUs in self-service for Starter organizations
  Starter organizations can now deploy on Nvidia A100 GPUs in self-service. We removed the constraint that required Starter organizations to talk to us before deploying on these GPUs.
- New tutorial: Fine-Tune MistralAI and Evaluate the Fine-Tuned Model on Koyeb Serverless GPUs
  Last week, you learned how to fine-tune a Llama 3.1 8B model. This week, Nuno shows you how to fine-tune MistralAI and evaluate the fine-tuned model on Koyeb Serverless GPUs. By the end of the tutorial, you'll be able to experiment with your own datasets and see how fine-tuning improves performance in your area of interest! For a rough idea of what that workflow looks like in code, see the sketch below.
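If you'd like a feel for what this kind of fine-tuning and evaluation loop looks like, here is a minimal Python sketch using Hugging Face transformers, peft, and datasets. The base checkpoint (mistralai/Mistral-7B-v0.3), the openassistant-guanaco dataset, and the hyperparameters are illustrative assumptions, not the tutorial's exact setup, so follow the tutorial for the full recipe.

```python
# Minimal sketch: LoRA fine-tuning of a Mistral model, then a quick
# evaluation pass on a held-out split. Checkpoint, dataset, and
# hyperparameters are assumptions for illustration only.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "mistralai/Mistral-7B-v0.3"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)
# Wrap the base model with LoRA adapters so only a small set of weights is trained.
model = get_peft_model(
    model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
)

# Assumed instruction dataset with a single "text" column and train/test splits.
dataset = load_dataset("timdettmers/openassistant-guanaco")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = dataset["train"].map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)
eval_ds = dataset["test"].map(
    tokenize, batched=True, remove_columns=dataset["test"].column_names
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mistral-finetuned",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()
print(trainer.evaluate())  # eval loss on the held-out split as a quick quality check
model.save_pretrained("mistral-finetuned")
```

Running this on a Koyeb Serverless GPU would follow the same pattern as any GPU deployment: package the script, deploy it on an A100-class instance, and swap in your own dataset to see how fine-tuning shifts the evaluation numbers.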