Changelog #109: Partial Updates Now Supported by the PATCH Endpoint of the Koyeb API, New Tutorial Using Ollama with Koyeb Sandboxes, and more

Hello, and welcome to this week’s changelog update! Let’s dive into what’s new:

  1. Partial Updates Now Supported by the PATCH Endpoint of the Koyeb API

    You can now provide a partial body with just the updates you want to make to a Service when using the Koyeb API’s PATCH endpoint (PATCH /v1/services/{id}). Prior to this change, you had to provide the entire service definition, making the endpoint effectively identical to PUT.
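    As a minimal sketch of the difference, here is an illustrative partial body next to the full definition previously required. The field names below are hypothetical, not the exact Koyeb service schema:

    ```python
    import json

    # Hypothetical full service definition, as previously required by both
    # PUT and PATCH. Field names are illustrative, not the exact Koyeb schema.
    full_body = {
        "definition": {
            "name": "my-service",
            "instance_types": [{"type": "small"}],
            "regions": ["fra"],
            "env": [{"key": "LOG_LEVEL", "value": "info"}],
        }
    }

    # With partial updates, PATCH /v1/services/{id} accepts only the fields
    # being changed; everything omitted is left untouched.
    patch_body = {
        "definition": {
            "instance_types": [{"type": "medium"}],
        }
    }

    print(json.dumps(patch_body))
    ```

    You could then send `patch_body` to `PATCH /v1/services/{id}` with your usual HTTP client and a bearer token, without restating the rest of the service definition.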

  2. New Tutorial: Use Ollama to Test Multiple Code Generation Models with Koyeb Sandboxes

    Want to know which models perform best for your use case? In this tutorial, you’ll learn how to run Ollama in Koyeb Sandboxes to generate code using multiple AI models simultaneously, allowing you to easily compare results across models.

    Check out the tutorial on the Koyeb website.
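    As a rough sketch of the approach (the model names and local endpoint below are assumptions; the tutorial's actual setup may differ), the same prompt can be sent to several models through Ollama's `/api/generate` endpoint and the outputs compared side by side:

    ```python
    import json
    import urllib.request

    # Assumptions: a local Ollama server and these model names are
    # illustrative; the Koyeb Sandboxes tutorial may use different ones.
    OLLAMA_URL = "http://localhost:11434/api/generate"
    MODELS = ["qwen2.5-coder", "codellama", "deepseek-coder"]

    def build_request(model: str, prompt: str) -> dict:
        # Ollama's /api/generate takes a model name and a prompt;
        # stream=False returns one JSON object instead of a stream.
        return {"model": model, "prompt": prompt, "stream": False}

    def generate(model: str, prompt: str) -> str:
        body = json.dumps(build_request(model, prompt)).encode()
        req = urllib.request.Request(
            OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    # To run against a local Ollama server (requires `ollama serve` and the
    # models pulled beforehand), uncomment:
    # for model in MODELS:
    #     print(f"--- {model} ---")
    #     print(generate(model, "Write a Python function that reverses a string."))
    ```

    Running each model against the same prompt this way makes it straightforward to compare the generated code across models.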

  3. New Video: Choosing a GPU for Your LLM - What to Consider When Benchmarking

    Our GPU benchmarks for LLMs are frequently used as a reference by developers deciding which hardware to select for their model. In our latest video, we break down what we measure, why it matters, and which numbers to pay attention to depending on your use case.

    Watch the video on YouTube to learn more about our benchmarking.

  4. Event Recap: AI Dev Meetup on Coding Agents with OpenAI and LangChain

    Last Tuesday, we kicked off our first AI developer meetup of 2026 with a packed room and over 350 signups! This was our first content-focused event since organizing AI Engineer Paris 2025, and it was a great night bringing the AI dev community together to share ideas and learn from some of the most exciting builders in the space.

    Read the event recap on the Koyeb blog. Check out the Koyeb Luma page to find out about upcoming events, including our next developer meetup on February 19.