Cannot deploy Open-WebUI properly via Docker

Your default deployment for Open-WebUI — Deploy Open WebUI One-Click App - Koyeb — is not using the correct Docker parameters.

Specifically, if the instance is restarted, the storage is not persistent. The deployment should pass the following argument to the docker command:

-v open-webui:/app/backend/data

Full recommended deployment:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
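A quick way to confirm locally that the named volume is doing its job (standard Docker commands; run after the container has started once):

docker volume inspect open-webui   # the volume created by the -v flag
docker restart open-webui          # data under /app/backend/data survives the restart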

When I try to put them into the args field it doesn’t work, because your buggy interface rewrites what I write, adding backslashes and other escaping. Even after I managed to get the args accepted and verified them via the preview, the service still doesn’t launch correctly.

This is super annoying.

Could you please either fix the deployment for Open-WebUI or tell me how I can provide those args correctly so that it works?

Alternatively, how can I make your deployment run the https://github.com/open-webui/open-webui/blob/main/run.sh script?

When I specify it in the command or entrypoint field, I get an error as well.

Failed deployment, FYI: edff7015-6331-456b-9ed7-388619c6c3d4

Hi @Dmitry_Paranyushkin

To persist the Open WebUI configuration, you will need to add a Koyeb Volume to your service.

By default, Open WebUI uses a SQLite database to store the application state. Optionally, you can instead create a Koyeb Postgres service and use it as the database for your Open WebUI service by setting the DATABASE_URL environment variable to the database connection string.
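For example (hypothetical credentials; use the connection string from your Koyeb Postgres service):

DATABASE_URL=postgresql://user:password@host:5432/openwebui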

Open WebUI is launched with Ollama, and models are not persisted by default, which means you will have to re-pull them after each new deployment, for instance.

To avoid this behavior, you can bundle the model in the container image. Here is a quick example of how to do that using the CLI:

Dockerfile:

FROM ghcr.io/open-webui/open-webui:latest-ollama

# Model to bundle into the image (can be overridden at build time)
ARG MODEL_NAME=gemma2:2b

# Keep the model loaded in memory indefinitely
ENV OLLAMA_KEEP_ALIVE=-1
# Store pulled models inside the image
ENV OLLAMA_MODELS=/ollama/models

# Start the Ollama server in the background, wait for it to come up, then pull the model
RUN ollama serve & sleep 5 && ollama pull $MODEL_NAME
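If you want to sanity-check the image locally before deploying, a standard Docker build works (the image tag here is just an example):

docker build --build-arg MODEL_NAME=gemma2:2b -t openwebui-bundled .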

Create a Volume to persist the Open WebUI configuration

koyeb volume create openwebui-data --region fra --size 10

Create and deploy the service

koyeb deploy . openwebui/demo \
    --instance-type gpu-nvidia-a100 \
    --region fra \
    --checks 8000:tcp \
    --checks-grace-period 8000=300 \
    --type web \
    --archive-builder docker \
    --archive-docker-dockerfile Dockerfile \
    --env MODEL_NAME=gemma2:2b \
    --volumes openwebui-data:/app/backend/data
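Once the deployment is healthy, the UI should be reachable on the service's public URL; a quick check (the hostname below is a placeholder for whatever Koyeb assigns your service):

curl -I https://<your-service>-<your-org>.koyeb.app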

Let me know if anything is not clear.

Can you give an example of how you would do that with Ollama as a deployment by itself? I have Open WebUI deployed on a VPS, and just want to connect it to Ollama hosted on Koyeb (for the GPU capabilities).

@Tim_Considine the process is similar to the one explained in the previous message, but it uses the Ollama Docker image and removes the configuration related to Open WebUI.

Dockerfile:

FROM ollama/ollama

ARG MODEL_NAME=gemma2:2b

ENV OLLAMA_KEEP_ALIVE=-1
ENV OLLAMA_MODELS=/ollama/models

# Start the Ollama server in the background, wait for it to come up, then pull the model
RUN ollama serve & sleep 5 && ollama pull $MODEL_NAME

Create and deploy the service

koyeb deploy . openwebui/demo \
    --instance-type gpu-nvidia-a100 \
    --region fra \
    --checks 8000:tcp \
    --checks-grace-period 8000=300 \
    --type web \
    --archive-builder docker \
    --archive-docker-dockerfile Dockerfile \
    --env MODEL_NAME=gemma2:2b
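On the VPS side, you can then point your existing Open WebUI at the Koyeb-hosted Ollama by setting its OLLAMA_BASE_URL environment variable to the service's public URL (the hostname below is a placeholder):

OLLAMA_BASE_URL=https://<your-service>-<your-org>.koyeb.app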

I get:

Instance is starting... Waiting for health checks to pass.
TCP health check failed on port 8000.

I changed it to port 11434 (Ollama's default listen port) and it works now (subject to testing)
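For anyone else hitting this: the plain ollama/ollama image listens on 11434 by default, so the health-check flags from the deploy command above become:

    --checks 11434:tcp \
    --checks-grace-period 11434=300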
