When I try to put them into the args field it doesn't work, because your buggy interface rewrites what I write and adds backslashes and other stuff. Even after I managed to get the args in and verified them via the preview, it still doesn't launch correctly.
This is super annoying.
Could you please either fix the deployment for Open-WebUI or tell me how I can provide those args correctly so that it works?
Alternatively, how can I make your deployment run the https://github.com/open-webui/open-webui/blob/main/run.sh file?
When I specify it in the command or entrypoint field, I get an error as well.
To persist the Open WebUI configuration, you will need to add a Koyeb Volume to your service.
By default, Open WebUI uses a SQLite database to store the application state. Optionally, you can create a Koyeb Postgres service and use it as the database for your Open WebUI service by setting the DATABASE_URL environment variable to the database connection string.
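For illustration, the connection string follows the standard Postgres URL format (every value below is a placeholder for the credentials shown on your Koyeb Postgres service page):

DATABASE_URL=postgresql://user:password@host.koyeb.app:5432/openwebui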
Open WebUI is launched with Ollama, but models are not persisted by default, which means you will have to re-pull them after each new deployment, for instance.
To avoid this behavior, you can bundle the model in the container. Here is a quick example of how to do that using the CLI:
Dockerfile:

FROM ghcr.io/open-webui/open-webui:latest-ollama
# Model to bake into the image; override at build time with --build-arg MODEL_NAME=...
ARG MODEL_NAME=gemma2:2b
# Keep the model loaded in memory indefinitely
ENV OLLAMA_KEEP_ALIVE=-1
# Store models at a custom path so the pulled model ships inside the image
ENV OLLAMA_MODELS=/ollama/models
# Start the Ollama server in the background, give it a few seconds to come up, then pull the model
RUN ollama serve & sleep 5 && ollama pull $MODEL_NAME
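To illustrate how you would use it (the image name is a placeholder), build and push the image, optionally overriding the bundled model at build time:

docker build -t registry.example.com/openwebui-ollama:latest --build-arg MODEL_NAME=gemma2:2b .
docker push registry.example.com/openwebui-ollama:latest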
Create a Volume to persist the Open WebUI configuration
koyeb volume create openwebui-data --region fra --size 10
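The volume then needs to be attached to your service and mounted at Open WebUI's data directory, /app/backend/data. A sketch, assuming your CLI version exposes volume attachment through a --volumes flag on koyeb services update (the app/service name is a placeholder; check koyeb services update --help for the exact syntax):

koyeb services update my-app/openwebui --volumes openwebui-data:/app/backend/data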
Can you give an example of how you would do that with Ollama as a deployment by itself? I have Open WebUI deployed on a VPS and just want to connect it to Ollama hosted on Koyeb (for the GPU capabilities).
@Tim_Considine the process is similar to the one explained in the previous message, but it uses the Ollama Docker image and removes the configuration related to Open WebUI.
Dockerfile:
FROM ollama/ollama
# Model to bake into the image; override at build time with --build-arg MODEL_NAME=...
ARG MODEL_NAME=gemma2:2b
# Keep the model loaded in memory indefinitely
ENV OLLAMA_KEEP_ALIVE=-1
# Store models at a custom path so the pulled model ships inside the image
ENV OLLAMA_MODELS=/ollama/models
# Start the Ollama server in the background, give it a few seconds to come up, then pull the model
RUN ollama serve & sleep 5 && ollama pull $MODEL_NAME
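Once that Ollama service is deployed on Koyeb, you can point the Open WebUI instance on your VPS at it via the OLLAMA_BASE_URL environment variable. A sketch (the URL is a placeholder for your service's public Koyeb URL; Ollama listens on port 11434, so the service needs to expose that port):

OLLAMA_BASE_URL=https://your-ollama-service.koyeb.app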