Docker daemon access

Hello! I’m using the Docker SDK for Python and I’m having trouble accessing the Docker daemon.

I have already enabled privileged access for Docker.

The error occurs exactly on the docker connection line:

client = docker.from_env()

The error message:
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))

If I understand correctly, it is due to not finding the docker daemon/socket.
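Yes, that FileNotFoundError means the client could not find the daemon’s socket. A quick stdlib check (a sketch assuming the default Linux socket path, which is what the SDK falls back to when DOCKER_HOST is unset):

```python
import os

def docker_socket_present(path="/var/run/docker.sock"):
    """Return True if the Docker daemon's Unix socket exists at `path`."""
    return os.path.exists(path)

# If this prints False, docker.from_env() will fail with the
# FileNotFoundError shown above.
print(docker_socket_present())
```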

The docker API documentation for from_env defines:
The environment variables used are the same as those used by the Docker command-line client. They are:

DOCKER_HOST:
The URL to the Docker host.

DOCKER_TLS_VERIFY:
Verify the host against a CA certificate.

DOCKER_CERT_PATH:
A path to a directory containing TLS certificates to use when connecting to the Docker host.

Parameters:

  • version (str) – The version of the API to use. Set to auto to automatically detect the server’s version. Default: auto
  • timeout (int) – Default timeout for API calls, in seconds.
  • max_pool_size (int) – The maximum number of connections to save in the pool.
  • environment (dict) – The environment to read environment variables from. Default: the value of os.environ
  • credstore_env (dict) – Override environment variables when calling the credential store process.
  • use_ssh_client (bool) – If set to True, an ssh connection is made via shelling out to the ssh client. Ensure the ssh client is installed and configured on the host.

Reference link: Client — Docker SDK for Python 7.0.0 documentation


Which environment variables do I actually need to set, and with what values?

Or could the problem be of another nature?
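To answer the direct question: none of those variables are mandatory. If DOCKER_HOST is unset, from_env() falls back to the default local socket (unix:///var/run/docker.sock on Linux). A simplified sketch of that resolution logic, not the SDK’s actual code:

```python
import os

DEFAULT_SOCKET = "unix:///var/run/docker.sock"  # Linux default

def resolve_docker_host(environment=None):
    """Simplified illustration of how from_env() picks the daemon URL:
    DOCKER_HOST wins if set, otherwise the default Unix socket is used."""
    env = os.environ if environment is None else environment
    return env.get("DOCKER_HOST") or DEFAULT_SOCKET

print(resolve_docker_host({"DOCKER_HOST": "tcp://127.0.0.1:2375"}))
print(resolve_docker_host({}))  # falls back to the default socket
```

So if a daemon were running locally on the default socket, no variables would need to be set at all; the error suggests the daemon simply is not running (or the socket is not mounted) where the code executes.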

Hello,

Did you start the docker daemon?

Setting the privileged flag for your service is not enough. Without the flag you can’t start the Docker daemon at all, but enabling it doesn’t start the daemon for you: you still have to start it yourself :slight_smile:

To start the daemon, you can replicate what we do in our image koyeb/docker-compose: GitHub - koyeb/koyeb-docker-compose

  1. In your Dockerfile, inherit docker:dind with FROM docker:dind
  2. Add an entrypoint which starts the docker daemon, similar to koyeb-docker-compose/koyeb-entrypoint.sh at master · koyeb/koyeb-docker-compose · GitHub
  3. In your Dockerfile, set ENTRYPOINT ["/entrypoint.sh"]
  4. Run your script in CMD.
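As a sketch, the four steps above might look like this in a Dockerfile (the entrypoint filename and the final script name are placeholders, not the exact names from the koyeb repository):

```dockerfile
# Sketch only: entrypoint.sh is assumed to start dockerd in the background
# and wait for /var/run/docker.sock before exec-ing "$@" (see the linked
# koyeb-entrypoint.sh for a real implementation).
FROM docker:dind

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
# Placeholder for your own script:
CMD ["python3", "your_script.py"]
```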

I confess I was a little confused by the combination of the GitHub tutorials and your comment here on this post.

In short, can I start the Docker daemon with just a Dockerfile, without needing docker-compose?

Because for me it would be simpler.

Anyway, I’m trying with Dockerfile and docker-compose, see:


My Dockerfile (Dockerfile.koyeb):

FROM docker:dind

WORKDIR /API

COPY ./dependencies ./dependencies
COPY ./models ./models
COPY ./openapi ./openapi
COPY ./migrations ./migrations
COPY ./orm ./orm
COPY ./routers ./routers
COPY ./schemas ./schemas
COPY ./tests ./tests
COPY ./utils ./utils
COPY ./filters ./filters
COPY ./static ./static
COPY database.py .
COPY compilers.py .
COPY constants.py .
COPY main.py .
COPY alembic.ini .
COPY koyeb-entrypoint.sh /koyeb-entrypoint.sh
COPY docker-compose.koyeb.yml .

ENTRYPOINT ["/koyeb-entrypoint.sh"]
CMD ["docker", "compose", "-f", "docker-compose.koyeb.yml", "up"]

My docker-compose (docker-compose.koyeb.yml):

version: "3.9"
services:
   fastapi:
     container_name: API
     env_file:
       - .env
     build:
       context: ./
       dockerfile: ./Dockerfile.koyeb
     command: bash -c "alembic upgrade head && gunicorn -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 --reload --forwarded-allow-ips=*"
     volumes:
       - static-volume:/API/static/
       - /var/run/docker.sock:/var/run/docker.sock
     ports:
       - "8000:8000"

volumes:
   static-volume:

Error:

Instance created. Preparing to start...
Network configuration propagated
Instance is starting. Propagating network configuration...
Internal deployment error. If the error persists, try to redeploy or contact us.
Instance stopped

Hello,

I apologize if my previous explanations were not clear.

From what I understand, you aim to deploy a Python application that requires the ability to execute Docker commands. To accomplish this, you need to create a Docker image that installs all necessary dependencies, initiates the Docker daemon, and then launches your application.

Fortunately, this can be achieved by using the koyeb/docker-compose base image (note that this does not require the use of Docker Compose). This base image is designed to start the Docker daemon and execute the specified Docker CMD.

Consider the deployment of the following Flask application:

  • in app.py
from flask import Flask


app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello from Koyeb :)'


if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8000)
  • in requirements.txt:
click==8.1.3
Flask==2.2.2
gunicorn==20.1.0
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.1
Werkzeug==2.2.2

Create a Dockerfile that uses koyeb/docker-compose as the base and installs your dependencies:

FROM koyeb/docker-compose

RUN apk add py3-pip

WORKDIR /app

COPY requirements.txt .

RUN python -m venv /venv && /venv/bin/pip install -r requirements.txt


COPY . /app

CMD ["/venv/bin/python", "app.py"]

This Dockerfile:

  • Inherits from koyeb/docker-compose, which pre-installs Docker and configures an ENTRYPOINT that starts the Docker daemon prior to executing the CMD.
  • Installs dependencies within a virtual environment.
  • Specifies the command to execute your application.

You can test this Dockerfile locally using the command: docker run --rm --privileged -p 8000:8000 $(docker build -q .)

Now, proceed to create a Koyeb service, expose port 8000, and enable the privileged flag (required to start the Docker daemon).
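Once the base image’s entrypoint has started the daemon, the from_env() call from the original post should succeed. A defensive reachability check you could run at startup (a sketch: it assumes the docker package may or may not be installed, and degrades to False either way):

```python
def daemon_reachable():
    """Return True only if the Docker SDK is installed and a running
    daemon answers a ping; return False in every other case."""
    try:
        import docker  # third-party package: pip install docker
    except ImportError:
        return False
    try:
        docker.from_env().ping()
        return True
    except Exception:  # DockerException, connection errors, etc.
        return False

print(daemon_reachable())
```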

I have set up an example repository at GitHub - brmzkw/test-koyeb at dind which you can use as a reference :slight_smile:


Worked well! Thank you again! :smile:

You clarified all my doubts!