Log exporter instance failing on health check

Hello, I’m struggling to get the log exporter working with the v1.0.0 image and Vector. Any help is appreciated.

I created a new service using the docker.io/koyeb/log-exporter:v1.0.0 image and set the KOYEB_SERVICE, KOYEB_TOKEN, SINK_TOML_LOG, and DEBUG environment variables following Koyeb’s documentation.
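For context, here is a minimal sketch of what the sink configuration passed to the exporter could look like for a Vector Loki sink. This is an assumption for illustration only: the input name, endpoint, labels, and credentials below are placeholders, not values from this post.

```toml
# Hypothetical Vector sink config for Grafana Loki; every value is a placeholder.
[sinks.grafanacloud]
type = "loki"                                   # Vector's built-in Loki sink
inputs = ["koyeb"]                              # assumed name of the exporter's stdin source
endpoint = "https://logs-prod-000.grafana.net"  # placeholder Grafana Cloud endpoint
encoding.codec = "json"
labels.app = "my-app"                           # placeholder stream label
auth.strategy = "basic"
auth.user = "000000"                            # placeholder Grafana Cloud user ID
auth.password = "${GRAFANA_API_KEY}"            # placeholder API key reference
```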

I don’t think the problem is with the Vector connection (to Grafana Loki), because I don’t receive any errors from Grafana. The service prints the following logs:

Instance allocated in nomad
s6-rc: info: service s6rc-fdholder: starting
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service s6rc-fdholder successfully started
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service check: starting
s6-rc: info: service check successfully started
s6-rc: info: service vector: starting
s6-rc: info: service vector successfully started
s6-rc: info: service koyeb-cli: starting
s6-rc: info: service koyeb-cli successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
√ Loaded ["/etc/vector"]
√ Component configuration
√ Health check "grafanacloud-*-*"
√ Health check "print"
-------------------------------------------
                                  Validated
2024-02-12T20:31:43.859079Z  INFO vector::app: Log level is enabled. level="vector=info,codec=info,vrl=info,file_source=info,tower_limit=info,rdkafka=info,buffers=info,lapin=info,kube=info"
2024-02-12T20:31:43.860163Z  INFO vector::app: Loading configs. paths=["/etc/vector"]
2024-02-12T20:31:43.870863Z  INFO vector::sources::file_descriptors: Capturing stdin.
2024-02-12T20:31:43.905714Z  INFO vector::topology::running: Running healthchecks.
2024-02-12T20:31:43.905848Z  INFO vector::topology::builder: Healthcheck passed.
2024-02-12T20:31:43.906730Z  INFO vector: Vector has started. debug="false" version="0.31.0" arch="x86_64" revision="0f13b22 2023-07-06 13:52:34.591204470"
2024-02-12T20:31:43.906924Z  INFO vector::app: API is disabled, enable by setting `api.enabled` to `true` and use commands like `vector top`.
{"host":"3326c0ad","message":"GET https://api.github.com/repos/koyeb/koyeb-cli/releases: 403 API rate limit exceeded for ip. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.) [rate reset in 13m20s]","source_type":"stdin","timestamp":"2024-02-12T20:31:43.976181416Z"}
2024-02-12T20:31:44.447921Z  INFO vector::topology::builder: Healthcheck passed.

And after some seconds, the service stops:

s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service koyeb-cli: stopping
s6-rc: info: service koyeb-cli successfully stopped
2024-02-12T20:32:48.629369Z  INFO vector::signal: Signal received. signal="SIGTERM"
2024-02-12T20:32:48.629983Z  INFO vector: Vector has stopped.
2024-02-12T20:32:48.632354Z  INFO vector::topology::running: Shutting down... Waiting on running components. remaining_components="grafanacloud-*-*" time_remaining="59 seconds left"
s6-rc: info: service vector successfully stopped
s6-rc: info: service check: stopping
s6-rc: info: service s6rc-fdholder: stopping
s6-rc: info: service check successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service s6rc-fdholder successfully stopped
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

And then the service fails with the message:

Instance has abruptly stopped. TCP health check on port 8000 failed, restart attempt 2. Instance failed to start.

I don’t know if I’m missing something in the log exporter service configuration. Should I use another port or change something else?

Hi @Vinicius_Victor_Giro

Thanks for sharing this with us! I think you are deploying the log exporter as a Web service. Can you try deploying it as a Worker instead?

No port should be exposed, because the log exporter only pushes data out. The health check fails because we cannot confirm that a service is bound on port 8000, which is the default port for a Koyeb Web service.
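For anyone hitting the same issue, the redeploy as a Worker can also be done from the Koyeb CLI. This is only a sketch: the app and service names, token, and sink value are placeholders, and the exact flags should be checked against the current CLI documentation.

```shell
# Sketch only: names and env values are placeholders.
# --type worker deploys the service without an exposed port,
# so no TCP health check is run against port 8000.
koyeb service create log-exporter \
  --app my-app \
  --docker docker.io/koyeb/log-exporter:v1.0.0 \
  --type worker \
  --env KOYEB_SERVICE=my-app/my-service \
  --env KOYEB_TOKEN="$KOYEB_TOKEN" \
  --env SINK_TOML_LOG="$SINK_TOML_LOG"
```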


Oh, I missed that :sweat:

Thank you for the help, it’s working now! :pray:
