Description
Issue submitter TODO list
- I've searched for already existing issues here
Describe the bug (actual behavior)
I wanted to migrate from the chart-provided Redis instance to my own external Redis. `redis.enabled` is set to `false`, and the `externalRedis.host`, `externalRedis.port` (default in values), `externalRedis.existingSecret`, `externalRedis.existingSecretKey`, `externalRedis.db`, and `externalRedis.ssl` options are set to correct values.
Deployments connecting to `127.0.0.1` (from logs):

- sentry-post-process-forward-errors
- sentry-ingest-consumer-events
- sentry-cron
- sentry-web
- sentry-worker
- sentry-worker-events
- sentry-worker-transactions

All of the above try to connect to `127.0.0.1` even though all the settings tell them not to (see Additional context).
After installing `dnsutils` in the pod (I set the command to `sleep` for debugging purposes), I checked that the hostname resolves correctly, which it does. Connecting and authenticating to the `externalRedis` host using `telnet` works without issues, so it is something with the workers, probably just those two.
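The two checks done with `dnsutils` and `telnet` (name resolution, then a raw TCP connect) can also be scripted. This is a minimal sketch; the helper name is mine, and the self-test below connects to a throwaway local listener so it runs anywhere — inside the pod you would point it at your `externalRedis` host and port instead:

```python
import socket

def check_redis_reachable(host: str, port: int, timeout: float = 3.0):
    """Resolve the hostname, then attempt a plain TCP connect."""
    # Step 1: DNS resolution (what nslookup/dig from dnsutils verifies)
    addrs = {info[4][0]
             for info in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)}
    # Step 2: TCP connect (what `telnet host port` verifies)
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return sorted(addrs)

# Self-contained demo against a local listener; in the pod you would call
# check_redis_reachable("external-sentry-redis", 6379) instead.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
print(check_redis_reachable("127.0.0.1", server.getsockname()[1]))  # ['127.0.0.1']
server.close()
```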
Expected behavior
The consumers connect to the `externalRedis.host` host.
values.yaml
```yaml
ingestConsumerEvents:
  enabled: true
  replicas: 1
  # concurrency: 4
  # env: []
  resources: {}
  #   requests:
  #     cpu: 300m
  #     memory: 500Mi
  affinity: {}
  nodeSelector: {}
  securityContext: {}
  containerSecurityContext: {}
  # tolerations: []
  # podLabels: {}
  # maxBatchSize: ""
  # logLevel: "info"
  # inputBlockSize: ""
  # maxBatchTimeMs: ""
  # it's better to use prometheus adapter and scale based on
  # the size of the rabbitmq queue
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    targetCPUUtilizationPercentage: 50
  sidecars: []
  topologySpreadConstraints: []
  # volumes: []
  livenessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 320
  # volumeMounts: []
  autoOffsetReset: "earliest"
  # noStrictOffsetReset: false

externalRedis:
  ## Hostname or ip address of external redis cluster
  ##
  host: "external-sentry-redis"
  port: 6379
  ## Just omit the password field if your redis cluster doesn't use password
  # password: redis
  existingSecret: redis-secret
  ## set existingSecretKey if key name inside existingSecret is different from 'redis-password'
  existingSecretKey: REDIS_PASSWD
  ## Integer database number to use for redis (This is an integer)
  db: 0
  ## Use ssl for the connection to Redis (True/False)
  ssl: false
```
Helm chart version
26.22.0
Steps to reproduce
- Have Sentry running with the chart's Redis
- Disable the chart's Redis: `redis.enabled: false`
- Set host, secret, db and ssl under `externalRedis`
- `helm upgrade sentry sentry/sentry --version 26.22.0 --timeout 20m`
Screenshots
No response
Logs
```
sentry.ingest.consumer.processors.Retriable: Error 111 connecting to 127.0.0.1:6379. Connection refused.
```
Additional context
The host in the Sentry ConfigMap is set correctly:
```python
SENTRY_OPTIONS["redis.clusters"] = {
    "default": {
        "hosts": {
            0: {
                "host": "external-sentry-redis",
                "password": os.environ.get("REDIS_PASSWORD", ""),
                "port": "6379",
                "db": "0"
            }
        }
    }
}
```
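That ConfigMap fragment is plain Python evaluated by Sentry's config file. Run standalone (with `SENTRY_OPTIONS` initialized here, since the real config defines it earlier), it shows nothing in this fragment points at `127.0.0.1` — so the loopback address the consumers use must come from somewhere else:

```python
import os

SENTRY_OPTIONS = {}  # defined earlier in the real config; stubbed here

SENTRY_OPTIONS["redis.clusters"] = {
    "default": {
        "hosts": {
            0: {
                "host": "external-sentry-redis",
                "password": os.environ.get("REDIS_PASSWORD", ""),
                "port": "6379",
                "db": "0",
            }
        }
    }
}

node = SENTRY_OPTIONS["redis.clusters"]["default"]["hosts"][0]
print(node["host"])                        # external-sentry-redis
print("127.0.0.1" in repr(SENTRY_OPTIONS))  # False
```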
The `BROKER_URL` environment variable is set correctly as well:

```
redis://:$(HELM_CHARTS_SENTRY_REDIS_PASSWORD_CONTROLLED)@external-sentry-redis:6379/0
```
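Parsing that URL confirms which endpoint the broker should be using. A sketch with a dummy password substituted for the `$(...)` placeholder, since that expansion happens at deploy time:

```python
from urllib.parse import urlparse

# Dummy password stands in for $(HELM_CHARTS_SENTRY_REDIS_PASSWORD_CONTROLLED),
# which Kubernetes expands at deploy time.
broker_url = "redis://:dummy-password@external-sentry-redis:6379/0"

parsed = urlparse(broker_url)
print(parsed.hostname)          # external-sentry-redis
print(parsed.port)              # 6379
print(parsed.path.lstrip("/"))  # db number: 0
```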
From what I can tell it's just ignoring the Redis settings.