If you have completed the Elasticsearch OpenTelemetry integration installation or Kubernetes installation but don't see data in New Relic, find your issue below and follow the solution steps.
Host-based deployments
How to check
```bash
sudo systemctl status otelcol-contrib
```
Resolution
- If the service is inactive, start it:
  ```bash
  sudo systemctl start otelcol-contrib
  ```
- If the service failed, fix the configuration errors and restart:
  ```bash
  sudo systemctl restart otelcol-contrib
  ```
How to check
```bash
sudo journalctl -u otelcol-contrib.service -f
```
Resolution
Review the log output and resolve the root cause (for example, connection problems, authentication failures, or permission issues).
Error sample: `dial tcp [::1]:9200: connect: connection refused`
Resolution
- Ensure the `endpoint` in `config.yaml` matches the Elasticsearch host and port.
- Confirm Elasticsearch is running and reachable from the collector host.
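For reference, the endpoint setting lives in the Elasticsearch receiver block of `config.yaml`, roughly like this (a sketch; the interval value is illustrative):

```yaml
receivers:
  elasticsearch:
    # Must match the host and port your cluster actually listens on.
    endpoint: http://localhost:9200
    collection_interval: 60s
```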
Error sample: `permanent error: 403 Forbidden`
Resolution
- Verify `NEWRELIC_LICENSE_KEY` in `/etc/systemd/system/otelcol-contrib.service.d/environment.conf`.
- Reload systemd and restart the collector:
  ```bash
  sudo systemctl daemon-reload
  sudo systemctl restart otelcol-contrib
  ```
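The systemd drop-in file typically looks like this (a sketch; the key value is a placeholder):

```ini
[Service]
Environment="NEWRELIC_LICENSE_KEY=YOUR_LICENSE_KEY"
```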
Error sample: `permission denied` or `cannot open file`
Resolution
- Add the collector user to the Elasticsearch group:
  ```bash
  sudo usermod -a -G elasticsearch otelcol-contrib
  ```
- Restart the collector:
  ```bash
  sudo systemctl restart otelcol-contrib
  ```
How to check
```bash
# Unsecured cluster
curl -I http://localhost:9200

# With authentication
curl -u username:password -k https://localhost:9200
```
Resolution
Verify the cluster is healthy, credentials are valid, and firewall or security settings permit access.
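If the cluster is reachable, the `_cluster/health` API is a quick health check. The snippet below parses a sample health response offline to show which field to look at (the JSON is illustrative, not from a live cluster):

```shell
# Hypothetical JSON like GET /_cluster/health returns; a healthy
# cluster reports "green" ("yellow" is normal for single-node setups).
health='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":1}'

# Pull out just the status field.
echo "$health" | grep -o '"status":"[a-z]*"'
# → "status":"green"
```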
Resolution
- Ensure the `resourcedetection` processor is included in every metrics pipeline.
- Verify `elasticsearch.cluster.name` is set via the `resource/cluster_name_override` processor.
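A pipeline sketch showing both processors in place (processor names follow the checks above; the receiver, exporter, and cluster-name values are assumptions):

```yaml
processors:
  resourcedetection:
    detectors: [system]
  resource/cluster_name_override:
    attributes:
      - key: elasticsearch.cluster.name
        value: my-cluster        # replace with your cluster name
        action: upsert

service:
  pipelines:
    metrics:
      receivers: [elasticsearch]
      processors: [resourcedetection, resource/cluster_name_override]
      exporters: [otlphttp]
```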
Resolution
- Confirm the `filelog` receiver paths are correct and absolute.
- Check that the logs pipeline includes both the `filelog` receiver and the `otlphttp` exporter.
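Putting those two checks together, a minimal logs-pipeline sketch (the log path is an example; use the absolute path your installation writes to):

```yaml
receivers:
  filelog:
    include:
      - /var/log/elasticsearch/*.log   # absolute paths only

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlphttp]
```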
Kubernetes deployments
How to check
```bash
# Verify your Elasticsearch pods have the required label
kubectl get pods -n <namespace> -l app=elasticsearch --show-labels
```
Resolution
If no pods are returned, your Elasticsearch pods are missing the required `app=elasticsearch` label; the `receiver_creator` cannot discover pods without matching labels.
- For a StatefulSet or Deployment, add the label in the pod template:
  ```yaml
  spec:
    template:
      metadata:
        labels:
          app: elasticsearch
  ```
- For existing pods, add the label and restart:
  ```bash
  kubectl label pods -l <your-selector> app=elasticsearch -n <namespace>
  kubectl rollout restart statefulset/elasticsearch -n <namespace>
  ```
- If you use custom labels, update the receiver rule in `values.yaml` to match:
  ```yaml
  rule: type == "pod" && labels["app"] == "your-custom-label"
  ```
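For context, that rule sits inside the `receiver_creator` block of `values.yaml`, roughly like this (a sketch; surrounding keys may differ in your chart, and the `` `endpoint` `` placeholder is filled in by the observer at runtime):

```yaml
receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    receivers:
      elasticsearch:
        rule: type == "pod" && labels["app"] == "elasticsearch"
        config:
          endpoint: http://`endpoint`:9200
```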
How to check
```bash
kubectl get pods -n newrelic
kubectl describe pod <collector-pod-name> -n newrelic
```
Resolution
- Check pod events for errors with `kubectl describe pod`.
- Review collector logs:
  ```bash
  kubectl logs -n newrelic -l app.kubernetes.io/name=opentelemetry-collector
  ```
- Verify the secret exists:
  ```bash
  kubectl get secret newrelic-licenses -n newrelic
  ```
- Check that resource limits aren't set too low.
How to check
```bash
# Check collector logs for discovery errors
kubectl logs -n newrelic -l app.kubernetes.io/name=opentelemetry-collector | grep "receiver_creator"
```
Resolution
- Verify RBAC permissions are correctly set:
  ```bash
  kubectl get clusterrole | grep opentelemetry
  kubectl describe clusterrole <role-name>
  ```
- Ensure the collector has permissions to watch pods, nodes, and endpoints.
- Check that the `k8s_observer` extension is enabled in the config.
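Those permissions amount to a ClusterRole roughly like this (a sketch of the verbs the observer needs; the name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-observer   # illustrative name
rules:
  - apiGroups: [""]
    resources: [pods, nodes, endpoints]
    verbs: [get, list, watch]
```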
How to check
```bash
# Check network policies
kubectl get networkpolicies -n <namespace>

# Test connectivity from the collector to Elasticsearch
kubectl exec -n newrelic <collector-pod> -- curl http://<es-pod-ip>:9200
```
Resolution
- Verify network policies allow traffic from the `newrelic` namespace to your Elasticsearch namespace.
- Check that Elasticsearch pods expose the correct port (default: 9200).
- Ensure no firewall rules block inter-pod communication.
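If a policy is the blocker, an allow rule for traffic from the `newrelic` namespace looks roughly like this (a sketch; the names and namespace are assumptions to adjust for your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-otel-collector   # illustrative name
  namespace: elasticsearch     # your Elasticsearch namespace
spec:
  podSelector:
    matchLabels:
      app: elasticsearch
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: newrelic
      ports:
        - protocol: TCP
          port: 9200
```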
Error sample: `permanent error: 403 Forbidden`
Resolution
- Verify the secret contains the correct license key:
  ```bash
  kubectl get secret newrelic-licenses -n newrelic -o jsonpath='{.data.NEWRELIC_LICENSE_KEY}' | base64 -d
  ```
- Ensure the OTLP endpoint is correct for your region.
- Check that the secret is mounted in the collector pod:
  ```bash
  kubectl describe pod <collector-pod> -n newrelic | grep -A5 "Environment"
  ```
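As a reminder of why the `base64 -d` step is needed: Kubernetes stores secret values base64-encoded, so the decode must round-trip cleanly. A self-contained illustration with a placeholder key:

```shell
# Encode a placeholder key the way Kubernetes stores it...
encoded=$(printf '%s' 'NRAK-EXAMPLEKEY' | base64)

# ...then decode it, as the kubectl command above does.
printf '%s' "$encoded" | base64 -d
# → NRAK-EXAMPLEKEY
```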
Resolution
- Verify you're using `mode: daemonset` (deployment mode cannot access node logs).
- Check that volume mounts are correctly configured:
  ```bash
  kubectl describe pod <collector-pod> -n newrelic | grep -A10 "Mounts"
  ```
- Verify the `filelog` receiver path matches your Elasticsearch pod logs:
  ```bash
  kubectl exec -n newrelic <collector-pod> -- ls /var/log/pods/*/elasticsearch*/*.log
  ```
- Ensure the collector has read permissions on the host log directories.
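In the collector chart's `values.yaml`, the node log directories are typically mounted via hostPath volumes, roughly like this (key names follow the OpenTelemetry collector Helm chart; verify against your chart version):

```yaml
mode: daemonset
extraVolumes:
  - name: varlogpods
    hostPath:
      path: /var/log/pods
extraVolumeMounts:
  - name: varlogpods
    mountPath: /var/log/pods
    readOnly: true
```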
Resolution
- Verify the `K8S_CLUSTER_NAME` environment variable is set in `values.yaml`.
- Check that the `resource/cluster` processor is in the metrics pipeline.
- Query to verify:
  ```sql
  FROM Metric SELECT * WHERE metricName LIKE 'elasticsearch.%' LIMIT 1
  ```
- Check whether the `k8s.cluster.name` attribute is present.
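The attribute is typically stamped on metrics by the `resource/cluster` processor reading that environment variable, roughly like this (a sketch):

```yaml
processors:
  resource/cluster:
    attributes:
      - key: k8s.cluster.name
        value: ${env:K8S_CLUSTER_NAME}
        action: upsert
```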