Monitor your self-hosted Elasticsearch cluster by installing the OpenTelemetry Collector directly on servers or virtual machines. New Relic provides flexible deployment options to match your infrastructure setup and monitoring requirements.
You can choose between two collector distributions:
- NRDOT: New Relic Distribution of OpenTelemetry
- OTel Collector Contrib: Standard OpenTelemetry Collector with community-contributed components
Installation options
Choose the collector distribution that matches your needs:
Important
NRDOT support for Elasticsearch monitoring is coming soon; stay tuned for updates.
Before you begin
Before configuring the OTel Collector Contrib, ensure you have:
Required access privileges:
- Your New Relic license key
- Root or sudo privileges on the host machine
- Elasticsearch credentials with the `monitor` or `manage` cluster privilege (see the Elasticsearch security privileges documentation for details)
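For a least-privilege setup, you can create a dedicated role and user for the collector instead of reusing an admin account. The sketch below uses the Elasticsearch security API in Kibana Dev Tools syntax; the role name, user name, and password are placeholders:

```
POST /_security/role/otel_monitor
{
  "cluster": ["monitor"],
  "indices": [
    {"names": ["*"], "privileges": ["monitor"]}
  ]
}

POST /_security/user/otel_collector
{
  "password": "choose-a-strong-password",
  "roles": ["otel_monitor"]
}
```

You would then point the receiver's `username` and `password` settings at this user.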
System requirements:
- Elasticsearch version 7.16 or higher - This integration requires a modern Elasticsearch cluster
- Network connectivity - Outbound HTTPS (port 443) to New Relic's OTLP ingest endpoint
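To confirm the version requirement, query the cluster root endpoint (`curl -s http://localhost:9200`) and compare the reported version. The snippet below shows the comparison logic against a canned sample response standing in for your cluster's actual reply:

```bash
# Canned sample standing in for: curl -s http://localhost:9200
RESPONSE='{"version":{"number":"8.11.3"}}'
echo "$RESPONSE" | python3 -c '
import sys, json
ver = json.load(sys.stdin)["version"]["number"]
major, minor = (int(x) for x in ver.split(".")[:2])
# The elasticsearch receiver requires Elasticsearch 7.16 or higher
print("supported" if (major, minor) >= (7, 16) else "unsupported")
'
```

For the 8.11.3 sample this prints `supported`; anything below 7.16 prints `unsupported`.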
Configuration values ready:
- Elasticsearch endpoint - Your Elasticsearch cluster URL (for example, `http://localhost:9200`)
- Cluster name - A unique identifier for your cluster in New Relic
Important
You must have OpenTelemetry Collector Contrib installed on your host before proceeding. We recommend installing via official packages (.deb or .rpm) to ensure the systemd service unit is created correctly.
Configure Elasticsearch monitoring
Once the OTel Collector Contrib is installed, replace the collector's default configuration file with the Elasticsearch monitoring configuration. This will enable Elasticsearch metrics collection. Host metrics and logs are optional and can be added separately.
The configuration file is located at: /etc/otelcol-contrib/config.yaml
Tip
Back up your default configuration: Before modifying the configuration file, create a backup copy to preserve the default settings:

```bash
sudo cp /etc/otelcol-contrib/config.yaml /etc/otelcol-contrib/config.yaml.backup
```

To configure the collector:
Open the configuration file with a text editor using root or sudo privileges:

```bash
sudo nano /etc/otelcol-contrib/config.yaml
```

Delete all existing content and replace it with the following configuration for Elasticsearch monitoring:
Important
Replace the endpoint value with your Elasticsearch cluster endpoint and update elasticsearch.cluster.name in the processor block with a unique name to identify your cluster in New Relic.
```yaml
receivers:
  elasticsearch:
    endpoint: "http://localhost:9200"
    collection_interval: 15s
    metrics:
      elasticsearch.os.cpu.usage: {enabled: true}
      elasticsearch.cluster.data_nodes: {enabled: true}
      elasticsearch.cluster.health: {enabled: true}
      elasticsearch.cluster.in_flight_fetch: {enabled: true}
      elasticsearch.cluster.nodes: {enabled: true}
      elasticsearch.cluster.pending_tasks: {enabled: true}
      elasticsearch.cluster.shards: {enabled: true}
      elasticsearch.cluster.state_update.time: {enabled: true}
      elasticsearch.index.documents: {enabled: true}
      elasticsearch.index.operations.merge.current: {enabled: true}
      elasticsearch.index.operations.time: {enabled: true}
      elasticsearch.node.cache.count: {enabled: true}
      elasticsearch.node.cache.evictions: {enabled: true}
      elasticsearch.node.cache.memory.usage: {enabled: true}
      elasticsearch.node.shards.size: {enabled: true}
      elasticsearch.node.cluster.io: {enabled: true}
      elasticsearch.node.documents: {enabled: true}
      elasticsearch.node.disk.io.read: {enabled: true}
      elasticsearch.node.disk.io.write: {enabled: true}
      elasticsearch.node.fs.disk.available: {enabled: true}
      elasticsearch.node.fs.disk.total: {enabled: true}
      elasticsearch.node.http.connections: {enabled: true}
      elasticsearch.node.ingest.documents.current: {enabled: true}
      elasticsearch.node.ingest.operations.failed: {enabled: true}
      elasticsearch.node.open_files: {enabled: true}
      elasticsearch.node.operations.completed: {enabled: true}
      elasticsearch.node.operations.current: {enabled: true}
      elasticsearch.node.operations.get.completed: {enabled: true}
      elasticsearch.node.operations.time: {enabled: true}
      elasticsearch.node.shards.reserved.size: {enabled: true}
      elasticsearch.index.shards.size: {enabled: true}
      elasticsearch.os.cpu.load_avg.1m: {enabled: true}
      elasticsearch.os.cpu.load_avg.5m: {enabled: true}
      elasticsearch.os.cpu.load_avg.15m: {enabled: true}
      elasticsearch.os.memory: {enabled: true}
      jvm.gc.collections.count: {enabled: true}
      jvm.gc.collections.elapsed: {enabled: true}
      jvm.memory.heap.max: {enabled: true}
      jvm.memory.heap.used: {enabled: true}
      jvm.memory.heap.utilization: {enabled: true}
      jvm.threads.count: {enabled: true}
      elasticsearch.index.segments.count: {enabled: true}
      elasticsearch.index.operations.completed: {enabled: true}
      elasticsearch.node.script.cache_evictions: {enabled: false}
      elasticsearch.node.cluster.connections: {enabled: false}
      elasticsearch.node.pipeline.ingest.documents.preprocessed: {enabled: false}
      elasticsearch.node.thread_pool.tasks.queued: {enabled: false}
      elasticsearch.cluster.published_states.full: {enabled: false}
      jvm.memory.pool.max: {enabled: false}
      elasticsearch.node.script.compilation_limit_triggered: {enabled: false}
      elasticsearch.node.shards.data_set.size: {enabled: false}
      elasticsearch.node.pipeline.ingest.documents.current: {enabled: false}
      elasticsearch.cluster.state_update.count: {enabled: false}
      elasticsearch.node.fs.disk.free: {enabled: false}
      jvm.memory.nonheap.used: {enabled: false}
      jvm.memory.pool.used: {enabled: false}
      elasticsearch.node.translog.size: {enabled: false}
      elasticsearch.node.thread_pool.threads: {enabled: false}
      elasticsearch.cluster.state_queue: {enabled: false}
      elasticsearch.node.translog.operations: {enabled: false}
      elasticsearch.memory.indexing_pressure: {enabled: false}
      elasticsearch.node.ingest.documents: {enabled: false}
      jvm.classes.loaded: {enabled: false}
      jvm.memory.heap.committed: {enabled: false}
      elasticsearch.breaker.memory.limit: {enabled: false}
      elasticsearch.indexing_pressure.memory.total.replica_rejections: {enabled: false}
      elasticsearch.breaker.memory.estimated: {enabled: false}
      elasticsearch.cluster.published_states.differences: {enabled: false}
      jvm.memory.nonheap.committed: {enabled: false}
      elasticsearch.node.translog.uncommitted.size: {enabled: false}
      elasticsearch.node.script.compilations: {enabled: false}
      elasticsearch.node.pipeline.ingest.operations.failed: {enabled: false}
      elasticsearch.indexing_pressure.memory.limit: {enabled: false}
      elasticsearch.breaker.tripped: {enabled: false}
      elasticsearch.indexing_pressure.memory.total.primary_rejections: {enabled: false}
      elasticsearch.node.thread_pool.tasks.finished: {enabled: false}

processors:
  memory_limiter:
    check_interval: 60s
    limit_mib: ${env:NEW_RELIC_MEMORY_LIMIT_MIB:-100}
  cumulativetodelta: {}
  resource/cluster_name_override:
    attributes:
      - key: elasticsearch.cluster.name
        value: "<elasticsearch-cluster-name>"
        action: upsert
  resourcedetection:
    detectors: [system]
    system:
      resource_attributes:
        host.name: {enabled: true}
        host.id: {enabled: true}
        os.type: {enabled: true}
  batch:
    timeout: 10s
    send_batch_size: 1024
  attributes/cardinality_reduction:
    actions:
      - key: process.pid
        action: delete
      - key: process.parent_pid
        action: delete
  transform/metadata_nullify:
    metric_statements:
      - context: metric
        statements:
          - set(description, "")
          - set(unit, "")

exporters:
  otlphttp:
    endpoint: ${env:NEWRELIC_OTLP_ENDPOINT}
    headers:
      api-key: ${env:NEWRELIC_LICENSE_KEY}

service:
  pipelines:
    metrics/elasticsearch:
      receivers: [elasticsearch]
      processors: [memory_limiter, resourcedetection, resource/cluster_name_override, attributes/cardinality_reduction, cumulativetodelta, transform/metadata_nullify, batch]
      exporters: [otlphttp]
```

(Optional) For secured Elasticsearch with authentication and SSL, modify the receiver configuration:
```yaml
receivers:
  elasticsearch:
    endpoint: "https://localhost:9200"
    username: "your_elasticsearch_username"
    password: "your_elasticsearch_password"
    tls:
      ca_file: "/etc/elasticsearch/certs/http_ca.crt"
      insecure_skip_verify: false
    collection_interval: 15s
```

(Optional) To collect host metrics, add the hostmetrics receiver:
```yaml
receivers:
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu:
        metrics:
          system.cpu.utilization: {enabled: true}
          system.cpu.time: {enabled: true}
      load:
        metrics:
          system.cpu.load_average.1m: {enabled: true}
          system.cpu.load_average.5m: {enabled: true}
          system.cpu.load_average.15m: {enabled: true}
      memory:
        metrics:
          system.memory.usage: {enabled: true}
          system.memory.utilization: {enabled: true}
      disk:
        metrics:
          system.disk.io: {enabled: true}
          system.disk.operations: {enabled: true}
      filesystem:
        metrics:
          system.filesystem.usage: {enabled: true}
          system.filesystem.utilization: {enabled: true}
      network:
        metrics:
          system.network.io: {enabled: true}
          system.network.packets: {enabled: true}
      process:
        metrics:
          process.cpu.utilization: {enabled: true}
```

And add to the service pipelines:
```yaml
service:
  pipelines:
    metrics/host:
      receivers: [hostmetrics]
      processors: [memory_limiter, resourcedetection, batch]
      exporters: [otlphttp]
```

(Optional) To collect Elasticsearch logs, add the filelog receiver. Ensure the user running the collector service (otelcol-contrib) has read access to your Elasticsearch log files.
If running Elasticsearch on Linux (Host):
```yaml
receivers:
  filelog:
    include:
      - /var/log/elasticsearch/elasticsearch.log
      - /var/log/elasticsearch/*.log
```

If running Elasticsearch in Docker:
```yaml
receivers:
  filelog:
    include:
      - /var/lib/docker/containers/*/*.log
    operators:
      - type: move
        from: attributes.log
        to: body
```

And add to the service pipelines:
```yaml
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [resource/cluster_name_override]
      exporters: [otlphttp]
```

(Optional) To add custom metadata tags to your metrics, use the `resource/static_override` processor:

```yaml
processors:
  resource/static_override:
    attributes:
      - key: env
        value: "production"
        action: upsert

service:
  pipelines:
    metrics/elasticsearch:
      receivers: [elasticsearch]
      processors: [memory_limiter, resourcedetection, resource/cluster_name_override, resource/static_override, attributes/cardinality_reduction, cumulativetodelta, transform/metadata_nullify, batch]
      exporters: [otlphttp]
```

Save the configuration file.
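While validating the setup, it can also help to temporarily echo metrics to the collector's own log with the `debug` exporter so you can see data locally before it reaches New Relic. A sketch (remove the exporter once data is flowing; the pipeline shown assumes the base configuration above):

```yaml
exporters:
  debug:
    verbosity: basic

service:
  pipelines:
    metrics/elasticsearch:
      receivers: [elasticsearch]
      processors: [memory_limiter, resourcedetection, resource/cluster_name_override, attributes/cardinality_reduction, cumulativetodelta, transform/metadata_nullify, batch]
      exporters: [otlphttp, debug]
```

Exported metric summaries then appear in `journalctl -u otelcol-contrib.service`.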
Set the environment variables:
Create a systemd override directory:
```bash
sudo mkdir -p /etc/systemd/system/otelcol-contrib.service.d
```

Create the environment configuration file:
```bash
cat <<EOF | sudo tee /etc/systemd/system/otelcol-contrib.service.d/environment.conf
[Service]
Environment="NEWRELIC_OTLP_ENDPOINT=https://otlp.nr-data.net:4318"
Environment="NEWRELIC_LICENSE_KEY=YOUR_LICENSE_KEY_HERE"
Environment="NEW_RELIC_MEMORY_LIMIT_MIB=100"
EOF
```

Update the configuration with your values:
- Replace `https://otlp.nr-data.net:4318` with your region's endpoint (for EU accounts, `https://otlp.eu01.nr-data.net:4318`)
- Replace `YOUR_LICENSE_KEY_HERE` with your actual New Relic license key
- Replace `100` with your desired memory limit in MiB for the collector (default: 100 MiB); adjust based on your environment's needs
Restart the OTel Collector Contrib to apply changes:
```bash
sudo systemctl daemon-reload
sudo systemctl restart otelcol-contrib.service
```
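Recent collector builds also ship a `validate` subcommand that checks the configuration file without starting the service; whether your package includes it is an assumption, so the snippet guards for a missing binary:

```bash
# Check the config syntax if the binary is available on PATH
if command -v otelcol-contrib >/dev/null 2>&1; then
  otelcol-contrib validate --config=/etc/otelcol-contrib/config.yaml
else
  echo "otelcol-contrib not found on PATH"
fi
```

A clean exit means the YAML parsed and all referenced components exist in the build.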
Verify data collection
Verify that the OTel Collector Contrib is running and collecting data without errors:
Check the collector service status:
```bash
sudo systemctl status otelcol-contrib.service
```

Monitor the collector logs for any errors:

```bash
sudo journalctl -u otelcol-contrib.service -f
```

Look for successful connections to Elasticsearch and New Relic. If you see errors, refer to the troubleshooting guide.
Tip
Correlate APM with Elasticsearch: To connect your APM application and Elasticsearch cluster, include the resource attribute es.cluster.name="your-cluster-name" in your APM metrics. This enables cross-service visibility and faster troubleshooting within New Relic.
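If your application is instrumented with OpenTelemetry, one way to attach that attribute without code changes is the standard resource-attributes environment variable (the cluster name below is a placeholder):

```bash
# Set before starting the instrumented APM service
export OTEL_RESOURCE_ATTRIBUTES="es.cluster.name=your-cluster-name"
echo "$OTEL_RESOURCE_ATTRIBUTES"
```

OpenTelemetry SDKs read this variable at startup and stamp the attribute onto every exported metric and span.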
View your Elasticsearch data
Once the collector is running and sending data, you can explore your Elasticsearch metrics, create custom queries, and set up monitoring dashboards in New Relic.
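As a starting point, a NRQL query along these lines charts cluster health over time; the metric and attribute names match the configuration above, and the cluster name is a placeholder:

```sql
FROM Metric SELECT latest(elasticsearch.cluster.health)
WHERE elasticsearch.cluster.name = 'your-cluster-name' TIMESERIES
```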
For detailed information on accessing your data, writing NRQL queries, and configuring alerts, see Find and query Elasticsearch data.
Troubleshooting
If you encounter issues during installation or don't see data in New Relic, see our comprehensive troubleshooting guide for step-by-step solutions to common problems.