
Confluent Cloud integration

New Relic offers an integration for collecting your Confluent Cloud managed streaming for Apache Kafka data. This document explains how to activate this integration and describes the data that can be reported.

Prerequisites

  • A New Relic account
  • An active Confluent Cloud account
  • A Confluent Cloud API key and secret
  • MetricsViewer access on the Confluent Cloud account

Activate integration

To enable this integration, go to Integrations & Agents, select Confluent Cloud -> API Polling and follow the instructions.

Important

If you have IP filtering set up, add the following IP ranges to your filter.

  • 162.247.240.0/22
  • 152.38.128.0/19

For more information about New Relic IP ranges for cloud integrations, refer to this document. For instructions on performing this task, refer to this document.
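If you maintain the allowlist programmatically, Python's standard `ipaddress` module can check whether a given source address falls inside the CIDR blocks listed above. This is a minimal sketch; the function name is illustrative, and the blocks are exactly the two ranges from this document.

```python
import ipaddress

# CIDR blocks New Relic polls from, per the list above.
NEW_RELIC_RANGES = [
    ipaddress.ip_network("162.247.240.0/22"),
    ipaddress.ip_network("152.38.128.0/19"),
]

def is_new_relic_address(addr: str) -> bool:
    """Return True if addr falls inside one of the allowlisted CIDR blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in NEW_RELIC_RANGES)

print(is_new_relic_address("162.247.241.10"))  # True: inside 162.247.240.0/22
print(is_new_relic_address("8.8.8.8"))         # False: not a New Relic range
```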

Configuration and polling

Default polling information for the Confluent Cloud Kafka integration:

  • New Relic polling interval: 5 minutes
  • Confluent Cloud data interval: 1 minute

You can change the polling frequency only during the initial configuration.

View and use data

You can query and explore your data using the Metric data type, reported for the following entities:

| Entity | Data type | Provider |
| --- | --- | --- |
| Cluster | Metric | Confluent |
| Connector | Metric | Confluent |
| ksql | Metric | Confluent |
| Compute Pool (Flink) | Metric | Confluent |

For more on how to use your data, see Understand and use integration data.
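One way to explore this data outside the UI is New Relic's NerdGraph API, which can run a NRQL query over the Metric data type. The sketch below builds the documented GraphQL envelope for a NRQL query; the account ID is a placeholder, and the `LIKE '%confluent%'` pattern is an assumption used to discover metric names, since the exact names stored in your account depend on the integration.

```python
import json
import urllib.request

NERDGRAPH_URL = "https://api.newrelic.com/graphql"

def build_nerdgraph_payload(account_id: int, nrql: str) -> dict:
    """Wrap a NRQL query in the NerdGraph GraphQL envelope."""
    query = """
    query($accountId: Int!, $nrql: Nrql!) {
      actor { account(id: $accountId) { nrql(query: $nrql) { results } } }
    }
    """
    return {"query": query, "variables": {"accountId": account_id, "nrql": nrql}}

# Discover which Confluent metric names your account actually receives.
nrql = ("SELECT uniques(metricName) FROM Metric "
        "WHERE metricName LIKE '%confluent%' SINCE 1 hour ago")
payload = build_nerdgraph_payload(1234567, nrql)  # 1234567 is a placeholder account ID

def run(payload: dict, api_key: str) -> dict:
    """POST the payload to NerdGraph with a User API key."""
    req = urllib.request.Request(
        NERDGRAPH_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "API-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```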

Metric data

This integration records Confluent Cloud Kafka data for clusters, connectors, ksqlDB, and Flink compute pools.

Cluster data

| Metric | Unit | Description |
| --- | --- | --- |
| cluster_load_percent | Percent | A measure of the utilization of the cluster. The value is between 0.0 and 1.0. Only dedicated-tier clusters have this metric. |
| hot_partition_ingress | Percent | An indicator of the presence of a hot partition caused by ingress throughput. The value is 1.0 when a hot partition is detected, and empty when no hot partition is detected. |
| hot_partition_egress | Percent | An indicator of the presence of a hot partition caused by egress throughput. The value is 1.0 when a hot partition is detected, and empty when no hot partition is detected. |
| request_bytes | Bytes | The delta count of total request bytes from the specified request types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. |
| response_bytes | Bytes | The delta count of total response bytes from the specified response types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. |
| received_bytes | Bytes | The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data point. The count is sampled every 60 seconds. |
| sent_bytes | Bytes | The delta count of bytes of the customer's data sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. |
| received_records | Count | The delta count of records received. Each sample is the number of records received since the previous data point. The count is sampled every 60 seconds. |
| sent_records | Count | The delta count of records sent. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds. |
| partition_count | Count | The number of partitions. |
| consumer_lag_offsets | Count | The lag, in offsets, between a group member's committed offset and the partition's high watermark. |
| successful_authentication_count | Count | The delta count of successful authentications. Each sample is the number of successful authentications since the previous data point. The count is sampled every 60 seconds. |
| active_connection_count | Count | The count of active authenticated connections. |
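Most of the throughput metrics above are 60-second delta counts, so charting a per-second rate means dividing each delta by the sample interval. A minimal sketch, using hypothetical `received_bytes` samples:

```python
SAMPLE_INTERVAL_SECONDS = 60  # Confluent Cloud samples these counters every 60 seconds

def deltas_to_rates(deltas, interval=SAMPLE_INTERVAL_SECONDS):
    """Convert 60-second delta counts (e.g. received_bytes) to per-second rates."""
    return [d / interval for d in deltas]

# Hypothetical received_bytes deltas for three consecutive minutes.
received_bytes = [6_000_000, 12_000_000, 3_000_000]
print(deltas_to_rates(received_bytes))  # [100000.0, 200000.0, 50000.0] bytes/sec
```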

Connector data

| Metric | Unit | Description |
| --- | --- | --- |
| sent_records | Count | The delta count of records sent from the transformations and written to Kafka for the source connector. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds. |
| connector_status | Bit | The status of a connector within the system. Its value is always set to 1, signifying the connector's presence. The connector's current operational state is identified through the metric.status tag. |
| connector_task_status | Bit | The status of a connector's task within the system. Its value is always set to 1, signifying the task's presence. The task's current operational state is identified through the metric.status tag. |
| connector_task_batch_size_avg | Count | The average batch size (measured by record count) per minute. For a source connector, the average batch size sent to Kafka; for a sink connector, the average batch size read by the sink task. |
| connector_task_batch_size_max | Count | The maximum batch size (measured by record count) per minute. For a source connector, the maximum batch size sent to Kafka; for a sink connector, the maximum batch size read by the sink task. |
| received_records | Count | The delta count of records received by the sink connector. Each sample is the number of records received since the previous data point. The count is sampled every 60 seconds. |
| sent_bytes | Bytes | The delta count of bytes sent from the transformations and written to Kafka for the source connector. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. |
| received_bytes | Bytes | The delta count of total bytes received by the sink connector. Each sample is the number of bytes received since the previous data point. The count is sampled every 60 seconds. |
| dead_letter_queue_records | Count | The delta count of dead letter queue records written to Kafka for the sink connector. The count is sampled every 60 seconds. |

ksql data

| Metric | Unit | Description |
| --- | --- | --- |
| streaming_unit_count | Count | The count of Confluent Streaming Units (CSUs) for this ksqlDB instance. The count is sampled every 60 seconds. The implicit time aggregation for this metric is MAX. |
| query_saturation | Percent | The maximum saturation for a given ksqlDB query across all nodes. Returns a value between 0 and 1; a value close to 1 indicates that ksqlDB query processing is bottlenecked on available resources. |
| task_stored_bytes | Bytes | The size of a given task's state stores, in bytes. |
| storage_utilization | Percent | The total storage utilization for a given ksqlDB application. |
| consumed_total_bytes | Bytes | The delta count of bytes consumed from Kafka by continuous queries over the requested period. |
| produced_total_bytes | Bytes | The delta count of bytes produced to Kafka by continuous queries over the requested period. |
| offsets_processed_total | Count | The delta count of offsets processed for a given query, task, or topic. |
| committed_offset_lag | Count | The current lag, in offsets, between the committed offset and the end offset for a given query, task, or topic. |
| processing_errors_total | Count | The delta count of record processing errors for a query over the requested period. |
| query_restarts | Count | The delta count of failures that caused a query to restart over the requested period. |

Compute pool (Flink) data

| Metric | Unit | Description |
| --- | --- | --- |
| compute_pool_utilization.cfu_limit | Count | The possible maximum number of CFUs for the pool. |
| compute_pool_utilization.cfu_minutes_consumed | Count | The number of CFUs consumed since the last measurement. |
| compute_pool_utilization.current_cfus | Count | The absolute number of CFUs at a given moment. |
| current_input_watermark_milliseconds | Milliseconds | The last watermark this statement has received (in milliseconds) for the given table. |
| current_output_watermark_milliseconds | Milliseconds | The last watermark this statement has produced (in milliseconds) to the given table. |
| materialized_table_utilization.cfu_minutes_consumed | Count | The number of CFUs consumed since the last measurement. |
| materialized_table_utilization.current_cfus | Count | The absolute number of CFUs at a given moment. |
| max_input_lateness_milliseconds | Milliseconds | The maximum observed lateness across all records processed in the last minute. A record is considered late if it has a timestamp less than or equal to the current watermark. |
| num_late_records_in | Count | The total number of input records classified as late events. These are records whose timestamp is less than or equal to the current watermark. |
| num_records_in | Count | The total number of records this statement has received. |
| num_records_in_from_files | Count | The total number of records this statement has read from Tableflow files. |
| num_records_in_from_topics | Count | The total number of records this statement has read from Kafka topics. |
| num_records_out | Count | The total number of records this statement has emitted. |
| operator.state_size_bytes | Bytes | The size, in bytes, of this operator's state. |
| pending_records | Count | The total number of available records after the consumer offset in a Kafka partition, across all operators. |
| statement_status | Count | The status of a statement within the system. Its value is always set to 1, signifying the statement's presence. The statement's current operational state is identified through the metric.status tag. |
| statement_utilization.cfu_minutes_consumed | Count | The number of CFUs consumed since the last measurement. |
| statement_utilization.current_cfus | Count | The absolute number of CFUs at a given moment. |

The metric.status tag for statement_status takes one of the following values:

  • PENDING: The statement has been submitted and Flink is preparing to start running it.
  • RUNNING: Flink is actively running the statement.
  • COMPLETED: The statement has completed all of its work.
  • DELETING: The statement is being deleted.
  • FAILED: The statement has encountered an error and is no longer running.
  • DEGRADED: The statement appears unhealthy; for example, no transactions have been committed for a long time, or the statement has restarted frequently.
  • STOPPING: The statement is about to be stopped.
  • STOPPED: The statement has been stopped and is no longer running.

What's next

Data and UI

Learn how to use New Relic to monitor your Kafka clusters
