
Kafka Monitoring Guide: Key Metrics, Consumer Lag, and Alerting

2025-06-10 · Updated 2026-04-09 · 7 min read · Igor Bobriakov

Kafka monitoring is mostly about a handful of key metrics that tell you whether the cluster is healthy, lagging, saturated, or drifting toward failure. In practice, the most useful signals come from broker health, producer throughput and latency, consumer lag, and a short list of high-value alerts.

Operating a robust Apache Kafka cluster goes beyond initial setup; it demands continuous vigilance. Proactive monitoring is the cornerstone of maintaining a healthy, performant, and reliable Kafka deployment. Without it, you risk silent failures, performance degradation, and potential data loss. Understanding what to monitor, how to interpret key metrics, and which tools to employ are essential skills for any team managing business-critical Kafka infrastructure.

This guide provides a comprehensive overview of Kafka monitoring essentials. We’ll delve into the critical metrics for brokers, producers, and consumers, discuss effective alerting strategies, and explore popular monitoring tools and stacks. At ActiveWizards, we emphasize robust monitoring as a foundational element of the advanced AI and data engineering solutions we build, ensuring our clients’ Kafka clusters operate at peak efficiency and reliability.

Why Comprehensive Kafka Monitoring Matters

Effective monitoring provides numerous benefits:

  • Early Problem Detection: Identify issues like broker failures, high latency, or consumer lag before they impact end-users.
  • Performance Optimization: Understand bottlenecks and resource utilization to guide tuning efforts.
  • Capacity Planning: Track trends in message volume, disk usage, and resource consumption to forecast future needs.
  • Root Cause Analysis: Quickly diagnose problems by correlating metrics across different components.
  • Ensuring Data Integrity & Availability: Monitor replication status, under-replicated partitions, and overall cluster health.

Key Metric Categories for Kafka Monitoring

A holistic monitoring strategy covers metrics from multiple perspectives:

  1. Broker Metrics: Health and performance of individual Kafka brokers.
  2. Producer Metrics: Throughput, latency, and error rates for applications sending data.
  3. Consumer Metrics: Consumption rates, lag, and processing health for applications reading data.
  4. ZooKeeper Metrics: (If applicable, especially for older Kafka versions or specific management tasks) Health of the ZooKeeper ensemble.
  5. Operating System (OS) Metrics: CPU, memory, disk I/O, and network stats on broker and client machines.
  6. JVM Metrics: Garbage collection, heap usage, and thread states for Kafka brokers and Java-based clients.

Essential Broker Metrics

Brokers are the heart of Kafka. Monitoring their health is paramount.

  • UnderReplicatedPartitions: kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions. Any value > 0 indicates potential data loss risk and reduced fault tolerance. This is a critical alert.
  • OfflinePartitionsCount: kafka.controller:type=KafkaController,name=OfflinePartitionsCount. Indicates partitions without an active leader. Critical alert.
  • ActiveControllerCount: kafka.controller:type=KafkaController,name=ActiveControllerCount. Should sum to exactly 1 across the entire cluster. Alert if the value is 0 or greater than 1.
  • NetworkProcessorAvgIdlePercent: kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent. Low values (e.g., < 0.3) suggest network threads are a bottleneck.
  • RequestHandlerAvgIdlePercent: kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent. Low values suggest I/O threads (request handlers) are a bottleneck.
  • MessagesInPerSec: kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec. Overall message ingress rate.
  • BytesInPerSec / BytesOutPerSec: kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec (and BytesOutPerSec). Network traffic.
  • RequestQueueTimeMs (p99): kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request={Produce|FetchConsumer|FetchFollower}. Time requests spend in the queue. High values indicate saturation.
  • LogFlushRateAndTimeMs (p99): kafka.log:type=LogFlushStats,name=LogFlushRateAndTimeMs. Time spent flushing logs to disk. High values indicate disk I/O bottlenecks.
  • LeaderCount / PartitionCount: kafka.server:type=ReplicaManager,name=LeaderCount (and PartitionCount). Track leader distribution and total partitions per broker.
  • IsrShrinksPerSec / IsrExpandsPerSec: kafka.server:type=ReplicaManager,name=IsrShrinksPerSec (and IsrExpandsPerSec). High rates indicate ISR instability, possibly due to network issues or overloaded replicas.
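The three critical broker conditions above (under-replicated partitions, offline partitions, controller count) lend themselves to a simple evaluation loop. Here is a minimal sketch of that alert logic in Python, assuming the metric values have already been collected (e.g. via a JMX exporter); the `check_broker_health` function and the flat metric dict are illustrative, not a real Kafka API:

```python
def check_broker_health(metrics):
    """Return a list of critical alerts for a snapshot of broker metrics.

    `metrics` is a plain dict of already-collected JMX values whose keys
    mirror the MBean names discussed above.
    """
    alerts = []
    if metrics.get("UnderReplicatedPartitions", 0) > 0:
        alerts.append("CRITICAL: under-replicated partitions present")
    if metrics.get("OfflinePartitionsCount", 0) > 0:
        alerts.append("CRITICAL: partitions without an active leader")
    # ActiveControllerCount is summed across brokers; exactly one
    # controller must be active in the whole cluster.
    if metrics.get("ActiveControllerCount") != 1:
        alerts.append("CRITICAL: active controller count != 1")
    return alerts

healthy = {"UnderReplicatedPartitions": 0, "OfflinePartitionsCount": 0,
           "ActiveControllerCount": 1}
degraded = {"UnderReplicatedPartitions": 3, "OfflinePartitionsCount": 0,
            "ActiveControllerCount": 1}

print(check_broker_health(healthy))   # []
print(check_broker_health(degraded))
```

In production this check would run inside your alerting system (e.g. as Alertmanager rules), but the decision logic is exactly this simple: any non-zero value on the first two metrics is actionable.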

Essential Producer Metrics

Monitor producer behavior to ensure data is being published reliably and efficiently.

  • record-send-rate: kafka.producer:type=producer-metrics,client-id=([-.\w]+). Outgoing message rate.
  • record-error-rate: kafka.producer:type=producer-metrics,client-id=([-.\w]+). Rate of failed send attempts. Should be near zero.
  • request-latency-avg / request-latency-max: kafka.producer:type=producer-metrics,client-id=([-.\w]+). Average and max time for requests to be acknowledged. Monitor percentiles for a better picture.
  • batch-size-avg: kafka.producer:type=producer-metrics,client-id=([-.\w]+). Average batch size. Useful for tuning batch.size and linger.ms.
  • compression-rate-avg: kafka.producer:type=producer-metrics,client-id=([-.\w]+). Average compression ratio achieved.
  • buffer-available-bytes / buffer-total-bytes: kafka.producer:type=producer-metrics,client-id=([-.\w]+). Monitor buffer utilization to see if buffer.memory is sufficient.
  • record-queue-time-avg / record-queue-time-max: kafka.producer:type=producer-metrics,client-id=([-.\w]+). Time records spend in the producer’s internal queue.
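The buffer metrics above are most useful as a ratio: how much of `buffer.memory` is in use right now. A quick sketch of that calculation; the 80% warning threshold is an illustrative starting point, not a Kafka default:

```python
def buffer_utilization(available_bytes, total_bytes):
    """Fraction of the producer's buffer.memory currently in use,
    derived from buffer-available-bytes and buffer-total-bytes."""
    return 1.0 - (available_bytes / total_bytes)

# Example: a 32 MB buffer (the default buffer.memory) with 4 MB free.
util = buffer_utilization(available_bytes=4 * 1024 * 1024,
                          total_bytes=32 * 1024 * 1024)
print(f"buffer utilization: {util:.1%}")
if util > 0.8:  # illustrative threshold; tune to your workload
    print("WARNING: producer buffer nearly full; sends may start blocking")
```

A sustained high utilization means the producer is generating data faster than it can flush it to the brokers, and `send()` calls will eventually block for up to `max.block.ms`.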

Essential Consumer Metrics

Track consumer performance to ensure timely data processing and identify lag.

  • records-lag-max: kafka.consumer:type=consumer-fetch-manager-metrics,client-id=([-.\w]+),topic=([-.\w]+),partition=([0-9]+) (or aggregated per client/topic). Maximum lag in messages for any partition. This is a critical metric indicating how far behind a consumer is.
  • records-consumed-rate: kafka.consumer:type=consumer-fetch-manager-metrics,client-id=([-.\w]+). Rate at which records are consumed.
  • fetch-latency-avg / fetch-latency-max: kafka.consumer:type=consumer-fetch-manager-metrics,client-id=([-.\w]+). Time taken to fetch records from brokers.
  • bytes-consumed-rate: kafka.consumer:type=consumer-fetch-manager-metrics,client-id=([-.\w]+). Data consumption rate in bytes.
  • commit-latency-avg / commit-latency-max: kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+). Time taken to commit offsets. High values can indicate coordinator issues or slow commits.
  • assigned-partitions: kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+). Number of partitions assigned to this consumer. Useful for tracking rebalances.
  • join-rate / sync-rate: kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+). Rate of group join and sync operations. High rates indicate frequent rebalances (consumer group instability).

Expert Insight: Understanding Consumer Lag

Consumer lag is a critical indicator of processing capacity. If lag continuously grows, your consumers cannot keep up with the production rate. This could be due to insufficient consumer instances, slow processing logic within the consumer, or downstream bottlenecks. Alerting on high or persistently growing lag is essential.
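The key word is "persistently": a single lag spike during a rebalance is normal, while lag that rises at every successive sample means consumers are structurally behind. A minimal sketch of that distinction, assuming you already sample `records-lag-max` on a fixed interval (the function name and window size are illustrative):

```python
def lag_is_growing(samples, min_points=4):
    """True if records-lag-max increased at every successive sample.

    `samples` is a chronological list of lag readings (in messages).
    Persistent growth, not a single spike, is what should page someone.
    """
    if len(samples) < min_points:
        return False
    return all(later > earlier for earlier, later in zip(samples, samples[1:]))

print(lag_is_growing([120, 340, 900, 2500]))   # consumers falling behind
print(lag_is_growing([5000, 1200, 300, 40]))   # consumers catching up after a spike
```

A production rule would typically also require the absolute lag to exceed a floor (say, a few thousand messages) so that trivial fluctuations near zero never alert.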

OS and JVM Metrics

Kafka’s performance is intrinsically linked to its underlying system.

  • OS Metrics:
    • CPU Utilization (overall, per core, user/system/iowait)
    • Memory Usage (total, free, buffered, cached; pay attention to page cache!)
    • Disk I/O (IOPS, throughput, await times, queue depth for data disks)
    • Network I/O (bytes/packets in/out, errors, drops)
    • Open File Descriptors
  • JVM Metrics (for Brokers & Java Clients):
    • Heap Usage (used, committed, max)
    • Garbage Collection (GC) Counts & Durations (especially major GCs)
    • Thread Count & States
    • Class Loading Count

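Two JVM symptoms deserve special attention on brokers: heap pressure and long stop-the-world GC pauses, since a pause longer than the replica timeout can knock a broker out of its ISRs. A small illustrative check under assumed thresholds (85% heap, 1 s pause); both numbers should be tuned to your own heap size and `replica.lag.time.max.ms`:

```python
def jvm_warnings(heap_used_gb, heap_max_gb, major_gc_pauses_ms):
    """Flag the two JVM symptoms that most often precede broker trouble."""
    warnings = []
    if heap_used_gb / heap_max_gb > 0.85:
        warnings.append("heap > 85% of max: risk of promotion failures and full GCs")
    if any(pause > 1000 for pause in major_gc_pauses_ms):
        warnings.append("major GC pause over 1s: broker may fall out of ISRs")
    return warnings

print(jvm_warnings(heap_used_gb=5.5, heap_max_gb=6.0,
                   major_gc_pauses_ms=[220, 1450]))
```

Note that Kafka deliberately keeps broker heaps modest and leans on the OS page cache for log data, which is why page cache usage appears in the OS metrics list above.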
Monitoring Tools and Stacks

Several tools and combinations can be used for Kafka monitoring:

| Tool/Stack | Description | Pros | Cons |
| --- | --- | --- | --- |
| Prometheus + Grafana | Popular open-source monitoring and visualization stack. Kafka JMX metrics exposed via JMX Exporter; Node Exporter for OS metrics. | Highly flexible, powerful querying (PromQL), great visualization, large community, many integrations. | Requires setup and configuration of exporters, the Prometheus server, and Grafana. |
| Datadog, Dynatrace, New Relic | Commercial APM and infrastructure monitoring solutions with Kafka integrations. | Comprehensive, often easier setup, unified platform, AI-assisted insights. | Can be expensive; potential vendor lock-in. |
| Confluent Control Center | Web-based tool for managing and monitoring Apache Kafka clusters (part of Confluent Platform). | Kafka-specific insights, stream monitoring, management features. | Primarily for Confluent Platform users; licensing costs for advanced features. |
| CMAK (Cluster Manager for Apache Kafka) | Open-source tool for managing Kafka clusters; includes some monitoring capabilities. Formerly Yahoo's Kafka Manager. | Good for cluster management, basic monitoring. | Less focused on deep metrics analysis and alerting than dedicated monitoring systems. |
| Custom JMX tools / scripts | Tools like JConsole, VisualVM, or custom scripts that poll JMX MBeans directly. | Fine-grained access to metrics; no external dependencies for basic checks. | Not scalable for production monitoring; no historical data, no alerting. |
The Prometheus + Grafana stack is a very common and powerful open-source choice for comprehensive Kafka monitoring, often augmented by tools like Alertmanager for alerting.
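With the Prometheus + Grafana stack, the critical broker checks become PromQL expressions evaluated by Alertmanager rules. The sketch below collects them as plain strings; the exact metric names depend entirely on your JMX Exporter relabelling config, so treat names like `kafka_server_replicamanager_underreplicatedpartitions` as assumptions to adjust for your deployment:

```python
# PromQL expressions for the critical broker alerts. The metric names are
# illustrative; your JMX Exporter rules determine the real ones.
ALERT_EXPRS = {
    "UnderReplicatedPartitions":
        "sum(kafka_server_replicamanager_underreplicatedpartitions) > 0",
    "OfflinePartitions":
        "sum(kafka_controller_kafkacontroller_offlinepartitionscount) > 0",
    "ControllerCount":
        "sum(kafka_controller_kafkacontroller_activecontrollercount) != 1",
}

for name, expr in ALERT_EXPRS.items():
    print(f"{name}: {expr}")
```

Summing across brokers is deliberate: under-replicated partitions matter cluster-wide, and the controller count check only makes sense as a cluster-level aggregate.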

Effective Alerting Strategies

Metrics are useful, but automated alerting is crucial for proactive operations.

  • Alert on Critical Broker Health: UnderReplicatedPartitions > 0, OfflinePartitionsCount > 0, ActiveControllerCount != 1. These require immediate attention.
  • Alert on Resource Saturation: High CPU/Memory/Disk I/O utilization, low NetworkProcessorAvgIdlePercent.
  • Alert on High Latency: Producer request latency (p99), consumer fetch latency (p99), end-to-end latency if measurable.
  • Alert on Growing Consumer Lag: records-lag-max consistently increasing or exceeding a defined threshold.
  • Alert on Producer/Consumer Error Rates: Any significant increase in record-error-rate (producer) or processing errors (consumer).
  • Alert on Rebalance Storms: High consumer join-rate or sync-rate.
  • Use Thresholds and Rate of Change: Alert if a metric exceeds a static threshold (e.g., disk usage > 85%) or if it changes too rapidly (e.g., message rate drops unexpectedly).
  • Tiered Alerting: Differentiate between warning (investigate soon) and critical (wake someone up) alerts.
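The last two strategies, thresholds plus rate of change and tiered severities, combine naturally into one evaluation function. A sketch under assumed thresholds (all numbers here are illustrative; the article's own advice applies: tune them against historical data):

```python
def evaluate(value, warn_at, crit_at, previous=None, max_rate=None):
    """Combine static thresholds with a rate-of-change check.

    Returns "critical" (wake someone up), "warning" (investigate soon),
    or "ok". Thresholds are illustrative and workload-specific.
    """
    if value >= crit_at:
        return "critical"
    if value >= warn_at:
        return "warning"
    # Rate of change: a sudden jump can warrant a warning even below the
    # static thresholds (e.g. disk usage climbing fast between scrapes).
    if previous is not None and max_rate is not None and value - previous > max_rate:
        return "warning"
    return "ok"

# Disk usage at 70% is fine on its own...
print(evaluate(70, warn_at=85, crit_at=95))
# ...but not if it jumped 20 points since the last scrape.
print(evaluate(70, warn_at=85, crit_at=95, previous=50, max_rate=10))
print(evaluate(96, warn_at=85, crit_at=95))
```

The same shape works for lag, latency percentiles, and error rates; only the thresholds and the scrape interval change.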

Expert Insight: Avoid Alert Fatigue

Be selective with your alerts. Alerting on too many non-actionable metrics leads to alert fatigue, where important alerts get ignored. Focus on symptoms of problems (e.g., high lag, errors) rather than every possible cause. Fine-tune thresholds based on historical data and business impact.

Conclusion: Vigilance for a Resilient Kafka

Comprehensive monitoring and intelligent alerting are non-negotiable for maintaining a healthy, high-performing Apache Kafka cluster. By diligently tracking key metrics across brokers, clients, and the underlying systems, and by setting up meaningful alerts, you can proactively identify and address issues, optimize performance, and ensure your Kafka deployment remains a reliable backbone for your data-driven applications.

Implementing a robust monitoring solution can be complex. ActiveWizards has deep expertise in designing and deploying monitoring strategies for Kafka, helping organizations gain the visibility they need for operational excellence.

Ensure Your Kafka Cluster’s Health with ActiveWizards

Don’t let your Kafka cluster run in the dark. ActiveWizards provides expert services in setting up comprehensive Kafka monitoring, dashboards, and alerting systems tailored to your specific needs. Gain the insights you need for a resilient and performant Kafka deployment.


About the author

Igor Bobriakov

AI Architect. Author of Production-Ready AI Agents. 15 years deploying production AI platforms and agentic systems for enterprise clients and deep-tech startups.