Prometheus Apr 2026

For the last decade, one answer has consistently risen above the noise: Prometheus.

Prometheus is designed for reliability, not long-term storage. By default, data is stored locally on disk. If your server crashes, your historical data is gone. "Highly available" setups usually involve running two identical Prometheus servers (both scraping the same targets) and a separate Alertmanager to deduplicate alerts. This is wasteful and fragile.
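The classic HA pattern described above amounts to deploying the same configuration to two servers and pointing both at a shared Alertmanager. A minimal sketch (hostnames and job names here are placeholders, not from the original article):

```yaml
# prometheus.yml -- deployed identically to BOTH servers (prom-a, prom-b).
# Each server scrapes the same targets independently; they share no data.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "app"
    static_configs:
      - targets: ["app-1.example.com:9100", "app-2.example.com:9100"]

alerting:
  alertmanagers:
    - static_configs:
        # Both servers fire the same alerts at one Alertmanager,
        # which deduplicates them by identical label set.
        - targets: ["alertmanager.example.com:9093"]
```

Because the two servers scrape independently, their stored samples drift slightly, and a query against one may not match the other; that is the fragility the article refers to.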

The current best practice is to use the OpenTelemetry Collector to receive metrics (via OTLP), transform them, and then forward them to a Prometheus-compatible backend. The war is over: Prometheus won the query language and exposition format, but OpenTelemetry is winning the instrumentation and collection layer. If you are running Kubernetes, you are almost certainly using Prometheus, whether you know it or not (via embedded solutions like Red Hat OpenShift or Rancher).
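The OTLP-in, Prometheus-out pipeline can be sketched as a Collector configuration like the following. This is a minimal illustration, not a production config; the backend URL is a placeholder, and the `prometheusremotewrite` exporter ships in the collector-contrib distribution:

```yaml
# otel-collector.yaml -- accept OTLP metrics, forward via remote write
# to any Prometheus-compatible backend.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  prometheusremotewrite:
    endpoint: "http://prometheus.example.com/api/v1/write"  # placeholder

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
```

Applications instrument once with OpenTelemetry SDKs and emit OTLP; the Collector handles the translation into Prometheus's data model.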

Prometheus is for metrics (numbers over time). It cannot replace logging (e.g. the ELK Stack) or tracing (e.g. Jaeger); together these make up the "Three Pillars of Observability." If a user reports a slow checkout process, Prometheus can tell you that latency spiked, but it cannot tell you which specific database query caused it. You need integration with other tools.
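One common bridge between the metrics and tracing pillars is the OpenMetrics exemplar: a trace ID attached to a histogram sample, so a latency spike on a dashboard can link to the trace that caused it. A minimal sketch of the text format (the metric name and trace ID are invented for illustration):

```python
def bucket_line(name, le, count, trace_id=None, value=None):
    """Render one OpenMetrics histogram bucket line, optionally carrying
    a trace-ID exemplar that links the metric sample to a distributed trace."""
    line = f'{name}_bucket{{le="{le}"}} {count}'
    if trace_id is not None:
        # OpenMetrics exemplar syntax: "# {labels} value" after the sample.
        line += f' # {{trace_id="{trace_id}"}} {value}'
    return line

print(bucket_line("checkout_latency_seconds", "0.5", 3,
                  trace_id="abc123", value=0.42))
# checkout_latency_seconds_bucket{le="0.5"} 3 # {trace_id="abc123"} 0.42
```

A backend that understands exemplars (and a tracing system holding trace `abc123`) can then jump straight from the spike to the offending request, which is exactly the cross-tool integration the paragraph above calls for.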

In the sprawling, dynamic world of cloud-native computing, asking "Is the server up?" feels as archaic as asking for a fax number. The question has evolved into something far more complex: "How is my system behaving, where are its bottlenecks, and what will break next?"
