Metric Cardinality

Understand how metric attributes multiply to create timeseries, why this matters for costs, and how to identify high-cardinality contributors.

Overview

Cardinality is the count of unique timeseries for a given metric. Understanding cardinality is essential because:

  • Most observability platforms bill by timeseries count, not metric name count
  • High cardinality consumes more memory and slows queries
  • Uncontrolled cardinality growth can overwhelm downstream systems

Metric vs timeseries

Before discussing cardinality, understand the difference between a metric and a timeseries:

| Concept | Definition | Example |
| --- | --- | --- |
| Metric | A named measurement definition | http.server.request.duration |
| Timeseries | A metric name + specific attribute values | http.server.request.duration{service="api", endpoint="/users", status="200"} |

A single metric definition can generate thousands of timeseries depending on its attributes. When observability platforms refer to “custom metrics” in their billing, they typically mean timeseries, not metric names.

The cardinality formula

Cardinality is calculated as the Cartesian product of all possible attribute value combinations:

Timeseries Count = values(attr1) × values(attr2) × values(attr3) × ...

Each new attribute does not add to cardinality; it multiplies it.

Example calculation

Consider a single metric http.server.request.duration with these attributes:

| Attribute | Unique Values | Examples |
| --- | --- | --- |
| service | 10 | api, web, auth, … |
| status_code | 5 | 200, 201, 400, 404, 500 |
| region | 3 | us-west, us-east, eu |
| endpoint | 50 | /api/v1/users, /api/v1/orders, … |

Cardinality calculation:

10 × 5 × 3 × 50 = 7,500 timeseries

From a single metric definition, you generate 7,500 billable timeseries.
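The calculation above can be sketched directly from the formula. This is a minimal example using the attribute counts from the table; the dictionary name is illustrative, not part of any API:

```python
from math import prod

# Unique value counts per attribute, taken from the example table
attribute_cardinalities = {
    "service": 10,
    "status_code": 5,
    "region": 3,
    "endpoint": 50,
}

# Cardinality is the Cartesian product of per-attribute value counts
timeseries_count = prod(attribute_cardinalities.values())
print(timeseries_count)  # 7500
```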

The multiplier effect

Adding a new attribute multiplies existing cardinality:

| Change | Before | After |
| --- | --- | --- |
| Base metric | – | 7,500 |
| Add method (4 values: GET, POST, PUT, DELETE) | 7,500 | 30,000 |
| Add pod_id (100 values) | 30,000 | 3,000,000 |

A seemingly innocuous attribute like pod_id can increase cardinality by 100x.

Histogram multiplier

Histogram metrics multiply cardinality further. Each histogram generates multiple sub-metrics:

| Sub-metric | Description |
| --- | --- |
| _count | Number of observations |
| _sum | Sum of all observed values |
| _min | Minimum observed value |
| _max | Maximum observed value |
| _bucket | Count per bucket boundary |

Each sub-metric carries the full attribute set. Counting _bucket as a single sub-metric gives the minimum multiplier:

7,500 base cardinality × 5 histogram sub-metrics = 37,500 timeseries

In practice, _bucket produces one timeseries per bucket boundary, so with default bucket configurations histogram metrics can be 5x or more expensive than simple counters or gauges.
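The fan-out can be estimated with a small helper. This is a sketch, not a platform API; the function name and the default of counting _bucket once are assumptions:

```python
# Sub-metrics generated per histogram: _count, _sum, _min, _max, _bucket.
# _bucket is counted once here; multiply by the number of bucket
# boundaries instead for a per-boundary estimate.
HISTOGRAM_SUB_METRICS = 5

def histogram_cardinality(base_cardinality: int) -> int:
    """Lower-bound timeseries estimate for a histogram metric."""
    return base_cardinality * HISTOGRAM_SUB_METRICS

print(histogram_cardinality(7_500))  # 37500
```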

Identifying high-cardinality contributors

To control cardinality, identify which attributes contribute the most unique values.

Common high-cardinality culprits

| Attribute | Risk | Alternative |
| --- | --- | --- |
| request_id | Unique per request | Remove; use traces instead |
| user_id | Unique per user | Remove or hash to buckets |
| pod_id | Unique per pod | Aggregate to service level |
| container_id | Unique per container | Aggregate to pod or service |
| url.path | Unique per request path | Normalize: /users/123 → /users/{id} |
| timestamp | Never use as an attribute | Time is already part of the timeseries definition |
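Path normalization like the url.path row above can be done with simple pattern matching. This is a minimal sketch; the placeholder names ({id}, {uuid}) and the regular expression are illustrative assumptions, not a standard:

```python
import re

# Hypothetical normalizer: collapse numeric IDs and UUID-like path
# segments into fixed placeholders so url.path stays low-cardinality.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
    re.IGNORECASE,
)

def normalize_path(path: str) -> str:
    segments = []
    for seg in path.split("/"):
        if seg.isdigit():
            segments.append("{id}")
        elif UUID_RE.match(seg):
            segments.append("{uuid}")
        else:
            segments.append(seg)
    return "/".join(segments)

print(normalize_path("/users/123/orders/456"))  # /users/{id}/orders/{id}
```

With this in place, every request to a distinct user produces the same timeseries instead of a new one.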

The fingerprint method

Create a unique identifier for each timeseries to count distinct combinations:

fingerprint = metric_name + sorted(key1=value1, key2=value2, ...)

Example fingerprints:

http.request.duration|endpoint=/api/v1/users,region=us-west,service=api,status=200
http.request.duration|endpoint=/api/v1/orders,region=us-west,service=api,status=200
http.request.duration|endpoint=/api/v1/users,region=us-east,service=api,status=200

Count distinct fingerprints to measure cardinality. Group by attribute to identify which ones contribute the most unique values.
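The fingerprint method sketched above can be implemented in a few lines. Sorting the attribute keys is the important detail: it ensures two datapoints with the same attributes in different order map to the same timeseries. The function and variable names are illustrative:

```python
def fingerprint(metric_name: str, attributes: dict) -> str:
    # Sort keys so attribute order never yields two fingerprints
    # for the same timeseries.
    attrs = ",".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return f"{metric_name}|{attrs}"

seen = set()
datapoints = [
    ("http.request.duration",
     {"service": "api", "status": "200", "region": "us-west", "endpoint": "/api/v1/users"}),
    # Same attributes, different order: same timeseries
    ("http.request.duration",
     {"endpoint": "/api/v1/users", "region": "us-west", "service": "api", "status": "200"}),
    # One attribute differs: a new timeseries
    ("http.request.duration",
     {"service": "api", "status": "200", "region": "us-east", "endpoint": "/api/v1/users"}),
]
for name, attrs in datapoints:
    seen.add(fingerprint(name, attrs))

print(len(seen))  # 2
```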

Cardinality impact

| Impact Area | Effect |
| --- | --- |
| Cost | Direct billing impact: more timeseries means a higher bill |
| Memory | Aggregation processors hold state per timeseries |
| Query performance | More series means slower queries and dashboards |
| Storage | More series means more index entries and storage |
| Ingestion limits | Many platforms rate-limit by timeseries count |

Best practices

Design metrics with cardinality in mind

When instrumenting applications:

  • Limit attributes to those you query by
  • Use bounded value sets (status codes, not request IDs)
  • Normalize dynamic values (URL paths, error messages)

Monitor cardinality

Track cardinality as a metric itself:

  • Count distinct fingerprints per metric name
  • Alert on sudden cardinality increases
  • Review top contributors regularly
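The monitoring steps above can be sketched as a running counter of distinct attribute combinations per metric name. This is an in-memory illustration under assumed names, not a production tracker:

```python
from collections import defaultdict

# Distinct attribute combinations observed per metric name
series_by_metric: dict[str, set] = defaultdict(set)

def observe(metric: str, attributes: dict) -> None:
    # A sorted tuple of items is a hashable stand-in for a fingerprint
    series_by_metric[metric].add(tuple(sorted(attributes.items())))

def cardinality(metric: str) -> int:
    return len(series_by_metric[metric])

observe("http.request.duration", {"service": "api", "status": "200"})
observe("http.request.duration", {"service": "api", "status": "500"})
observe("http.request.duration", {"service": "web", "status": "200"})
print(cardinality("http.request.duration"))  # 3
```

Sampling this count periodically and alerting when it jumps between samples catches cardinality explosions before the bill does.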

Reduce cardinality proactively

Before metrics reach expensive destinations:

  • Drop high-cardinality attributes
  • Normalize URL paths and similar dynamic values
  • Use group_by in aggregation to control output dimensions
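The group_by idea can be sketched as a re-aggregation pass over counter datapoints. This is a simplified illustration, not a specific processor's API; the function name and datapoint shape are assumptions:

```python
from collections import defaultdict

def aggregate(datapoints, group_by):
    """Sum counter values onto a reduced attribute set.

    Attribute keys not listed in group_by (pod_id below) are dropped,
    collapsing their timeseries into one output series per group.
    """
    totals = defaultdict(float)
    for attributes, value in datapoints:
        key = tuple(sorted((k, v) for k, v in attributes.items() if k in group_by))
        totals[key] += value
    return dict(totals)

points = [
    ({"service": "api", "pod_id": "pod-1"}, 10),
    ({"service": "api", "pod_id": "pod-2"}, 5),
    ({"service": "web", "pod_id": "pod-3"}, 7),
]
result = aggregate(points, group_by={"service"})
print(result)  # {(('service', 'api'),): 15.0, (('service', 'web'),): 7.0}
```

Three input timeseries become two: the per-pod dimension disappears and the values merge at the service level.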

See Reduce Metric Cardinality for implementation strategies.

See also