Friday, 31 October 2025

Elasticsearch Indices




An Elasticsearch index is a logical namespace that stores and organizes a collection of related JSON documents, similar to a database table in relational databases but designed for full-text search and analytics. 

Each index is uniquely named and can contain any number of documents, where each document is a set of key-value pairs (fields) representing your data.

Key Features of an Elasticsearch Index


  • Structure: An index consists of one or more shards, which are distributed across nodes in the Elasticsearch cluster for scalability and resilience.
  • Mapping and Search: Indices define mappings that control how document fields are stored and searched.
  • Indexing Process: Data is ingested and stored as JSON documents in the index, and Elasticsearch builds an inverted index to allow for fast searches.
  • Use Case: Indices are used to organize datasets in log analysis, search applications, analytics, or any scenario where rapid search/retrieval is needed.

In summary, an Elasticsearch index is the foundational storage and retrieval structure enabling efficient search and analytics on large datasets.
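
As a minimal illustration, here is how an index with an explicit mapping can be created, populated, and searched (the products index and its fields are hypothetical examples):

PUT /products
{
  "mappings": {
    "properties": {
      "name":  { "type": "text" },
      "price": { "type": "float" }
    }
  }
}

POST /products/_doc
{
  "name": "wireless mouse",
  "price": 19.99
}

GET /products/_search
{
  "query": { "match": { "name": "mouse" } }
}

The match query works on the inverted index that Elasticsearch builds from the text field, which is what makes full-text search fast.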


Index Lifecycle Management (ILM) Policy


An Index Lifecycle Management (ILM) policy defines, automatically, what happens to an index as it ages. It is a set of rules for retention, rollover, shrink, freeze, and delete.

Example:

PUT _ilm/policy/functionbeat
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "30d", "max_size": "50GB" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}


This says:
  • Keep the index hot (actively written to) until it’s 30 days old or 50 GB in size.
  • Then roll over (create a new index and switch writes to it).
  • After 90 days, delete the old index.

ILM can be applied to a standard (non–data stream) index. We can attach an ILM policy to any index, not just data streams. However, there’s a big difference:

  • Rollover alias required:
    • Standard index: Yes. We must manually set up a write alias for rollover to work.
    • Data stream: No. Elasticsearch manages the alias and the backing indices automatically.
  • Multiple backing indices:
    • Standard index: Optional (via rollover)
    • Data stream: Always (that is how data streams work)
  • Simplified management:
    • Standard index: Manual setup
    • Data stream: Built-in

Index Rollover vs Data Stream


If we have a continuous stream of documents (e.g. logs) being written to Elasticsearch, we should not write them to a regular index, as its size will grow over time and we'll need to keep increasing node storage. Instead, we should consider one of the following options:

  1. A data stream
  2. An index with an ILM policy that defines rollover conditions
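
For option 1, a data stream is created by defining an index template that contains a data_stream object; the first document written to a matching name then creates the stream and its first backing index. A minimal sketch (the logs-app template and logs-app-default stream names are examples; recent Elasticsearch versions ship built-in templates for logs-*-*, hence the explicit priority here):

PUT _index_template/logs-app
{
  "index_patterns": ["logs-app-*"],
  "data_stream": {},
  "priority": 200,
  "template": {
    "settings": { "index.lifecycle.name": "functionbeat" }
  }
}

POST logs-app-default/_doc
{
  "@timestamp": "2025-10-31T12:00:00Z",
  "message": "example log line"
}

Note that every document written to a data stream must contain an @timestamp field.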

What does rollover mean for a standard index?

When a rollover is triggered (by size, age, or doc count):

  • Elasticsearch creates a new index (e.g. functionbeat-000002).
  • The write alias (e.g. functionbeat-write) is moved from the old index to the new one.
  • Functionbeat or Logstash continues writing to the same alias, unaware that a rollover happened.


Example:

# Initially
functionbeat-000001  (write alias: functionbeat-write)

# After rollover
functionbeat-000001  (read-only)
functionbeat-000002  (write alias: functionbeat-write)
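
We can check which index currently holds the write alias with the cat aliases API (assuming the functionbeat-write alias from this example):

GET _cat/aliases/functionbeat-write?v

The output includes an is_write_index column showing which backing index receives new documents.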


This keeps the write flow continuous and allows you to:
  • Manage old data (delete, freeze, move to cold tier)
  • Limit index size for performance

How to apply ILM to a standard index?

Here’s a minimal configuration:

PUT _ilm/policy/functionbeat
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "30d", "max_size": "50GB" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}

PUT _index_template/functionbeat
{
  "index_patterns": ["functionbeat-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "functionbeat",
      "index.lifecycle.rollover_alias": "functionbeat-write"
    }
  }
}


The following command creates a new index called functionbeat-000001 (if it doesn’t already exist; if it does, only the aliases section is updated) and creates an alias named functionbeat-write that points to it. Aliases are like virtual index names: we can send reads or writes to the alias instead of a specific index, and they are lightweight and flexible.

"is_write_index": true tells Elasticsearch: “When someone writes to this alias, route the write operations to this index.” If we later have functionbeat-000001 and functionbeat-000002 both sharing the alias functionbeat-write, only the one with "is_write_index": true will receive new documents.

PUT functionbeat-000001
{
  "aliases": {
    "functionbeat-write": { "is_write_index": true }
  }
}


ILM rollover works by:
  • Watching the alias (functionbeat-write), not a specific index.
  • When rollover conditions are met (e.g. 50 GB or 30 days), Elasticsearch:
    • Creates a new index (functionbeat-000002)
    • Moves "is_write_index": true from 000001 to 000002, so from that moment all new Functionbeat writes go to the new index automatically.

After rollover:
  • functionbeat-000001 becomes read-only, but still searchable.
  • ILM will later delete it when it ages out (based on your policy).

So that last command effectively bootstraps the first generation of an ILM-managed index family.
  • ILM policy: Automates rollover, delete, etc.
  • Rollover action: Creates a new index and shifts the alias
  • Alias requirement: Required, used for write continuity
  • Data stream alternative: Better option, handles rollover and aliasing for you
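
Once this is set up, we can inspect which lifecycle step each index is in, or force a rollover to test the setup (rolling over without conditions triggers it unconditionally):

GET functionbeat-*/_ilm/explain

POST functionbeat-write/_rollover

The _ilm/explain output shows the current phase, action, and step for every managed index, which is the first place to look when rollover does not happen as expected.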

Index Template

Index templates do not retroactively apply to existing indices. They only apply automatically to new indices created after the template exists.

When we define an index template like:

PUT _index_template/functionbeat
{
  "index_patterns": ["functionbeat-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "functionbeat"
    }
  }
}


That template becomes part of the index creation logic.

So:

When a new index is created (manually or via rollover),
→ Elasticsearch checks all templates matching the name.
→ The matching template(s) are merged into the new index settings.

Existing indices are not touched or updated.

If we already have an index — e.g. functionbeat-8.7.1 — that matches the template pattern, it won’t automatically get the template settings.

We need to apply those manually, for example:

PUT functionbeat-8.7.1/_settings
{
  "index.lifecycle.name": "functionbeat",
  "index.lifecycle.rollover_alias": "functionbeat-write"
}

Now the existing index is under ILM control (using the same settings the template would have applied if it were created fresh).

Elasticsearch treats index templates as blueprints for new indices, not as live configurations.
This is intentional — applying settings automatically to existing indices could cause:
  • unintended allocation moves,
  • mapping conflicts,
  • or lifecycle phase resets.
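
To preview the settings a new index would receive from matching templates, without actually creating it, Elasticsearch provides a simulate API (available since 7.9; the index name here is just an example):

POST _index_template/_simulate_index/functionbeat-000002

The response shows the merged settings, mappings, and aliases, plus any overlapping lower-priority templates that were ignored.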

We want to keep as little data as possible in Elasticsearch. If the stored data is logs, we want to:
  • make sure apps are sending only meaningful logs
  • make sure we capture repetitive error messages so the app can be fixed and stop emitting them

Shards and Replicas


We can set the number of shards and replicas per index in Elasticsearch when we create the index, and we can dynamically update the number of replicas (but not the number of primary shards) for existing indices.

Setting Shards and Replicas on Index Creation


Specify the desired number in the index settings payload:


PUT /indexName
{
  "settings": {
    "index": {
      "number_of_shards": 6,
      "number_of_replicas": 2
    }
  }
}

This creates the index with 6 primary shards and 2 replicas per primary shard.

Adjusting Replicas After Creation


You can adjust the number of replicas for an existing index using the settings API:


PUT /indexName/_settings
{
  "index": {
    "number_of_replicas": 3
  }
}

Replicas can be changed at any time, but the number of primary shards is fixed for the lifetime of the index.

Shard and Replica Principles


  • Each index has a configurable number of primary shards.
  • Each primary shard can have multiple replica shards (copies).
  • Replicas improve fault tolerance and can spread search load.

We should choose shard and replica counts based on data size, node count, and performance needs. Adjusting these settings impacts resource usage and indexing/search performance.
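
With the settings from the example above (6 primary shards, 2 replicas each), the index occupies 6 × (1 + 2) = 18 shards in total across the cluster. We can verify the primary and replica counts, and the index size, with:

GET /_cat/indices/indexName?v&h=index,pri,rep,docs.count,store.size

The pri and rep columns show the configured counts, and store.size includes replicas (pri.store.size can be added to see primaries only).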


Index Size


To find out the size of each shard of each index, we can use the following Kibana Dev Tools query:


GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason,node,store&s=store:desc

The output contains the following columns:
  • index - index name
  • shard - shard number. E.g. with 2 primary shards, each with 1 replica, we'd have 4 rows: shard=0 for the first primary and its replica, and shard=1 for the second pair
  • prirep - whether the shard is a primary (p) or a replica (r)
  • state - e.g. STARTED
  • unassigned.reason - the reason the shard is unassigned, if it is
  • node - name of the node hosting the shard
  • store - storage used (in gb, mb, or kb)


Each shard should not be larger than 50 GB. We can enforce this via an Index Lifecycle Policy by setting rollover criteria.
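
Newer Elasticsearch versions (7.13+) support a max_primary_shard_size rollover condition, which targets the per-shard limit directly instead of approximating it with the whole-index max_size:

PUT _ilm/policy/functionbeat
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      }
    }
  }
}

This keeps every primary shard at or under 50 GB regardless of how many primary shards the index has.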
