Mitratech Success Center

Elasticsearch Configuration and Best Practices

Nodes and Clustering

Each running instance of Elasticsearch is a node, and a collection of nodes forms a cluster. If the system runs a single instance of Elasticsearch, the cluster consists of only one node.

By default, every node in the cluster can handle the following types of traffic:

  • The transport layer is used exclusively for communication between nodes and the Java TransportClient.
  • The HTTP layer is used only by external REST clients.

Nodes are aware of other nodes in the cluster and can forward client requests to the appropriate node.
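As a sketch, the ports for these two traffic types can be set in elasticsearch.yml; the values shown are Elasticsearch's defaults:

```yaml
# elasticsearch.yml -- ports for the two traffic types (defaults shown)
transport.tcp.port: 9300   # transport layer: node-to-node and Java TransportClient
http.port: 9200            # HTTP layer: external REST clients
```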

  • The master node is the node that controls the state of the cluster. All nodes within the cluster report to the master node.
  • A master-eligible node (node.master: true) can be elected master if the current master node stops or encounters a problem.
  • A data node (node.data: true) holds data and performs data-related operations such as CRUD, search, and aggregations.
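As a sketch, these roles are assigned per node in elasticsearch.yml; the node names below are illustrative:

```yaml
# elasticsearch.yml on a dedicated master-eligible node (name is illustrative)
node.name: master-node-1
node.master: true
node.data: false

# elasticsearch.yml on a dedicated data node
# node.name: data-node-1
# node.master: false
# node.data: true
```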

For more detailed information on Elasticsearch clusters and nodes and how the global search indexing functions outside of TeamConnect, visit the Elasticsearch Reference for Elasticsearch 5.3.

Preventing Split Brain 

Mitratech recommends having at least 3 master-eligible nodes with discovery.zen.minimum_master_nodes set to 2. A general rule of thumb is to have this setting set to (number of master-eligible nodes / 2) + 1.
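The rule of thumb above is a simple quorum calculation. A minimal sketch:

```python
# Quorum rule of thumb for discovery.zen.minimum_master_nodes:
# (number of master-eligible nodes // 2) + 1
def minimum_master_nodes(master_eligible: int) -> int:
    """Return the minimum master-eligible nodes needed to elect a master."""
    return master_eligible // 2 + 1

print(minimum_master_nodes(3))  # -> 2, the recommended value for 3 master-eligible nodes
```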

Background: If a configuration has more than one master-eligible node, a condition called "split brain" can occur. For example, if a cluster has 2 master-eligible nodes and one of the nodes loses communication but does not crash, the isolated node no longer sees a master node, so it elects itself as master. When communication between the nodes is restored, the cluster has 2 master nodes. If data is sent to one node for indexing while search requests go to the other node, which lacks the recently indexed information, data can be corrupted.

To prevent this issue, Elasticsearch provides the discovery.zen.minimum_master_nodes setting, which defines the minimum number of master-eligible nodes that must be present for a new master to be elected. For example, in a configuration with 3 master-eligible nodes and discovery.zen.minimum_master_nodes=2, the cluster continues to run if one node loses its connection, because 2 master-eligible nodes are still available. The disconnected node will try to elect itself as master but cannot succeed, because it needs at least one other master-eligible node in order to become master.

Note: This setting cannot protect a cluster with only 2 master-eligible nodes. A value of 2 would mean that if one node goes down, the entire cluster is inoperable; a value of 1 does not protect against split brain.
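For example, in the recommended 3-node configuration, each master-eligible node would carry this setting in elasticsearch.yml:

```yaml
# elasticsearch.yml on each of the 3 master-eligible nodes
discovery.zen.minimum_master_nodes: 2
```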


Shards

A shard is a subdivision of an index which is in itself a fully-functional and independent "index" that can be hosted on a node in the cluster. Dividing an index into shards prevents creating an index with a large amount of data that exceeds the hardware limits of a single node. (For example, a single index of a billion documents taking up 1 TB of disk space might not fit on the disk of a single node or might be too slow to serve search requests from a single node alone.) The number of primary and replica shards can be configured in the Elasticsearch Configuration Properties.

An ideal maximum shard size is 40 - 50 GB. For example, if an index size is 500 GB, you would have at least 10 primary shards.
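The sizing arithmetic above can be sketched as a small calculation, using the 50 GB target from the guideline; the index sizes are illustrative:

```python
import math

# Primary shard count from index size and a target maximum shard size.
def primary_shard_count(index_size_gb: float, max_shard_size_gb: float = 50) -> int:
    """Return the minimum number of primary shards for the given index size."""
    return math.ceil(index_size_gb / max_shard_size_gb)

print(primary_shard_count(500))  # -> 10, matching the 500 GB example above
```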

One index should be spread across 3 nodes (ideally across 3 different servers) with 3 primary and 3 replica shards.
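As a sketch, such an index could be created through the REST API. Note that number_of_replicas counts replicas per primary shard, so one replica of each of 3 primaries yields 3 replica shards; the index name my_index is illustrative:

```
PUT /my_index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
```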


Primary Shards 

Each document is stored in a single primary shard. When you index a document, it is indexed first on the primary shard, then on all replicas of that primary shard. You can configure more or fewer primary shards to scale the number of documents that your index can handle, but you cannot change the number of primary shards once the index is created.

Replica Shards

A replica shard is a copy of the primary shard and has two purposes:

  • Increases failover: a replica shard can be promoted to a primary shard if the primary fails.
  • Increases performance: get and search requests can be handled by either primary or replica shards. By default, each primary shard has one replica, but the number of replicas can be changed dynamically on an existing index. A replica shard is never allocated on the same node as its primary shard.
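For example, the replica count of an existing index can be changed dynamically through the _settings API; the index name my_index is illustrative:

```
PUT /my_index/_settings
{
  "index": {
    "number_of_replicas": 2
  }
}
```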

Indexing and Performance 

Your Elasticsearch indexing time may vary significantly based on the objects and fields selected for indexing. For example, memo fields containing large volumes of text index significantly more slowly than numeric fields.

For performance reasons, determine which fields you plan to leverage for global search before your initial indexing. Limiting the indices to these fields results in quicker indexing times and reduces the memory required for indexing.

In an existing installation, you can remove unnecessary object indices or remove objects from Global Search.

Indexing documents stored in an external DMS may also slow performance. If a significant volume of documents is indexed from an external DMS, schedule indexing appropriately.

Allocated Memory/Java Heap Size 

Allocate up to 50% of your available memory to the Elasticsearch heap, and make sure that the owner of the process is allowed to use this limit. Reserve the other half for Lucene caching, which uses any free memory on the machine to load segments (inverted indices) for faster searching; keep this in mind when calculating Elasticsearch memory requirements.

Set the minimum and maximum java heap settings to the same value in the jvm.options file, typically located in .../elasticsearch/elasticsearch-5.3.0/config/jvm.options. 
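For example, on a machine with 16 GB of RAM (an illustrative size), half of the memory can be given to the heap by setting both values in jvm.options:

```
# jvm.options -- minimum and maximum heap set to the same value
-Xms8g
-Xmx8g
```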
