Technical terms
1. SolrCloud Terminology
One of the most confusing aspects of Solr terminology is the difference between collections, shards, replicas, cores, and config sets. These terms have specific meanings in the context of SolrCloud.
SolrCloud / Solr Cluster - A cluster of Solr servers that provides fault tolerance and high availability. The set of Solr nodes is managed as a single unit; the orchestration of the cluster and the synchronization between the individual nodes are handled by Zookeeper.
Node - A JVM instance running Solr.
Collection - A complete logical index in a SolrCloud cluster. It is associated with a config set and is made up of one or more shards. If the number of shards is more than one, it is a distributed index, but SolrCloud lets you refer to it by the collection name and not worry about the shards parameter that is normally required for DistributedSearch.
Configset - A set of config files necessary for a core to function properly. Each config set has a name. At minimum this will consist of solrconfig.xml (SolrConfigXml) and schema.xml (SchemaXml), but depending on the contents of those two files, may include other files. This is stored in Zookeeper. Config sets can be uploaded or updated using the upconfig command in the command-line utility or the bootstrap_confdir Solr startup parameter.
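For example, in recent Solr releases the bundled command-line utility can upload a config set to Zookeeper; the config set name, directory, and Zookeeper address below are illustrative:

    bin/solr zk upconfig -z localhost:2181 -n myconfig -d /path/to/configset/conf

Re-running the command with the same name updates the existing config set.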
Core - This is a running instance of a Lucene index along with all the Solr configuration. Multiple cores let you have a single Solr instance with separate configurations and indexes, with their own config and schema for very different applications, but still have the convenience of unified administration. Individual indexes are still fairly isolated, but you can manage them as a single application without ever restarting your Servlet Container.
In SolrCloud, the configuration of a core is stored in Zookeeper. With traditional (single-core) Solr, the core's config remains in the conf directory on disk.
Leader - The shard replica that has won the leader election. Elections can happen at any time, but normally they are only triggered by events like a Solr instance going down. When documents are indexed, SolrCloud will forward them to the leader of the shard, and the leader will distribute them to all the shard replicas.
Replica - One copy of a shard. Each replica exists within Solr as a core. A collection named test created with numShards=1 and replicationFactor set to two will have exactly two replicas, so there will be two cores, each on a different machine (or Solr instance). One will be named test_shard1_replica1 and the other will be named test_shard1_replica2. One of them will be elected to be the leader.
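A minimal sketch of creating and using the collection from the example above with SolrJ (Solr's Java client, 7.x/8.x-era API); the Zookeeper address, the config set name "myconfig", and the document fields are illustrative assumptions:

    import java.util.Collections;
    import java.util.Optional;

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;
    import org.apache.solr.common.SolrInputDocument;

    public class CreateTestCollection {
        public static void main(String[] args) throws Exception {
            // Connect through Zookeeper so SolrCloud can route requests to the right nodes.
            try (CloudSolrClient client = new CloudSolrClient.Builder(
                    Collections.singletonList("localhost:2181"), Optional.empty()).build()) {

                // The collection from the example above: one shard, replication factor two,
                // which yields the two replica cores described in this entry.
                CollectionAdminRequest
                        .createCollection("test", "myconfig", 1, 2)
                        .process(client);

                // Index a document; SolrCloud forwards it to the shard leader,
                // and the leader distributes it to the other replica.
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "1");
                client.add("test", doc);
                client.commit("test");
            }
        }
    }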
Shard - A logical piece (or slice) of a collection. Each shard is made up of one or more replicas. An election is held to determine which replica is the leader. This term is also in the General list below, but there it refers to Solr cores. The SolrCloud concept of a shard is a logical division.
Shards / Sharding - Sharding is a scaling technique in which a collection is split into multiple logical pieces called shards in order to scale the number of documents in a collection beyond what could physically fit on a single server. Incoming queries are distributed to every shard in the collection, and the per-shard results are merged into a single response.
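A brief SolrJ sketch of this behaviour (the collection name and query are illustrative, and it reuses a CloudSolrClient like the one built in the earlier example): the client addresses the collection by name only, and SolrCloud fans the query out to the shards and merges the results.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class ShardedQueryExample {
        // No shards parameter is needed: SolrCloud distributes the query to every
        // shard of the collection and merges the per-shard results before returning.
        static long countAllDocs(CloudSolrClient client) throws Exception {
            QueryResponse response = client.query("test", new SolrQuery("*:*"));
            return response.getResults().getNumFound();
        }
    }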
Replication - Another scaling technique is to increase the replication factor of your collection, which allows you to add servers with additional copies of your collection and handle higher concurrent query load by spreading requests across multiple machines.
Sharding and Replication are not mutually exclusive, and together make Solr an extremely powerful and scalable platform.
Replication Modes - Until Solr 7, the SolrCloud model for replicas was to allow any replica to become a leader when a leader is lost. This is highly effective for most users, providing reliable failover in case of issues in the cluster. However, it comes at a cost in large clusters because all replicas must be in sync at all times.
To provide additional flexibility, two new types of replicas have been added, named TLOG and PULL. These new types provide options to have replicas which only sync with the leader by copying index segments from the leader. The TLOG type has the additional benefit of maintaining a transaction log (the tlog of its name), which allows it to recover and become a leader if necessary; the PULL type does not maintain a transaction log, so it cannot become a leader.
As part of this change, the traditional type of replica is now named NRT. If you do not explicitly define a number of TLOG or PULL replicas, Solr defaults to creating NRT replicas. If this model is working for you, you will not have to change anything.
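A minimal SolrJ sketch of creating a collection with a mix of replica types; the collection and config set names are illustrative, and the createCollection overload taking NRT/TLOG/PULL counts is assumed to be available as in Solr 7 and later:

    import java.util.Collections;
    import java.util.Optional;

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class CreateMixedReplicaCollection {
        public static void main(String[] args) throws Exception {
            try (CloudSolrClient client = new CloudSolrClient.Builder(
                    Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
                // Two shards, each with one NRT, one TLOG, and one PULL replica.
                // Only NRT and TLOG replicas are eligible to become shard leader.
                CollectionAdminRequest
                        .createCollection("test_mixed", "myconfig", 2, 1, 1, 1)
                        .process(client);
            }
        }
    }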
2. ZooKeeper Cluster – Terminology
Zookeeper - SolrCloud requires Zookeeper, a standalone program that helps other programs keep a functional cluster running; among other things, it handles leader elections. Although Solr can run with an embedded Zookeeper, it is recommended to run Zookeeper standalone, installed separately from Solr, and as a redundant ensemble of at least three hosts. Zookeeper can run on the same hardware as Solr, and many users do run it that way.
Zookeeper ensemble and quorum - The ZooKeeper service is replicated over a set of hosts called an ensemble. A replicated group of servers in the same application is called a quorum, and all servers in the quorum have copies of the same configuration file; these QuorumPeers together form the ZooKeeper ensemble. Because ZooKeeper requires a majority of its servers to be running, it is recommended to use an odd number of machines. For example, an ensemble of five machines can tolerate the failure of two.
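A minimal zoo.cfg sketch for a three-host ensemble (hostnames, ports, and dataDir are illustrative; each host additionally needs a myid file in its dataDir containing its server number):

    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/lib/zookeeper
    clientPort=2181
    server.1=zk1.example.com:2888:3888
    server.2=zk2.example.com:2888:3888
    server.3=zk3.example.com:2888:3888

With three servers, the ensemble stays available as long as two of them (a majority) are running.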
Link: https://myjeeva.com/zookeeper-cluster-setup.html#performance-and-availability-considerations