Update the Tanzu GemFire Cluster

Apply Changes to the Tanzu GemFire Cluster

Once a Tanzu GemFire cluster has been created, you can alter it by applying an updated CRD. Applying the altered CRD causes a rolling update of the pods in the cluster: each locator and server is restarted, one at a time.

  1. Modify and save the updated YAML file that contains the cluster’s CRD configuration.
  2. Apply the changes with a command of the form:

    kubectl -n NAMESPACE-NAME apply -f CLUSTER-CRD-YAML
    

    where NAMESPACE-NAME is your chosen name for the Tanzu GemFire cluster namespace, and CLUSTER-CRD-YAML is the name of the file containing the YAML that represents the Tanzu GemFire cluster’s CRD.

    As a result of applying the changes, the Tanzu GemFire Operator will bring the state of the Tanzu GemFire cluster to that specified in the YAML file.
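As an illustration, an updated CRD file might look like the following sketch. The apiVersion, kind, and field layout shown here are assumptions based on a typical GemFireCluster resource and may differ in your installation; check your product's CRD reference for the exact schema:

```yaml
# Hypothetical GemFireCluster CRD file (cluster-crd.yaml); the apiVersion
# and field names are assumed and may differ in your installation.
apiVersion: gemfire.vmware.com/v1
kind: GemFireCluster
metadata:
  name: gemfire1
spec:
  locators:
    replicas: 2
  servers:
    replicas: 3
```

After editing a file like this, it would be applied with the `kubectl -n NAMESPACE-NAME apply -f` command shown above.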

Scale the Tanzu GemFire Cluster

Scale the Tanzu GemFire cluster by increasing or decreasing the quantity of locators or servers. Before decreasing the quantity of servers, ensure that the cluster will have sufficient resources to host the data.

Edit the YAML file that contains the cluster’s CRD configuration. Set the spec: locators: replicas value to the desired number of locators, and the spec: servers: replicas value to the desired number of servers, for the scaled Tanzu GemFire cluster.
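For example, scaling to three locators and five servers would change only the two replicas values in the CRD YAML. The fragment below is a sketch; the surrounding field names are assumed from a typical GemFireCluster spec:

```yaml
# Fragment of the cluster CRD YAML (field layout assumed);
# only the replicas values change when scaling.
spec:
  locators:
    replicas: 3
  servers:
    replicas: 5
```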

Apply the change as described in Apply Changes to the Tanzu GemFire Cluster.

To check the status of the Tanzu GemFire cluster:

kubectl -n NAMESPACE-NAME get GemFireClusters

where NAMESPACE-NAME is your chosen name for the Tanzu GemFire cluster namespace.

For example, if the NAMESPACE-NAME is gemfire-cluster:

$ kubectl -n gemfire-cluster get GemFireClusters
NAME       LOCATORS   SERVERS
gemfire1   2/2        2/3

In each column, the first number is how many replicas are running, and the second is how many replicas are specified. Scaling is complete when the running count matches the specified count for both locators and servers.
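When scripting against this output, the running/specified pair in each column can be parsed with shell parameter expansion. The scaling_complete helper below is a hypothetical illustration for such a check, not part of the Tanzu GemFire tooling:

```shell
# Hypothetical helper: takes a running/specified pair as printed in the
# LOCATORS or SERVERS column (for example "2/3") and succeeds only when
# all specified replicas are running.
scaling_complete() {
  running="${1%/*}"      # text before the slash: replicas running
  specified="${1#*/}"    # text after the slash: replicas specified
  [ "$running" -eq "$specified" ]
}

scaling_complete "2/2" && echo "locators: scaling complete"
scaling_complete "2/3" || echo "servers: still scaling"
```

In practice, you might pair a check like this with `kubectl -n NAMESPACE-NAME get GemFireClusters --watch` to observe the counts as they change.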