Advanced Experiments

Learn how to run a more complicated experiment


This example will deploy Elasticsearch and requires more resources than the quick start example, so you will need something larger than a typical minikube cluster. A four-node cluster with 32 total vCPUs (8 on each node) and 64GB total memory (16GB on each node) is generally sufficient.

Experiment Lifecycle

Creating a StormForge Optimize Pro experiment stores the experiment state in your cluster. When using the platform, the experiment definition is also synchronized to our API for access to the machine learning capabilities. No additional objects are created until trial assignments have been suggested (either manually or using our API, see next section on adding manual trials).

Once assignments have been suggested, a trial run will start generating workloads for your cluster. The creation of a trial object populated with assignments will initiate the following work:

  1. If the experiment contains setup tasks, a new job will be created for that work.
  2. The patches defined in the experiment are applied to the cluster.
  3. The status of all patched objects is monitored; the trial run will wait for them to stabilize.
  4. The trial job specified in the experiment is created (the default behavior simply executes a timed sleep).
  5. Upon completion of the trial job, metric values are collected.
  6. If the experiment contains setup tasks, another job will be created to clean up the state created by the initial setup task job.
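A trial object populated with assignments is what kicks off the steps above. The following is a rough sketch of what such a manifest could look like, using the assignment values shown later in this tutorial; the apiVersion and field layout are assumptions for illustration, not taken from this page:

```yaml
# Hypothetical trial manifest with suggested assignments.
# The apiVersion and exact field names are assumptions for illustration.
apiVersion: optimize.stormforge.dev/v1beta2
kind: Trial
metadata:
  generateName: elasticsearch-example-   # trials are typically named after the experiment
spec:
  assignments:            # one value per parameter defined in the experiment
    - name: memory
      value: 1500
    - name: cpu
      value: 750
    - name: replicas
      value: 3
    - name: heap_percent
      value: 50
```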

Tutorial Manifests

The manifests for this tutorial can be found in the elasticsearch directory of the examples repository.

This experiment will use StormForge Optimize Pro “setup tasks”. Setup tasks are a simplified way to apply bulk state changes to a cluster (i.e. installing and uninstalling an application or its components) before and after a trial run. Three manifests are required:

  1. A service account manifest. To use setup tasks, we will create a separate service account with the additional privileges necessary to make these modifications.
  2. The actual experiment object manifest. This includes the definition of the experiment itself (in terms of assignable parameters and observable metrics) as well as the instructions for carrying out the experiment (in terms of patches and metric queries). Feel free to edit the parameter ranges and change the experiment name to avoid conflicting with other experiments in the cluster.
  3. A ConfigMap containing the configuration for rally, which this experiment uses to test Elasticsearch.
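To make the structure of an experiment manifest concrete, here is a heavily abridged sketch. The apiVersion, parameter ranges, metric query, and patch target below are illustrative assumptions; consult the actual manifest in the examples repository for the real definition:

```yaml
# Abridged, hypothetical experiment manifest; all values are assumptions.
apiVersion: optimize.stormforge.dev/v1beta2   # assumed; check the examples repository
kind: Experiment
metadata:
  name: elasticsearch-example
spec:
  parameters:              # assignable parameters, each with an allowed range
    - name: memory
      min: 500
      max: 4000
    - name: replicas
      min: 1
      max: 5
  metrics:                 # observable metrics collected after each trial job
    - name: duration
      query: "{{duration .StartTime .CompletionTime}}"   # assumed query syntax
  patches:                 # how assignments are applied to cluster objects
    - targetRef:
        kind: StatefulSet
        name: elasticsearch                              # assumed target
      patch: |
        spec:
          replicas: {{ .Values.replicas }}               # assumed templating
```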

Running the Experiment

We’ll need to apply the manifests listed above for our experiment.

$ kubectl apply -f
serviceaccount/stormforge created

$ kubectl apply -f
configmap/rally-ini created

$ kubectl apply -f created

Verify all resources are present:

$ kubectl get experiment,sa,cm
NAME                                                      STATUS   Running

NAME                        SECRETS   AGE
serviceaccount/default      1         4h7m
serviceaccount/stormforge   1         36s

NAME                  DATA   AGE
configmap/rally-ini   1      23s

As soon as the experiment is created, StormForge machine learning will begin creating and running trials automatically. You can view trial status by searching for trial objects:

$ kubectl get trial -l
NAME                          STATUS       ASSIGNMENTS                                         VALUES
elasticsearch-example-kzzph   Setting up   memory=1500, cpu=750, replicas=3, heap_percent=50

Monitoring the Experiment

Both experiments and trials are created as custom Kubernetes objects. You can see a summary of the objects using kubectl get trials,experiments; on compatible clusters, trial objects will also display their parameter assignments and (upon completion) observed values.

The experiment objects themselves will not have their state modified over the course of a trial run: once created they represent generally static state.

Trial objects will undergo a number of state progressions over the course of a trial run. These progressions can be monitored by watching the “status” portion of the trial object (e.g. when viewing kubectl get trial <TRIAL NAME> -o yaml).
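For orientation, the status portion of a trial object might look something like the fragment below while a trial is being set up; the exact field and phase names here are assumptions, apart from the “Setting up” status and assignment values shown elsewhere on this page:

```yaml
# Hypothetical trial status fragment; field names are assumptions.
status:
  phase: Setting up    # progresses through later phases as the trial runs
  assignments: "memory=1500, cpu=750, replicas=3, heap_percent=50"
```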

The trial object will also own several (one to three) job objects depending on the experiment; those jobs will be labeled using the trial name (e.g. trial=<name>) and are typically named using the trial name as a prefix. The -create and -delete suffixes on job names indicate setup tasks (also labeled with role=trialSetup).
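Putting that naming and labeling convention together, the metadata of a setup job created for the trial shown earlier might look like this (the job name is illustrative, built from the trial name and the -create suffix described above):

```yaml
# Hypothetical setup job metadata, following the conventions described above.
metadata:
  name: elasticsearch-example-kzzph-create   # trial name prefix + -create suffix
  labels:
    trial: elasticsearch-example-kzzph       # labeled with the owning trial's name
    role: trialSetup                         # marks the job as a setup task
```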

Collecting Experiment Output

Once an experiment is underway and some trials have completed, you can get the trial results using kubectl:

$ kubectl get trials -l

Re-running the Experiment

Once a trial run is complete, a new trial will be generated automatically until the number of trials created reaches the configured experimentBudget.
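The experimentBudget is configured on the experiment itself. A minimal sketch of where it might sit in the experiment spec is shown below; the optimization field layout is an assumption, with only the experimentBudget name taken from this page:

```yaml
# Hypothetical placement of experimentBudget; layout is an assumption.
spec:
  optimization:
    - name: experimentBudget   # stop generating new trials after this many
      value: "40"              # illustrative value
```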

Last modified October 4, 2022