Configure by using annotations
Define default settings at the workload, namespace, and cluster levels by using annotations
Note: You can learn more about how Optimize Live generates recommendations in the Concepts topic.
Use the Optimize Live UI to configure optimization settings for one workload at a time, which can be helpful in testing workloads.
Use annotations to configure optimization settings at scale at the workload, namespace, and cluster levels.
This topic doesn’t discuss annotations in detail. If you plan to use annotations, take a few minutes to read the Configure by using annotations topic — it lists the supported annotations and will help you to understand where to place annotations to configure settings at the workload, namespace, and cluster levels.
Workloads whose settings are managed in any way by annotations become view-only in the UI. To edit the workload in the UI, you must remove the cluster-defaults ConfigMap, namespace-level annotations, and workload-level annotations that pertain to the workload.
By default, Optimize Live doesn’t apply (deploy) recommendations automatically. See the best practices below.
Best practices:
Check whether the StormForge Applier is installed. On the command line, run:
helm ls -a -n stormforge-system
If you don’t see the stormforge-applier-2.*.* chart listed in the CHART column of the output, install it.
Next, configure auto-deployment using one of these methods:
UI:
Annotations:
Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details):
Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with the live.stormforge.io/auto-deploy annotation. Example: live.stormforge.io/auto-deploy: "true"
Cluster level: Add the autoDeploy: "VALUE" parameter:value pair to the clusterDefaultConfig values in a cluster-defaults.yaml file. Example: autoDeploy: "true"
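As a sketch of the workload-level option, a Deployment manifest with the auto-deploy annotation might look like the following. The name, labels, and image are placeholders; only the annotation is the setting this topic describes.

```yaml
# Sketch: enabling auto-deployment for a single workload.
# The name, labels, and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  annotations:
    live.stormforge.io/auto-deploy: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: server
          image: example-app:latest
```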
Specify how often you want Optimize Live to generate recommendations for a workload.
Best practices:
Generating recommendations once daily (default value) is considered a best practice in stable or production environments.
Review recommendations manually the first few times before enabling auto-deployment, so that you can see how recommendations affect your workloads.
When you’re ready to have Optimize Live apply scheduled recommendations automatically, be sure to install the StormForge Applier and enable automatic deployment.
Key points:
To specify how often recommendations are generated and how long they’re valid for:
UI:
Annotations:
Use macros, ISO 8601 duration strings like "P1D", or Cron format to specify a schedule.
Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details):
Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with the live.stormforge.io/schedule annotation. Example: live.stormforge.io/schedule: "@daily"
Cluster level: Add the schedule: "VALUE" parameter:value pair to the clusterDefaultConfig values in a cluster-defaults.yaml file. Example: schedule: "P1D"
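For the cluster-level option, a minimal cluster-defaults.yaml might look like the following sketch; the layout under the clusterDefaultConfig key is assumed from the parameter names in this topic.

```yaml
# Sketch of a cluster-defaults.yaml file; key layout under
# clusterDefaultConfig is assumed from the parameters in this topic.
clusterDefaultConfig:
  # Generate recommendations once daily (an ISO 8601 duration).
  schedule: "P1D"
```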
Optional. Set the smallest amount of change to requests that makes it worthwhile to deploy new recommendations. Setting thresholds can help to reduce pod churn.
You can specify a percent-change threshold, a unit-change threshold, or both.
Key points:
Available only to workloads that have automatic deployment enabled (auto-deploy=true).
Important: Thresholds are not preserved when you disable automatic deployment on a workload. If you need to enforce standard thresholds across workloads in your organization, consider creating a YAML file that defines default values.
If you enable a threshold, you must specify a positive integer value for both CPU and memory.
You can set these for individual workloads, but setting them at the cluster level provides the most value.
Unit changes work well for small workloads, and percent changes work well for larger workloads.
A recommendation is applied to a workload if, for at least one container, the recommendation satisfies all enabled thresholds.
Example: Suppose you enable both the percent-change and unit-change thresholds for a workload. A recommendation will be applied if, for at least one container, the CPU change satisfies both CPU thresholds or the memory change satisfies both memory thresholds (or both conditions hold).
To configure thresholds:
UI:
To remove a threshold, click its toggle to the off position and then click Update.
Annotations:
Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details). Remember to include the live.stormforge.io/auto-deploy: "true" annotation.
More detailed examples are shown in the Configure by using annotations topic.
Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with one or both of the following pairs of annotations. Replace the sample values with the values you need.
Percent-change:
live.stormforge.io/auto-deploy.thresholds.cpu.percent: "5"
live.stormforge.io/auto-deploy.thresholds.memory.percent: "10"
Unit-change:
live.stormforge.io/auto-deploy.thresholds.cpu.unit: "10m"
live.stormforge.io/auto-deploy.thresholds.memory.unit: "10Mi"
Cluster level: Add either or both of the following parameter:value pairs to the clusterDefaultConfig values in a cluster-defaults.yaml file. Remember to include the autoDeploy: "true" pair. Replace the sample values with the values you need.
Percent-change:
autoDeployThresholdsCpuPercent: "5"
autoDeployThresholdsMemoryPercent: "10"
Unit-change:
autoDeployThresholdsCpuUnit: "10m"
autoDeployThresholdsMemoryUnit: "10Mi"
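Putting the cluster-level pairs together, a cluster-defaults.yaml sketch might look like this; the layout under clusterDefaultConfig is assumed from the parameter names above.

```yaml
# Sketch: cluster-level deployment thresholds. Thresholds take effect
# only for workloads with automatic deployment enabled, so the
# autoDeploy pair is included as well.
clusterDefaultConfig:
  autoDeploy: "true"
  # Percent-change thresholds (positive integers).
  autoDeployThresholdsCpuPercent: "5"
  autoDeployThresholdsMemoryPercent: "10"
  # Unit-change thresholds (Kubernetes resource quantities).
  autoDeployThresholdsCpuUnit: "10m"
  autoDeployThresholdsMemoryUnit: "10Mi"
```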
CLI:
Run stormforge edit workload handlers with either or both pairs of arguments. Remember to include the --auto-deploy (or the equivalent --auto-deploy=true) argument. Replace VALUE and VALUE_AND_UNIT with the values you need.
Percent-change arguments:
--auto-deploy-thresholds-min-percent-change-cpu=VALUE
--auto-deploy-thresholds-min-percent-change-memory=VALUE
Unit-change arguments:
--auto-deploy-thresholds-min-unit-change-cpu=VALUE_AND_UNIT
--auto-deploy-thresholds-min-unit-change-memory=VALUE_AND_UNIT
Examples:
Apply only recommendations that change CPU requests by at least 5% for at least one container. Suppose that, for this example, memory requests don’t matter to you; you must still set the memory threshold to a positive integer value.
stormforge edit workload WORKLOAD_NAME handlers \
--auto-deploy=true \
--auto-deploy-thresholds-min-percent-change-cpu=5 \
--auto-deploy-thresholds-min-percent-change-memory=1
Apply only recommendations that satisfy either (or both) of the following conditions for at least one container:
CPU requests change by at least 5% and 10m
Memory requests change by at least 10% and 10Mi
stormforge edit workload WORKLOAD_NAME handlers \
--auto-deploy=true \
--auto-deploy-thresholds-min-percent-change-cpu=5 \
--auto-deploy-thresholds-min-percent-change-memory=10 \
--auto-deploy-thresholds-min-unit-change-cpu=10m \
--auto-deploy-thresholds-min-unit-change-memory=10Mi
To remove a threshold, run stormforge edit workload handlers with the appropriate threshold values set to 0. For example, to remove the unit-change threshold, run the following command:
stormforge edit workload WORKLOAD_NAME handlers \
--auto-deploy=true \
--auto-deploy-thresholds-min-unit-change-cpu=0 \
--auto-deploy-thresholds-min-unit-change-memory=0
To see additional arguments (for example, for applying these settings to all workloads in a cluster or in a specific workspace), see stormforge edit workloads handlers or run stormforge edit workloads handlers --help.
Based on your risk profile, choose a value for the CPU optimization goal and memory optimization goal:
You can set different CPU and memory optimization goals. For example, if your organization can tolerate throttling when containers exceed CPU limits but cannot tolerate restarts when containers exceed memory limits, you can set a savings goal for CPU and a reliability goal for memory.
To configure optimization goals:
UI:
Annotations:
Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details):
Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with one or both of the following annotations. Example:
live.stormforge.io/cpu.optimization-goal: "reliability"
live.stormforge.io/memory.optimization-goal: "savings"
Cluster level: Add either or both of the following parameter:value pairs to the clusterDefaultConfig values in a cluster-defaults.yaml file. Example:
cpuOptimizationGoal: "reliability"
memoryOptimizationGoal: "savings"
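As a sketch of the namespace-level option, the scenario described above (tolerate CPU throttling, avoid memory-related restarts) might look like this; the namespace name is a placeholder.

```yaml
# Sketch: namespace-level optimization goals. Workloads in this
# namespace pick up these defaults unless a workload-level
# annotation overrides them.
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
  annotations:
    live.stormforge.io/cpu.optimization-goal: "savings"
    live.stormforge.io/memory.optimization-goal: "reliability"
```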
By default, only CPU and memory requests (not limits) are optimized for each container in a workload. You can adjust this value to optimize requests and limits; you can also exclude containers (such as sidecar containers) from optimization.
To specify what to optimize:
UI:
Annotations:
Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details):
Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with one or both of the following annotations. Example:
live.stormforge.io/containers.cpu.optimization-policy: "RequestsAndLimits"
live.stormforge.io/containers.memory.optimization-policy: "RequestsAndLimits"
Cluster level: Add either or both of the following parameter:value pairs to the clusterDefaultConfig values in a cluster-defaults.yaml file. Example:
containersCpuOptimizationPolicy: "RequestsAndLimits"
containersMemoryOptimizationPolicy: "RequestsAndLimits"
Examples: Setting container-specific defaults
Suppose you want to set container-specific defaults, as in these examples:
Default to optimizing CPU and memory requests only (and not limits). Several named containers are given exceptions: for the server and api containers, optimize requests and limits; for the sidecar container, do not optimize anything.
In the workload YAML manifest or the namespace YAML manifest, add:
live.stormforge.io/containers.cpu.optimization-policy: "RequestsOnly,server=RequestsAndLimits,api=RequestsAndLimits,sidecar=DoNotOptimize"
live.stormforge.io/containers.memory.optimization-policy: "RequestsOnly,server=RequestsAndLimits,api=RequestsAndLimits,sidecar=DoNotOptimize"
In the clusterDefaultConfig values in the cluster-defaults.yaml file, add:
containersCpuOptimizationPolicy: "RequestsOnly,server=RequestsAndLimits,api=RequestsAndLimits,sidecar=DoNotOptimize"
containersMemoryOptimizationPolicy: "RequestsOnly,server=RequestsAndLimits,api=RequestsAndLimits,sidecar=DoNotOptimize"
Assume all containers are set to optimize RequestsOnly (the default value). Suppose you want to optimize the server container CPU and memory requests and limits, and not optimize the sidecar container:
In the workload YAML manifest or the namespace YAML manifest, add:
live.stormforge.io/containers.cpu.optimization-policy: "server=RequestsAndLimits,sidecar=DoNotOptimize"
live.stormforge.io/containers.memory.optimization-policy: "server=RequestsAndLimits,sidecar=DoNotOptimize"
In the clusterDefaultConfig values in the cluster-defaults.yaml file, add:
containersCpuOptimizationPolicy: "server=RequestsAndLimits,sidecar=DoNotOptimize"
containersMemoryOptimizationPolicy: "server=RequestsAndLimits,sidecar=DoNotOptimize"
In this example, the optimization policies are updated for the server and sidecar containers only.
Important: Change this container-level setting only if you need to enforce a specific limit-to-request ratio to ensure that resource consumption doesn’t exceed requests by too wide a margin.
Optimize Live uses this ratio to calculate recommended CPU and memory limits:
Recommended limits = recommended requests * limitRequestRatio
The default limitRequestRatio is 2.0, which means that the recommended limit will be double the recommended requests. For example, if the recommended CPU requests value is 100m, then the recommended limit would be 200m.
Optimize Live always calculates recommended CPU and memory limits values, but doesn’t apply them if a container is configured to optimize requests only.
You can configure the limitRequestRatio based on your container needs:
Using limitRequestRatio in conjunction with resource limits
You can use both the limitRequestRatio and resource limits values together — they are not mutually exclusive. Optimize Live calculates the recommended limits value and then adjusts it if needed.
Suppose:
Your container settings might look something like this:
containerSettings:
  - cpu:
      requests:
        min: 20m
        max: 2000m
      limits:
        min: 2000m
        max: 16000m
        limitRequestRatio: 1.33
If Optimize Live recommends a cpu.requests value of 100m, then the calculated cpu.limits value is 133m (100 x 1.33), which is lower than the cpu.limits.min of 2000m. Optimize Live would adjust the recommended cpu.limits value to 2000m to respect the cpu.limits.min=2000m setting.
To change the limitRequestRatio setting:
UI:
Annotations:
Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details):
Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with one or both of the following annotations. Example:
live.stormforge.io/containers.cpu.limits.limit-request-ratio: "1.33"
live.stormforge.io/containers.memory.limits.limit-request-ratio: "1.33"
Cluster level: Add either or both of the following parameter:value pairs to the clusterDefaultConfig values in a cluster-defaults.yaml file. Example:
containersCpuLimitsLimitRequestRatio: "1.33"
containersMemoryLimitsLimitRequestRatio: "1.33"
Suppose you want to set container-specific defaults, as in these examples:
Set a ratio of 1.33 for all containers (overriding the default value of 2.0); specify exceptions for the server and api containers:
In the workload YAML manifest or the namespace YAML manifest, add:
live.stormforge.io/containers.cpu.limits.limit-request-ratio: "1.33,server=1.4,api=2.0"
live.stormforge.io/containers.memory.limits.limit-request-ratio: "1.33,server=1.4,api=2.0"
In the clusterDefaultConfig values in the cluster-defaults.yaml file, add:
containersCpuLimitsLimitRequestRatio: "1.33,server=1.4,api=2.0"
containersMemoryLimitsLimitRequestRatio: "1.33,server=1.4,api=2.0"
Override the current value for the server container only:
In the workload YAML manifest or the namespace YAML manifest, add:
live.stormforge.io/containers.cpu.limits.limit-request-ratio: "server=1.4"
live.stormforge.io/containers.memory.limits.limit-request-ratio: "server=1.4"
In the clusterDefaultConfig values in the cluster-defaults.yaml file, add:
containersCpuLimitsLimitRequestRatio: "server=1.4"
containersMemoryLimitsLimitRequestRatio: "server=1.4"