Configure

Learn how to configure workloads in Optimize Live for common scenarios via the Optimize Live UI and Kubernetes annotations

Key Points

  • Use the Optimize Live UI to configure optimization settings for one workload at a time, which is helpful when testing settings on individual workloads.

  • Use annotations to configure optimization settings at scale at the workload, namespace, and cluster levels.

    This topic doesn’t discuss annotations in detail. If you plan to use annotations, take a few minutes to read the Configure by using annotations guide — it lists the supported annotations and will help you understand where to place annotations to configure settings at the workload, namespace, and cluster levels.

  • Workloads whose settings are managed in any way by annotations become view-only in the UI.

    To edit the workload in the UI, you must remove any workload-level annotations, namespace-level annotations, and cluster-defaults ConfigMap settings that apply to the workload.

Automatically deploy recommendations

By default, Optimize Live doesn’t apply (deploy) recommendations automatically.

Best practice: Review and selectively apply the first few recommendations to see what happens. Then, as you trust recommendations more, you can enable them to be applied automatically on more workloads.

Steps

First, verify that StormForge Applier version 2.2 or later is installed. On the command line, run:

stormforge check optimize-live

If the output indicates that the Applier isn’t installed or that it’s out of date, install it.

Next, configure auto-deployment using one of these methods:
UI:

  1. In the left navigation, click Optimize Live > Workloads, then find and click the name of the workload you want to work with.
  2. On the Workload page, click Config, and set Automatic Deployment to On.
  3. To save your changes, click Update.

Annotations:
Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details):

  • Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with the live.stormforge.io/auto-deploy annotation. Example: live.stormforge.io/auto-deploy: "true"
  • Cluster level: Add the autoDeploy: "VALUE" parameter:value pair to the clusterDefaultConfig values in a cluster-defaults.yaml file. Example: autoDeploy: "true".
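
For example, a Deployment manifest with auto-deploy enabled at the workload level might look like the following sketch (the workload name and namespace are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder workload name
  namespace: my-namespace   # placeholder namespace
  annotations:
    live.stormforge.io/auto-deploy: "true"   # apply recommendations automatically
spec:
  # ...container spec unchanged...
```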

Set a schedule

Specify how often you want Optimize Live to generate recommendations for a workload.

Best practices:

  • Generating recommendations once daily (the default) is a best practice in stable or production environments.

  • Review recommendations manually the first few times before enabling auto-deployment, so that you can see how recommendations affect your workloads.

    When you’re ready to have Optimize Live apply scheduled recommendations automatically, be sure to install the StormForge Applier and enable automatic deployment.

Key points:

  • Short intervals (such as hourly or every few hours) track utilization closely and produce short-lived, quickly changing recommendations. These are useful in automatic deployment mode.
  • Longer intervals (such as weekly) produce longer-lived recommendations, which is useful for integrating with slower-moving CI pipelines or for applying recommendations manually.
Steps

To specify how often recommendations are generated and how long they remain valid:
UI:

  1. In the left navigation, click Optimize Live > Workloads, then find and click the name of the workload you want to work with.
  2. On the Workload page, click Config.
  3. In the Recommendation Schedule section, specify how often you want to receive recommendations.
  4. To save your changes, click Update.

Annotations:
Use macros, ISO 8601 Duration strings like "P1D", or Cron format to specify a schedule. Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details):

  • Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with the live.stormforge.io/schedules annotation. Example: live.stormforge.io/schedules: "@daily"
  • Cluster level: Add the schedule: "VALUE" parameter:value pair to the clusterDefaultConfig values in a cluster-defaults.yaml file. Example: schedule: "P1D".
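
For example, to set a daily schedule for every workload in a namespace, the namespace manifest might look like this sketch (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace        # placeholder namespace name
  annotations:
    live.stormforge.io/schedules: "@daily"   # generate recommendations once per day
```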

Configure CPU and memory optimization goals

Based on your risk profile, choose a value for the CPU optimization goal and memory optimization goal:

  • savings: Provides recommendations that are closer to actual CPU or memory usage. Consider this option when you want to maximize your resource savings.
  • balanced: Default value. Provides a balanced approach to achieving cost savings and increased reliability.
  • reliability: Minimizes the risk of hitting CPU or memory limits. Consider this option for business-critical applications.

You can set different CPU and memory optimization goals. For example, if your organization can tolerate throttling when containers exceed CPU limits but cannot tolerate restarts when containers exceed memory limits, you can set a savings goal for CPU and a reliability goal for memory.

Steps

To configure optimization goals:
UI:

  1. In the left navigation, click Optimize Live > Workloads, then find and click the name of the workload you want to work with.
  2. On the Workload page, click Config and choose your optimization goal.
  3. To save your changes, click Update.

Annotations:
Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details):

  • Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with one or both of the following annotations. Example:
    live.stormforge.io/cpu.optimization-goal: "reliability"
    live.stormforge.io/memory.optimization-goal: "savings"
    
  • Cluster level: Add either or both of the following parameter:value pairs to the clusterDefaultConfig values in a cluster-defaults.yaml file. Example:
    cpuOptimizationGoal: "reliability"
    memoryOptimizationGoal: "savings"
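
At the cluster level, the relevant portion of a cluster-defaults.yaml file might look like the following sketch (surrounding keys omitted):

```yaml
clusterDefaultConfig:
  cpuOptimizationGoal: "reliability"   # minimize the risk of CPU throttling
  memoryOptimizationGoal: "savings"    # track memory usage closely to maximize savings
```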
    

Specify what to optimize (set an optimization policy): requests and limits, requests, or nothing

By default, CPU and memory requests and limits are optimized for each container in a workload. You can adjust this setting to optimize requests only, and you can exclude containers (such as sidecar containers) from optimization entirely.

Steps

To specify what to optimize:
UI:

  1. From the left navigation, click Optimize Live > Workloads, then find and click the name of the workload you want to work with.
  2. On the Workload page, click Config.
  3. In the Containers section, expand the container you want to work with.
  4. In the Configure CPU and Configure Memory sections, select what you want to optimize.
    • To exclude a container (such as a sidecar container) from optimization: In both the Configure CPU and Configure Memory sections, select Don’t optimize.
  5. Repeat as needed for containers in the workload.
  6. To save your changes, click Update.

Annotations:
Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details):

  • Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with one or both of the following annotations. Example:

        live.stormforge.io/containers.cpu.optimization-policy: "RequestsAndLimits"
        live.stormforge.io/containers.memory.optimization-policy: "RequestsAndLimits"
    
  • Cluster level: Add either or both of the following parameter:value pairs to the clusterDefaultConfig values in a cluster-defaults.yaml file. Example:

    containersCpuOptimizationPolicy: "RequestsAndLimits"
    containersMemoryOptimizationPolicy: "RequestsAndLimits"
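
For example, to optimize requests only (and not limits) for all containers in a workload, the workload manifest's annotations might look like this sketch (the workload name is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app    # placeholder workload name
  annotations:
    live.stormforge.io/containers.cpu.optimization-policy: "RequestsOnly"
    live.stormforge.io/containers.memory.optimization-policy: "RequestsOnly"
```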
    

Examples: Setting container-specific defaults

Suppose you want to set container-specific defaults, as in these examples:

  • Default to optimizing CPU and memory requests only (and not limits). Several named containers are given exceptions: for the server and api containers, optimize requests and limits; for the sidecar container, do not optimize anything.

    • Workload or namespace level: In the workload YAML manifest or the namespace YAML manifest, add:
      live.stormforge.io/containers.cpu.optimization-policy: "RequestsOnly,server=RequestsAndLimits,api=RequestsAndLimits,sidecar=DoNotOptimize"
      live.stormforge.io/containers.memory.optimization-policy: "RequestsOnly,server=RequestsAndLimits,api=RequestsAndLimits,sidecar=DoNotOptimize"
      
    • Cluster level: In the clusterDefaultConfig values in the cluster-defaults.yaml file, add:
      containersCpuOptimizationPolicy: "RequestsOnly,server=RequestsAndLimits,api=RequestsAndLimits,sidecar=DoNotOptimize"
      containersMemoryOptimizationPolicy: "RequestsOnly,server=RequestsAndLimits,api=RequestsAndLimits,sidecar=DoNotOptimize"
      
  • Assume all containers are set to optimize RequestsAndLimits (the default value). To optimize only requests for the server container's CPU and memory, and to exclude the sidecar container from optimization:

    • In the workload YAML manifest or the namespace YAML manifest, add:
      live.stormforge.io/containers.cpu.optimization-policy: "server=RequestsOnly,sidecar=DoNotOptimize"
      live.stormforge.io/containers.memory.optimization-policy: "server=RequestsOnly,sidecar=DoNotOptimize"
      
    • In the clusterDefaultConfig values in a cluster-defaults.yaml file, add:
      containersCpuOptimizationPolicy: "server=RequestsOnly,sidecar=DoNotOptimize"
      containersMemoryOptimizationPolicy: "server=RequestsOnly,sidecar=DoNotOptimize"
      

    In this example, the optimization policies are updated for the server and sidecar containers only.

Change the limit-to-request ratio (limitRequestRatio)

Important: Change this container-level setting only if you need to enforce a specific limit-to-request ratio, for example to ensure that resource consumption can’t greatly exceed requests.

Concepts and examples

Optimize Live uses this ratio to calculate recommended CPU and memory limits:
Recommended limits = recommended requests * limitRequestRatio

The default limitRequestRatio is 1.2, which means that the recommended limit will be only 20% higher than the recommended requests. This ratio ensures consumption won’t greatly exceed requests. For example, if the recommended CPU requests value is 100m, then the recommended limit would be 120m.

Optimize Live always calculates recommended CPU and memory limits values, but doesn’t apply them if a container is configured to optimize requests only.

You can configure the limitRequestRatio based on your container needs:

  • 1.0 provides Guaranteed Quality of Service
  • 1.2 is the default value
  • A custom value greater than 1.0 (specified to up to two decimal places) provides more headroom for spikes or changes in consumption

Using limitRequestRatio in conjunction with resource limits
You can use both the limitRequestRatio and resource limits values together — they are not mutually exclusive. Optimize Live calculates the recommended limits value and then adjusts it if needed.

Suppose:

  • You want to ensure a workload has at least 2 cores for startup requirements.
  • You want a limitRequestRatio of 1.33.

Your container settings might look something like this:

containerSettings:
  - cpu:
      requests:
        min: 20m        # never request less than 20m
        max: 2000m      # never request more than 2 cores
      limits:
        min: 2000m      # guarantee at least 2 cores for startup requirements
        max: 16000m
        limitRequestRatio: 1.33

If Optimize Live recommends a cpu.requests value of 100m, then the calculated cpu.limits value is 133m (100 x 1.33), which is lower than cpu.limits.min of 2000m. Optimize Live would adjust the recommended cpu.limits value to 2000m to respect the cpu.limits.min=2000m setting.

Steps

To change the limitRequestRatio setting:
UI:

  1. Navigate to the workload details page and click Config.
  2. Expand the container you want to work with.
  3. In both the CPU and Memory sections, set the Limit Request Ratio to a value equal to or greater than 1.0 (up to two decimal places).
    Tip:
    • 1.0 provides Guaranteed Quality of Service
    • 1.2 is the default value
  4. Repeat as needed for other containers.
  5. To save your changes, click Update.

Annotations:
Decide at which level to set the value and annotate accordingly (refer to Configure by using annotations for details):

  • Workload or namespace level: Annotate the workload YAML manifest or the namespace YAML manifest with one or both of the following annotations. Example:
    live.stormforge.io/containers.cpu.limits.limit-request-ratio: "1.33"
    live.stormforge.io/containers.memory.limits.limit-request-ratio: "1.33"
    
  • Cluster level: Add either or both of the following parameter:value pairs to the clusterDefaultConfig values in a cluster-defaults.yaml file. Example:
    containersCpuLimitsLimitRequestRatio: "1.33"
    containersMemoryLimitsLimitRequestRatio: "1.33"
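
For example, at the cluster level the relevant portion of a cluster-defaults.yaml file might look like the following sketch (surrounding keys omitted):

```yaml
clusterDefaultConfig:
  containersCpuLimitsLimitRequestRatio: "1.33"     # recommended CPU limit = 1.33 × recommended request
  containersMemoryLimitsLimitRequestRatio: "1.33"  # recommended memory limit = 1.33 × recommended request
```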
    

Examples: Setting container-specific defaults
Suppose you want to set container-specific defaults, as in these examples:

  • Set a ratio of 1.33 for all containers (overriding the default value of 1.2); specify exceptions for the server and api containers:

    • Workload or namespace level: In the workload YAML manifest or the namespace YAML manifest, add:
      live.stormforge.io/containers.cpu.limits.limit-request-ratio: "1.33,server=1.4,api=1.2"
      live.stormforge.io/containers.memory.limits.limit-request-ratio: "1.33,server=1.4,api=1.2"
      
    • Cluster level: In the clusterDefaultConfig values in the cluster-defaults.yaml file, add:
      containersCpuLimitsLimitRequestRatio: "1.33,server=1.4,api=1.2"
      containersMemoryLimitsLimitRequestRatio: "1.33,server=1.4,api=1.2"
      
  • Override the current value for the server container only:

    • In the workload YAML manifest or the namespace YAML manifest, add:

      live.stormforge.io/containers.cpu.limits.limit-request-ratio: "server=1.4"
      live.stormforge.io/containers.memory.limits.limit-request-ratio: "server=1.4"
      
    • In the clusterDefaultConfig values in the cluster-defaults.yaml file, add:

      containersCpuLimitsLimitRequestRatio: "server=1.4"
      containersMemoryLimitsLimitRequestRatio: "server=1.4"
      
Last modified September 15, 2023