Best practices

Tips to help you get the most from Optimize Live

Get the most value out of Optimize Live and minimize unexpected results by following these best practices.

Workload settings

Configuration at scale using annotations

Annotations enable you to configure optimization behavior at the workload, namespace, and cluster levels.

  • Set organization-wide defaults; enable teams to override as needed: Using annotations, platform teams can set organization-wide default workload values, and development teams can fine-tune settings depending on their application and environment (development, staging, test, production).

    You can set default values at the workload, namespace, and cluster levels. Workload settings take the highest precedence, and cluster settings the lowest (see the sketch after this list).

  • Specify the majority default value first, followed by exceptions: When editing container-level resources (such as requests and limits), specify the value to apply to all or most containers first, followed by container-specific exceptions, as in these examples:

    • When annotating a workload or namespace:
      live.stormforge.io/containers.cpu.optimization-policy: "RequestsAndLimits,sidecar=DoNotOptimize"
    • In a cluster-default ConfigMap:
      containersCpuOptimizationPolicy: "RequestsAndLimits,sidecar=DoNotOptimize"
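
For example, the sketch below sets a namespace-wide default and then overrides it on a single Deployment, carving out that workload's sidecar container. The namespace and Deployment names are placeholders; only the live.stormforge.io/containers.cpu.optimization-policy annotation and the policy values shown above come from this page.

    # Namespace-level default: leave workloads in this namespace unoptimized
    # unless a workload opts in explicitly.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: payments                     # placeholder namespace name
      annotations:
        live.stormforge.io/containers.cpu.optimization-policy: "DoNotOptimize"
    ---
    # Workload-level override: optimize CPU requests and limits for every
    # container in this Deployment except the sidecar container.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: checkout                     # placeholder workload name
      namespace: payments
      annotations:
        live.stormforge.io/containers.cpu.optimization-policy: "RequestsAndLimits,sidecar=DoNotOptimize"
    # (pod template spec omitted for brevity)

Because workload settings take precedence over namespace settings, the checkout Deployment is optimized even though its namespace default disables optimization.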

Applying recommendations automatically (autodeployment)

  • Before configuring Optimize Live to apply recommendations automatically, review and selectively apply the first few recommendations to confirm they behave as you expect. Then, as your confidence in the recommendations grows, enable automatic deployment on more workloads (one possible rollout pattern is sketched below).

    When you’re ready to have Optimize Live apply scheduled recommendations automatically, be sure to install the StormForge Applier and enable automatic deployment.
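
One low-risk way to phase in automatic deployment is to enable it on a few workloads at a time by annotation, as in the sketch below. The live.stormforge.io/auto-deploy annotation key and value shown here are assumptions for illustration only; confirm the exact key and accepted values in the Optimize Live annotation reference before relying on them.

    # Illustrative only: the auto-deploy annotation key below is an assumption;
    # verify the exact key and accepted values in the Optimize Live docs.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: report-generator             # placeholder: a low-risk workload
      annotations:
        live.stormforge.io/auto-deploy: "true"
    # Leave automatic deployment disabled on other workloads and keep reviewing
    # their recommendations manually until you are confident in the results.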
