Configuration using a CRD
You can instantiate a Live object to create an application and generate recommendations that you can apply to optimize your workloads. The resulting application is defined as a Live custom resource, or Live resource for short.
You can use Live resources to automate the creation of applications and configuration of recommendations for them. You might do this as part of a CI/CD pipeline or other automation, eliminating manual work from the command line or UI.
In this guide, you’ll learn how to create a Live resource that represents an application, review its settings for generating recommendations, and confirm that the application is running.
Before you can create a Live resource, you need:
- An Optimize Live installation with the controller installed in the stormforge-system namespace
- A Kubernetes namespace with at least one Deployment
- A metrics provider installed and configured as described in the Configure Metrics Provider section of the Optimize Live installation docs
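A quick pre-flight check can confirm these prerequisites from the command line. This is an illustrative sketch, assuming kubectl is installed and your kubeconfig points at the target cluster (the nginx-app namespace matches the example used below):

```shell
# Illustrative pre-flight check for the prerequisites above.
# The check function reports whether each "kubectl get" succeeds,
# so the script degrades gracefully if the cluster is unreachable.
check() {
  if kubectl get "$@" >/dev/null 2>&1; then
    echo "ok:      kubectl get $*"
  else
    echo "missing: kubectl get $*"
  fi
}
check pods -n stormforge-system   # Optimize Live controller and components
check deployments -n nginx-app    # at least one target Deployment
```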
Creating a Live resource
The following YAML file defines a Live object based on the Live custom resource definition (CRD) and configures min and max values for CPU and memory, as well as the target HPA utilization range.
Applying this YAML file creates an application (a Live resource) named My CRD test that generates and applies recommendations to targets in the namespace that you specify (in this example, nginx-app).
You’ll explore the file in detail in the Writing a Live resource spec section.
Note: The HPA, limits, and requests values below are sample values only, not suggested values. Choose values that meet your workload needs.
```yaml
apiVersion: optimize.stormforge.io/v1
kind: Live
metadata:
  name: my-crd-test
spec:
  components:
  - enabled: true
    name: tsdb
  - enabled: true
    name: recommender
  - enabled: true
    name: applier
  - enabled: true
    name: grafana
  application:
    appID: my-crd-test
    appName: My CRD test
    mode: manual
    namespaceSelector:
      kubernetes:
        namespace: nginx-app
    resources:
    - interval: 1h0m0s
      containerResources:
        bounds:
          requests:
            min:
              cpu: 50m
              memory: 50M
            max:
              cpu: 500m
              memory: 500M
          limits:
            min:
              cpu: 50m
              memory: 50M
            max:
              cpu: 500m
              memory: 500M
        targetUtilization:
          min:
            cpu: 50
          max:
            cpu: 90
        tolerance:
          cpu: low
          memory: high
```
You’ll create the Live resource in the stormforge-system namespace. This namespace also contains the Optimize Live controller and other key components. To apply the Live definition file, run:
kubectl apply -f LIVE_OBJECT_FILENAME.yaml -n stormforge-system
You should see output confirming that the resource was created:

live.optimize.stormforge.io/my-crd-test created
Validating the Live resource
Now, confirm that the Optimize Live pods for your new Live resource are up and running. If multiple Live objects exist, you’ll have multiple applier, recommender, and tsdb pods. To understand which application each pod belongs to, as well as the pod’s component type, list the pods with their labels:
kubectl get pods -n stormforge-system --show-labels
Notice that Optimize Live created three pods for the my-crd-test application:
NAME                                  READY   STATUS    RESTARTS   AGE     LABELS
applier-87dcc8-55b9c76c47-9bgvn       1/1     Running   0          11d     component=applier,configChecksum=e3b0c44298fc1c149afb,live.optimize.stormforge.io/applicationName=my-crd-test,pod-template-hash=55b9c76c47
grafana-75c8f9cfd4-sj8t8              1/1     Running   0          11d     component=grafana,configChecksum=029b0324f87f64507cff,datasourceConfigChecksum=aad2b26c23dd51d3d42c,pod-template-hash=75c8f9cfd4,providerChecksum=996689ddb6a58d780bf3
optimize-live-5d95b4c664-mwf4c        1/1     Running   0          4d23h   app.kubernetes.io/instance=optimize-live,app.kubernetes.io/name=optimize-live,component=controller,helm.sh/chart-version=0.5.1,pod-template-hash=5d95b4c664
recommender-87dcc8-78dfc58889-zxgsw   1/1     Running   0          11d     component=recommender,configChecksum=e39a405a0861f5173ece,live.optimize.stormforge.io/applicationName=my-crd-test,pod-template-hash=78dfc58889
tsdb-87dcc8-6c7d9c8478-rlj7k          1/1     Running   0          11d     component=tsdb,configChecksum=c18fe0e02ff5273ad806,live.optimize.stormforge.io/applicationName=my-crd-test,pod-template-hash=6c7d9c8478
What do these pods do?
- tsdb contains a copy of the data collected from the data gathering tools (such as Prometheus or Datadog), used to make recommendations
- recommender generates recommendations for optimizing workloads and applications
- applier applies the recommendations
- grafana, which already existed before you created your Live resource, provides a dashboard where you can view your recommendations
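Because the applier, recommender, and tsdb pods all carry the applicationName label shown above, you can narrow the listing to a single application. A short sketch:

```shell
# List only the pods that belong to the my-crd-test application,
# using the applicationName label shown in the output above.
selector="live.optimize.stormforge.io/applicationName=my-crd-test"
kubectl get pods -n stormforge-system -l "$selector" \
  || echo "could not list pods; check cluster access" >&2
```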
Viewing the application in the UI
You can see your Live resource listed as an application in the Optimize Live UI. In a browser window, go to the application’s page, where APP_ID in the URL corresponds to the appID value from the Live object definition file.
On the application’s page, below the application’s name, notice the Namespace selector value. This application provides recommendations for workloads (typically Deployments) in the matching namespace(s). To apply the recommendations, click Apply Recommendations. To edit the recommendation settings, click Configure (or edit the Live resource’s YAML file and apply it again).
You need to wait for Optimize Live to collect performance data before you’ll see recommendations. This may take a few minutes to several hours, depending on your metrics source. Monitor progress via the Progress bar on the application’s page.
Writing a Live resource spec
As with all other Kubernetes configuration, a Live resource needs the apiVersion, kind, and metadata fields, and its name must be a valid DNS subdomain name. A Live resource also requires a .spec section, in which you must provide values for the following fields:
- .spec.components: An array that contains tsdb, recommender, applier, and grafana elements, each with enabled set to true.
- .spec.application.appID: A unique alphanumeric application identifier. Do not include spaces or special characters.
- .spec.application.appName: A unique name that helps you identify the application in a list.
- .spec.application.mode: Indicates how recommendations are applied (this example uses manual).
- .spec.application.namespaceSelector.kubernetes.namespace: The namespace that contains the target workloads or Deployments for which to generate recommendations.
- .spec.application.resources: An array of at least one element that specifies the subfields listed below.
- .spec.application.resources[*].interval: The default interval for generating recommendations (this example uses 1h0m0s).
For greater control over how Kubernetes allocates resources to containers, you can provide values for the following requests and limits fields.
- CPU requests (in millicores): Specify the valid range for CPU request recommendations. By default, there are no bounds on the recommendations Optimize Live can make.
  - .spec.application.resources[*].containerResources.bounds.requests.min.cpu: Don’t specify a value lower than what is needed for application startup (based on known application requirements). If the recommender discovers an optimal minimum that’s lower than what you specify, the recommender sets its minimum recommendation to match what you specify.
  - .spec.application.resources[*].containerResources.bounds.requests.max.cpu: Don’t exceed the core count of your biggest nodes; otherwise, your recommendations might be unschedulable.
- Memory requests (in megabytes): Specify the valid range for memory request recommendations. By default, there are no bounds on the recommendations Optimize Live can make.
  - .spec.application.resources[*].containerResources.bounds.requests.min.memory: If the recommender discovers an optimal minimum that’s lower than what you specify, the recommender sets its minimum recommendation to match what you specify.
  - .spec.application.resources[*].containerResources.bounds.requests.max.memory: Don’t exceed the available memory of your biggest node; otherwise, your recommendations might be unschedulable.
- CPU limits range (in millicores): For some workloads, it may be appropriate to ensure CPU limits are never lower than some known value. Example: For Java applications, a minimum of 2000m (2 full CPUs) can help to achieve a reasonable startup time.
- Memory limits range (in megabytes): For some workloads, it may be appropriate to ensure memory limits are never lower than some known value. If a container exceeds its memory limit, it is terminated.
- HPA target CPU utilization (string): Specify the percentage range that can reliably handle production loads. Examples: A min value of 10 can improve startup times; a max value of 95 can help prevent throttling.
- Risk tolerance: Choose the option that corresponds to your reliability and savings goals, separately for CPU (tolerance.cpu) and memory (tolerance.memory).
  For tolerance.cpu:
  - low minimizes the risk of hitting CPU limits. Consider this option for business-critical applications.
  - medium provides a balanced approach to achieving cost savings and increased reliability.
  - high provides recommendations that are closer to actual CPU usage. Consider this option when you want to experiment with maximizing your resource savings.
  For tolerance.memory:
  - low minimizes the risk of hitting memory limits. Consider this option for business-critical applications or to minimize the risk of out-of-memory (OOM) errors.
  - medium provides a balanced approach to achieving cost savings and increased reliability.
  - high provides recommendations that are closer to actual memory usage. Consider this option when you want to experiment with maximizing your resource savings.
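Putting the required fields together, a minimal Live spec looks like the following sketch. The values mirror the full example at the top of this guide; adjust names, namespace, and interval for your workloads, and add containerResources bounds as needed:

```yaml
apiVersion: optimize.stormforge.io/v1
kind: Live
metadata:
  name: minimal-example        # must be a valid DNS subdomain name
spec:
  components:
  - enabled: true
    name: tsdb
  - enabled: true
    name: recommender
  - enabled: true
    name: applier
  - enabled: true
    name: grafana
  application:
    appID: minimal-example
    appName: Minimal example
    mode: manual
    namespaceSelector:
      kubernetes:
        namespace: nginx-app
    resources:
    - interval: 1h0m0s
```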
Validating and using your Live resources
Eventually, you’ll create more Live resources to get recommendations for other namespaces. To list all of the Live resources, run:
kubectl get lives -n stormforge-system
You’ll see output similar to this:
NAME          AGE
my-crd-test   6d20h
To get the details about a specific Live resource, specify its name (in this example, my-crd-test):
kubectl describe live my-crd-test -n stormforge-system
Updating your Live objects
You can use kubectl to edit application recommendation settings:
kubectl edit live LIVE_NAME -n stormforge-system
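If you prefer not to open an editor, a JSON patch can change a single field in place. This is a sketch; the 0 in the path targets the first element of the resources array, so adjust the index to match your spec:

```shell
# Replace only the CPU risk tolerance of the first resources entry.
# The path index (0) is an assumption; adjust it for your spec.
patch='[{"op": "replace", "path": "/spec/application/resources/0/tolerance/cpu", "value": "medium"}]'
kubectl patch live my-crd-test -n stormforge-system --type json -p "$patch" \
  || echo "patch not applied; check cluster access" >&2
```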
If you see that one or more pods aren’t running as expected, you can look for errors in the Optimize Live controller logs:
kubectl logs -l app.kubernetes.io/name=optimize-live -n stormforge-system
You can also check the high-level health of the Optimize Live controller:
kubectl get deployment optimize-live -n stormforge-system
The output will look similar to this:
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
optimize-live   1/1     1            1           6d23h
Deleting Live objects
To stop receiving recommendations for this application, delete the Live resource:
kubectl delete live LIVE_NAME -n stormforge-system
Using Live custom resources to create and configure applications is a convenient way to work with applications as part of a CI/CD pipeline or other automation, for example, to ensure that applications provisioned by CI are accompanied by an appropriate Optimize Live configuration.
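As a sketch of that kind of automation (the variable and file names here are hypothetical, and the spec fields mirror the example in this guide), a pipeline step might stamp out one Live resource per newly provisioned namespace:

```shell
# Hypothetical CI step: generate a Live manifest for a new namespace and
# apply it. In a real pipeline, NS would come from the CI environment.
# Add the components array and containerResources bounds as needed.
NS="payments"
cat > "live-${NS}.yaml" <<EOF
apiVersion: optimize.stormforge.io/v1
kind: Live
metadata:
  name: ${NS}-live
spec:
  application:
    appID: ${NS}-live
    appName: ${NS} optimization
    mode: manual
    namespaceSelector:
      kubernetes:
        namespace: ${NS}
    resources:
    - interval: 1h0m0s
EOF
kubectl apply -f "live-${NS}.yaml" -n stormforge-system \
  || echo "apply failed; check cluster access" >&2
```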