Security FAQ

As a SOC 2 Type II compliant company, StormForge adheres to global and industry compliance best practices.

You asked:

How does one access StormForge SOC 2 compliance reports?

StormForge is SOC 2 Type II compliant, as determined by an audit completed by an accredited auditing firm. You can request access to the SOC 2 reports on the SOC 2 Compliance Reports page.

How is Optimize Live deployed?

  • On your cluster, we deploy the following components:
    • StormForge Agent, which reports on new workloads and deploys and configures the Metrics Forwarder. The Metrics Forwarder collects data and ships it to the StormForge backend. The oci:// Helm chart creates and uses a ServiceAccount called stormforge-agent and binds it to the Kubernetes view ClusterRole, granting read-only permissions to all resources in the cluster.
    • Applier (optional), which patches workloads with optimized resource utilization recommendations. The oci:// Helm chart creates and uses a ServiceAccount called stormforge-applier and binds it to the Kubernetes edit ClusterRole, granting update and patch permissions to all optimizable workloads (and HPA, if enabled).
  • On our instances, we store the data and run machine learning to provide recommendations, which are presented in the StormForge UI.
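The Agent's read-only access described above amounts to a binding like the following sketch. This is illustrative only: the oci:// Helm chart creates the equivalent objects for you, and the namespace and binding name below are assumptions.

```yaml
# Illustrative only: the oci:// Helm chart creates equivalent objects.
# The binding name and namespace (stormforge-system) are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: stormforge-agent-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                     # built-in read-only ClusterRole
subjects:
  - kind: ServiceAccount
    name: stormforge-agent       # ServiceAccount created by the chart
    namespace: stormforge-system
```

The optional Applier follows the same pattern with the stormforge-applier ServiceAccount bound to the built-in edit ClusterRole instead.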

What data do you collect?

From a targeted instance, we collect:

  • Metadata: node names, node UIDs, node instance types, kube-system namespace UID, namespace names, workload names, workload types, workload labels, pod names, pod requests and limits, container names, and container requests and limits. We also have the cluster name, which is provided by a user when installing the Agent.

    • If you specify an allowNamespaces or a denyNamespaces list, we collect data accordingly. For example, we do not collect data about namespaces that you include in the denyNamespaces list.
    • You can choose to disable the collection of workload labels when you install the Agent.
  • Metrics: We use the metadata above to build node, workload, and container metrics. For the complete list of metrics, see What metrics does StormForge collect? at the end of this topic.
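As a sketch, the namespace filtering described above might be configured at Agent install time with values like these. The exact value keys and their location in the chart's values.yaml are assumptions; only allowNamespaces and denyNamespaces themselves are named in this FAQ, so check the chart's documentation before use.

```yaml
# Hypothetical values.yaml fragment for the StormForge Agent chart.
clusterName: prod-us-east-1   # cluster name supplied by the user at install

# Collect data only from these namespaces...
allowNamespaces:
  - payments
  - checkout

# ...or collect from everything except these namespaces.
denyNamespaces:
  - kube-system
  - dev-sandbox
```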

We do not collect any personal data directly; the only personal data we receive comes through social logins (such as Google or GitHub).

For details, check out our Privacy Policy.

How do you collect data and where is it stored?

The StormForge Metrics Forwarder collects metrics data from a targeted instance via HTTPS requests, and then pushes the metrics to the StormForge SaaS backend. We store the parsed and ingested data in the StormForge cloud. Each customer has their own separate instance, and data is not shared.

How long is it stored for?

By default, we store data for one year. Upon request, we will delete all data that is less than one year old.

Who has access to customer data?

Access to production data is restricted to privileged StormForge engineers on an as-needed, temporary basis, and only for the explicit purpose of direct customer support.

When our Machine Learning team needs data for product improvement, the data is anonymized first. No external parties have access to customer data.

Does StormForge support federated single sign-on?

Yes. StormForge supports OpenID Connect (OIDC), Security Assertion Markup Language (SAML), and other popular federated single sign-on (SSO) technologies through our authorization vendor. We can map groups from your authentication system into roles in the StormForge system (Viewer, Operator, Manager, Administrator).
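Conceptually, the group-to-role mapping works like the sketch below. This is illustrative only: the real mapping is configured through the authorization vendor, not in code, and the IdP group names and privilege ordering of the roles are assumptions.

```python
# Illustrative sketch of mapping identity-provider (IdP) groups to
# StormForge roles. The real mapping lives in the SSO/authorization
# vendor's configuration, not in application code.

# Roles assumed ordered from least to most privileged.
ROLE_RANK = {"Viewer": 0, "Operator": 1, "Manager": 2, "Administrator": 3}

# Hypothetical IdP group names mapped to StormForge roles.
GROUP_TO_ROLE = {
    "eng-readonly": "Viewer",
    "sre": "Operator",
    "platform-leads": "Manager",
    "it-admins": "Administrator",
}

def resolve_role(groups: list[str]) -> str:
    """Return the most privileged role granted by any of the user's groups."""
    roles = [GROUP_TO_ROLE[g] for g in groups if g in GROUP_TO_ROLE]
    if not roles:
        return "Viewer"  # assumed default when no group matches
    return max(roles, key=ROLE_RANK.__getitem__)

print(resolve_role(["sre", "it-admins"]))   # most privileged group wins
print(resolve_role(["unknown-group"]))      # falls back to the default role
```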

For more information, contact your sales representative or contact StormForge sales.

Which StormForge service URLs must be on the organization’s allowlist?

See the complete list in the StormForge installation prerequisites.

What metrics does StormForge collect?

The following table lists the metrics that StormForge collects from Kubernetes clusters.

  • Workload-level metrics are generated by StormForge using metadata that we collect and are prefixed with sf_.
  • Container-level metrics are built-in metrics provided by cAdvisor running on the Kubernetes node.
| Metric | Source | Why we collect it |
| --- | --- | --- |
| sf_node_allocated_requests | Custom node metrics | The number of allocated requests on the node |
| sf_node_allocated_limits | Custom node metrics | The number of allocated limits on the node |
| sf_node_allocated_pods | Custom node metrics | The number of non-terminated pods running on the node |
| sf_node_allocatable_resources | Custom node metrics | The number of allocatable resources on the node |
| sf_workload_pod_owner | Consolidated metric for ownership | Provides pod owner and workload in a single metric, replacing KSM kube_pod_owner and kube_replicaset_owner |
| sf_workload_spec_replicas | Consolidated metric for desired replica count | Provides all desired-replica metrics regardless of the type of pod owner. The pod owner must have the scale subresource. |
| sf_workload_status_replicas | Consolidated metric for observed replica count | Provides all observed-replica metrics regardless of the type of pod owner |
| sf_workload_pod_container_resource_requests | Consolidated pod metric with requests | Provides all requests metrics in a single metric |
| sf_workload_pod_container_resource_limits | Consolidated pod metric with limits | Provides all limits metrics in a single metric |
| sf_horizontalpodautoscaler_spec_min_replicas | KSM-like/horizontalpodautoscaler-metrics | Track the minimum replicas for each HPA |
| sf_horizontalpodautoscaler_spec_max_replicas | KSM-like/horizontalpodautoscaler-metrics | Track the maximum replicas for each HPA |
| sf_horizontalpodautoscaler_spec_target_metric | KSM-like/horizontalpodautoscaler-metrics | Track the target metric for each HPA |
| container_cpu_usage_seconds_total | cAdvisor | Track CPU usage for each container |
| container_memory_working_set_bytes | cAdvisor | Track memory usage for each container |
| container_cpu_cfs_throttled_seconds_total | cAdvisor | Total time the container has been throttled |
| container_memory_max_usage_bytes | cAdvisor | Maximum memory usage recorded for the container |
| container_oom_events_total | cAdvisor | Count of out-of-memory (OOM) events observed for the container |
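Metrics such as container_cpu_usage_seconds_total are cumulative counters, so turning two samples into a usage rate is a simple delta over time. The sketch below shows that arithmetic; the sample values are made up for illustration.

```python
# Compute an average per-second CPU usage rate from two samples of the
# cumulative counter container_cpu_usage_seconds_total.

def cpu_usage_rate(t1: float, v1: float, t2: float, v2: float) -> float:
    """Average CPU cores used between two counter samples.

    t1, t2: sample timestamps in seconds; v1, v2: counter values
    (total CPU-seconds consumed). Assumes no counter reset in between.
    """
    if t2 <= t1:
        raise ValueError("samples must be in increasing time order")
    return (v2 - v1) / (t2 - t1)

# Two samples 60 s apart: the container consumed 30 CPU-seconds,
# i.e. an average of 0.5 cores over the interval.
rate = cpu_usage_rate(t1=0.0, v1=100.0, t2=60.0, v2=130.0)
print(rate)  # 0.5
```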

Source: StormForge Agent Helm chart readme:

helm show readme oci://
Last modified June 12, 2024