
MON-4036: Add TelemeterClientConfig to ClusterMonitoring API#2755

Merged
openshift-merge-bot[bot] merged 1 commit into openshift:master from danielmellado:mon_4034_telemeter_client_config
Mar 27, 2026

Conversation

@danielmellado
Contributor

Migrate the telemeter-client configmap settings to a CRD field within
ClusterMonitoringSpec in config/v1alpha1. The new TelemeterClientConfig
struct supports:

  • nodeSelector: pod scheduling to specific nodes
  • resources: compute resource requests and limits
  • tolerations: pod tolerations for scheduling
  • topologySpreadConstraints: pod distribution across topology domains

Signed-off-by: Daniel Mellado dmellado@fedoraproject.org
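Once merged, the new field would be set on the ClusterMonitoring resource along these lines. This is a minimal sketch: the singleton object name, label keys, and toleration values are illustrative assumptions, and only two of the four supported fields are shown.

```yaml
apiVersion: config.openshift.io/v1alpha1
kind: ClusterMonitoring
metadata:
  name: cluster
spec:
  telemeterClientConfig:
    # Pin telemeter-client pods to infra nodes (example selector).
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    # Tolerate the matching infra taint so the pods can actually land there.
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule
```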

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci-robot added the jira/valid-reference label (Indicates that this PR references a valid Jira ticket of any type.) on Mar 10, 2026
@openshift-ci-robot

openshift-ci-robot commented Mar 10, 2026

@danielmellado: This pull request references MON-4036 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target either version "4.22." or "openshift-4.22.", but it targets "openshift-5.0" instead.

Details

In response to this:

Migrate the telemeter-client configmap settings to a CRD field within
ClusterMonitoringSpec in config/v1alpha1. The new TelemeterClientConfig
struct supports:

  • nodeSelector: pod scheduling to specific nodes
  • resources: compute resource requests and limits
  • tolerations: pod tolerations for scheduling
  • topologySpreadConstraints: pod distribution across topology domains

Signed-off-by: Daniel Mellado dmellado@fedoraproject.org

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci
Contributor

openshift-ci bot commented Mar 10, 2026

Hello @danielmellado! Some important instructions when contributing to openshift/api:
API design plays an important part in the user experience of OpenShift and as such API PRs are subject to a high level of scrutiny to ensure they follow our best practices. If you haven't already done so, please review the OpenShift API Conventions and ensure that your proposed changes are compliant. Following these conventions will help expedite the api review process for your PR.

@coderabbitai

coderabbitai bot commented Mar 10, 2026

Important

Review skipped

Auto reviews are limited based on label configuration.

🚫 Review skipped — only excluded labels are configured. (1)
  • do-not-merge/work-in-progress

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Repository YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: b45bf1ab-d6bd-488a-9237-00841661a00e

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


A new optional TelemeterClientConfig field was added to ClusterMonitoringSpec, exposing nodeSelector, resources, tolerations, and topologySpreadConstraints with schema validations (min/max items/properties and list types). The v1alpha1 CRD was extended to include telemeterClientConfig. Multiple existing list validations were tightened by lowering several maxItems limits from 10 to 5 across various resource lists. Test cases were expanded to cover TelemeterClientConfig creation, validation failures (empty/duplicate/too-many items), and update-path behaviors.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Test Structure And Quality: ❓ Inconclusive. PR modifies YAML-based CRD validation test fixtures, not Go Ginkgo tests. The openshift/api repository uses a specialized integration test framework with a declarative YAML format, not traditional Ginkgo patterns. Resolution: clarify whether the custom check applies to YAML-based schema validation tests or exclusively to Go/Ginkgo tests, and provide specific guidelines aligned with the declarative YAML test format if validation is required.

✅ Passed checks (4)

  • Title check: ✅ Passed. The title accurately and concisely describes the main change: adding TelemeterClientConfig to the ClusterMonitoring API, which is the primary feature in this changeset.
  • Description check: ✅ Passed. The description provides relevant context about migrating telemeter-client configmap settings to a CRD field and enumerates the key features (nodeSelector, resources, tolerations, topologySpreadConstraints) supported by the new TelemeterClientConfig struct.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Stable And Deterministic Test Names: ✅ Passed. This PR contains no Ginkgo tests. Test names in the YAML-based CRD validation framework are stable, deterministic, and contain no dynamic values.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci bot added the size/XXL label (Denotes a PR that changes 1000+ lines, ignoring generated files.) on Mar 10, 2026
@qodo-code-review

Review Summary by Qodo

Add TelemeterClientConfig to ClusterMonitoring API for pod scheduling and resources

✨ Enhancement


Walkthroughs

Description
• Add TelemeterClientConfig struct to ClusterMonitoringSpec API
• Support pod scheduling via nodeSelector, tolerations, and topologySpreadConstraints
• Enable resource management with resources field for CPU/memory constraints
• Include comprehensive validation rules and test coverage for new configuration
Diagram
flowchart LR
  A["ClusterMonitoringSpec"] -->|adds field| B["TelemeterClientConfig"]
  B -->|supports| C["nodeSelector"]
  B -->|supports| D["resources"]
  B -->|supports| E["tolerations"]
  B -->|supports| F["topologySpreadConstraints"]
  C -->|controls| G["Pod Scheduling"]
  D -->|manages| H["Resource Limits"]
  E -->|handles| I["Taint Tolerance"]
  F -->|distributes| J["Topology Domains"]


File Changes

1. config/v1alpha1/types_cluster_monitoring.go ✨ Enhancement +80/-0

Define TelemeterClientConfig struct with pod scheduling fields

• Added TelemeterClientConfig field to ClusterMonitoringSpec struct with comprehensive
 documentation
• Defined new TelemeterClientConfig type with four configuration fields: nodeSelector,
 resources, tolerations, and topologySpreadConstraints
• Implemented validation constraints including minProperties: 1, minItems: 1, and maxItems: 10
 for list fields
• Added detailed comments explaining each field's purpose, defaults, and constraints



2. config/v1alpha1/zz_generated.deepcopy.go Miscellaneous +45/-0

Generate deepcopy methods for TelemeterClientConfig

• Added DeepCopyInto method for TelemeterClientConfig with proper handling of map and slice
 fields
• Added DeepCopy method for TelemeterClientConfig to support object cloning
• Implemented deep copy logic for nodeSelector map, resources slice, tolerations slice, and
 topologySpreadConstraints slice



3. config/v1alpha1/zz_generated.swagger_doc_generated.go 📝 Documentation +13/-0

Generate swagger documentation for TelemeterClientConfig

• Added swagger documentation map for TelemeterClientConfig struct
• Documented all four configuration fields with detailed descriptions of their purpose and
 constraints
• Included information about default values and validation rules in swagger documentation



4. openapi/generated_openapi/zz_generated.openapi.go Miscellaneous +104/-1

Generate OpenAPI schema for TelemeterClientConfig

• Added OpenAPI schema definition for TelemeterClientConfig with complete property specifications
• Defined schemas for nodeSelector (map of strings), resources (array of ContainerResource),
 tolerations (array of v1.Toleration), and topologySpreadConstraints (array of
 v1.TopologySpreadConstraint)
• Included validation constraints in OpenAPI schema (minProperties, maxItems, minItems, list types)
• Added TelemeterClientConfig to dependencies list in ClusterMonitoringSpec schema



5. config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml 🧪 Tests +253/-0

Add comprehensive test cases for TelemeterClientConfig

• Added 11 comprehensive test cases covering valid and invalid TelemeterClientConfig
 configurations
• Tests validate individual fields: resources, tolerations, topologySpreadConstraints, and
 nodeSelector
• Tests verify validation constraints: empty object rejection, duplicate detection, size limits, and
 resource limit ordering
• Tests ensure proper error messages for constraint violations (minProperties, maxItems, minItems)



6. config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml ⚙️ Configuration changes +348/-0

Add TelemeterClientConfig to CRD manifest

• Added telemeterClientConfig field to CRD spec with complete YAML schema definition
• Defined all four sub-fields with detailed descriptions, validation rules, and Kubernetes list type
 annotations
• Included comprehensive property schemas for ContainerResource items with validation rules for
 resource quantities
• Added constraints for list sizes (minItems: 1, maxItems: 10) and map sizes (minProperties: 1,
 maxProperties: 10)



7. config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml ⚙️ Configuration changes +348/-0

Add TelemeterClientConfig to feature-gated CRD manifest

• Added identical telemeterClientConfig field definition to feature-gated CRD manifest
• Mirrors the changes in the main CRD manifest to maintain consistency across generated manifests
• Includes all validation rules and property schemas for the new configuration field



8. payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml ⚙️ Configuration changes +348/-0

Add TelemeterClientConfig to payload CRD manifest

• Added telemeterClientConfig field definition to payload CRD manifest
• Includes complete schema with all four configuration fields and their validation rules
• Maintains consistency with other generated CRD manifests for deployment purposes




@qodo-code-review

qodo-code-review bot commented Mar 10, 2026

Code Review by Qodo

🐞 Bugs (1) 📘 Rule violations (1) 📎 Requirement gaps (0)



Action required

1. TelemeterClientConfig MinProperties undocumented 📘 Rule violation ✓ Correctness
Description
TelemeterClientConfig has +kubebuilder:validation:MinProperties=1, but the type comment does not
document that the object must not be empty (i.e., at least one field must be set). This violates the
requirement that validation markers be fully documented, potentially confusing API consumers about
valid/invalid configurations.
Code

config/v1alpha1/types_cluster_monitoring.go[R575-581]

+// TelemeterClientConfig provides configuration options for the Telemeter Client component
+// that runs in the `openshift-monitoring` namespace. The Telemeter Client collects selected
+// monitoring metrics and forwards them to Red Hat for telemetry purposes.
+// Use this configuration to control pod scheduling and resource allocation.
+// +kubebuilder:validation:MinProperties=1
+type TelemeterClientConfig struct {
+	// nodeSelector defines the nodes on which the Pods are scheduled.
Evidence
PR Compliance ID 6 requires that every validation marker be documented in the field/type comments.
The new TelemeterClientConfig type adds +kubebuilder:validation:MinProperties=1 but its comment
only describes purpose/scheduling/resources and does not state that at least one property must be
set (non-empty object requirement).

AGENTS.md
config/v1alpha1/types_cluster_monitoring.go[575-581]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`TelemeterClientConfig` includes the validation marker `+kubebuilder:validation:MinProperties=1`, but the type-level comment does not document the resulting constraint that the object must not be empty.

## Issue Context
Compliance requires that every validation marker applied to an API field/type be fully documented in comments so users understand constraints without inspecting generated schema.

## Fix Focus Areas
- config/v1alpha1/types_cluster_monitoring.go[575-581]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
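One way to satisfy the rule is to state the constraint directly in the godoc. The wording below is a suggested sketch, not the PR's actual fix:

```go
// TelemeterClientConfig provides configuration options for the Telemeter Client component
// that runs in the `openshift-monitoring` namespace.
// At least one field must be set; an empty object is rejected by the
// MinProperties=1 validation below.
// +kubebuilder:validation:MinProperties=1
type TelemeterClientConfig struct {
	// fields unchanged: nodeSelector, resources, tolerations, topologySpreadConstraints
}
```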



Remediation recommended

2. Topology constraints not enforced 🐞 Bug ✓ Correctness
Description
The CRD schema for spec.telemeterClientConfig.topologySpreadConstraints documents constraints (e.g.,
maxSkew “0 is not allowed” and specific allowed values for whenUnsatisfiable) but does not encode
them as OpenAPI validations, so the API can accept values that contradict the schema’s own
documented rules.
Code

config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml[R2178-2274]

+                        maxSkew:
+                          description: |-
+                            MaxSkew describes the degree to which pods may be unevenly distributed.
+                            When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference
+                            between the number of matching pods in the target topology and the global minimum.
+                            The global minimum is the minimum number of matching pods in an eligible domain
+                            or zero if the number of eligible domains is less than MinDomains.
+                            For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
+                            labelSelector spread as 2/2/1:
+                            In this case, the global minimum is 1.
+                            | zone1 | zone2 | zone3 |
+                            |  P P  |  P P  |   P   |
+                            - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2;
+                            scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2)
+                            violate MaxSkew(1).
+                            - if MaxSkew is 2, incoming pod can be scheduled onto any zone.
+                            When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence
+                            to topologies that satisfy it.
+                            It's a required field. Default value is 1 and 0 is not allowed.
+                          format: int32
+                          type: integer
+                        minDomains:
+                          description: |-
+                            MinDomains indicates a minimum number of eligible domains.
+                            When the number of eligible domains with matching topology keys is less than minDomains,
+                            Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed.
+                            And when the number of eligible domains with matching topology keys equals or greater than minDomains,
+                            this value has no effect on scheduling.
+                            As a result, when the number of eligible domains is less than minDomains,
+                            scheduler won't schedule more than maxSkew Pods to those domains.
+                            If value is nil, the constraint behaves as if MinDomains is equal to 1.
+                            Valid values are integers greater than 0.
+                            When value is not nil, WhenUnsatisfiable must be DoNotSchedule.
+
+                            For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same
+                            labelSelector spread as 2/2/2:
+                            | zone1 | zone2 | zone3 |
+                            |  P P  |  P P  |  P P  |
+                            The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0.
+                            In this situation, new pod with the same labelSelector cannot be scheduled,
+                            because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,
+                            it will violate MaxSkew.
+                          format: int32
+                          type: integer
+                        nodeAffinityPolicy:
+                          description: |-
+                            NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector
+                            when calculating pod topology spread skew. Options are:
+                            - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
+                            - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
+
+                            If this value is nil, the behavior is equivalent to the Honor policy.
+                          type: string
+                        nodeTaintsPolicy:
+                          description: |-
+                            NodeTaintsPolicy indicates how we will treat node taints when calculating
+                            pod topology spread skew. Options are:
+                            - Honor: nodes without taints, along with tainted nodes for which the incoming pod
+                            has a toleration, are included.
+                            - Ignore: node taints are ignored. All nodes are included.
+
+                            If this value is nil, the behavior is equivalent to the Ignore policy.
+                          type: string
+                        topologyKey:
+                          description: |-
+                            TopologyKey is the key of node labels. Nodes that have a label with this key
+                            and identical values are considered to be in the same topology.
+                            We consider each <key, value> as a "bucket", and try to put balanced number
+                            of pods into each bucket.
+                            We define a domain as a particular instance of a topology.
+                            Also, we define an eligible domain as a domain whose nodes meet the requirements of
+                            nodeAffinityPolicy and nodeTaintsPolicy.
+                            e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology.
+                            And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology.
+                            It's a required field.
+                          type: string
+                        whenUnsatisfiable:
+                          description: |-
+                            WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
+                            the spread constraint.
+                            - DoNotSchedule (default) tells the scheduler not to schedule it.
+                            - ScheduleAnyway tells the scheduler to schedule the pod in any location,
+                              but giving higher precedence to topologies that would help reduce the
+                              skew.
+                            A constraint is considered "Unsatisfiable" for an incoming pod
+                            if and only if every possible node assignment for that pod would violate
+                            "MaxSkew" on some topology.
+                            For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
+                            labelSelector spread as 3/1/1:
+                            | zone1 | zone2 | zone3 |
+                            | P P P |   P   |   P   |
+                            If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled
+                            to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies
+                            MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler
+                            won't make it *more* imbalanced.
+                            It's a required field.
+                          type: string
Evidence
In the generated CRD manifest, maxSkew is only typed as an integer (no minimum constraint) even
though its description states “0 is not allowed”, and whenUnsatisfiable is only typed as a string
(no enum constraint) despite its description listing specific allowed values. The API type comment
states this field maps directly to the Pod spec field, so having the CRD accept values outside the
documented constraints undermines the API contract and shifts validation burden to later
stages/consumers.

config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml[2178-2199]
config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml[2254-2274]
config/v1alpha1/types_cluster_monitoring.go[628-646]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The generated CRD schema for `spec.telemeterClientConfig.topologySpreadConstraints` documents constraints (e.g., `maxSkew` “0 is not allowed” and specific allowed values for `whenUnsatisfiable`) but does not enforce them via OpenAPI/CEL validations.

### Issue Context
This means the API server may accept values that contradict the schema’s own documented rules, leaving validation to later consumers.

### Fix Focus Areas
- config/v1alpha1/types_cluster_monitoring.go[628-646]
- config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml[624-832]
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml[2095-2287]

### Implementation notes
- Add `+kubebuilder:validation:XValidation` rules on `TopologySpreadConstraints` to enforce:
 - `maxSkew > 0` for every element
 - `whenUnsatisfiable` is one of `DoNotSchedule` / `ScheduleAnyway`
- Add a CRD schema test case that attempts to create a resource with `maxSkew: 0` and/or an invalid `whenUnsatisfiable` and asserts the expected admission error substring.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
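The suggested XValidation rules could be attached to the field roughly as follows. This is a sketch only: the exact CEL expressions, messages, and whether they fit within the CEL cost budget discussed later in this thread would all need verification.

```go
// topologySpreadConstraints defines the pod's topology spread constraints.
// maxSkew must be greater than 0, and whenUnsatisfiable, when set, must be
// either DoNotSchedule or ScheduleAnyway.
// +kubebuilder:validation:XValidation:rule="self.all(c, !has(c.maxSkew) || c.maxSkew > 0)",message="maxSkew must be greater than 0"
// +kubebuilder:validation:XValidation:rule="self.all(c, !has(c.whenUnsatisfiable) || c.whenUnsatisfiable in ['DoNotSchedule', 'ScheduleAnyway'])",message="whenUnsatisfiable must be DoNotSchedule or ScheduleAnyway"
// +optional
TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
```

The `has()` guards keep the rules from firing on unset optional fields; whether they are needed here depends on how the required fields are declared in the schema.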



Comment on lines +575 to +581
// TelemeterClientConfig provides configuration options for the Telemeter Client component
// that runs in the `openshift-monitoring` namespace. The Telemeter Client collects selected
// monitoring metrics and forwards them to Red Hat for telemetry purposes.
// Use this configuration to control pod scheduling and resource allocation.
// +kubebuilder:validation:MinProperties=1
type TelemeterClientConfig struct {
// nodeSelector defines the nodes on which the Pods are scheduled.


Action required

1. TelemeterClientConfig MinProperties undocumented 📘 Rule violation ✓ Correctness

TelemeterClientConfig has +kubebuilder:validation:MinProperties=1, but the type comment does not
document that the object must not be empty (i.e., at least one field must be set). This violates the
requirement that validation markers be fully documented, potentially confusing API consumers about
valid/invalid configurations.
Agent Prompt
## Issue description
`TelemeterClientConfig` includes the validation marker `+kubebuilder:validation:MinProperties=1`, but the type-level comment does not document the resulting constraint that the object must not be empty.

## Issue Context
Compliance requires that every validation marker applied to an API field/type be fully documented in comments so users understand constraints without inspecting generated schema.

## Fix Focus Areas
- config/v1alpha1/types_cluster_monitoring.go[575-581]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml (1)

580-832: Add failure cases for nodeSelector and tolerations bounds.

The new schema also enforces min/maxProperties on nodeSelector and min/maxItems on tolerations, but this block only exercises negative paths for resources and topologySpreadConstraints. A couple of explicit failures here would make regressions in those two branches much harder to miss.

Example additions
+    - name: Should reject TelemeterClientConfig with empty nodeSelector
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          telemeterClientConfig:
+            nodeSelector: {}
+      expectedError: 'spec.telemeterClientConfig.nodeSelector: Invalid value: 0: spec.telemeterClientConfig.nodeSelector in body should have at least 1 properties'
+
+    - name: Should reject TelemeterClientConfig with empty tolerations array
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          telemeterClientConfig:
+            tolerations: []
+      expectedError: 'spec.telemeterClientConfig.tolerations: Invalid value: 0: spec.telemeterClientConfig.tolerations in body should have at least 1 items'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`
around lines 580 - 832, Add negative test cases exercising nodeSelector and
tolerations bounds: include tests named e.g. "Should reject
TelemeterClientConfig with empty nodeSelector" and "Should reject
TelemeterClientConfig with too many nodeSelector properties" that set
spec.telemeterClientConfig.nodeSelector to {} and to an object exceeding the
schema maxProperties respectively, and include tests "Should reject
TelemeterClientConfig with empty tolerations array" (set tolerations: []) and
"Should reject TelemeterClientConfig with too many tolerations" (provide more
than the allowed maxItems). Ensure each test uses the same ClusterMonitoring
YAML structure as other cases and sets expectedError strings matching the schema
validation messages for min/maxProperties and min/maxItems on nodeSelector and
tolerations so failures are asserted.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 897b5361-c81f-468e-bb44-621465732eb1

📥 Commits

Reviewing files that changed from the base of the PR and between 7127010 and 40f8860.

⛔ Files ignored due to path filters (5)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

@openshift-merge-robot added the needs-rebase label (Indicates a PR cannot be merged because it has merge conflicts with HEAD.) on Mar 11, 2026
Member

@saschagrunert saschagrunert left a comment (marked as outdated)

The new TelemeterClientConfig struct follows the established pattern from other component configs in this file. Markers and godoc are mostly well done.

Main issue: the PR removes OpenShiftStateMetricsConfig (merged to master one day prior via MON-4033, 4cf5fe1) and replaces it with TelemeterClientConfig. After rebasing on current master, both fields need to coexist.

Also needs additional test coverage for nodeSelector and tolerations boundary validation.

@danielmellado
Contributor Author

While rebasing, the CRD hit the Kubernetes CEL validation cost budget (StaticEstimatedCRDCostLimit). The quantity() CEL function has a high fixed estimated cost per invocation, and with 6 structs in ClusterMonitoringSpec each embedding []ContainerResource, the cumulative cost exceeded the limit by ~13%.

Fixed by reducing MaxItems on all resources []ContainerResource fields from 10 to 5, which is sufficient for practical use (cpu, memory, and hugepages variants). I also added an explanatory comment on the ContainerResource type.
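For illustration, the tightened bound surfaces in the generated CRD schema roughly like this (a hypothetical fragment; the real schema lives in the zz_generated CRD manifests):

```yaml
resources:
  type: array
  minItems: 1
  maxItems: 5  # reduced from 10 to stay under the CEL validation cost budget
  items:
    type: object
    required:
      - name
    properties:
      name:
        type: string
```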

@danielmellado danielmellado force-pushed the mon_4034_telemeter_client_config branch from 40f8860 to 2f6d3d1 Compare March 12, 2026 14:19
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 12, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml (1)

37-60: Add the missing prometheusOperatorConfig.resources max-items regression test.

These cases cover the new MaxItems=5 boundary for the other touched resources lists, but spec.prometheusOperatorConfig.resources was tightened in the API too and is still untested here. One rejection case for 6 items would keep all changed validation paths covered.

Proposed test case
+    - name: Should reject PrometheusOperatorConfig with more than 5 resource items
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          prometheusOperatorConfig:
+            resources:
+              - name: "cpu"
+                request: "100m"
+              - name: "memory"
+                request: "64Mi"
+              - name: "hugepages-2Mi"
+                request: "32Mi"
+              - name: "hugepages-1Gi"
+                request: "1Gi"
+              - name: "ephemeral-storage"
+                request: "1Gi"
+              - name: "nvidia.com/gpu"
+                request: "1"
+      expectedError: 'spec.prometheusOperatorConfig.resources: Too many: 6: must have at most 5 items'

Also applies to: 312-333, 462-481, 664-683, 933-952

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`
around lines 37 - 60, Add a regression test that mirrors the alertmanagerConfig
case but targets spec.prometheusOperatorConfig.resources to assert MaxItems=5
enforcement: create a new test entry named like "Should reject
PrometheusOperatorConfig with more than 5 items" with initial YAML containing
spec.prometheusOperatorConfig.resources listing 6 distinct resource objects
(e.g., cpu, memory, hugepages-2Mi, hugepages-1Gi, ephemeral-storage,
nvidia.com/gpu) and set expectedError to
'spec.prometheusOperatorConfig.resources: Too many: 6: must have at most 5
items' so the tightened validation path for prometheusOperatorConfig.resources
is covered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@config/v1alpha1/types_cluster_monitoring.go`:
- Around line 211-218: The MaxItems for several []ContainerResource "resources"
lists was tightened from 10 to 5, which is schema-breaking; revert the
+kubebuilder:validation:MaxItems annotation (or change it back to 10) for the
"resources" list annotations so existing ClusterMonitoring CRs with 6–10 entries
continue to validate, specifically update the +kubebuilder:validation:MaxItems
on the "resources" list annotations associated with the ContainerResource lists
(the "resources" field) in types_cluster_monitoring.go so they match the
original limit (10) or remove the constraint until a migration is provided.

---

Nitpick comments:
In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`:
- Around line 37-60: Add a regression test that mirrors the alertmanagerConfig
case but targets spec.prometheusOperatorConfig.resources to assert MaxItems=5
enforcement: create a new test entry named like "Should reject
PrometheusOperatorConfig with more than 5 items" with initial YAML containing
spec.prometheusOperatorConfig.resources listing 6 distinct resource objects
(e.g., cpu, memory, hugepages-2Mi, hugepages-1Gi, ephemeral-storage,
nvidia.com/gpu) and set expectedError to
'spec.prometheusOperatorConfig.resources: Too many: 6: must have at most 5
items' so the tightened validation path for prometheusOperatorConfig.resources
is covered.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 0f07ec4c-1986-4bc9-917b-36a27aee9952

📥 Commits

Reviewing files that changed from the base of the PR and between 40f8860 and 2f6d3d1.

⛔ Files ignored due to path filters (5)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

@danielmellado danielmellado force-pushed the mon_4034_telemeter_client_config branch from 2f6d3d1 to 371ee72 Compare March 12, 2026 15:16
@danielmellado
Contributor Author

The failure on verify-crdify is expected, given the CEL fix:

(v1alpha1) ^.spec.prometheusOperatorAdmissionWebhookConfig.resources - maxItems:
    maximum decreased: 10 -> 5
+ echo This verifier checks all files that have changed. In some cases you may have changed or renamed a file that already contained api violations, but you are not introducing a new violation. In such cases it is appropriate to /override the failing CI job.

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 13, 2026
@danielmellado
Contributor Author

Thanks, I'm committing this, rebasing and I'll push the updated version ;)

@danielmellado danielmellado force-pushed the mon_4034_telemeter_client_config branch from 49b7786 to 4fb7908 Compare March 13, 2026 10:45
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 13, 2026

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml (1)

3237-3247: Consider de-duplicating the Prometheus resources size description.

Line 3237 already states the 1..5 constraint, and Lines 3245-3247 repeat the same rule in different wording. Keeping one canonical phrasing would reduce noise.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`
around lines 3237 - 3247, The Prometheus resources list description repeats the
1..5 constraint twice; edit the YAML comment/description for the "resources"
block in the Clustermonitorings CRD to remove the duplicated phrasing (remove
the second occurrence "Maximum length for this list is 5. Minimum length for
this list is 1.") so only a single canonical sentence about the minimum of 1 and
maximum of 5 resource entries remains; ensure the example default entries (cpu,
memory) and the note about unique resource names are preserved.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`:
- Around line 3237-3247: The Prometheus resources list description repeats the
1..5 constraint twice; edit the YAML comment/description for the "resources"
block in the Clustermonitorings CRD to remove the duplicated phrasing (remove
the second occurrence "Maximum length for this list is 5. Minimum length for
this list is 1.") so only a single canonical sentence about the minimum of 1 and
maximum of 5 resource entries remains; ensure the example default entries (cpu,
memory) and the note about unique resource names are preserved.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 8229a1e7-3d19-467b-8bae-bced8348a26f

📥 Commits

Reviewing files that changed from the base of the PR and between 49b7786 and 4fb7908.

⛔ Files ignored due to path filters (5)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 17, 2026
@danielmellado danielmellado force-pushed the mon_4034_telemeter_client_config branch from 4fb7908 to 15632a0 Compare March 17, 2026 12:47
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 17, 2026
@danielmellado danielmellado force-pushed the mon_4034_telemeter_client_config branch from 15632a0 to 007234d Compare March 24, 2026 10:11
@danielmellado danielmellado force-pushed the mon_4034_telemeter_client_config branch from 007234d to f309966 Compare March 24, 2026 12:14
Member

@saschagrunert saschagrunert left a comment


The new TelemeterClientConfig struct follows established patterns well. Markers, godoc, and test coverage are mostly in good shape. Three items remain.

Migrate the telemeter-client configmap settings to a CRD field within
ClusterMonitoringSpec in config/v1alpha1. The new TelemeterClientConfig
struct supports:

- nodeSelector: pod scheduling to specific nodes
- resources: compute resource requests and limits
- tolerations: pod tolerations for scheduling
- topologySpreadConstraints: pod distribution across topology domains

Also reduces MaxItems on all []ContainerResource fields from 10 to 5
to stay within the Kubernetes CRD CEL validation cost budget
(StaticEstimatedCRDCostLimit).

Signed-off-by: Daniel Mellado <dmellado@fedoraproject.org>
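As a sketch of what the new field enables, a ClusterMonitoring resource using telemeterClientConfig might look like this (field values are illustrative; only the four field names follow from the list above):

```yaml
apiVersion: config.openshift.io/v1alpha1
kind: ClusterMonitoring
metadata:
  name: cluster
spec:
  telemeterClientConfig:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    resources:
      - name: cpu
        request: 100m
      - name: memory
        request: 64Mi
    tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: telemeter-client
```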
@danielmellado danielmellado force-pushed the mon_4034_telemeter_client_config branch from f309966 to f3693b4 Compare March 25, 2026 09:38
Member

@saschagrunert saschagrunert left a comment


/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 25, 2026
@openshift-ci-robot

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aws-ovn
/test e2e-aws-ovn-hypershift
/test e2e-aws-ovn-hypershift-conformance
/test e2e-aws-ovn-techpreview
/test e2e-aws-serial-1of2
/test e2e-aws-serial-2of2
/test e2e-aws-serial-techpreview-1of2
/test e2e-aws-serial-techpreview-2of2
/test e2e-azure
/test e2e-gcp
/test e2e-upgrade
/test e2e-upgrade-out-of-change
/test minor-e2e-upgrade-minor

@danielmellado
Contributor Author

/retest-required

1 similar comment
@danielmellado
Contributor Author

/retest-required

Contributor

@everettraven everettraven left a comment


/lgtm

@openshift-ci
Contributor

openshift-ci bot commented Mar 27, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: everettraven, saschagrunert

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 27, 2026
@everettraven
Contributor

This is an alpha API, with a reasonable justification for changing the max items on the container resources fields from 10 to 5.

/override ci/prow/verify-crdify

@openshift-ci
Contributor

openshift-ci bot commented Mar 27, 2026

@everettraven: Overrode contexts on behalf of everettraven: ci/prow/verify-crdify

Details

In response to this:

Alpha API with a reasonable reason for changing from 10 -> 5 max items on container resources fields.

/override ci/prow/verify-crdify

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@danielmellado
Contributor Author

/verified by ci

@openshift-ci-robot openshift-ci-robot added the verified Signifies that the PR passed pre-merge verification criteria label Mar 27, 2026
@openshift-ci-robot

@danielmellado: This PR has been marked as verified by ci.

Details

In response to this:

/verified by ci

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci
Contributor

openshift-ci bot commented Mar 27, 2026

@danielmellado: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot bot merged commit 5851208 into openshift:master Mar 27, 2026
28 checks passed

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lgtm Indicates that a PR is ready to be merged. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. verified Signifies that the PR passed pre-merge verification criteria

Projects

None yet

Development

Successfully merging this pull request may close these issues.

6 participants