
Let's assume I have a node with the labels myKey1: 2, myKey2: 5, myKey3: 3, myKey4: 6. I now want to check whether one of those labels has a value greater than 4 and, if so, schedule my workload on that node. For that I use the following nodeAffinity rule:

  spec:
    containers:
    - name: wl1
      image: myImage:latest
      imagePullPolicy: IfNotPresent
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: myKey1
              operator: Gt
              values:
              - "4"
          - matchExpressions:
            - key: myKey2
              operator: Gt
              values:
              - "4"
          - matchExpressions:
            - key: myKey3
              operator: Gt
              values:
              - "4"
          - matchExpressions:
            - key: myKey4
              operator: Gt
              values:
              - "4"

I would instead love to use something shorter that addresses a whole bunch of similar labels at once, e.g.

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: myKey*
              operator: Gt
              values:
              - "4"

so basically using a key wildcard, with the individual checks combined via a logical OR. Is this possible, or is there another way to check the values of multiple similar labels?

  • I would add an extra label to all nodes which should match. I think that would be the simplest solution. Would that also work for you? kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node
    – Matthias M
    Commented Feb 25, 2022 at 21:41
  • @MatthiasM unfortunately, this solution does not work in my case. The labels are given, and their values are set depending on features of different HW components of the same HW entity on the node. If one of these components matches my Pod's container requirement, the Pod should be scheduled there.
    – Wolfson
    Commented Feb 28, 2022 at 12:17

1 Answer


As Matthias M wrote in the comment:

I would add an extra label to all nodes which should match. I think that would be the simplest solution. Would that also work for you? kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node

In your situation, it would actually be easier to just add another label and check only that one condition. Alternatively, you can try to use set-based requirements:

Newer resources, such as Job, Deployment, ReplicaSet, and DaemonSet, support set-based requirements as well.

selector:
  matchLabels:
    component: redis
  matchExpressions:
    - {key: tier, operator: In, values: [cache]}
    - {key: environment, operator: NotIn, values: [dev]}

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". matchExpressions is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both matchLabels and matchExpressions are ANDed together -- they must all be satisfied in order to match.

For more about this, see also this question.
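The "extra label" approach can also be automated: a small script run by a cron job or node controller could compute an aggregate label from each node's existing myKey* labels, so the pod's affinity only needs a single matchExpressions entry (e.g. key: myAnyKeyGt4, operator: In, values: ["true"] — the label name here is a made-up example, not anything Kubernetes defines). A minimal sketch of the decision logic, with the actual node patching left out:

```python
# Sketch of the "extra label" approach (hypothetical label names):
# compute an aggregate label value from a node's existing myKey* labels,
# so the pod's affinity only has to check a single key.
import fnmatch


def aggregate_label(labels, pattern="myKey*", threshold=4):
    """Return "true" if any label key matching `pattern` has an integer
    value greater than `threshold`, mirroring the Gt operator."""
    for key, value in labels.items():
        if fnmatch.fnmatch(key, pattern):
            try:
                if int(value) > threshold:
                    return "true"
            except ValueError:
                continue  # Gt only compares integer-parseable values
    return "false"


# Example: the node from the question
node_labels = {"myKey1": "2", "myKey2": "5", "myKey3": "3", "myKey4": "6"}
print(aggregate_label(node_labels))  # -> "true" (myKey2 and myKey4 exceed 4)
```

The resulting value could then be applied with something like kubectl label --overwrite node <node> myAnyKeyGt4=true; the label would need to be refreshed whenever the underlying myKey* labels change.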

  • The set-based requirements don't seem to work, since I can't consolidate my different labels into one, and I also need to ensure that my Pod's requirement is greater than or equal to the keys' values.
    – Wolfson
    Commented Feb 28, 2022 at 12:27
  • Look at this answer to see how it should be created. Commented Mar 1, 2022 at 7:46
