This section describes how to configure memory resources for the Firewall container in Kubernetes, covering the distinct requirements of the control plane and the data plane.
The memory allocation for the control plane is defined using Kubernetes pod specifications. It involves setting both the memory request and limit to the same value, a requirement for the pod to achieve Guaranteed Quality of Service (QoS).
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: netshield
      resources:
        requests:
          memory: 900Mi
        limits:
          memory: 900Mi
As shown above, the memory request and limit are set to the same value. This alignment is what qualifies the pod for the Guaranteed QoS class and ensures reliable memory availability for control plane operations.
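To confirm the class that was actually assigned, inspect the pod's status (for example with kubectl get pod netshield -o yaml, assuming the pod is named netshield). An illustrative excerpt of the relevant status field:

# Illustrative excerpt of the pod status reported by the API server;
# the pod name and the other status fields depend on the cluster.
status:
  qosClass: Guaranteed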
The data plane configuration utilizes hugepages, which are larger memory pages used to improve handling of network traffic. The setup involves specifying hugepages in the resources section, based on the expected memory requirements of the data plane.
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: netshield
      env:
        - name: HUGEPAGES_2M
          valueFrom:
            resourceFieldRef:
              containerName: netshield
              resource: limits.hugepages-2Mi
              divisor: 1Mi
        - name: HUGEPAGES_1G
          valueFrom:
            resourceFieldRef:
              containerName: netshield
              resource: limits.hugepages-1Gi
              divisor: 1Mi
        #- name: DP_MEMORY_MB
        #  value: '300'
      resources:
        requests:
          hugepages-2Mi: 800Mi
          hugepages-1Gi: 1Gi
        limits:
          hugepages-2Mi: 800Mi
          hugepages-1Gi: 1Gi
      volumeMounts:
        #- mountPath: /hugepages
        #  name: hugepages
        - mountPath: /hugepages_2MB
          name: hugepages-2mb
        - mountPath: /hugepages_1GB
          name: hugepages-1gb
  volumes:
    #- name: hugepages
    #  emptyDir:
    #    medium: HugePages
    - name: hugepages-2mb
      emptyDir:
        medium: HugePages-2Mi
    - name: hugepages-1gb
      emptyDir:
        medium: HugePages-1Gi
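For reference, the downward API resolves each resourceFieldRef to a plain number by dividing the limit by the divisor, so with the limits above the container would see the following values (a sketch; both divisions are exact here):

# How the HUGEPAGES_* values resolve with divisor: 1Mi
#   HUGEPAGES_2M: 800Mi / 1Mi -> "800"
#   HUGEPAGES_1G: 1Gi / 1Mi   -> "1024"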
Request hugepages totaling the amount of memory that should be available to the data plane. Both the request and the limit need to be set to the same value for the pod to qualify for Guaranteed QoS. If pages of both sizes are used, only less than 1 GiB of memory from 2 MiB hugepages is supported. If hugepages of just one size are used, the single "hugepages" mount/volume can be used (a sketch of that variant follows below). If hugepages of both sizes are used, both size-specific mounts/volumes (hugepages-2mb and hugepages-1gb) must be used.
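When only one hugepage size is in use, the generic mount/volume shown commented out in the example above is sufficient. A minimal sketch of that variant, assuming 2 MiB pages only and the same 800 MiB total (other fields as in the full example above):

# Excerpt of the pod spec; single hugepage size, generic hugepages volume.
  containers:
    - name: netshield
      resources:
        requests:
          hugepages-2Mi: 800Mi
        limits:
          hugepages-2Mi: 800Mi
      volumeMounts:
        - mountPath: /hugepages
          name: hugepages
  volumes:
    - name: hugepages
      emptyDir:
        medium: HugePages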
If the downward API for hugepages is enabled in the cluster, add both HUGEPAGES_2M and HUGEPAGES_1G to the environment, making sure that "containerName" matches the container's name. If they are added without proper downward API support, an error like the following will be triggered:
.valueFrom.resourceFieldRef.resource: Unsupported value: "limits.hugepages-2Mi" (or "..-1Gi")
The preferred solution is to enable the downward API. As a workaround, remove the HUGEPAGES_* environment variables and manually set the DP_MEMORY_MB environment variable, keeping it up to date and in sync with the hugepages requests/limits.
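A minimal sketch of that workaround, assuming the limits from the full example above (800 MiB of 2 MiB pages plus 1 GiB of 1 GiB pages, i.e. 1824 MiB in total); the exact value DP_MEMORY_MB should carry must follow the product's definition of that variable:

      env:
        # Workaround: no HUGEPAGES_* variables; DP_MEMORY_MB is set by hand
        # and must be updated whenever the hugepages requests/limits change.
        - name: DP_MEMORY_MB
          value: "1824"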