Firewall pods can be deployed using various Kubernetes object types. For initial tests or simple scenarios, deploying a Pod directly may be sufficient; in other cases a StatefulSet is a better fit. This section focuses on the pod-level details that are needed either way.
This section contains a complete example of a pod specification that can be used to deploy the firewall. It is based on the "netshield-pod.yaml" file distributed with the software, but with some of the more exotic settings removed for clarity. The example is adequate for an initial test, but since it uses a hostPath volume for persistent storage it is not suitable for more production-like deployments. For those scenarios, something like "netshield-ls-statefulset.yaml" is more appropriate; the main differences are the storage configuration and that a StatefulSet is used instead of a single pod, as sketched below.
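For orientation only, here is a minimal sketch of what such a StatefulSet variant could look like. The names, storage class, and claim size are assumptions chosen for the illustration, and the container specification is abbreviated; the actual "netshield-ls-statefulset.yaml" in the distribution archive may differ in its details.

# Hypothetical sketch of a StatefulSet wrapping the same container
# specification, with a volumeClaimTemplate replacing the hostPath volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: netshield
spec:
  serviceName: netshield            # headless Service name, assumed
  replicas: 1
  selector:
    matchLabels:
      app: netshield
  template:
    metadata:
      labels:
        app: netshield
      annotations:
        k8s.v1.cni.cncf.io/networks: lan-network@lan, wan-network@wan
    spec:
      containers:
        - name: netshield
          image: '<some-registry-url>/cos-stream:4.00.00.00'
          # ... same environment, resources, securityContext and mounts as in
          # the pod example later in this section, except that the "storage"
          # volumeMount is backed by the claim template below.
          volumeMounts:
            - mountPath: /mnt/storage
              name: storage
  volumeClaimTemplates:
    # Persistent storage per replica instead of a node-local hostPath.
    - metadata:
        name: storage
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-storage   # assumed storage class
        resources:
          requests:
            storage: 1Gi                  # assumed size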
The details that are specific to the firewall product are largely the same in both cases, so the simpler single-pod specification is a good starting point that avoids distraction from aspects that will vary with the deployment environment anyway. Later sections revisit parts of this example under various themes, such as CPU, memory, or networking, focusing on the parts that affect each theme.
For sample .yaml configuration files, see the examples included in: clavister-cos-stream-4.00.01.34-cnf-x64-generic-deploy.tar.gz
# Example pod definition to deploy a netshield pod.
apiVersion: v1
kind: Pod
metadata:
  name: netshield
  annotations:
    # Optional request to multus, to use a specific network configuration
    # as the default pod network
    # v1.multus-cni.io/default-network:
    #   <name of the NetworkAttachmentDefinition to use>
    # Optional request to multus, to request additional network interfaces
    k8s.v1.cni.cncf.io/networks: lan-network@lan, wan-network@wan
spec:
  containers:
    - name: netshield
      image: '<some-registry-url>/cos-stream:4.00.00.00'
      env:
        # Optional CPU list specification for control plane and data plane.
        # Default is to detect CPUs via the cgroup cpuset controller.
        # These need to be used if dedicated CPU resources (pod in guaranteed
        # QoS class and static CPU manager policy) are not used.
        #- name: CP_CPU_LIST
        #  value: '0'   # rangelist format supported, for instance: '4,6,9-12'
        #- name: DP_CPU_LIST
        #  value: '1'   # rangelist format supported, for instance: '4,6,9-12'
        # NETS will expose the 'k8s.v1.cni.cncf.io/networks' annotation in the
        # container's environment. No user configuration required.
        - name: NETS
          valueFrom:
            fieldRef:
              fieldPath: metadata.annotations['k8s.v1.cni.cncf.io/networks']
        # CPU_REQ and CPU_LIMIT will expose the amount of CPU resources
        # requested to the container. No user configuration required
        # other than to make sure that the "containerName" matches the
        # container's name.
        - name: CPU_REQ
          valueFrom:
            resourceFieldRef:
              containerName: netshield
              resource: requests.cpu
              divisor: 1m
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: netshield
              resource: limits.cpu
              divisor: 1m
        # The HUGEPAGES* variables below require that the DownwardAPIHugePages
        # feature gate is enabled. When using downward API for hugepages and
        # 2MB hugepages this one should be included:
        - name: HUGEPAGES_2M
          valueFrom:
            resourceFieldRef:
              containerName: netshield
              resource: limits.hugepages-2Mi
              divisor: 1Mi
        # When using downward API for hugepages and 1GB hugepages this one
        # should be included:
        - name: HUGEPAGES_1G
          valueFrom:
            resourceFieldRef:
              containerName: netshield
              resource: limits.hugepages-1Gi
              divisor: 1Mi
        # When not using downward API for hugepages this one needs to be set
        # and kept in sync with the amount of hugepages requested for the
        # container.
        #- name: DP_MEMORY_MB
        #  value: '300'
      resources:
        # Requests and limits should usually be identical to qualify the
        # pod for the "Guaranteed" quality of service class.
        requests:
          cpu: '10'
          hugepages-2Mi: 300Mi
          #hugepages-1Gi: 1Gi
          memory: 900Mi
          # Allocate resources/devices needed by the extra networks.
          # (These can be injected automatically by the "Network Resources
          # Injector",
          # https://github.com/k8snetworkplumbingwg/network-resources-injector.)
          # The "resource prefix" and the "resource name" must match the
          # configuration of the device plugin used, for instance, the "SR-IOV
          # Network Device Plugin for Kubernetes"
          # (https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin).
          example.com/lan_device: '1'
          example.com/wan_device: '1'
        limits:
          cpu: '10'
          hugepages-2Mi: 300Mi
          #hugepages-1Gi: 1Gi
          memory: 900Mi
          example.com/lan_device: '1'
          example.com/wan_device: '1'
      securityContext:
        privileged: true
      volumeMounts:
        # If using just one size of hugepages then this generic
        # one can be used, otherwise use one of each type.
        - mountPath: /hugepages
          name: hugepages
        #- mountPath: /hugepages_2MB
        #  name: hugepages-2mb
        #- mountPath: /hugepages_1GB
        #  name: hugepages-1gb
        - mountPath: /etc/podinfo
          name: podinfo
        # The system expects persistent storage to be
        # mounted/available at /mnt/storage.
        - mountPath: /mnt/storage
          name: storage
  volumes:
    # If using just one size of hugepages then this generic one can
    # be used, otherwise use one of each type.
    - name: hugepages
      emptyDir:
        medium: HugePages
    #- name: hugepages-2mb
    #  emptyDir:
    #    medium: HugePages-2Mi
    #- name: hugepages-1gb
    #  emptyDir:
    #    medium: HugePages-1Gi
    - name: podinfo
      downwardAPI:
        items:
          - path: "network-status"
            fieldRef:
              fieldPath: metadata.annotations['k8s.v1.cni.cncf.io/network-status']
    - name: storage
      hostPath:
        path: /opt/netshield/storage/pod1
        type: Directory
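The additional network names referenced in the k8s.v1.cni.cncf.io/networks annotation (lan-network and wan-network) must exist as NetworkAttachmentDefinition objects in the pod's namespace. Their contents depend entirely on the CNI plugins used in the cluster; the following is only a sketch assuming the SR-IOV CNI together with the SR-IOV network device plugin, where the resourceName annotation ties the attachment to the example.com/lan_device resource requested by the pod above.

# Hypothetical NetworkAttachmentDefinition for the "lan" network, assuming
# the SR-IOV CNI and the SR-IOV network device plugin are in use.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: lan-network
  annotations:
    # Maps this network to the device plugin resource requested by the pod.
    k8s.v1.cni.cncf.io/resourceName: example.com/lan_device
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "name": "lan-network",
    "ipam": {}
  }'

A corresponding definition would be needed for the wan network. Note also that when the hostPath volume from the pod example is used, the directory /opt/netshield/storage/pod1 must already exist on the node, since the volume is declared with type: Directory.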