diff --git a/.github/ISSUE_TEMPLATE/2-new-workload.md b/.github/ISSUE_TEMPLATE/2-new-workload.md
index bed5a0705..d32e37a61 100644
--- a/.github/ISSUE_TEMPLATE/2-new-workload.md
+++ b/.github/ISSUE_TEMPLATE/2-new-workload.md
@@ -15,7 +15,7 @@ assignees: ""
- tbd
**Test Category Name**
-- ADD CATEGORY_NAME (e.g. State, Security, etc from [README](https://github.com/cncf/cnf-testsuite/blob/main/README.md#cnf-testsuite))
+- ADD CATEGORY_NAME (e.g. State, Security, etc. from [README](../../README.md#cnf-testsuite))
**Type of test (static or runtime)**
- tbd
@@ -23,9 +23,8 @@ assignees: ""
---
### Documentation tasks:
-- [ ] Update [installation instructions](https://github.com/cncf/cnf-testsuite/blob/main/install.md) if needed
-- [ ] Update [Test Categories md](https://github.com/cncf/cnf-testsuite/blob/main/TEST-CATEGORIES.md) if needed
-- [ ] Update [USAGE md](https://github.com/cncf/cnf-testsuite/blob/main/USAGE.md) if needed
+- [ ] Update [installation instructions](../../INSTALL.md) if needed
+- [ ] Update [TEST_DOCUMENTATION md](../../docs/TEST_DOCUMENTATION.md) if needed
- [ ] How to run
- [ ] Description and details
- [ ] What the best practice is
diff --git a/.github/ISSUE_TEMPLATE/3-new-platform.md b/.github/ISSUE_TEMPLATE/3-new-platform.md
index 85707319a..cfaa90e2d 100644
--- a/.github/ISSUE_TEMPLATE/3-new-platform.md
+++ b/.github/ISSUE_TEMPLATE/3-new-platform.md
@@ -48,10 +48,9 @@ assignees: ""
**Documentation tasks:**
-- [ ] Update [Test Categories md](https://github.com/cncf/cnf-testsuite/blob/main/TEST-CATEGORIES.md) if needed
- [ ] Update [Pseudo Code md](https://github.com/cncf/cnf-testsuite/blob/main/PSEUDO-CODE.md) if needed
-- [ ] Update [USAGE md](https://github.com/cncf/cnf-testsuite/blob/main/USAGE.md) if needed
-- [ ] Update [installation instructions](https://github.com/cncf/cnf-testsuite/install.md) if needed
+- [ ] Update [TEST_DOCUMENTATION md](../../docs/TEST_DOCUMENTATION.md) if needed
+- [ ] Update [installation instructions](../../INSTALL.md) if needed
### QA tasks
diff --git a/.github/ISSUE_TEMPLATE/ignore/5-proof-of-concept.md b/.github/ISSUE_TEMPLATE/ignore/5-proof-of-concept.md
index 57a2a7d06..230e17399 100644
--- a/.github/ISSUE_TEMPLATE/ignore/5-proof-of-concept.md
+++ b/.github/ISSUE_TEMPLATE/ignore/5-proof-of-concept.md
@@ -19,8 +19,8 @@ Tasks:
- [ ] Select a tool to use, minimal/least effort, and add selection to ticket
- [ ] Add new POC test code
- [ ] Add comment suggesting updates as needed for:
- - [ ] the [test categories markdown](https://github.com/cncf/cnf-testsuite/blob/main/TEST-CATEGORIES.md)
- - [ ] the [psuedo code markdown](https://github.com/cncf/cnf-testsuite/blob/main/PSEUDO-CODE.md)
+ - [ ] the [TEST_DOCUMENTATION md](../../../docs/TEST_DOCUMENTATION.md)
+ - [ ] the [pseudo code markdown](../../../PSEUDO-CODE.md)
- [ ] slide content updates, LINK_TO_UPDATES
- - [ ] the [README](https://github.com/cncf/cnf-testsuite/blob/main/README.md)
+ - [ ] the [README](../../../README.md)
- [ ] Tag 1 or more people to peer review
diff --git a/CNF_TESTSUITE_YML_USAGE.md b/CNF_TESTSUITE_YML_USAGE.md
index e6a5f2459..1de8061b0 100644
--- a/CNF_TESTSUITE_YML_USAGE.md
+++ b/CNF_TESTSUITE_YML_USAGE.md
@@ -103,7 +103,7 @@ Example Setting:
#### helm_install_namespace
-This sets the namespace that helm will use to install the CNF to. This is to conform to the best practice of not installing your CNF to the `default` namespace on your cluster. You can learn more about this practice [here](./docs/LIST_OF_TESTS.md#default-namespaces). This is an optional setting but highly recommended as installing your CNF to use the `default` namespace will result with failed tests.
+This sets the namespace that Helm will use to install the CNF. It conforms to the best practice of not installing your CNF into the `default` namespace on your cluster. You can learn more about this practice [here](./docs/TEST_DOCUMENTATION.md#default-namespaces). This setting is optional but highly recommended, as installing your CNF into the `default` namespace will result in failed tests.
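+
+For reference, a minimal sketch of this setting in your cnf-testsuite.yml (the namespace name `cnf-namespace` is only an illustration; use any non-default namespace that suits your CNF):
+
+```yaml
+# hypothetical example value; pick any non-default namespace for your CNF
+helm_install_namespace: cnf-namespace
+```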
You can learn more about kubernetes namespaces [here](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
diff --git a/RATIONALE.md b/RATIONALE.md
deleted file mode 100644
index c40fdcd28..000000000
--- a/RATIONALE.md
+++ /dev/null
@@ -1,383 +0,0 @@
-# CNF Test Rationale
-
-**Workload Tests**
-
-## Compatibility, Installability, and Upgradability Tests
-
-#### Service providers have historically had issues with the installability of vendor network functions. This category tests the installabilityand lifecycle management (the create, update, and delete of network applications) against widely used K8s installation solutions such as Helm.
-***
-
-#### *To test the increasing and decreasing of capacity*: [increase_decrease_capacity](docs/LIST_OF_TESTS.md#increase-decrease-capacity)
-> A CNF should be able to increase and decrease its capacity without running into errors.
-
-#### *Test if the Helm chart is published*: [helm_chart_published](docs/LIST_OF_TESTS.md#helm-chart-published)
-> If a helm chart is published, it is significantly easier to install for the end user.
-The management and versioning of the helm chart are handled by the helm registry and client tools
-rather than manually as directly referencing the helm chart source.
-
-#### *Test if the Helm chart is valid*: [helm_chart_valid](docs/LIST_OF_TESTS.md#helm-chart-valid)
-> A chart should pass the [lint specification](https://helm.sh/docs/helm/helm_lint/#helm)
-
-#### *Test if the Helm deploys*: [helm_deploy](docs/LIST_OF_TESTS.md#helm-deploy)
-> A helm chart should be [deployable to a cluster](https://helm.sh/docs/helm/helm_install/#helm)
-
-#### *To check if a CNF version can be rolled back*: [rollback](docs/LIST_OF_TESTS.md#rollback)
-> K8s best practice is to allow [K8s to manage the rolling back](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment) of an application resource instead of having operators manually rolling back the resource by using something like blue/green deploys.
-
-#### *To test if the CNF can perform a rolling update*: [rolling_update](docs/LIST_OF_TESTS.md#rolling-update)
-> See rolling downgrade
-
-#### *To check if a CNF version can be downgraded through a rolling_version_change*: [rolling_version_change](docs/LIST_OF_TESTS.md#rolling-version-change)
-> See rolling downgrade
-
-#### *To check if a CNF version can be downgraded through a rolling_downgrade*: [rolling_downgrade](docs/LIST_OF_TESTS.md#rolling-downgrade)
-> (update, version change, downgrade): K8s best practice for version/installation
-management (lifecycle management) of applications is to have [K8s track the version of
-the manifest information](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment)
-for the resource (deployment, pod, etc) internally. Whenever a
-rollback is needed the resource will have the exact manifest information
-that was tied to the application when it was deployed. This adheres the principles driving
-immutable infrastructure and declarative specifications.
-
-#### *To check if the CNF is compatible with different CNIs*: [cni_compatibility](docs/LIST_OF_TESTS.md#cni-compatible)
-> A CNF should be runnable by any CNI that adheres to the [CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md)
-
-#### *[POC] To check if a CNF uses Kubernetes alpha APIs 'alpha_k8s_apis'*: [alpha_k8s_apis](docs/LIST_OF_TESTS.md#kubernetes-alpha-apis---proof-of-concept)
-
-> If a CNF uses alpha or undocumented APIs, the CNF is tightly coupled to an unstable platform
-
-## Microservice Tests
-
-#### [Good microservice practices](https://vmblog.com/archive/2022/01/04/the-zeitgeist-of-cloud-native-microservices.aspx) promote agility which means less time will occur between deployments. One benefit of more agility is it allows for different organizations and teams to deploy at the rate of change that they build out features, instead of deploying in lock step with other teams. This is very important when it comes to changes that are time sensitive like security patches.
-***
-
-#### *To check if the CNF has a reasonable image size*: [reasonable_image_size](docs/LIST_OF_TESTS.md#reasonable-image-size)
-
-> A CNF with a large image size of 5 gig or more tends to indicate a monolithic application
-
-#### *To check if the CNF have a reasonable startup time*: [reasonable_startup_time](docs/LIST_OF_TESTS.md#reasonable-startup-time)
-
-> A CNF that starts up with a time (adjusted for server resources) that is approaching a minute
-is indicative of a monolithic application. The liveness probe's initialDelaySeconds and failureThreshhold determine the startup time and retry amount of the CNF.
-Specifically, if the initiaDelay is too long it is indicative of a monolithic application. If the failureThreshold is too high it is indicative of a CNF or a component of the CNF that has too many intermittent failures.
-
-#### *To check if the CNF has multiple process types within one container*: [single_process_type](docs/LIST_OF_TESTS.md#single-process-type-in-one-container)
-
-> A microservice should have only one process (or set of parent/child processes) that is
-managed by a non home grown supervisor or orchestrator. The microservice should not spawn
-other process types (e.g. executables) as a way to contributeto the workload but rather
-should interact with other processes through a microservice API.
-
-#### *To check if the CNF exposes any of its containers as a service 'service_discovery'*: [service_discovery](docs/LIST_OF_TESTS.md#service-discovery)
-
-> A K8s microservice should expose it's API though a K8s service resource. K8s services
-handle service discovery and load balancing for the cluster.
-
-#### *To check if the CNF uses a shared database*: [shared_database](docs/LIST_OF_TESTS.md#shared-database)
-
-> A K8s microservice should not share a database with another K8s database because
-it forces the two services to upgrade in lock step
-
-#### *To check if the CNF uses container images with specialized init systems*: [specialized_init_systems](docs/LIST_OF_TESTS.md#specialized-init-systems)
-
-> There are proper init systems and sophisticated supervisors that can be run inside of a container. Both of these systems properly reap and pass signals. Sophisticated supervisors are considered overkill because they take up too many resources and are sometimes too complicated. Some examples of sophisticated supervisors are: supervisord, monit, and runit. Proper init systems are smaller than sophisticated supervisors and therefore suitable for containers. Some of the proper container init systems are tini, dumb-init, and s6-overlay.
-
-#### *To check if the CNF PID 1 processes handle SIGTERM*: [sigterm_handled](docs/LIST_OF_TESTS.md#sig-term-handled)
-
-> The Linux kernel handles signals differently for the process that has PID 1 than it does for other processes. Signal handlers aren't automatically registered for this process, meaning that signals such as SIGTERM or SIGINT will have no effect by default. By default, one must kill processes by using SIGKILL, preventing any graceful shutdown. Depending on the application, using SIGKILL can result in user-facing errors, interrupted writes (for data stores), or unwanted alerts in a monitoring system.
-
-#### *To check if the CNF PID 1 processes handle zombie processes correctly*: [zombie_handled](docs/LIST_OF_TESTS.md#zombie-handled)
-
-> Classic init systems such as systemd are also used to remove (reap) orphaned, zombie processes. Orphaned processes — processes whose parents have died - are reattached to the process that has PID 1, which should reap them when they die. A normal init system does that. But in a container, this responsibility falls on whatever process has PID 1. If that process doesn't properly handle the reaping, you risk running out of memory or some other resources.
-
-## State Tests
-
-#### If infrastructure is immutable, it is easily reproduced, consistent, disposable, will have a repeatable deployment process, and will not have configuration or artifacts that are modifiable in place. This ensures that all *configuration* is stateless. Any [*data* that is persistent](https://vmblog.com/archive/2022/05/16/stateful-cnfs.aspx) should be managed by K8s statefulsets.
-***
-
-#### *Test if the CNF crashes when node drain occurs*: [node_drain](docs/LIST_OF_TESTS.md#node-drain)
-
-> No CNF should fail because of stateful configuration. A CNF should function properly if it is rescheduled on other nodes. This test will remove
-resources which are running on a target node and reschedule them on the another node.
-
-
-#### *To test if the CNF uses a volume host path*: [volume_hostpath_not_found](docs/LIST_OF_TESTS.md#volume-hostpath-not-found)
-
-> When a cnf uses a volume host path or local storage it makes the application tightly coupled
-to the node that it is on.
-
-#### *To test if the CNF uses local storage*: [no_local_volume_configuration](docs/LIST_OF_TESTS.md#no-local-volume-configuration)
-> A CNF should refrain from using the [local storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/#local)
-
-#### *To test if the CNF uses elastic volumes*: [elastic_volumes](docs/LIST_OF_TESTS.md#elastic-volumes)
-
-> A cnf that uses elastic volumes can be rescheduled to other nodes by the orchestrator easily
-
-#### *To test if the CNF uses a database with either statefulsets, elastic volumes, or both*: [database_persistence](docs/LIST_OF_TESTS.md#database-persistence)
-
-> When a traditional database such as mysql is configured to use statefulsets, it allows
- the database to use a persistent identifier that it maintains across any rescheduling.
- Persistent Pod identifiers make it easier to match existing volumes to the new Pods that
- have been rescheduled. https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
-
-## Reliability, Resilience and Availability
-
-#### Cloud native systems promote resilience by putting a high priority on testing individual components (chaos testing) as they are running (possibly in production).[Reliability in traditional telecommunications](https://vmblog.com/archive/2021/09/15/cloud-native-chaos-and-telcos-enforcing-reliability-and-availability-for-telcos.aspx) is handled differently than in Cloud Native systems. Cloud native systems try to address reliability (MTBF) by having the subcomponents have higher availability through higher serviceability (MTTR) and redundancy. For example, having ten redundant subcomponents where seven components are available and three have failed will produce a top level component that is more reliable (MTBF) than a single component that "never fails" in the cloud native world.
-
-#### *Test if the CNF crashes when network latency occurs*: [pod_network_latency](docs/LIST_OF_TESTS.md#cnf-under-network-latency)
-
-> Network latency can have a significant impact on the overall performance of the application. Network outages that result from low latency can cause
-a range of failures for applications and can severely impact user/customers with downtime. This chaos experiment allows you to see the impact of latency
-traffic on the CNF.
-
-#### *Test if the CNF crashes when disk fill occurs*: [disk_fill](docs/LIST_OF_TESTS.md#cnf-with-host-disk-fill)
-
-> Disk Pressure is a scenario we find in Kubernetes applications that can result in the eviction of the application replica and impact its delivery. Such scenarios can still occur despite whatever availability aids K8s provides. These problems are generally referred to as "Noisy Neighbour" problems.
-
-#### *Test if the CNF crashes when pod delete occurs*: [pod_delete](docs/LIST_OF_TESTS.md#pod-delete)
-
-> In a distributed system like Kubernetes, application replicas may not be sufficient to manage the traffic (indicated by SLIs) when some replicas are unavailable due to any failure (can be system or application). The application needs to meet the SLO (service level objectives) for this. It's imperative that the application has defenses against this sort of failure to ensure that the application always has a minimum number of available replicas.
-
-
-#### *Test if the CNF crashes when pod memory hog occurs*: [pod_memory_hog](docs/LIST_OF_TESTS.md#memory-hog)
-
-> If the memory policies for a CNF are not set and granular, containers on the node can be killed based on their oom_score and the QoS class a given pod belongs to (best-effort ones are first to be targeted). This eval is extended to all pods running on the node, thereby causing a bigger blast radius.
-
-#### *Test if the CNF crashes when pod io stress occurs*: [pod_io_stress](docs/LIST_OF_TESTS.md#io-stress)
-
-> Stressing the disk with continuous and heavy IO can cause degradation in reads/ writes by other microservices that use this
-shared disk. Scratch space can be used up on a node which leads to the lack of space for newer containers to get scheduled which
-causes a movement of all pods to other nodes. This test determines the limits of how a CNF uses its storage device.
-
-#### *Test if the CNF crashes when pod network corruption occurs*: [pod_network_corruption](docs/LIST_OF_TESTS.md#network-corruption)
-
-> A higher quality CNF should be resilient to a lossy/flaky network. This test injects packet corruption on the specified CNF's container by
-starting a traffic control (tc) process with netem rules to add egress packet corruption.
-
-#### *Test if the CNF crashes when pod network duplication occurs*: [pod_network_duplication](docs/LIST_OF_TESTS.md#network-duplication)
-
-> A higher quality CNF should be resilient to erroneously duplicated packets. This test injects network duplication on the specified container
-by starting a traffic control (tc) process with netem rules to add egress delays.
-
-#### *Test if the CNF crashes when DNS errors occur*: [pod_dns_errors](docs/LIST_OF_TESTS.md#pod-dns-errors)
-
-> A CNF should be resilient to name resolution (DNS) disruptions within the kubernetes pod. This ensures that at least some application availability will be maintained if DNS resolution fails.
-
-#### *To test if there is a liveness entry in the Helm chart*: [liveness](docs/LIST_OF_TESTS.md#helm-chart-liveness-entry)
-
-> A cloud native principle is that application developers understand their own
-resilience requirements better than operators[1]. This is exemplified in the Kubernetes best practice
-of pods declaring how they should be managed through the liveness and readiness entries in the
-pod's configuration.
-
-> [1] "No one knows more about what an application needs to run in a healthy state than the developer.
-For a long time, infrastructure administrators have tried to figure out what “healthy” means for
-applications they are responsible for running. Without knowledge of what actually makes an
-application healthy, their attempts to monitor and alert when applications are unhealthy are
-often fragile and incomplete. To increase the operability of cloud native applications,
-applications should expose a health check."" Garrison, Justin; Nova, Kris. Cloud Native
-Infrastructure: Patterns for Scalable Infrastructure and Applications in a Dynamic
-Environment . O'Reilly Media. Kindle Edition.
-
-#### *To test if there is a readiness entry in the Helm chart*: [readiness](docs/LIST_OF_TESTS.md#helm-chart-readiness-entry)
-
-> A CNF should tell Kubernetes when it is [ready to serve traffic](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes).
-
-## Observability and Diagnostic Tests
-
-#### In order to maintain, debug, and have insight into a production environment that is protected (versioned, kept in source control, and changed only by using a deployment pipeline), its infrastructure elements must have the property of being observable. This means these elements must externalize their internal states in some way that lends itself to metrics, tracing, and logging.
-
-#### *To check if logs are being sent to stdout/stderr (standard out, standard error) instead of a log file*: [log_output](docs/LIST_OF_TESTS.md#use-stdoutstderr-for-logs)
-
-> By sending logs to standard out/standard error
-["logs will be treated like event streams"](https://12factor.net/) as recommended by 12
-factor apps principles.
-
-#### *To check if prometheus is installed and configured for the cnf*: [prometheus_traffic](docs/LIST_OF_TESTS.md#prometheus-installed)
-
-> Recording metrics within a cloud native deployment is important because it gives
-the maintainer of a cluster of hundreds or thousands of services the ability to pinpoint
-[small anomalies](https://about.gitlab.com/blog/2018/09/27/why-all-organizations-need-prometheus/),
-such as those that will eventually cause a failure.
-
-#### *To check if logs and data are being routed through a Unified Logging Layer*: [routed_logs](docs/LIST_OF_TESTS.md#routed-logs)
-> A CNF should have logs managed by a [unified logging layer](https://www.fluentd.org/why) It's considered a best-practice for CNFs to route logs and data through programs like fluentd to analyze and better understand data.
-
-#### *To check if OpenMetrics is being used and or compatible.*: [open_metrics](docs/LIST_OF_TESTS.md#openmetrics-compatible)
-> OpenMetrics is the de facto standard for transmitting cloud native metrics at scale, with support for both text representation and Protocol Buffers and brings it into an Internet Engineering Task Force (IETF) standard. A CNF should expose metrics that are [OpenMetrics compatible](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md)
-
-#### *To check if tracing is being used with Jaeger.*: [tracing](docs/LIST_OF_TESTS.md#jaeger-tracing)
-> A CNF should provide tracing that conforms to the [open telemetry tracing specification](https://opentelemetry.io/docs/reference/specification/trace/api/)
->
-## Security Tests
-
-#### *"Cloud native security is a [...] mutifaceted topic [...] with multiple, diverse components that need to be secured. The cloud platform, the underlying host operating system, the container runtime, the container orchestrator,and then the applications themselves each require specialist security attention"* -- Chris Binne, Rory Mccune. Cloud Native Security. (Wiley, 2021)(pp. xix)*
-
-#### *To check if the cnf performs a CRI socket mount*: [container_sock_mounts](docs/LIST_OF_TESTS.md#container-socket-mounts)
-
-> *[Container daemon socket bind mounts](https://kyverno.io/policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount/) allows access to the container engine on the node. This access can be used for privilege escalation and to manage containers outside of Kubernetes, and hence should not be allowed..*
-
-#### *To check if there are any privileged containers*: [privileged_containers](docs/LIST_OF_TESTS.md#privileged-containers)
-
-> *... docs describe Privileged mode as essentially enabling “…access to all devices on the host
-as well as [having the ability to] set some configuration in AppArmor or SElinux to allow the
-container nearly all the same access to the host as processes running outside containers on the
-host.” In other words, you should rarely, if ever, use this switch on your container command line.*
-Binnie, Chris; McCune, Rory (2021-06-17T23:58:59). Cloud Native Security . Wiley. Kindle Edition.
-
-
-#### *To check if External IPs are used for services*: [external_ips](docs/LIST_OF_TESTS.md#external-ips)
-
-> Service externalIPs can be used for a MITM attack (CVE-2020-8554). Restrict externalIPs or limit to a known set of addresses. See: https://github.com/kyverno/kyverno/issues/1367
-
-#### *To check if any containers allow for privilege escalation*: [privilege_escalation](docs/LIST_OF_TESTS.md#privilege-escalation)
-
-> *When [privilege escalation](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privilege-escalation) is [enabled for a container](https://hub.armo.cloud/docs/c-0016), it will allow setuid binaries to change the effective user ID, allowing processes to turn on extra capabilities.
-In order to prevent illegitimate escalation by processes and restrict a processes to a NonRoot user mode, escalation must be disabled.*
-
-#### *To check if an attacker can use a symlink for arbitrary host file system access (CVE-2021-25741)*: [symlink_file_system](docs/LIST_OF_TESTS.md#symlink-file-system)
-
-> *Due to CVE-2021-25741, subPath or subPathExpr volume mounts can be [used to gain unauthorised access](https://hub.armo.cloud/docs/c-0058) to files and directories anywhere on the host filesystem. In order to follow a best-practice security standard and prevent unauthorised data access, there should be no active CVEs affecting either the container or underlying platform.*
-
-#### *To check if selinux has been configured properly*: [selinux_options](docs/LIST_OF_TESTS.md#selinux-options)
-> If [SELinux options](https://kyverno.io/policies/pod-security/baseline/disallow-selinux/disallow-selinux/) is configured improperly it can be used to escalate privileges and should not be allowed.
-
-#### *To check if any pods in the CNF use sysctls with restricted values*: [sysctls](docs/LIST_OF_TESTS.md#sysctls)
-> Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed "safe" subset. A sysctl is considered safe if it is namespaced in the container or the Pod, and it is isolated from other Pods or processes on the same Node. This test ensures that only those "safe" subsets are specified in a Pod.
-
-#### *To check if there are applications credentials in configuration files*: [application_credentials](docs/LIST_OF_TESTS.md#application-credentials)
-
-> *Developers store secrets in the Kubernetes configuration files, such as environment variables in the pod configuration. Such behavior is commonly seen in clusters that are monitored by Azure Security Center. Attackers who have access to those configurations, by querying the API server or by accessing those files on the developer’s endpoint, can steal the stored secrets and use them.*
-
-#### *To check if there is a host network attached to a pod*: [host_network](docs/LIST_OF_TESTS.md#host-network)
-
-> *When a container has the [hostNetwork](https://hub.armo.cloud/docs/c-0041) feature turned on, the container has direct access to the underlying hostNetwork. Hackers frequently exploit this feature to [facilitate a container breakout](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF) and gain access to the underlying host network, data and other integral resources.*
-
-
-#### *To check if there is automatic mapping of service accounts*: [service_account_mapping](docs/LIST_OF_TESTS.md#service-account-mapping)
-
-> *When a pod gets created and a service account wasn't specified, then the default service account will be used. Service accounts assigned in this way can unintentionally give third-party applications root access to the K8s APIs and other applicaton services. In order to follow a zero-trust / fine-grained security methodology, this functionality will need to be explicitly disabled by using the automountServiceAccountToken: false flag. In addition, if RBAC is not enabled, the SA has unlimited permissions in the cluster.*
-
-
-#### *To check if there is an ingress and egress policy defined.*: [ingress_egress_blocked](docs/LIST_OF_TESTS.md#ingress-and-egress-blocked)
-
-> *By default, [no network policies are applied](https://hub.armo.cloud/docs/c-0030) to Pods or namespaces, resulting in unrestricted ingress and egress traffic within the Pod network. In order to [prevent lateral movement](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF) or escalation on a compromised cluster, administrators should implement a default policy to deny all ingress and egress traffic. This will ensure that all Pods are isolated by default and further policies could then be used to specifically relax these restrictions on a case-by-case basis.*
-
-
-#### *To check for insecure capabilities*: [insecure_capabilities](docs/LIST_OF_TESTS.md#insecure-capabilities)
-> Giving [insecure](https://hub.armo.cloud/docs/c-0046) and unnecessary capabilities for a container can increase the impact of a container compromise.
-
-#### *To check if containers are running with non-root user with non-root membership*: [non_root_containers](docs/LIST_OF_TESTS.md#non-root-containers)
-> Container engines allow containers to run applications as a non-root user with non-root group membership. Typically, this non-default setting is configured when the container image is built. . Alternatively, Kubernetes can load containers into a Pod with SecurityContext:runAsUser specifying a non-zero user. While the runAsUser directive effectively forces non-root execution at deployment, [NSA and CISA encourage developers](https://hub.armo.cloud/docs/c-0013) to build container applications to execute as a non-root user. Having non-root execution integrated at build time provides better assurance that applications will function correctly without root privileges.
-
-#### *To check if containers are running with hostPID or hostIPC privileges*: [host_pid_ipc_privileges](docs/LIST_OF_TESTS.md#host-pidipc-privileges)
-> Containers should be isolated from the host machine as much as possible. The [hostPID and hostIPC](https://hub.armo.cloud/docs/c-0038) fields in deployment yaml may allow cross-container influence and may expose the host itself to potentially malicious or destructive actions. This control identifies all PODs using hostPID or hostIPC privileges.
-
-#### *To check if security services are being used to harden containers*: [linux_hardening](docs/LIST_OF_TESTS.md#linux-hardening)
-> In order to reduce the attack surface, it is recommend, when it is possible, to harden your application using [security services](https://hub.armo.cloud/docs/c-0055) such as SELinux®, AppArmor®, and seccomp. Starting from Kubernetes version 1.22, SELinux is enabled by default.
-
-#### *To check if containers have CPU limits defined*: [cpu_limits](docs/LIST_OF_TESTS.md#cpu-limits)
-> Every container [should have a limit set for the CPU available for it](https://hub.armo.cloud/docs/c-0270) set for every container or a namespace to prevent resource exhaustion. This control identifies all the Pods without CPU limit definitions by checking their yaml definition file as well as their namespace LimitRange objects. It is also recommended to use ResourceQuota object to restrict overall namespace resources, but this is not verified by this control.
-
-#### *To check if containers have memory limits defined*: [memory_limits](docs/LIST_OF_TESTS.md#memory-limits)
-> Every container [should have a limit set for the memory available for it](https://hub.armo.cloud/docs/c-0271) set for every container or a namespace to prevent resource exhaustion. This control identifies all the Pods without memory limit definitions by checking their yaml definition file as well as their namespace LimitRange objects. It is also recommended to use ResourceQuota object to restrict overall namespace resources, but this is not verified by this control.
-
-#### *To check if containers have immutable file systems*: [immutable_file_systems](docs/LIST_OF_TESTS.md#immutable-file-systems)
-> Mutable container filesystem can be abused to gain malicious code and data injection into containers. By default, containers are permitted unrestricted execution within their own context. An attacker who has access to a container, [can create files](https://hub.armo.cloud/docs/c-0017) and download scripts as they wish, and modify the underlying application running on the container.
-
-#### *To check if containers have hostPath mounts (check: is this a duplicate of state test - ./cnf-testsuite volume_hostpath_not_found)*: [hostpath_mounts](docs/LIST_OF_TESTS.md#hostpath-mounts)
-> [hostPath mount](https://hub.armo.cloud/docs/c-0006) can be used by attackers to get access to the underlying host and thus break from the container to the host. (See “3: Writable hostPath mount” for details).
-
-
-## Configuration Tests
-#### Declarative APIs for an immutable infrastructure are anything that configures the infrastructure element. This declaration can come in the form of a YAML file or a script, as long as the configuration designates the desired outcome, not how to achieve said outcome. *"Because it describes the state of the world, declarative configuration does not have to be executed to be understood. Its impact is concretely declared. Since the effects of declarative configuration can be understood before they are executed, declarative configuration is far less error-prone. " --Hightower, Kelsey; Burns, Brendan; Beda, Joe. Kubernetes: Up and Running: Dive into the Future of Infrastructure (Kindle Locations 183-186). Kindle Edition*
-
-#### *To check if a CNF is using the default namespace*: [default_namespace](docs/LIST_OF_TESTS.md#default-namespaces)
-> *Namespaces provide a way to segment and isolate cluster resources across multiple applications and users. As a best practice, workloads should be isolated with Namespaces and not use the default namespace.
-
-#### *To test if mutable tags being used for image versioning(Using Kyverno): latest_tag*: [latest_tag](docs/LIST_OF_TESTS.md#latest-tag)
-
-> *"You should [avoid using the :latest tag](https://kubernetes.io/docs/concepts/containers/images/)
-when deploying containers in production as it is harder to track which version of the image
-is running and more difficult to roll back properly."*
-
-#### *To test if the recommended labels are being used to describe resources*: [required_labels](docs/LIST_OF_TESTS.md#require-labels)
-> Defining and using labels help identify semantic attributes of your application or Deployment. A common set of labels allows tools to work collaboratively, while describing objects in a common manner that all tools can understand. You should use recommended labels to describe applications in a way that can be queried.
-
-
-#### *To test if there are versioned tags on all images (using OPA Gatekeeper)*: [versioned_tag](docs/LIST_OF_TESTS.md#versioned-tag)
-
-> *"You should [avoid using the :latest tag](https://kubernetes.io/docs/concepts/containers/images/)
-when deploying containers in production as it is harder to track which version of the image
-is running and more difficult to roll back properly."*
-
-#### *To test if there are node ports used in the service configuration*: [nodeport_not_used](docs/LIST_OF_TESTS.md#nodeport-not-used)
-
-> Using node ports ties the CNF to a specific node and therefore makes the CNF less
-portable and scalable
-
-#### *To test if there are host ports used in the service configuration*: [hostport_not_used](docs/LIST_OF_TESTS.md#hostport-not-used)
-
-> Using host ports ties the CNF to a specific node and therefore makes the CNF less
-portable and scalable
-
-#### *To test if there are any (non-declarative) hardcoded IP addresses or subnet masks in the K8s runtime configuration*: [hardcoded_ip_addresses_in_k8s_runtime_configuration](docs/LIST_OF_TESTS.md#Hardcoded-ip-addresses-in-k8s-runtime-configuration)
-
-> Using a hard coded IP in a CNF's configuration designates *how* (imperative) a CNF should
-achieve a goal, not *what* (declarative) goal the CNF should achieve
-
-#### *To check if a CNF uses K8s secrets*: [secrets_used](docs/LIST_OF_TESTS.md#secrets-used)
-
-> If a CNF uses kubernetes K8s secrets instead of unencrypted environment
-variables or configmaps, there is [less risk of the Secret (and its data) being
-exposed](https://kubernetes.io/docs/concepts/configuration/secret/) during the
-workflow of creating, viewing, and editing Pods
-
-#### *To check if a CNF version uses immutable configmaps*: [immutable_configmap](docs/LIST_OF_TESTS.md#immutable-configmap)
-
-> *"For clusters that extensively use ConfigMaps (at least tens of thousands of unique ConfigMap to Pod mounts),
-[preventing changes](https://kubernetes.io/docs/concepts/configuration/configmap/#configmap-immutable)
-to their data has the following advantages:*
-- *protects you from accidental (or unwanted) updates that could cause applications outages*
-- *improves performance of your cluster by significantly reducing load on kube-apiserver, by
-closing watches for ConfigMaps marked as immutable.*"
-
-
-## 5g Tests
-#### A 5g core is an important part of the service provider's telecommuncations offering. A cloud native 5g architecture uses immutable infrastructure, declarative configuration, and microservices when creating and hosting 5g cloud native network functions.
-
-#### *To check if the 5g core is resistant to chaos*: [smf_upf_core_validator](docs/LIST_OF_TESTS.md#smf_upf_core_validator)
-> *A 5g core's [SMF and UPF CNFs have a hearbeat](https://www.etsi.org/deliver/etsi_ts/123500_123599/123527/15.01.00_60/ts_123527v150100p.pdf), implemented use the PFCP protocol standard, which measures if the connection between the two CNFs is active. After measure a baseline of the heartbeat a comparison between the baseline and the performance of the heartbeat while running test functions will expose the [cloud native resilience](https://www.cncf.io/blog/2021/09/23/cloud-native-chaos-and-telcos-enforcing-reliability-and-availability-for-telcos/) of the cloud native 5g core.
-
-#### *To check if the 5g core is using 5g authentication*: [suci_enabled](docs/LIST_OF_TESTS.md#suci_enabled)
-> *In order to [protect identifying information](https://nickvsnetworking.com/5g-subscriber-identifiers-suci-supi/) from being sent over the network as clear text, 5g cloud native cores should implement [SUPI and SUCI concealment](https://www.etsi.org/deliver/etsi_ts/133500_133599/133514/16.04.00_60/ts_133514v160400p.pdf)
-
-
-## RAN Tests
-#### A cloud native radio access network's (RAN) cloud native functions should use immutable infrastructure, declarative configuration, and microservices. ORAN cloud native functions should adhere to cloud native principles while also complying with the [ORAN alliance's standards](https://www.o-ran.org/blog/o-ran-alliance-introduces-48-new-specifications-released-since-july-2021).
-
-#### *To check if an ORAN compliant RAN is using the e2 3gpp standard*: [oran_e2_connection](docs/LIST_OF_TESTS.md#oran_e2_connection)
-> *A near real-time RAN intelligent controler (RIC) uses the [E2 standard](https://wiki.o-ran-sc.org/display/RICP/E2T+Architecture) as an open, interoperable, interface to connect to [RAN-optimizated applications, onboarded as xApps](https://www.5gtechnologyworld.com/how-does-5gs-o-ran-e2-interface-work/). The xApps use platform services available in the near-RT RIC to communicate with the downstream network functions through the E2 interface.
-
-## Platform Tests
-
-#### *To check if the plateform passes K8s Conformance tests*: [k8s-conformance](docs/LIST_OF_TESTS.md#k8s-conformance)
-> * A Vendor's Kubernetes Platform should pass [Kubernetes Conformance](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md). This ensures that the platform offering meets the same required APIs, features & interoperability expectations as in open source community versions of K8s. Applications that can operate on a [Certified Kubernetes](https://www.cncf.io/certification/software-conformance/) should be cross-compatible with any other Certified Kubernetes platform.
-
-#### *To check if the plateform is being managed by ClusterAPI*: [clusterapi-enabled](docs/LIST_OF_TESTS.md#clusterapi-enabled)
-> * A Kubernetes Platform should leverage [Cluster API](https://cluster-api.sigs.k8s.io/) to ensure that best-practices are followed for both bootstrapping & cluster lifecycle management. Kubernetes is a complex system that relies on several components being configured correctly, maintaining an in-house lifecycle management system for kubernetes is unlikey to meet best practice guideline unless significant resources are deticated to it.
-
-#### *To check if the plateform is using an OCI compliant runtime*: [oci-compliant](docs/LIST_OF_TESTS.md#oci-compliant)
-> *The [OCI Initiative](https://opencontainers.org/) was created to ensure that runtimes conform to both the runtime-spec and image-spec. These two specifications outline how a “filesystem bundle” is unpacked on disk and that the image itself contains sufficient information to launch the application on the target platform. As a best practice, your platform must use an OCI compliant runtime, this ensures that the runtime used is cross-compatible and supports interoperability with other runtimes. This means that workloads can be freely moved to other runtimes and prevents vendor lock in.
-
-#### *To check if workloads are rescheduled on node failure*: [worker-reboot-recovery](docs/LIST_OF_TESTS.md#poc-worker-reboot-recovery)
-> *Cloud native systems should be self-healing. To follow cloud-native best practices your platform should be resiliant and reschedule all workloads when such node failures occur.
-
-#### *To check if the plateform has a default Cluster admin role*: [cluster-admin](docs/LIST_OF_TESTS.md#cluster-admin)
-> *Role-based access control (RBAC) is a key security feature in Kubernetes. RBAC can restrict the allowed actions of the various identities in the cluster. Cluster-admin is a built-in high privileged role in Kubernetes. Attackers who have permissions to create bindings and cluster-bindings in the cluster can create a binding to the cluster-admin ClusterRole or to other high privileges roles. As a best practice, a principle of least privilege should be followed and cluster-admin privilege should only be used on an as-needed basis.
-
-#### *Check if the plateform is using insecure ports for the API server*: [Control_plane_hardening](docs/LIST_OF_TESTS.md#control-plane-hardening)
-> *The control plane is the core of Kubernetes and gives users the ability to view containers, schedule new Pods, read Secrets, and execute commands in the cluster. Therefore, it should be protected. It is recommended to avoid control plane exposure to the Internet or to an untrusted network and require TLS encryption.
-
-#### *Check if Tiller is being used on the plaform*: [Tiller images](docs/LIST_OF_TESTS.md#tiller-images)
-> *Tiller, found in Helm v2, has known security challenges. It requires administrative privileges and acts as a shared resource accessible to any authenticated user. Tiller can lead to privilege escalation as restricted users can impact other users. It is recommend to use Helm v3+ which does not contain Tiller for these reasons
diff --git a/README.md b/README.md
index 20c95f6e3..3231b2fa3 100644
--- a/README.md
+++ b/README.md
@@ -38,7 +38,7 @@ The CNTI Test Catalog will inspect CNFs for the following characteristics:
- **Observability & Diagnostics** - CNFs should externalize their internal states in a way that supports metrics, tracing, and logging.
- **Security** - CNF containers should be isolated from one another and the host. CNFs are to be verified against any common CVE or other vulnerabilities.
-See the [Test Categories Documentation](TEST-CATEGORIES.md) for a complete overview of the tests.
+See the [Test Documentation](docs/TEST_DOCUMENTATION.md) for a complete overview of the tests.
## Contributing
diff --git a/ROADMAP.md b/ROADMAP.md
index a3ac628fe..b01ebc609 100644
--- a/ROADMAP.md
+++ b/ROADMAP.md
@@ -12,10 +12,10 @@ To get a more complete overview of planned features and current work see the [pr
- Build tests for Kubernetes best practices that address issues voiced by the end users, including:
- On-boarding (day 1) items
- CNF WG best practices
-- Build [resilience tests](https://github.com/cnti-testcatalog/testsuite/blob/main/USAGE.md#resilience-tests) using [LitmusChaos](https://litmuschaos.io/) experiments
-- Create [observability tests](https://github.com/cnti-testcatalog/testsuite/blob/main/USAGE.md#observability-tests) to check for cloud native monitoring
-- Create [state tests](https://github.com/cnti-testcatalog/testsuite/blob/main/USAGE.md#state-tests) to check cloud native data handling
-- Create [security tests](https://github.com/cnti-testcatalog/testsuite/blob/main/USAGE.md#security-tests)
+- Build [resilience tests](https://github.com/cnti-testcatalog/testsuite/blob/main/docs/TEST_DOCUMENTATION.md#category-reliability-resilience--availability-tests) using [LitmusChaos](https://litmuschaos.io/) experiments
+- Create [observability tests](https://github.com/cnti-testcatalog/testsuite/blob/main/docs/TEST_DOCUMENTATION.md#category-observability--diagnostic-tests) to check for cloud native monitoring
+- Create [state tests](https://github.com/cnti-testcatalog/testsuite/blob/main/docs/TEST_DOCUMENTATION.md#category-state-tests) to check cloud native data handling
+- Create [security tests](https://github.com/cnti-testcatalog/testsuite/blob/main/docs/TEST_DOCUMENTATION.md#category-security-tests)
### Enhance the functionality of the test suite framework
diff --git a/TEST-CATEGORIES.md b/TEST-CATEGORIES.md
deleted file mode 100644
index f70202176..000000000
--- a/TEST-CATEGORIES.md
+++ /dev/null
@@ -1,123 +0,0 @@
-# Test Catalog Categories
-
-The CNTI Test Catalog validates interoperability of CNF **workloads** supplied by multiple different vendors orchestrated by Kubernetes **platforms** that are supplied by multiple different vendors. The goal is to provide an open source test catalog to enable both open and closed source CNFs to demonstrate conformance and implementation of best practices. For more detailed CLI documentation see the [usage document.](USAGE.md)
-
-## Compatibility, Installability & Upgradability Tests
-
-#### CNFs should work with any Certified Kubernetes product and any CNI-compatible network that meet their functionality requirements. The CNTI Test Catalog will check for usage of standard, in-band deployment tools such as Helm (version 3) charts. The CNTI Test Catalog checks to see if CNFs support horizontal scaling (across multiple machines) and vertical scaling (between sizes of machines) by using the native K8s [kubectl](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#scaling-resources). The CNTI Test Catalog validates this:
-
-#### On workloads:
-
-- Performing K8s API usage testing by running [API snoop](https://github.com/cncf/apisnoop) on the cluster which:
- - Checks alpha endpoint usage
- - Checks beta endpoint usage
- - Checks generally available (GA) endpoint usage
-- Test increasing/decreasing capacity
-- Test small scale autoscaling with kubectl
-- Test large scale autoscaling with load test tools like [CNF Testbed](https://github.com/cncf/cnf-testbed)
-- Test if the CNF control layer responds to retries for failed communication (e.g. using [Pumba](https://github.com/alexei-led/pumba) or [Blockade](https://github.com/worstcase/blockade) for network chaos and [Envoy](https://github.com/envoyproxy/envoy) for retries)
-- Testing if the install script uses [Helm v3](https://github.com/helm/)
-- Testing if the CNF is published to a public helm chart repository.
-- Testing if the Helm chart is valid (e.g. using the [helm linter](https://github.com/helm/chart-testing))
-- Testing if the CNF can perform a rolling update (i.e. [kubectl rolling update](https://kubernetes.io/docs/tasks/run-application/rolling-update-replication-controller/))
-- Performing CNI Plugin testing which:
- - Tests if CNI Plugin follows the [CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md)
-
-## Microservice Tests
-
-#### The CNF should be developed and delivered as a microservice. The CNTI Test Catalog tests to determine the organizational structure and rate of change of the CNF being tested. Once these are known we can detemine whether or not the CNF is a microservice. See: [Microservice-Principles](https://networking.cloud-native-principles.org/cloud-native-microservice-principles):
-
-#### On workloads:
-
-- Check if the CNF have a reasonable startup time.
-- Check the image size of the CNF.
-- Checks for single process on pods.
-
-## State Tests
-
-#### The CNTI Test Catalog checks if state is stored in a [custom resource definition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) or a separate database (e.g. [etcd](https://github.com/etcd-io/etcd)) rather than requiring local storage. It also checks to see if state is resilient to node failure:
-
-#### On workloads:
-
-- Checking volume hostpath is found or not.
-- Checks if no local volume is configured.
-- Check if the CNF is using elastic persistent volumes
-- Checks for k8s database persistence.
-
-## Reliability, Resilience & Availability Tests
-
-[Cloud Native Definition](https://github.com/cncf/toc/blob/master/DEFINITION.md) requires systems to be Resilient to failures inevitable in cloud environments. CNF Resilience should be tested to ensure CNFs are designed to deal with non-carrier-grade shared cloud HW/SW platform:
-
-#### On workloads:
-
-- Checks for network latency
-- Performs a disk fill
-- Deletes a pod to test reliability and availability.
-- Performs a memory hog test for resilience.
-- Performs an IO stress test.
-- Tests network corruption.
-- Tests network duplication.
-- Drains a node on the cluster.
-- Checking for a liveness entry in the helm chart and if the container is responsive to it after a reset (e.g. by checking the [helm chart entry](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/))
-- Checking for a readiness entry in the helm chart and if the container is responsive to it after a reset
-
-## Observability & Diagnostic Tests
-
-#### In order to maintain, debug, and have insight into a protected environment, infrastructure elements must have the property of being observable. This means these elements must externalize their internal states in some way that lends itself to metrics, tracing, and logging. The CNTI Test Catalog checks this:
-
-#### On workloads:
-
-- Testing to see if there is traffic to [Fluentd](https://github.com/fluent/fluentd)
-- Testing to see if there is traffic to [Jaeger](https://github.com/jaegertracing/jaeger)
-- Testing to see if Prometheus rules for the CNF are configured correctly (e.g. using [Promtool](https://prometheus.io/docs/prometheus/latest/configuration/unit_testing_rules/))
-- Testing to see if there is traffic to [Prometheus](https://github.com/prometheus/prometheus)
-- Testing to see if the monitoring calls are compatible with [OpenMetric](https://github.com/OpenObservability/OpenMetrics)
-- Tests log output.
-
-## Security Tests
-
-#### CNF containers should be isolated from one another and the host. The CNTI Test Catalog uses tools like [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) and [Armosec Kubescape](https://github.com/armosec/kubescape):
-
-#### On workloads:
-
-- Check if any containers are running in privileged mode.
-- Checks root user.
-- Checks for privilege escalation.
-- Checks symlink file system.
-- Checks application credentials.
-- Checks if the container or pods can access the host network.
-- Checks for service accounts and mappings.
-- Checks for ingress and egress being blocked.
-- Privileged container checks.
-- Verifies if there are insecure and dangerous capabilities.
-- Checks network policies.
-- Checks for non root containers.
-- Checks PID and IPC privileges.
-- Checks for Linux Hardening, eg. Selinux is used.
-- Checks memory limits are defined.
-- Checks CPU limits are defined.
-- Checks for immutable file systems.
-- Verifies and checks if any hostpath mounts are used.
-
-#### On platforms:
-
-- Check if there are any shells
-
-## Configuration Tests
-
-#### Configuration should be managed in a declarative manner, using [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), [Operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/), or other [declarative interfaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#understanding-kubernetes-objects). The CNTI Test Catalog checks this by:
-
-#### On workloads:
-
-- Testing if the CNF is installed using a [versioned](https://helm.sh/docs/topics/chart_best_practices/dependencies/#versions) Helm v3 chart
-- Searching for hardcoded IP addresses, subnets, or node ports in the configuration
-- Checking if the pod/container can be started without mounting a volume (e.g. using [helm configuration](https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/)) that has configuration files
-- Testing by reseting any child processes, and when the parent process is started, checking to see if those child processes are reaped (ie. monitoring processes with [sysdig-inspect](https://github.com/draios/sysdig-inspect))
-- Testing if there are any (non-declarative) hardcoded IP addresses or subnet masks
-- Tests if nodeport is not used.
-- Tests hostport is not used.
-- Checks for secrets used or configured.
-- Tests immutable configmaps.
-
-
-Tools to study/use for such testing methodology: The previously mentioned Pumba and Blocade, [ChaosMesh](https://github.com/pingcap/chaos-mesh), [Mitmproxy](https://github.com/mitmproxy/mitmproxy/), Istio for "[Network Resilience](https://istio.io/docs/concepts/traffic-management/#network-resilience-and-testing)", kill -STOP -CONT, [LimitCPU](http://limitcpu.sourceforge.net/), [Packet pROcessing eXecution (PROX) engine](https://wiki.opnfv.org/pages/viewpage.action?pageId=12387840) as [Impair Gateway](https://github.com/opnfv/samplevnf/blob/master/VNFs/DPPD-PROX/helper-scripts/rapid/impair.cfg).
diff --git a/USAGE.md b/USAGE.md
index 755e2a0e7..d705d6e01 100644
--- a/USAGE.md
+++ b/USAGE.md
@@ -6,17 +6,6 @@
- [Syntax and Usage](USAGE.md#syntax-for-running-any-of-the-tests)
- [Common Examples](USAGE.md#common-example-commands)
- [Logging Options](USAGE.md#logging-options)
-- [Workload Tests](USAGE.md#workload-tests)
- - [Compatibility, Installability, and Upgradability Tests](USAGE.md#compatibility-installability-and-upgradability-tests)
- - [Microservice Tests](USAGE.md#microservice-tests)
- - [State Tests](USAGE.md#state-tests)
- - [Reliability, Resilience and Availability Tests](USAGE.md#reliability-resilience-and-availability)
- - [Observability and Diagnostic Tests](USAGE.md#observability-and-diagnostic-tests)
- - [Security Tests](USAGE.md#security-tests)
- - [Configuration Tests](USAGE.md#configuration-tests)
- - [5g Tests](USAGE.md#5g-tests)
- - [Ran Tests](USAGE.md#ran-tests)
-- [Platform Tests](USAGE.md#platform-tests)
### Overview
@@ -179,1234 +168,6 @@ shards install # only for first install
crystal bin/ameba.cr
```
----
-# Compatibility, Installability, and Upgradability Tests
-
-##### To run all of the compatibility tests
-
-```
-./cnf-testsuite compatibility
-```
-
-## [Increase decrease capacity:](https://github.com/cnti-testcatalog/testsuite/blob/refactor_usage_doc%231371/docs/LIST_OF_TESTS.md#increase-decrease-capacity)
-##### To run both increase and decrease tests, you can use the alias command that calls them both:
-```
-./cnf-testsuite increase_decrease_capacity
-```
-
-Remediation for failing this test:
-
-Check out the kubectl docs for how to [manually scale your cnf.](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#scaling-resources)
-
-Also here is some info about [things that could cause failures.](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#failed-deployment)
-
-
-
-
-
-## [Helm chart published](docs/LIST_OF_TESTS.md#helm-chart-published)
-
-##### To run the Helm chart published test, you can use the following command:
-```
-./cnf-testsuite helm_chart_published
-```
-
-Remediation for failing this test:
-
-Make sure your CNF helm charts are published in a Helm Repository.
-
-
-
-
-
-## [Helm chart is valid](docs/LIST_OF_TESTS.md#helm-chart-valid)
-
-##### To run the Helm chart vaild test, you can use the following command:
-```
-./cnf-testsuite helm_chart_valid
-```
-
-Remediation for failing this test:
-
-Make sure your helm charts pass lint tests.
-
-
-
-
-
-## [Helm deploy](docs/LIST_OF_TESTS.md#helm-deploy)
-
-##### To run the Helm deploy test, you can use the following command:
-```
-./cnf-testsuite helm_deploy
-```
-
-Remediation for failing this test:
-
-Make sure your helm charts are valid and can be deployed to clusters.
-
-
-
-
-
-## [Rollback](docs/LIST_OF_TESTS.md#rollback)
-
-##### To run the Rollback test, you can use the following command:
-```
-./cnf-testsuite rollback
-```
-Remediation for failing this test:
-
-Ensure that you can upgrade your CNF using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) command, then rollback the upgrade using the [Kubectl Rollout Undo](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#rollout) command.
-
-
-
-
-### [Rolling update](docs/LIST_OF_TESTS.md#rolling-update)
-
-##### To run the Rolling update test, you can use the following command:
-```
-./cnf-testsuite rolling_update
-```
-
-Remediation for failing this test:
-
-Ensure that you can successfully perform a rolling upgrade of your CNF using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) command.
-
-
-
-
-
-### [Rolling version change](docs/LIST_OF_TESTS.md#rolling-version-change)
-
-##### To run the Rolling version change test, you can use the following command:
-```
-./cnf-testsuite rolling_version_change
-```
-
-Remediation for failing this test:
-
-Ensure that you can successfully roll back the software version of your CNF by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) command.
-
-
-
-
-### [Rolling downgrade](docs/LIST_OF_TESTS.md#rolling-downgrade)
-
-##### To run the Rolling downgrade test, you can use the following command:
-```
-./cnf-testsuite rolling_downgrade
-```
-
-Remediation for failing this test:
-
-Ensure that you can successfully change the software version of your CNF back to an older version by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) command.
-
-
-
-
-## [CNI compatible](docs/LIST_OF_TESTS.md#cni-compatible)
-
-##### To run the CNI compatible test, you can use the following command:
-```
-./cnf-testsuite cni_compatible
-```
-
-Remediation for failing this test:
-
-Ensure that your CNF is compatible with Calico, Cilium and other available CNIs.
-
-
-
-
-
-## [Kubernetes Alpha APIs](docs/LIST_OF_TESTS.md#kubernetes-alpha-apis---proof-of-concept)
-
-##### To run the Kubernetes Alpha APIs test, you can use the following command:
-```
-./cnf-testsuite alpha_k8s_apis
-```
-
-Remediation for failing this test:
-
-Make sure your CNFs are not utilizing any Kubernetes alpha APIs. You can learn more about Kubernetes API versioning [here](https://bit.ly/k8s_api).
-
-
-
-
-
- Details for Compatibility, Installability and Upgradability Tests To Do's
-
-
-#### :memo: (To Do) To check if the CNF's CNI plugin accepts valid calls from the [CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md)
-
-```
-crystal src/cnf-testsuite.cr cni_spec
-```
-
-#### :memo: (To Do) To check for the use of beta K8s API endpoints
-
-```
-crystal src/cnf-testsuite.cr api_snoop_beta
-```
-
-#### :memo: (To Do) To check for the use of generally available (GA) K8s API endpoints
-
-```
-crystal src/cnf-testsuite.cr api_snoop_general_apis
-```
-
-#### :memo: (To Do) To test small scale autoscaling
-
-```
-crystal src/cnf-testsuite.cr small_autoscaling
-```
-
-#### :memo: (To Do) To test [large scale autoscaling](https://github.com/cncf/cnf-testbed)
-
-```
-crystal src/cnf-testsuite.cr large_autoscaling
-```
-
-#### :memo: (To Do) To test if the CNF responds to [network](https://github.com/alexei-led/pumba) [chaos](https://github.com/worstcase/blockade)
-
-```
-crystal src/cnf-testsuite.cr network_chaos
-```
-
-#### :memo: (To Do) To test if the CNF control layer uses [external retry logic](https://github.com/envoyproxy/envoy)
-
-```
-crystal src/cnf-testsuite.cr external_retry
-```
-
-
-
-
-
-
-
-# Microservice Tests
-
-##### To run all of the microservice tests
-
-```
-./cnf-testsuite microservice
-```
-
-## [Reasonable Image Size](docs/LIST_OF_TESTS.md#reasonable-image-size)
-
-##### To run the Reasonable image size, you can use the following command:
-```
-./cnf-testsuite reasonable_image_size
-```
-
-Remediation for failing this test:
-
-Ensure your CNF's image size is under 5GB.
-
-
-
-
-
-## [Reasonable startup time](docs/LIST_OF_TESTS.md#reasonable-startup-time)
-
-##### To run the Reasonable startup time test, you can use the following command:
-
-```
-./cnf-testsuite reasonable_startup_time
-```
-
-Remediation for failing this test:
-
-Ensure that your CNF gets into a running state within 30 seconds.
-
-
-
-
-## [Single process type in one container](docs/LIST_OF_TESTS.md#single-process-type-in-one-container)
-
-##### To run the Single process type test, you can use the following command:
-
-```
-./cnf-testsuite single_process_type
-```
-
-Remediation for failing this test:
-
-Ensure that there is only one process type within a container. This does not count against child processes, e.g. nginx or httpd could be a parent process with 10 child processes and pass this test, but if both nginx and httpd were running, this test would fail.
-
-
-
-
-
-
-## [Service discovery](docs/LIST_OF_TESTS.md#service-discovery)
-
-##### To run the Service discovery test, you can use the following command:
-
-```
-./cnf-testsuite service_discovery
-```
-
-Remediation for failing this test:
-
-Make sure the CNF exposes any of its containers as a Kubernetes Service. You can learn more about Kubernetes Service [here](https://kubernetes.io/docs/concepts/services-networking/service/).
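-
-For illustration only, a minimal Service exposing a CNF container might look like the following (the name, selector, and ports here are hypothetical):
-
-```
-apiVersion: v1
-kind: Service
-metadata:
-  name: my-cnf
-spec:
-  selector:
-    app.kubernetes.io/name: my-cnf
-  ports:
-    - port: 80
-      targetPort: 8080
-```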
-
-
-
-
-
-## [Shared database](docs/LIST_OF_TESTS.md#shared-database)
-
-##### To run the Shared database test, you can use the following command:
-
-
-```
-./cnf-testsuite shared_database
-```
-
-Remediation for failing this test:
-
-Make sure that your CNFs containers are not sharing the same [database](https://martinfowler.com/bliki/IntegrationDatabase.html).
-
-
-## [Specialized Init System](docs/LIST_OF_TESTS.md#specialized-init-system)
-
-##### To run the Specialized Init System test, you can use the following command:
-
-```
-./cnf-testsuite specialized_init_system
-```
-
-Remediation for failing this test:
-
-Use init systems that are purpose-built for containers, such as tini, dumb-init, or s6-overlay.
-
-## [Sigterm Handled](docs/LIST_OF_TESTS.md#sig-term-handled)
-
-##### To run the Sigterm Handled test, you can use the following command:
-
-```
-./cnf-testsuite sig_term_handled
-```
-
-Remediation for failing this test:
-
-Make the container's PID 1 process handle SIGTERM; enable process namespace sharing in Kubernetes or use a specialized init system.
-
-
-## [Zombie Handled](docs/LIST_OF_TESTS.md#zombie-handled)
-
-##### To run the Zombie Handled test, you can use the following command:
-
-```
-./cnf-testsuite zombie_handled
-```
-
-Remediation for failing this test:
-
-Make the container's PID 1 process handle/reap zombie processes; enable process namespace sharing in Kubernetes or use a specialized init system.
-
-
-
-# State Tests
-
-##### To run all of the state tests:
-
-```
-./cnf-testsuite state
-```
-
-## [Node drain](docs/LIST_OF_TESTS.md#node-drain)
-
-##### To run the Node drain test, you can use the following command:
-
-```
-./cnf-testsuite node_drain
-```
-
-Please note that this test requires a cluster with at least two schedulable nodes.
-
-Remediation for failing this test:
-Ensure that your CNF can be successfully rescheduled when a node fails or is [drained](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/).
-
-
-
-
-
-## [Volume hostpath not found](docs/LIST_OF_TESTS.md#volume-hostpath-not-found)
-
-##### To run the Volume hostpath not found test, you can use the following command:
-
-```
-./cnf-testsuite volume_hostpath_not_found
-```
-
-Remediation for failing this test:
-Ensure that none of the containers in your CNFs are using ["hostPath"] to mount volumes.
-
-
-
-
-## [No local volume configuration](docs/LIST_OF_TESTS.md#no-local-volume-configuration)
-
-##### To run the No local volume configuration test, you can use the following command:
-
-```
-./cnf-testsuite no_local_volume_configuration
-```
-
-Remediation for failing this test:
-Ensure that your CNF isn't using any persistent volumes that use a ["local"] mount point.
-
-
-
-
-## [Elastic volumes](docs/LIST_OF_TESTS.md#elastic-volumes)
-
-##### To run the Elastic volume test, you can use the following command:
-
-```
-./cnf-testsuite elastic_volume
-```
-
-Remediation for failing this test:
-Setup and use elastic persistent volumes instead of local storage.
-
-
-
-
-## [Database persistence](docs/LIST_OF_TESTS.md#database-persistence)
-
-##### To run the Database persistence test, you can use the following command:
-
-```
-./cnf-testsuite database_persistence
-```
-
-Remediation for failing this test:
-Select a database configuration that uses statefulsets and elastic storage volumes.
-
-
-
-# Reliability, Resilience and Availability
-
-##### To run all of the resilience tests
-```
-./cnf-testsuite resilience
-```
-
-## [CNF network latency](docs/LIST_OF_TESTS.md#cnf-under-network-latency)
-
-##### To run the CNF network latency test, you can use the following command:
-
-```
-./cnf-testsuite pod_network_latency
-```
-
-Remediation for failing this test:
-Ensure that your CNF doesn't stall or get into a corrupted state when network degradation occurs.
-A mitigation strategy (in this case, keeping the timeout, i.e. access latency, low) could be to use some middleware that can switch traffic based on SLO parameters.
-
-
-## [CNF disk fill](docs/LIST_OF_TESTS.md#cnf-with-host-disk-fill)
-
-##### To run the CNF disk fill test, you can use the following command:
-
-```
-./cnf-testsuite disk_fill
-```
-
-Remediation for failing this test:
-Ensure that your CNF is resilient and doesn't stall when heavy IO causes a degradation in storage resource availability.
-
-
-
-## [Pod delete](docs/LIST_OF_TESTS.md#pod-delete)
-
-##### To run the CNF Pod delete test, you can use the following command:
-```
-./cnf-testsuite pod_delete
-```
-
-Remediation for failing this test:
-Ensure that your CNF is resilient and doesn't fail on a forced/graceful pod failure on specific or random replicas of an application.
-
-
-
-## [Memory hog](docs/LIST_OF_TESTS.md#memory-hog)
-
-##### To run the Memory hog test, you can use the following command:
-```
-./cnf-testsuite pod_memory_hog
-```
-
-Remediation for failing this test:
-Ensure that your CNF is resilient to heavy memory usage and can maintain some level of availability.
-
-
-
-## [IO Stress](docs/LIST_OF_TESTS.md#io-stress)
-
-##### To run the IO Stress test, you can use the following command:
-```
-./cnf-testsuite pod_io_stress
-```
-
-Remediation for failing this test:
-Ensure that your CNF is resilient to continuous and heavy disk IO load and can maintain some level of availability
-
-
-## [Network corruption](docs/LIST_OF_TESTS.md#network-corruption)
-
-##### To run the Network corruption test, you can use the following command:
-```
-./cnf-testsuite pod_network_corruption
-```
-
-Remediation for failing this test:
-Ensure that your CNF is resilient to a lossy/flaky network and can maintain a level of availability.
-
-
-
-
-## [Network duplication](docs/LIST_OF_TESTS.md#network-duplication)
-
-##### To run the Network duplication test, you can use the following command:
-```
-./cnf-testsuite pod_network_duplication
-```
-
-Remediation for failing this test:
-Ensure that your CNF is resilient to erroneously duplicated packets and can maintain a level of availability.
-
-
-
-## [Pod DNS errors](docs/LIST_OF_TESTS.md#pod-dns-errors)
-
-##### To run the Pod DNS error test, you can use the following command:
-```
-./cnf-testsuite pod_dns_error
-```
-
-Remediation for failing this test:
-Ensure that your CNF is resilient to DNS resolution failures and can maintain a level of availability.
-
-
-
-
-## [Helm chart liveness entry](docs/LIST_OF_TESTS.md#helm-chart-liveness-entry)
-
-##### To run the Helm chart liveness entry test, you can use the following command:
-
-```
-./cnf-testsuite liveness
-```
-
-Remediation for failing this test:
-Ensure that your CNF has a [Liveness Probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) configured.
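-
-As a hedged sketch (container name, image, paths, and ports are placeholders), a container spec fragment with probes configured could look like this; the same pattern also satisfies the readiness entry test below:
-
-```
-    containers:
-      - name: my-cnf
-        image: registry.example.com/my-cnf:1.2.3
-        livenessProbe:
-          httpGet:
-            path: /healthz
-            port: 8080
-          initialDelaySeconds: 5
-          periodSeconds: 10
-        readinessProbe:
-          httpGet:
-            path: /ready
-            port: 8080
-```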
-
-
-
-
-## [Helm chart readiness entry](docs/LIST_OF_TESTS.md#helm-chart-readiness-entry)
-
-##### To run the Helm chart readiness entry test, you can use the following command:
-
-```
-./cnf-testsuite readiness
-```
-Remediation for failing this test:
-Ensure that your CNF has a [Readiness Probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) configured.
-
-
-
-# Observability and Diagnostic Tests
-
-##### To run all observability tests, you can use the following command:
-
-```
-./cnf-testsuite observability
-```
-
-## [Use stdout/stderr for logs](docs/LIST_OF_TESTS.md#use-stdoutstderr-for-logs)
-
-##### To run the stdout/stderr logging test, you can use the following command:
-
-```
-./cnf-testsuite log_output
-```
-
-Remediation for failing this test:
-Make sure applications and CNFs are sending log output to STDOUT and/or STDERR.
-
-
-
-## [Prometheus installed](docs/LIST_OF_TESTS.md#prometheus-installed)
-
-##### To run the Prometheus installed test, you can use the following command:
-```
-./cnf-testsuite prometheus_traffic
-```
-
-Remediation for failing this test:
-Install and configure Prometheus for your CNF.
-
-
-
-
-## [Routed logs](docs/LIST_OF_TESTS.md#routed-logs)
-
-##### To run the routed logs test, you can use the following command:
-```
-./cnf-testsuite routed_logs
-```
-
-Remediation for failing this test:
-Install and configure fluentd or fluentbit to collect data and logs. See more at [fluentd.org](https://bit.ly/fluentd) for fluentd or [fluentbit.io](https://fluentbit.io/) for fluentbit.
-
-
-
-## [OpenMetrics compatible](docs/LIST_OF_TESTS.md#openmetrics-compatible)
-
-##### To run the OpenMetrics compatible test, you can use the following command:
-```
-./cnf-testsuite open_metrics
-```
-
-Remediation for failing this test:
-Ensure that your CNF is publishing OpenMetrics compatible metrics.
-
-
-
-
-## [Jaeger tracing](docs/LIST_OF_TESTS.md#jaeger-tracing)
-
-##### To run the Jaeger tracing test, you can use the following command:
-```
-./cnf-testsuite tracing
-```
-
-Remediation for failing this test:
-Ensure that your CNF is both using & publishing traces to Jaeger.
-
-
-
-
-# Security Tests
-
-##### To run all of the security tests, you can use the following command:
-
-```
-./cnf-testsuite security
-```
-
-## [Container socket mounts](docs/LIST_OF_TESTS.md#container-socket-mounts)
-
-##### To run the Container socket mount test, you can use the following command:
-
-```
-./cnf-testsuite container_sock_mounts
-```
-
-Remediation for failing this test:
-Make sure your CNF doesn't mount `/var/run/docker.sock`, `/var/run/containerd.sock` or `/var/run/crio.sock` on any containers.
-
-
-
-## [External IPs](docs/LIST_OF_TESTS.md#external-ips)
-
-##### To run the External IPs test, you can use the following command:
-```
-./cnf-testsuite external_ips
-```
-
-Remediation for failing this test:
-Make sure not to define external IPs in your Kubernetes service configuration.
-
-
-## [Privileged containers](docs/LIST_OF_TESTS.md#privileged-containers)
-
-##### To run the Privilege container test, you can use the following command:
-
-```
-./cnf-testsuite privileged_containers
-```
-
-
-Remediation for failing this test:
-
-Remove privileged capabilities by setting securityContext.privileged to false. If you must deploy a Pod as privileged, add other restrictions to it, such as network policies or Seccomp profiles, and still remove all unnecessary capabilities.
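-
-A minimal illustrative securityContext fragment (container name and image are placeholders) that avoids privileged mode and drops unneeded capabilities:
-
-```
-    containers:
-      - name: my-cnf
-        image: registry.example.com/my-cnf:1.2.3
-        securityContext:
-          privileged: false
-          capabilities:
-            drop:
-              - ALL
-```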
-
-
-
-
-## [Privilege escalation](docs/LIST_OF_TESTS.md#privilege-escalation)
-
-##### To run the Privilege escalation test, you can use the following command:
-```
-./cnf-testsuite privilege_escalation
-```
-
-Remediation for failing this test:
-If your application does not need it, make sure the allowPrivilegeEscalation field of the securityContext is set to false. See more at [ARMO-C0016](https://bit.ly/C0016_privilege_escalation)
-
-
-
-
-## [Symlink file system](docs/LIST_OF_TESTS.md#symlink-file-system)
-
-##### To run the Symlink file test, you can use the following command:
-```
-./cnf-testsuite symlink_file_system
-```
-
-Remediation for failing this test:
-To mitigate this vulnerability without upgrading kubelet, you can disable the VolumeSubpath feature gate on kubelet and kube-apiserver, or remove any existing Pods using subPath or subPathExpr feature.
-
-
-
-## [Sysctls](docs/LIST_OF_TESTS.md#sysctls)
-
-##### To run the Sysctls test, you can use the following command:
-```
-./cnf-testsuite sysctls
-```
-
-Remediation for failing this test:
-The spec.securityContext.sysctls field must be unset or not used.
-
-
-
-## [Application credentials](docs/LIST_OF_TESTS.md#application-credentials)
-
-##### To run the Application credentials test, you can use the following command:
-```
-./cnf-testsuite application_credentials
-```
-
-Remediation for failing this test:
-Use Kubernetes secrets or Key Management Systems to store credentials.
-
-
-
-## [Host network](docs/LIST_OF_TESTS.md#host-network)
-
-##### To run the Host network test, you can use the following command:
-```
-./cnf-testsuite host_network
-```
-
-Remediation for failing this test:
-Only connect PODs to the hostNetwork when it is necessary. If not, set the hostNetwork field of the pod spec to false, or completely remove it (false is the default). Allow only those PODs that must have access to host network by design.
-
-
-
-
-
-## [Service account mapping](docs/LIST_OF_TESTS.md#service-account-mapping)
-
-##### To run the Service account mapping test, you can use the following command:
-```
-./cnf-testsuite service_account_mapping
-```
-
-Remediation for failing this test:
-Disable automatic mounting of service account tokens to PODs either at the service account level or at the individual POD level, by specifying the automountServiceAccountToken: false. Note that POD level takes precedence.
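-
-A hedged example of disabling token automounting at the service account level (the name is hypothetical); the same field can instead be set in the Pod spec, where it takes precedence:
-
-```
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: my-cnf-sa
-automountServiceAccountToken: false
-# or, per Pod (takes precedence over the service account):
-# spec:
-#   automountServiceAccountToken: false
-```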
-
-
-
-## [Ingress and Egress blocked](docs/LIST_OF_TESTS.md#ingress-and-egress-blocked)
-
-##### To run the Ingress and Egress test, you can use the following command:
-```
-./cnf-testsuite ingress_egress_blocked
-```
-
-Remediation for failing this test:
-
-By default, you should disable or restrict Ingress and Egress traffic on all pods.
-
-
-
-## [Insecure capabilities](docs/LIST_OF_TESTS.md#insecure-capabilities)
-
-##### To run the Insecure capabilities test, you can use the following command:
-
-```
-./cnf-testsuite insecure_capabilities
-```
-
-
-Remediation for failing this test:
-
-Remove all insecure capabilities which aren’t necessary for the container.
-
-
-
-
-## [Non Root containers](docs/LIST_OF_TESTS.md#non-root-containers)
-
-##### To run the Non-root containers test, you can use the following command:
-
-```
-./cnf-testsuite non_root_containers
-```
-
-Remediation for failing this test:
-
-If your application does not need root privileges, make sure to define runAsUser and runAsGroup under the PodSecurityContext to use user ID 1000 or higher, do not turn on the allowPrivilegeEscalation bit, and set runAsNonRoot to true.
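-
-An illustrative Pod spec fragment (user/group IDs, names, and image are placeholders) following this guidance:
-
-```
-spec:
-  securityContext:
-    runAsNonRoot: true
-    runAsUser: 1000
-    runAsGroup: 1000
-  containers:
-    - name: my-cnf
-      image: registry.example.com/my-cnf:1.2.3
-      securityContext:
-        allowPrivilegeEscalation: false
-```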
-
-
-
-## [Host PID/IPC privileges](docs/LIST_OF_TESTS.md#host-pidipc-privileges)
-
-##### To run the Host PID/IPC test, you can use the following command:
-
-```
-./cnf-testsuite host_pid_ipc_privileges
-```
-
-Remediation for failing this test:
-
-Apply the least-privilege principle and remove the hostPID and hostIPC privileges from the yaml configuration unless they are absolutely necessary.
-
-
-
-
-## [Linux hardening](docs/LIST_OF_TESTS.md#linux-hardening)
-
-##### To run the Linux hardening test, you can use the following command:
-```
-./cnf-testsuite linux_hardening
-```
-
-Remediation for failing this test:
-
-Use AppArmor, Seccomp, SELinux and Linux Capabilities mechanisms to restrict containers' abilities to utilize unwanted privileges.
-
-
-
-
-
-## [CPU limits](docs/LIST_OF_TESTS.md#cpu-limits)
-
-##### To run the CPU limits test, you can use the following command:
-```
-./cnf-testsuite cpu_limits
-```
-
-Remediation for failing this test:
-
-Define LimitRange and ResourceQuota policies to limit CPU usage for namespaces or in the deployment/POD yamls.
-
-
-
-
-## [Memory limits](docs/LIST_OF_TESTS.md#memory-limits)
-
-##### To run the memory limits test, you can use the following command:
-```
-./cnf-testsuite memory_limits
-```
-
-Remediation for failing this test:
-
-Define LimitRange and ResourceQuota policies to limit memory usage for namespaces or in the deployment/POD yamls.
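-
-As a rough sketch (values and names are placeholders, tune them to your workload), per-container requests and limits can also be set directly in the Pod spec; the same fragment covers the CPU limits test above:
-
-```
-    containers:
-      - name: my-cnf
-        image: registry.example.com/my-cnf:1.2.3
-        resources:
-          requests:
-            cpu: 250m
-            memory: 256Mi
-          limits:
-            cpu: "1"
-            memory: 512Mi
-```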
-
-
-
-
-## [Immutable File Systems](docs/LIST_OF_TESTS.md#immutable-file-systems)
-
-##### To run the Immutable File Systems test, you can use the following command:
-```
-./cnf-testsuite immutable_file_systems
-```
-
-Remediation for failing this test:
-
-Set the filesystem of the container to read-only when possible. If the container's application needs to write into the filesystem, it is possible to mount secondary filesystems for specific directories where the application requires write access.
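-
-A hedged sketch of this pattern (names and paths are hypothetical): the root filesystem is read-only and a small emptyDir is mounted where the application needs to write:
-
-```
-spec:
-  containers:
-    - name: my-cnf
-      image: registry.example.com/my-cnf:1.2.3
-      securityContext:
-        readOnlyRootFilesystem: true
-      volumeMounts:
-        - name: tmp
-          mountPath: /tmp
-  volumes:
-    - name: tmp
-      emptyDir: {}
-```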
-
-
-
-
-## [HostPath Mounts](docs/LIST_OF_TESTS.md#hostpath-mounts)
-
-##### To run the HostPath Mounts test, you can use the following command:
-```
-./cnf-testsuite hostpath_mounts
-```
-
-Remediation for failing this test:
-
-Refrain from using a hostPath mount.
-
-
-
-
-## [SELinux options](docs/LIST_OF_TESTS.md#selinux-options)
-
-##### To run the SELinux options test, you can use the following command:
-```
-./cnf-testsuite selinux_options
-```
-
-Remediation for failing this test:
-Ensure the following guidelines are followed for any cluster resource that allows SELinux options.
-
-- If the SELinux option `type` is set, it should only be one of the allowed values: `container_t`, `container_init_t`, or `container_kvm_t`.
-- SELinux options `user` or `role` should not be set.
-
-
-
-
-
- Details for Security Tests To Do's
-
-
-#### :memo: (To Do) To check if there are any [shells running in the container](https://github.com/open-policy-agent/gatekeeper)
-
-```
-crystal src/cnf-testsuite.cr shells
-```
-
-#### :memo: (To Do) To check if there are any [protected directories](https://github.com/open-policy-agent/gatekeeper) or files that are accessed from within the container
-
-```
-crystal src/cnf-testsuite.cr protected_access
-```
-
-
-
-
-# Configuration Tests
-
-##### To run all Configuration tests, you can use the following command:
-
-```
-./cnf-testsuite configuration_lifecycle
-```
-
-## [Default namespaces](docs/LIST_OF_TESTS.md#default-namespaces)
-
-##### To run the Default namespace test, you can use the following command:
-```
-./cnf-testsuite default_namespace
-```
-
-Remediation for failing this test:
-
-Ensure that your CNF is configured to use a Namespace and is not using the default namespace.
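-
-For illustration (the namespace and workload names are hypothetical), set an explicit namespace on the CNF's resources instead of relying on `default`:
-
-```
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: my-cnf
-  namespace: my-cnf-namespace
-```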
-
-
-
-
-
-## [Latest tag](docs/LIST_OF_TESTS.md#latest-tag)
-
-##### To run the Latest tag test, you can use the following command:
-```
-./cnf-testsuite latest_tag
-```
-
-Remediation for failing this test:
-
-When specifying container images, always specify a tag and ensure you use an immutable tag that maps to a specific version of the application Pod. Remove any usage of the `latest` tag, as it is not guaranteed to always point to the same version of the image.
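-
-A minimal illustrative container spec fragment (image name and tag are placeholders); the same applies to the versioned tag test below:
-
-```
-    containers:
-      - name: my-cnf
-        # pinned, immutable tag instead of :latest
-        image: registry.example.com/my-cnf:1.2.3
-```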
-
-
-
-
-## [Require labels](docs/LIST_OF_TESTS.md#require-labels)
-
-##### To run the require labels test, you can use the following command:
-```
-./cnf-testsuite require_labels
-```
-
-Remediation for failing this test:
-
-Make sure to define `app.kubernetes.io/name` label under metadata for your CNF.
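-
-An illustrative metadata fragment (the name is a placeholder) with the required label:
-
-```
-metadata:
-  name: my-cnf
-  labels:
-    app.kubernetes.io/name: my-cnf
-```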
-
-
-
-
-## [Versioned tag](docs/LIST_OF_TESTS.md#versioned-tag)
-
-##### To run the versioned tag test, you can use the following command:
-```
-./cnf-testsuite versioned_tag
-```
-
-Remediation for failing this test:
-
-When specifying container images, always specify a tag and ensure you use an immutable tag that maps to a specific version of the application Pod. Remove any usage of the `latest` tag, as it is not guaranteed to always point to the same version of the image.
-
-
-
-
-## [nodePort not used](docs/LIST_OF_TESTS.md#nodeport-not-used)
-
-##### To run the nodePort not used test, you can use the following command:
-```
-./cnf-testsuite nodeport_not_used
-```
-
-Remediation for failing this test:
-
-Review all Helm Charts & Kubernetes Manifest files for the CNF and remove all occurrences of the nodePort field in your configuration. Alternatively, configure a service or use another mechanism for exposing your container.
-
-
-
-
-## [hostPort not used](docs/LIST_OF_TESTS.md#hostport-not-used)
-
-##### To run the hostPort not used test, you can use the following command:
-
-```
-./cnf-testsuite hostport_not_used
-```
-
-Remediation for failing this test:
-
-Review all Helm Charts & Kubernetes Manifest files for the CNF and remove all occurrences of the hostPort field in your configuration. Alternatively, configure a service or use another mechanism for exposing your container.
-
-
-
-
-
-## [Hardcoded IP addresses in K8s runtime configuration](docs/LIST_OF_TESTS.md#Hardcoded-ip-addresses-in-k8s-runtime-configuration)
-
-##### To run the Hardcoded IP addresses test, you can use the following command:
-
-```
-./cnf-testsuite hardcoded_ip_addresses_in_k8s_runtime_configuration
-```
-
-Remediation for failing this test:
-
-Review all Helm Charts & Kubernetes Manifest files of the CNF and look for any hardcoded usage of IP addresses. If any are found, you will need to use an operator or some other method to abstract the IP management out of your configuration in order to pass this test.
-
-
-
-
-## [Secrets used](docs/LIST_OF_TESTS.md#secrets-used)
-
-##### To run the Secrets used test, you can use the following command:
-```
-./cnf-testsuite secrets_used
-```
-
-Rules for the test: The whole test passes if _any_ workload resource in the CNF uses a (non-exempt) secret. If no workload resources use a (non-exempt) secret, the test is skipped.
-
-Remediation for failing this test:
-
-Remove any sensitive data stored in configmaps, environment variables and instead utilize K8s Secrets for storing such data. Alternatively, you can use an operator or some other method to abstract hardcoded sensitive data out of your configuration.
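-
-A hedged sketch of moving a credential into a Secret (names and values are placeholders):
-
-```
-apiVersion: v1
-kind: Secret
-metadata:
-  name: my-cnf-credentials
-type: Opaque
-stringData:
-  db-password: example-password
-```
-
-The workload can then reference it via `valueFrom.secretKeyRef` (or `envFrom`) instead of a plain environment variable or ConfigMap entry.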
-
-
-
-
-## [Immutable configmaps](docs/LIST_OF_TESTS.md#immutable-configmap)
-
-##### To run the immutable configmap test, you can use the following command:
-```
-./cnf-testsuite immutable_configmap
-```
-
-Remediation for failing this test:
-Use immutable configmaps for any non-mutable configuration data.
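-
-For illustration (name and data are placeholders), marking a ConfigMap immutable only requires the `immutable: true` field:
-
-```
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: my-cnf-config
-immutable: true
-data:
-  app.conf: |
-    log_level=info
-```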
-
-
-# 5g Tests
-
-##### To run all 5g tests, you can use the following command:
-
-```
-./cnf-testsuite 5g
-```
-
-## [smf_upf_core_validator](docs/LIST_OF_TESTS.md#smf_upf_core_validator)
-
-##### To run the 5g core_validator test, you can use the following command:
-
-```
-./cnf-testsuite smf_upf_core_validator
-```
-## [suci_enabled](docs/LIST_OF_TESTS.md#suci_enabled)
-##### To run the 5g suci_enabled test, you can use the following command:
-
-```
-./cnf-testsuite suci_enabled
-```
-
-# RAN Tests
-
-##### To run all RAN tests, you can use the following command:
-
-```
-./cnf-testsuite ran
-```
-
-## [oran_e2_connection](docs/LIST_OF_TESTS.md#oran_e2_connection)
-
-##### To run the oran e2 connection test, you can use the following command:
-
-```
-./cnf-testsuite oran_e2_connection
-```
-
-
-
-# Platform Tests
-
-##### To run all Platform tests, you can use the following command:
-
-```
-./cnf-testsuite platform
-```
-
-## [K8s Conformance](docs/LIST_OF_TESTS.md#k8s-conformance)
-
-##### To run the K8s Conformance test, you can use the following command:
-
-```
-./cnf-testsuite k8s_conformance
-```
-
-Remediation for failing this test:
-Check that [Sonobuoy](https://github.com/vmware-tanzu/sonobuoy) can be successfully run and passes without failure on your platform. Any failures found by Sonobuoy will provide debug and remediation steps required to get your K8s cluster into a conformant state.
-
-
-
-## [ClusterAPI enabled](docs/LIST_OF_TESTS.md#clusterapi-enabled)
-
-##### To run the ClusterAPI enabled test, you can use the following command:
-
-```
-./cnf-testsuite clusterapi_enabled
-```
-
-Remediation for failing this test:
-Enable ClusterAPI and start using it to manage the provisioning and lifecycle of your Kubernetes clusters.
-
-
-
-##### To run all platform hardware and scheduling tests, you can use the following command:
-```
-./cnf-testsuite platform:hardware_and_scheduling
-```
-
-## [OCI Compliant](docs/LIST_OF_TESTS.md#oci-compliant)
-
-##### To run the OCI Compliant test, you can use the following command:
-
-```
-./cnf-testsuite platform:oci_compliant
-```
-Remediation for failing this test:
-
-Check if your Kubernetes Platform is using an [OCI Compliant Runtime](https://opencontainers.org/). If your platform is not using an OCI Compliant Runtime, you'll need to switch to a new runtime that is OCI Compliant in order to pass this test.
-
-
-
-
-##### (PoC) To run All platform resilience tests, you can use the following command:
-
-```
-./cnf-testsuite platform:resilience poc
-```
-
-## [Worker reboot recovery](docs/LIST_OF_TESTS.md#poc-worker-reboot-recovery)
-
-##### To run the Worker reboot recovery test, you can use the following command:
-
-```
-./cnf-testsuite platform:worker_reboot_recovery poc destructive
-```
-Remediation for failing this test:
-
-Reboot a worker node in your Kubernetes cluster and verify that the node can recover and re-join the cluster in a schedulable state. Workloads should also be rescheduled to the node once it's back online.
-
-
-
-
-##### :heavy_check_mark: Run All platform security tests
-
-```
-./cnf-testsuite platform:security
-```
-## [Cluster admin](docs/LIST_OF_TESTS.md#cluster-admin)
-##### To run the Cluster admin test, you can use the following command:
-
-```
-./cnf-testsuite platform:cluster_admin
-```
-
-Remediation for failing this test:
-You should apply the least-privilege principle. Make sure cluster admin permissions are granted only when absolutely necessary. Don't use subjects with highly privileged permissions for daily operations.
-
-See more at [ARMO-C0035](https://bit.ly/C0035_cluster_admin)
-
-
-
-
-## [Control plane hardening](docs/LIST_OF_TESTS.md#control-plane-hardening)
-
-##### To run the Control plane hardening test, you can use the following command:
-
-```
-./cnf-testsuite platform:control_plane_hardening
-```
-
-Remediation for failing this test:
-
-Set the insecure-port flag of the API server to zero.
-
-See more at [ARMO-C0005](https://bit.ly/C0005_Control_Plane)
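-
-Where the platform exposes the API server manifest (for example a kubeadm-style static pod; paths and flags vary by distribution and Kubernetes version, and recent Kubernetes releases remove the insecure port entirely), the change looks roughly like:
-
-```
-spec:
-  containers:
-    - command:
-        - kube-apiserver
-        - --insecure-port=0
-        # ...keep the other existing flags unchanged
-```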
-
-
-
-
-
-## [Tiller images](docs/LIST_OF_TESTS.md#tiller-images)
-
-##### To run the Tiller images test, you can use the following command:
-```
-./cnf-testsuite platform:helm_tiller
-```
-
-Remediation for failing this test:
-Switch to using Helm v3+ and make sure not to pull any images with the name tiller in them.
-
-
+### Usage for categories and single tests
+Usage instructions for categories and single tests are located in [TEST_DOCUMENTATION](docs/TEST_DOCUMENTATION.md). Check there for the category or test you need.
diff --git a/docs/LIST_OF_TESTS.md b/docs/LIST_OF_TESTS.md
deleted file mode 100644
index 562d73b33..000000000
--- a/docs/LIST_OF_TESTS.md
+++ /dev/null
@@ -1,826 +0,0 @@
-# CNF Test Suite List of Tests - v0.27.0
-
-
-## Summary
-This document provides a summary of the tests included in the CNF Test Suite. Each test lists a general overview of what the test does, a link to the test code for that test, and links to additional information when relevant/available.
-
-To learn how to run these tests, see the [USAGE.md](../USAGE.md)
-
-To learn why these tests were written, see the [RATIONALE.md](../RATIONALE.md)
-
-
-List of Workload Tests
----
-
-# Compatibility, Installability, and Upgradability Category
-
-
-
-
-## [Increase decrease capacity:](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/compatibility.cr#L168)
-
-The increase and decrease capacity tests: HPA (horizontal pod autoscaling) will autoscale replicas to accommodate an increase in CPU, memory or other configured metrics, preventing disruption by allowing more requests and balancing out the utilisation across all of the pods.
-
-Decreasing capacity works the same way, but instead scales the number of replicas down to the number of pods that can handle the requests when traffic decreases.
-
-You can read more about horizontal pod autoscaling to create replicas [here](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) and in the [K8s scaling cheatsheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#scaling-resources).
-
-
-
-
-### [Increase capacity](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/compatibility.cr#L184)
-- Expectation: The number of replicas for a Pod increases
-
-**What's tested:** The number of Pod replicas is increased to 3 for the CNF image or release being tested.
-
-
-### [Decrease capacity](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/compatibility.cr#L213)
-- Expectation: The number of replicas for a Pod decreases
-
-**What's tested:** After `increase_capacity` increases the replicas to 3, it decreases back to 1.
-
-
-[**Usage**](../USAGE.md#increase-decrease-capacity)
-
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-the-increasing-and-decreasing-of-capacity-increase_decrease_capacity)
-
-
-## [Helm chart published](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/compatibility.cr#L406)
-- Expectation: The Helm chart is published in a Helm Repository.
-
-**What's tested:** Checks if the helm chart is found in a remote repository when running [`helm search`](https://helm.sh/docs/helm/helm_search_repo/).
-
-[**Usage**](../USAGE.md#helm-chart-published)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-helm-chart-is-published-helm_chart_published)
-
-
-## [Helm chart valid](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/compatibility.cr#L449)
-- Expectation: No syntax or validation problems are found in the chart.
-
-**What's tested:** Checks the syntax & validity of the chart using [`helm lint`](https://helm.sh/docs/helm/helm_lint/)
-
-[**Usage**](../USAGE.md#helm-chart-is-valid)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-helm-chart-is-valid-helm_chart_valid)
-
-
-
-## [Helm deploy](../USAGE.md#helm-deploy)
-- Expectation: The CNF was installed using Helm.
-
-**What's tested:** Checks if the CNF is installed by using a Helm Chart.
-
-[**Usage**](../USAGE.md#helm-deploy)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-helm-deploys-helm_deploy)
-
-## [Rollback:](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/compatibility.cr#L87)
-- Expectation: The CNF Software version can be successfully incremented, then rolled back.
-
-**What's tested:** Checks if the Pod can be upgraded to a new software version, then restored back to the original software version by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) & [Kubectl Rollout Undo](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#rollout) commands.
-
-[**Usage**](../USAGE.md#rollback)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-a-cnf-version-can-be-rolled-back-rollback)
-
-
-### [Rolling update](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/compatibility.cr#L8)
-- Expectation: The CNF Software version can be successfully incremented.
-
-**What's tested:** Checks if the Pod can be upgraded to a new software version by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-the-cnf-can-perform-a-rolling-update-rolling_update)
-
-
-### [Rolling version change](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/compatibility.cr#L8)
-- Expectation: The CNF Software version is successfully rolled back to its original version.
-
-**What's tested:** Checks if the Pod can be rolled back to the original software version by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) to perform a rollback.
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-a-cnf-version-can-be-downgraded-through-a-rolling_version_change-rolling_version_change)
-
-
-### [Rolling downgrade](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/compatibility.cr#L8)
-- Expectation: The CNF Software version is successfully downgraded to a software version older than the original installation version.
-
-**What's tested:** Checks if the Pod can be rolled back to an older software version (older than the original software version) by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) to perform a downgrade.
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-a-cnf-version-can-be-downgraded-through-a-rolling_downgrade-rolling_downgrade)
-
-
-
-
-## [CNI compatible](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/compatibility.cr#L588)
-- Expectation: CNF should be compatible with multiple and different CNIs
-
-**What's tested:** This installs temporary kind clusters and will test the CNF against both Calico and Cilium CNIs.
-
-[**Usage**](../USAGE.md#cni-compatible)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-cnf-is-compatible-with-different-cnis-cni_compatibility)
-
-
-
-## [Kubernetes Alpha APIs - Proof of Concept](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/configuration.cr#L499)
-- Expectation: CNF should not use Kubernetes alpha APIs
-
-**What's tested:** This checks if a CNF uses alpha or unstable versions of Kubernetes APIs
-
-[**Usage**](../USAGE.md#kubernetes-alpha-apis)
-
-[**Rationale & Reasoning**](../RATIONALE.md#poc-to-check-if-a-cnf-uses-kubernetes-alpha-apis-alpha_k8s_apis-alpha_k8s_apis)
-
-
-# Microservice Category
-
-## [Reasonable image size](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/microservice.cr#L200)
-- Expectation: CNF image size is under 5 gigs
-
-**What's tested:** Checks the size of the image used.
-
-[**Usage**](../USAGE.md#reasonable-image-size)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-cnf-has-a-reasonable-image-size-reasonable_image_size)
-
-
-## [Reasonable startup time](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/microservice.cr#L109)
-- Expectation: CNF starts up under one minute
-
-**What's tested:** Checks how long it takes for the CNF to pass a Readiness Probe and reach a ready/running state.
-
-[**Usage**](../USAGE.md#reasonable-startup-time)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-cnf-have-a-reasonable-startup-time-reasonable_startup_time)
-
-
-## [Single process type in one container](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/microservice.cr#L300)
-- Expectation: CNF container has one process type
-
-**What's tested:** This verifies that there is only one process type within one container. This does not count against child processes. For example, nginx or httpd could have a parent process and then 10 child processes, but if both nginx and httpd were running, this test would fail.
-
-[**Usage**](../USAGE.md#single-process-type-in-one-container)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-cnf-has-multiple-process-types-within-one-container-single_process_type)
-
-
-## [Service discovery](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/microservice.cr#L405)
-- Expectation: CNFs accessible to other applications should be exposed via a Service.
-
-**What's tested:** This tests and checks if the containers within a CNF have services exposed via a Kubernetes Service resource. Application access for microservices within a cluster should be exposed via a Service. Read more about K8s Service [here](https://kubernetes.io/docs/concepts/services-networking/service/).
-
-[**Usage**](../USAGE.md#service-discovery)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-cnf-exposes-any-of-its-containers-as-a-service-service_discovery-service_discovery)
-
-
-## [Shared database](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/microservice.cr#L26)
-- Expectation: Multiple microservices should not share the same database.
-
-**What's tested:** This tests if multiple CNFs are using the same database.
-
-[**Usage**](../USAGE.md#shared-database)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-cnf-uses-a-shared-database-shared_database)
-
-## [Specialized Init Systems](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/microservice.cr#L462)
-- Expectation: Container images should use specialized init systems for containers.
-
-**What's tested:** This tests if containers in pods have dumb-init, tini or s6-overlay as init processes.
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-cnf-uses-a-shared-database-shared_database)
-
-## [Sigterm Handled](https://github.com/cnti-testcatalog/testsuite/blob/v0.46.0/src/tasks/workload/microservice.cr#L500)
-- Expectation: Sigterm is handled by PID 1 process of containers.
-
-**What's tested:** This tests if the PID 1 process of containers handles SIGTERM.
-
-[**Rationale & Reasoning**](../RATIONALE.md#to_check_if_the_cnf_pid_1_processes_handle_sigterm)
-
-## [Zombie Handled](https://github.com/cnti-testcatalog/testsuite/blob/v0.46.0/src/tasks/workload/microservice.cr#L436)
-- Expectation: Zombie processes are handled/reaped by PID 1 process of containers.
-
-**What's tested:** This tests if the PID 1 process of containers handles/reaps zombie processes.
-
-[**Rationale & Reasoning**](../RATIONALE.md#to_check_if_zombie_processes_are_handled_correctly)
-
-# State Category
-
-## [Node drain](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/state.cr#L209)
-- Expectation: All workload resources are successfully rescheduled onto other available node(s).
-
-**What's tested:** A node is drained and workload resources rescheduled to another node, passing with a liveness and readiness check. This will skip when the cluster only has a single node.
-
-[**Usage**](../USAGE.md#node-drain)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-cnf-crashes-when-node-drain-occurs-node_drain)
-
-
-## [Volume hostpath not found](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/state.cr#L419)
-- Expectation: Volume host path configurations should not be used.
-
-**What's tested:** This tests if volume host paths are configured and used by the CNF.
-
-[**Usage**](../USAGE.md#volume-hostpath-not-found)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-the-cnf-uses-a-volume-host-path-volume_hostpath_not_found)
-
-
-## [No local volume configuration](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/state.cr#L457)
-- Expectation: Local storage should not be used or configured.
-
-**What's tested:** This tests if local volumes are being used for the CNF.
-
-[**Usage**](../USAGE.md#no-local-volume-configuration)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-the-cnf-uses-local-storage-no_local_volume_configuration)
-
-
-## [Elastic volumes](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/state.cr#L321)
-- Expectation: Elastic persistent volumes should be configured for statefulness.
-
-**What's tested:** This checks for elastic persistent volumes in use by the CNF.
-
-[**Usage**](../USAGE.md#elastic-volumes)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-the-cnf-uses-elastic-volumes-elastic_volumes)
-
-
-## [Database persistence](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/state.cr#L358)
-- Expectation: Elastic volumes and or statefulsets should be used for databases to maintain a minimum resilience level in K8s clusters.
-
-**What's tested:** This checks if elastic volumes and stateful sets are used for MySQL databases. If no MySQL database is found, the test is skipped.
-
-[**Usage**](../USAGE.md#database-persistence)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-the-cnf-uses-a-database-with-either-statefulsets-elastic-volumes-or-both-database_persistence)
-
-
-# Reliability, Resilience and Availability Category
-
-## [CNF under network latency](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/reliability.cr#L231)
-- Expectation: The CNF should continue to function when network latency occurs
-
-**What's tested:** [This experiment](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-network-latency/) causes network degradation without the pod being marked unhealthy/unworthy of traffic by kube-proxy (unless you have a liveness probe of sorts that measures latency and restarts/crashes the container). The idea of this experiment is to simulate issues within your pod network OR microservice communication across services in different availability zones/regions etc.
-
-The applications may stall or get corrupted while they wait endlessly for a packet. The experiment limits the impact (blast radius) to only the traffic you want to test by specifying IP addresses or application information. This experiment will help to improve the resilience of your services over time.
-
-[**Usage**](../USAGE.md#cnf-network-latency)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-cnf-crashes-when-network-latency-occurs-pod_network_latency)
-
-
-
-## [CNF with host disk fill](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/reliability.cr#L390)
-- Expectation: The CNF should continue to function when disk fill occurs and pods should not be evicted to another node.
-
-**What's tested:** [This experiment](https://litmuschaos.github.io/litmus/experiments/categories/pods/disk-fill/) stresses the disk with continuous and heavy IO to cause degradation in the shared disk. This experiment also reduces the amount of scratch space available on a node, which can lead to a lack of space for newer containers to get scheduled. This can cause Kubernetes to give up by applying an "eviction" taint like "disk-pressure", resulting in a wholesale movement of all pods to other nodes.
-
-[**Usage**](../USAGE.md#cnf-disk-fill)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-cnf-crashes-when-disk-fill-occurs-disk_fill)
-
-
-## [Pod delete](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/reliability.cr#L441)
-- Expectation: The CNF should continue to function when pod delete occurs
-
-**What's tested:** [This experiment](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-delete/) helps to simulate such a scenario with forced/graceful pod failure on specific or random replicas of an application resource and checks the deployment sanity (replica availability & uninterrupted service) and recovery workflow of the application.
-
-[**Usage**](../USAGE.md#pod-delete)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-cnf-crashes-when-pod-delete-occurs-pod_delete)
-
-
-## [Memory hog](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/reliability.cr#L495)
-- Expectation: The CNF should continue to function when pod memory hog occurs
-
-**What's tested:** The [pod-memory hog](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-memory-hog/) experiment launches a stress process within the target container - which can cause either the primary process in the container to be resource constrained in cases where the limits are enforced OR eat up available system memory on the node in cases where the limits are not specified.
-
-[**Usage**](../USAGE.md#memory-hog)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-cnf-crashes-when-pod-memory-hog-occurs-pod_memory_hog)
-
-
-
-## [IO Stress](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/reliability.cr#L549)
-- Expectation: The CNF should continue to function when pod io stress occurs
-
-**What's tested:** The [pod-io stress](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-io-stress/) experiment stresses the disk with continuous and heavy IO to cause degradation in reads/writes by other microservices that use this shared disk.
-
-[**Usage**](../USAGE.md#io-stress)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-cnf-crashes-when-pod-io-stress-occurs-pod_io_stress)
-
-
-
-## [Network corruption](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/reliability.cr#L284)
-- Expectation: The CNF should be resilient to a lossy/flaky network and should continue to provide some level of availability.
-
-**What's tested:** The [pod-network corruption](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-network-corruption/) experiment injects packet corruption on the CNF by starting a traffic control (tc) process with netem rules to add egress packet corruption.
-
-[**Usage**](../USAGE.md#network-corruption)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-cnf-crashes-when-pod-network-corruption-occurs-pod_network_corruption)
-
-
-## [Network duplication](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/reliability.cr#L337)
-- Expectation: The CNF should continue to function and be resilient to a duplicate network.
-
-**What's tested:** The [pod-network duplication](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-network-duplication/) experiment injects network duplication into the CNF by starting a traffic control (tc) process with netem rules to add egress packet duplication.
-
-[**Usage**](../USAGE.md#network-duplication)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-cnf-crashes-when-pod-network-duplication-occurs-pod_network_duplication)
-
-## [Pod DNS errors](https://github.com/cnti-testcatalog/testsuite/blob/v0.26.0/src/tasks/workload/reliability.cr#L604)
-- :heavy_check_mark: Added to CNF Test Suite in release v0.26.0
-- Expectation: The CNF doesn't crash and is resilient to DNS resolution failures.
-
-**What's tested:** The [pod-dns error](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-dns-error/) experiment injects chaos to disrupt DNS resolution in kubernetes pods and causes loss of access to services by blocking DNS resolution of hostnames/domains.
-
-[**Usage**](../USAGE.md#pod-dns-errors)
-
-[**Rationale & Reasoning**](../RATIONALE.md#test-if-the-cnf-crashes-when-dns-errors-occur-pod_dns_errors)
-
-
-## [Helm chart liveness entry](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/reliability.cr#L15)
-- Expectation: The Helm chart should have a liveness probe configured.
-
-**What's tested:** This test scans all of the CNF's workload resources and checks if a Liveness Probe has been configured for each container.
-
-[**Usage**](../USAGE.md#helm-chart-liveness-entry)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-there-is-a-liveness-entry-in-the-helm-chart-liveness)
-
-
-## [Helm chart readiness entry](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/reliability.cr#L45)
-- Expectation: The Helm chart should have a readiness probe configured.
-
-**What's tested:** This test scans all of the CNF's workload resources and checks if a Readiness Probe has been configured for each container.
-
-[**Usage**](../USAGE.md#helm-chart-readiness-entry)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-there-is-a-readiness-entry-in-the-helm-chart-readiness)
-
-
-# Observability and Diagnostic Category
-
-## [Use stdout/stderr for logs](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/observability.cr#L13)
-- Expectation: Resource output logs should be sent to STDOUT/STDERR
-
-**What's tested:** This checks and verifies that STDOUT/STDERR logging is configured for the CNF.
-
-[**Usage**](../USAGE.md#use-stdoutstderr-for-logs)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-logs-are-being-sent-to-stdoutstderr-standard-out-standard-error-instead-of-a-log-file-log_output)
-
-## [Prometheus installed](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/observability.cr#L42)
-- Expectation: The CNF is configured and sending metrics to a Prometheus server.
-
-**What's tested:** Tests for the presence of [Prometheus](https://prometheus.io/) and if the CNF is configured to send metrics to the Prometheus server.
-
-[**Usage**](../USAGE.md#prometheus-installed)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-prometheus-is-installed-and-configured-for-the-cnf-prometheus_traffic)
-
-
-## [Routed logs](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/observability.cr#L170)
-- Expectation: Fluentd or FluentBit is installed and capturing logs for the CNF.
-
-**What's tested:** Checks for the presence of a Unified Logging Layer and if the CNF's logs are being captured by it. Fluentd and fluentbit are currently supported.
-
-[**Usage**](../USAGE.md#routed-logs)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-logs-and-data-are-being-routed-through-a-unified-logging-layer-routed_logs)
-
-
-## [OpenMetrics compatible](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/observability.cr#L146)
-- Expectation: CNF should emit OpenMetrics compatible traffic.
-
-**What's tested:** Checks if the CNFs metrics are [OpenMetrics](https://openmetrics.io/) compliant.
-
-[**Usage**](../USAGE.md#openmetrics-compatible)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-openmetrics-is-being-used-and-or-compatible-open_metrics)
-
-## [Jaeger tracing](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/observability.cr#L203)
-- Expectation: The CNF is sending traces to Jaeger.
-
-**What's tested:** Checks if Jaeger is installed and the CNF is configured to send traces to the Jaeger Server.
-
-[**Usage**](../USAGE.md#jaeger-tracing)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-tracing-is-being-used-with-jaeger-tracing)
-
-
-# Security Category
-
-## [Container socket mounts](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L51)
-- :heavy_check_mark: Added to CNF Test Suite in release v0.27.0
-- Expectation: Container runtime sockets should not be mounted as volumes
-
-**What's tested:** This test checks all of the CNF's containers and looks to see if any of them have access to a container runtime socket from the host.
-
-[**Usage**](../USAGE.md#container-socket-mounts)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-cnf-performs-a-cri-socket-mount-container_sock_mounts)
-
-
-## [Privileged Containers](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L420)
-- Expectation: Containers should not run in privileged mode
-
-**What's tested:** Checks if any containers are running in privileged mode (using [Kubescape](https://hub.armo.cloud/docs/c-0057))
-
-[**Usage**](../USAGE.md#privileged-containers)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-there-are-any-privileged-containers-privileged_containers)
-
-
-## [External IPs](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L31)
-- :heavy_check_mark: Added to CNF Test Suite in release v0.27.0
-- Expectation: A CNF should not run services with external IPs
-
-**What's tested:** Checks if the CNF has services with external IPs configured
-
-[**Usage**](../USAGE.md#external-ips)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-external-ips-are-used-for-services-external_ips)
-
-
-## [Selinux Options](https://github.com/cnti-testcatalog/testsuite/blob/v0.30.1/src/tasks/workload/security.cr#L91)
-- Expectation: A CNF should not have any 'seLinuxOptions' configured that allow privilege escalation.
-
-**What's tested:** Checks if the CNF has escalatory seLinuxOptions configured.
-
-[**Usage**](../USAGE.md#selinux-options)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-selinux-has-been-configured-properly-selinux_options)
-
-
-## [Sysctls](https://github.com/cnti-testcatalog/testsuite/blob/v0.30.1/src/tasks/workload/security.cr#L39)
-- Expectation: The CNF should only have "safe" sysctls mechanisms configured, that are isolated from other Pods.
-
-**What's tested:** Checks the CNF for usage of non-namespaced sysctls mechanisms that can affect the entire host.
-
-[**Usage**](../USAGE.md#sysctls)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-any-pods-in-the-cnf-use-sysctls-with-restricted-values-sysctls)
-
-
-## [Privilege escalation](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L156)
-- Expectation: Containers should not allow [privilege escalation](https://bit.ly/C0016_privilege_escalation)
-
-**What's tested:** Check that the allowPrivilegeEscalation field in the securityContext of each container is set to false.
-
-[**Usage**](../USAGE.md#privilege-escalation)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-any-containers-allow-for-privilege-escalation-privilege_escalation)
-
-
-## [Symlink file system](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L175)
-- Expectation: No vulnerable K8s version being used in conjunction with the [subPath](https://bit.ly/C0058_symlink_filesystem) feature.
-
-**What's tested:** This test checks for vulnerable K8s versions and the actual usage of the subPath feature for all Pods in the CNF.
-
-[**Usage**](../USAGE.md#symlink-file-system)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-an-attacker-can-use-a-symlink-for-arbitrary-host-file-system-access-cve-2021-25741-symlink_file_system)
-
-
-## [Application credentials](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L194)
-- Exepectation: Application credentials should not be found in the CNFs configuration files
-
-**What's tested:** Checks the CNF for sensitive information in environment variables, by using list of known sensitive key names. Also checks for configmaps with sensitive information.
-
-[**Usage**](../USAGE.md#application-credentials)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-there-are-applications-credentials-in-configuration-files-application_credentials)
-
-
-## [Host network](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L213)
-- Expectation: The CNF should not have access to the host systems network.
-
-**What's tested:** Checks if there is a [host network](https://bit.ly/C0041_hostNetwork) attached to any of the Pods in the CNF.
-
-[**Usage**](../USAGE.md#host-network)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-there-is-a-host-network-attached-to-a-pod-host_network)
-
-
-## [Service account mapping](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L232)
-- Expectation: The [automatic mapping](https://bit.ly/C0034_service_account_mapping) of service account tokens should be disabled.
-
-**What's tested:** Check if the CNF is using service accounts that are automatically mapped.
-
-[**Usage**](../USAGE.md#service-account-mapping)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-there-is-automatic-mapping-of-service-accounts-service_account_mapping)
-
-
-## [Ingress and Egress blocked](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L335)
-- Expectation: [Ingress and Egress traffic should be blocked on Pods](https://bit.ly/3bhT10s).
-
-**What's tested:** Checks each Pod in the CNF for a defined ingress and egress policy.
-
-[**Usage**](../USAGE.md#ingress-and-egress-blocked)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-there-is-an-ingress-and-egress-policy-defined-ingress_egress_blocked)
-
-
-## [Insecure capabilities](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L272)
-- Expectation: Containers should not have insecure capabilities enabled.
-
-**What's tested:** Checks the CNF for any usage of insecure capabilities using the following [deny list](https://man7.org/linux/man-pages/man7/capabilities.7.html)
-
-[**Usage**](../USAGE.md#insecure-capabilities)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-for-insecure-capabilities-insecure_capabilities)
-
-
-## [Non-root containers](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L377)
-- Expectation: Containers should run with non-root user and allowPrivilegeEscalation should be set to false.
-
-**What's tested:** Checks if the CNF has runAsUser and runAsGroup set to a user id greater than 999. Also checks that the allowPrivilegeEscalation field is set to false for the CNF. Read more at [ARMO-C0013](https://bit.ly/2Zzlts3)
-
-[**Usage**](../USAGE.md#non-root-containers)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-containers-are-running-with-non-root-user-with-non-root-membership-non_root_containers)
-
-
-## [Host PID/IPC privileges](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L356)
-- Expectation: Containers should not have hostPID and hostIPC privileges
-
-**What's tested:** Checks if containers are running with hostPID or hostIPC privileges. Read more at [ARMO-C0038](https://bit.ly/3nGvpIQ)
-
-[**Usage**](../USAGE.md#host-pidipc-privileges)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-containers-are-running-with-hostpid-or-hostipc-privileges-host_pid_ipc_privileges)
-
-
-## [Linux hardening](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L251)
-- Expectation: Security services are being used to harden application.
-
-**What's tested:** Check if there are AppArmor, Seccomp, SELinux or Capabilities defined in the securityContext of the CNF's containers and pods. Read more at [ARMO-C0055](https://bit.ly/2ZKOjpJ)
-
-[**Usage**](../USAGE.md#linux-hardening)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-security-services-are-being-used-to-harden-containers-linux_hardening)
-
-
-
-## [Resource policies](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L314)
-- Expectation: Containers should have resource limits defined
-
-**What's tested:** Check if there is a ‘limits’ field defined for the CNF. Check for each limitrange/resourcequota if there is a max/hard field defined, respectively. Read more at [ARMO-C0009](https://bit.ly/3Ezxkps).
-
-[**Usage**](../USAGE.md#resource-policies)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-containers-have-resource-limits-defined-resource_policies)
-
-
-## [Immutable File Systems](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L441)
-- Expectation: Containers should use an immutable file system when possible.
-
-**What's tested:**
-Checks whether the readOnlyRootFilesystem field in the SecurityContext is set to true. Read more at [ARMO-C0017](https://bit.ly/3pSMtxK)
-
-[**Usage**](../USAGE.md#immutable-file-systems)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-containers-have-immutable-file-systems-immutable_file_systems)
-
-
-## [HostPath Mounts](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/security.cr#L462)
-- Expectation: Containers should not have hostPath mounts
-
-**What's tested:** Checks the CNF's POD spec for any hostPath volumes, if found it checks the volume for the field mount.readOnly == false (or if it doesn’t exist).
-Read more at [ARMO-C0045](https://bit.ly/3EvltIL)
-
-[**Usage**](../USAGE.md#hostpath-mounts)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-containers-have-hostpath-mounts-check-is-this-a-duplicate-of-state-test---cnf-testsuite-volume_hostpath_not_found-hostpath_mounts)
-
-
-# Configuration Category
-
-## [Default namespaces](https://github.com/cnti-testcatalog/testsuite/blob/v0.30.0/src/tasks/workload/configuration.cr#L56)
-- Expectation: Resources should not be deployed in the default namespace.
-
-**What's tested:** Checks if any of the CNF's resources are deployed in the default namespace.
-
-[**Usage**](../USAGE.md#default-namespaces)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-a-cnf-is-using-the-default-namespace-default_namespace)
-
-
-## [Latest tag](https://github.com/cnti-testcatalog/testsuite/blob/v0.30.0/src/tasks/workload/configuration.cr#L79)
-
-- Expectation: The CNF should use an immutable tag that maps to a symantic version of the application.
-
-**What's tested:** Checks if the CNF is using a 'latest' tag instead of a semantic version.
-
-[**Usage**](../USAGE.md#latest-tag)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-mutable-tags-being-used-for-image-versioningusing-kyverno-latest_tag-latest_tag)
-
-
-## [Require labels](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/configuration.cr#L18)
-- :heavy_check_mark: Added to CNF Test Suite in release v0.27.0
-- Expectation: Checks if pods are using the 'app.kubernetes.io/name' label
-
-**What's tested:** Checks if the CNF validates that the label `app.kubernetes.io/name` is specified with some value.
-
-[**Usage**](../USAGE.md#require-labels)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-the-recommended-labels-are-being-used-to-describe-resources-required_labels)
-
-
-## [Versioned tag](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/configuration.cr#L80)
-- Expectation: The CNF should use an immutable tag that maps to a symantic version of the application.
-
-**What's tested:** Checks if the CNF is using a 'latest' tag instead of a semantic version using OPA Gatekeeper.
-
-[**Usage**](../USAGE.md#versioned-tag)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-there-are-versioned-tags-on-all-images-using-opa-gatekeeper-versioned_tag)
-
-
-## [nodePort not used](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/configuration.cr#L131)
-- Expectation: The nodePort configuration field is not found in any of the CNF's services.
-
-**What's tested:** Checks the CNF for any associated K8s Services that configured to expose the CNF by using a nodePort.
-
-[**Usage**](../USAGE.md#nodeport-not-used)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-there-are-node-ports-used-in-the-service-configuration-nodeport_not_used)
-
-
-## [hostPort not used](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/configuration.cr#L166)
-- Expectation: The hostPort configuration field is not found in any of the defined containers.
-
-**What's tested:** Checks the CNF's workload resources for any containers using the hostPort configuration field to expose the application.
-
-[**Usage**](../USAGE.md#hostport-not-used)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-there-are-host-ports-used-in-the-service-configuration-hostport_not_used)
-
-
-## [Hardcoded IP addresses in K8s runtime configuration](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/configuration.cr#L213)
-- Expectation: That no hardcoded IP addresses or subnet masks are found in the Kubernetes workload resources for the CNF.
-
-**What's tested:** The hardcoded ip address test will scan all of the CNF's workload resources and check for any static, hardcoded ip addresses being used in the configuration.
-
-[**Usage**](../USAGE.md#hardcoded-ip-addresses-in-k8s-runtime-configuration)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-test-if-there-are-any-non-declarative-hardcoded-ip-addresses-or-subnet-masks-in-the-k8s-runtime-configuration-hardcoded_ip_addresses_in_k8s_runtime_configuration)
-
-
-## [Secrets used](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/configuration.cr#L257)
-- Expectation: The CNF is using K8s secrets for the management of sensitive data.
-
-**What's tested:** The secrets used test will scan all the Kubernetes workload resources to see if K8s secrets are being used.
-
-[**Usage**](../USAGE.md#secrets-used)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-a-cnf-uses-k8s-secrets-secrets_used)
-
-
-## [Immutable configmap](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/workload/configuration.cr#L362)
-- Expectation: Immutable configmaps are being used for non-mutable data.
-
-**What's tested:** The immutable configmap test will scan the CNF's workload resources and see if immutable configmaps are being used.
-
-[**Usage**](../USAGE.md#immutable-configmaps)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-a-cnf-version-uses-immutable-configmaps-immutable_configmap)
-
-
-# 5g Category
-
-## [smf_upf_core_validator](https://github.com/cnti-testcatalog/testsuite/blob/v0.30.0/src/tasks/workload/5g_validator.cr#L9)
-- Expectation: 5g core should continue to function during various CNF tests.
-
-**What's tested:** Checks the pfcp heartbeat between the smf and upf to make sure it remains close to baseline.
-
-[**Usage**](../USAGE.md#smf_upf_core_validator)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-validate-a-5g-core)
-
-## [suci_enabled](https://github.com/cnti-testcatalog/testsuite/blob/v0.30.0/src/tasks/workload/5g_validator.cr#L20)
-- Expectation: 5g core should use suci concealment.
-
-**What's tested:** Checks to see if the 5g core supports suci concealment.
-
-[**Usage**](../USAGE.md#suci_enabled)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-for-5g-suci-concealment)
-
-
-# Ran Category
-
-## [oran_e2_connection](https://github.com/cnti-testcatalog/testsuite/blob/v0.30.0/src/tasks/workload/ran.cr#L10)
-- Expectation: An ORAN RIC should use an e2 connection.
-
-**What's tested:** Checks if a RIC uses a oran compatible e2 connection.
-
-[**Usage**](../USAGE.md#oran_e2_connection)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-a-ric-uses-oran-compatible-e2-interface)
-
-
----
-
-List of Platform Tests
----
-
-
-## [K8s Conformance](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/platform/platform.cr#L21)
-- Expectation: The K8s cluster passes the K8s conformance tests
-
-**What's tested:** Check if your platform passes the K8s conformance test. See https://github.com/cncf/k8s-conformance for details on what is tested.
-
-[**Usage**](../USAGE.md#k8s-conformance)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-a-cnf-version-uses-immutable-configmaps-immutable_configmap)
-
-
-## [ClusterAPI enabled](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/platform/platform.cr#L88)
-- Expectation: The cluster has Cluster API enabled which manages at least one Node.
-
-**What's tested:** Checks the platforms Kubernetes Nodes to see if they were instansiated by ClusterAPI.
-
-[**Usage**](../USAGE.md#clusterapi-enabled)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-plateform-is-being-managed-by-clusterapi-clusterapi-enabled)
-
-
-## [OCI Compliant](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/platform/hardware_and_scheduling.cr#L15)
-- Expectation: All worker nodes are using an OCI compliant run-time.
-
-**What's tested:** Inspects all worker nodes and checks if the run-time being used for scheduling is OCI compliant.
-
-[**Usage**](../USAGE.md#oci-compliant)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-plateform-is-using-an-oci-compliant-runtime-oci-compliant)
-
-
-## (PoC) [Worker reboot recovery](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/platform/resilience.cr#L15)
-- Expectation: Pods should reschedule after a node failure.
-- **WARNING**: this is a destructive test and will reboot your _host_ node! Do not run this unless you have completely separate cluster, e.g. development or test cluster.
-
-**What's tested:** Run node failure test which forces a reboot of the Node ("host system"). The Pods on that node should be rescheduled to a new Node.
-
-[**Usage**](../USAGE.md#worker-reboot-recovery)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-workloads-are-rescheduled-on-node-failure-worker-reboot-recovery)
-
-
-
-## [Cluster admin](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/platform/security.cr#L33)
-- Expectation: The [cluster admin role should not be bound to a Pod](https://bit.ly/C0035_cluster_admin)
-
-**What's tested:** Check which subjects have cluster-admin RBAC permissions – either by being bound to the cluster-admin clusterrole, or by having equivalent high privileges.
-
-[**Usage**](../USAGE.md#cluster-admin)
-
-[**Rationale & Reasoning**](../RATIONALE.md#to-check-if-the-plateform-has-a-default-cluster-admin-role-cluster-admin)
-
-
-## [Control plane hardening](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/platform/security.cr#L13)
-- Expectation: That the the k8s control plane is secure and not hosted on an [insecure port](https://bit.ly/C0005_Control_Plane)
-
-**What's tested:** Checks if the insecure-port flag is set for the K8s API Server.
-
-[**Usage**](../USAGE.md#control-plane-hardening)
-
-[**Rationale & Reasoning**](../RATIONALE.md#check-if-the-plateform-is-using-insecure-ports-for-the-api-server-control_plane_hardening)
-
-
-## [Tiller images](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/platform/security.cr#L75)
-- Added in release v0.27.0
-- Expectation: The platform should be using Helm v3+ without Tiller.
-
-**What's tested:** Checks if a Helm v2 / Tiller image is deployed and used on the platform.
-
-[**Usage**](../USAGE.md#tiller-images)
-
-[**Rationale & Reasoning**](../RATIONALE.md#check-if-tiller-is-being-used-on-the-plaform-tiller-images)
-
diff --git a/docs/TEST_DOCUMENTATION.md b/docs/TEST_DOCUMENTATION.md
new file mode 100644
index 000000000..e9e7ebbde
--- /dev/null
+++ b/docs/TEST_DOCUMENTATION.md
@@ -0,0 +1,1811 @@
+# CNF TestSuite test documentation
+
+## Table of Contents
+
+* [**Category: Compatibility, Installability and Upgradability Tests**](#category-compatibility-installability--upgradability-tests)
+
+ [[Increase decrease capacity]](#increase-decrease-capacity) | [[Helm chart published]](#helm-chart-published) | [[Helm chart valid]](#helm-chart-valid) | [[Helm deploy]](#helm-deploy) | [[Rollback]](#rollback) | [[Rolling version change]](#rolling-version-change) | [[Rolling update]](#rolling-update) | [[Rolling downgrade]](#rolling-downgrade) | [[CNI compatible]](#cni-compatible) | [[Kubernetes Alpha APIs **PoC**]](#kubernetes-alpha-apis-poc)
+
+* [**Category: Microservice Tests**](#category-microservice-tests)
+
+ [[Reasonable Image Size]](#reasonable-image-size) | [[Reasonable Startup Time]](#reasonable-startup-time) | [[Single Process Type in One Container]](#single-process-type-in-one-container) | [[Service Discovery]](#service-discovery) | [[Shared Database]](#shared-database) | [[Specialized Init Systems]](#specialized-init-systems) | [[Sigterm Handled]](#sigterm-handled) | [[Zombie Handled]](#zombie-handled)
+
+* [**Category: State Tests**](#category-state-tests)
+
+ [[Node drain]](#node-drain) | [[Volume hostpath not found]](#volume-hostpath-not-found) | [[No local volume configuration]](#no-local-volume-configuration) | [[Elastic volumes]](#elastic-volumes) | [[Database persistence]](#database-persistence)
+
+* [**Category: Reliability, Resilience and Availability Tests**](#category-reliability-resilience--availability-tests)
+
+ [[CNF under network latency]](#cnf-under-network-latency) | [[CNF with host disk fill]](#cnf-with-host-disk-fill) | [[Pod delete]](#pod-delete) | [[Memory hog]](#memory-hog) | [[IO Stress]](#io-stress) | [[Network corruption]](#network-corruption) | [[Network duplication]](#network-duplication) | [[Pod DNS errors]](#pod-dns-errors) | [[Helm chart liveness entry]](#helm-chart-liveness-entry) | [[Helm chart readiness entry]](#helm-chart-readiness-entry)
+
+* [**Category: Observability and Diagnostic Tests**](#category-observability--diagnostic-tests)
+
+ [[Use stdout/stderr for logs]](#use-stdoutstderr-for-logs) | [[Prometheus installed]](#prometheus-installed) | [[Routed logs]](#routed-logs) | [[OpenMetrics compatible]](#openmetrics-compatible) | [[Jaeger tracing]](#jaeger-tracing)
+
+* [**Category: Security Tests**](#category-security-tests)
+
+ [[Container socket mounts]](#container-socket-mounts) | [[Privileged Containers]](#privileged-containers) | [[External IPs]](#external-ips) | [[SELinux Options]](#selinux-options) | [[Sysctls]](#sysctls) | [[Privilege escalation]](#privilege-escalation) | [[Symlink file system]](#symlink-file-system) | [[Application credentials]](#application-credentials) | [[Host network]](#host-network) | [[Service account mapping]](#service-account-mapping) | [[Ingress and Egress blocked]](#ingress-and-egress-blocked) | [[Insecure capabilities]](#insecure-capabilities) | [[Non-root containers]](#non-root-containers) | [[Host PID/IPC privileges]](#host-pidipc-privileges) | [[Linux hardening]](#linux-hardening) | [[CPU limits]](#cpu-limits) | [[Memory limits]](#memory-limits) | [[Immutable File Systems]](#immutable-file-systems) | [[HostPath Mounts]](#hostpath-mounts)
+
+* [**Category: Configuration Tests**](#category-configuration-tests)
+
+ [[Default namespaces]](#default-namespaces) | [[Latest tag]](#latest-tag) | [[Require labels]](#require-labels) | [[Versioned tag]](#versioned-tag) | [[NodePort not used]](#nodeport-not-used) | [[HostPort not used]](#hostport-not-used) | [[Hardcoded IP addresses in K8s runtime configuration]](#hardcoded-ip-addresses-in-k8s-runtime-configuration) | [[Secrets used]](#secrets-used) | [[Immutable configmap]](#immutable-configmap)
+
+* [**Category: 5G Tests**](#category-5g-tests)
+
+ [[SMF_UPF_core_validator]](#smf_upf_core_validator) | [[SUCI_enabled]](#suci_enabled)
+
+* [**Category: RAN Tests**](#category-ran-tests)
+
+ [[ORAN_e2_connection]](#oran_e2_connection)
+
+* [**Category: Platform Tests**](#category-platform-tests)
+
+ [[K8s Conformance]](#k8s-conformance) | [[ClusterAPI enabled]](#clusterapi-enabled) | [[OCI Compliant]](#oci-compliant) | [[(POC) Worker reboot recovery]](#poc-worker-reboot-recovery) | [[Cluster admin]](#cluster-admin) | [[Control plane hardening]](#control-plane-hardening) | [[Tiller images]](#tiller-images)
+
+----------
+
+## Category: Compatibility, Installability and Upgradability Tests
+
+CNFs should work with any Certified Kubernetes product and any CNI-compatible network that meet their functionality requirements. The CNTI Test Catalog will check for usage of standard, in-band deployment tools such as Helm (version 3) charts. The CNTI Test Catalog checks to see if CNFs support horizontal scaling (across multiple machines) and vertical scaling (between sizes of machines) by using the native K8s [kubectl](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#scaling-resources).
+
+Service providers have historically had issues with the installability of vendor network functions. This category tests the installability and lifecycle management (the create, update, and delete of network applications) against widely used K8s installation solutions such as Helm.
+
+### Usage
+
+All compatibility: `./cnf-testsuite compatibility`
+
+----------
+
+### Increase decrease capacity
+
+#### Overview
+
+HPA (horizontal pod autoscaling) will autoscale replicas to accommodate an increase in CPU, memory or other configured metrics, preventing disruption by allowing more requests
+and balancing out the utilisation across all of the pods.
+Decreasing capacity works the same way in reverse: when traffic drops, the number of replicas is scaled down to the number of pods that can handle the remaining requests.
+You can read more about horizontal pod autoscaling to create replicas [here](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) and in the [K8s scaling cheatsheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#scaling-resources).
+Expectation: The number of replicas for a Pod increases and then decreases.
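+
+As an illustration of the horizontal scaling described above, a minimal HorizontalPodAutoscaler sketch might look like the following. The test itself drives scaling with kubectl (see the remediation link below), so this HPA is only illustrative and all resource names and thresholds are hypothetical:
+
+```yaml
+# Hypothetical HPA for a Deployment named "my-cnf"; names and thresholds are illustrative.
+apiVersion: autoscaling/v2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: my-cnf-hpa
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: my-cnf
+  minReplicas: 1
+  maxReplicas: 5
+  metrics:
+    - type: Resource
+      resource:
+        name: cpu
+        target:
+          type: Utilization
+          averageUtilization: 50
+```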
+
+#### Rationale
+
+A CNF should be able to increase and decrease its capacity without running into errors.
+
+#### Remediation
+
+Check out the kubectl docs for how to [manually scale your CNF](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#scaling-resources).
+Also here is some info about [things that could cause failures.](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#failed-deployment)
+
+#### Usage
+
+`./cnf-testsuite increase_decrease_capacity`
+
+----------
+
+### Helm chart published
+
+#### Overview
+
+Checks if the helm chart is found in a remote repository when running [`helm search`](https://helm.sh/docs/helm/helm_search_repo/).
+Expectation: The Helm chart is published in a Helm Repository.
+
+#### Rationale
+
+If a helm chart is published, it is significantly easier to install for the end user.
+The management and versioning of the helm chart are handled by the helm registry and client tools
+rather than manually, as is the case when directly referencing the helm chart source.
+
+#### Remediation
+
+Make sure your CNF helm charts are published in a Helm Repository.
+
+#### Usage
+
+`./cnf-testsuite helm_chart_published`
+
+----------
+
+### Helm chart valid
+
+#### Overview
+
+Checks the syntax & validity of the chart using [`helm lint`](https://helm.sh/docs/helm/helm_lint/)
+Expectation: No syntax or validation problems are found in the chart.
+
+#### Rationale
+
+A chart should pass the [lint specification](https://helm.sh/docs/helm/helm_lint/#helm)
+
+#### Remediation
+
+Make sure your helm charts pass lint tests.
+
+#### Usage
+
+`./cnf-testsuite helm_chart_valid`
+
+----------
+
+### Helm deploy
+
+#### Overview
+
+Checks if the CNF is installed by using a Helm Chart.
+Expectation: The CNF was installed using Helm.
+
+#### Rationale
+
+A helm chart should be [deployable to a cluster](https://helm.sh/docs/helm/helm_install/#helm)
+
+#### Remediation
+
+Make sure your helm charts are valid and can be deployed to clusters.
+
+#### Usage
+
+`./cnf-testsuite helm_deploy`
+
+----------
+
+### Rollback
+
+#### Overview
+
+Checks if the Pod can be upgraded to a new software version, then restored back to the original software version by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) & [Kubectl Rollout Undo](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#rollout) commands.
+Expectation: The CNF Software version can be successfully incremented, then rolled back.
+
+#### Rationale
+
+K8s best practice is to allow [K8s to manage the rolling back](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment) of an application resource instead of having operators manually rolling back the resource by using something like blue/green deploys.
+
+#### Remediation
+
+Ensure that you can upgrade your CNF using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) command, then rollback the upgrade using the [Kubectl Rollout Undo](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#rollout) command.
+
+#### Usage
+
+`./cnf-testsuite rollback`
+
+----------
+
+### Rolling version change
+
+#### Overview
+
+Checks if the Pod can be rolled back to the original software version by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) to perform a rollback.
+Expectation: The CNF Software version is successfully rolled back to its original version.
+
+#### Rationale
+
+(update, version change, downgrade): K8s best practice for version/installation management (lifecycle management) of applications is to have [K8s track the version of the manifest information](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) for the resource (deployment, pod, etc) internally.
+Whenever a rollback is needed the resource will have the exact manifest information that was tied to the application when it was deployed.
+This adheres to the principles driving immutable infrastructure and declarative specifications.
+
+#### Remediation
+
+Ensure that you can successfully roll back the software version of your CNF by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) command.
+
+#### Usage
+
+`./cnf-testsuite rolling_version_change`
+
+----------
+
+### Rolling update
+
+#### Overview
+
+Checks if the Pod can be upgraded to a new software version by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-)
+Expectation: The CNF Software version can be successfully incremented.
+
+#### Rationale
+
+See rolling version change.
+
+#### Remediation
+
+Ensure that you can successfully perform a rolling upgrade of your CNF using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) command.
+
+#### Usage
+
+`./cnf-testsuite rolling_update`
+
+----------
+
+### Rolling downgrade
+
+#### Overview
+
+Checks if the Pod can be rolled back to an older software version (older than the original software version) by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) to perform a downgrade.
+Expectation: The CNF Software version is successfully downgraded to a software version older than the original installation version.
+
+#### Rationale
+
+See rolling version change.
+
+#### Remediation
+
+Ensure that you can successfully change the software version of your CNF back to an older version by using the [Kubectl Set Image](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-) command.
+
+#### Usage
+
+`./cnf-testsuite rolling_downgrade`
+
+----------
+
+### CNI compatible
+
+#### Overview
+
+This installs temporary kind clusters and will test the CNF against both Calico and Cilium CNIs.
+Expectation: CNF should be compatible with multiple and different CNIs
+
+#### Rationale
+
+A CNF should be runnable by any CNI that adheres to the [CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md)
+
+#### Remediation
+
+Ensure that your CNF is compatible with Calico, Cilium and other available CNIs.
+
+#### Usage
+
+`./cnf-testsuite cni_compatible`
+
+----------
+
+### Kubernetes Alpha APIs **PoC**
+
+#### Overview
+
+This checks if a CNF uses alpha or unstable versions of Kubernetes APIs
+Expectation: CNF should not use Kubernetes alpha APIs
+
+#### Rationale
+
+If a CNF uses alpha or undocumented APIs, the CNF is tightly coupled to an unstable platform
+
+#### Remediation
+
+Make sure your CNFs are not utilizing any Kubernetes alpha APIs. You can learn more about Kubernetes API versioning [here](https://bit.ly/k8s_api).
+
+#### Usage
+
+`./cnf-testsuite alpha_k8s_apis`
+
+----------
+
+## Category: Microservice Tests
+
+The CNF should be developed and delivered as a microservice. The CNTI Test Catalog tests to determine the organizational structure and rate of change of the CNF being tested. Once these are known, we can determine whether or not the CNF is a microservice. See: [Microservice-Principles](https://networking.cloud-native-principles.org/cloud-native-microservice-principles)
+
+[Good microservice practices](https://vmblog.com/archive/2022/01/04/the-zeitgeist-of-cloud-native-microservices.aspx) promote agility, which means less time between deployments. One benefit of this agility is that it allows different organizations and teams to deploy at the rate at which they build out features, instead of deploying in lock step with other teams. This is very important when it comes to time-sensitive changes like security patches.
+
+### Usage
+
+All microservice: `./cnf-testsuite microservice`
+
+----------
+
+### Reasonable Image Size
+
+#### Overview
+
+Checks the size of the image used.
+Expectation: CNF image size is under 5 gigs
+
+#### Rationale
+
+A CNF with a large image size of 5 gigabytes or more tends to indicate a monolithic application.
+
+#### Remediation
+
+Ensure your CNF's image size is under 5GB.
+
+#### Usage
+
+`./cnf-testsuite reasonable_image_size`
+
+----------
+
+### Reasonable Startup Time
+
+#### Overview
+
+Checks how long it takes for the CNF to pass a Readiness Probe and reach a ready/running state.
+Expectation: CNF starts up under one minute
+
+#### Rationale
+
+A CNF that starts up with a time (adjusted for server resources) that is approaching a minute is indicative of a monolithic application. The liveness probe's `initialDelaySeconds` and `failureThreshold` determine the startup time and retry amount of the CNF. Specifically, if the `initialDelay` is too long, it is indicative of a monolithic application. If the `failureThreshold` is too high, it is indicative of a CNF or a component of the CNF that has too many intermittent failures.
+
+#### Remediation
+
+Ensure that your CNF gets into a running state within 30 seconds.
+
+#### Usage
+
+`./cnf-testsuite reasonable_startup_time`
+
+----------
+
+### Single Process Type in One Container
+
+#### Overview
+
+This verifies that there is only one process type within one container. This does not count against child processes. For example, nginx or httpd could have a parent process and then 10 child processes, but if both nginx and httpd were running, this test would fail.
+Expectation: CNF container has one process type
+
+#### Rationale
+
+A microservice should have only one process (or set of parent/child processes) that is managed by a non-homegrown supervisor or orchestrator. The microservice should not spawn other process types (e.g., executables) as a way to contribute to the workload but rather should interact with other processes through a microservice API.
+
+#### Remediation
+
+Ensure that there is only one process type within a container. This does not count against child processes, e.g., nginx or httpd could be a parent process with 10 child processes and pass this test, but if both nginx and httpd were running, this test would fail.
+
+#### Usage
+
+`./cnf-testsuite single_process_type`
+
+----------
+
+### Service Discovery
+
+#### Overview
+
+This tests and checks if the containers within a CNF have services exposed via a Kubernetes Service resource. Application access for microservices within a cluster should be exposed via a Service. Read more about K8s Service [here](https://kubernetes.io/docs/concepts/services-networking/service/).
+Expectation: CNFs accessible to other applications should be exposed via a Service.
+
+#### Rationale
+
+A K8s microservice should expose its API through a K8s service resource. K8s services handle service discovery and load balancing for the cluster, ensuring that microservices can efficiently communicate and distribute traffic among themselves.
+
+#### Remediation
+
+Make sure the CNF exposes its containers through a Kubernetes Service. This is crucial for enabling service discovery and load balancing within the cluster, facilitating smoother operation and communication between microservices. You can learn more about Kubernetes Services [here](https://kubernetes.io/docs/concepts/services-networking/service/).
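+
+A minimal Service sketch that exposes a hypothetical CNF workload; the names, labels and ports below are illustrative only:
+
+```yaml
+# Hypothetical Service for a CNF workload; names, labels and ports are illustrative.
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-cnf
+spec:
+  selector:
+    app.kubernetes.io/name: my-cnf   # must match the Pod labels of the workload
+  ports:
+    - name: http
+      port: 80          # port exposed by the Service
+      targetPort: 8080  # containerPort the application listens on
+```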
+
+#### Usage
+
+`./cnf-testsuite service_discovery`
+
+----------
+
+### Shared Database
+
+#### Overview
+
+This tests if multiple CNFs are using the same database.
+Expectation: Multiple microservices should not share the same database.
+
+#### Rationale
+
+A K8s microservice should not share a database with another K8s microservice, because doing so forces the two services to upgrade in lock step.
+
+#### Remediation
+
+Make sure that your CNF's containers are not sharing the same [database](https://martinfowler.com/bliki/IntegrationDatabase.html).
+
+#### Usage
+
+`./cnf-testsuite shared_database`
+
+----------
+
+### Specialized Init Systems
+
+#### Overview
+
+This tests if containers in pods have dumb-init, tini or s6-overlay as init processes.
+Expectation: Container images should use specialized init systems for containers.
+
+#### Rationale
+
+There are proper init systems and sophisticated supervisors that can be run inside of a container. Both of these systems properly reap and pass signals. Sophisticated supervisors are considered overkill because they take up too many resources and are sometimes too complicated. Some examples of sophisticated supervisors are: supervisord, monit, and runit. Proper init systems are smaller than sophisticated supervisors and therefore suitable for containers. Some of the proper container init systems are tini, dumb-init, and s6-overlay.
+
+#### Remediation
+
+Use init systems that are purpose-built for containers like tini, dumb-init, s6-overlay.
+
+#### Usage
+
+`./cnf-testsuite specialized_init_system`
+
+----------
+
+### Sigterm Handled
+
+#### Overview
+
+This tests if the PID 1 process of containers handles SIGTERM.
+Expectation: Sigterm is handled by PID 1 process of containers.
+
+#### Rationale
+
+The Linux kernel handles signals differently for the process that has PID 1 than it does for other processes. Signal handlers aren't automatically registered for this process, meaning that signals such as SIGTERM or SIGINT will have no effect by default. By default, one must kill processes by using SIGKILL, preventing any graceful shutdown. Depending on the application, using SIGKILL can result in user-facing errors, interrupted writes (for data stores), or unwanted alerts in a monitoring system.
+
+#### Remediation
+
+Make the container's PID 1 process handle SIGTERM, enable process namespace sharing in Kubernetes, or use a specialized init system.
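+
+One of the remediation options above, process namespace sharing, can be sketched as follows (the name and image are hypothetical). With `shareProcessNamespace: true` the pod's pause container runs as PID 1, so the application process is no longer PID 1 and keeps default signal handling, and orphaned children are reaped for it:
+
+```yaml
+# Illustrative Pod enabling process namespace sharing; name and image are hypothetical.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-cnf-pod
+spec:
+  shareProcessNamespace: true   # the pause container becomes PID 1 for the pod
+  containers:
+    - name: app
+      image: registry.example.com/my-cnf:1.0.0
+```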
+
+#### Usage
+
+`./cnf-testsuite sig_term_handled`
+
+----------
+
+### Zombie Handled
+
+#### Overview
+
+This tests if the PID 1 process of containers handles/reaps zombie processes.
+Expectation: Zombie processes are handled/reaped by PID 1 process of containers.
+
+#### Rationale
+
+Classic init systems such as systemd are also used to remove (reap) orphaned, zombie processes. Orphaned processes (processes whose parents have died) are reattached to the process that has PID 1, which should reap them when they die. A normal init system does that. But in a container, this responsibility falls on whatever process has PID 1. If that process doesn't properly handle the reaping, you risk running out of memory or other resources.
+
+#### Remediation
+
+Make the container's PID 1 process handle/reap zombie processes, enable process namespace sharing in Kubernetes, or use a specialized init system.
+
+#### Usage
+
+`./cnf-testsuite zombie_handled`
+
+----------
+
+## Category: State Tests
+
+The CNTI Test Catalog checks if state is stored in a [custom resource definition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) or a separate database (e.g. [etcd](https://github.com/etcd-io/etcd)) rather than requiring local storage. It also checks to see if state is resilient to node failure.
+
+If infrastructure is immutable, it is easily reproduced, consistent, disposable, will have a repeatable deployment process, and will not have configuration or artifacts that are modifiable in place.
+This ensures that all *configuration* is stateless.
+Any [*data* that is persistent](https://vmblog.com/archive/2022/05/16/stateful-cnfs.aspx) should be managed by K8s statefulsets.
+
+### Usage
+
+All state: `./cnf-testsuite state`
+
+----------
+
+### Node drain
+
+#### Overview
+
+A node is drained and the workload resources are rescheduled to another node, passing a liveness and readiness check. The test is skipped when the cluster only has a single node.
+Expectation: All workload resources are successfully rescheduled onto other available node(s).
+
+#### Rationale
+
+No CNF should fail because of stateful configuration. A CNF should function properly if it is rescheduled on other nodes.
+This test will remove resources which are running on a target node and reschedule them on another node.
+
+#### Remediation
+
+Ensure that your CNF can be successfully rescheduled when a node fails or is [drained](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/)
+
+#### Usage
+
+`./cnf-testsuite node_drain`
+
+----------
+
+### Volume hostpath not found
+
+#### Overview
+
+This tests if volume host paths are configured and used by the CNF.
+Expectation: Volume host path configurations should not be used.
+
+#### Rationale
+
+When a CNF uses a volume host path or local storage, the application becomes tightly coupled
+to the node that it is on.
+
+#### Remediation
+
+Ensure that none of the containers in your CNFs are using ["hostPath"] to mount volumes.
+
+#### Usage
+
+`./cnf-testsuite volume_hostpath_not_found`
+
+----------
+
+### No local volume configuration
+
+#### Overview
+
+This tests if local volumes are being used for the CNF.
+Expectation: Local storage should not be used or configured.
+
+#### Rationale
+
+A CNF should refrain from using the [local storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/#local)
+
+#### Remediation
+
+Ensure that your CNF isn't using any persistent volumes that use a ["local"] mount point.
+
+#### Usage
+
+`./cnf-testsuite no_local_volume_configuration`
+
+----------
+
+### Elastic volumes
+
+#### Overview
+
+This checks for elastic persistent volumes in use by the CNF.
+Expectation: Elastic persistent volumes should be configured for statefulness.
+
+#### Rationale
+
+A CNF that uses elastic volumes can easily be rescheduled to other nodes by the orchestrator.
+
+#### Remediation
+
+Setup and use elastic persistent volumes instead of local storage.
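+
+A minimal sketch of an elastic persistent volume claim, assuming the cluster offers a dynamically provisioned (non-local) storage class; the class name and size are hypothetical:
+
+```yaml
+# Hypothetical PVC backed by a dynamically provisioned storage class.
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-cnf-data
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: standard   # assumption: a non-"local" dynamic provisioner exists
+  resources:
+    requests:
+      storage: 10Gi
+```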
+
+#### Usage
+
+`./cnf-testsuite elastic_volume`
+
+----------
+
+### Database persistence
+
+#### Overview
+
+This checks if elastic volumes and stateful sets are used for MySQL databases. If no MySQL database is found, the test is skipped.
+Expectation: Elastic volumes and or statefulsets should be used for databases to maintain a minimum resilience level in K8s clusters.
+
+#### Rationale
+
+When a traditional database such as mysql is configured to use statefulsets, it allows the database to use a persistent identifier that it maintains across any rescheduling.
+Persistent Pod identifiers make it easier to match existing volumes to the new Pods that have been rescheduled.
+
+
+#### Remediation
+
+Select a database configuration that uses statefulsets and elastic storage volumes.
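+
+A sketch of a StatefulSet for a MySQL-style database, using `volumeClaimTemplates` so each replica gets its own elastic persistent volume; all names, images and sizes are illustrative:
+
+```yaml
+# Illustrative StatefulSet with per-replica elastic volumes; values are hypothetical.
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: mysql
+spec:
+  serviceName: mysql
+  replicas: 1
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: mysql
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/name: mysql
+    spec:
+      containers:
+        - name: mysql
+          image: mysql:8.0
+          volumeMounts:
+            - name: data
+              mountPath: /var/lib/mysql
+  volumeClaimTemplates:
+    - metadata:
+        name: data
+      spec:
+        accessModes: ["ReadWriteOnce"]
+        storageClassName: standard   # assumption: an elastic storage class exists
+        resources:
+          requests:
+            storage: 10Gi
+```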
+
+#### Usage
+
+`./cnf-testsuite database_persistence`
+
+----------
+
+## Category: Reliability, Resilience and Availability Tests
+
+[Cloud Native Definition](https://github.com/cncf/toc/blob/master/DEFINITION.md) requires systems to be Resilient to failures inevitable in cloud environments. CNF Resilience should be tested to ensure CNFs are designed to deal with non-carrier-grade, shared cloud HW/SW platforms.
+
+Cloud native systems promote resilience by putting a high priority on testing individual components (chaos testing) as they are running (possibly in production).
+[Reliability in traditional telecommunications](https://vmblog.com/archive/2021/09/15/cloud-native-chaos-and-telcos-enforcing-reliability-and-availability-for-telcos.aspx) is handled differently than in Cloud Native systems. Cloud native systems try to address reliability (MTBF) by having the subcomponents have higher availability through higher serviceability (MTTR) and redundancy. For example, having ten redundant subcomponents where seven components are available and three have failed will produce a top level component that is more reliable (MTBF) than a single component that "never fails" in the cloud native world.
+
+### Usage
+
+All resilience: `./cnf-testsuite resilience`
+
+----------
+
+### CNF under network latency
+
+#### Overview
+
+[This experiment](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-network-latency/) causes network degradation without the pod being marked unhealthy/unworthy of traffic by kube-proxy (unless you have a liveness probe of sorts that measures latency and restarts/crashes the container). The idea of this experiment is to simulate issues within your pod network OR microservice communication across services in different availability zones/regions etc.
+The applications may stall or get corrupted while they wait endlessly for a packet. The experiment limits the impact (blast radius) to only the traffic you want to test by specifying IP addresses or application information. This experiment will help to improve the resilience of your services over time.
+Expectation: The CNF should continue to function when network latency occurs
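+
+For reference, a hedged sketch of what the underlying Litmus experiment definition roughly looks like; the exact CRD fields depend on the LitmusChaos version, all names, labels and values here are hypothetical, and the test suite drives the experiment for you:
+
+```yaml
+# Rough sketch of a LitmusChaos ChaosEngine targeting one application (blast radius
+# limited by the app label); fields vary by Litmus version and are illustrative only.
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+  name: my-cnf-chaos
+spec:
+  appinfo:
+    appns: cnf-namespace
+    applabel: app.kubernetes.io/name=my-cnf
+    appkind: deployment
+  chaosServiceAccount: pod-network-latency-sa
+  experiments:
+    - name: pod-network-latency
+      spec:
+        components:
+          env:
+            - name: NETWORK_LATENCY        # injected latency in milliseconds
+              value: "2000"
+            - name: TOTAL_CHAOS_DURATION   # experiment duration in seconds
+              value: "60"
+```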
+
+#### Rationale
+
+Network latency can have a significant impact on the overall performance of the application. Network outages that result from high latency can cause
+a range of failures for applications and can severely impact users/customers with downtime. This chaos experiment allows you to see the impact of network
+latency on the CNF.
+
+#### Remediation
+
+Ensure that your CNF doesn't stall or get into a corrupted state when network degradation occurs.
+A mitigation strategy (in this case, keeping the timeout, i.e. access latency, low) could be to use middleware that can switch traffic based on SLO parameters.
+
+#### Usage
+
+`./cnf-testsuite pod_network_latency`
+
+----------
+
+### CNF with host disk fill
+
+#### Overview
+
+[This experiment](https://litmuschaos.github.io/litmus/experiments/categories/pods/disk-fill/) stresses the disk with continuous and heavy IO to cause degradation in the shared disk. This experiment also reduces the amount of scratch space available on a node which can lead to a lack of space for newer containers to get scheduled. This can cause (Kubernetes gives up by applying an "eviction" taint like "disk-pressure") a wholesale movement of all pods to other nodes.
+Expectation: The CNF should continue to function when disk fill occurs and pods should not be evicted to another node.
+
+#### Rationale
+
+Disk Pressure is a scenario we find in Kubernetes applications that can result in the eviction of the application replica and impact its delivery. Such scenarios can still occur despite whatever availability aids K8s provides. These problems are generally referred to as "Noisy Neighbour" problems.
+
+#### Remediation
+
+Ensure that your CNF is resilient and doesn't stall when heavy IO causes a degradation in storage resource availability.
+
+#### Usage
+
+`./cnf-testsuite disk_fill`
+
+----------
+
+### Pod delete
+
+#### Overview
+
+[This experiment](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-delete/) simulates a forced/graceful pod failure on specific or random replicas of an application resource and checks the deployment sanity (replica availability & uninterrupted service) and recovery workflow of the application.
+Expectation: The CNF should continue to function when pod delete occurs
+
+#### Rationale
+
+In a distributed system like Kubernetes, application replicas may not be sufficient to manage the traffic (indicated by SLIs) when some replicas are unavailable due to any failure (can be system or application). The application needs to meet the SLO (service level objectives) for this. It's imperative that the application has defenses against this sort of failure to ensure that the application always has a minimum number of available replicas.
+
+#### Remediation
+
+Ensure that your CNF is resilient and doesn't fail on a forced/graceful pod failure on specific or random replicas of an application.
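+
+One common defense, alongside running multiple replicas, is a PodDisruptionBudget that keeps a minimum number of replicas available during voluntary disruptions such as evictions; a minimal sketch with hypothetical names and counts:
+
+```yaml
+# Illustrative PodDisruptionBudget; name, selector and count are hypothetical.
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+  name: my-cnf-pdb
+spec:
+  minAvailable: 1
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: my-cnf
+```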
+
+#### Usage
+
+`./cnf-testsuite pod_delete`
+
+----------
+
+### Memory hog
+
+#### Overview
+
+The [pod-memory hog](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-memory-hog/) experiment launches a stress process within the target container - which can cause either the primary process in the container to be resource constrained in cases where the limits are enforced OR eat up available system memory on the node in cases where the limits are not specified.
+Expectation: The CNF should continue to function when pod memory hog occurs
+
+#### Rationale
+
+If the memory policies for a CNF are not set and granular, containers on the node can be killed based on their oom_score and the QoS class a given pod belongs to (best-effort ones are targeted first). This can extend to all pods running on the node, thereby causing a bigger blast radius.
+
+#### Remediation
+
+Ensure that your CNF is resilient to heavy memory usage and can maintain some level of availability.
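+
+Setting explicit memory and CPU requests and limits bounds the blast radius described above; a minimal container sketch with hypothetical values:
+
+```yaml
+# Illustrative resource requests/limits; values are hypothetical and should be
+# tuned to the CNF's actual memory and CPU profile.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-cnf-pod
+spec:
+  containers:
+    - name: app
+      image: registry.example.com/my-cnf:1.0.0
+      resources:
+        requests:
+          memory: "256Mi"
+          cpu: "250m"
+        limits:
+          memory: "512Mi"
+          cpu: "500m"
+```
+
+Setting requests equal to limits would give the pod the Guaranteed QoS class, which is the last to be targeted under memory pressure.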
+
+#### Usage
+
+`./cnf-testsuite pod_memory_hog`
+
+----------
+
+### IO Stress
+
+#### Overview
+
+The [pod-io stress](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-io-stress/) experiment stresses the disk with continuous and heavy IO to cause degradation in reads/writes by other microservices that use this shared disk.
+Expectation: The CNF should continue to function when pod io stress occurs
+
+#### Rationale
+
+Stressing the disk with continuous and heavy IO can cause degradation in reads/ writes by other microservices that use this
+shared disk. Scratch space can be used up on a node which leads to the lack of space for newer containers to get scheduled which
+causes a movement of all pods to other nodes. This test determines the limits of how a CNF uses its storage device.
+
+#### Remediation
+
+Ensure that your CNF is resilient to continuous and heavy disk IO load and can maintain some level of availability
+
+#### Usage
+
+`./cnf-testsuite pod_io_stress`
+
+----------
+
+### Network corruption
+
+#### Overview
+
+The [pod-network corruption](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-network-corruption/) experiment injects packet corruption on the CNF by starting a traffic control (tc) process with netem rules to add egress packet corruption.
+Expectation: The CNF should be resilient to a lossy/flaky network and should continue to provide some level of availability.
+
+#### Rationale
+
+A higher quality CNF should be resilient to a lossy/flaky network. This test injects packet corruption on the specified CNF's container by
+starting a traffic control (tc) process with netem rules to add egress packet corruption.
+
+#### Remediation
+
+Ensure that your CNF is resilient to a lossy/flaky network and can maintain a level of availability.
+
+#### Usage
+
+`./cnf-testsuite pod_network_corruption`
+
+----------
+
+### Network duplication
+
+#### Overview
+
+The [pod-network duplication](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-network-duplication/) experiment injects network duplication into the CNF by starting a traffic control (tc) process with netem rules to add egress packet duplication.
+Expectation: The CNF should continue to function and be resilient to a duplicate network.
+
+#### Rationale
+
+A higher quality CNF should be resilient to erroneously duplicated packets. This test injects network duplication on the specified container
+by starting a traffic control (tc) process with netem rules to add egress packet duplication.
+
+#### Remediation
+
+Ensure that your CNF is resilient to erroneously duplicated packets and can maintain a level of availability.
+
+#### Usage
+
+`./cnf-testsuite pod_network_duplication`
+
+----------
+
+### Pod DNS errors
+
+#### Overview
+
+The [pod-dns error](https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-dns-error/) experiment injects chaos to disrupt DNS resolution in kubernetes pods and causes loss of access to services by blocking DNS resolution of hostnames/domains.
+Expectation: The CNF doesn't crash and is resilient to DNS resolution failures.
+
+#### Rationale
+
+A CNF should be resilient to name resolution (DNS) disruptions within the kubernetes pod. This ensures that at least some application availability will be maintained if DNS resolution fails.
+
+#### Remediation
+
+Ensure that your CNF is resilient to DNS resolution failures and can maintain a level of availability.
+
+#### Usage
+
+`./cnf-testsuite pod_dns_error`
+
+----------
+
+### Helm chart liveness entry
+
+#### Overview
+
+This test scans all of the CNF's workload resources and checks if a Liveness Probe has been configured for each container.
+Expectation: The Helm chart should have a liveness probe configured.
+
+#### Rationale
+
+A cloud native principle is that application developers understand their own resilience requirements better than operators:
+
+> "No one knows more about what an application needs to run in a healthy state than the developer. For a long time, infrastructure administrators have tried to figure out what “healthy” means for applications they are responsible for running. Without knowledge of what actually makes an application healthy, their attempts to monitor and alert when applications are unhealthy are often fragile and incomplete. To increase the operability of cloud native applications, applications should expose a health check." -- Garrison, Justin; Nova, Kris. Cloud Native Infrastructure: Patterns for Scalable Infrastructure and Applications in a Dynamic Environment. O'Reilly Media. Kindle Edition.
+
+This is exemplified in the Kubernetes best practice of pods declaring how they should be managed through the liveness and readiness entries in the pod's configuration.
+
+#### Remediation
+
+Ensure that your CNF has a [Liveness Probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) configured.
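+
+A minimal liveness probe sketch for a container in the chart's pod template; the path, port and timings are hypothetical and depend on the application's health endpoint:
+
+```yaml
+# Illustrative container snippet with a liveness probe; values are hypothetical.
+containers:
+  - name: app
+    image: registry.example.com/my-cnf:1.0.0
+    livenessProbe:
+      httpGet:
+        path: /healthz
+        port: 8080
+      initialDelaySeconds: 5
+      periodSeconds: 10
+      failureThreshold: 3
+```
+
+A `readinessProbe` (covered by the next test) is declared in the same way.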
+
+#### Usage
+
+`./cnf-testsuite liveness`
+
+----------
+
+### Helm chart readiness entry
+
+#### Overview
+
+This test scans all of the CNF's workload resources and checks if a Readiness Probe has been configured for each container.
+Expectation: The Helm chart should have a readiness probe configured.
+
+#### Rationale
+
+A CNF should tell Kubernetes when it is [ready to serve traffic](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes).
+
+#### Remediation
+
+Ensure that your CNF has a [Readiness Probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) configured.
+
+#### Usage
+
+`./cnf-testsuite readiness`
+
+----------
+
+## Category: Observability and Diagnostic Tests
+
+In order to maintain, debug, and have insight into a production environment that is protected (versioned, kept in source control, and changed only by using a deployment pipeline), its infrastructure elements must have the property of being observable. This means these elements must externalize their internal states in some way that lends itself to metrics, tracing, and logging.
+
+### Usage
+
+All observability: `./cnf-testsuite observability`
+
+----------
+
+### Use stdout/stderr for logs
+
+#### Overview
+
+This checks and verifies that STDOUT/STDERR logging is configured for the CNF.
+Expectation: Resource output logs should be sent to STDOUT/STDERR
+
+#### Rationale
+
+By sending logs to standard out/standard error, [logs will be treated like event streams](https://12factor.net/) as recommended by the 12-factor app principles.
+
+#### Remediation
+
+Make sure applications and CNFs are sending log output to STDOUT and/or STDERR.
+
+#### Usage
+
+`./cnf-testsuite log_output`
+
+----------
+
+### Prometheus installed
+
+#### Overview
+
+Tests for the presence of [Prometheus](https://prometheus.io/) and checks if the CNF is configured to send metrics to the Prometheus server.
+Expectation: The CNF is configured and sending metrics to a Prometheus server.
+
+#### Rationale
+
+Recording metrics within a cloud native deployment is important because it gives the maintainer of a cluster of hundreds or thousands of services the ability to pinpoint [small anomalies](https://about.gitlab.com/blog/2018/09/27/why-all-organizations-need-prometheus/), such as those that will eventually cause a failure.
+
+#### Remediation
+
+Install and configure Prometheus for your CNF.
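+
+One common (but not universal) convention is to annotate the pod template so that a Prometheus server set up for annotation-based discovery will scrape the CNF; whether this applies depends entirely on your Prometheus scrape configuration, and the port and path below are hypothetical:
+
+```yaml
+# Fragment of a pod template
+metadata:
+  annotations:
+    prometheus.io/scrape: "true"
+    prometheus.io/port: "9090"
+    prometheus.io/path: "/metrics"
+```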
+
+#### Usage
+
+`./cnf-testsuite prometheus_traffic`
+
+----------
+
+### Routed logs
+
+#### Overview
+
+Checks for the presence of a unified logging layer and whether the CNF's logs are being captured by it. Fluentd and FluentBit are currently supported.
+Expectation: Fluentd or FluentBit is installed and capturing logs for the CNF.
+
+#### Rationale
+
+A CNF should have logs managed by a [unified logging layer](https://www.fluentd.org/why). It's considered a best practice for CNFs to route logs and data through programs like fluentd to analyze and better understand data.
+
+#### Remediation
+
+Install and configure fluentd or fluentbit to collect data and logs. See more at [fluentd.org](https://bit.ly/fluentd) for fluentd or [fluentbit.io](https://fluentbit.io/) for fluentbit.
+
+#### Usage
+
+`./cnf-testsuite routed_logs`
+
+----------
+
+### OpenMetrics compatible
+
+#### Overview
+
+Checks if the CNFs metrics are [OpenMetrics](https://openmetrics.io/) compliant.
+Expectation: CNF should emit OpenMetrics compatible traffic.
+
+#### Rationale
+
+OpenMetrics is the de facto standard for transmitting cloud native metrics at scale, with support for both text representation and Protocol Buffers, and is being brought into the Internet Engineering Task Force (IETF) as a standard. A CNF should expose metrics that are [OpenMetrics compatible](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md).
+
+#### Remediation
+
+Ensure that your CNF is publishing OpenMetrics compatible metrics.
+
+#### Usage
+
+`./cnf-testsuite open_metrics`
+
+----------
+
+### Jaeger tracing
+
+#### Overview
+
+Checks if Jaeger is installed and the CNF is configured to send traces to the Jaeger Server.
+Expectation: The CNF is sending traces to Jaeger.
+
+#### Rationale
+
+A CNF should provide tracing that conforms to the [open telemetry tracing specification](https://opentelemetry.io/docs/reference/specification/trace/api/)
+
+#### Remediation
+
+Ensure that your CNF is both using & publishing traces to Jaeger.
+
+#### Usage
+
+`./cnf-testsuite tracing`
+
+----------
+
+## Category: Security Tests
+
+CNF containers should be isolated from one another and the host. The CNTI Test Catalog uses tools like [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) and [Armosec Kubescape](https://github.com/armosec/kubescape) for these security checks.
+
+> "Cloud native security is a [...] mutifaceted topic [...] with multiple, diverse components that need to be secured. The cloud platform, the underlying host operating system, the container runtime, the container orchestrator,and then the applications themselves each require specialist security attention" -- Chris Binne, Rory Mccune. Cloud Native Security. (Wiley, 2021)(pp. xix)
+
+### Usage
+
+All security: `./cnf-testsuite security`
+
+----------
+
+### Container socket mounts
+
+#### Overview
+
+This test checks all of the CNF's containers and looks to see if any of them have access to a container runtime socket from the host.
+Expectation: Container runtime sockets should not be mounted as volumes
+
+#### Rationale
+
+[Container daemon socket bind mounts](https://kyverno.io/policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount/) allows access to the container engine on the node. This access can be used for privilege escalation and to manage containers outside of Kubernetes, and hence should not be allowed.
+
+#### Remediation
+
+Make sure your CNF doesn't mount `/var/run/docker.sock`, `/var/run/containerd.sock` or `/var/run/crio.sock` on any containers.
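+
+For reference, the pattern this test flags looks like the following sketch (names are hypothetical); any such volume and its corresponding volumeMount should be removed:
+
+```yaml
+# Anti-pattern: mounting the container runtime socket from the host
+volumes:
+  - name: docker-sock        # hypothetical name
+    hostPath:
+      path: /var/run/docker.sock
+```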
+
+#### Usage
+
+`./cnf-testsuite container_sock_mounts`
+
+----------
+
+### Privileged Containers
+
+#### Overview
+
+Checks if any containers are running in privileged mode (using [Kubescape](https://hub.armo.cloud/docs/c-0057))
+Expectation: Containers should not run in privileged mode
+
+#### Rationale
+
+> "... docs describe Privileged mode as essentially enabling “…access to all devices on the host as well as [having the ability to] set some configuration in AppArmor or SElinux to allow the container nearly all the same access to the host as processes running outside containers on the host.” In other words, you should rarely, if ever, use this switch on your container command line." -- Binnie, Chris; McCune, Rory (2021-06-17T23:58:59). Cloud Native Security . Wiley. Kindle Edition.
+
+#### Remediation
+
+Remove privileged capabilities by setting `securityContext.privileged` to false. If you must deploy a Pod as privileged, add other restrictions to it, such as network policies or Seccomp profiles, and still remove all unnecessary capabilities.
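+
+A minimal sketch of the relevant field:
+
+```yaml
+# Fragment of a container spec
+securityContext:
+  privileged: false
+```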
+
+#### Usage
+
+`./cnf-testsuite privileged_containers`
+
+----------
+
+### External IPs
+
+#### Overview
+
+Checks if the CNF has services with external IPs configured
+Expectation: A CNF should not run services with external IPs
+
+#### Rationale
+
+Service external IPs can be used for a MITM attack (CVE-2020-8554). Restrict external IPs or limit them to a known set of addresses.
+
+#### Remediation
+
+Make sure not to define external IPs in your Kubernetes service configuration.
+
+#### Usage
+
+`./cnf-testsuite external_ips`
+
+----------
+
+### SELinux Options
+
+#### Overview
+
+Checks if the CNF has escalatory SELinuxOptions configured.
+Expectation: A CNF should not have any 'seLinuxOptions' configured that allow privilege escalation.
+
+#### Rationale
+
+If [SELinux options](https://kyverno.io/policies/pod-security/baseline/disallow-selinux/disallow-selinux/) are configured improperly, they can be used to escalate privileges and should not be allowed.
+
+#### Remediation
+
+Ensure the following guidelines are followed for any cluster resource that allow SELinux options:
+
+* If the SELinux option `type` is set, it should only be one of the allowed values: `container_t`, `container_init_t`, or `container_kvm_t`.
+* SELinux options `user` or `role` should not be set.
+
+#### Usage
+
+`./cnf-testsuite selinux_options`
+
+----------
+
+### Sysctls
+
+#### Overview
+
+Checks the CNF for usage of non-namespaced sysctls mechanisms that can affect the entire host.
+Expectation: The CNF should only have "safe" sysctls mechanisms configured, that are isolated from other Pods.
+
+#### Rationale
+
+Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed "safe" subset. A sysctl is considered safe if it is namespaced in the container or the Pod, and it is isolated from other Pods or processes on the same Node. This test ensures that only those "safe" subsets are specified in a Pod.
+
+#### Remediation
+
+The `spec.securityContext.sysctls` field must either be unset or use only sysctls from the allowed "safe" subset.
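+
+As a sketch, a pod that genuinely needs a sysctl should only use one from the "safe" set; the value shown here is only an example:
+
+```yaml
+# Fragment of a pod spec
+securityContext:
+  sysctls:
+    - name: net.ipv4.ip_local_port_range
+      value: "1024 65535"
+```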
+
+#### Usage
+
+`./cnf-testsuite sysctls`
+
+----------
+
+### Privilege escalation
+
+#### Overview
+
+Check that the allowPrivilegeEscalation field in the securityContext of each container is set to false.
+Expectation: Containers should not allow privilege escalation
+
+#### Rationale
+
+When [privilege escalation](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privilege-escalation) is [enabled for a container](https://hub.armo.cloud/docs/c-0016), it will allow setuid binaries to change the effective user ID, allowing processes to turn on extra capabilities.
+In order to prevent illegitimate escalation by processes and restrict processes to a non-root user mode, escalation must be disabled.
+
+#### Remediation
+
+If your application does not need it, make sure the allowPrivilegeEscalation field of the securityContext is set to false. See more at [ARMO-C0016](https://bit.ly/C0016_privilege_escalation)
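+
+A minimal sketch of the container-level setting:
+
+```yaml
+# Fragment of a container spec
+securityContext:
+  allowPrivilegeEscalation: false
+```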
+
+#### Usage
+
+`./cnf-testsuite privilege_escalation`
+
+----------
+
+### Symlink file system
+
+#### Overview
+
+This test checks for vulnerable K8s versions and the actual usage of the subPath feature for all Pods in the CNF.
+Expectation: No vulnerable K8s version being used in conjunction with the subPath feature.
+
+#### Rationale
+
+Due to CVE-2021-25741, subPath or subPathExpr volume mounts can be [used to gain unauthorised access](https://hub.armo.cloud/docs/c-0058) to files and directories anywhere on the host filesystem. In order to follow a best-practice security standard and prevent unauthorised data access, there should be no active CVEs affecting either the container or underlying platform.
+
+#### Remediation
+
+To mitigate this vulnerability without upgrading kubelet, you can disable the VolumeSubpath feature gate on kubelet and kube-apiserver, or remove any existing Pods using subPath or subPathExpr feature.
+
+#### Usage
+
+`./cnf-testsuite symlink_file_system`
+
+----------
+
+### Application credentials
+
+#### Overview
+
+Checks the CNF for sensitive information in environment variables, using a list of known sensitive key names. Also checks for ConfigMaps with sensitive information.
+Expectation: Application credentials should not be found in the CNF's configuration files
+
+#### Rationale
+
+Developers store secrets in the Kubernetes configuration files, such as environment variables in the pod configuration. Such behavior is commonly seen in clusters that are monitored by Azure Security Center.
+Attackers who have access to those configurations, by querying the API server or by accessing those files on the developer’s endpoint, can steal the stored secrets and use them.
+
+#### Remediation
+
+Use Kubernetes secrets or Key Management Systems to store credentials.
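+
+A minimal sketch, assuming a hypothetical Secret named `example-credentials` with a `DB_PASSWORD` key already exists, of how a container can reference it instead of carrying the value directly:
+
+```yaml
+# Fragment of a container spec (names are hypothetical)
+env:
+  - name: DB_PASSWORD
+    valueFrom:
+      secretKeyRef:
+        name: example-credentials
+        key: DB_PASSWORD
+```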
+
+#### Usage
+
+`./cnf-testsuite application_credentials`
+
+----------
+
+### Host network
+
+#### Overview
+
+Checks if there is a [host network](https://bit.ly/C0041_hostNetwork) attached to any of the Pods in the CNF.
+Expectation: The CNF should not have access to the host systems network.
+
+#### Rationale
+
+When a container has the [hostNetwork](https://hub.armo.cloud/docs/c-0041) feature turned on, the container has direct access to the underlying hostNetwork. Hackers frequently exploit this feature to [facilitate a container breakout](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF) and gain access to the underlying host network, data and other integral resources.
+
+#### Remediation
+
+Only connect PODs to the hostNetwork when it is necessary. If not, set the hostNetwork field of the pod spec to false, or completely remove it (false is the default). Allow only those PODs that must have access to host network by design.
+
+#### Usage
+
+`./cnf-testsuite host_network`
+
+----------
+
+### Service account mapping
+
+#### Overview
+
+Check if the CNF is using service accounts that are automatically mapped.
+Expectation: The [automatic mapping](https://bit.ly/C0034_service_account_mapping) of service account tokens should be disabled.
+
+#### Rationale
+
+When a pod gets created and a service account wasn't specified, then the default service account will be used. Service accounts assigned in this way can unintentionally give third-party applications root access to the K8s APIs and other application services. In order to follow a zero-trust / fine-grained security methodology, this functionality will need to be explicitly disabled by using the automountServiceAccountToken: false flag. In addition, if RBAC is not enabled, the SA has unlimited permissions in the cluster.
+
+#### Remediation
+
+Disable automatic mounting of service account tokens to PODs either at the service account level or at the individual POD level, by specifying the automountServiceAccountToken: false. Note that POD level takes precedence.
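+
+A minimal sketch of disabling automatic token mounting at the ServiceAccount level (the same field can also be set in the pod spec, where it takes precedence); the name is hypothetical:
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: example-sa            # hypothetical name
+automountServiceAccountToken: false
+```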
+
+#### Usage
+
+`./cnf-testsuite service_account_mapping`
+
+----------
+
+### Ingress and Egress blocked
+
+#### Overview
+
+Checks each Pod in the CNF for a defined ingress and egress policy.
+Expectation: Ingress and Egress traffic should be blocked on Pods.
+
+#### Rationale
+
+By default, [no network policies are applied](https://hub.armo.cloud/docs/c-0030) to Pods or namespaces, resulting in unrestricted ingress and egress traffic within the Pod network. In order to [prevent lateral movement](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF) or escalation on a compromised cluster, administrators should implement a default policy to deny all ingress and egress traffic.
+This will ensure that all Pods are isolated by default and further policies could then be used to specifically relax these restrictions on a case-by-case basis.
+
+#### Remediation
+
+By default, you should disable or restrict Ingress and Egress traffic on all pods.
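+
+A minimal sketch of a default-deny NetworkPolicy for a namespace (the name is hypothetical); further policies can then selectively re-allow required traffic:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-all      # hypothetical name
+spec:
+  podSelector: {}             # selects every pod in the namespace
+  policyTypes:
+    - Ingress
+    - Egress
+```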
+
+#### Usage
+
+`./cnf-testsuite ingress_egress_blocked`
+
+----------
+
+### Insecure capabilities
+
+#### Overview
+
+Checks the CNF for any usage of insecure capabilities, using a [deny list](https://man7.org/linux/man-pages/man7/capabilities.7.html) of capabilities.
+Expectation: Containers should not have insecure capabilities enabled.
+
+#### Rationale
+
+Giving [insecure](https://hub.armo.cloud/docs/c-0046) and unnecessary capabilities for a container can increase the impact of a container compromise.
+
+#### Remediation
+
+Remove all insecure capabilities which aren’t necessary for the container.
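+
+A minimal sketch of dropping all capabilities and adding back only what the workload genuinely needs (the added capability here is purely illustrative):
+
+```yaml
+# Fragment of a container spec
+securityContext:
+  capabilities:
+    drop:
+      - ALL
+    add:
+      - NET_BIND_SERVICE      # only if the application actually needs it
+```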
+
+#### Usage
+
+`./cnf-testsuite insecure_capabilities`
+
+----------
+
+### Non-root containers
+
+#### Overview
+
+Checks if the CNF has runAsUser and runAsGroup set to a user id greater than 999. Also checks that the allowPrivilegeEscalation field is set to false for the CNF.
+Read more at [ARMO-C0013](https://bit.ly/2Zzlts3)
+Expectation: Containers should run with non-root user and allowPrivilegeEscalation should be set to false.
+
+#### Rationale
+
+Container engines allow containers to run applications as a non-root user with non-root group membership. Typically, this non-default setting is configured when the container image is built. Alternatively, Kubernetes can load containers into a Pod with SecurityContext:runAsUser specifying a non-zero user. While the runAsUser directive effectively forces non-root execution at deployment, [NSA and CISA encourage developers](https://hub.armo.cloud/docs/c-0013) to build container applications to execute as a non-root user. Having non-root execution integrated at build time provides better assurance that applications will function correctly without root privileges.
+
+#### Remediation
+
+If your application does not need root privileges, make sure to define runAsUser and runAsGroup under the PodSecurityContext to use a user ID of 1000 or higher, do not turn on the allowPrivilegeEscalation bit, and set runAsNonRoot to true.
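+
+A minimal sketch of the relevant securityContext fields (the IDs are only examples):
+
+```yaml
+# Fragment of a container spec
+securityContext:
+  runAsNonRoot: true
+  runAsUser: 1000
+  runAsGroup: 1000
+  allowPrivilegeEscalation: false
+```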
+
+#### Usage
+
+`./cnf-testsuite non_root_containers`
+
+----------
+
+### Host PID/IPC privileges
+
+#### Overview
+
+Checks if containers are running with hostPID or hostIPC privileges.
+Read more at [ARMO-C0038](https://bit.ly/3nGvpIQ)
+Expectation: Containers should not have hostPID and hostIPC privileges
+
+#### Rationale
+
+Containers should be isolated from the host machine as much as possible. The [hostPID and hostIPC](https://hub.armo.cloud/docs/c-0038) fields in deployment yaml may allow cross-container influence and may expose the host itself to potentially malicious or destructive actions. This control identifies all PODs using hostPID or hostIPC privileges.
+
+#### Remediation
+
+Apply least privilege principle and remove hostPID and hostIPC from the yaml configuration privileges unless they are absolutely necessary.
+
+#### Usage
+
+`./cnf-testsuite host_pid_ipc_privileges`
+
+----------
+
+### Linux hardening
+
+#### Overview
+
+Check if there are AppArmor, Seccomp, SELinux or Capabilities defined in the securityContext of the CNF's containers and pods.
+Read more at [ARMO-C0055](https://bit.ly/2ZKOjpJ).
+Expectation: Security services are being used to harden application.
+
+#### Rationale
+
+In order to reduce the attack surface, it is recommended, where possible, to harden your application using [security services](https://hub.armo.cloud/docs/c-0055) such as SELinux®, AppArmor®, and seccomp. Starting from Kubernetes version 1.22, SELinux is enabled by default.
+
+#### Remediation
+
+Use AppArmor, Seccomp, SELinux and Linux Capabilities mechanisms to restrict containers abilities to utilize unwanted privileges.
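+
+A minimal sketch of enabling the default seccomp profile via the securityContext (AppArmor and SELinux are configured separately and depend on the node's OS support):
+
+```yaml
+# Fragment of a pod or container spec
+securityContext:
+  seccompProfile:
+    type: RuntimeDefault
+```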
+
+#### Usage
+
+`./cnf-testsuite linux_hardening`
+
+----------
+
+### CPU limits
+
+#### Overview
+
+Check if there is a ‘containers[].resources.limits.cpu’ field defined for all pods in the CNF.
+Expectation: Containers should have cpu limits defined
+
+#### Rationale
+
+Every container [should have a CPU limit defined](https://hub.armo.cloud/docs/c-0270), either per container or via its namespace, to prevent resource exhaustion. This test identifies all the Pods without CPU limit definitions by checking their yaml definition file as well as their namespace LimitRange objects. It is also recommended to use a ResourceQuota object to restrict overall namespace resources, but this is not verified by this test.
+
+#### Remediation
+
+Define LimitRange and ResourceQuota policies to limit CPU usage for namespaces or in the deployment/POD yamls.
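+
+A minimal sketch of per-container limits (the values are only examples); the memory limits test below expects the same pattern for memory:
+
+```yaml
+# Fragment of a container spec
+resources:
+  requests:
+    cpu: 250m
+    memory: 128Mi
+  limits:
+    cpu: 500m
+    memory: 256Mi
+```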
+
+#### Usage
+
+`./cnf-testsuite cpu_limits`
+
+----------
+
+### Memory limits
+
+#### Overview
+
+Check if there is a ‘containers[].resources.limits.memory’ field defined for all pods in the CNF.
+Expectation: Containers should have memory limits defined
+
+#### Rationale
+
+Every container [should have a memory limit defined](https://hub.armo.cloud/docs/c-0271), either per container or via its namespace, to prevent resource exhaustion. This test identifies all the Pods without memory limit definitions by checking their yaml definition file as well as their namespace LimitRange objects. It is also recommended to use a ResourceQuota object to restrict overall namespace resources, but this is not verified by this test.
+
+#### Remediation
+
+Define LimitRange and ResourceQuota policies to limit memory usage for namespaces or in the deployment/POD yamls.
+
+#### Usage
+
+`./cnf-testsuite memory_limits`
+
+----------
+
+### Immutable File Systems
+
+#### Overview
+
+Checks whether the readOnlyRootFilesystem field in the SecurityContext is set to true.
+Read more at [ARMO-C0017](https://bit.ly/3pSMtxK)
+Expectation: Containers should use an immutable file system when possible.
+
+#### Rationale
+
+A mutable container filesystem can be abused to inject malicious code and data into containers. By default, containers are permitted unrestricted execution within their own context.
+An attacker who has access to a container [can create files](https://hub.armo.cloud/docs/c-0017) and download scripts as they wish, and modify the underlying application running in the container.
+
+#### Remediation
+
+Set the filesystem of the container to read-only when possible. If the container's application needs to write to the filesystem, it is possible to mount secondary filesystems for specific directories where the application requires write access.
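+
+A minimal sketch, with hypothetical names, of a read-only root filesystem with a writable emptyDir mounted where the application needs to write:
+
+```yaml
+# Fragment of a pod spec
+containers:
+  - name: app                 # hypothetical container name
+    securityContext:
+      readOnlyRootFilesystem: true
+    volumeMounts:
+      - name: tmp
+        mountPath: /tmp
+volumes:
+  - name: tmp
+    emptyDir: {}
+```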
+
+#### Usage
+
+`./cnf-testsuite immutable_file_systems`
+
+----------
+
+### HostPath Mounts
+
+#### Overview
+
+Checks the CNF's Pod spec for any hostPath volumes; if found, it checks each volume for the field mount.readOnly == false (or for that field being absent).
+Read more at [ARMO-C0045](https://bit.ly/3EvltIL)
+Expectation: Containers should not have hostPath mounts
+
+#### Rationale
+
+[hostPath mount](https://hub.armo.cloud/docs/c-0006) can be used by attackers to get access to the underlying host and thus break from the container to the host. (See “3: Writable hostPath mount” for details).
+
+#### Remediation
+
+Refrain from using a hostPath mount.
+
+#### Usage
+
+`./cnf-testsuite hostpath_mounts`
+
+----------
+
+## Category: Configuration Tests
+
+Configuration should be managed in a declarative manner, using [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), [Operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/), or other [declarative interfaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#understanding-kubernetes-objects).
+
+Declarative APIs for an immutable infrastructure are anything that configures the infrastructure element. This declaration can come in the form of a YAML file or a script, as long as the configuration designates the desired outcome, not how to achieve said outcome.
+
+> "Because it describes the state of the world, declarative configuration does not have to be executed to be understood. Its impact is concretely declared. Since the effects of declarative configuration can be understood before they are executed, declarative configuration is far less error-prone." -- Hightower, Kelsey; Burns, Brendan; Beda, Joe. Kubernetes: Up and Running: Dive into the Future of Infrastructure (Kindle Locations 183-186). Kindle Edition*
+
+### Usage
+
+All configuration: `./cnf-testsuite configuration_lifecycle`
+
+----------
+
+### Default namespaces
+
+#### Overview
+
+Checks if any of the CNF's resources are deployed in the default namespace.
+Expectation: Resources should not be deployed in the default namespace.
+
+#### Rationale
+
+Namespaces provide a way to segment and isolate cluster resources across multiple applications and users.
+As a best practice, workloads should be isolated with Namespaces and not use the default namespace.
+
+#### Remediation
+
+Ensure that your CNF is configured to use a Namespace and is not using the default namespace.
+
+#### Usage
+
+`./cnf-testsuite default_namespace`
+
+----------
+
+### Latest tag
+
+#### Overview
+
+Checks if the CNF is using a 'latest' tag instead of a semantic version.
+Expectation: The CNF should use an immutable tag that maps to a semantic version of the application.
+
+#### Rationale
+
+You should [avoid using the :latest tag](https://kubernetes.io/docs/concepts/containers/images/) when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
+
+#### Remediation
+
+When specifying container images, always specify a tag and ensure to use an immutable tag that maps to a specific version of an application Pod. Remove any usage of the `latest` tag, as it is not guaranteed to always point to the same version of the image.
+
+#### Usage
+
+`./cnf-testsuite latest_tag`
+
+----------
+
+### Require labels
+
+#### Overview
+
+Checks that the CNF's pods specify the `app.kubernetes.io/name` label with some value.
+Expectation: Pods define the `app.kubernetes.io/name` label.
+
+#### Rationale
+
+Defining and using labels help identify semantic attributes of your application or Deployment. A common set of labels allows tools to work collaboratively, while describing objects in a common manner that all tools can understand. You should use recommended labels to describe applications in a way that can be queried.
+
+#### Remediation
+
+Make sure to define `app.kubernetes.io/name` label under metadata for your CNF.
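+
+A minimal sketch (the value is hypothetical):
+
+```yaml
+# Fragment of a resource or pod template metadata
+metadata:
+  labels:
+    app.kubernetes.io/name: example-app
+```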
+
+#### Usage
+
+`./cnf-testsuite require_labels`
+
+----------
+
+### Versioned tag
+
+#### Overview
+
+Checks if the CNF is using a 'latest' tag instead of a semantic version using OPA Gatekeeper.
+Expectation: The CNF should use an immutable tag that maps to a semantic version of the application.
+
+#### Rationale
+
+You should [avoid using the :latest tag](https://kubernetes.io/docs/concepts/containers/images/) when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
+
+#### Remediation
+
+When specifying container images, always specify a tag and ensure to use an immutable tag that maps to a specific version of an application Pod. Remove any usage of the `latest` tag, as it is not guaranteed to always point to the same version of the image.
+
+#### Usage
+
+`./cnf-testsuite versioned_tag`
+
+----------
+
+### NodePort not used
+
+#### Overview
+
+Checks the CNF for any associated K8s Services that are configured to expose the CNF by using a nodePort.
+Expectation: The nodePort configuration field is not found in any of the CNF's services.
+
+#### Rationale
+
+Using node ports ties the CNF to a specific node and therefore makes the CNF less portable and scalable.
+
+#### Remediation
+
+Review all Helm Charts & Kubernetes Manifest files for the CNF and remove all occurrences of the nostPort field in you configuration. Alternatively, configure a service or use another mechanism for exposing your container.
+
+#### Usage
+
+`./cnf-testsuite nodeport_not_used`
+
+----------
+
+### HostPort not used
+
+#### Overview
+
+Checks the CNF's workload resources for any containers using the hostPort configuration field to expose the application.
+Expectation: The hostPort configuration field is not found in any of the defined containers.
+
+#### Rationale
+
+Using host ports ties the CNF to a specific node and therefore makes the CNF less portable and scalable.
+
+#### Remediation
+
+Review all Helm Charts & Kubernetes Manifest files for the CNF and remove all occurrences of the hostPort field in you configuration. Alternatively, configure a service or use another mechanism for exposing your container.
+
+#### Usage
+
+`./cnf-testsuite hostport_not_used`
+
+----------
+
+### Hardcoded IP addresses in K8s runtime configuration
+
+#### Overview
+
+The hardcoded IP address test will scan all of the CNF's workload resources and check for any static, hardcoded IP addresses being used in the configuration.
+Expectation: That no hardcoded IP addresses or subnet masks are found in the Kubernetes workload resources for the CNF.
+
+#### Rationale
+
+Using a hard coded IP in a CNF's configuration designates *how* (imperative) a CNF should achieve a goal, not *what* (declarative) goal the CNF should achieve.
+
+#### Remediation
+
+Review all Helm Charts & Kubernetes Manifest files of the CNF and look for any hardcoded usage of ip addresses. If any are found, you will need to use an operator or some other method to abstract the IP management out of your configuration in order to pass this test.
+
+#### Usage
+
+`./cnf-testsuite hardcoded_ip_addresses_in_k8s_runtime_configuration`
+
+----------
+
+### Secrets used
+
+#### Overview
+
+The secrets used test will scan all the Kubernetes workload resources to see if K8s secrets are being used.
+Expectation: The CNF is using K8s secrets for the management of sensitive data.
+
+#### Rationale
+
+If a CNF uses K8s Secrets instead of unencrypted environment variables or ConfigMaps, there is [less risk of the Secret (and its data) being exposed](https://kubernetes.io/docs/concepts/configuration/secret/) during the workflow of creating, viewing, and editing Pods.
+
+#### Remediation
+
+Remove any sensitive data stored in configmaps, environment variables and instead utilize K8s Secrets for storing such data.
+Alternatively, you can use an operator or some other method to abstract hardcoded sensitive data out of your configuration.
+The whole test passes if _any_ workload resource in the CNF uses a (non-exempt) secret. If no workload resources use a (non-exempt) secret, the test is skipped.
+
+#### Usage
+
+`./cnf-testsuite secrets_used`
+
+----------
+
+### Immutable configmap
+
+#### Overview
+
+The immutable configmap test will scan the CNF's workload resources and see if immutable configmaps are being used.
+Expectation: Immutable configmaps are being used for non-mutable data.
+
+#### Rationale
+
+For clusters that extensively use ConfigMaps (at least tens of thousands of unique ConfigMap to Pod mounts),
+[preventing changes](https://kubernetes.io/docs/concepts/configuration/configmap/#configmap-immutable)
+to their data has the following advantages:
+
+* protects you from accidental (or unwanted) updates that could cause application outages
+* improves performance of your cluster by significantly reducing load on kube-apiserver, by closing watches for ConfigMaps marked as immutable.
+
+#### Remediation
+
+Use immutable configmaps for any non-mutable configuration data.
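+
+A minimal sketch of an immutable ConfigMap (the name and data are hypothetical); note that once marked immutable, the ConfigMap's data cannot be changed and it must be replaced instead:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example-config        # hypothetical name
+data:
+  LOG_LEVEL: info
+immutable: true
+```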
+
+#### Usage
+
+`./cnf-testsuite immutable_configmap`
+
+----------
+
+## Category: 5G Tests
+
+A 5G core is an important part of a service provider's telecommunications offering. A cloud native 5G architecture uses immutable infrastructure, declarative configuration, and microservices when creating and hosting 5G cloud native network functions.
+
+### Usage
+
+All 5G: `./cnf-testsuite 5g`
+
+----------
+
+### SMF_UPF_core_validator
+
+#### Overview
+
+Checks the PFCP heartbeat between the SMF and UPF to make sure it remains close to the baseline.
+Expectation: The 5G core should continue to function during various CNF tests.
+
+#### Rationale
+
+A 5G core's [SMF and UPF CNFs have a heartbeat](https://www.etsi.org/deliver/etsi_ts/123500_123599/123527/15.01.00_60/ts_123527v150100p.pdf), implemented using the PFCP protocol standard, which measures whether the connection between the two CNFs is active.
+After measuring a baseline of the heartbeat, comparing that baseline against the heartbeat's performance while the test functions run will expose the [cloud native resilience](https://www.cncf.io/blog/2021/09/23/cloud-native-chaos-and-telcos-enforcing-reliability-and-availability-for-telcos/) of the cloud native 5G core.
+
+#### Remediation
+
+#### Usage
+
+`./cnf-testsuite smf_upf_core_validator`
+
+----------
+
+### SUCI_enabled
+
+#### Overview
+
+Checks to see if the 5G core supports SUCI concealment.
+Expectation: The 5G core should use SUCI concealment.
+
+#### Rationale
+
+In order to [protect identifying information](https://nickvsnetworking.com/5g-subscriber-identifiers-suci-supi/) from being sent over the network as clear text, 5G cloud native cores should implement [SUPI and SUCI concealment](https://www.etsi.org/deliver/etsi_ts/133500_133599/133514/16.04.00_60/ts_133514v160400p.pdf).
+
+#### Remediation
+
+#### Usage
+
+`./cnf-testsuite suci_enabled`
+
+----------
+
+## Category: RAN Tests
+
+A cloud native radio access network's (RAN) cloud native functions should use immutable infrastructure, declarative configuration, and microservices.
+ORAN cloud native functions should adhere to cloud native principles while also complying with the [ORAN alliance's standards](https://www.o-ran.org/blog/o-ran-alliance-introduces-48-new-specifications-released-since-july-2021).
+
+### Usage
+
+All RAN: `./cnf-testsuite ran`
+
+----------
+
+### ORAN_e2_connection
+
+#### Overview
+
+Checks if a RIC uses an ORAN-compatible E2 connection.
+Expectation: An ORAN RIC should use an e2 connection.
+
+#### Rationale
+
+A near real-time RAN intelligent controller (RIC) uses the [E2 standard](https://wiki.o-ran-sc.org/display/RICP/E2T+Architecture) as an open, interoperable interface to connect to [RAN-optimized applications, onboarded as xApps](https://www.5gtechnologyworld.com/how-does-5gs-o-ran-e2-interface-work/).
+The xApps use platform services available in the near-RT RIC to communicate with the downstream network functions through the E2 interface.
+
+#### Remediation
+
+#### Usage
+
+`./cnf-testsuite oran_e2_connection`
+
+----------
+
+## Category: Platform Tests
+
+### Usage
+
+All platform: `./cnf-testsuite platform`
+
+All platform hardware and scheduling: `./cnf-testsuite platform:hardware_and_scheduling`
+
+All platform resilience: `./cnf-testsuite platform:resilience poc`
+
+All platform security: `./cnf-testsuite platform:security`
+
+----------
+
+### K8s Conformance
+
+#### Overview
+
+Checks if your platform passes the K8s conformance test. See the [conformance test documentation](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) for details on what is tested.
+Expectation: The K8s cluster passes the K8s conformance tests
+
+#### Rationale
+
+A Vendor's Kubernetes Platform should pass [Kubernetes Conformance](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md). This ensures that the platform offering meets the same required APIs, features & interoperability expectations as in open source community versions of K8s.
+Applications that can operate on a [Certified Kubernetes](https://www.cncf.io/certification/software-conformance/) should be cross-compatible with any other Certified Kubernetes platform.
+
+#### Remediation
+
+Check that [Sonobuoy](https://github.com/vmware-tanzu/sonobuoy) can be successfully run and passes without failure on your platform. Any failures found by Sonobuoy will provide debug and remediation steps required to get your K8s cluster into a conformant state.
+
+#### Usage
+
+`./cnf-testsuite k8s_conformance`
+
+----------
+
+### ClusterAPI enabled
+
+#### Overview
+
+Checks the platform's Kubernetes Nodes to see if they were instantiated by ClusterAPI.
+Expectation: The cluster has Cluster API enabled which manages at least one Node.
+
+#### Rationale
+
+A Kubernetes Platform should leverage [Cluster API](https://cluster-api.sigs.k8s.io/) to ensure that best practices are followed for both bootstrapping and cluster lifecycle management. Kubernetes is a complex system that relies on several components being configured correctly; maintaining an in-house lifecycle management system for Kubernetes is unlikely to meet best practice guidelines unless significant resources are dedicated to it.
+
+#### Remediation
+
+Enable ClusterAPI and start using it to manage the provisioning and lifecycle of your Kubernetes clusters.
+
+#### Usage
+
+`./cnf-testsuite clusterapi_enabled`
+
+----------
+
+### OCI Compliant
+
+#### Overview
+
+Inspects all worker nodes and checks if the run-time being used for scheduling is OCI compliant.
+Expectation: All worker nodes are using an OCI compliant run-time.
+
+#### Rationale
+
+The [OCI Initiative](https://opencontainers.org/) was created to ensure that runtimes conform to both the runtime-spec and image-spec. These two specifications outline how a “filesystem bundle” is unpacked on disk and that the image itself contains sufficient information to launch the application on the target platform.
+As a best practice, your platform must use an OCI compliant runtime; this ensures that the runtime used is cross-compatible and supports interoperability with other runtimes. This means that workloads can be freely moved to other runtimes, preventing vendor lock-in.
+
+#### Remediation
+
+Check if your Kubernetes platform is using an [OCI Compliant Runtime](https://opencontainers.org/). If your platform is not using an OCI compliant runtime, you'll need to switch to one that is in order to pass this test.
+
+#### Usage
+
+`./cnf-testsuite platform:oci_compliant`
+
+----------
+
+### (POC) Worker reboot recovery
+
+#### Overview
+
+**WARNING**: this is a destructive test and will reboot your _host_ node! Do not run this unless you have completely separate cluster, e.g. development or test cluster.
+
+Run node failure test which forces a reboot of the Node ("host system"). The Pods on that node should be rescheduled to a new Node.
+Expectation: Pods should reschedule after a node failure.
+
+#### Rationale
+
+Cloud native systems should be self-healing. To follow cloud native best practices, your platform should be resilient and reschedule all workloads when such node failures occur.
+
+#### Remediation
+
+Reboot a worker node in your Kubernetes cluster and verify that the node can recover and re-join the cluster in a schedulable state. Workloads should also be rescheduled to the node once it's back online.
+
+#### Usage
+
+`./cnf-testsuite platform:worker_reboot_recovery poc destructive`
+
+----------
+
+### Cluster admin
+
+#### Overview
+
+Check which subjects have cluster-admin RBAC permissions – either by being bound to the cluster-admin clusterrole, or by having equivalent high privileges.
+Expectation: The [cluster admin role should not be bound to a Pod](https://bit.ly/C0035_cluster_admin)
+
+#### Rationale
+
+Role-based access control (RBAC) is a key security feature in Kubernetes. RBAC can restrict the allowed actions of the various identities in the cluster. Cluster-admin is a built-in high privileged role in Kubernetes. Attackers who have permissions to create bindings and cluster-bindings in the cluster can create a binding to the cluster-admin ClusterRole or to other high privileges roles.
+As a best practice, a principle of least privilege should be followed and cluster-admin privilege should only be used on an as-needed basis.
+
+#### Remediation
+
+You should apply least privilege principle. Make sure cluster admin permissions are granted only when it is absolutely necessary. Don't use subjects with high privileged permissions for daily operations.
+
+#### Usage
+
+`./cnf-testsuite platform:cluster_admin`
+
+----------
+
+### Control plane hardening
+
+#### Overview
+
+Checks if the insecure-port flag is set for the K8s API Server.
+Expectation: That the k8s control plane is secure and not hosted on an [insecure port](https://bit.ly/C0005_Control_Plane)
+
+#### Rationale
+
+The control plane is the core of Kubernetes and gives users the ability to view containers, schedule new Pods, read Secrets, and execute commands in the cluster. Therefore, it should be protected. It is recommended to avoid control plane exposure to the Internet or to an untrusted network and require TLS encryption.
+
+#### Remediation
+
+Set the insecure-port flag of the API server to zero.
+See more at [ARMO-C0005](https://bit.ly/C0005_Control_Plane)
+
+#### Usage
+
+`./cnf-testsuite platform:control_plane_hardening`
+
+----------
+
+### Tiller images
+
+#### Overview
+
+Checks if a Helm v2 / Tiller image is deployed and used on the platform.
+Expectation: The platform should be using Helm v3+ without Tiller.
+
+#### Rationale
+
+Tiller, found in Helm v2, has known security challenges. It requires administrative privileges and acts as a shared resource accessible to any authenticated user. Tiller can lead to privilege escalation, as restricted users can impact other users. For these reasons, it is recommended to use Helm v3+, which does not contain Tiller.
+
+#### Remediation
+
+Switch to using Helm v3+ and make sure not to pull any images with the name tiller in them.
+
+#### Usage
+
+`./cnf-testsuite platform:helm_tiller`