diff --git a/docs/getting-started/download.md b/docs/getting-started/download.md
index 6b53861f..6a1092bb 100644
--- a/docs/getting-started/download.md
+++ b/docs/getting-started/download.md
@@ -12,7 +12,7 @@ After that you will be provided with the credentials to access the software on [
In order to use the software, log in to the registry using the following command:

```bash
-docker login ghcr.io
+docker login ghcr.io --username provided_user_name --password provided_token_string
```

## Downloading hhfab
diff --git a/docs/vlab/demo.md b/docs/vlab/demo.md
index 3c651021..1f6c099a 100644
--- a/docs/vlab/demo.md
+++ b/docs/vlab/demo.md
@@ -86,6 +86,118 @@ graph TD
    L1 & L2 & L2 & L3 & L4 & L5 <----> S1 & S2
```

+## Utility-based VPC creation
+
+### Setup VPCs
+`hhfab vlab` includes a utility to create VPCs in VLAB: the `hhfab vlab setup-vpcs` sub-command.
+
+```
+NAME:
+   hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them
+
+USAGE:
+   hhfab vlab setup-vpcs [command options]
+
+OPTIONS:
+   --dns-servers value, --dns value [ --dns-servers value, --dns value ]    DNS servers for VPCs advertised by DHCP
+   --force-clenup, -f                                                       start with removing all existing VPCs and VPCAttachments (default: false)
+   --help, -h                                                               show help
+   --interface-mtu value, --mtu value                                       interface MTU for VPCs advertised by DHCP (default: 0)
+   --ipns value                                                             IPv4 namespace for VPCs (default: "default")
+   --name value, -n value                                                   name of the VM or HW to access
+   --servers-per-subnet value, --servers value                              number of servers per subnet (default: 1)
+   --subnets-per-vpc value, --subnets value                                 number of subnets per VPC (default: 1)
+   --time-servers value, --ntp value [ --time-servers value, --ntp value ]  Time servers for VPCs advertised by DHCP
+   --vlanns value                                                           VLAN namespace for VPCs (default: "default")
+   --wait-switches-ready, --wait                                            wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)
+
+   Global options:
+
+   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
+   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
+   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
+   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
+```
+
+### Setup Peering
+`hhfab vlab` includes a utility to create VPC peerings in VLAB: the `hhfab vlab setup-peerings` sub-command.
+
+```
+NAME:
+   hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)
+
+USAGE:
+   Setup test scenario with VPC/External Peerings by specifying requests in the format described below.
+
+   Example command:
+
+   $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24
+
+   Which will produce:
+   1. VPC peering between vpc-01 and vpc-02
+   2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border
+   3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted
+   4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route
+      from external permitted as well any route that belongs to 22.22.22.0/24
+
+   VPC Peerings:
+
+   1+2 -- VPC peering between vpc-01 and vpc-02
+   demo-1+demo-2 -- VPC peering between demo-1 and demo-2
+   1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present
+   1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border
+   1+2:remote=border -- same as above
+
+   External Peerings:
+
+   1~as5835 -- external peering for vpc-01 with External as5835
+   1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing
+      default subnet and any route from external
+   1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and
+      default route from external permitted
+   1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details
+   1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above
+
+OPTIONS:
+   --help, -h                     show help
+   --name value, -n value         name of the VM or HW to access
+   --wait-switches-ready, --wait  wait for switches to be ready before before and after configuring peerings (default: true)
+
+   Global options:
+
+   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
+   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
+   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
+   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
+```
+
+### Test Connectivity
+`hhfab vlab` includes a utility to test connectivity between servers inside VLAB: the `hhfab vlab test-connectivity` sub-command.
+
+```
+NAME:
+   hhfab vlab test-connectivity - test connectivity between all servers
+
+USAGE:
+   hhfab vlab test-connectivity [command options]
+
+OPTIONS:
+   --curls value                  number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)
+   --help, -h                     show help
+   --iperfs value                 seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)
+   --iperfs-speed value           minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)
+   --name value, -n value         name of the VM or HW to access
+   --pings value                  number of pings to send between each pair of servers (0 to disable) (default: 5)
+   --wait-switches-ready, --wait  wait for switches to be ready before testing connectivity (default: true)
+
+   Global options:
+
+   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
+   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
+   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
+   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
+
+```
## Manual VPC creation

### Creating and attaching VPCs
@@ -93,7 +205,7 @@ You can create and attach VPCs to the VMs using the `kubectl fabric vpc` command
cluster using the kubeconfig.

For example, run the following commands to create 2 VPCs with a single subnet each, a DHCP server enabled with its optional IP address range start defined, and to attach them to some of the test servers:

-```console
+```
core@control-1 ~ $ kubectl get conn | grep server
server-01--mclag--leaf-01--leaf-02   mclag   5h13m
server-02--mclag--leaf-01--leaf-02   mclag   5h13m
@@ -117,7 +229,7 @@ core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connec

The VPC subnet should belong to an IPv4Namespace, the default one in the VLAB is `10.0.0.0/16`:

-```console
+```
core@control-1 ~ $ kubectl get ipns
NAME      SUBNETS           AGE
default   ["10.0.0.0/16"]   5h14m
@@ -126,7 +238,7 @@ default ["10.0.0.0/16"] 5h14m

After you created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested
configuration was applied to the switches:

-```console
+```
core@control-1 ~ $ kubectl get agents
NAME      ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION
leaf-01   server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0
@@ -149,7 +261,7 @@ the little helper pre-installed by Fabricator on test servers, `hhnet`.

For `server-01`:

-```console
+```
core@server-01 ~ $ hhnet cleanup
core@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2
10.0.1.10/24
@@ -173,7 +285,7 @@ core@server-01 ~ $ ip a

And for `server-02`:

-```console
+```
core@server-02 ~ $ hhnet cleanup
core@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2
10.0.2.10/24
@@ -199,7 +311,7 @@ core@server-02 ~ $ ip a

You can test connectivity between the servers before peering the switches using the `ping` command:

-```console
+```
core@server-01 ~ $ ping 10.0.2.10
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
@@ -210,7 +322,7 @@ From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms
```

-```console
+```
core@server-02 ~ $ ping 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
From 10.0.2.1 icmp_seq=1 Destination Net Unreachable
@@ -225,7 +337,7 @@ From 10.0.2.1 icmp_seq=3 Destination Net Unreachable

To enable connectivity between the VPCs, peer them using `kubectl fabric vpc peer`:

-```console
+```
core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2
07:04:58 INF VPCPeering created name=vpc-1--vpc-2
```
@@ -233,7 +345,7 @@ core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2
Make sure to wait until the peering is applied to the switches using `kubectl get agents` command. After that, you can
test connectivity between the servers again:

-```console
+```
core@server-01 ~ $ ping 10.0.2.10
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms
@@ -245,7 +357,7 @@ PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
rtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms
```

-```console
+```
core@server-02 ~ $ ping 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms
@@ -260,12 +372,12 @@ rtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms

If you delete the VPC peering with `kubectl delete` applied to the relevant object and wait for the agent to apply the
configuration on the switches, you can observe that connectivity is lost again:

-```console
+```
core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2
vpcpeering.vpc.githedgehog.com "vpc-1--vpc-2" deleted
```

-```console
+```
core@server-01 ~ $ ping 10.0.2.10
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
@@ -280,7 +392,7 @@ From 10.0.1.1 icmp_seq=3 Destination Net Unreachable

You can see duplicate packets in the output of the `ping` command between some of the servers. This is expected
behavior and is caused by the limitations in the VLAB environment.

-    ```console
+    ```
    core@server-01 ~ $ ping 10.0.5.10
    PING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.
    64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms
@@ -294,124 +406,12 @@ From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
    3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms
    rtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms
    ```
-## Utility based VPC creation
-
-### Setup VPCs
-`hhfab vlab` includes a utility to create VPCs in vlab. This utility is a `hhfab vlab` sub-command. `hhfab vlab setup-vpcs`.
-
-```console
-NAME:
-   hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them
-
-USAGE:
-   hhfab vlab setup-vpcs [command options]
-
-OPTIONS:
-   --dns-servers value, --dns value [ --dns-servers value, --dns value ]    DNS servers for VPCs advertised by DHCP
-   --force-clenup, -f                                                       start with removing all existing VPCs and VPCAttachments (default: false)
-   --help, -h                                                               show help
-   --interface-mtu value, --mtu value                                       interface MTU for VPCs advertised by DHCP (default: 0)
-   --ipns value                                                             IPv4 namespace for VPCs (default: "default")
-   --name value, -n value                                                   name of the VM or HW to access
-   --servers-per-subnet value, --servers value                              number of servers per subnet (default: 1)
-   --subnets-per-vpc value, --subnets value                                 number of subnets per VPC (default: 1)
-   --time-servers value, --ntp value [ --time-servers value, --ntp value ]  Time servers for VPCs advertised by DHCP
-   --vlanns value                                                           VLAN namespace for VPCs (default: "default")
-   --wait-switches-ready, --wait                                            wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)
-
-   Global options:
-
-   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
-   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
-   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
-   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
-```
-
-### Setup Peering
-`hhfab vlab` includes a utility to create VPC peerings in VLAB. This utility is a `hhfab vlab` sub-command. `hhfab vlab setup-peerings`.
-
-```console
-NAME:
-   hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)
-
-USAGE:
-   Setup test scenario with VPC/External Peerings by specifying requests in the format described below.
-
-   Example command:
-
-   $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24
-
-   Which will produce:
-   1. VPC peering between vpc-01 and vpc-02
-   2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border
-   3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted
-   4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route
-      from external permitted as well any route that belongs to 22.22.22.0/24
-
-   VPC Peerings:
-
-   1+2 -- VPC peering between vpc-01 and vpc-02
-   demo-1+demo-2 -- VPC peering between demo-1 and demo-2
-   1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present
-   1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border
-   1+2:remote=border -- same as above
-
-   External Peerings:
-
-   1~as5835 -- external peering for vpc-01 with External as5835
-   1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing
-      default subnet and any route from external
-   1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and
-      default route from external permitted
-   1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details
-   1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above
-
-OPTIONS:
-   --help, -h                     show help
-   --name value, -n value         name of the VM or HW to access
-   --wait-switches-ready, --wait  wait for switches to be ready before before and after configuring peerings (default: true)
-
-   Global options:
-
-   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
-   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
-   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
-   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
-```
-
-### Test Connectivity
-`hhfab vlab` includes a utility to test connectivity between servers inside VLAB. This utility is a `hhfab vlab` sub-command. `hhfab vlab test-connectivity`.
-
-```console
-NAME:
-   hhfab vlab test-connectivity - test connectivity between all servers
-
-USAGE:
-   hhfab vlab test-connectivity [command options]
-
-OPTIONS:
-   --curls value                  number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)
-   --help, -h                     show help
-   --iperfs value                 seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)
-   --iperfs-speed value           minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)
-   --name value, -n value         name of the VM or HW to access
-   --pings value                  number of pings to send between each pair of servers (0 to disable) (default: 5)
-   --wait-switches-ready, --wait  wait for switches to be ready before testing connectivity (default: true)
-
-   Global options:
-
-   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
-   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
-   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
-   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
-
-```

## Using VPCs with overlapping subnets

First, create a second IPv4Namespace with the same subnet as the default one:

-```console
+```
core@control-1 ~ $ kubectl get ipns
NAME      SUBNETS           AGE
default   ["10.0.0.0/16"]   24m
@@ -440,7 +440,7 @@ Let's assume that `vpc-1` already exists and is attached to `server-01` (see [Cr

Now we can create `vpc-3` with the same subnet as `vpc-1` (but in the different IPv4Namespace) and attach it to the `server-03`:

-```console
+```
core@control-1 ~ $ cat < vpc-3.yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
diff --git a/docs/vlab/overview.md b/docs/vlab/overview.md
index 249d1380..8b7b3122 100644
--- a/docs/vlab/overview.md
+++ b/docs/vlab/overview.md
@@ -1,4 +1,4 @@
-# Overview
+# VLAB Overview

It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's
a great way to try out Fabric and learn about its look and feel, API, and capabilities. It's not suitable for any
data plane or performance testing, or for production use.
@@ -7,7 +7,7 @@ data plane or performance testing, or for production use.

In the VLAB all switches start as empty VMs with only the ONIE image on them, and they go through the whole discovery,
boot and installation process like on real hardware.

-## Overview
+## HHFAB

The `hhfab` CLI provides a special command `vlab` to manage the virtual labs. It allows you to run sets of virtual
machines to simulate the Fabric infrastructure including control node, switches, test servers and it automatically runs
@@ -45,9 +45,11 @@ sure that you have at least allocated RAM and disk space for all VMs.

NVMe SSD for VM disks is highly recommended.

-## Installing prerequisites
+## Installing Prerequisites

-On Ubuntu 22.04 LTS you can install all required packages using the following commands:
+To run VLAB, your system needs `docker`, `qemu`, `kvm`, and `hhfab`. On Ubuntu 22.04 LTS you can install all required packages using the following commands:
+
+### Docker

```bash
curl -fsSL https://get.docker.com -o install-docker.sh
sudo bash ./install-docker.sh
@@ -56,6 +58,7 @@ sudo usermod -aG docker $USER
newgrp docker
```

+### QEMU/KVM
```bash
sudo apt install -y qemu-kvm swtpm-tools tpm2-tools socat
sudo usermod -aG kvm $USER
@@ -71,6 +74,30 @@ INFO: /dev/kvm exists
KVM acceleration can be used
```

+### ORAS
+
+For convenience, Hedgehog provides a script to install `oras`:
+
+```bash
+curl -fsSL https://i.hhdev.io/oras | bash
+```
+
+### hhfab
+
+Hedgehog maintains a utility to install and configure VLAB, called `hhfab`.
+
+You need a GitHub access token to download `hhfab`; to get one, submit a ticket using the [Hedgehog Support Portal](https://support.githedgehog.com/). Once in possession of the credentials, use the provided username and token to log in to the GitHub container registry:
+
+```bash
+docker login ghcr.io --username provided_username --password provided_token
+```
+
+Once logged in, download and run the script:
+
+```bash
+curl -fsSL https://i.hhdev.io/hhfab | bash
+```
+
## Next steps

-* [Running VLAB](./running.md)
+* [Configure and Run VLAB](./running.md)
diff --git a/docs/vlab/running.md b/docs/vlab/running.md
index c8f25aae..c4901eb3 100644
--- a/docs/vlab/running.md
+++ b/docs/vlab/running.md
@@ -5,7 +5,7 @@ before running VLAB.

## Initialize VLAB

-First, initialize Fabricator by running `hhfab init --dev`. This command supports several customization options that are listed in the output of `hhfab init --help`.
+First, initialize Fabricator by running `hhfab init --dev`. This command creates the `fab.yaml` file, which is the main configuration file for the fabric. It also supports several customization options, listed in the output of `hhfab init --help`.

```console
ubuntu@docs:~$ hhfab init --dev
@@ -16,7 +16,7 @@ ubuntu@docs:~$ hhfab init --dev
```

## VLAB Topology
-By default, the command creates 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links as well as 2 loopbacks per leaf for implementing VPC loopback workaround. To generate the preceding topology, `hhfab vlab gen`. You can also configure the number of spines, leafs, connections, and so on. For example, flags `--spines-count` and `--mclag-leafs-count` allow you to set the number of spines and MCLAG leaves, respectively. For complete options, `hhfab vlab gen -h`.
+By default, the VLAB topology consists of 2 spines, 2 MCLAG leaves, and 1 non-MCLAG leaf, with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links, 2 MCLAG session links, and 2 loopbacks per leaf for implementing the VPC loopback workaround. To generate this topology, run `hhfab vlab gen`. You can also configure the number of spines, leaves, connections, and so on. For example, the flags `--spines-count` and `--mclag-leafs-count` set the number of spines and MCLAG leaves, respectively. For the complete list of options, run `hhfab vlab gen -h`.

```console
ubuntu@docs:~$ hhfab vlab gen
@@ -29,10 +29,10 @@ ubuntu@docs:~$ hhfab vlab gen
21:27:16 INF >>>   mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1
21:27:16 INF Generated wiring file name=vlab.generated.yaml
```
+You can jump [to the instructions](#build-the-installer-and-start-vlab) to start VLAB, or see the next section for customizing the topology.

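+For instance, a larger topology might be generated like this (a minimal sketch; the counts are illustrative, and the full flag list is in `hhfab vlab gen -h`):
+
+```bash
+# Regenerate the wiring diagram with 3 spines and 4 MCLAG leaves
+# (illustrative values; run this before building and starting VLAB)
+hhfab vlab gen --spines-count 3 --mclag-leafs-count 4
+```
+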
### Collapsed Core

-If a Collapsed Core topology is desired, after the `hhfab init --dev` step, edit the resulting `fab.yaml` file and change the `mode: spine-leaf` to `mode: collapsed-core`.
-Or if you want to run Collapsed Core topology with 2 MCLAG switches:
+If you want a Collapsed Core topology, after the `hhfab init --dev` step, edit the resulting `fab.yaml` file, change `mode: spine-leaf` to `mode: collapsed-core`, and regenerate the topology:

```console
ubuntu@docs:~$ hhfab vlab gen
@@ -69,7 +69,7 @@ prerequisites for running the VLAB.

## Build the Installer and Start VLAB

-In VLAB the build and run step are combined into one command for simplicity, `hhfab vlab up`. For successive runs use the `--kill-stale` flag to ensure that any virtual machines from a previous run are gone. This command does not return, it runs as long as the VLAB is up. This is done so that shutdown is a simple `ctrl + c`.
+To build and start the virtual machines, use `hhfab vlab up`. For successive runs, use the `--kill-stale` flag to ensure that any virtual machines from a previous run are gone. `hhfab vlab up` runs in the foreground and does not return, which allows you to stop all VLAB VMs by simply pressing `Ctrl + C`.
```console
ubuntu@docs:~$ hhfab vlab up
11:48:22 INF Hedgehog Fabricator version=v0.30.0
@@ -119,7 +119,7 @@ When the message `INF Control node is ready vm=control-1 type=control` from the
has been displayed, you can get into the control node and other VMs to watch the Fabric coming up and switches getting
provisioned. See [Accessing the VLAB](#accessing-the-vlab).

-## Configuring VLAB VMs
+## Enable Outside Connectivity from VLAB VMs

By default, all test server VMs are isolated and have no connectivity to the host or the Internet. You can enable
connectivity using `hhfab vlab up --restrict-servers=false` to allow the test servers to access the Internet and
@@ -127,14 +127,6 @@ the host. When you enable connectivity, VMs get a default route pointing to the
VPC peering you need to configure test server VMs to use the VPC attachment as a default route (or just some specific subnets).

-## Default credentials
-
-Fabricator creates default users and keys for you to login into the control node and test servers as well as for the
-SONiC Virtual Switches.
-
-Default user with passwordless sudo for the control node and test servers is `core` with password `HHFab.Admin!`.
-Admin user with full access and passwordless sudo for the switches is `admin` with password `HHFab.Admin!`.
-Read-only, non-sudo user with access only to the switch CLI for the switches is `op` with password `HHFab.Op!`.

## Accessing the VLAB

@@ -167,9 +159,19 @@ Name: control-1
Ready: true
Basedir: .hhfab/vlab-vms/control-1
```
+### Default credentials
+
+Fabricator creates default users and keys for you to log in to the control node and test servers, as well as to the
+SONiC Virtual Switches.
+
+The default user with password-less sudo for the control node and test servers is `core` with password `HHFab.Admin!`.
+The admin user with full access and password-less sudo for the switches is `admin` with password `HHFab.Admin!`.
+The read-only, non-sudo user with access to the switch CLI is `op` with password `HHFab.Op!`.
+

-On the control node you have access to kubectl, Fabric CLI, and k9s to manage the Fabric. You can find information
-about the switches provisioning by running `kubectl get agents -o wide`. It usually takes about 10-15 minutes for the
+## Use kubectl to Interact with the Fabric
+On the control node you have access to kubectl, Fabric CLI, and k9s to manage the Fabric. To view information
+about the switches, run `kubectl get agents -o wide`. After the control node is available, it usually takes about 10-15 minutes for the
switches to get installed.

After the switches are provisioned, the command returns something like this:

@@ -192,7 +194,7 @@ applied. `CurrentG` shows the generation of the configuration the switch is supp

At that point Fabric is ready and you can use `kubectl` and `kubectl fabric` to manage the Fabric. You can find more
about managing the Fabric in the [Running Demo](demo.md) and [User Guide](../user-guide/overview.md) sections.

-## Getting main Fabric objects
+### Getting main Fabric objects

You can list the main Fabric objects by running `kubectl get` on the control node. You can find more details about
using the Fabric in the [User Guide](../user-guide/overview.md), [Fabric API](../reference/api.md) and
@@ -261,7 +263,7 @@ default 6h12m

## Reset VLAB

-To reset VLAB and start over directory and run `hhfab init -f` which will force overwrite your existing configuration, `fab.yaml`.
+If VLAB is currently running, press `Ctrl + C` to stop it. To reset VLAB and start over, run `hhfab init -f`; the `-f` flag forces overwriting your existing configuration, `fab.yaml`.

## Next steps