Minor doc bug fixes (#451)
nkmcalli authored Feb 18, 2025
1 parent ae6b2ff commit 698e13f
Showing 2 changed files with 15 additions and 15 deletions.
16 changes: 8 additions & 8 deletions docs/docs/user-guide/developer-guide/environment-config.md
@@ -5,16 +5,16 @@ The following are the environment configuration variables that you can specify in the .env file.

| Name | Example | Description |
|----------------------------------|--------------------------------|-----------------------------------------------------------------------|
-| `CAPTION_CLASSIFIER_GRPC_TRITON` | - `triton:8001` <br/> | The endpoint where the caption classifier model is hosted using gRPC for communication. This is used to send requests for caption classification. You must specify only ONE of an http or gRPC endpoint. If both are specified gRPC will take precedence. |
-| `CAPTION_CLASSIFIER_MODEL_NAME` | - `deberta_large` <br/> | The name of the caption classifier model. |
-| `DOUGHNUT_TRITON_HOST` | - `triton-doughnut` <br/> | The hostname or IP address of the DOUGHNUT model service. |
-| `DOUGHNUT_TRITON_PORT` | - `8001` <br/> | The port number on which the DOUGHNUT model service is listening. |
+| `CAPTION_CLASSIFIER_GRPC_TRITON` | `triton:8001` <br/> | The endpoint where the caption classifier model is hosted using gRPC for communication. This is used to send requests for caption classification. You must specify only ONE of an http or gRPC endpoint. If both are specified gRPC will take precedence. |
+| `CAPTION_CLASSIFIER_MODEL_NAME` | `deberta_large` <br/> | The name of the caption classifier model. |
+| `DOUGHNUT_TRITON_HOST` | `triton-doughnut` <br/> | The hostname or IP address of the DOUGHNUT model service. |
+| `DOUGHNUT_TRITON_PORT` | `8001` <br/> | The port number on which the DOUGHNUT model service is listening. |
| `INGEST_LOG_LEVEL` | - `DEBUG` <br/> - `INFO` <br/> - `WARNING` <br/> - `ERROR` <br/> - `CRITICAL` <br/> | The log level for the ingest service, which controls the verbosity of the logging output. |
| `MESSAGE_CLIENT_HOST` | - `redis` <br/> - `localhost` <br/> - `192.168.1.10` <br/> | Specifies the hostname or IP address of the message broker used for communication between services. |
| `MESSAGE_CLIENT_PORT` | - `7670` <br/> - `6379` <br/> | Specifies the port number on which the message broker is listening. |
-| `MINIO_BUCKET` | - `nv-ingest` <br/> | Name of MinIO bucket, used to store image, table, and chart extractions. |
-| `NGC_API_KEY` | - `nvapi-*************` <br/> | An authorized NGC API key, used to interact with hosted NIMs. To create an NGC key, go to [https://org.ngc.nvidia.com/setup/personal-keys](https://org.ngc.nvidia.com/setup/personal-keys). |
+| `MINIO_BUCKET` | `nv-ingest` <br/> | Name of MinIO bucket, used to store image, table, and chart extractions. |
+| `NGC_API_KEY` | `nvapi-*************` <br/> | An authorized NGC API key, used to interact with hosted NIMs. To create an NGC key, go to [https://org.ngc.nvidia.com/setup/personal-keys](https://org.ngc.nvidia.com/setup/personal-keys). |
| `NIM_NGC_API_KEY` || The key that NIM microservices inside docker containers use to access NGC resources. This is necessary only in some cases when it is different from `NGC_API_KEY`. If this is not specified, `NGC_API_KEY` is used to access NGC resources. |
| `NVIDIA_BUILD_API_KEY` || The key to access NIMs that are hosted on build.nvidia.com instead of a self-hosted NIM. This is necessary only in some cases when it is different from `NGC_API_KEY`. If this is not specified, `NGC_API_KEY` is used for build.nvidia.com. |
-| `OTEL_EXPORTER_OTLP_ENDPOINT` | - `http://otel-collector:4317` <br/> | The endpoint for the OpenTelemetry exporter, used for sending telemetry data. |
-| `REDIS_MORPHEUS_TASK_QUEUE` | - `morpheus_task_queue` <br/> | The name of the task queue in Redis where tasks are stored and processed. |
+| `OTEL_EXPORTER_OTLP_ENDPOINT` | `http://otel-collector:4317` <br/> | The endpoint for the OpenTelemetry exporter, used for sending telemetry data. |
+| `REDIS_MORPHEUS_TASK_QUEUE` | `morpheus_task_queue` <br/> | The name of the task queue in Redis where tasks are stored and processed. |
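As an illustration (not part of this commit), a `.env` assembled purely from the example values in the table above might look like the following sketch; the key shown is a placeholder, not a real credential:

```shell
# Illustrative .env built from the example values documented above
MESSAGE_CLIENT_HOST=redis
MESSAGE_CLIENT_PORT=7670
MINIO_BUCKET=nv-ingest
INGEST_LOG_LEVEL=INFO
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
REDIS_MORPHEUS_TASK_QUEUE=morpheus_task_queue
NGC_API_KEY=nvapi-xxxxxxxxxxxxxxxx   # placeholder value
```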
14 changes: 7 additions & 7 deletions docs/docs/user-guide/quickstart-guide.md
@@ -21,11 +21,11 @@ If preferred, you can also [start services one by one](developer-guide/deploymen

`git clone https://github.com/nvidia/nv-ingest`

-1. Change the directory to the cloned repo
+2. Change the directory to the cloned repo

`cd nv-ingest`.

-1. [Generate API keys](developer-guide/ngc-api-key.md) and authenticate with NGC with the `docker login` command:
+3. [Generate API keys](developer-guide/ngc-api-key.md) and authenticate with NGC with the `docker login` command:

```shell
# This is required to access pre-built containers and NIM microservices
...
```

```{note}
During the early access (EA) phase, you must apply for early access at [https://developer.nvidia.com/nemo-microservices-early-access/join](https://developer.nvidia.com/nemo-microservices-early-access/join). When your early access is approved, follow the instructions in the email to create an organization and team, link your profile, and generate your NGC API key.
```
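The login command itself is collapsed in this diff view. As a non-authoritative sketch, logging in to the NGC container registry conventionally uses the literal username `$oauthtoken` with your API key as the password:

```shell
# Conventional NGC registry login; '$oauthtoken' is a literal string,
# and the key is read from the environment rather than typed inline
echo "${NGC_API_KEY}" | docker login nvcr.io --username '$oauthtoken' --password-stdin
```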

-1. Create a .env file containing your NGC API key and the following paths. For more information, refer to [Environment Configuration Variables](developer-guide/environment-config.md).
+4. Create a .env file containing your NGC API key and the following paths. For more information, refer to [Environment Configuration Variables](developer-guide/environment-config.md).

```
# Container images must access resources from NGC.
...
```

```{note}
As configured by default in [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml#L52), the DePlot NIM is on a dedicated GPU. All other NIMs and the NV-Ingest container itself share a second. This avoids DePlot and other NIMs competing for VRAM on the same device. Change the `CUDA_VISIBLE_DEVICES` pinnings as desired for your system within [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml).
```
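As a sketch of what that pinning can look like (the device indices here are hypothetical; adjust them for your system), each service's environment in docker-compose.yaml sets the variable along these lines:

```shell
# Hypothetical pinning: DePlot alone on GPU 1, everything else sharing GPU 0
CUDA_VISIBLE_DEVICES=1   # deplot service
CUDA_VISIBLE_DEVICES=0   # nv-ingest runtime and the remaining NIMs
```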

-1. Make sure NVIDIA is set as your default container runtime before running the docker compose command with the command:
+5. Make sure NVIDIA is set as your default container runtime before running the docker compose command with the command:

`sudo nvidia-ctk runtime configure --runtime=docker --set-as-default`
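On systemd-based hosts, a Docker restart is typically needed for the new default to take effect; the following standard commands (a convenience, not part of this commit) restart the daemon and confirm the setting:

```shell
# Restart Docker so the runtime change takes effect, then verify the default
sudo systemctl restart docker
docker info | grep -i "default runtime"   # expect: Default Runtime: nvidia
```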

-1. Start all services:
+6. Start all services:

`docker compose --profile retrieval up`

```{tip}
By default, we have [configured log levels to be verbose](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml). It's possible to observe service startup proceeding. You will notice a lot of log messages. Disable verbose logging by configuring `NIM_TRITON_LOG_VERBOSE=0` for each NIM in [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml).
```
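If you prefer to keep the terminal free, the same profile can run detached using standard Compose flags, with logs tailed on demand:

```shell
# Start the same profile in the background, then follow the aggregated logs
docker compose --profile retrieval up -d
docker compose logs -f
```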

-1. When all services have fully started, `nvidia-smi` should show processes like the following:
+7. When all services have fully started, `nvidia-smi` should show processes like the following:

```
# If it's taking > 1m for `nvidia-smi` to return, the bus will likely be busy setting up the models.
...
+---------------------------------------------------------------------------------------+
```
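While model setup is still in progress, polling with the standard `watch` utility avoids re-running the command by hand:

```shell
# Re-run nvidia-smi every 5 seconds until the expected processes appear
watch -n 5 nvidia-smi
```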

-1. Observe the started containers with `docker ps`:
+8. Observe the started containers with `docker ps`:

```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
...
```
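To trim the `docker ps` listing to the columns that matter here, a standard Go-template format string works:

```shell
# Show only container names and status
docker ps --format "table {{.Names}}\t{{.Status}}"
```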
