[Doc] Brokerless doc update (#52346)
(cherry picked from commit 9e52d28)

# Conflicts:
#	docs/en/sql-reference/sql-statements/data-manipulation/BROKER_LOAD.md
#	docs/zh/sql-reference/sql-statements/data-manipulation/BROKER_LOAD.md
DanRoscigno authored and mergify[bot] committed Oct 26, 2024
1 parent 4265738 commit 9b5a9b5
Showing 2 changed files with 13 additions and 0 deletions.
docs/en/sql-reference/sql-statements/data-manipulation/BROKER_LOAD.md

@@ -225,6 +225,15 @@ Open-source HDFS supports two authentication methods: simple authentication and
You can configure an HA mechanism for the NameNode of the HDFS cluster. This way, if the NameNode is switched over to another node, StarRocks can automatically identify the new node that serves as the NameNode. This includes the following scenarios:
- If you load data from a single HDFS cluster that has one Kerberos user configured, both broker-based loading and broker-free loading are supported.
  - To perform broker-based loading, make sure that at least one independent [broker group](../../../deployment/deploy_broker.md) is deployed, and place the `hdfs-site.xml` file in the `{deploy}/conf` path on the broker node that serves the HDFS cluster. StarRocks will add the `{deploy}/conf` path to the environment variable `CLASSPATH` upon broker startup, allowing the brokers to read information about the HDFS cluster nodes.
  - To perform broker-free loading, you only need to set `hadoop.security.authentication = kerberos` in `conf/core-site.xml` under the deployment directories of all FE, BE, and CN nodes in your cluster, and use the `kinit` command to configure the Kerberos account.
- If you load data from a single HDFS cluster that has multiple Kerberos users configured, only broker-based loading is supported. Make sure that at least one independent [broker group](../../../deployment/deploy_broker.md) is deployed, and place the `hdfs-site.xml` file in the `{deploy}/conf` path on the broker node that serves the HDFS cluster. StarRocks will add the `{deploy}/conf` path to the environment variable `CLASSPATH` upon broker startup, allowing the brokers to read information about the HDFS cluster nodes.
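As a minimal sketch of the broker-free setting described above, the `conf/core-site.xml` file under each FE, BE, and CN deployment directory would carry the `hadoop.security.authentication` property (the surrounding `<configuration>` wrapper is standard Hadoop config layout; any other properties in your file stay as they are):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- conf/core-site.xml under each FE, BE, and CN deployment directory -->
<configuration>
  <!-- Switch Hadoop client authentication from simple to Kerberos -->
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
</configuration>
```

After editing the file, configure the Kerberos account with `kinit` on each node, for example `kinit -kt /path/to/user.keytab user@REALM` (the keytab path, principal, and realm here are placeholders for your own environment).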
docs/zh/sql-reference/sql-statements/data-manipulation/BROKER_LOAD.md

@@ -227,7 +227,11 @@ Authentication configuration used by StarRocks to access the storage system.
- To perform broker-based loading, make sure that at least one independent [broker group](../../../deployment/deploy_broker.md) is deployed, and place the `hdfs-site.xml` file in the `{deploy}/conf` directory on the broker nodes that serve the HDFS cluster. When the broker process starts, it adds the `{deploy}/conf` directory to the `CLASSPATH` environment variable so that the brokers can read information about the nodes in the HDFS cluster.
- To perform broker-free loading, you only need to set `hadoop.security.authentication = kerberos` in the `conf/core-site.xml` file under the deployment directories of all FE, BE, and CN nodes, and use the `kinit` command to configure the Kerberos account.
- If you load data from a single HDFS cluster that has multiple Kerberos users configured, only broker-based loading is supported. Make sure that at least one independent [broker group](../../../deployment/deploy_broker.md) is deployed, and place the `hdfs-site.xml` file in the `{deploy}/conf` directory on the broker nodes that serve the HDFS cluster. When the broker process starts, it adds the `{deploy}/conf` directory to the `CLASSPATH` environment variable so that the brokers can read information about the nodes in the HDFS cluster.
