EDU-1551: Edits pages-links-copy
franrob-projects committed Jan 23, 2025
1 parent a012d25 commit 07d4e5b
Showing 17 changed files with 821 additions and 2,438 deletions.
23 changes: 3 additions & 20 deletions content/integrations/aws-authentication.textile
@@ -1,29 +1,12 @@
---
title: AWS authentication
meta_description: "There are two AWS authentication schemes that can be used when working with Ably: Credentials and the ARN of an assumable role."
meta_keywords: "Ably, AWS, credentials, ARN, ARN of assumable role, SQS, Lambda, Kinesis."
meta_description: "There are two AWS authentication schemes that can be used when working with Ably: credentials and the ARN of an assumable role."
meta_keywords: "Ably, AWS, ARN, IAM, SQS, Lambda, Kinesis."
redirect_from:
- /general/aws-authentication
---

When adding an integration rule for an AWS endpoint, such as an "AWS Lambda function rule":/general/webhooks/aws-lambda/ or a "Firehose rule":/general/firehose/ for AWS Kinesis or AWS SQS, there are two AWS authentication methods that can be used with Ably:

* Credentials
* ARN of an assumable role

h2(#aws-credentials). Credentials

These are a set of credentials for an AWS "IAM":https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html user that has permission to invoke your Lambda function, and, in the case of a Firehose rule, publish to your AWS SQS queue or AWS Kinesis stream. These credentials consist of the 'access key id' and the 'secret access key' for the AWS IAM user. These are entered into the rule dialog as @access_key_id:secret_access_key@, that is, as a key-value pair, joined by a single colon (without a space). You can read more about these credentials in the AWS blog article "How to quickly find and update your access keys, password, and MFA setting using the AWS Management Console":https://aws.amazon.com/blogs/security/how-to-find-update-access-keys-password-mfa-aws-management-console/.

This is not the recommended approach, as "AWS best practices":https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#sharing-credentials state that you should not share your access keys with third-parties.

When using this scheme you need to "create a policy":#create-policy.
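The linked section walks through creating that policy. As a rough sketch (the action and the function ARN below are illustrative placeholders, not values prescribed by Ably), a policy document permitting Lambda invocation could be assembled like this:

```python
import json

# Illustrative IAM policy document for the credentials scheme: the IAM
# user needs permission to invoke the target Lambda function. The ARN
# below is a placeholder -- substitute your own function's ARN.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["lambda:InvokeFunction"],
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-handler",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

For a Firehose rule, the action would instead cover the relevant service, for example @sqs:SendMessage@ or @kinesis:PutRecord@.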

h2(#aws-arn). ARN of an assumable role

This scheme enables you to delegate access to resources on your account using an IAM role that the Ably AWS account can assume, avoiding the need to share user credentials with Ably. See "this AWS blog article on roles":https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html.

This is the recommended scheme as it follows "AWS best practices":https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#sharing-credentials, and means you do not need to share your 'access key id' and the 'secret access key' with Ably, but instead specify the "ARN":https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns of a role.
Delegate access to your AWS resources by creating an IAM role that the Ably AWS account can assume. This follows AWS "best practices":https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#sharing-credentials, as it avoids sharing access keys directly. Specify the role's "ARN":https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns, which grants Ably the necessary permissions in a secure manner. For more information, see the "AWS guide":https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html on cross-account roles.
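As an illustration of what the assumable role's trust relationship might contain (the account ID and external ID below are placeholders, not documented constants; use the values shown in your Ably dashboard when you create the rule):

```python
import json

# Placeholder values -- substitute the account ID and external ID
# presented in your Ably dashboard's rule dialog.
ABLY_ACCOUNT_ID = "123456789012"
EXTERNAL_ID = "your-ably-app-id"

# A cross-account trust policy allowing that account to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ABLY_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```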

When using this scheme there are two steps you need to carry out:

32 changes: 11 additions & 21 deletions content/integrations/continuous-streaming/amqp-rule.textile
@@ -8,7 +8,7 @@ redirect_from:
- /general/firehose/amqp-rule
---

You can use a AMQP rule to send "data":/integrations/payloads#sources such as messages, occupancy, lifecycle and presence events from Ably to AMQP.
Use Ably's "Firehose":/integrations/continuous-streaming/firehose#firehose-rules AMQP rule to send "data":/integrations/continuous-streaming/firehose#data-sources such as messages, occupancy, lifecycle and presence events from Ably to AMQP.

h2(#creating-amqp-rule). Creating an AMQP rule

Expand All @@ -23,11 +23,11 @@ To create a rule in your "dashboard":https://ably.com/dashboard#58:

* Log in and select the application you wish to integrate with AMQP.
* Click the *Integrations* tab.
* Click the *+ New Integration Rule* button.
* Click the *New Integration Rule* button.
* Choose Firehose.
* Choose AMQP.
* Configure the settings applicable to your use case and your AMQP installation.
* Click *Create* to create the rule.
* Click *Create*.

h4(#header). AMQP header and authentication settings

@@ -37,27 +37,17 @@ h4(#header). AMQP header and authentication settings
|Another header button |Adds additional headers for the message. |


h4(#general). AMQP general rule settings
h4(#settings). AMQP rule settings

The following explains the AMQP general rule settings:

|_. Section |_. Purpose |
|URL |Specifies the HTTPS URL for the SQS queue, including credentials, region, and stream name. |
|AWS region |Defines the AWS region associated with the SQS queue. |
|AWS authentication scheme |Allows selection of the authentication method: AWS credentials or ARN of an assumable role. |
|AWS credentials |Enter AWS credentials in `key:value` format for authentication. |

h4(#specific). AMQP-specific settings

The following explains the AMQP-specific settings:

|_. Section |_. Purpose |
|Routing key |Specifies the routing key used by the AMQP exchange to route messages to a physical queue. Supports interpolation. |
|Route mandatory |Ensures delivery is rejected if the route does not exist. |
|Route persistent |Marks messages as persistent, instructing the broker to write them to disk if the queue is durable. |
|Optional TTL (minutes)|Allows overriding the default queue TTL for messages to be persisted. |
|Create button |Click to save and provision the AMQP settings. |
|Cancel button |Click to cancel the configuration and return to the previous screen. |
|_. Section |_. Purpose |
|URL |Specifies the HTTPS URL for the SQS queue, including credentials, region, and stream name. |
|AWS region |Defines the AWS region associated with the SQS queue. |
|AWS authentication scheme |Allows selection of the authentication method: AWS credentials or ARN of an assumable role. |
|AWS credentials |Enter AWS credentials in `key:value` format for authentication. |
|Routing key |Specifies the routing key used by the AMQP exchange to route messages to a physical queue. Supports interpolation. |
|Optional TTL (minutes) |Allows overriding the default queue TTL for messages to be persisted. |
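The "supports interpolation" note on the routing key means the key can be built from message attributes rather than fixed per rule. The @#{...}@ placeholder syntax below is an assumption for illustration only; check the rule dialog for the exact tokens Ably supports:

```python
import re

def interpolate(template: str, attrs: dict) -> str:
    """Replace #{name} placeholders with message attributes.

    The #{...} syntax is assumed for illustration; consult the rule
    dialog for the placeholders Ably actually supports.
    """
    return re.sub(r"#\{(\w+)\}", lambda m: attrs[m.group(1)], template)

# A routing key template keyed on the originating channel:
print(interpolate("ably.#{channelName}", {"channelName": "orders"}))  # ably.orders
```

A broker could then bind one physical queue per channel by matching on the interpolated key.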


h3(#creating-rule-control-api). Creating an AMQP rule using Control API
4 changes: 2 additions & 2 deletions content/integrations/continuous-streaming/firehose.textile
@@ -1,5 +1,5 @@
---
title: Firehose
title: Firehose overview
meta_description: "Firehose allows you to stream data from Ably to an external service for realtime processing."
meta_keywords: "Firehose, realtime streaming, stream processing"
languages:
@@ -27,7 +27,7 @@ Unlike "channels":/channels, which follow a "pub/sub pattern":https://en.wikiped

As each message is delivered once to your streaming or queueing server, this design is commonly used to process realtime data published by Ably asynchronously. For example, using workers consuming data from your stream or queue, you could persist each message of a live chat to your own database, start publishing updates once a channel becomes active, or trigger an event if a device has submitted a location that indicates that it has reached its destination.

Find out why Ably thinks streams and message queues help solve many of the challenges associated with consuming pub/sub data server-side in the article: "Message queues — the right way to process and work with realtime data on your servers":https://ably.com/blog/message-queues-the-right-way.
Find out why Ably thinks streams and message queues help solve many of the challenges associated with consuming pub/sub data server-side in the article: "Message queues — the right way to process and work with realtime data on your servers":https://ably.com/blog/message-queues-the-right-way.

If you want to consume realtime data from a queue, take a look at "Ably Queues":/general/queues. They provide a simple and robust way to consume realtime data from your worker servers without having to worry about queueing infrastructure.

31 changes: 11 additions & 20 deletions content/integrations/continuous-streaming/kafka-rule.textile
@@ -8,7 +8,7 @@ redirect_from:
- /general/firehose/kafka-rule
---

You can use a Kafka rule to send "data":/integrations/payloads#sources such as messages, occupancy, lifecycle and presence events from Ably to Kafka.
Use Ably's "Firehose":/integrations/continuous-streaming/firehose#firehose-rules Kafka rule to send "data":/integrations/continuous-streaming/firehose#data-sources such as messages, occupancy, lifecycle and presence events from Ably to Kafka.

If you want to send data from Kafka to Ably, you can use the "Ably Kafka Connector":/general/kafka-connector, rather than Kafka rules.

@@ -25,33 +25,24 @@ To create a rule in your "dashboard":https://ably.com/dashboard#58:

* Log in and select the application you wish to integrate with Kafka.
* Click the *Integrations* tab.
* Click the *+ New Integration Rule* button.
* Click the *New Integration Rule* button.
* Choose Firehose.
* Choose Kafka.
* Configure the settings applicable to your use case and your Kafka installation.
* Click *Create*.

h4(#general). Kafka general rule settings
h4(#settings). Kafka rule settings

The following explains the Kafka general rule settings:

|_. Section |_. Purpose |
|Source |Select the type of event source. |
|Channel filter |Allows filtering of the rule based on a regular expression matching the channel name. |
|Encoding |Selects a setting for encoding the payload. |
|Enveloped |Wraps the payload with additional metadata when checked. Uncheck to receive raw payloads.|

h4(#specific). Kafka-specific settings

The following explains the Kafka-specific settings:
|_. Section |_. Purpose |
|"Source":/integrations/continuous-streaming/firehose#data-sources |Select the type of event source. |
|Channel filter |Allows filtering of the rule based on a regular expression matching the channel name. |
|Routing key |Used to route messages to Kafka topics. |
|Mechanism |Dropdown to select the SASL/SCRAM mechanism used for Kafka connection. |
|Brokers |List of Kafka broker endpoints in the format `<host>:<port>`. |
|Another broker |Option to add additional Kafka broker endpoints. |

|_. Section |_. Purpose |
|Routing key |Used to route messages to Kafka topics. |
|Mechanism |Dropdown to select the SASL/SCRAM mechanism used for Kafka connection. |
|Username |Enter the username required to authenticate with the Kafka server. |
|Password |Enter the password required to authenticate with the Kafka server. |
|Brokers |List of Kafka broker endpoints in the format `<host>:<port>`. |
|Another broker |Option to add additional Kafka broker endpoints. |
|Create button |Click to save and provision the Kafka settings for the integration rule. |

In this section, set up authentication for Kafka by selecting your preferred SASL/SCRAM mechanism and providing credentials.
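To relate the table's fields to a client-side view of the same connection details, here is a hedged sketch mapping them onto librdkafka-style configuration properties (hostnames and credentials are placeholders, and the property mapping is illustrative rather than taken from this page):

```python
# Fields as entered in the rule dialog (placeholder values).
rule_settings = {
    "brokers": ["kafka-1.example.com:9092", "kafka-2.example.com:9092"],
    "mechanism": "SCRAM-SHA-256",
    "username": "ably",
    "password": "secret",
}

# The same details expressed as librdkafka-style client properties.
client_config = {
    "bootstrap.servers": ",".join(rule_settings["brokers"]),
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": rule_settings["mechanism"],
    "sasl.username": rule_settings["username"],
    "sasl.password": rule_settings["password"],
}

print(client_config["bootstrap.servers"])
```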

32 changes: 12 additions & 20 deletions content/integrations/continuous-streaming/kinesis-rule.textile
@@ -8,7 +8,7 @@ redirect_from:
- /general/firehose/kinesis-rule
---

You can use a Kinesis rule to send "data":/integrations/payloads#sources such as messages, occupancy, lifecycle and presence events from Ably to Kinesis.
Use Ably's "Firehose":/integrations/continuous-streaming/firehose#firehose-rules Kinesis rule to send "data":/integrations/continuous-streaming/firehose#data-sources such as messages, occupancy, lifecycle and presence events from Ably to Kinesis.

h2(#creating-kinesis-rule). Creating a Kinesis rule

@@ -23,31 +23,23 @@ To create a rule in your "dashboard":https://ably.com/dashboard#58:

* Log in and select the application you wish to integrate with Kinesis.
* Click the *Integrations* tab.
* Click the *+ New Integration Rule* button.
* Click the *New Integration Rule* button.
* Choose Firehose.
* Choose AWS Kinesis.
* Configure the settings applicable to your use case and your Kinesis installation.
* Click *Create* to create the rule.
* Click *Create*.

h4(#aws). AWS Kinesis rule settings
h4(#settings). AWS Kinesis rule settings

The following explains the AWS Kinesis generalrule settings:
The following explains the AWS Kinesis rule settings:

|_. Section |_. Purpose |
|AWS Region |Specifies the AWS region for the Kinesis Stream. |
|Stream Name |Defines the name of the Kinesis Stream to connect to. |
|AWS authentication scheme |Choose the authentication method: AWS credentials or ARN of an assumable role. |
|AWS Credentials |Enter your AWS credentials in `key:value` format. |

h4(#general). Kinesis general rule settings

The following explains the Kinesis general rule settings:

|_. Section |_. Purpose |
|Source |Defines the type of event to deliver . |
|Channel filter |Allows filtering of the rule using a regular expression matching the channel name. |
|Encoding |Selects a setting for encoding the payload. |
|Enveloped |When checked, wraps payloads with additional metadata. Uncheck for raw payloads. |
|_. Section |_. Purpose |
|AWS Region |Specifies the AWS region for the Kinesis Stream. |
|Stream Name |Defines the name of the Kinesis Stream to connect to. |
|AWS authentication scheme |Choose the authentication method: AWS credentials or ARN of an assumable role. |
|AWS Credentials |Enter your AWS credentials in `key:value` format. |
|"Source":/integrations/continuous-streaming/firehose#data-sources |Defines the type of event to deliver. |
|Channel filter |Allows filtering of the rule using a regular expression matching the channel name. |
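The "Enveloped" setting mentioned above wraps payloads with additional metadata. A hedged sketch of the difference between enveloped and raw delivery follows; the envelope field names here are assumed for illustration, not taken from this page:

```python
import json

# Assumed shape of an enveloped Kinesis record (field names are
# illustrative -- check the Firehose docs for the exact envelope).
envelope = {
    "source": "channel.message",
    "appId": "aBcDeF",
    "channel": "orders",
    "messages": [
        {"name": "update", "data": json.dumps({"id": 42, "status": "shipped"})}
    ],
}

# With "Enveloped" unchecked, only the raw payload would be delivered:
raw = envelope["messages"][0]["data"]
print(raw)
```

The envelope tells your consumer which channel and event type produced each record, at the cost of a slightly larger payload.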


h3(#creating-rule-control-api). Creating a Kinesis rule using Control API