Commit

vale need to fix

rustagir committed Nov 15, 2023
1 parent 6c1b4fb commit c9324f4

Showing 12 changed files with 25 additions and 25 deletions.
2 changes: 1 addition & 1 deletion source/introduction/kafka-connect.txt
@@ -69,7 +69,7 @@ For more information on Kafka Connect, see the following resources:
reliable pipeline.
- There are a large number of community maintained connectors for connecting
Apache Kafka to popular datastores like MongoDB, PostgreSQL, and MySQL using the
- Kafka Connect framework. This reduces the amount of boilerplate code you need to
+ Kafka Connect framework. This reduces the amount of boilerplate code you must
write and maintain to manage database connections, error handling,
dead letter queue integration, and other problems involved in connecting Apache Kafka
with a datastore.
22 changes: 11 additions & 11 deletions source/monitoring.txt
@@ -58,22 +58,22 @@ to satisfy those use cases:
* - Use Case
- Metrics to Use

- * - You need to know if a component of your pipeline is falling behind.
+ * - You must know if a component of your pipeline is falling behind.
- Use the ``latest-kafka-time-difference-ms``
metric. This metric indicates the interval of time between
when a record arrived in a Kafka topic and when your connector
received that record. If the value of this metric is increasing,
it signals that there may be a problem with {+kafka+} or MongoDB.

- * - You need to know the total number of records your connector
+ * - You must know the total number of records your connector
wrote to MongoDB.
- Use the ``records`` metric.

- * - You need to know the total number of write errors your connector
+ * - You must know the total number of write errors your connector
encountered when attempting to write to MongoDB.
- Use the ``batch-writes-failed`` metric.

- * - You need to know if your MongoDB performance is getting slower
+ * - You must know if your MongoDB performance is getting slower
over time.
- Use the ``in-task-put-duration-ms`` metric to initially diagnose
a slowdown.
@@ -84,7 +84,7 @@ to satisfy those use cases:
- ``batch-writes-failed-duration-over-<number>-ms``
- ``processing-phase-duration-over-<number>-ms``

- * - You need to find a bottleneck in how {+kafka-connect+} and your MongoDB sink
+ * - You must find a bottleneck in how {+kafka-connect+} and your MongoDB sink
connector write {+kafka+} records to MongoDB.
- Compare the values of the following metrics:

@@ -108,17 +108,17 @@ to satisfy those use cases:
* - Use Case
- Metrics to Use

- * - You need to know if a component of your pipeline is falling behind.
+ * - You must know if a component of your pipeline is falling behind.
- Use the ``latest-mongodb-time-difference-secs``
metric. This metric indicates how old the most recent change
stream event your connector processed is. If this metric is increasing,
it signals that there may be a problem with {+kafka+} or MongoDB.

- * - You need to know the total number of change stream events your source connector
+ * - You must know the total number of change stream events your source connector
has processed.
- Use the ``records`` metric.

- * - You need to know the percentage of records your connector
+ * - You must know the percentage of records your connector
received but failed to write to {+kafka+}.
- Perform the following calculation with the ``records``,
``records-filtered``, and ``records-acknowledged`` metrics:
@@ -127,7 +127,7 @@ to satisfy those use cases:

(records - (records-acknowledged + records-filtered)) / records

- * - You need to know the average size of the documents your connector
+ * - You must know the average size of the documents your connector
has processed.
- Perform the following calculation with the ``mongodb-bytes-read`` and
``records`` metrics:
@@ -139,14 +139,14 @@ to satisfy those use cases:
To learn how to calculate the average size of records over a span of
time, see :ref:`mongodb-bytes-read <kafka-monitoring-averge-record-size-span>`.

- * - You need to find a bottleneck in how {+kafka-connect+} and your MongoDB source
+ * - You must find a bottleneck in how {+kafka-connect+} and your MongoDB source
connector write MongoDB documents to {+kafka+}.
- Compare the values of the following metrics:

- ``in-task-poll-duration-ms``
- ``in-connect-framework-duration-ms``

- * - You need to know if your MongoDB performance is getting slower
+ * - You must know if your MongoDB performance is getting slower
over time.
- Use the ``in-task-poll-duration-ms`` metric to initially diagnose
a slowdown.
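
To make the source connector calculations above concrete, assume hypothetical metric values of
records = 1000, records-acknowledged = 950, records-filtered = 30, and mongodb-bytes-read = 5,000,000:

   (1000 - (950 + 30)) / 1000 = 0.02

   5,000,000 / 1000 = 5,000

With these values, roughly 2 percent of the records the connector received were never written to
{+kafka+}, and the average processed document size is about 5,000 bytes. The numbers are
illustrative only; read the actual values from your connector's JMX metrics.
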
2 changes: 1 addition & 1 deletion source/quick-start.txt
@@ -22,7 +22,7 @@ Kafka topic, and to read data from a Kafka topic and write it to MongoDB.

To complete the steps in this guide, you must download and work in a
**sandbox**, a containerized development environment that includes services
- you need to build a sample *data pipeline*.
+ you must have to build a sample *data pipeline*.

Check failure on line 25 in source/quick-start.txt (GitHub Actions / TDBX Vale rules):
[vale] reported by reviewdog 🐶 [MongoDB.ConciseTerms] 'must' is preferred over 'have to'.

Read the following sections to set up your sandbox and sample data pipeline.

6 changes: 3 additions & 3 deletions source/security-and-authentication/mongodb-aws-auth.txt
@@ -24,7 +24,7 @@ AWS IAM credentials, see the guide on :atlas:`How to Set Up Unified AWS Access <

.. important::

- You need to use {+connector+} version 1.5 or later to connect to a MongoDB
+ You must use {+connector+} version 1.5 or later to connect to a MongoDB
server set up to authenticate using your AWS IAM credentials. AWS IAM
credential authentication is available in MongoDB server version 4.4
and later.
@@ -39,8 +39,8 @@ connection URI connector property as shown in the following example:

connection.uri=mongodb://<AWS access key id>:<AWS secret access key>@<hostname>:<port>/?authSource=<authentication database>&authMechanism=MONGODB-AWS&authMechanismProperties=AWS_SESSION_TOKEN:<AWS session token>

- The preceding example uses the following placeholders which you need to
- replace:
+ The preceding example uses the following placeholders which you must
+ replace with your own credentials:

.. list-table::
:header-rows: 1
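
As a purely illustrative sketch of the property above with its placeholders filled in, the
following line uses AWS's documented example credentials and a fictitious hostname; reserved
characters such as "/" in a secret access key must be percent-encoded in the URI:

   connection.uri=mongodb://AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI%2FK7MDENG%2FbPxRfiCYEXAMPLEKEY@mongodb0.example.com:27017/?authSource=$external&authMechanism=MONGODB-AWS&authMechanismProperties=AWS_SESSION_TOKEN:<AWS session token>

The <AWS session token> placeholder remains because session tokens apply only when you
authenticate with temporary credentials.
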
2 changes: 1 addition & 1 deletion source/sink-connector/fundamentals/write-strategies.txt
@@ -326,7 +326,7 @@ business key, perform the following tasks:
#. Specify the ``DeleteOneBusinessKeyStrategy`` write model strategy in the
connector configuration.

- Suppose you need to delete a calendar event from a specific year from
+ Suppose you want to delete a calendar event from a specific year from
a collection that contains a document that resembles the following:

.. _delete-one-business-key-sample-document:
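
A minimal sketch of the sink connector properties for this delete scenario might resemble the
following; the projected fields are hypothetical stand-ins for the business key of the calendar
event, and the exact values depend on your data:

   document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy
   document.id.strategy.partial.value.projection.type=AllowList
   document.id.strategy.partial.value.projection.list=eventType,year
   writemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.DeleteOneBusinessKeyStrategy
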
2 changes: 1 addition & 1 deletion source/source-connector/fundamentals/change-streams.txt
@@ -87,7 +87,7 @@ The oplog is a special capped collection which cannot use indexes. For more
information on this limitation, see
:manual:`Change Streams Production Recommendations </administration/change-streams-production-recommendations/#indexes>`.

- If you need to improve change stream performance, use a faster disk for
+ If you want to improve change stream performance, use a faster disk for
your MongoDB cluster and increase the size of your WiredTiger cache. To
learn how to set your WiredTiger cache, see the guide on the
:manual:`WiredTiger Storage Engine </core/wiredtiger/#memory-use>`.
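
If you choose to increase the WiredTiger cache, a minimal mongod configuration file sketch looks
like the following; the 8 GB value is illustrative and should be sized for your own hardware:

   storage:
     wiredTiger:
       engineConfig:
         cacheSizeGB: 8
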
2 changes: 1 addition & 1 deletion source/source-connector/usage-examples/copy-existing-data.txt
@@ -10,7 +10,7 @@ This usage example demonstrates how to copy data from a MongoDB collection to an
Example
-------

- Suppose you need to copy a MongoDB collection to {+kafka+} and filter some of the data.
+ Suppose you must copy a MongoDB collection to {+kafka+} and filter some of the data.

Check failure on line 13 in source/source-connector/usage-examples/copy-existing-data.txt (GitHub Actions / TDBX Vale rules):
[vale] reported by reviewdog 🐶 [MongoDB.Wordiness] Consider using 'some' instead of 'some of the'.

Your requirements and your solutions are as follows:

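
A sketch of the source connector properties for this kind of copy-and-filter scenario might
resemble the following; the namespace and the $match filter are hypothetical, and older
connector versions use the legacy copy.existing properties instead of startup.mode:

   connection.uri=mongodb://mongodb0:27017/?replicaSet=rs0
   database=sample
   collection=customers
   startup.mode=copy_existing
   startup.mode.copy.existing.pipeline=[{"$match": {"country": "Mexico"}}]
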
2 changes: 1 addition & 1 deletion source/source-connector/usage-examples/custom-pipeline.txt
@@ -19,7 +19,7 @@ For more information, see the MongoDB Server manual entry on
Example
-------

- Suppose you are coordinating an event and need to collect names and arrival times
+ Suppose you are coordinating an event and must collect names and arrival times
of each guest at a specific event. Whenever a guest checks into the event,
an application inserts a new document that contains the following details:

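
For this check-in scenario, an illustrative sketch of the pipeline property might look like the
following; the field name in the $match stage is a hypothetical event identifier:

   pipeline=[{"$match": {"$and": [{"operationType": "insert"}, {"fullDocument.eventId": "summit-2023"}]}}]
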
4 changes: 2 additions & 2 deletions source/source-connector/usage-examples/schema.txt
@@ -7,7 +7,7 @@ Specify a Schema
This usage example demonstrates how you can configure your {+source-connector+}
to apply a custom **schema** to your data. A schema is a
definition that specifies the structure and type information about data in an
- {+kafka+} topic. Use a schema when you need to ensure the data on the topic populated
+ {+kafka+} topic. Use a schema when you must ensure the data on the topic populated
by your source connector has a consistent structure.

To learn more about using schemas with the connector, see the
@@ -17,7 +17,7 @@ Example
-------

Suppose your application keeps track of customer data in a MongoDB
- collection, and you need to publish this data to a Kafka topic. You want
+ collection, and you must publish this data to a Kafka topic. You want
the subscribers of the customer data to receive consistently formatted data.
You choose to apply a schema to your data.

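
A sketch of the schema-related source connector properties might look like the following; the
Avro schema is a simplified, hypothetical customer record rather than the schema from the
example above:

   output.format.value=schema
   output.schema.value={"type": "record", "name": "Customer", "fields": [{"name": "name", "type": "string"}, {"name": "visits", "type": {"type": "array", "items": "string"}}]}
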
2 changes: 1 addition & 1 deletion source/tutorials/migrate-time-series.txt
@@ -19,7 +19,7 @@ data consists of measurements taken at time intervals, metadata that describes
the measurement, and the time of the measurement.

To convert data from a MongoDB collection to a time series collection using
- the connector, you need to perform the following tasks:
+ the connector, you must perform the following tasks:

#. Identify the time field common to all documents in the collection.
#. Configure a source connector to copy the existing collection data to a
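
A minimal sketch of the sink connector properties that create the time series collection might
resemble the following; the topic, namespace, and field names are hypothetical:

   topics=stock.prices
   connection.uri=mongodb://mongodb0:27017
   database=Stocks
   collection=PriceHistory
   timeseries.timefield=tx_time
   timeseries.timefield.auto.convert=true
   timeseries.metafield=symbol
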
2 changes: 1 addition & 1 deletion source/tutorials/replicate-with-cdc.txt
@@ -16,7 +16,7 @@ Overview
Follow this tutorial to learn how to use a
**change data capture (CDC) handler** to replicate data with the {+connector+}.
A CDC handler is an application that translates CDC events into MongoDB
- write operations. Use a CDC handler when you need to reproduce the changes
+ write operations. Use a CDC handler when you must reproduce the changes
in one datastore into another datastore.

In this tutorial, you configure and run MongoDB Kafka source and sink
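
For reference, an illustrative sketch of the single sink connector property that enables a CDC
handler for MongoDB change stream events follows; the connector also ships handlers for other
CDC event formats such as Debezium:

   change.data.capture.handler=com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler
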
2 changes: 1 addition & 1 deletion source/tutorials/tutorial-setup.txt
@@ -5,7 +5,7 @@ Kafka Connector Tutorial Setup
==============================

The tutorials in this section run on a development environment using Docker to
- package the dependencies and configurations you need to run the
+ package the dependencies and configurations you must have to run the

Check failure on line 8 in source/tutorials/tutorial-setup.txt (GitHub Actions / TDBX Vale rules):
[vale] reported by reviewdog 🐶 [MongoDB.ConciseTerms] 'must' is preferred over 'have to'.
{+connector-long+}. Make sure you complete the development environment setup
steps before proceeding to the tutorials.
