Snooty setup (#67)
* DOP-1608: set up kafka v1.3 for snooty build

* test
schmalliso committed Apr 26, 2022
1 parent a5375f5 commit 0717d18
Showing 8 changed files with 123 additions and 115 deletions.
5 changes: 5 additions & 0 deletions snooty.toml
@@ -0,0 +1,5 @@
+name = "kafka-connector"
+title = "MongoDB Kafka Connector"
+intersphinx = ["https://docs.mongodb.com/manual/objects.inv"]
+
+toc_landing_pages = ["/kafka-sink"]
3 changes: 1 addition & 2 deletions source/includes/externalize-secrets.rst
@@ -1,5 +1,4 @@
-.. admonition:: Avoid Exposing Your Authentication Credentials
-   :class: important
+.. important:: Avoid Exposing Your Authentication Credentials

To avoid exposing your authentication credentials in your
``connection.uri`` setting, use a
2 changes: 1 addition & 1 deletion source/kafka-docker-example.txt
@@ -185,7 +185,7 @@ http://localhost:9021/ and navigate to the cluster topics.
Next, explore the collection data in the MongoDB replica set:

* In your local shell, navigate to the ``docker`` directory from which you
-  ran the ``docker-compose`` commands and connect to the `mongo1` MongoDB
+  ran the ``docker-compose`` commands and connect to the ``mongo1`` MongoDB
instance using the following command:

.. code-block:: none
3 changes: 1 addition & 2 deletions source/kafka-sink-data-formats.txt
@@ -103,8 +103,7 @@ without schema** data format. The Kafka topic data must be in JSON format.
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

-.. admonition:: Choose the appropriate data format
-   :class: note
+.. note:: Choose the appropriate data format

When you specify **JSON without Schema**, any JSON schema objects such
as ``schema`` or ``payload`` are read explicitly rather than as a
40 changes: 24 additions & 16 deletions source/kafka-sink-postprocessors.txt
@@ -40,32 +40,37 @@ class or use one of the following pre-built ones:
- | Full Path: ``com.mongodb.kafka.connect.sink.processor.DocumentIdAdder``
| Uses a configured *strategy* to insert an ``_id`` field.

-.. seealso:: :ref:`Strategy options and configuration <config-document-id-adder>`.
+.. seealso::
+
+   :ref:`Strategy options and configuration <config-document-id-adder>`.
* - BlockListKeyProjector
- | Full Path: ``com.mongodb.kafka.connect.sink.processor.BlockListKeyProjector``
| Removes matching key fields from the sink record.

-.. seealso:: :ref:`Configuration <config-blocklist-allowlist>` and :ref:`Example <blocklist-example>`.
+.. seealso::
+
+   :ref:`Configuration <config-blocklist-allowlist>` and :ref:`Example <blocklist-example>`.
* - BlockListValueProjector
- | Full Path: ``com.mongodb.kafka.connect.sink.processor.BlockListValueProjector``
| Removes matching value fields from the sink record.

-.. seealso:: :ref:`Configuration <config-blocklist-allowlist>` and :ref:`Example <blocklist-example>`.
+.. seealso::
+
+   :ref:`Configuration <config-blocklist-allowlist>` and :ref:`Example <blocklist-example>`.
* - AllowListKeyProjector
- | Full Path: ``com.mongodb.kafka.connect.sink.processor.AllowListKeyProjector``
| Includes only matching key fields from the sink record.

-.. seealso:: :ref:`Configuration <config-blocklist-allowlist>` and :ref:`Example <allowlist-example>`.
+.. seealso::
+
+   :ref:`Configuration <config-blocklist-allowlist>` and :ref:`Example <allowlist-example>`.
* - AllowListValueProjector
- | Full Path: ``com.mongodb.kafka.connect.sink.processor.AllowListValueProjector``
| Includes only matching value fields from the sink record.

-.. seealso:: :ref:`Configuration <config-blocklist-allowlist>` and :ref:`Example <allowlist-example>`.
+.. seealso::
+
+   :ref:`Configuration <config-blocklist-allowlist>` and :ref:`Example <allowlist-example>`.
* - KafkaMetaAdder
- | Full Path: ``com.mongodb.kafka.connect.sink.processor.KafkaMetaAdder``
| Adds a field composed of the concatenation of Kafka topic, partition, and offset to the document.
@@ -74,14 +79,16 @@ class or use one of the following pre-built ones:
- | Full Path: ``com.mongodb.kafka.connect.sink.processor.field.renaming.RenameByMapping``
| Renames fields that are an exact match to a specified field name in the key or value document.

-.. seealso:: :ref:`Renaming configuration <config-field-renaming>` and :ref:`Example <field-renaming-mapping-example>`.
+.. seealso::
+
+   :ref:`Renaming configuration <config-field-renaming>` and :ref:`Example <field-renaming-mapping-example>`.
* - RenameByRegex
- | Full Path: ``com.mongodb.kafka.connect.sink.processor.field.renaming.RenameByRegex``
| Renames fields that match a regular expression.

-.. seealso:: :ref:`Renaming configuration <config-field-renaming>` and :ref:`Example <field-renaming-regex-example>`.
+.. seealso::
+
+   :ref:`Renaming configuration <config-field-renaming>` and :ref:`Example <field-renaming-regex-example>`.

You can configure the post processor chain by specifying an ordered,
comma-separated list of fully-qualified ``PostProcessor`` class names:
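As a sketch of such a list, using two of the pre-built class names from the table above (the example in the file itself is truncated in this view, and the ``post.processor.chain`` property name is an assumption about the connector's sink settings):

.. code-block:: properties

   post.processor.chain=com.mongodb.kafka.connect.sink.processor.DocumentIdAdder,com.mongodb.kafka.connect.sink.processor.KafkaMetaAdder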
@@ -178,8 +185,7 @@ To define a custom strategy, create a class that implements the interface
and provide the fully-qualified path to the ``document.id.strategy``
setting.
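For instance, wiring in a custom strategy class might look like the following (``com.example.CustomIdStrategy`` is a hypothetical class name used only for illustration):

.. code-block:: properties

   document.id.strategy=com.example.CustomIdStrategy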

-.. admonition:: Selected strategy may have implications on delivery semantics
-   :class: note
+.. note:: Selected strategy may have implications on delivery semantics

BSON ObjectId or UUID strategies can only guarantee at-least-once
delivery since new ids would be generated on retries or re-processing.
@@ -324,10 +330,10 @@ The previous example projection configurations demonstrated exact
string matching on field names. The projection ``list`` setting also supports
the following wildcard patterns matching on field names:

* "``*``" (`star`): matches a string of any length for the level in the
* "``*``" (``star``): matches a string of any length for the level in the
document in which it is specified.

* "``**``" (`double star`): matches the current and all nested levels from
* "``**``" (``double star``): matches the current and all nested levels from
which it is specified.

The examples below demonstrate how to use each wildcard pattern and the
@@ -637,7 +643,7 @@ The post processor applied the following changes:
subdocuments of ``crepes`` are matched. In the matched fields, all
instances of "purchased" are replaced with "quantity".

-.. admonition:: Ensure renaming does not result in duplicate keys in the same document
+.. tip:: Ensure renaming does not result in duplicate keys in the same document

The renaming post processors update the key fields of a JSON document
which can result in duplicate keys within a document. They skip the
@@ -650,9 +656,9 @@ Custom Write Models

A **write model** defines the behavior of bulk write operations made on a
MongoDB collection. The default write model for the connector is
-:java-docs-latest:`ReplaceOneModel
+:java-docs:`ReplaceOneModel
<javadoc/com/mongodb/client/model/ReplaceOneModel.html>` with
-:java-docs-latest:`ReplaceOptions </javadoc/com/mongodb/client/model/ReplaceOptions.html>`
+:java-docs:`ReplaceOptions </javadoc/com/mongodb/client/model/ReplaceOptions.html>`
set to upsert mode.

You can override the default write model by specifying a custom one in the
@@ -674,8 +680,9 @@ strategies are provided with the connector:
- | Replaces at most one document that matches filters provided by the ``document.id.strategy`` setting.
| Set the following configuration: ``writemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.ReplaceOneBusinessKeyStrategy``

-.. seealso:: Example of usage in :ref:`writemodel-strategy-business-key`.
+.. seealso::
+
+   Example of usage in :ref:`writemodel-strategy-business-key`.
* - DeleteOneDefaultStrategy
- | Deletes at most one document that matches the id specified by the ``document.id.strategy`` setting, only when the document contains a null value record.
| Implicitly specified when the configuration setting ``mongodb.delete.on.null.values=true`` is set.
@@ -685,8 +692,9 @@ strategies are provided with the connector:
- | Adds ``_insertedTS`` (inserted timestamp) and ``_modifiedTS`` (modified timestamp) fields into documents.
| Set the following configuration: ``writemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneTimestampsStrategy``

-.. seealso:: Example of usage in :ref:`writemodel-strategy-timestamps`.
+.. seealso::
+
+   Example of usage in :ref:`writemodel-strategy-timestamps`.
* - UpdateOneBusinessKeyTimestampStrategy
- | Adds ``_insertedTS`` (inserted timestamp) and ``_modifiedTS`` (modified timestamp) fields into documents that match the filters provided by the ``document.id.strategy`` setting.
| Set the following configuration: ``writemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneBusinessKeyTimestampStrategy``
33 changes: 12 additions & 21 deletions source/kafka-sink-properties.txt
@@ -43,15 +43,13 @@ data to sink to MongoDB. For an example configuration file, see
- string
- | A regular expression that matches the Kafka topics that the sink connector should watch.

-.. example::
-   The following regex matches topics such as
-   "activity.landing.clicks" and "activity.support.clicks",
-   but not "activity.landing.views" or "activity.clicks":
+| The following regex matches topics such as
+| "activity.landing.clicks" and "activity.support.clicks",
+| but not "activity.landing.views" or "activity.clicks":

-   .. code-block:: none
+.. code-block:: none

-      topics.regex=activity\\.\\w+\\.clicks$
+   topics.regex=activity\\.\\w+\\.clicks$

| *Required*
| **Note:** You can only define either ``topics`` or ``topics.regex``.
@@ -61,11 +59,9 @@ data to sink to MongoDB. For an example configuration file, see
- string
- | A :manual:`MongoDB connection URI string </reference/connection-string/#standard-connection-string-format>`.

-.. example::
-
-   .. code-block:: none
+.. code-block:: none

-      mongodb://username:password@localhost/
+   mongodb://username:password@localhost/

.. include:: /includes/externalize-secrets.rst

@@ -146,11 +142,9 @@ data to sink to MongoDB. For an example configuration file, see
- string
- | An inline JSON array with objects describing field name mappings.

-.. example::
-
-   .. code-block:: none
+.. code-block:: none

-      [ { "oldName":"key.fieldA", "newName":"field1" }, { "oldName":"value.xyz", "newName":"abc" } ]
+   [ { "oldName":"key.fieldA", "newName":"field1" }, { "oldName":"value.xyz", "newName":"abc" } ]

| **Default**: ``[]``
| **Accepted Values**: A valid JSON array
@@ -159,11 +153,9 @@ data to sink to MongoDB. For an example configuration file, see
- string
- | An inline JSON array containing regular expression statement objects.

-.. example::
-
-   .. code-block:: none
+.. code-block:: none

-      [ {"regexp":"^key\\\\..*my.*$", "pattern":"my", "replace":""}, {"regexp":"^value\\\\..*$", "pattern":"\\\\.", "replace":"_"} ]
+   [ {"regexp":"^key\\\\..*my.*$", "pattern":"my", "replace":""}, {"regexp":"^value\\\\..*$", "pattern":"\\\\.", "replace":"_"} ]

| **Default**: ``[]``
| **Accepted Values**: A valid JSON array
@@ -242,8 +234,7 @@ data to sink to MongoDB. For an example configuration file, see
- int
- | The maximum number of tasks that should be created for this connector. The connector may create fewer tasks if it cannot handle the specified level of parallelism.

-.. admonition:: Messages May Be Processed Out of Order For Values Greater Than 1
-   :class: important
+.. important:: Messages May Be Processed Out of Order For Values Greater Than 1

If you specify a value greater than ``1``, the connector
enables parallel processing of the tasks. If your topic has