MINOR: fix typos and mailto-links (apache#225)
Reviewers: Jim Galasyn <[email protected]>, Matthias J. Sax <[email protected]>
londoncalling authored and mjsax committed Aug 3, 2019
1 parent 365a503 commit de31245
Showing 13 changed files with 17 additions and 17 deletions.
2 changes: 1 addition & 1 deletion 0100/uses.html
@@ -43,7 +43,7 @@ <h4><a id="uses_logs" href="#uses_logs">Log Aggregation</a></h4>

<h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processing</a></h4>

-Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might normalize or deduplicate this content and published the cleansed article content to a new topic; a final processing stage might attempt to recommend this content to users. Such processing pipelines create graphs of real-time data flows based on the individual topics. Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="#streams_overview">Kafka Streams</a> is available in Apache Kafka to perform such data processing as described above. Apart from Kafka Streams, alternative open source stream processing tools include <a href="https://storm.apache.org/">Apache Storm</a> and <a href="http://samza.apache.org/">Apache Samza</a>.
+Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic; a final processing stage might attempt to recommend this content to users. Such processing pipelines create graphs of real-time data flows based on the individual topics. Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="#streams_overview">Kafka Streams</a> is available in Apache Kafka to perform such data processing as described above. Apart from Kafka Streams, alternative open source stream processing tools include <a href="https://storm.apache.org/">Apache Storm</a> and <a href="http://samza.apache.org/">Apache Samza</a>.

<h4><a id="uses_eventsourcing" href="#uses_eventsourcing">Event Sourcing</a></h4>

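The paragraph above describes a pipeline stage that consumes raw article content, cleanses it, and publishes the result to a new topic. A minimal Kafka Streams sketch of that single stage, assuming hypothetical topic names "articles" and "articles-cleansed", a broker at localhost:9092, and a placeholder normalization step (this uses the newer StreamsBuilder API; the original 0.10.x releases shipped a different builder class), might look like:

<pre>
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class ArticleCleanser {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "article-cleanser");      // hypothetical application id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // assumed broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Consume raw articles, apply a stand-in normalization step, and publish
        // the cleansed content to a downstream topic for further processing.
        KStream<String, String> articles = builder.stream("articles");
        articles
            .mapValues(content -> content.trim().toLowerCase())                  // placeholder for real normalization/deduplication
            .to("articles-cleansed");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
</pre>

A downstream stage (for example, the recommendation step) would consume from "articles-cleansed" in the same way, which is how the per-topic graph of real-time data flows described above is formed.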
2 changes: 1 addition & 1 deletion 0101/uses.html
@@ -43,7 +43,7 @@ <h4><a id="uses_logs" href="#uses_logs">Log Aggregation</a></h4>

<h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processing</a></h4>

-Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might normalize or deduplicate this content and published the cleansed article content to a new topic; a final processing stage might attempt to recommend this content to users. Such processing pipelines create graphs of real-time data flows based on the individual topics. Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="#streams_overview">Kafka Streams</a> is available in Apache Kafka to perform such data processing as described above. Apart from Kafka Streams, alternative open source stream processing tools include <a href="https://storm.apache.org/">Apache Storm</a> and <a href="http://samza.apache.org/">Apache Samza</a>.
+Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic; a final processing stage might attempt to recommend this content to users. Such processing pipelines create graphs of real-time data flows based on the individual topics. Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="#streams_overview">Kafka Streams</a> is available in Apache Kafka to perform such data processing as described above. Apart from Kafka Streams, alternative open source stream processing tools include <a href="https://storm.apache.org/">Apache Storm</a> and <a href="http://samza.apache.org/">Apache Samza</a>.

<h4><a id="uses_eventsourcing" href="#uses_eventsourcing">Event Sourcing</a></h4>

2 changes: 1 addition & 1 deletion 0102/uses.html
@@ -60,7 +60,7 @@ <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processin
Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic;
a final processing stage might attempt to recommend this content to users.
Such processing pipelines create graphs of real-time data flows based on the individual topics.
Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="/{{version}}/documentation/streams">Kafka Streams</a>
2 changes: 1 addition & 1 deletion 0110/uses.html
@@ -60,7 +60,7 @@ <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processin
Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic;
a final processing stage might attempt to recommend this content to users.
Such processing pipelines create graphs of real-time data flows based on the individual topics.
Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="/documentation/streams">Kafka Streams</a>
2 changes: 1 addition & 1 deletion 081/ops.html
@@ -42,7 +42,7 @@ <h4><a id="basic_ops_modify_topic">Modifying topics</a></h4>
<pre>
&gt; bin/kafka-topics.sh --zookeeper zk_host:port/chroot --delete --topic my_topic_name
</pre>
-WARNING: Delete topic functionality is beta in 0.8.1. Please report any bugs that you encounter on the <a href="mailto: [email protected]">mailing list</a> or <a href="https://issues.apache.org/jira/browse/KAFKA">JIRA</a>.
+WARNING: Delete topic functionality is beta in 0.8.1. Please report any bugs that you encounter on the <a href="mailto:[email protected]">mailing list</a> or <a href="https://issues.apache.org/jira/browse/KAFKA">JIRA</a>.
<p>
Kafka does not currently support reducing the number of partitions for a topic or changing the replication factor.

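The command shown above deletes a topic through ZooKeeper, which is how topic deletion worked in 0.8.1. In later Kafka releases (0.11 and onward, not available in 0.8.1) the same operation can also be performed programmatically through the AdminClient API; a minimal sketch, assuming a broker at localhost:9092 and the topic name my_topic_name from the example above:

<pre>
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DeleteTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // Request deletion of the topic and wait for the cluster to acknowledge it.
            admin.deleteTopics(Collections.singletonList("my_topic_name")).all().get();
        }
    }
}
</pre>

Either approach also requires that topic deletion be enabled on the brokers (the delete.topic.enable setting in later releases).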
2 changes: 1 addition & 1 deletion 10/uses.html
@@ -60,7 +60,7 @@ <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processin
Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic;
a final processing stage might attempt to recommend this content to users.
Such processing pipelines create graphs of real-time data flows based on the individual topics.
Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="/documentation/streams">Kafka Streams</a>
2 changes: 1 addition & 1 deletion 11/uses.html
@@ -60,7 +60,7 @@ <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processin
Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic;
a final processing stage might attempt to recommend this content to users.
Such processing pipelines create graphs of real-time data flows based on the individual topics.
Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="/documentation/streams">Kafka Streams</a>
2 changes: 1 addition & 1 deletion 20/uses.html
@@ -60,7 +60,7 @@ <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processin
Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic;
a final processing stage might attempt to recommend this content to users.
Such processing pipelines create graphs of real-time data flows based on the individual topics.
Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="/documentation/streams">Kafka Streams</a>
2 changes: 1 addition & 1 deletion 21/uses.html
@@ -60,7 +60,7 @@ <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processin
Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic;
a final processing stage might attempt to recommend this content to users.
Such processing pipelines create graphs of real-time data flows based on the individual topics.
Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="/documentation/streams">Kafka Streams</a>
2 changes: 1 addition & 1 deletion 22/uses.html
@@ -60,7 +60,7 @@ <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processin
Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic;
a final processing stage might attempt to recommend this content to users.
Such processing pipelines create graphs of real-time data flows based on the individual topics.
Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="/documentation/streams">Kafka Streams</a>
2 changes: 1 addition & 1 deletion 23/uses.html
@@ -60,7 +60,7 @@ <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processin
Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic;
a final processing stage might attempt to recommend this content to users.
Such processing pipelines create graphs of real-time data flows based on the individual topics.
Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="/documentation/streams">Kafka Streams</a>
8 changes: 4 additions & 4 deletions contact.html
@@ -11,16 +11,16 @@ <h3>Mailing Lists</h3>
</p>
<ul>
<li>
-<b>User mailing list</b>: A list for general user questions about Kafka&reg;. To subscribe, send an email to <a href="mailto: [email protected]">[email protected]</a>. Once subscribed, send your emails to <a href="mailto: [email protected]">[email protected]</a>. Archives are available <a href="https://lists.apache.org/[email protected]">here</a>.
+<b>User mailing list</b>: A list for general user questions about Kafka&reg;. To subscribe, send an email to <a href="mailto:[email protected]">[email protected]</a>. Once subscribed, send your emails to <a href="mailto:[email protected]">[email protected]</a>. Archives are available <a href="https://lists.apache.org/[email protected]">here</a>.
</li>
<li>
-<b>Developer mailing list</b>: A list for discussion on Kafka&reg; development. To subscribe, send an email to <a href="mailto: [email protected]">[email protected]</a>. Once subscribed, send your emails to <a href="mailto: [email protected]">[email protected]</a>. Archives are available <a href="https://lists.apache.org/[email protected]">here</a>.
+<b>Developer mailing list</b>: A list for discussion on Kafka&reg; development. To subscribe, send an email to <a href="mailto:[email protected]">[email protected]</a>. Once subscribed, send your emails to <a href="mailto:[email protected]">[email protected]</a>. Archives are available <a href="https://lists.apache.org/[email protected]">here</a>.
</li>
<li>
-<b>JIRA mailing list</b>: A list to track Kafka&reg; <a href="https://issues.apache.org/jira/projects/KAFKA">JIRA</a> notifications. To subscribe, send an email to <a href="mailto: [email protected]">[email protected]</a>. Archives are available <a href="https://lists.apache.org/[email protected]">here</a>.
+<b>JIRA mailing list</b>: A list to track Kafka&reg; <a href="https://issues.apache.org/jira/projects/KAFKA">JIRA</a> notifications. To subscribe, send an email to <a href="mailto:[email protected]">[email protected]</a>. Archives are available <a href="https://lists.apache.org/[email protected]">here</a>.
</li>
<li>
-<b>Commit mailing list</b>: A list to track Kafka&reg; commits. To subscribe, send an email to <a href="mailto: [email protected]">[email protected]</a>. Archives are available <a href="http://mail-archives.apache.org/mod_mbox/kafka-commits">here</a>.
+<b>Commit mailing list</b>: A list to track Kafka&reg; commits. To subscribe, send an email to <a href="mailto:[email protected]">[email protected]</a>. Archives are available <a href="http://mail-archives.apache.org/mod_mbox/kafka-commits">here</a>.
</li>
</ul>

4 changes: 2 additions & 2 deletions contributing.html
@@ -27,7 +27,7 @@ <h2>Contributing A Code Change</h2>
<li>Make sure you have observed the recommendations in the <a href="coding-guide.html">style guide</a>.</li>
<li>Follow the detailed instructions in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes">Contributing Code Changes</a>.</li>
<li>Note that if the change is related to user-facing protocols / interface / configs, etc, you need to make the corresponding change on the documentation as well. For wiki page changes feel free to edit the page content directly (you may need to contact us to get the permission first if it is your first time to edit on wiki); website docs live in the code repo under `docs` so that changes to that can be done in the same PR as changes to the code. Website doc change instructions are given below.
-<li>It is our job to follow up on patches in a timely fashion. <a href="mailto: [email protected]">Nag us</a> if we aren't doing our job (sometimes we drop things).</li>
+<li>It is our job to follow up on patches in a timely fashion. <a href="mailto:[email protected]">Nag us</a> if we aren't doing our job (sometimes we drop things).</li>
</ul>

<h2>Contributing A Change To The Website</h2>
@@ -36,7 +36,7 @@ <h2>Contributing A Change To The Website</h2>

<ul>
<li>Follow the instructions in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes">Contributing Website Changes</a>.</li>
-<li>It is our job to follow up on patches in a timely fashion. <a href="mailto: [email protected]">Nag us</a> if we aren't doing our job (sometimes we drop things). If the patch needs improvement, the reviewer will mark the jira back to "In Progress" after reviewing.</li>
+<li>It is our job to follow up on patches in a timely fashion. <a href="mailto:[email protected]">Nag us</a> if we aren't doing our job (sometimes we drop things). If the patch needs improvement, the reviewer will mark the jira back to "In Progress" after reviewing.</li>
</ul>

<h2>Finding A Project To Work On</h2>
