
Commit

added new meetup links on events page and remove docs subnav on old versions of the docs

hide subnav on prior docs and make 07 docs consistent with others
derrickdoo committed Nov 17, 2016
1 parent 9aa3bae commit e8fdfce
Showing 11 changed files with 96 additions and 80 deletions.
1 change: 0 additions & 1 deletion 0100/documentation.html
@@ -197,4 +197,3 @@ <h2><a id="streams" href="#streams">9. Kafka Streams</a></h2>
<!--#include virtual="streams.html" -->

<!--#include virtual="../includes/_footer.htm" -->
<!--#include virtual="../includes/_docs_footer.htm" -->
19 changes: 7 additions & 12 deletions 07/configuration.html
@@ -1,10 +1,8 @@
<!--#include virtual="../includes/_header.htm" -->

-<h2> Configuration </h2>
+<h2 id="configuration"> Configuration </h2>

<h3> Important configuration properties for Kafka broker: </h3>

<p>More details about server configuration can be found in the scala class <code>kafka.server.KafkaConfig</code>.</p>

<table class="data-table">
<tr>
@@ -25,7 +23,7 @@ <h3> Important configuration properties for Kafka broker: </h3>
<tr>
<td><code>log.flush.interval</code></td>
<td>500</td>
<td>Controls the number of messages accumulated in each topic (partition) before the data is flushed to disk and made available to consumers.</td>
</tr>
<tr>
<td><code>log.default.flush.scheduler.interval.ms</code></td>
@@ -138,7 +136,7 @@ <h3> Important configuration properties for Kafka broker: </h3>

<h3> Important configuration properties for the high-level consumer: </h3>

<p>More details about consumer configuration can be found in the scala class <code>kafka.consumer.ConsumerConfig</code>.</p>

<table class="data-table">
<tr>
@@ -169,7 +167,7 @@ <h3> Important configuration properties for the high-level consumer: </h3>
<tr>
<td><code>backoff.increment.ms</code></td>
<td>1000</td>
<td>This parameter avoids repeatedly polling a broker node which has no new data. We will backoff every time we get an empty set
from the broker for this time period</td>
</tr>
<tr>
@@ -228,7 +226,7 @@ <h3> Important configuration properties for the high-level consumer: </h3>

<h3> Important configuration properties for the producer: </h3>

<p>More details about producer configuration can be found in the scala class <code>kafka.producer.ProducerConfig</code>.</p>

<table class="data-table">
<tr>
@@ -330,7 +328,7 @@ <h3> Important configuration properties for the producer: </h3>
<tr>
<td><code>event.handler</code></td>
<td><code>kafka.producer.async.EventHandler&lt;T&gt;</code></td>
<td>the class that implements <code>kafka.producer.async.IEventHandler&lt;T&gt;</code> used to dispatch a batch of produce requests, using an instance of <code>kafka.producer.SyncProducer</code>.
</td>
</tr>
<tr>
@@ -350,6 +348,3 @@ <h3> Important configuration properties for the producer: </h3>
<td>the <code>java.util.Properties()</code> object used to initialize the custom <code>callback.handler</code> through its <code>init()</code> API</td>
</tr>
</table>
-
-
-<!--#include virtual="../includes/_footer.htm" -->
9 changes: 6 additions & 3 deletions 07/documentation.html
@@ -11,9 +11,12 @@ <h3>Kafka 0.7</h3>
<li><a href="/07/quickstart.html">Quickstart</a> &ndash; Get up and running quickly.
<li><a href="/07/configuration.html">Configuration</a> &ndash; All the knobs.
<li><a href="/07/performance.html">Performance</a> &ndash; Some performance results.
<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Operations">Operations</a> &ndash; Notes on running the system.
<li><a href="http://people.apache.org/~joestein/kafka-0.7.1-incubating-docs">API Docs</a> &ndash; Scaladoc for the api.
<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Operations" target="_blank">Operations</a> &ndash; Notes on running the system.
<li><a href="http://people.apache.org/~joestein/kafka-0.7.1-incubating-docs" target="_blank">API Docs</a> &ndash; Scaladoc for the api.
</ul>

<!--#include virtual="quickstart.html" -->
<!--#include virtual="configuration.html" -->
<!--#include virtual="performance.html" -->

<!--#include virtual="../includes/_footer.htm" -->
<!--#include virtual="../includes/_docs_footer.htm" -->
22 changes: 9 additions & 13 deletions 07/performance.html
@@ -1,6 +1,4 @@
<!--#include virtual="../includes/_header.htm" -->
-
-<h2>Performance Results</h2>
+<h2 id="performance">Performance Results</h2>
<p>The following tests give some basic information on Kafka throughput as the number of topics, consumers and producers and overall data size varies. Since Kafka nodes are independent, these tests are run with a single producer, consumer, and broker machine. Results can be extrapolated for a larger cluster.
</p>

@@ -46,13 +44,13 @@ <h2>How to Run a Performance Test</h2>

<p>&nbsp;../run-simulator.sh -kafkaServer=localhost -numTopic=10&nbsp; -reportFile=report-html/data -time=15 -numConsumer=20 -numProducer=40 -xaxis=numTopic</p>

<p>It will run a simulator with 40 producer and 20 consumer threads
producing/consuming from a local kafkaserver.&nbsp; The simulator is going to
run 15 minutes and the results are going to be saved under
report-html/data</p>

<p>and they will be plotted from there. Basically it will write MB of
data consumed/produced, number of messages consumed/produced given a
number of topic and report.html will plot the charts.</p>


@@ -63,21 +61,19 @@ <h2>How to Run a Performance Test</h2>


<p>#!/bin/bash<br />

for i in 1 10 20 30 40 50;<br />

do<br />

&nbsp; ../kafka-server.sh server.properties 2>&amp;1 >kafka.out&amp;<br />
sleep 60<br />
&nbsp;../run-simulator.sh -kafkaServer=localhost -numTopic=$i&nbsp; -reportFile=report-html/data -time=15 -numConsumer=20 -numProducer=40 -xaxis=numTopic<br />
&nbsp;../stop-server.sh<br />
&nbsp;rm -rf /tmp/kafka-logs<br />

&nbsp;sleep 300<br />

done</p>

<p>The charts similar to above graphs can be plotted with report.html automatically.</p>
-
-<!--#include virtual="../includes/_footer.htm" -->
46 changes: 21 additions & 25 deletions 07/quickstart.html
@@ -1,7 +1,5 @@
<!--#include virtual="../includes/_header.htm" -->
<h2 id="quickstart">Quick Start</h2>

<h2>Quick Start</h2>

<h3> Step 1: Download the code </h3>

<a href="../downloads.html" title="Kafka downloads">Download</a> a recent stable release.
@@ -15,20 +13,20 @@ <h3> Step 1: Download the code </h3>

<h3>Step 2: Start the server</h3>

Kafka brokers and consumers use this for co-ordination.
<p>
First start the zookeeper server. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node zookeeper instance.

<pre>
<b>&gt; bin/zookeeper-server-start.sh config/zookeeper.properties</b>
[2010-11-21 23:45:02,335] INFO Reading configuration from: config/zookeeper.properties
...
</pre>

Now start the Kafka server:
<pre>
<b>&gt; bin/kafka-server-start.sh config/server.properties</b>
jkreps-mn-2:kafka-trunk jkreps$ bin/kafka-server-start.sh config/server.properties
[2010-11-21 23:51:39,608] INFO starting log cleaner every 60000 ms (kafka.log.LogManager)
[2010-11-21 23:51:39,628] INFO connecting to ZK: localhost:2181 (kafka.server.KafkaZooKeeper)
...
@@ -39,7 +37,7 @@ <h3>Step 3: Send some messages</h3>
Kafka comes with a command line client that will take input from standard in and send it out as messages to the Kafka cluster. By default each line will be sent as a separate message. The topic <i>test</i> is created automatically when messages are sent to it. Omitting logging you should see something like this:

<pre>
&gt; <b>bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic test</b>
This is a message
This is another message
</pre>
@@ -57,7 +55,7 @@ <h3>Step 4: Start a consumer</h3>
If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.
</p>
<p>
Both of these command line tools have additional options. Running the command with no arguments will display usage information documenting them in more detail.
</p>

<h3>Step 5: Write some code</h3>
@@ -100,7 +98,7 @@ <h5>Producer API </h5>
<pre>
<small>// The message is sent to a randomly selected partition registered in ZK</small>
ProducerData&lt;String, String&gt; data = new ProducerData&lt;String, String&gt;("test-topic", "test-message");
producer.send(data);
</pre>
</li>
<li>Send multiple messages to multiple topics in one request
@@ -113,7 +111,7 @@ <h5>Producer API </h5>
List&lt;ProducerData&lt;String, String&gt;&gt; dataForMultipleTopics = new ArrayList&lt;ProducerData&lt;String, String&gt;&gt;();
dataForMultipleTopics.add(data1);
dataForMultipleTopics.add(data2);
producer.send(dataForMultipleTopics);
</pre>
</li>
<li>Send a message with a partition key. Messages with the same key are sent to the same partition
@@ -139,7 +137,7 @@ <h5>Producer API </h5>
Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(config);
</pre>
</li>
<li>Use custom Encoder
<p>The producer takes in a required config parameter <code>serializer.class</code> that specifies an <code>Encoder&lt;T&gt;</code> to convert T to a Kafka Message. Default is the no-op kafka.serializer.DefaultEncoder.
Here is an example of a custom Encoder -</p>
<pre>
@@ -158,23 +156,23 @@ <h5>Producer API </h5>
</pre>
</li>
<li>Using static list of brokers, instead of zookeeper based broker discovery
<p>Some applications would rather not depend on zookeeper. In that case, the config parameter <code>broker.list</code>
can be used to specify the list of all brokers in the Kafka cluster.- the list of all brokers in your Kafka cluster in the following format -
<code>broker_id1:host1:port1, broker_id2:host2:port2...</code></p>
<pre>
<small>// you can stop the zookeeper instance as it is no longer required</small>
./bin/zookeeper-server-stop.sh
<small>// create the producer config object </small>
Properties props = new Properties();
props.put(“broker.list”, “0:localhost:9092”);
props.put("serializer.class", "kafka.serializer.StringEncoder");
ProducerConfig config = new ProducerConfig(props);
<small>// send a message using default partitioner </small>
Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(config);
List&lt;String&gt; messages = new java.util.ArrayList&lt;String&gt;();
messages.add("test-message");
ProducerData&lt;String, String&gt; data = new ProducerData&lt;String, String&gt;("test-topic", messages);
producer.send(data);
</pre>
</li>
<li>Use the asynchronous producer along with GZIP compression. This buffers writes in memory until either <code>batch.size</code> or <code>queue.time</code> is reached. After that, data is sent to the Kafka brokers
@@ -197,7 +195,7 @@ <h5>Producer API </h5>

<h5>Log4j appender </h5>

Data can also be produced to a Kafka server in the form of a log4j appender. In this way, minimal code needs to be written in order to send some data across to the Kafka server.
Here is an example of how to use the Kafka Log4j appender -

Start by defining the Kafka appender in your log4j.properties file.
@@ -221,12 +219,12 @@ <h5>Log4j appender </h5>
Data can be sent using a log4j appender as follows -

<pre>
Logger logger = Logger.getLogger([your.test.class])
logger.info("message from log4j appender");
</pre>

If your log4j appender fails to send messages, please verify that the correct
log4j properties file is being used. You can add
<code>-Dlog4j.debug=true</code> to your VM parameters to verify this.

<h4>Consumer Code</h4>
@@ -245,11 +243,11 @@ <h4>Consumer Code</h4>
ConsumerConnector consumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);

// create 4 partitions of the stream for topic “test”, to allow 4 threads to consume
Map&lt;String, List&lt;KafkaStream&lt;Message&gt;&gt;&gt; topicMessageStreams =
consumerConnector.createMessageStreams(ImmutableMap.of("test", 4));
List&lt;KafkaStream&lt;Message&gt;&gt; streams = topicMessageStreams.get("test");

// create list of 4 threads to consume from each of the partitions
ExecutorService executor = Executors.newFixedThreadPool(4);

// consume the messages in the threads
Expand All @@ -258,7 +256,7 @@ <h4>Consumer Code</h4>
public void run() {
for(MessageAndMetadata msgAndMetadata: stream) {
// process message (msgAndMetadata.message())
}
}
});
}
@@ -305,5 +303,3 @@ <h4>Simple Consumer</h4>
}
}
</pre>
-
-<!--#include virtual="../includes/_footer.htm" -->
1 change: 0 additions & 1 deletion 08/documentation.html
@@ -104,4 +104,3 @@ <h2><a id="tools">7. Tools</a></h2>
<!--#include virtual="tools.html" -->

<!--#include virtual="../includes/_footer.htm" -->
<!--#include virtual="../includes/_docs_footer.htm" -->
1 change: 0 additions & 1 deletion 081/documentation.html
@@ -120,4 +120,3 @@ <h2><a id="operations">6. Operations</a></h2>
<!--#include virtual="ops.html" -->

<!--#include virtual="../includes/_footer.htm" -->
<!--#include virtual="../includes/_docs_footer.htm" -->
1 change: 0 additions & 1 deletion 082/documentation.html
@@ -120,4 +120,3 @@ <h2><a id="operations">6. Operations</a></h2>
<!--#include virtual="ops.html" -->

<!--#include virtual="../includes/_footer.htm" -->
<!--#include virtual="../includes/_docs_footer.htm" -->
1 change: 0 additions & 1 deletion 090/documentation.html
@@ -178,4 +178,3 @@ <h2><a id="connect" href="#connect">8. Kafka Connect</a></h2>
<!--#include virtual="connect.html" -->

<!--#include virtual="../includes/_footer.htm" -->
<!--#include virtual="../includes/_docs_footer.htm" -->
