Commit
Add kcc tls docs
Signed-off-by: obaydullahmhs <[email protected]>
obaydullahmhs committed Sep 6, 2024
1 parent 3a7f123 commit 5d469d1
Showing 19 changed files with 856 additions and 41 deletions.
8 changes: 8 additions & 0 deletions docs/examples/kafka/tls/connectcluster-issuer.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,8 @@
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: connectcluster-ca-issuer
namespace: demo
spec:
ca:
secretName: connectcluster-ca
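
This Issuer expects a CA key pair in the `connectcluster-ca` secret. A minimal sketch of generating one with openssl — the file names and certificate subject below are assumptions for illustration, not part of the commit:

```shell
# Generate a self-signed CA certificate/key pair for the Issuer to use.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout ca.key -out ca.crt \
  -subj "/O=kubedb/CN=connectcluster-ca"

# Store it as the secret referenced by spec.ca.secretName:
#   kubectl create secret tls connectcluster-ca --cert=ca.crt --key=ca.key -n demo
```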
21 changes: 21 additions & 0 deletions docs/examples/kafka/tls/connectcluster-tls.yaml
@@ -0,0 +1,21 @@
apiVersion: kafka.kubedb.com/v1alpha1
kind: ConnectCluster
metadata:
name: connectcluster-tls
namespace: demo
spec:
version: 3.6.1
enableSSL: true
tls:
issuerRef:
apiGroup: cert-manager.io
kind: Issuer
name: connectcluster-ca-issuer
replicas: 3
connectorPlugins:
- postgres-2.4.2.final
- jdbc-2.6.1.final
kafkaRef:
name: kafka-prod
namespace: demo
deletionPolicy: WipeOut
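
Assuming the manifest above is saved as `connectcluster-tls.yaml` (the path is an assumption), deploying it and watching the cluster come up would look roughly like:

```bash
$ kubectl apply -f connectcluster-tls.yaml
$ kubectl get connectcluster -n demo connectcluster-tls -w
```

Note that `spec.kafkaRef` points at `kafka-prod`, so a TLS-enabled Kafka with that name must already be running in the `demo` namespace.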
23 changes: 23 additions & 0 deletions docs/examples/kafka/tls/kafka-dev-tls.yaml
@@ -0,0 +1,23 @@
apiVersion: kubedb.com/v1
kind: Kafka
metadata:
name: kafka-dev-tls
namespace: demo
spec:
version: 3.6.1
enableSSL: true
tls:
issuerRef:
apiGroup: "cert-manager.io"
kind: Issuer
name: kafka-ca-issuer
replicas: 3
storage:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard
storageType: Durable
deletionPolicy: WipeOut
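
Once cert-manager and the operator reconcile a TLS-enabled Kafka, the issued certificates land in secrets in the same namespace. One hedged way to check — the label selector is assumed from the operator's conventions shown elsewhere in this guide:

```bash
$ kubectl get secret -n demo -l 'app.kubernetes.io/instance=kafka-dev-tls'
```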
34 changes: 34 additions & 0 deletions docs/examples/kafka/tls/kafka-prod-tls.yaml
@@ -0,0 +1,34 @@
apiVersion: kubedb.com/v1
kind: Kafka
metadata:
name: kafka-prod-tls
namespace: demo
spec:
version: 3.6.1
enableSSL: true
tls:
issuerRef:
apiGroup: "cert-manager.io"
kind: Issuer
name: kafka-ca-issuer
topology:
broker:
replicas: 2
storage:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard
controller:
replicas: 2
storage:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard
storageType: Durable
deletionPolicy: WipeOut
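
To inspect what cert-manager issued for the topology cluster, decode a certificate from its secret and print the subject and SANs. The secret name below is an assumption based on the operator's usual `<name>-<alias>-cert` naming; adjust it to whatever `kubectl get secret -n demo` actually shows:

```bash
$ kubectl get secret -n demo kafka-prod-tls-client-cert -o jsonpath='{.data.tls\.crt}' \
    | base64 -d | openssl x509 -noout -subject -ext subjectAltName
```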
2 changes: 1 addition & 1 deletion docs/guides/kafka/README.md
@@ -81,7 +81,7 @@ KubeDB supports The following Kafka versions. Supported version are applicable f

## User Guide
- [Quickstart Kafka](/docs/guides/kafka/quickstart/kafka/index.md) with KubeDB Operator.
-- [Quickstart ConnectCluster](/docs/guides/kafka/quickstart/connectcluster/index.md) with KubeDB Operator.
+- [Quickstart ConnectCluster](/docs/guides/kafka/connectcluster/overview.md) with KubeDB Operator.
- Kafka Clustering supported by KubeDB
- [Combined Clustering](/docs/guides/kafka/clustering/combined-cluster/index.md)
- [Topology Clustering](/docs/guides/kafka/clustering/topology-cluster/index.md)
47 changes: 24 additions & 23 deletions docs/guides/kafka/clustering/topology-cluster/index.md
@@ -141,22 +141,21 @@ Hence, the cluster is ready to use.
Let's check the k8s resources created by the operator on the deployment of Kafka CRO:

```bash
-$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=kafka-prod'
+$ kubectl get all,petset,secret,pvc -n demo -l 'app.kubernetes.io/instance=kafka-prod'
NAME READY STATUS RESTARTS AGE
pod/kafka-prod-broker-0 1/1 Running 0 4m10s
pod/kafka-prod-broker-1 1/1 Running 0 4m4s
pod/kafka-prod-broker-2 1/1 Running 0 3m57s
pod/kafka-prod-controller-0 1/1 Running 0 4m8s
-pod/kafka-prod-controller-1 1/1 Running 2 (3m35s ago) 4m
+pod/kafka-prod-controller-1 1/1 Running 0 4m
pod/kafka-prod-controller-2 1/1 Running 0 3m53s
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-service/kafka-prod-broker ClusterIP None <none> 9092/TCP,29092/TCP 4m14s
-service/kafka-prod-controller ClusterIP None <none> 9093/TCP 4m14s
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/kafka-prod-pods ClusterIP None <none> 9092/TCP,9093/TCP,29092/TCP 4m14s
NAME READY AGE
-petset.apps/kafka-prod-broker 3/3 4m10s
-petset.apps/kafka-prod-controller 3/3 4m8s
+petset.apps.k8s.appscode.com/kafka-prod-broker 3/3 4m10s
+petset.apps.k8s.appscode.com/kafka-prod-controller 3/3 4m8s
NAME TYPE VERSION AGE
appbinding.appcatalog.appscode.com/kafka-prod kubedb.com/kafka 3.6.1 4m8s
@@ -202,25 +201,28 @@ ssl.truststore.password=***********
Now, we have to use a bootstrap server to perform operations in a kafka broker. For this demo, we are going to use the http endpoints of the headless service `kafka-prod-pods` as the bootstrap server for publishing and consuming messages to kafka brokers. These endpoints point to all the kafka broker pods. We will set an environment variable for the `clientauth.properties` filepath as well. First, describe the service to get the http endpoints.

```bash
-$ kubectl describe svc -n demo kafka-prod-broker
-Name: kafka-prod-broker
+$ kubectl describe svc -n demo kafka-prod-pods
+Name: kafka-prod-pods
Namespace: demo
Labels: app.kubernetes.io/component=database
app.kubernetes.io/instance=kafka-prod
app.kubernetes.io/managed-by=kubedb.com
app.kubernetes.io/name=kafkas.kubedb.com
Annotations: <none>
-Selector: app.kubernetes.io/instance=kafka-prod,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,kubedb.com/role=broker
+Selector: app.kubernetes.io/instance=kafka-prod,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: None
IPs: None
-Port: http 9092/TCP
-TargetPort: http/TCP
+Port: broker 9092/TCP
+TargetPort: broker/TCP
 Endpoints: 10.244.0.33:9092,10.244.0.37:9092,10.244.0.41:9092
+Port: controller 9093/TCP
+TargetPort: controller/TCP
+Endpoints: 10.244.0.16:9093,10.244.0.20:9093,10.244.0.24:9093
-Port: internal 29092/TCP
-TargetPort: internal/TCP
+Port: local 29092/TCP
+TargetPort: local/TCP
 Endpoints: 10.244.0.33:29092,10.244.0.37:29092,10.244.0.41:29092
Session Affinity: None
Events: <none>
@@ -229,7 +231,7 @@
Use the `http endpoints` and `clientauth.properties` file to set environment variables. These environment variables will be useful for handling console command operations easily.

```bash
-root@kafka-prod-broker-0:~# export SERVER="10.244.0.100:9092,10.244.0.104:9092,10.244.0.108:9092"
+root@kafka-prod-broker-0:~# export SERVER="10.244.0.33:9092,10.244.0.37:9092,10.244.0.41:9092"
root@kafka-prod-broker-0:~# export CLIENTAUTHCONFIG="$HOME/config/clientauth.properties"
```

@@ -243,17 +245,17 @@ LeaderEpoch: 15
HighWatermark: 1820
MaxFollowerLag: 0
MaxFollowerLagTimeMs: 159
-CurrentVoters: [0,1,2]
-CurrentObservers: [3,4,5]
+CurrentVoters: [1000,1001,1002]
+CurrentObservers: [0,1,2]
```

It shows important metadata such as the clusterID, the current leader ID, the IDs of brokers participating in leader election voting, and the IDs of brokers that are observers. Note that each broker is assigned a numeric ID, called its broker ID, which is assigned sequentially with respect to the host pod name. In this case, the pods are assigned broker IDs as follows:

| Pods | Broker ID |
|---------------------|:---------:|
-| kafka-prod-broker-0 | 3 |
-| kafka-prod-broker-1 | 4 |
-| kafka-prod-broker-2 | 5 |
+| kafka-prod-broker-0 | 0 |
+| kafka-prod-broker-1 | 1 |
+| kafka-prod-broker-2 | 2 |

Let's create a topic named `sample` with 1 partition and a replication factor of 1. Describe the topic once it's created. You will see the leader ID for each partition and its replica IDs along with the in-sync replicas (ISR).

@@ -264,12 +266,12 @@ Created topic sample.
root@kafka-prod-broker-0:~# kafka-topics.sh --command-config $CLIENTAUTHCONFIG --describe --topic sample --bootstrap-server localhost:9092
Topic: sample TopicId: mqlupmBhQj6OQxxG9m51CA PartitionCount: 1 ReplicationFactor: 1 Configs: segment.bytes=1073741824
-Topic: sample Partition: 0 Leader: 4 Replicas: 4 Isr: 4
+Topic: sample Partition: 0 Leader: 1 Replicas: 1 Isr: 1
```

Now, we are going to start a console producer and a console consumer for the topic `sample`. Let's use the current terminal for producing messages and open a new terminal for consuming messages. Set the environment variables for the bootstrap server and the configuration file in the consumer terminal as well.

-From the topic description we can see that the leader partition for partition 0 is 4 that is `kafka-prod-broker-1`. If we produce messages to `kafka-prod-broker-1` broker(brokerID=4) it will store those messages in partition 0. Let's produce messages in the producer terminal and consume them from the consumer terminal.
+From the topic description we can see that the leader for partition 0 is broker 1, that is, `kafka-prod-broker-1`. If we produce messages to `kafka-prod-broker-1` (brokerID=1), they will be stored in partition 0. Let's produce messages in the producer terminal and consume them from the consumer terminal.

```bash
root@kafka-prod-broker-1:~# kafka-console-producer.sh --producer.config $CLIENTAUTHCONFIG --topic sample --request-required-acks all --bootstrap-server localhost:9092
@@ -290,7 +292,6 @@ I hope it's received by console consumer

Notice that messages arrive at the consumer as you continue sending them via the producer. So, we have created a Kafka topic and successfully tested message publishing and consuming with the Kafka console producer and consumer.


## Cleaning Up

To clean up the k8s resources created by this tutorial, run:
4 changes: 2 additions & 2 deletions docs/guides/kafka/concepts/connector.md
@@ -70,8 +70,8 @@ Deletion policy `WipeOut` will delete the connector from the ConnectCluster when

## Next Steps

-- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/guides/kafka/quickstart/kafka/index.md).
-- Learn how to use KubeDB to run an Apache Kafka Connect cluster [here](/docs/guides/kafka/quickstart/connectcluster/index.md).
+- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/guides/kafka/quickstart/kafka/index.md).
+- Learn how to use KubeDB to run an Apache Kafka Connect cluster [here](/docs/guides/kafka/connectcluster/overview.md).
- Detail concepts of [KafkaConnectorVersion object](/docs/guides/kafka/concepts/kafkaconnectorversion.md).
- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/guides/kafka/cli/cli.md).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
3 changes: 2 additions & 1 deletion docs/guides/kafka/concepts/kafka.md
@@ -302,7 +302,8 @@ NB. If `spec.topology` is set, then `spec.storage` needs to be empty. Instead us
### spec.monitor

Kafka managed by KubeDB can be monitored with Prometheus operator out-of-the-box. To learn more,
-- [Monitor Apache with Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md)
+- [Monitor Apache Kafka with Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md)
+- [Monitor Apache Kafka with Built-in Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md)

### spec.podTemplate

2 changes: 1 addition & 1 deletion docs/guides/kafka/concepts/kafkaconnectorversion.md
@@ -88,4 +88,4 @@ helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \

- Learn about Kafka CRD [here](/docs/guides/kafka/concepts/kafka.md).
- Learn about ConnectCluster CRD [here](/docs/guides/kafka/concepts/connectcluster.md).
-- Deploy your first ConnectCluster with KubeDB by following the guide [here](/docs/guides/kafka/quickstart/connectcluster/index.md).
+- Deploy your first ConnectCluster with KubeDB by following the guide [here](/docs/guides/kafka/connectcluster/overview.md).
2 changes: 1 addition & 1 deletion docs/guides/kafka/concepts/schemaregistryversion.md
@@ -90,4 +90,4 @@ helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \

- Learn about Kafka CRD [here](/docs/guides/kafka/concepts/kafka.md).
- Learn about SchemaRegistry CRD [here](/docs/guides/kafka/concepts/schemaregistry.md).
-- Deploy your first ConnectCluster with KubeDB by following the guide [here](/docs/guides/kafka/quickstart/connectcluster/index.md).
+- Deploy your first ConnectCluster with KubeDB by following the guide [here](/docs/guides/kafka/connectcluster/overview.md).
6 changes: 3 additions & 3 deletions docs/guides/kafka/connectcluster/connectcluster.md
@@ -182,7 +182,7 @@ Hence, the cluster is ready to use.
Let's check the k8s resources created by the operator on the deployment of ConnectCluster:

```bash
-$ kubectl get all,secret -n demo -l 'app.kubernetes.io/instance=connectcluster-distributed'
+$ kubectl get all,petset,secret -n demo -l 'app.kubernetes.io/instance=connectcluster-distributed'
NAME READY STATUS RESTARTS AGE
pod/connectcluster-distributed-0 1/1 Running 0 8m55s
pod/connectcluster-distributed-1 1/1 Running 0 8m52s
@@ -191,8 +191,8 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP
service/connectcluster-distributed ClusterIP 10.128.238.9 <none> 8083/TCP 17m
service/connectcluster-distributed-pods ClusterIP None <none> 8083/TCP 17m
 NAME READY AGE
-petset.apps/connectcluster-distributed 2/2 8m56s
+petset.apps.k8s.appscode.com/connectcluster-distributed 2/2 8m56s
NAME TYPE VERSION AGE
appbinding.appcatalog.appscode.com/connectcluster-distributed kafka.kubedb.com/connectcluster 3.6.1 8m56s
8 changes: 4 additions & 4 deletions docs/guides/kafka/connectcluster/overview.md
@@ -39,7 +39,7 @@ demo Active 9s

> Note: YAML files used in this tutorial are stored in [guides/kafka/quickstart/connectcluster/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/kafka/quickstart/connectcluster/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
-> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka Connect Cluster. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/quickstart/connectcluster/index.md#tips-for-testing).
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka Connect Cluster. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/connectcluster/overview.md#tips-for-testing).
## Find Available ConnectCluster Versions

@@ -336,7 +336,7 @@ Events: <none>
On deployment of a ConnectCluster CR, the operator creates the following resources:

```bash
-$ kubectl get all,secret -n demo -l 'app.kubernetes.io/instance=connectcluster-quickstart'
+$ kubectl get all,petset,secret -n demo -l 'app.kubernetes.io/instance=connectcluster-quickstart'
NAME READY STATUS RESTARTS AGE
pod/connectcluster-quickstart-0 1/1 Running 0 3m50s
pod/connectcluster-quickstart-1 1/1 Running 0 3m7s
@@ -346,8 +346,8 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP
service/connectcluster-quickstart ClusterIP 10.128.221.44 <none> 8083/TCP 3m55s
service/connectcluster-quickstart-pods ClusterIP None <none> 8083/TCP 3m55s
 NAME READY AGE
-petset.apps/connectcluster-quickstart 3/3 3m50s
+petset.apps.k8s.appscode.com/connectcluster-quickstart 3/3 3m50s
NAME TYPE VERSION AGE
appbinding.appcatalog.appscode.com/connectcluster-quickstart kafka.kubedb.com/connectcluster 3.6.1 3m50s
2 changes: 1 addition & 1 deletion docs/guides/kafka/quickstart/kafka/index.md
@@ -435,7 +435,7 @@ If you are just testing some basic functionalities, you might want to avoid addi
## Next Steps

- [Quickstart Kafka](/docs/guides/kafka/quickstart/kafka/index.md) with KubeDB Operator.
-- [Quickstart ConnectCluster](/docs/guides/kafka/quickstart/connectcluster/index.md) with KubeDB Operator.
+- [Quickstart ConnectCluster](/docs/guides/kafka/connectcluster/overview.md) with KubeDB Operator.
- Kafka Clustering supported by KubeDB
- [Combined Clustering](/docs/guides/kafka/clustering/combined-cluster/index.md)
- [Topology Clustering](/docs/guides/kafka/clustering/topology-cluster/index.md)
2 changes: 1 addition & 1 deletion docs/guides/kafka/restproxy/overview.md
@@ -39,7 +39,7 @@ demo Active 9s

> Note: YAML files used in this tutorial are stored in [examples/kafka/restproxy/](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka/restproxy) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
-> We have designed this tutorial to demonstrate a production setup of KubeDB managed Schema Registry. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/quickstart/connectcluster/index.md#tips-for-testing).
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed Kafka Rest Proxy. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/restproxy/overview.md#tips-for-testing).
## Find Available RestProxy Versions

2 changes: 1 addition & 1 deletion docs/guides/kafka/schemaregistry/overview.md
@@ -39,7 +39,7 @@ demo Active 9s

> Note: YAML files used in this tutorial are stored in [examples/kafka/schemaregistry/](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka/schemaregistry) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
-> We have designed this tutorial to demonstrate a production setup of KubeDB managed Schema Registry. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/quickstart/connectcluster/index.md#tips-for-testing).
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed Schema Registry. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/schemaregistry/overview.md#tips-for-testing).
## Find Available SchemaRegistry Versions

