# Create Instance
You can create a Kafka instance to build a high-throughput, low-latency real-time data pipeline that supports business systems in scenarios such as streaming data processing and service decoupling.
## Create Kafka Instance
### Procedure
Create a Kafka instance via the CLI:

```shell
cat << EOF | kubectl -n default create -f -
apiVersion: middleware.alauda.io/v1
kind: RdsKafka
metadata:
  name: my-cluster
spec:
  entityOperator:
    topicOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    userOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    tlsSidecar:
      resources:
        limits:
          cpu: 200m
          memory: 128Mi
        requests:
          cpu: 200m
          memory: 128Mi
  version: 3.8
  replicas: 3
  config:
    auto.create.topics.enable: "false"
    auto.leader.rebalance.enable: "true"
    background.threads: "10"
    compression.type: producer
    default.replication.factor: "3"
    delete.topic.enable: "true"
    log.retention.hours: "168"
    log.roll.hours: "168"
    log.segment.bytes: "1073741824"
    message.max.bytes: "1048588"
    min.insync.replicas: "1"
    num.io.threads: "8"
    num.network.threads: "3"
    num.recovery.threads.per.data.dir: "1"
    num.replica.fetchers: "1"
    unclean.leader.election.enable: "false"
  resources:
    limits:
      cpu: 2
      memory: 4Gi
    requests:
      cpu: 2
      memory: 4Gi
  storage:
    size: 1Gi
    # Replace with an available storage class
    class: sc-topolvm
    deleteClaim: false
  kafka:
    listeners:
      plain: {}
      external:
        type: nodeport
        tls: false
  mode: KRaft
  controller:
    replicas: 3
    resources:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi
    roles:
      - controller
    storage:
      size: 1Gi
      class: sc-topolvm
      deleteClaim: false
EOF
```

Alternatively, create the instance via the web console:

1. Click on Kafka in the left navigation bar.
2. Click on the namespace name.
3. Click on Create Kafka Instance.
4. Complete the relevant configurations according to the instructions below.
| Configuration | Configuration Item | Description |
|---|---|---|
| Parameter Configuration | Parameter Template | You can choose a system or custom parameter template. For custom parameter templates, refer to Parameter Template. |
| Kafka Nodes | Broker Node Count | To ensure high availability, each broker is scheduled on a different node in the cluster. |
| Kafka Nodes | Storage Class | If no storage class is available in the drop-down list, contact the platform administrator to add one. |
| Access Method | Authentication Method | To ensure secure data transmission, enabling encrypted authentication is recommended, for example a TLS listener with SCRAM-SHA-512 authentication. |
| Access Method | Specify Host Port | When you access the cluster via NodePort and enable the specified host port, you can specify the service port number. Note: When updating the instance, ports cannot be swapped directly. To swap ports, first update one port to another unoccupied port, then re-specify. |
| Scheduling Configuration | Node Label | Filters available nodes in the current cluster by label; Pods are scheduled only on matching nodes. Note: This configuration cannot be modified after the instance is created. |
| Scheduling Configuration | Pod Tolerations | If available nodes are tainted, only Pods with matching tolerations can be scheduled on them. The matching rules are: Equal, where the key, value, and effect of the Pod toleration must all match the node taint; and Exists, where the key and effect of the Pod toleration must match the node taint. The effect determines how Pods that do not tolerate the taint are treated. For example, NoExecute means only Pods that tolerate the taint (including this Kafka instance's Pods) can be scheduled on the node, and already-running Pods that do not tolerate it (from other instances) are evicted. For more on matching rules, refer to the Kubernetes Official Documentation. |
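As an illustrative sketch of the Equal matching rule described above, a Pod toleration for a hypothetical node taint `dedicated=kafka:NoExecute` could look like the following. The taint key and value here are assumptions for illustration, not values used by the platform:

```yaml
# Hypothetical taint applied to a node:
#   kubectl taint nodes <node> dedicated=kafka:NoExecute
tolerations:
  - key: "dedicated"     # must match the taint key (assumed)
    operator: "Equal"    # key, value, and effect must all match the taint
    value: "kafka"       # must match the taint value (assumed)
    effect: "NoExecute"  # Pods without this toleration are evicted from the node
```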
5. Click on Create.

When the instance status changes to Running, the instance has been created successfully.
After creating the instance, you can check its status with the following command:

```shell
$ kubectl get rdskafka -n <namespace> -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,STATUS:.status.phase,MESSAGE:.status.reason,CreationTimestamp:.metadata.creationTimestamp
NAME         VERSION   STATUS   MESSAGE                                   CreationTimestamp
my-cluster   3.8       Active   <none>                                    2025-03-06T08:46:57Z
test38       3.8       Failed   Pod is unschedulable or is not starting   2025-03-06T08:46:36Z
```

The meanings of the output fields are as follows:
| Field | Description |
|---|---|
| NAME | Instance Name |
| VERSION | The Kafka version. Currently only these 4 versions are supported: 2.5.0, 2.7.0, 2.8.2, 3.8 |
| STATUS | The current status of the instance, for example Active or Failed |
| MESSAGE | The reason for the instance's current status |
| CreationTimestamp | The timestamp when the instance was created |
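If the status stays Failed, the MESSAGE column usually points at scheduling or startup problems, and inspecting the Pods and Services the operator created for the instance can help. This is a sketch assuming the instance from the example above (`my-cluster` in the `default` namespace); the Pod name in the second command follows a common operator naming convention and is an assumption that may differ in your environment:

```shell
# Show Pods and Services belonging to the instance; look for Pending Pods.
kubectl get pods,svc -n default | grep my-cluster

# Check events of a broker Pod for scheduling failures
# (Pod name is an assumed example).
kubectl -n default describe pod my-cluster-kafka-0
```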
## Create Single-Node Kafka Instance
Setting the Broker node count to 3 is recommended when creating an instance. If you use fewer than 3 brokers, you must adjust certain replication-related parameters during creation.
### Procedure
Create a single-node Kafka instance via the CLI:

```shell
cat << EOF | kubectl -n default create -f -
apiVersion: middleware.alauda.io/v1
kind: RdsKafka
metadata:
  name: my-cluster
spec:
  entityOperator:
    topicOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    userOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    tlsSidecar:
      resources:
        limits:
          cpu: 200m
          memory: 128Mi
        requests:
          cpu: 200m
          memory: 128Mi
  version: 3.8
  replicas: 1
  config:
    auto.create.topics.enable: "false"
    auto.leader.rebalance.enable: "true"
    background.threads: "10"
    compression.type: producer
    delete.topic.enable: "true"
    log.retention.hours: "168"
    log.roll.hours: "168"
    log.segment.bytes: "1073741824"
    message.max.bytes: "1048588"
    min.insync.replicas: "1"
    num.io.threads: "8"
    num.network.threads: "3"
    num.recovery.threads.per.data.dir: "1"
    num.replica.fetchers: "1"
    unclean.leader.election.enable: "false"
    ## Ensure that the following parameters are set correctly
    default.replication.factor: "1"
    offsets.topic.replication.factor: "1"
    transaction.state.log.replication.factor: "1"
    transaction.state.log.min.isr: "1"
  resources:
    limits:
      cpu: 2
      memory: 4Gi
    requests:
      cpu: 2
      memory: 4Gi
  storage:
    size: 1Gi
    # Replace with an available storage class
    class: local-path
    deleteClaim: false
  kafka:
    listeners:
      plain: {}
      external:
        type: nodeport
        tls: false
  zookeeper:
    # Keep the same as the Kafka broker
    replicas: 1
    resources:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi
    # Storage kept the same as the Kafka broker
    storage:
      size: 1Gi
      # Replace with an available storage class
      class: local-path
      deleteClaim: false
EOF
```

Alternatively, create the instance via the web console:

1. Click on Kafka in the left navigation bar.
2. Click on the namespace name.
3. Click on Create Kafka Instance.
4. Click on Expand Instance Parameters and set the following parameters to 1:

   | Parameter | Value | Description |
   |---|---|---|
   | default.replication.factor | 1 | Replication factor for automatically created topics |
   | offsets.topic.replication.factor | 1 | Replication factor for the offsets topic |
   | transaction.state.log.replication.factor | 1 | Replication factor for the transaction state log |
   | transaction.state.log.min.isr | 1 | Minimum number of in-sync replicas for the transaction state log |

5. Complete the other configurations.
6. Switch to the YAML editing page and change the values of `spec.replicas` and `spec.zookeeper.replicas` to 1.
7. Click on Create.
When the instance status changes to Running, the instance has been created successfully.
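Once the single-node instance is Running, you can optionally smoke-test it by creating a topic from inside the broker Pod. The Pod name and script path below follow common Strimzi-style conventions and are assumptions that may differ in your environment:

```shell
# Create a test topic with replication factor 1, the only valid value
# on a single broker (Pod name and script path are assumed examples).
kubectl -n default exec my-cluster-kafka-0 -- \
  /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic smoke-test --partitions 1 --replication-factor 1
```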