【documentation】Update docs #3223

Merged · 8 commits · Dec 13, 2024
@@ -6,8 +6,15 @@ description: "Learn how to create users and control their permissions by roles i
weight: 03
---

ifeval::["{file_output_type}" == "html"]
This section explains how to create users and control their permissions by roles in workspaces and projects.
For more information on permission control, please refer to link:../../05-users-and-roles/[Users and Roles].
endif::[]

ifeval::["{file_output_type}" == "pdf"]
This section explains how to create users and control their permissions by roles in workspaces and projects.
For more information on permission control, please refer to {ks_product-en} Users and Roles.
endif::[]

As a multi-tenant system, KubeSphere supports controlling user permissions based on roles at the platform, cluster, workspace, and project levels, achieving logical resource isolation.

@@ -210,53 +210,53 @@ kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: controlplane1, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 23, user: ubuntu, password: Testing123, arch: arm64} # For arm64 nodes, please add the parameter arch: arm64
  - {name: controlplane2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: worker1, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
  - {name: worker2, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
  - {name: registry, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - controlplane1
    - controlplane2
    control-plane:
    - controlplane1
    - controlplane2
    worker:
    - worker1
    - worker2
    # To have kk deploy the image registry automatically, set up the registry role (deploying the image registry and cluster nodes separately is recommended to reduce mutual interference)
    registry:
    - registry
  controlPlaneEndpoint:
    internalLoadbalancer: haproxy # If you need to deploy a high-availability cluster and no load balancer is available, enable this parameter to perform load balancing within the cluster.
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.15
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    enableMultusCNI: false
  registry:
    # To have kk deploy Harbor, set this parameter to harbor. If this parameter is not set and kk deploys the image registry, docker registry is deployed by default.
    # Harbor does not support arm64. Omit this parameter when deploying in an arm64 environment.
    type: harbor
    # If kk deploys harbor or another registry that requires authentication, set the auths of the corresponding registries. The default docker registry does not need the auths parameter.
    auths:
      "dockerhub.kubekey.local":
        username: admin # harbor default username
        password: Harbor12345 # harbor default password
        plainHTTP: false # If the registry uses http, set this parameter to true
    privateRegistry: "dockerhub.kubekey.local/kse" # Set the private registry address used during cluster deployment
    registryMirrors: []
    insecureRegistries: []
  addons: []
----
--
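The `hosts` and `roleGroups` layout above is easy to get subtly wrong — every role member must match a declared host name, for example. The following is a minimal, hypothetical pre-flight check (not part of `kk` itself); the config is inlined as a plain dict mirroring the YAML above, where in practice you would load it with a YAML parser:

```python
# Hypothetical pre-flight check for a KubeKey Cluster spec (not part of kk).
# The config is inlined as a plain dict mirroring the YAML sample above.

def validate_cluster_spec(spec):
    """Return a list of problems found in the spec; an empty list means OK."""
    problems = []
    host_names = {h["name"] for h in spec.get("hosts", [])}
    # Every role member must reference a declared host.
    for role, members in spec.get("roleGroups", {}).items():
        for node in members:
            if node not in host_names:
                problems.append(f"roleGroups.{role} references unknown host {node!r}")
    # etcd keeps quorum best with an odd member count.
    etcd = spec.get("roleGroups", {}).get("etcd", [])
    if etcd and len(etcd) % 2 == 0:
        problems.append(f"etcd has {len(etcd)} members; an odd count is recommended")
    return problems

spec = {
    "hosts": [
        {"name": "controlplane1"}, {"name": "controlplane2"},
        {"name": "worker1"}, {"name": "worker2"}, {"name": "registry"},
    ],
    "roleGroups": {
        "etcd": ["controlplane1", "controlplane2"],
        "control-plane": ["controlplane1", "controlplane2"],
        "worker": ["worker1", "worker2"],
        "registry": ["registry"],
    },
}
print(validate_cluster_spec(spec))
```

Running this against the sample layout flags the even etcd member count, which is worth reviewing before deployment.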

@@ -80,7 +80,7 @@ curl -sSL https://get-kk.kubesphere.io | sh -

[.admon.attention,cols="a"]
|===
|Note
|Attention

|
If you only want to use `kk` to package KubeSphere images into the air-gapped environment, you can use the manifest file received in the email directly in link:#_4_build_image_package[Build Image Package]; there is no need to create or edit the manifest file.
@@ -123,7 +123,7 @@ vi manifest-sample.yaml
--
[.admon.attention,cols="a"]
|===
|Note
|Attention

|The image list in the following manifest file is for example only. Please get the latest image list through https://get-images.kubesphere.io/.

@@ -16,7 +16,7 @@ After configuring external identity providers, users can log in to the {ks_produ

. Log in to the {ks_product-en} web console with a user having the **platform-admin** role.

. Navigate to the project **kubesphere-system** under the workspace **system-workspace**.
. Click **Cluster Management** and then enter the **host** cluster.

+

@@ -20,8 +20,13 @@ Depending on your network environment, the host cluster and member clusters can
| Note

|
To use an agent connection, the **KubeSphere Multi-Cluster Agent Connection** extension needs to be installed and enabled on the KubeSphere platform.
// For more information, refer to link:../../../../11-use-extensions/19-tower/02-add-a-member-cluster-using-proxy-connection[Add a Member Cluster via Agent Connection].
ifeval::["{file_output_type}" == "html"]
To use an agent connection, the **KubeSphere Multi-Cluster Agent Connection** extension needs to be installed and enabled on the {ks_product-en} platform. For more information, refer to link:../../../../11-use-extensions/19-tower/02-add-a-member-cluster-using-proxy-connection[Add a Member Cluster via Agent Connection].
endif::[]

ifeval::["{file_output_type}" == "pdf"]
To use an agent connection, the **KubeSphere Multi-Cluster Agent Connection** extension needs to be installed and enabled on the {ks_product-en} platform. For more information, refer to the "KubeSphere Multi-Cluster Agent Connection" section in the {ks_product-en} Extension User Guide.
endif::[]
|===

Whether using a direct connection or an agent connection, at least one of the host cluster and the member cluster must be able to access the services exposed by the other side.
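The reachability requirement above can be verified with a plain TCP probe from one side against a service exposed by the other (for example, the member cluster's kube-apiserver port). A minimal sketch — the host and port in the usage comment are illustrative, not taken from a real cluster:

```python
# Minimal sketch: probe whether a service exposed by the other cluster is
# reachable over TCP before attempting to add a member cluster.
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (illustrative values): probe the member cluster's kube-apiserver
# from the host cluster.
# print(can_reach("192.168.0.2", 6443))
```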
@@ -22,7 +22,7 @@ include::../../../_custom-en/platformManagement/platformManagement-oper-logIn.ad
. Click **Workspace Management**.
+
--
* The workspace list displays all workspaces on the KubeSphere platform.
* The workspace list displays all workspaces on the {ks_product-en} platform.

* In the workspace list, click the name of a workspace to view and manage resources within it.
--
@@ -11,7 +11,7 @@ This section describes how to publish an application template.

[.admon.attention,cols="a"]
|===
|Note
|Attention

|
Before listing the application template, at least one of its application versions must be in the **Published** status.
@@ -14,7 +14,7 @@ After installing the "KubeSphere Service Mesh" extension, **Service Mesh** will

[.admon.attention,cols="a"]
|===
|Note
|Attention

|
Before installing KubeSphere Service Mesh, you need to set up available Prometheus and OpenSearch services in the extension configuration. For more information about the extension configuration, see the details page of the "KubeSphere Service Mesh" extension in the Extensions Center.
@@ -40,7 +40,7 @@ include::../../../../../../_custom-en/clusterManagement/logReceivers/logReceiver

[.admon.attention,cols="a"]
|===
|Note
|Attention

|
After modification, ensure that the `endpoints` of each extension under the configuration of **WhizardTelemetry Platform Service** is consistent with the modified service address, so that the {ks_product-en} platform can correctly query the log data. For more information, see the details page of the "WhizardTelemetry Platform Service" extension in the Extensions Center.
@@ -27,7 +27,7 @@ After installing the "WhizardTelemetry Alerting" extension, the **Alerts** and *

[.admon.attention,cols="a"]
|===
|Note
|Attention

|
After installing WhizardTelemetry Alerting, if you enabled or disabled Whizard Observability Center in the WhizardTelemetry Monitoring extension, please update the configuration of WhizardTelemetry Alerting as follows.
@@ -5,48 +5,33 @@ description: "Learn how to view the built-in Dashboards provided by the extensio
weight: 01
---

The Grafana for WhizardTelemetry extension comes with multiple Grafana Dashboards that allow direct querying of monitoring data for Kubernetes and KubeSphere without the need for manual configuration of Grafana Dashboards.
The Grafana for WhizardTelemetry extension comes with multiple Grafana Dashboards that allow direct querying of monitoring data for Kubernetes without the need for manual configuration of Grafana Dashboards.

== Steps

. After logging into the Grafana console, click **Dashboards** in the left navigation pane to view all built-in Dashboard templates, which are in four directories: `aicp`, `kube-prometheus-stack`, `whizard-loki`, and `whizard-monitoring`.
. After logging into the Grafana console, click **Dashboards** in the left navigation pane to view all built-in Dashboard templates.
+
--
image:/images/ks-qkcp/zh/v4.1.2/grafana/dashboard-list.png[dashboard-list]

[%header,cols="1a,3a"]
|===
|Directory |Description

|aicp
|Used by the QingCloud AI Computing Platform; view the monitoring panels on the "AI Computing Management" platform.

|kube-prometheus-stack
|Visualizes monitoring data for Kubernetes.

|whizard-loki
|Visualizes logs, audits, events, and notification history of KubeSphere stored in Loki.

|whizard-monitoring
|Multi-cluster monitoring adapted for Whizard and KubeSphere.
|===

[.admon.attention,cols="a"]
|===
|Note
|Attention

|
* After installing the **WhizardTelemetry Monitoring** extension, the Dashboards in **kube-prometheus-stack** and **whizard-monitoring** will display monitoring data.
* To display monitoring data in the Dashboards of **whizard-loki**, see link:../../17-loki/01-display-loki-data[Grafana Loki for WhizardTelemetry].
After installing the **WhizardTelemetry Monitoring** extension, the Dashboards in **kube-prometheus-stack** will display monitoring data.
|===
--

. Click on a Dashboard template in the directory to view the corresponding monitoring data.
+
Below is an example using the **KubeSphere Nodes** template from the **whizard-monitoring** directory to introduce the Dashboard page.

. The **KubeSphere Nodes** dashboard displays monitoring information for each node, including resource utilization of CPU, memory, disk, and pods, disk IOPS, disk throughput, network bandwidth, etc.
+
image:/images/ks-qkcp/zh/v4.1.2/grafana/node-dashboard.png[node-dashboard]

. Click **data source**, **cluster**, and **node** at the top to select data from specified sources, clusters, and nodes.
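The directory/dashboard layout shown above can also be listed programmatically through Grafana's `/api/search` endpoint. A minimal sketch that groups search results by folder title — the sample payload is illustrative, not captured from a real deployment:

```python
# Minimal sketch: group Grafana /api/search results by folder title,
# mirroring the directory view in the Grafana UI. The sample payload is
# illustrative, not real API output.
from collections import defaultdict

def group_dashboards_by_folder(search_results):
    """Map folder title -> list of dashboard titles ('General' if no folder)."""
    folders = defaultdict(list)
    for item in search_results:
        if item.get("type") == "dash-db":  # skip "dash-folder" entries
            folders[item.get("folderTitle", "General")].append(item["title"])
    return dict(folders)

sample = [
    {"type": "dash-folder", "title": "whizard-monitoring"},
    {"type": "dash-db", "folderTitle": "whizard-monitoring", "title": "KubeSphere Nodes"},
    {"type": "dash-db", "folderTitle": "kube-prometheus-stack", "title": "Kubernetes / API server"},
]
print(group_dashboards_by_folder(sample))
```

In practice the payload would come from `GET <grafana-url>/api/search` with an API token; only the grouping logic is shown here.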
@@ -39,7 +39,7 @@ include::../../../../../_ks_components-en/oper-navigate.adoc[]

[.admon.attention,cols="a"]
|===
|Note
|Attention

|
* The CIDR of the pod IP pool must not overlap with the CIDR of the nodes and the CIDR of the services.
@@ -13,7 +13,7 @@ This section introduces how to enable cluster gateways.

[.admon.attention,cols="a"]
|===
|Note
|Attention

|
If a workspace gateway or project gateway has not been enabled yet, it can no longer be enabled once the cluster gateway is enabled.
@@ -16,7 +16,7 @@ weight: 01

. Log in to the {ks_product_left} web console as a user with the **platform-admin** role.

. Go to the project **kubesphere-system** under the workspace **system-workspace**
. Click **Cluster Management** and enter the **host** cluster

+

@@ -22,7 +22,13 @@ weight: 01
|Note

|
ifeval::["{file_output_type}" == "html"]
To use an agent connection, the **KubeSphere Multi-Cluster Agent Connection** extension must be installed and enabled on the {ks_product_both} platform. For more information, refer to link:../../../../11-use-extensions/19-tower/02-add-a-member-cluster-using-proxy-connection/[Add a Member Cluster via Agent Connection].
endif::[]

ifeval::["{file_output_type}" == "pdf"]
To use an agent connection, the **KubeSphere Multi-Cluster Agent Connection** extension must be installed and enabled on the {ks_product_both} platform. For more information, refer to the "KubeSphere Multi-Cluster Agent Connection" section in the {ks_product_right} Extension User Guide.
endif::[]
|===


@@ -22,7 +22,7 @@ include::../../../_custom/platformManagement/platformManagement-oper-logIn.adoc[
. Click **Workspace Management**.
+
--
* The workspace list displays all workspaces on the current KubeSphere platform.
* The workspace list displays all workspaces on the current {ks_product_both} platform.

* In the workspace list, click the name of a workspace to enter it and view and manage the resources in it.
--
@@ -5,50 +5,32 @@ description: "Learn how to view the built-in dashboards provided by the extension."
weight: 01
---

The Grafana for WhizardTelemetry extension includes multiple built-in Grafana Dashboard templates for directly querying monitoring data for Kubernetes and KubeSphere, without configuring Grafana Dashboards yourself.
The Grafana for WhizardTelemetry extension includes multiple built-in Grafana Dashboard templates for directly querying monitoring data for Kubernetes, without configuring Grafana Dashboards yourself.

== Steps

. After logging in to the Grafana console, click **Dashboards** in the left navigation pane to view all built-in Dashboard templates, grouped into four directories: aicp, kube-prometheus-stack, whizard-loki, and whizard-monitoring.
. After logging in to the Grafana console, click **Dashboards** in the left navigation pane to view all built-in Dashboard templates.
+
--
image:/images/ks-qkcp/zh/v4.1/grafana/dashboard-list.png[dashboard-list]

[%header,cols="1a,3a"]
|===
|Directory |Description

|aicp
|Used by the QingCloud AI computing operations platform; view the monitoring panels on the "AI Computing Management" platform.

|kube-prometheus-stack
|Visualizes Kubernetes monitoring data.

|whizard-loki
|Visualizes KubeSphere logs, audit logs, events, and notification history stored in Loki.

|whizard-monitoring
|Multi-cluster monitoring adapted for Whizard and KubeSphere.
|===

[.admon.attention,cols="a"]
|===
|Attention

|
* After installing the **WhizardTelemetry Monitoring** extension, the Dashboards in **kube-prometheus-stack** and **whizard-monitoring** will display monitoring data.
* To have the Dashboards in **whizard-loki** display data, refer to link:../../17-loki/01-display-loki-data[Grafana Loki for WhizardTelemetry].
After installing the **WhizardTelemetry Monitoring** extension, the Dashboards in **kube-prometheus-stack** will display monitoring data.
|===
--

. Click a Dashboard template in a directory to view the monitoring data for the corresponding metrics.
+
The following uses the **KubeSphere Nodes** template in **whizard-monitoring** as an example to introduce the Dashboard page.


. The **KubeSphere Nodes** dashboard displays monitoring information for each node, including CPU, memory, disk, and pod resource utilization, disk IOPS, disk throughput, network bandwidth, and more.
+
image:/images/ks-qkcp/zh/v4.1.2/grafana/node-dashboard.png[node-dashboard]

. Click **data source**, **cluster**, and **node** at the top to view data for the specified data source, cluster, and node.
Binary file modified static/images/ks-qkcp/zh/v4.1.2/grafana/node-dashboard.png