Commit d14556e

minor improvement

wolfboys committed Oct 4, 2023
1 parent 2f7917c commit d14556e

Showing 3 changed files with 67 additions and 53 deletions.
16 changes: 8 additions & 8 deletions community/submit_guide/submit-code.md
* Clone your repository to your local machine

```shell
git clone git@github.com:apache/incubator-streampark.git
```

* Add the remote repository address, named upstream

```shell
git remote add upstream git@github.com:apache/incubator-streampark.git
```

* View the remote repositories

```shell
git remote -v
```

> At this point there will be two remotes: origin (your own repository) and upstream (the remote Apache repository)
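
For reference, a sketch of what `git remote -v` then prints, assuming origin points at your own fork (`<your-github-id>` is a placeholder):

```shell
git remote -v
# origin    git@github.com:<your-github-id>/incubator-streampark.git (fetch)
# origin    git@github.com:<your-github-id>/incubator-streampark.git (push)
# upstream  git@github.com:apache/incubator-streampark.git (fetch)
# upstream  git@github.com:apache/incubator-streampark.git (push)
```
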
48 changes: 27 additions & 21 deletions docs/user-guide/4-dockerDeployment.md

This tutorial walks through deploying StreamPark with Docker.

## Prepare

- Docker 1.13.1+
- Docker Compose 1.28.0+

### 1. Install docker

To start the service with docker, you need to install [docker](https://www.docker.com/) first

### 2. Install docker-compose

To start the service with docker-compose, you need to install [docker-compose](https://docs.docker.com/compose/install/) first
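
A quick way to confirm the prerequisites are met:

```shell
docker --version
docker-compose --version
```
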
## StreamPark Deployment

### 1. StreamPark deployment based on h2 and docker-compose

This method is suitable for beginners to learn and become familiar with the features. The configuration resets after the container restarts; MySQL or PostgreSQL can be configured below for persistence.

#### 2. Deployment

```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/.env
docker-compose up -d
```

Once the service has started, StreamPark is available at http://localhost:10000, and Flink at http://localhost:8081. Opening the StreamPark link redirects you to the login page; the default user and password are admin and streampark, respectively. See the user manual for a quick start.
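
To confirm the containers came up before opening the UI, a minimal check (a sketch; output details vary):

```shell
# List the compose-managed containers and their state
docker-compose ps
# The StreamPark endpoint should respond (typically redirecting to the login page)
curl -I http://localhost:10000
```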

#### 3. Configure flink home

![](/doc/image/streampark_flinkhome.png)

#### 4. Configure flink-session cluster

![](/doc/image/remote.png)

Note: when configuring the flink-session cluster address, the IP address is not localhost but the host network IP, which can be obtained with ifconfig
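
For example (interface names and addresses vary by system; 192.168.1.10 is illustrative):

```shell
ifconfig          # look for the host's LAN address, e.g. 192.168.1.10
# or, on newer Linux systems:
ip addr show
```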

#### 5. Submit a Flink job

![](/doc/image/remoteSubmission.png)

#### Use existing MySQL services

This approach is suitable for enterprise production: you can quickly deploy StreamPark with Docker and connect it to an existing online database.
Note: the deployment variants are driven by the .env configuration file; make sure there is exactly one .env file in the directory.

```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/mysql/.env
vim .env
```

First, you need to create the "streampark" database in MySQL, and then manually execute the schema and data SQL for the corresponding data source.
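
A sketch of that preparation, assuming the mysql command-line client; the script paths are illustrative, so use the schema and data files shipped with your release:

```shell
mysql -u root -p -e 'CREATE DATABASE IF NOT EXISTS streampark DEFAULT CHARACTER SET utf8mb4;'
mysql -u root -p streampark < schema/mysql-schema.sql   # illustrative path
mysql -u root -p streampark < data/mysql-data.sql       # illustrative path
```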

After that, modify the corresponding connection information.

```shell
SPRING_PROFILES_ACTIVE=mysql
SPRING_DATASOURCE_URL=jdbc:mysql://localhost:3306/streampark?useSSL=false&useUnicode=true&characterEncoding=UTF-8&allowPublicKeyRetrieval=false&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2B8
SPRING_DATASOURCE_USERNAME=root
SPRING_DATASOURCE_PASSWORD=streampark
```
```shell
docker-compose up -d
```
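
Note that `localhost` in the JDBC URL above refers to the StreamPark container itself, not the host; use an address the container can actually reach. If startup fails, the logs are the first place to look:

```shell
docker-compose logs -f
```
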
#### Use existing PostgreSQL services

```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/pgsql/.env
vim .env
```
Modify the corresponding connection information:

```shell
SPRING_PROFILES_ACTIVE=pgsql
SPRING_DATASOURCE_URL=jdbc:postgresql://localhost:5432/streampark?stringtype=unspecified
SPRING_DATASOURCE_USERNAME=postgres
SPRING_DATASOURCE_PASSWORD=streampark
```
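
As with MySQL, the database must exist before startup. A sketch, assuming the psql client and the pgsql scripts from the release package (script paths are illustrative):

```shell
psql -U postgres -h localhost -c 'CREATE DATABASE streampark;'
psql -U postgres -h localhost -d streampark -f schema/pgsql-schema.sql
psql -U postgres -h localhost -d streampark -f data/pgsql-data.sql
```
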
```shell
docker-compose up -d
```

## Deploy StreamPark by building the image from source

```shell
git clone https://github.com/apache/incubator-streampark.git
cd incubator-streampark/deploy/docker
vim docker-compose.yaml
```

```yaml
build:
context: ../..
dockerfile: deploy/docker/console/Dockerfile
# image: ${HUB}:${TAG}
```

```shell
docker-compose up -d
```
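
If an older image is already present locally, the rebuild can be forced in one step:

```shell
docker-compose up -d --build
```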

Finally, execute the start command:
```shell
cd deploy/docker
docker-compose up -d
```
You can use `docker ps` to check whether the installation was successful. If the expected StreamPark containers are listed, the installation succeeded.
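
For instance (a sketch; the exact container names depend on the compose project):

```shell
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
```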

In the previous `.env` file, `HADOOP_HOME` was declared, pointing to `/streampark/hadoop`. Therefore, you need to upload the `etc/hadoop` directory from the Hadoop installation package to `/streampark/hadoop` in the container. The commands are as follows:

```shell
## Upload Hadoop resources
docker cp /path/to/hadoop/etc streampark-docker_streampark-console_1:/streampark/hadoop
## Enter the container
docker exec -it streampark-docker_streampark-console_1 /bin/bash
```
title: 'Docker 快速使用教程'
sidebar_position: 4
---

This tutorial deploys StreamPark with Docker.

## Prerequisites

- Docker 1.13.1+
- Docker Compose 1.28.0+

### 1. Install docker
To start the service with docker, you need to install [docker](https://www.docker.com/) first

### 2. Install docker-compose
To start the service with docker-compose, you need to install [docker-compose](https://docs.docker.com/compose/install/) first

## Deploy StreamPark

### 1. Deploy StreamPark based on h2 and docker-compose

This method is suitable for getting started and learning the features; the configuration is lost after the container restarts. MySQL or PostgreSQL can be configured below for persistence.

#### 2. Deployment

```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/.env
docker-compose up -d
```
Once the service has started, StreamPark is available at http://localhost:10000, and Flink at http://localhost:8081. Opening the StreamPark link redirects you to the login page; the default user and password are admin and streampark, respectively. See the user manual for a quick start.
![](/doc/image/streampark_docker-compose.png)

This deployment automatically starts a flink-session cluster for running Flink jobs, and it also mounts the local docker service and ~/.kube so that jobs can be submitted in Kubernetes mode.
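
A sketch of how to check what the paragraph above describes (the docker socket path is an assumption; adjust to your setup):

```shell
# The Flink session cluster UI started alongside StreamPark
curl -I http://localhost:8081
# Host-side resources described as mounted into the container
ls -l /var/run/docker.sock ~/.kube/config
```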

#### 3. Configure flink home

![](/doc/image/streampark_flinkhome.png)

#### 4. Configure the flink-session cluster

![](/doc/image/remote.png)

Note: when configuring the flink-session cluster address, the IP address is not localhost but the host network IP, which can be obtained with ifconfig

#### 5. Submit a Flink job

![](/doc/image/remoteSubmission.png)


#### Use existing MySQL services

This approach is suitable for enterprise production: you can quickly deploy StreamPark with Docker and connect it to an existing online database.
Note: the deployment variants are driven by the .env configuration file; make sure there is exactly one .env file in the directory.
```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/mysql/.env
vim .env
```
First create the streampark database in MySQL, then manually execute the schema and data SQL for the corresponding data source.

Then modify the corresponding connection information:

```shell
SPRING_PROFILES_ACTIVE=mysql
SPRING_DATASOURCE_URL=jdbc:mysql://localhost:3306/streampark?useSSL=false&useUnicode=true&characterEncoding=UTF-8&allowPublicKeyRetrieval=false&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2B8
SPRING_DATASOURCE_USERNAME=root
SPRING_DATASOURCE_PASSWORD=streampark
```

```shell
docker-compose up -d
```
#### Use existing PostgreSQL services

```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/pgsql/.env
vim .env
```

Modify the corresponding connection information:

```shell
SPRING_PROFILES_ACTIVE=pgsql
SPRING_DATASOURCE_URL=jdbc:postgresql://localhost:5432/streampark?stringtype=unspecified
SPRING_DATASOURCE_USERNAME=postgres
SPRING_DATASOURCE_PASSWORD=streampark
```
```shell
docker-compose up -d
```

## Deploy StreamPark by building the image from source

```shell
git clone https://github.com/apache/incubator-streampark.git
cd incubator-streampark/deploy/docker
vim docker-compose.yaml
```

```yaml
build:
context: ../..
dockerfile: deploy/docker/Dockerfile
# image: ${HUB}:${TAG}
```
![](/doc/image/streampark_source_generation_image.png)

```shell
cd ../..
./build.sh
```
Finally, execute the start command:
```shell
cd deploy/docker
docker-compose up -d
```

## Upload the configuration into the container

The earlier `.env` file declared `HADOOP_HOME`, which maps to `/streampark/hadoop`, so you need to upload the `etc/hadoop` directory from the Hadoop installation package to `/streampark/hadoop`. The commands are as follows:

```shell
## Upload Hadoop resources
docker cp /path/to/hadoop/etc streampark-docker_streampark-console_1:/streampark/hadoop
## Enter the container
docker exec -it streampark-docker_streampark-console_1 /bin/bash
```
