fix some typos (milvus-io#27851)
1. fix some typos in md,yaml milvus-io#22893

Signed-off-by: Sheldon <[email protected]>
locustbaby authored Oct 24, 2023
1 parent 6e6de17 commit 351c64b
Showing 15 changed files with 26 additions and 26 deletions.
2 changes: 1 addition & 1 deletion DEVELOPMENT.md
Original file line number Diff line number Diff line change
@@ -288,7 +288,7 @@ start the cluster on your host machine

```shell
$ ./build/builder.sh make install // build milvus
-$ ./build/build_image.sh // build milvus lastest docker image
+$ ./build/build_image.sh // build milvus latest docker image
$ docker images // check if milvus latest image is ready
REPOSITORY TAG IMAGE ID CREATED SIZE
milvusdb/milvus latest 63c62ff7c1b7 52 minutes ago 570MB
2 changes: 1 addition & 1 deletion ci/jenkins/PublishImages.groovy
@@ -27,7 +27,7 @@ pipeline {
}

stages {
-stage('Generat Image Tag') {
+stage('Generate Image Tag') {
steps {
script {
def date = sh(returnStdout: true, script: 'date +%Y%m%d').trim()
2 changes: 1 addition & 1 deletion configs/advanced/etcd.yaml
@@ -15,7 +15,7 @@
# limitations under the License.

# This is the configuration file for the etcd server.
-# Only standalone users with embeded etcd should change this file, others could just keep this file As Is.
+# Only standalone users with embedded etcd should change this file, others could just keep this file As Is.
# All the etcd client should be added to milvus.yaml if necessary

# Human-readable name for this member.
4 changes: 2 additions & 2 deletions configs/milvus.yaml
@@ -111,7 +111,7 @@ mq:
pulsar:
address: localhost # Address of pulsar
port: 6650 # Port of Pulsar
-webport: 80 # Web port of pulsar, if you connect direcly without proxy, should use 8080
+webport: 80 # Web port of pulsar, if you connect directly without proxy, should use 8080
maxMessageSize: 5242880 # 5 * 1024 * 1024 Bytes, Maximum size of each message in pulsar.
tenant: public
namespace: default
@@ -346,7 +346,7 @@ dataCoord:
balanceInterval: 360 #The interval for the channelBalancer on datacoord to check balance status
segment:
maxSize: 512 # Maximum size of a segment in MB
-diskSegmentMaxSize: 2048 # Maximun size of a segment in MB for collection which has Disk index
+diskSegmentMaxSize: 2048 # Maximum size of a segment in MB for collection which has Disk index
sealProportion: 0.23
# The time of the assignment expiration in ms
# Warning! this parameter is an expert variable and closely related to data integrity. Without specific
@@ -74,7 +74,7 @@ Supposing we have segments `s1, s2, s3`, corresponding positions `p1, p2, p3`
const filter_threshold = recovery_time
// mp means msgPack
for mp := seeking(p1) {
-if mp.position.endtime < filter_threshod {
+if mp.position.endtime < filter_threshold {
if mp.position < p3 {
filter s3
}
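The filtering loop above drops a segment from consumption once a replayed message is already covered by that segment's checkpoint. A minimal Go sketch of this idea is below; the `position` type, the `filterSegments` helper, and the timestamp comparison rule are illustrative assumptions, not the actual Milvus implementation.

```go
package main

import "fmt"

// position models a message or segment checkpoint position; illustrative only.
type position struct {
	endtime int64
}

// filterSegments returns the IDs of segments whose checkpoint is already at
// or beyond the replayed message position, so they can skip (filter) it.
func filterSegments(msgPos position, checkpoints map[string]position) []string {
	var skipped []string
	for segID, cp := range checkpoints {
		if msgPos.endtime <= cp.endtime {
			skipped = append(skipped, segID)
		}
	}
	return skipped
}

func main() {
	checkpoints := map[string]position{
		"s1": {endtime: 10},
		"s2": {endtime: 20},
		"s3": {endtime: 30},
	}
	// A message replayed at t=25 is only covered by s3's checkpoint,
	// so only s3 filters it out; s1 and s2 still need to consume it.
	fmt.Println(filterSegments(position{endtime: 25}, checkpoints)) // [s3]
}
```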
4 changes: 2 additions & 2 deletions docs/design_docs/20211217-milvus_create_collection.md
@@ -86,7 +86,7 @@ type createCollectionTask struct {
}
```

-- `PostExecute`, `CreateCollectonTask` does nothing at this phase, and return directly.
+- `PostExecute`, `CreateCollectionTask` does nothing at this phase, and return directly.

4. `RootCoord` would wrap the `CreateCollection` request into `CreateCollectionReqTask`, and then call function `executeTask`. `executeTask` would return until the `context` is done or `CreateCollectionReqTask.Execute` is returned.

@@ -104,7 +104,7 @@ type CreateCollectionReqTask struct {
}
```

-5. `CreateCollectionReqTask.Execute` would alloc `CollecitonID` and default `PartitionID`, and set `Virtual Channel` and `Physical Channel`, which are used by `MsgStream`, then write the `Collection`'s meta into `metaTable`
+5. `CreateCollectionReqTask.Execute` would alloc `CollectionID` and default `PartitionID`, and set `Virtual Channel` and `Physical Channel`, which are used by `MsgStream`, then write the `Collection`'s meta into `metaTable`

6. After `Collection`'s meta written into `metaTable`, `Milvus` would consider this collection has been created successfully.

2 changes: 1 addition & 1 deletion docs/design_docs/20220105-proxy.md
@@ -127,7 +127,7 @@ future work.

For DqRequest, request and result data are written to the stream. The request data will be written to DqRequestChannel,
and the result data will be written to DqResultChannel. Proxy will write the request of the collection into the
-DqRequestChannel, and the DqReqeustChannel will be jointly subscribed by a group of query nodes. When all query nodes
+DqRequestChannel, and the DqRequestChannel will be jointly subscribed by a group of query nodes. When all query nodes
receive the DqRequest, they will write the query results into the DqResultChannel corresponding to the collection. As
the consumer of the DqResultChannel, Proxy is responsible for collecting the query results and aggregating them,
The result is then returned to the client.
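The fan-out/aggregate flow described above can be sketched with plain Go channels; this is a toy model under assumed names (`aggregate`, `dqResultChannel`), not the real DqRequestChannel/DqResultChannel machinery.

```go
package main

import (
	"fmt"
	"sync"
)

// aggregate fans a request out to n "query nodes" and reduces their partial
// results on a shared result channel, mimicking the proxy's role as the
// consumer of DqResultChannel. Names and payloads are illustrative.
func aggregate(request int, queryNodes int) int {
	dqResultChannel := make(chan int, queryNodes)

	var wg sync.WaitGroup
	for node := 0; node < queryNodes; node++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Every query node receives the request and emits a partial result.
			dqResultChannel <- request
		}()
	}
	wg.Wait()
	close(dqResultChannel)

	// Proxy side: collect the partial results and aggregate them.
	total := 0
	for r := range dqResultChannel {
		total += r
	}
	return total
}

func main() {
	fmt.Println(aggregate(7, 3)) // 21
}
```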
4 changes: 2 additions & 2 deletions docs/design_docs/20220105-query_boolean_expr.md
@@ -31,7 +31,7 @@ ConstantExpr :=
| UnaryArithOp ConstantExpr

Constant :=
-INTERGER
+INTEGER
| FLOAT_NUMBER

UnaryArithOp :=
@@ -64,7 +64,7 @@ CmpOp :=
| "=="
| "!="

-INTERGER := 整数
+INTEGER := 整数
FLOAT_NUM := 浮点数
IDENTIFIER := 列名
```
@@ -61,7 +61,7 @@ The rules system shall follow is:

{% note %}

-**Note:** Segments meta shall be updated *BEFORE* changing the channel checkpoint in case of datanode crashing during the prodedure. Under this premise, reconsuming from the old checkpoint shall recover all the data and duplidated entires will be discarded by segment checkpoints.
+**Note:** Segments meta shall be updated *BEFORE* changing the channel checkpoint in case of datanode crashing during the prodedure. Under this premise, reconsuming from the old checkpoint shall recover all the data and duplidated entries will be discarded by segment checkpoints.

{% endnote %}

@@ -78,7 +78,7 @@ The winning option is to:

**Note:** `Datacoord` reloads from metastore periodically.
Optimization 1: reload channel checkpoint first, then reload segment meta if newly read revision is greater than in-memory one.
-Optimization 2: After `L0 segemnt` is implemented, datacoord shall refresh growing segments only.
+Optimization 2: After `L0 segment` is implemented, datacoord shall refresh growing segments only.

{% endnote %}

6 changes: 3 additions & 3 deletions docs/design_docs/segcore/segment_growing.md
@@ -2,13 +2,13 @@

Growing segment has the following additional interfaces:

-1. `PreInsert(size) -> reseveredOffset`: serial interface, which reserves space for future insertion and returns the `reseveredOffset`.
+1. `PreInsert(size) -> reservedOffset`: serial interface, which reserves space for future insertion and returns the `reservedOffset`.

-2. `Insert(reseveredOffset, size, ...Data...)`: write `...Data...` into range `[reseveredOffset, reseveredOffset + size)`. This interface is allowed to be called concurrently.
+2. `Insert(reservedOffset, size, ...Data...)`: write `...Data...` into range `[reservedOffset, reservedOffset + size)`. This interface is allowed to be called concurrently.

1. `...Data...` contains row_ids, timestamps two system attributes, and other columns
2. data columns can be stored either row-based or column-based.
-3. `PreDelete & Delete(reseveredOffset, row_ids, timestamps)` is a delete interface similar to insert interface.
+3. `PreDelete & Delete(reservedOffset, row_ids, timestamps)` is a delete interface similar to insert interface.

Growing segment stores data in the form of chunk. The number of rows in each chunk is restricted by configs.
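The reserve-then-write contract described above (serial `PreInsert`, concurrent `Insert` into disjoint reserved ranges) can be sketched in Go as follows; the `growingSegment` type and its atomic tail offset are an illustrative assumption, not the segcore C++ implementation.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// growingSegment sketches the pattern: PreInsert reserves [off, off+size)
// by atomically bumping a tail offset, and Insert fills only the reserved
// range, so concurrent Inserts never overlap.
type growingSegment struct {
	next int64 // next free row offset
	data []int64
}

func (s *growingSegment) PreInsert(size int64) int64 {
	return atomic.AddInt64(&s.next, size) - size // reservedOffset
}

func (s *growingSegment) Insert(reservedOffset, size int64, rows []int64) {
	copy(s.data[reservedOffset:reservedOffset+size], rows)
}

func main() {
	seg := &growingSegment{data: make([]int64, 8)}
	var wg sync.WaitGroup
	for i := int64(1); i <= 4; i++ {
		wg.Add(1)
		go func(v int64) {
			defer wg.Done()
			off := seg.PreInsert(2) // reserve two rows
			seg.Insert(off, 2, []int64{v, v})
		}(i)
	}
	wg.Wait()
	fmt.Println(seg.next) // 8 rows reserved in total
}
```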

2 changes: 1 addition & 1 deletion docs/developer_guides/appendix_a_basic_components.md
@@ -107,7 +107,7 @@ type Session struct {
}

// NewSession is a helper to build Session object.
-// ServerID, ServerName, Address, Exclusive will be assigned after registeration.
+// ServerID, ServerName, Address, Exclusive will be assigned after registration.
// metaRoot is a path in etcd to save session information.
// etcdEndpoints is to init etcdCli when NewSession
func NewSession(ctx context.Context, metaRoot string, etcdEndpoints []string) *Session {}
6 changes: 3 additions & 3 deletions docs/developer_guides/chap04_message_stream.md
@@ -7,7 +7,7 @@
```go
type Client interface {
CreateChannels(req CreateChannelRequest) (CreateChannelResponse, error)
-DestoryChannels(req DestoryChannelRequest) error
+DestroyChannels(req DestroyChannelRequest) error
DescribeChannels(req DescribeChannelRequest) (DescribeChannelResponse, error)
}
```
@@ -32,10 +32,10 @@ type CreateChannelResponse struct {
}
```

-- _DestoryChannels_
+- _DestroyChannels_

```go
-type DestoryChannelRequest struct {
+type DestroyChannelRequest struct {
ChannelNames []string
}
```
4 changes: 2 additions & 2 deletions docs/developer_guides/chap05_proxy.md
@@ -105,7 +105,7 @@ type MilvusService interface {
CreatePartition(ctx context.Context, request *milvuspb.CreatePartitionRequest) (*commonpb.Status, error)
DropPartition(ctx context.Context, request *milvuspb.DropPartitionRequest) (*commonpb.Status, error)
HasPartition(ctx context.Context, request *milvuspb.HasPartitionRequest) (*milvuspb.BoolResponse, error)
-LoadPartitions(ctx context.Context, request *milvuspb.LoadPartitonRequest) (*commonpb.Status, error)
+LoadPartitions(ctx context.Context, request *milvuspb.LoadPartitionRequest) (*commonpb.Status, error)
ReleasePartitions(ctx context.Context, request *milvuspb.ReleasePartitionRequest) (*commonpb.Status, error)
GetPartitionStatistics(ctx context.Context, request *milvuspb.PartitionStatsRequest) (*milvuspb.PartitionStatsResponse, error)
ShowPartitions(ctx context.Context, request *milvuspb.ShowPartitionRequest) (*milvuspb.ShowPartitionResponse, error)
@@ -225,7 +225,7 @@ type CollectionSchema struct {
Fields []*FieldSchema
}

-type LoadPartitonRequest struct {
+type LoadPartitionRequest struct {
Base *commonpb.MsgBase
DbID UniqueID
CollectionID UniqueID
2 changes: 1 addition & 1 deletion docs/developer_guides/chap07_query_coordinator.md
@@ -134,7 +134,7 @@ type PartitionStatesResponse struct {
- _LoadPartitions_

```go
-type LoadPartitonRequest struct {
+type LoadPartitionRequest struct {
Base *commonpb.MsgBase
DbID UniqueID
CollectionID UniqueID
6 changes: 3 additions & 3 deletions docs/user_guides/tls_proxy.md
@@ -78,7 +78,7 @@ certs = $dir/certs # Where the issued certs are kept
crl_dir = $dir/crl # Where the issued crl are kept
database = $dir/index.txt # database index file.
#unique_subject = no # Set to 'no' to allow creation of
-# several ctificates with same subject.
+# several certificates with same subject.
new_certs_dir = $dir/newcerts # default place for new certs.

certificate = $dir/cacert.pem # The CA certificate
@@ -89,7 +89,7 @@ crl = $dir/crl.pem # The current CRL
private_key = $dir/private/cakey.pem# The private key
RANDFILE = $dir/private/.rand # private random number file

-x509_extensions = usr_cert # The extentions to add to the cert
+x509_extensions = usr_cert # The extensions to add to the cert

# Comment out the following two lines for the "traditional"
# (and highly broken) format.
@@ -141,7 +141,7 @@ default_bits = 2048
default_keyfile = privkey.pem
distinguished_name = req_distinguished_name
attributes = req_attributes
-x509_extensions = v3_ca # The extensions to add to the self signed cert
+x509_extensions = v3_ca # The extensions to add to the self signed cert

# Passwords for private keys if not present they will be prompted for
# input_password = secret
