# RFC: TiKV RawKV MVCC GC

## Summary

Move the TiKV MVCC GC worker out of TiDB into an independent group of GC worker components, and implement a new GC process in TiKV for RawKV.

## Motivation

1. The GC worker is an important component for TiKV: it deletes outdated MVCC data so that storage does not grow without bound. Currently, however, the GC worker is implemented in TiDB, which makes TiKV unusable without TiDB. Moreover, the current GC process only serves TiDB transactions; it is not usable for RawKV.
2. Standardize the API used to set and obtain GC status in PD, to improve the developer experience.

Concretely:
- We are changing the RawKV encoding to MVCC, so GC becomes necessary.
- There is no GC for the TxnKV scenario when TiDB is not deployed.
- The GC safe point and the Txn safe point are easily misunderstood.

## Background

According to the documentation for the current GC worker in a TiDB cluster, the GC process is as follows.

In the TiDB GC worker leader:
1. Regularly calculate a new timestamp called the "GC safe point" (the default interval is 10 minutes) and push the safe point to PD.
2. Get the minimal service safe point among all services from the response of step 1; this is the GC safe point.
3. Run the Txn GC process: resolve locks and record delete-range information.
4. Save the Txn safe point.
5. Upload the GC safe point to PD.

In the PD leader:
1. Receive update-safe-point requests from TiDB or other tools (e.g. CDC, BR).
2. Calculate the minimal timestamp = min(all service safe points, now - gc_life_time).

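The calculation in step 2 can be made concrete with a small sketch (an illustration only, not PD's implementation; `gcSafePoint` is a hypothetical helper, with times expressed directly in timestamp units):

```go
package main

import "fmt"

// gcSafePoint returns min(all service safe points, now - gc_life_time).
func gcSafePoint(serviceSafePoints []uint64, nowTS, gcLifeTime uint64) uint64 {
	min := nowTS - gcLifeTime
	for _, sp := range serviceSafePoints {
		if sp < min {
			min = sp
		}
	}
	return min
}

func main() {
	// A service lagging at ts=90 holds the safe point back, even though
	// now - gc_life_time alone would allow ts=100.
	fmt.Println(gcSafePoint([]uint64{90, 120}, 200, 100)) // prints 90
}
```

This is why a single slow service (e.g. a stalled CDC changefeed) blocks GC for the whole cluster.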
The GC safe point data in PD's etcd is as follows:
- safe point generated by TiDB:
```shell
/gc/safe_point/service
```
- service safe point generated by CDC, BR or Lightning:
```shell
/gc/safe_point/service/$serviceID
```

On every TiKV node:
1. Get the GC safe point from PD regularly.
2. Deletion is triggered in the CompactionFilter and the GcTask thread.

## New GC worker implementation

In a TiKV cluster without TiDB nodes, there are a few differences:

1. We need to move the GC worker into another component.
2. For [API V2](https://github.com/tikv/rfcs/blob/master/text/0069-api-v2.md), earlier versions in the default CF need to be GCed, but the Txn GC worker process is triggered by the WriteCompactionFilter of the write CF.
3. The RawKV data encoding is different from the Txn data encoding in TiDB.

So we designed a new GC architecture and process for the TiKV cluster. It extends the original interface to support RawKV MVCC GC. For the original TiDB scenario, the old GC implementation can still be used.

## Detailed design

To support RawKV GC in a TiKV cluster deployed without TiDB nodes:

1. Add a new component to replace the GC worker in TiDB nodes.
    - Why we chose to create a new component:
        - If we added the GC worker to PD, it would cause a circular dependency on client-go.
        - If we added the GC worker to TiKV: the logic required by the GC worker is well implemented in client-go but missing in client-rust, so implementing the GC worker in TiKV would require much more development work.

    So, after discussion, we decided to add a new role for the GC worker:
    - It mainly calculates a new timestamp called the "GC safe point" at regular intervals and pushes the safe point to PD.
    - It is implemented in Golang, which makes it convenient to call the client-go interfaces.
    - The code of the new GC worker will be added to [tikv/migration](https://github.com/tikv/migration).
    - The GC worker configuration lives in the config/gc_worker_conf.toml file.
    - The default interval for generating the GC safe point is still '10m0s'.
    - TiKV will get the GC safe point from PD, where GC safe point = min(all service safe points, GC worker safe point).

2. Changes on PD:
    1. A new concept, 'service group':
        - TiDB, TxnKV, and RawKV are allowed to coexist. Because the data of the three scenarios is independent, separate safe points are used for GC, which reduces interference between businesses and speeds up GC.
        - If multi-tenancy is supported in the future, 'service group' can also support it.
        - New interfaces need to be designed to update service safe points with a 'service group'.
        - How to allocate $service_group_id is an open question.
    2. Design the etcd paths that store the service safe points of a service group. The safe point data path in PD's etcd will change; the new safe point paths are as follows:
        - GC worker safe point:
        ```shell
        /gc_servicegroup/$service_group_id/safe_point
        ```
        - CDC/BR service safe point:
        ```shell
        /gc_servicegroup/$service_group_id/service/$serviceId
        ```
    3. Design interfaces to standardize the API:
        1. The GC worker calls pbclient.UpdateGCSafepointByServiceGroup to update the GC worker safe point. Interface:
            ```go
            func (s *GrpcServer) UpdateGCSafepointByServiceGroup(ctx context.Context, request *pdpb.UpdateGCSafepointByServiceGroupRequest) (*pdpb.UpdateGCSafepointByServiceGroupResponse, error)
            ```
            Related messages added in pdpb.proto:
            ```proto
            message UpdateGCSafepointByServiceGroupRequest {
                RequestHeader header = 1;
                bytes service_group_id = 2;
                uint64 safe_point = 3;
            }

            message UpdateGCSafepointByServiceGroupResponse {
                ResponseHeader header = 1;
                bytes service_group_id = 2;
                uint64 new_safe_point = 3;
            }
            ```
        2. Used by TiKV to get the GC safe point. Interface:
            ```go
            func (s *GrpcServer) GetAllServiceGroupGcSafePoint(ctx context.Context, request *pdpb.GetAllServiceGroupGcSafePointRequest) (*pdpb.GetAllServiceGroupGcSafePointResponse, error)
            ```
            Related messages in pdpb.proto:
            ```proto
            message GetAllServiceGroupGcSafePointRequest {
                RequestHeader header = 1;
                bytes service_group_id = 2;
            }

            message GetAllServiceGroupGcSafePointResponse {
                ResponseHeader header = 1;
                bytes service_group_id = 2;
                uint64 safe_point = 3;
            }
            ```
        3. Used by CDC/BR/Lightning to update a service safe point. Interface:
            ```go
            func (s *GrpcServer) UpdateServiceSafePointByServiceGroup(ctx context.Context, request *pdpb.UpdateServiceSafePointByServiceGroupRequest) (*pdpb.UpdateServiceSafePointByServiceGroupResponse, error)
            ```
            Related messages in pdpb.proto:
            ```proto
            message UpdateServiceSafePointByServiceGroupRequest {
                RequestHeader header = 1;
                bytes service_group_id = 2;
                bytes service_id = 3;
                int64 TTL = 4;
                uint64 safe_point = 5;
            }

            message UpdateServiceSafePointByServiceGroupResponse {
                ResponseHeader header = 1;
                bytes service_group_id = 2;
                bytes service_id = 3;
                int64 TTL = 4;
                uint64 min_safe_point = 5;
            }
            ```
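To make the intended semantics of the service-group interfaces concrete, here is a runnable sketch, an illustration only and not PD's implementation, of what UpdateServiceSafePointByServiceGroup conceptually does: record the caller's service safe point under its service group and return the group's minimum safe point (the `store` type and method name are hypothetical, and TTL handling is omitted):

```go
package main

import "fmt"

// store maps service group id -> service id -> safe point.
type store map[string]map[string]uint64

// updateServiceSafePoint records a service's safe point inside its
// service group and returns the group's minimum safe point, which is
// the bound a caller such as CDC or BR must respect.
func (s store) updateServiceSafePoint(group, service string, safePoint uint64) uint64 {
	if s[group] == nil {
		s[group] = map[string]uint64{}
	}
	s[group][service] = safePoint
	min := safePoint
	for _, sp := range s[group] {
		if sp < min {
			min = sp
		}
	}
	return min
}

func main() {
	s := store{}
	s.updateServiceSafePoint("rawkv_group", "cdc", 100)
	// BR pushes 120, but the group minimum is still held at 100 by CDC.
	fmt.Println(s.updateServiceSafePoint("rawkv_group", "br", 120)) // prints 100
}
```

Because safe points are grouped per service group, a slow service in one group no longer delays GC in the others.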
3. Changes on TiKV:
    - Get the GC safe point from PD via the getGCSafepointByServiceGroup interface.
    - For API V2, we need to add a new CompactionFilter named RawGCcompactionFilter, and a new GCTask type implementation.
    - The GC condition in RawGCcompactionFilter is: (ts < GCSafePoint) && (ttl-expired || delete-marked || not the newest version).
    - If the newest version is earlier than the GC safe point and it is delete-marked or its TTL has expired, that key and the earlier versions of the same user key are sent to a GC scheduler thread to be GCed asynchronously.

![raw gc compaction filter](../media/tikv-rawkv-gc-compactionfilter.png)
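The GC condition stated above can be expressed as a small predicate. This sketch is written in Go for readability (the real filter would be Rust code inside TiKV), and the function name `shouldGC` is hypothetical:

```go
package main

import "fmt"

// shouldGC mirrors the stated condition:
// (ts < GCSafePoint) && (ttlExpired || deleteMarked || !isNewestVersion)
func shouldGC(ts, gcSafePoint uint64, ttlExpired, deleteMarked, isNewestVersion bool) bool {
	return ts < gcSafePoint && (ttlExpired || deleteMarked || !isNewestVersion)
}

func main() {
	// An old version below the safe point is collectable even if live and unexpired.
	fmt.Println(shouldGC(50, 100, false, false, false)) // prints true
	// The newest live, unexpired version is always kept.
	fmt.Println(shouldGC(50, 100, false, false, true)) // prints false
	// Nothing at or above the GC safe point is ever collected.
	fmt.Println(shouldGC(150, 100, true, true, false)) // prints false
}
```

The `ts < GCSafePoint` guard ensures readers holding a snapshot newer than the safe point never lose visible versions.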
## Reference

- https://docs.google.com/document/d/1jA3lK9QbYlwsvn67wGsSuusD1Dzx7ANq_vya384RBIg/edit#heading=h.rr3hcmc7ejb8
- https://docs.pingcap.com/tidb/stable/garbage-collection-overview