
Commit

Enables backup-restore to handle S3's object lock mechanism which will make snapshots immutable.

Adjusted the Restoration and GC functionality to handle immutable snapshots for S3 object store.
ishan16696 committed Dec 25, 2024
1 parent 0e411a2 commit 38c36a7
Showing 6 changed files with 214 additions and 45 deletions.
Binary file added docs/images/S3_immutability_working.png
82 changes: 72 additions & 10 deletions docs/usage/enabling_immutable_snapshots.md
@@ -1,18 +1,23 @@
# Enabling Immutable Snapshots in `etcd-backup-restore`

This guide walks you through the process of enabling immutable snapshots in `etcd-backup-restore` by leveraging bucket-level immutability features for various object storage providers:

1. Google Cloud Storage (GCS)
2. Azure Blob Storage (ABS)
3. Amazon Simple Storage Service (S3)

Enabling immutability on your bucket ensures that your backups are tamper-proof and comply with regulatory requirements.

---

## Terminology

- **Bucket / Container**: A storage resource in cloud storage services where objects (such as snapshots) are stored. GCS and S3 use the term **bucket**, while ABS uses **container**.

- **Immutability**: The property of an object being unmodifiable after creation, until the immutability period expires.

- **Immutability Policy**: A configuration that specifies a minimum retention period during which objects in a bucket/container are protected from deletion or modification.

- **Immutability Period**: The duration defined by the immutability policy during which objects remain immutable.

- **Locking**: The action of making an immutability policy permanent, preventing any reduction or removal of the immutability period.

Expand All @@ -22,7 +27,9 @@ This guide walks you through the process of enabling immutable snapshots in `etc

## Overview

Currently, `etcd-backup-restore` supports bucket-level immutability for GCS, ABS, and S3.

> Note: Currently, OpenStack Object Storage (Swift) doesn't support immutability for objects: https://blueprints.launchpad.net/swift/+spec/immutability-middleware.

- **Immutability Policy**: You can add an immutability policy to a bucket/container to specify an immutability period.
- When an immutability policy is set, objects in the bucket/container can only be deleted or replaced once their age exceeds the immutability period.
@@ -37,7 +44,6 @@ Currently, `etcd-backup-restore` supports bucket-level immutability for GCS and
- You can increase the immutability period of a locked policy if needed.
- A locked bucket/container can only be deleted once all objects present in the bucket/container are deleted.


---

## Configure Bucket-Level Immutability
@@ -100,6 +106,32 @@ To configure an immutability policy on an Azure Blob Storage container:
--period 4
```

#### AWS S3

1. To enable object lock on new buckets

* Create a new bucket with object lock enabled, then update the bucket with an object lock configuration.

```bash
# create new bucket with object lock enabled
aws s3api create-bucket --bucket <your-bucket-name> --region <region> --create-bucket-configuration LocationConstraint=<region> --object-lock-enabled-for-bucket

# update the bucket with the object lock configuration
# Mode must be either "COMPLIANCE" or "GOVERNANCE"; Days (X) is the retention period in days
aws s3api put-object-lock-configuration --bucket <your-bucket-name> --object-lock-configuration='{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "<COMPLIANCE|GOVERNANCE>", "Days": X }}}'
```

2. To enable object lock on existing buckets

* First enable object versioning on the existing bucket, then enable object lock on the bucket with its configuration (say, a retention period of `X` days).

```bash
# enable the object versioning on existing bucket
aws s3api put-bucket-versioning --bucket <your-bucket-name> --versioning-configuration Status=Enabled

# now, enable the object lock on the bucket with its configuration
# Mode must be either "COMPLIANCE" or "GOVERNANCE"; Days (X) is the retention period in days
aws s3api put-object-lock-configuration --bucket <your-bucket-name> --object-lock-configuration='{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "<COMPLIANCE|GOVERNANCE>", "Days": X }}}'
```
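The commands above pass the same object-lock configuration document in both cases. As a minimal sketch (standard library only; the struct and function names here are illustrative, not part of the AWS SDK), the document can be built and validated in Go before handing it to `--object-lock-configuration`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// These structs mirror the JSON document passed to
// `aws s3api put-object-lock-configuration`.
type defaultRetention struct {
	Mode string `json:"Mode"` // must be exactly "COMPLIANCE" or "GOVERNANCE"
	Days int64  `json:"Days"` // retention period in days
}

type rule struct {
	DefaultRetention defaultRetention `json:"DefaultRetention"`
}

type objectLockConfiguration struct {
	ObjectLockEnabled string `json:"ObjectLockEnabled"` // must be "Enabled"
	Rule              rule   `json:"Rule"`
}

// objectLockConfigJSON renders the configuration document for the given
// retention mode and period in days.
func objectLockConfigJSON(mode string, days int64) string {
	cfg := objectLockConfiguration{
		ObjectLockEnabled: "Enabled",
		Rule:              rule{DefaultRetention: defaultRetention{Mode: mode, Days: days}},
	}
	out, err := json.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	fmt.Println(objectLockConfigJSON("GOVERNANCE", 4))
}
```

Generating the document programmatically avoids quoting mistakes in the inline JSON string on the command line.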

### Modify an Unlocked Immutability Policy

You can modify an unlocked immutability policy to adjust the immutability period or to allow additional writes to the bucket/container.
@@ -199,7 +231,6 @@ To lock the immutability policy:
--lock-retention-period
```


#### Azure Blob Storage (ABS)

To lock the immutability policy:
@@ -240,8 +271,29 @@ To lock the immutability policy:
--if-match $etag
```

### S3 Object Lock and working with snapshots

#### Object Lock

- S3 Object Lock blocks permanent object deletion for a user-defined retention period.
- It works on the WORM (write once, read many) model.
- Enabling S3 Object Lock automatically enables S3 versioning; the lock only prevents locked object versions from being permanently deleted.

> Note: The consumer of etcd-backup-restore must enable the object lock with the appropriate settings on their buckets to consume this feature. This is because backup-restore doesn't manage or interfere with the bucket (object store) creation process.

#### Working with snapshots

- S3 Object Lock can be enabled at either the bucket or the object level. Moreover, it can be enabled when creating a new bucket or on an existing bucket.
- For new buckets: such a bucket will contain only new snapshots, so all snapshots inside it are versioned, locked snapshots.
- For existing/old buckets: these buckets can contain a mix of pre-existing non-versioned, non-locked snapshots and new snapshots, which are versioned and locked with a retention period.

The following diagram illustrates how snapshots work with S3 for both existing/old buckets and new buckets.

![Working with S3](../images/S3_immutability_working.png)

---

> Note: If immutable snapshots are not enabled, the object's immutability expiry is considered to be zero, and hence has no effect on current functionality.

## Ignoring Snapshots During Restoration

In certain scenarios, you might want `etcd-backup-restore` to ignore specific snapshots present in the object store during the restoration of etcd's data directory. When snapshots were mutable, operators could simply delete these snapshots, and subsequent restorations would not include them. However, once immutability is enabled, it is no longer possible to delete these snapshots.
@@ -319,6 +371,14 @@ To add the tag:

After adding the annotation or tag, `etcd-backup-restore` will ignore these snapshots during the restoration process.

#### AWS S3

- Tagging snapshots to skip them during restoration is not supported for `AWS S3` buckets.
- Object lock automatically enables S3 object versioning, so this extra handling is not required: users can simply soft-delete those snapshots.
- With object versioning in place, a delete marker is added on top of those snapshots, and during restoration backup-restore only considers the latest version of each snapshot.
- If you want a snapshot back, just delete its delete marker.
- For more info: https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjectVersions.html
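The soft-delete flow above can be sketched with a small in-memory versioned store (illustrative only; real S3 semantics have more nuance): a delete marker becomes the latest version, so a listing that only considers the latest version skips the snapshot, and removing the marker brings it back.

```go
package main

import "fmt"

// version models an S3 object version; a delete marker carries no data.
type version struct {
	id           string
	deleteMarker bool
}

// latestVisible returns the newest version of an object only if that version
// is not a delete marker -- mirroring how restoration sees snapshots.
func latestVisible(versions []version) (version, bool) {
	if len(versions) == 0 {
		return version{}, false
	}
	latest := versions[len(versions)-1]
	if latest.deleteMarker {
		return version{}, false
	}
	return latest, true
}

func main() {
	versions := []version{{id: "v1"}}
	_, ok := latestVisible(versions)
	fmt.Println(ok) // true: the snapshot is visible

	// Soft delete: a delete marker is stacked on top of the snapshot.
	versions = append(versions, version{id: "v2", deleteMarker: true})
	_, ok = latestVisible(versions)
	fmt.Println(ok) // false: restoration ignores the snapshot

	// Deleting the delete marker restores visibility.
	versions = versions[:len(versions)-1]
	_, ok = latestVisible(versions)
	fmt.Println(ok) // true again
}
```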

---

## Setting the Immutability Period
@@ -369,5 +429,7 @@ By following best practices and regularly reviewing your backup and immutability
- [Configure Immutability Policies](https://learn.microsoft.com/azure/storage/blobs/immutable-policy-configure-container-scope)
- [Blob Index Tags](https://learn.microsoft.com/azure/storage/blobs/storage-index-tags-overview)


- **AWS S3**
- [Object Lock Documentation](https://aws.amazon.com/s3/features/object-lock/)
- [Object Lock policies]()
- [Deletion of object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjectVersions.html)
4 changes: 4 additions & 0 deletions pkg/snapshot/snapshotter/garbagecollector.go
@@ -181,6 +181,10 @@ func (ssr *Snapshotter) RunGarbageCollector(stopCh <-chan struct{}) {
if fullSnapshotIndex < len(fullSnapshotIndexList)-int(ssr.config.MaxBackups) {
snap := snapList[fullSnapshotIndexList[fullSnapshotIndex]]
snapPath := path.Join(snap.SnapDir, snap.SnapName)
if !snap.IsDeletable() {
ssr.logger.Infof("GC: Skipping the snapshot: %s, since its immutability period hasn't expired yet", snap.SnapName)
continue
}
ssr.logger.Infof("GC: Deleting old full snapshot: %s", snapPath)
if err := ssr.store.Delete(*snap); errors.Is(err, brtypes.ErrSnapshotDeleteFailDueToImmutability) {
		// The snapshot is still immutable, attempt to garbage collect it in the next run
1 change: 0 additions & 1 deletion pkg/snapstore/oss_snapstore.go
@@ -52,7 +52,6 @@ type authOptions struct {
type OSSSnapStore struct {
prefix string
bucket OSSBucket
multiPart sync.Mutex
maxParallelChunkUploads uint
minChunkSize int64
tempDir string
168 changes: 135 additions & 33 deletions pkg/snapstore/s3_snapstore.go
@@ -64,10 +64,9 @@ type SSECredentials struct {

// S3SnapStore is snapstore with AWS S3 object store as backend
type S3SnapStore struct {
	prefix                  string
	client                  s3iface.S3API
	bucket                  string
// maxParallelChunkUploads hold the maximum number of parallel chunk uploads allowed.
maxParallelChunkUploads uint
minChunkSize int64
@@ -137,7 +136,7 @@ func readAWSCredentialsJSONFile(filename string) (session.Options, SSECredential
}

httpClient := http.DefaultClient
if awsConfig.InsecureSkipVerify != nil && *awsConfig.InsecureSkipVerify {
httpClient.Transport = &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: *awsConfig.InsecureSkipVerify},
}
@@ -191,7 +190,7 @@ func readAWSCredentialFiles(dirname string) (session.Options, SSECredentials, er
}

httpClient := http.DefaultClient
if awsConfig.InsecureSkipVerify != nil && *awsConfig.InsecureSkipVerify {
httpClient.Transport = &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: *awsConfig.InsecureSkipVerify},
}
@@ -320,9 +319,18 @@ func NewS3FromClient(bucket, prefix, tempDir string, maxParallelChunkUploads uin

// Fetch should open reader for the snapshot file from store
func (s *S3SnapStore) Fetch(snap brtypes.Snapshot) (io.ReadCloser, error) {
	getObjectInput := &s3.GetObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(path.Join(snap.Prefix, snap.SnapDir, snap.SnapName)),
	}
	if len(snap.VersionID) > 0 {
		// Fetch the specific object version of a versioned snapshot.
		getObjectInput.VersionId = &snap.VersionID
	}
if s.sseCustomerKey != "" {
// Customer managed Server Side Encryption
@@ -511,34 +519,98 @@ func (s *S3SnapStore) partUploader(wg *sync.WaitGroup, stopCh <-chan struct{}, s
}

// List will return sorted list with all snapshot files on store.
// For a versioned S3 bucket, only the latest version of each snapshot is listed.
func (s *S3SnapStore) List(includeAll bool) (brtypes.SnapList, error) {
	var snapList brtypes.SnapList
	prefixTokens := strings.Split(s.prefix, "/")
	// Last element of the tokens is backup version
	// Consider the parent of the backup version level (Required for Backward Compatibility)
	prefix := path.Join(strings.Join(prefixTokens[:len(prefixTokens)-1], "/"))

	// Get the status of bucket versioning.
	// Note: Bucket versioning will always be enabled for object lock.
	versioningStatus, err := s.client.GetBucketVersioning(&s3.GetBucketVersioningInput{Bucket: &s.bucket})
	if err != nil {
		return nil, err
	}

	if versioningStatus.Status != nil && *versioningStatus.Status == "Enabled" {
		// Object/bucket versioning is enabled on the given bucket.
		logrus.Info("Object versioning is found to be enabled.")

		isObjectLockEnabled, bucketImmutableExpiryTimeInDays, err := getBucketImmutabilityTime(s)
		if err != nil {
			logrus.Warnf("unable to check object lock configuration for the bucket: %v", err)
		} else if !isObjectLockEnabled {
			logrus.Warn("Object versioning is enabled, but object lock is not.")
			logrus.Warn("Please enable object lock as well on the given bucket for immutability of snapshots.")
		}

		in := &s3.ListObjectVersionsInput{
			Bucket: aws.String(s.bucket),
			Prefix: aws.String(prefix),
		}

		if err := s.client.ListObjectVersionsPages(in, func(page *s3.ListObjectVersionsOutput, lastPage bool) bool {
			for _, version := range page.Versions {
				if *version.IsLatest {
					k := (*version.Key)[len(*page.Prefix):]
					if strings.Contains(k, backupVersionV1) || strings.Contains(k, backupVersionV2) {
						snap, err := ParseSnapshot(path.Join(prefix, k))
						if err != nil {
							logrus.Warnf("Invalid snapshot found. Ignoring it: %s", k)
						} else {
							// Capture the versionID and the immutability expiry time of the snapshot.
							snap.VersionID = *version.VersionId
							if bucketImmutableExpiryTimeInDays != nil {
								// To get an object's "RetainUntilDate" ("ImmutabilityExpiryTime"), backup-restore would need to make an API call for each snapshot.
								// To avoid those calls, backup-restore calculates the "ImmutabilityExpiryTime" from the bucket-level retention period:
								// ImmutabilityExpiryTime = SnapshotCreationTime + ObjectRetentionTimeInDays
								snap.ImmutabilityExpiryTime = snap.CreatedOn.Add(time.Duration(*bucketImmutableExpiryTimeInDays) * 24 * time.Hour)
							} else {
								_, bucketImmutableExpiryTimeInDays, err = getBucketImmutabilityTime(s)
								if err != nil {
									logrus.Warnf("unable to get bucket immutability expiry time: %v", err)
								}
							}
							snapList = append(snapList, snap)
						}
					}
				} else {
					logrus.Warnf("Snapshot: %s with versionID: %s is not the latest version; it was last modified: %s. Ignoring it.", *version.Key, *version.VersionId, version.LastModified)
				}
			}
			return !lastPage
		}); err != nil {
			return nil, err
		}
	} else {
		// Object/bucket versioning is not enabled on the given bucket.
		logrus.Info("Object versioning is not found to be enabled.")
		in := &s3.ListObjectsInput{
			Bucket: aws.String(s.bucket),
			Prefix: aws.String(prefix),
		}

		if err := s.client.ListObjectsPages(in, func(page *s3.ListObjectsOutput, lastPage bool) bool {
			for _, key := range page.Contents {
				k := (*key.Key)[len(*page.Prefix):]
				if strings.Contains(k, backupVersionV1) || strings.Contains(k, backupVersionV2) {
					snap, err := ParseSnapshot(path.Join(prefix, k))
					if err != nil {
						logrus.Warnf("Invalid snapshot found. Ignoring it: %s", k)
					} else {
						snapList = append(snapList, snap)
					}
				}
			}
			return !lastPage
		}); err != nil {
			return nil, err
		}
	}

sort.Sort(snapList)
@@ -547,11 +619,25 @@ func (s *S3SnapStore) List(_ bool) (brtypes.SnapList, error) {

// Delete should delete the snapshot file from store
func (s *S3SnapStore) Delete(snap brtypes.Snapshot) error {
	deleteObjectInput := &s3.DeleteObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(path.Join(snap.Prefix, snap.SnapDir, snap.SnapName)),
	}
	if len(snap.VersionID) > 0 {
		// For a versioned snapshot, delete the specific object version.
		deleteObjectInput.VersionId = &snap.VersionID
	}
	_, err := s.client.DeleteObject(deleteObjectInput)
	return err
}

// GetS3CredentialsLastModifiedTime returns the latest modification timestamp of the AWS credential file(s)
@@ -622,3 +708,19 @@ func getSSECreds(sseCustomerKey, sseCustomerAlgorithm *string) (SSECredentials,
sseCustomerAlgorithm: *sseCustomerAlgorithm,
}, nil
}

func getBucketImmutabilityTime(s *S3SnapStore) (bool, *int64, error) {
	objectConfig, err := s.client.GetObjectLockConfiguration(&s3.GetObjectLockConfigurationInput{
		Bucket: aws.String(s.bucket),
	})
	if err != nil {
		return false, nil, err
	}

	if objectConfig.ObjectLockConfiguration != nil && aws.StringValue(objectConfig.ObjectLockConfiguration.ObjectLockEnabled) == "Enabled" {
		// Assumption: the retention period of the bucket will always be in days, not years.
		if rule := objectConfig.ObjectLockConfiguration.Rule; rule != nil && rule.DefaultRetention != nil {
			return true, rule.DefaultRetention.Days, nil
		}
		// Object lock is enabled, but no default retention rule is configured.
		return true, nil, nil
	}

	return false, nil, nil
}
