title: |-
  S3 Cache Push
summary: |
  Store your cache in an S3 bucket with custom keys.
description: |
  A step to store your cache in an S3 bucket using custom keys.
  This should be used with the s3-cache-pull step to retrieve the cache.
  If you want to cache multiple items, you'll need to run this step multiple times.

  *Bucket Access*

  For this step to work you'll need an AWS user with programmatic access to a bucket.
  The user should have permissions to list, get, and put objects in the bucket.
  You can set the credentials using Bitrise Secrets with the keys specified in the inputs,
  or set them directly in the inputs.
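# A hedged usage sketch, not part of the step definition itself: in a bitrise.yml
# workflow this step could be referenced roughly as below. The bucket name, cache key,
# and path are hypothetical, and the credentials are assumed to be stored as Bitrise
# Secrets named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
#
#   - git::https://github.com/alephao/bitrise-step-s3-cache-push.git@master:
#       inputs:
#       - cache_aws_access_key_id: $AWS_ACCESS_KEY_ID
#       - cache_aws_secret_access_key: $AWS_SECRET_ACCESS_KEY
#       - cache_aws_region: us-east-1
#       - cache_bucket_name: my-ci-cache-bucket
#       - cache_key: carthage-{{ branch }}-{{ checksum "Cartfile.resolved" }}
#       - cache_path: Carthage/Build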
website: https://github.com/alephao/bitrise-step-s3-cache-push
source_code_url: https://github.com/alephao/bitrise-step-s3-cache-push
support_url: https://github.com/alephao/bitrise-step-s3-cache-push/issues
host_os_tags:
- osx-10.10
- ubuntu-16.04
type_tags:
- utility
is_requires_admin_user: true
is_always_run: false
is_skippable: false
run_if: ""
toolkit:
  go:
    package_name: github.com/alephao/bitrise-step-s3-cache-push
inputs:
- cache_aws_access_key_id:
  opts:
    title: AWS_ACCESS_KEY_ID
    category: AWS Access
    is_expand: true
    is_required: true
    is_sensitive: true
    summary: The AWS_ACCESS_KEY_ID used to access the bucket.
    description: |
      The access key id that matches the secret access key.
      The credentials need to belong to a user that has at least the following permissions
      on the bucket specified below: `s3:ListObjects`, `s3:PutObject`, and `s3:GetObject`.
- cache_aws_secret_access_key:
  opts:
    title: AWS_SECRET_ACCESS_KEY
    summary: The AWS_SECRET_ACCESS_KEY used to access the bucket.
    description: |
      The secret access key that matches the access key id.
      The credentials need to belong to a user that has at least the following permissions
      on the bucket specified below: `s3:ListObjects`, `s3:PutObject`, and `s3:GetObject`.
    category: AWS Access
    is_expand: true
    is_required: true
    is_sensitive: true
- cache_aws_endpoint:
  opts:
    title: AWS Endpoint
    summary: Custom URL to override the default AWS endpoint.
    description: |
      Set a custom URL if your S3 bucket is not hosted on an AWS domain.
      For instance, if you have a MinIO server running at `https://your-domain.com`, set this input to that URL and the step will use it instead of the AWS endpoint.
    category: AWS Access
    is_expand: true
    is_required: false
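# Illustrative value only (hypothetical URL), showing how a self-hosted MinIO
# deployment might be pointed at:
#   cache_aws_endpoint: "https://minio.your-domain.com"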
- cache_aws_region:
  opts:
    title: AWS Region
    summary: The region of the S3 bucket
    category: AWS Bucket
    is_expand: true
    is_required: true
    value_options:
    - us-east-1
    - us-east-2
    - us-west-1
    - us-west-2
    - ca-central-1
    - eu-north-1
    - eu-west-3
    - eu-west-2
    - eu-west-1
    - eu-central-1
    - eu-south-1
    - ap-south-1
    - ap-northeast-1
    - ap-northeast-2
    - ap-northeast-3
    - ap-southeast-1
    - ap-southeast-2
    - ap-east-1
    - sa-east-1
    - cn-north-1
    - cn-northwest-1
    - us-gov-east-1
    - us-gov-west-1
    - us-gov-secret-1
    - us-gov-topsecret-1
    - me-south-1
    - af-south-1
- cache_bucket_name:
  opts:
    title: Bucket Name
    summary: The name of the S3 bucket where you want to store the cache.
    category: AWS Bucket
    is_expand: true
    is_required: true
- cache_key:
  opts:
    title: Cache key
    summary: The key that will be used as the S3 object key. It is used to retrieve the cache with s3-cache-pull.
    description: |
      The cache key can contain special values for convenience.
      You can use '{{ checksum path/to/file }}' to get the sha256 checksum of the file's contents.
      You can use '{{ branch }}' to get the name of the current branch.
      You can use '{{ stackrev }}' to get the machine's stack id.
      E.g.: key: {{ stackrev }}-carthage-{{ branch }}-{{ checksum "Cartfile.resolved" }}
    category: Cache
    is_expand: false
    is_required: true
- cache_path:
  opts:
    title: Cache path
    summary: Path to the file or directory to be cached, relative to the root of the git repo.
    description: |
      The entire folder will be compressed before being sent to the S3 bucket.
      For instance, if you cache `/path/to/my/folder`, only `folder` will be compressed.
      When retrieving the cache with s3-cache-pull, you will have to use `/path/to/my/` as the extraction path to restore the folder there.
    category: Cache
    is_expand: false
    is_required: true
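# A hedged illustration of the push/pull path relationship (hypothetical paths):
# pushing with `cache_path: Carthage/Build` archives only the `Build` folder, so the
# matching s3-cache-pull step would extract into `Carthage/` to restore `Carthage/Build`.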
- cache_archive_extension: "zip"
  opts:
    title: Archive Extension
    summary: The extension of the generated archive
    category: Archive
    is_expand: true
    is_required: true
    value_options:
    - zip
    - tar
    - tar.br
    - tar.gz
    - tar.bz2
    - tar.xz
    - tar.lz4
    - tar.sz
    - zst
    - tar.zst
    - bz2
    - gz
    - lz4
    - sz
    - xz