diff --git a/vendor/dario.cat/mergo/.deepsource.toml b/vendor/dario.cat/mergo/.deepsource.toml
new file mode 100644
index 000000000..a8bc979e0
--- /dev/null
+++ b/vendor/dario.cat/mergo/.deepsource.toml
@@ -0,0 +1,12 @@
+version = 1
+
+test_patterns = [
+ "*_test.go"
+]
+
+[[analyzers]]
+name = "go"
+enabled = true
+
+ [analyzers.meta]
+ import_path = "dario.cat/mergo"
\ No newline at end of file
diff --git a/vendor/dario.cat/mergo/.gitignore b/vendor/dario.cat/mergo/.gitignore
new file mode 100644
index 000000000..45ad0f1ae
--- /dev/null
+++ b/vendor/dario.cat/mergo/.gitignore
@@ -0,0 +1,36 @@
+#### joe made this: http://goel.io/joe
+
+#### go ####
+# Binaries for programs and plugins
+*.exe
+*.dll
+*.so
+*.dylib
+
+# Test binary, build with `go test -c`
+*.test
+
+# Output of the go coverage tool, specifically when used with LiteIDE
+*.out
+
+# Golang/Intellij
+.idea
+
+# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
+.glide/
+
+#### vim ####
+# Swap
+[._]*.s[a-v][a-z]
+[._]*.sw[a-p]
+[._]s[a-v][a-z]
+[._]sw[a-p]
+
+# Session
+Session.vim
+
+# Temporary
+.netrwhist
+*~
+# Auto-generated tag files
+tags
diff --git a/vendor/dario.cat/mergo/.travis.yml b/vendor/dario.cat/mergo/.travis.yml
new file mode 100644
index 000000000..d324c43ba
--- /dev/null
+++ b/vendor/dario.cat/mergo/.travis.yml
@@ -0,0 +1,12 @@
+language: go
+arch:
+ - amd64
+ - ppc64le
+install:
+ - go get -t
+ - go get golang.org/x/tools/cmd/cover
+ - go get github.com/mattn/goveralls
+script:
+ - go test -race -v ./...
+after_script:
+ - $HOME/gopath/bin/goveralls -service=travis-ci -repotoken $COVERALLS_TOKEN
diff --git a/vendor/dario.cat/mergo/CODE_OF_CONDUCT.md b/vendor/dario.cat/mergo/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..469b44907
--- /dev/null
+++ b/vendor/dario.cat/mergo/CODE_OF_CONDUCT.md
@@ -0,0 +1,46 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at i@dario.im. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
+
+[homepage]: http://contributor-covenant.org
+[version]: http://contributor-covenant.org/version/1/4/
diff --git a/vendor/dario.cat/mergo/CONTRIBUTING.md b/vendor/dario.cat/mergo/CONTRIBUTING.md
new file mode 100644
index 000000000..0a1ff9f94
--- /dev/null
+++ b/vendor/dario.cat/mergo/CONTRIBUTING.md
@@ -0,0 +1,112 @@
+
+# Contributing to mergo
+
+First off, thanks for taking the time to contribute! ❤️
+
+All types of contributions are encouraged and valued. See the [Table of Contents](#table-of-contents) for different ways to help and details about how this project handles them. Please make sure to read the relevant section before making your contribution. It will make it a lot easier for us maintainers and smooth out the experience for all involved. The community looks forward to your contributions. 🎉
+
+> And if you like the project, but just don't have time to contribute, that's fine. There are other easy ways to support the project and show your appreciation, which we would also be very happy about:
+> - Star the project
+> - Tweet about it
+> - Refer this project in your project's readme
+> - Mention the project at local meetups and tell your friends/colleagues
+
+
+## Table of Contents
+
+- [Code of Conduct](#code-of-conduct)
+- [I Have a Question](#i-have-a-question)
+- [I Want To Contribute](#i-want-to-contribute)
+- [Reporting Bugs](#reporting-bugs)
+- [Suggesting Enhancements](#suggesting-enhancements)
+
+## Code of Conduct
+
+This project and everyone participating in it is governed by the
+[mergo Code of Conduct](https://github.com/imdario/mergo/blob/master/CODE_OF_CONDUCT.md).
+By participating, you are expected to uphold this code. Please report unacceptable behavior
+to i@dario.im.
+
+
+## I Have a Question
+
+> If you want to ask a question, we assume that you have read the available [Documentation](https://pkg.go.dev/github.com/imdario/mergo).
+
+Before you ask a question, it is best to search for existing [Issues](https://github.com/imdario/mergo/issues) that might help you. In case you have found a suitable issue and still need clarification, you can write your question in this issue. It is also advisable to search the internet for answers first.
+
+If you then still feel the need to ask a question and need clarification, we recommend the following:
+
+- Open an [Issue](https://github.com/imdario/mergo/issues/new).
+- Provide as much context as you can about what you're running into.
+- Provide project and platform versions (Go version, OS, etc.), depending on what seems relevant.
+
+We will then take care of the issue as soon as possible.
+
+## I Want To Contribute
+
+> ### Legal Notice
+> When contributing to this project, you must agree that you have authored 100% of the content, that you have the necessary rights to the content and that the content you contribute may be provided under the project license.
+
+### Reporting Bugs
+
+
+#### Before Submitting a Bug Report
+
+A good bug report shouldn't leave others needing to chase you up for more information. Therefore, we ask you to investigate carefully, collect information and describe the issue in detail in your report. Please complete the following steps in advance to help us fix any potential bug as fast as possible.
+
+- Make sure that you are using the latest version.
+- Determine if your bug is really a bug and not an error on your side, e.g. using incompatible environment components/versions. (Make sure that you have read the [documentation](https://pkg.go.dev/github.com/imdario/mergo). If you are looking for support, you might want to check [this section](#i-have-a-question).)
+- To see if other users have experienced (and potentially already solved) the same issue you are having, check whether a bug report for it already exists in the [bug tracker](https://github.com/imdario/mergo/issues?q=label%3Abug).
+- Also make sure to search the internet (including Stack Overflow) to see if users outside of the GitHub community have discussed the issue.
+- Collect information about the bug:
+  - Stack trace (traceback)
+  - OS, platform and version (Windows, Linux, macOS, x86, ARM)
+  - Version of the interpreter, compiler, SDK, runtime environment, package manager, depending on what seems relevant
+  - Possibly your input and the output
+  - Can you reliably reproduce the issue? And can you also reproduce it with older versions?
+
+
+#### How Do I Submit a Good Bug Report?
+
+> You must never report security related issues, vulnerabilities or bugs including sensitive information to the issue tracker, or elsewhere in public. Instead, sensitive bugs must be reported through the [Tidelift security contact](https://tidelift.com/security) (see SECURITY.md).
+
+
+We use GitHub issues to track bugs and errors. If you run into an issue with the project:
+
+- Open an [Issue](https://github.com/imdario/mergo/issues/new). (Since we can't be sure at this point whether it is a bug or not, we ask you not to talk about a bug yet and not to label the issue.)
+- Explain the behavior you would expect and the actual behavior.
+- Please provide as much context as possible and describe the *reproduction steps* that someone else can follow to recreate the issue on their own. This usually includes your code. For good bug reports you should isolate the problem and create a reduced test case.
+- Provide the information you collected in the previous section.
+
+Once it's filed:
+
+- The project team will label the issue accordingly.
+- A team member will try to reproduce the issue with your provided steps. If there are no reproduction steps or no obvious way to reproduce the issue, the team will ask you for those steps and mark the issue as `needs-repro`. Bugs with the `needs-repro` tag will not be addressed until they are reproduced.
+- If the team is able to reproduce the issue, it will be marked `needs-fix`, as well as possibly other tags (such as `critical`), and the issue will be left to be implemented by someone.
+
+### Suggesting Enhancements
+
+This section guides you through submitting an enhancement suggestion for mergo, **including completely new features and minor improvements to existing functionality**. Following these guidelines will help maintainers and the community to understand your suggestion and find related suggestions.
+
+
+#### Before Submitting an Enhancement
+
+- Make sure that you are using the latest version.
+- Read the [documentation](https://pkg.go.dev/github.com/imdario/mergo) carefully and find out if the functionality is already covered, maybe by an individual configuration.
+- Perform a [search](https://github.com/imdario/mergo/issues) to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one.
+- Find out whether your idea fits with the scope and aims of the project. It's up to you to make a strong case to convince the project's developers of the merits of this feature. Keep in mind that we want features that will be useful to the majority of our users and not just a small subset. If you're just targeting a minority of users, consider writing an add-on/plugin library.
+
+
+#### How Do I Submit a Good Enhancement Suggestion?
+
+Enhancement suggestions are tracked as [GitHub issues](https://github.com/imdario/mergo/issues).
+
+- Use a **clear and descriptive title** for the issue to identify the suggestion.
+- Provide a **step-by-step description of the suggested enhancement** in as many details as possible.
+- **Describe the current behavior** and **explain which behavior you expected to see instead** and why. At this point you can also tell which alternatives do not work for you.
+- You may want to **include screenshots and animated GIFs** which help you demonstrate the steps or point out the part which the suggestion is related to. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux.
+- **Explain why this enhancement would be useful** to most mergo users. You may also want to point out the other projects that solved it better and which could serve as inspiration.
+
+
+## Attribution
+This guide is based on the **contributing-gen**. [Make your own](https://github.com/bttger/contributing-gen)!
diff --git a/vendor/dario.cat/mergo/LICENSE b/vendor/dario.cat/mergo/LICENSE
new file mode 100644
index 000000000..686680298
--- /dev/null
+++ b/vendor/dario.cat/mergo/LICENSE
@@ -0,0 +1,28 @@
+Copyright (c) 2013 Dario Castañé. All rights reserved.
+Copyright (c) 2012 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/dario.cat/mergo/README.md b/vendor/dario.cat/mergo/README.md
new file mode 100644
index 000000000..0b3c48889
--- /dev/null
+++ b/vendor/dario.cat/mergo/README.md
@@ -0,0 +1,258 @@
+# Mergo
+
+[![GitHub release][5]][6]
+[![GoCard][7]][8]
+[![Test status][1]][2]
+[![OpenSSF Scorecard][21]][22]
+[![OpenSSF Best Practices][19]][20]
+[![Coverage status][9]][10]
+[![Sourcegraph][11]][12]
+[![FOSSA status][13]][14]
+
+[![GoDoc][3]][4]
+[![Become my sponsor][15]][16]
+[![Tidelift][17]][18]
+
+[1]: https://github.com/imdario/mergo/workflows/tests/badge.svg?branch=master
+[2]: https://github.com/imdario/mergo/actions/workflows/tests.yml
+[3]: https://godoc.org/github.com/imdario/mergo?status.svg
+[4]: https://godoc.org/github.com/imdario/mergo
+[5]: https://img.shields.io/github/release/imdario/mergo.svg
+[6]: https://github.com/imdario/mergo/releases
+[7]: https://goreportcard.com/badge/imdario/mergo
+[8]: https://goreportcard.com/report/github.com/imdario/mergo
+[9]: https://coveralls.io/repos/github/imdario/mergo/badge.svg?branch=master
+[10]: https://coveralls.io/github/imdario/mergo?branch=master
+[11]: https://sourcegraph.com/github.com/imdario/mergo/-/badge.svg
+[12]: https://sourcegraph.com/github.com/imdario/mergo?badge
+[13]: https://app.fossa.io/api/projects/git%2Bgithub.com%2Fimdario%2Fmergo.svg?type=shield
+[14]: https://app.fossa.io/projects/git%2Bgithub.com%2Fimdario%2Fmergo?ref=badge_shield
+[15]: https://img.shields.io/github/sponsors/imdario
+[16]: https://github.com/sponsors/imdario
+[17]: https://tidelift.com/badges/package/go/github.com%2Fimdario%2Fmergo
+[18]: https://tidelift.com/subscription/pkg/go-github.com-imdario-mergo
+[19]: https://bestpractices.coreinfrastructure.org/projects/7177/badge
+[20]: https://bestpractices.coreinfrastructure.org/projects/7177
+[21]: https://api.securityscorecards.dev/projects/github.com/imdario/mergo/badge
+[22]: https://api.securityscorecards.dev/projects/github.com/imdario/mergo
+
+A helper to merge structs and maps in Golang. Useful for configuration default values, avoiding messy if-statements.
+
+Mergo merges same-type structs and maps by setting default values in zero-value fields. Mergo won't merge unexported (private) fields, but it recursively merges any exported ones. It also won't merge structs inside maps (because they are not addressable using Go reflection).
+
+Also a lovely [comune](http://en.wikipedia.org/wiki/Mergo) (municipality) in the Province of Ancona in the Italian region of Marche.
+
+## Status
+
+Mergo is stable and frozen, ready for production. Check a short list of the projects using it at large scale [here](https://github.com/imdario/mergo#mergo-in-the-wild).
+
+No new features are accepted. They will be considered for a future v2 that improves the implementation and fixes bugs for corner cases.
+
+### Important notes
+
+#### 1.0.0
+
+In [1.0.0](//github.com/imdario/mergo/releases/tag/1.0.0) Mergo moves to a vanity URL `dario.cat/mergo`. No more v1 versions will be released.
+
+If the vanity URL is causing issues in your project because a dependency pulls in Mergo (i.e. it isn't a direct dependency of your project), it is recommended to use [replace](https://github.com/golang/go/wiki/Modules#when-should-i-use-the-replace-directive) in your go.mod to pin the version to the last one with the old import URL:
+
+```
+replace github.com/imdario/mergo => github.com/imdario/mergo v0.3.16
+```
+
+#### 0.3.9
+
+Please keep in mind that a problematic PR broke [0.3.9](//github.com/imdario/mergo/releases/tag/0.3.9). I reverted it in [0.3.10](//github.com/imdario/mergo/releases/tag/0.3.10), and I consider it stable but not bug-free. Also, this version adds support for go modules.
+
+Keep in mind that in [0.3.2](//github.com/imdario/mergo/releases/tag/0.3.2), Mergo changed `Merge()` and `Map()` signatures to support [transformers](#transformers). I added an optional/variadic argument so that it won't break the existing code.
+
+If you were using Mergo before April 6th, 2015, please check that your project works as intended after updating your local copy with ```go get -u dario.cat/mergo```. I apologize for any issue caused by its previous behavior and any future bug that Mergo could cause in existing projects after the change (release 0.2.0).
+
+### Donations
+
+If Mergo is useful to you, consider buying me a coffee, a beer, or making a monthly donation to allow me to keep building great free software. :heart_eyes:
+
+
+
+
+### Mergo in the wild
+
+Mergo is used by [thousands](https://deps.dev/go/dario.cat%2Fmergo/v1.0.0/dependents) [of](https://deps.dev/go/github.com%2Fimdario%2Fmergo/v0.3.16/dependents) [projects](https://deps.dev/go/github.com%2Fimdario%2Fmergo/v0.3.12), including:
+
+* [containerd/containerd](https://github.com/containerd/containerd)
+* [datadog/datadog-agent](https://github.com/datadog/datadog-agent)
+* [docker/cli/](https://github.com/docker/cli/)
+* [goreleaser/goreleaser](https://github.com/goreleaser/goreleaser)
+* [go-micro/go-micro](https://github.com/go-micro/go-micro)
+* [grafana/loki](https://github.com/grafana/loki)
+* [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes)
+* [masterminds/sprig](https://github.com/Masterminds/sprig)
+* [moby/moby](https://github.com/moby/moby)
+* [slackhq/nebula](https://github.com/slackhq/nebula)
+* [volcano-sh/volcano](https://github.com/volcano-sh/volcano)
+
+## Install
+
+ go get dario.cat/mergo
+
+ // use in your .go code
+ import (
+ "dario.cat/mergo"
+ )
+
+## Usage
+
+You can only merge same-type structs (with exported fields initialized as the zero value of their type) and same-type maps. Mergo won't merge unexported (private) fields but will recursively merge any exported ones. It won't merge empty struct values, as [they are zero values](https://golang.org/ref/spec#The_zero_value) too. Also, maps will be merged recursively, except for structs inside maps (because they are not addressable using Go reflection).
+
+```go
+if err := mergo.Merge(&dst, src); err != nil {
+ // ...
+}
+```
+
+Also, you can merge overwriting values using the option `WithOverride`.
+
+```go
+if err := mergo.Merge(&dst, src, mergo.WithOverride); err != nil {
+ // ...
+}
+```
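+
+For instance, a minimal sketch of the difference (the `Config` type here is illustrative, not part of Mergo):
+
+```go
+package main
+
+import (
+    "fmt"
+
+    "dario.cat/mergo"
+)
+
+type Config struct {
+    Host string
+    Port int
+}
+
+func main() {
+    dst := Config{Host: "localhost"}
+    src := Config{Host: "example.com", Port: 8080}
+    // With WithOverride, the non-empty dst.Host is replaced by src.Host;
+    // the zero-value dst.Port would be filled in either way.
+    if err := mergo.Merge(&dst, src, mergo.WithOverride); err != nil {
+        panic(err)
+    }
+    fmt.Println(dst) // {example.com 8080}
+}
+```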
+
+If you need to override pointers, so that the source pointer itself is assigned to the destination pointer, you must use `WithoutDereference`:
+
+```go
+package main
+
+import (
+ "fmt"
+
+ "dario.cat/mergo"
+)
+
+type Foo struct {
+ A *string
+ B int64
+}
+
+func main() {
+ first := "first"
+ second := "second"
+ src := Foo{
+ A: &first,
+ B: 2,
+ }
+
+ dest := Foo{
+ A: &second,
+ B: 1,
+ }
+
+ mergo.Merge(&dest, src, mergo.WithOverride, mergo.WithoutDereference)
+ fmt.Println(*dest.A) // prints "first": the source pointer was assigned to dest.A
+}
+```
+
+Additionally, you can map a `map[string]interface{}` to a struct (and vice versa, from struct to map), following the same restrictions as in `Merge()`. Keys are capitalized to find each corresponding exported field.
+
+```go
+if err := mergo.Map(&dst, srcMap); err != nil {
+ // ...
+}
+```
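+
+For instance, a minimal sketch of mapping a `map[string]interface{}` into a struct (the `Foo` type here is illustrative):
+
+```go
+package main
+
+import (
+    "fmt"
+
+    "dario.cat/mergo"
+)
+
+type Foo struct {
+    A string
+    B int64
+}
+
+func main() {
+    dst := Foo{}
+    // Keys are capitalized to find the exported fields A and B.
+    // Note the int64(2): the map value's dynamic type must match the field type.
+    if err := mergo.Map(&dst, map[string]interface{}{"a": "one", "b": int64(2)}); err != nil {
+        panic(err)
+    }
+    fmt.Println(dst) // {one 2}
+}
+```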
+
+Warning: if you map a struct to a map, it won't be done recursively. Don't expect Mergo to map struct members of your struct as `map[string]interface{}`; they will just be assigned as values.
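+
+A minimal sketch of that caveat (the `Inner`/`Outer` types are illustrative): the nested struct lands in the map as a struct value, not as a nested map.
+
+```go
+package main
+
+import (
+    "fmt"
+
+    "dario.cat/mergo"
+)
+
+type Inner struct{ X int }
+
+type Outer struct {
+    Name  string
+    Inner Inner
+}
+
+func main() {
+    dst := map[string]interface{}{}
+    if err := mergo.Map(&dst, Outer{Name: "demo", Inner: Inner{X: 1}}); err != nil {
+        panic(err)
+    }
+    // The nested struct is assigned as a value, not converted to a map.
+    fmt.Printf("%T\n", dst["inner"]) // main.Inner
+}
+```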
+
+Here is a complete example of a basic merge:
+
+```go
+package main
+
+import (
+ "fmt"
+ "dario.cat/mergo"
+)
+
+type Foo struct {
+ A string
+ B int64
+}
+
+func main() {
+ src := Foo{
+ A: "one",
+ B: 2,
+ }
+ dest := Foo{
+ A: "two",
+ }
+ mergo.Merge(&dest, src)
+ fmt.Println(dest)
+ // Will print
+ // {two 2}
+}
+```
+
+Note: if tests are failing due to a missing package, please execute:
+
+ go get gopkg.in/yaml.v3
+
+### Transformers
+
+Transformers allow merging specific types differently from the default behavior. In other words, you can customize how some types are merged. For example, `time.Time` is a struct; it doesn't have a detectable zero value, but `IsZero` can return true because its fields hold zero values. How can we merge a non-zero `time.Time`?
+
+```go
+package main
+
+import (
+ "fmt"
+ "dario.cat/mergo"
+ "reflect"
+ "time"
+)
+
+type timeTransformer struct {
+}
+
+func (t timeTransformer) Transformer(typ reflect.Type) func(dst, src reflect.Value) error {
+ if typ == reflect.TypeOf(time.Time{}) {
+ return func(dst, src reflect.Value) error {
+ if dst.CanSet() {
+ isZero := dst.MethodByName("IsZero")
+ result := isZero.Call([]reflect.Value{})
+ if result[0].Bool() {
+ dst.Set(src)
+ }
+ }
+ return nil
+ }
+ }
+ return nil
+}
+
+type Snapshot struct {
+ Time time.Time
+ // ...
+}
+
+func main() {
+ src := Snapshot{time.Now()}
+ dest := Snapshot{}
+ mergo.Merge(&dest, src, mergo.WithTransformers(timeTransformer{}))
+ fmt.Println(dest)
+ // Will print
+ // { 2018-01-12 01:15:00 +0000 UTC m=+0.000000001 }
+}
+```
+
+## Contact me
+
+If I can help you, you have an idea or you are using Mergo in your projects, don't hesitate to drop me a line (or a pull request): [@im_dario](https://twitter.com/im_dario)
+
+## About
+
+Written by [Dario Castañé](http://dario.im).
+
+## License
+
+[BSD 3-Clause](http://opensource.org/licenses/BSD-3-Clause) license, as [Go language](http://golang.org/LICENSE).
+
+[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fimdario%2Fmergo.svg?type=large)](https://app.fossa.io/projects/git%2Bgithub.com%2Fimdario%2Fmergo?ref=badge_large)
diff --git a/vendor/dario.cat/mergo/SECURITY.md b/vendor/dario.cat/mergo/SECURITY.md
new file mode 100644
index 000000000..a5de61f77
--- /dev/null
+++ b/vendor/dario.cat/mergo/SECURITY.md
@@ -0,0 +1,14 @@
+# Security Policy
+
+## Supported Versions
+
+| Version | Supported |
+| ------- | ------------------ |
+| 0.3.x | :white_check_mark: |
+| < 0.3 | :x: |
+
+## Security contact information
+
+To report a security vulnerability, please use the
+[Tidelift security contact](https://tidelift.com/security).
+Tidelift will coordinate the fix and disclosure.
diff --git a/vendor/dario.cat/mergo/doc.go b/vendor/dario.cat/mergo/doc.go
new file mode 100644
index 000000000..7d96ec054
--- /dev/null
+++ b/vendor/dario.cat/mergo/doc.go
@@ -0,0 +1,148 @@
+// Copyright 2013 Dario Castañé. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+/*
+A helper to merge structs and maps in Golang. Useful for configuration default values, avoiding messy if-statements.
+
+Mergo merges same-type structs and maps by setting default values in zero-value fields. Mergo won't merge unexported (private) fields, but it recursively merges any exported ones. It also won't merge structs inside maps (because they are not addressable using Go reflection).
+
+# Status
+
+It is ready for production use. It is used in several projects by Docker, Google, The Linux Foundation, VMware, Shopify, etc.
+
+# Important notes
+
+1.0.0
+
+In 1.0.0 Mergo moves to a vanity URL `dario.cat/mergo`.
+
+0.3.9
+
+Please keep in mind that a problematic PR broke 0.3.9. We reverted it in 0.3.10. We consider 0.3.10 stable but not bug-free. Also, this version adds support for go modules.
+
+Keep in mind that in 0.3.2, Mergo changed Merge() and Map() signatures to support transformers. We added an optional/variadic argument so that it won't break the existing code.
+
+If you were using Mergo before April 6th, 2015, please check that your project works as intended after updating your local copy with go get -u dario.cat/mergo. I apologize for any issue caused by its previous behavior and any future bug that Mergo could cause in existing projects after the change (release 0.2.0).
+
+# Install
+
+Do your usual installation procedure:
+
+ go get dario.cat/mergo
+
+ // use in your .go code
+ import (
+ "dario.cat/mergo"
+ )
+
+# Usage
+
+You can only merge same-type structs (with exported fields initialized as the zero value of their type) and same-type maps. Mergo won't merge unexported (private) fields but will recursively merge any exported ones. It won't merge empty struct values, as they are zero values too. Also, maps will be merged recursively, except for structs inside maps (because they are not addressable using Go reflection).
+
+ if err := mergo.Merge(&dst, src); err != nil {
+ // ...
+ }
+
+Also, you can merge overwriting values using the option WithOverride.
+
+ if err := mergo.Merge(&dst, src, mergo.WithOverride); err != nil {
+ // ...
+ }
+
+Additionally, you can map a map[string]interface{} to a struct (and vice versa, from struct to map), following the same restrictions as in Merge(). Keys are capitalized to find each corresponding exported field.
+
+ if err := mergo.Map(&dst, srcMap); err != nil {
+ // ...
+ }
+
+Warning: if you map a struct to a map, it won't be done recursively. Don't expect Mergo to map struct members of your struct as map[string]interface{}; they will just be assigned as values.
+
+Here is a nice example:
+
+ package main
+
+ import (
+ "fmt"
+ "dario.cat/mergo"
+ )
+
+ type Foo struct {
+ A string
+ B int64
+ }
+
+ func main() {
+ src := Foo{
+ A: "one",
+ B: 2,
+ }
+ dest := Foo{
+ A: "two",
+ }
+ mergo.Merge(&dest, src)
+ fmt.Println(dest)
+ // Will print
+ // {two 2}
+ }
+
+# Transformers
+
+Transformers allow merging specific types differently from the default behavior. In other words, you can customize how some types are merged. For example, time.Time is a struct; it doesn't have a detectable zero value, but IsZero can return true because its fields hold zero values. How can we merge a non-zero time.Time?
+
+ package main
+
+ import (
+ "fmt"
+ "dario.cat/mergo"
+ "reflect"
+ "time"
+ )
+
+ type timeTransformer struct {
+ }
+
+ func (t timeTransformer) Transformer(typ reflect.Type) func(dst, src reflect.Value) error {
+ if typ == reflect.TypeOf(time.Time{}) {
+ return func(dst, src reflect.Value) error {
+ if dst.CanSet() {
+ isZero := dst.MethodByName("IsZero")
+ result := isZero.Call([]reflect.Value{})
+ if result[0].Bool() {
+ dst.Set(src)
+ }
+ }
+ return nil
+ }
+ }
+ return nil
+ }
+
+ type Snapshot struct {
+ Time time.Time
+ // ...
+ }
+
+ func main() {
+ src := Snapshot{time.Now()}
+ dest := Snapshot{}
+ mergo.Merge(&dest, src, mergo.WithTransformers(timeTransformer{}))
+ fmt.Println(dest)
+ // Will print
+ // { 2018-01-12 01:15:00 +0000 UTC m=+0.000000001 }
+ }
+
+# Contact me
+
+If I can help you, you have an idea or you are using Mergo in your projects, don't hesitate to drop me a line (or a pull request): https://twitter.com/im_dario
+
+# About
+
+Written by Dario Castañé: https://da.rio.hn
+
+# License
+
+BSD 3-Clause license, as Go language.
+*/
+package mergo
diff --git a/vendor/dario.cat/mergo/map.go b/vendor/dario.cat/mergo/map.go
new file mode 100644
index 000000000..759b4f74f
--- /dev/null
+++ b/vendor/dario.cat/mergo/map.go
@@ -0,0 +1,178 @@
+// Copyright 2014 Dario Castañé. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Based on src/pkg/reflect/deepequal.go from official
+// golang's stdlib.
+
+package mergo
+
+import (
+ "fmt"
+ "reflect"
+ "unicode"
+ "unicode/utf8"
+)
+
+func changeInitialCase(s string, mapper func(rune) rune) string {
+ if s == "" {
+ return s
+ }
+ r, n := utf8.DecodeRuneInString(s)
+ return string(mapper(r)) + s[n:]
+}
+
+func isExported(field reflect.StructField) bool {
+ r, _ := utf8.DecodeRuneInString(field.Name)
+ return r >= 'A' && r <= 'Z'
+}
+
+// Traverses both values recursively, assigning src's field values to dst.
+// The map argument tracks comparisons that have already been seen, which allows
+// short circuiting on recursive types.
+func deepMap(dst, src reflect.Value, visited map[uintptr]*visit, depth int, config *Config) (err error) {
+ overwrite := config.Overwrite
+ if dst.CanAddr() {
+ addr := dst.UnsafeAddr()
+ h := 17 * addr
+ seen := visited[h]
+ typ := dst.Type()
+ for p := seen; p != nil; p = p.next {
+ if p.ptr == addr && p.typ == typ {
+ return nil
+ }
+ }
+ // Remember, remember...
+ visited[h] = &visit{typ, seen, addr}
+ }
+ zeroValue := reflect.Value{}
+ switch dst.Kind() {
+ case reflect.Map:
+ dstMap := dst.Interface().(map[string]interface{})
+ for i, n := 0, src.NumField(); i < n; i++ {
+ srcType := src.Type()
+ field := srcType.Field(i)
+ if !isExported(field) {
+ continue
+ }
+ fieldName := field.Name
+ fieldName = changeInitialCase(fieldName, unicode.ToLower)
+ if _, ok := dstMap[fieldName]; !ok || (!isEmptyValue(reflect.ValueOf(src.Field(i).Interface()), !config.ShouldNotDereference) && overwrite) || config.overwriteWithEmptyValue {
+ dstMap[fieldName] = src.Field(i).Interface()
+ }
+ }
+ case reflect.Ptr:
+ if dst.IsNil() {
+ v := reflect.New(dst.Type().Elem())
+ dst.Set(v)
+ }
+ dst = dst.Elem()
+ fallthrough
+ case reflect.Struct:
+ srcMap := src.Interface().(map[string]interface{})
+ for key := range srcMap {
+ config.overwriteWithEmptyValue = true
+ srcValue := srcMap[key]
+ fieldName := changeInitialCase(key, unicode.ToUpper)
+ dstElement := dst.FieldByName(fieldName)
+ if dstElement == zeroValue {
+ // We discard it because the field doesn't exist.
+ continue
+ }
+ srcElement := reflect.ValueOf(srcValue)
+ dstKind := dstElement.Kind()
+ srcKind := srcElement.Kind()
+ if srcKind == reflect.Ptr && dstKind != reflect.Ptr {
+ srcElement = srcElement.Elem()
+ srcKind = reflect.TypeOf(srcElement.Interface()).Kind()
+ } else if dstKind == reflect.Ptr {
+ // Can this work? I guess it can't.
+ if srcKind != reflect.Ptr && srcElement.CanAddr() {
+ srcPtr := srcElement.Addr()
+ srcElement = reflect.ValueOf(srcPtr)
+ srcKind = reflect.Ptr
+ }
+ }
+
+ if !srcElement.IsValid() {
+ continue
+ }
+ if srcKind == dstKind {
+ if err = deepMerge(dstElement, srcElement, visited, depth+1, config); err != nil {
+ return
+ }
+ } else if dstKind == reflect.Interface && dstElement.Kind() == reflect.Interface {
+ if err = deepMerge(dstElement, srcElement, visited, depth+1, config); err != nil {
+ return
+ }
+ } else if srcKind == reflect.Map {
+ if err = deepMap(dstElement, srcElement, visited, depth+1, config); err != nil {
+ return
+ }
+ } else {
+ return fmt.Errorf("type mismatch on %s field: found %v, expected %v", fieldName, srcKind, dstKind)
+ }
+ }
+ }
+ return
+}
+
+// Map sets fields' values in dst from src.
+// src can be a map with string keys or a struct. dst must be the opposite:
+// if src is a map, dst must be a valid pointer to struct. If src is a struct,
+// dst must be map[string]interface{}.
+// It won't merge unexported (private) fields and will do recursively
+// any exported field.
+// If dst is a map, keys will be src fields' names in lower camel case.
+// A key in src that doesn't match a field in dst will be skipped. This
+// doesn't apply if dst is a map.
+// This is a separate method from Merge because it is cleaner and it keeps sane
+// semantics: merging equal types, mapping different (restricted) types.
+func Map(dst, src interface{}, opts ...func(*Config)) error {
+ return _map(dst, src, opts...)
+}
+
+// MapWithOverwrite will do the same as Map except that non-empty dst attributes will be overridden by
+// non-empty src attribute values.
+// Deprecated: Use Map(…) with WithOverride
+func MapWithOverwrite(dst, src interface{}, opts ...func(*Config)) error {
+ return _map(dst, src, append(opts, WithOverride)...)
+}
+
+func _map(dst, src interface{}, opts ...func(*Config)) error {
+ if dst != nil && reflect.ValueOf(dst).Kind() != reflect.Ptr {
+ return ErrNonPointerArgument
+ }
+ var (
+ vDst, vSrc reflect.Value
+ err error
+ )
+ config := &Config{}
+
+ for _, opt := range opts {
+ opt(config)
+ }
+
+ if vDst, vSrc, err = resolveValues(dst, src); err != nil {
+ return err
+ }
+ // To be friction-less, we redirect equal-type arguments
+ // to deepMerge. Only because arguments can be anything.
+ if vSrc.Kind() == vDst.Kind() {
+ return deepMerge(vDst, vSrc, make(map[uintptr]*visit), 0, config)
+ }
+ switch vSrc.Kind() {
+ case reflect.Struct:
+ if vDst.Kind() != reflect.Map {
+ return ErrExpectedMapAsDestination
+ }
+ case reflect.Map:
+ if vDst.Kind() != reflect.Struct {
+ return ErrExpectedStructAsDestination
+ }
+ default:
+ return ErrNotSupported
+ }
+ return deepMap(vDst, vSrc, make(map[uintptr]*visit), 0, config)
+}
diff --git a/vendor/dario.cat/mergo/merge.go b/vendor/dario.cat/mergo/merge.go
new file mode 100644
index 000000000..fd47c95b2
--- /dev/null
+++ b/vendor/dario.cat/mergo/merge.go
@@ -0,0 +1,409 @@
+// Copyright 2013 Dario Castañé. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Based on src/pkg/reflect/deepequal.go from official
+// golang's stdlib.
+
+package mergo
+
+import (
+ "fmt"
+ "reflect"
+)
+
+func hasMergeableFields(dst reflect.Value) (exported bool) {
+ for i, n := 0, dst.NumField(); i < n; i++ {
+ field := dst.Type().Field(i)
+ if field.Anonymous && dst.Field(i).Kind() == reflect.Struct {
+ exported = exported || hasMergeableFields(dst.Field(i))
+ } else if isExportedComponent(&field) {
+ exported = exported || len(field.PkgPath) == 0
+ }
+ }
+ return
+}
+
+func isExportedComponent(field *reflect.StructField) bool {
+ pkgPath := field.PkgPath
+ if len(pkgPath) > 0 {
+ return false
+ }
+ c := field.Name[0]
+ if 'a' <= c && c <= 'z' || c == '_' {
+ return false
+ }
+ return true
+}
+
+type Config struct {
+ Transformers Transformers
+ Overwrite bool
+ ShouldNotDereference bool
+ AppendSlice bool
+ TypeCheck bool
+ overwriteWithEmptyValue bool
+ overwriteSliceWithEmptyValue bool
+ sliceDeepCopy bool
+ debug bool
+}
+
+type Transformers interface {
+ Transformer(reflect.Type) func(dst, src reflect.Value) error
+}
+
+// Traverses both values recursively, assigning src's field values to dst.
+// The map argument tracks comparisons that have already been seen, which allows
+// short circuiting on recursive types.
+func deepMerge(dst, src reflect.Value, visited map[uintptr]*visit, depth int, config *Config) (err error) {
+ overwrite := config.Overwrite
+ typeCheck := config.TypeCheck
+ overwriteWithEmptySrc := config.overwriteWithEmptyValue
+ overwriteSliceWithEmptySrc := config.overwriteSliceWithEmptyValue
+ sliceDeepCopy := config.sliceDeepCopy
+
+ if !src.IsValid() {
+ return
+ }
+ if dst.CanAddr() {
+ addr := dst.UnsafeAddr()
+ h := 17 * addr
+ seen := visited[h]
+ typ := dst.Type()
+ for p := seen; p != nil; p = p.next {
+ if p.ptr == addr && p.typ == typ {
+ return nil
+ }
+ }
+ // Remember, remember...
+ visited[h] = &visit{typ, seen, addr}
+ }
+
+ if config.Transformers != nil && !isReflectNil(dst) && dst.IsValid() {
+ if fn := config.Transformers.Transformer(dst.Type()); fn != nil {
+ err = fn(dst, src)
+ return
+ }
+ }
+
+ switch dst.Kind() {
+ case reflect.Struct:
+ if hasMergeableFields(dst) {
+ for i, n := 0, dst.NumField(); i < n; i++ {
+ if err = deepMerge(dst.Field(i), src.Field(i), visited, depth+1, config); err != nil {
+ return
+ }
+ }
+ } else {
+ if dst.CanSet() && (isReflectNil(dst) || overwrite) && (!isEmptyValue(src, !config.ShouldNotDereference) || overwriteWithEmptySrc) {
+ dst.Set(src)
+ }
+ }
+ case reflect.Map:
+ if dst.IsNil() && !src.IsNil() {
+ if dst.CanSet() {
+ dst.Set(reflect.MakeMap(dst.Type()))
+ } else {
+ dst = src
+ return
+ }
+ }
+
+ if src.Kind() != reflect.Map {
+ if overwrite && dst.CanSet() {
+ dst.Set(src)
+ }
+ return
+ }
+
+ for _, key := range src.MapKeys() {
+ srcElement := src.MapIndex(key)
+ if !srcElement.IsValid() {
+ continue
+ }
+ dstElement := dst.MapIndex(key)
+ switch srcElement.Kind() {
+ case reflect.Chan, reflect.Func, reflect.Map, reflect.Interface, reflect.Slice:
+ if srcElement.IsNil() {
+ if overwrite {
+ dst.SetMapIndex(key, srcElement)
+ }
+ continue
+ }
+ fallthrough
+ default:
+ if !srcElement.CanInterface() {
+ continue
+ }
+ switch reflect.TypeOf(srcElement.Interface()).Kind() {
+ case reflect.Struct:
+ fallthrough
+ case reflect.Ptr:
+ fallthrough
+ case reflect.Map:
+ srcMapElm := srcElement
+ dstMapElm := dstElement
+ if srcMapElm.CanInterface() {
+ srcMapElm = reflect.ValueOf(srcMapElm.Interface())
+ if dstMapElm.IsValid() {
+ dstMapElm = reflect.ValueOf(dstMapElm.Interface())
+ }
+ }
+ if err = deepMerge(dstMapElm, srcMapElm, visited, depth+1, config); err != nil {
+ return
+ }
+ case reflect.Slice:
+ srcSlice := reflect.ValueOf(srcElement.Interface())
+
+ var dstSlice reflect.Value
+ if !dstElement.IsValid() || dstElement.IsNil() {
+ dstSlice = reflect.MakeSlice(srcSlice.Type(), 0, srcSlice.Len())
+ } else {
+ dstSlice = reflect.ValueOf(dstElement.Interface())
+ }
+
+ if (!isEmptyValue(src, !config.ShouldNotDereference) || overwriteWithEmptySrc || overwriteSliceWithEmptySrc) && (overwrite || isEmptyValue(dst, !config.ShouldNotDereference)) && !config.AppendSlice && !sliceDeepCopy {
+ if typeCheck && srcSlice.Type() != dstSlice.Type() {
+ return fmt.Errorf("cannot override two slices with different type (%s, %s)", srcSlice.Type(), dstSlice.Type())
+ }
+ dstSlice = srcSlice
+ } else if config.AppendSlice {
+ if srcSlice.Type() != dstSlice.Type() {
+ return fmt.Errorf("cannot append two slices with different type (%s, %s)", srcSlice.Type(), dstSlice.Type())
+ }
+ dstSlice = reflect.AppendSlice(dstSlice, srcSlice)
+ } else if sliceDeepCopy {
+ i := 0
+ for ; i < srcSlice.Len() && i < dstSlice.Len(); i++ {
+ srcElement := srcSlice.Index(i)
+ dstElement := dstSlice.Index(i)
+
+ if srcElement.CanInterface() {
+ srcElement = reflect.ValueOf(srcElement.Interface())
+ }
+ if dstElement.CanInterface() {
+ dstElement = reflect.ValueOf(dstElement.Interface())
+ }
+
+ if err = deepMerge(dstElement, srcElement, visited, depth+1, config); err != nil {
+ return
+ }
+ }
+
+ }
+ dst.SetMapIndex(key, dstSlice)
+ }
+ }
+
+ if dstElement.IsValid() && !isEmptyValue(dstElement, !config.ShouldNotDereference) {
+ if reflect.TypeOf(srcElement.Interface()).Kind() == reflect.Slice {
+ continue
+ }
+ if reflect.TypeOf(srcElement.Interface()).Kind() == reflect.Map && reflect.TypeOf(dstElement.Interface()).Kind() == reflect.Map {
+ continue
+ }
+ }
+
+ if srcElement.IsValid() && ((srcElement.Kind() != reflect.Ptr && overwrite) || !dstElement.IsValid() || isEmptyValue(dstElement, !config.ShouldNotDereference)) {
+ if dst.IsNil() {
+ dst.Set(reflect.MakeMap(dst.Type()))
+ }
+ dst.SetMapIndex(key, srcElement)
+ }
+ }
+
+ // Ensure that all keys in dst are deleted if they are not in src.
+ if overwriteWithEmptySrc {
+ for _, key := range dst.MapKeys() {
+ srcElement := src.MapIndex(key)
+ if !srcElement.IsValid() {
+ dst.SetMapIndex(key, reflect.Value{})
+ }
+ }
+ }
+ case reflect.Slice:
+ if !dst.CanSet() {
+ break
+ }
+ if (!isEmptyValue(src, !config.ShouldNotDereference) || overwriteWithEmptySrc || overwriteSliceWithEmptySrc) && (overwrite || isEmptyValue(dst, !config.ShouldNotDereference)) && !config.AppendSlice && !sliceDeepCopy {
+ dst.Set(src)
+ } else if config.AppendSlice {
+ if src.Type() != dst.Type() {
+ return fmt.Errorf("cannot append two slices with different type (%s, %s)", src.Type(), dst.Type())
+ }
+ dst.Set(reflect.AppendSlice(dst, src))
+ } else if sliceDeepCopy {
+ for i := 0; i < src.Len() && i < dst.Len(); i++ {
+ srcElement := src.Index(i)
+ dstElement := dst.Index(i)
+ if srcElement.CanInterface() {
+ srcElement = reflect.ValueOf(srcElement.Interface())
+ }
+ if dstElement.CanInterface() {
+ dstElement = reflect.ValueOf(dstElement.Interface())
+ }
+
+ if err = deepMerge(dstElement, srcElement, visited, depth+1, config); err != nil {
+ return
+ }
+ }
+ }
+ case reflect.Ptr:
+ fallthrough
+ case reflect.Interface:
+ if isReflectNil(src) {
+ if overwriteWithEmptySrc && dst.CanSet() && src.Type().AssignableTo(dst.Type()) {
+ dst.Set(src)
+ }
+ break
+ }
+
+ if src.Kind() != reflect.Interface {
+ if dst.IsNil() || (src.Kind() != reflect.Ptr && overwrite) {
+ if dst.CanSet() && (overwrite || isEmptyValue(dst, !config.ShouldNotDereference)) {
+ dst.Set(src)
+ }
+ } else if src.Kind() == reflect.Ptr {
+ if !config.ShouldNotDereference {
+ if err = deepMerge(dst.Elem(), src.Elem(), visited, depth+1, config); err != nil {
+ return
+ }
+ } else if src.Elem().Kind() != reflect.Struct {
+ if overwriteWithEmptySrc || (overwrite && !src.IsNil()) || dst.IsNil() {
+ dst.Set(src)
+ }
+ }
+ } else if dst.Elem().Type() == src.Type() {
+ if err = deepMerge(dst.Elem(), src, visited, depth+1, config); err != nil {
+ return
+ }
+ } else {
+ return ErrDifferentArgumentsTypes
+ }
+ break
+ }
+
+ if dst.IsNil() || overwrite {
+ if dst.CanSet() && (overwrite || isEmptyValue(dst, !config.ShouldNotDereference)) {
+ dst.Set(src)
+ }
+ break
+ }
+
+ if dst.Elem().Kind() == src.Elem().Kind() {
+ if err = deepMerge(dst.Elem(), src.Elem(), visited, depth+1, config); err != nil {
+ return
+ }
+ break
+ }
+ default:
+ mustSet := (isEmptyValue(dst, !config.ShouldNotDereference) || overwrite) && (!isEmptyValue(src, !config.ShouldNotDereference) || overwriteWithEmptySrc)
+ if mustSet {
+ if dst.CanSet() {
+ dst.Set(src)
+ } else {
+ dst = src
+ }
+ }
+ }
+
+ return
+}
+
+// Merge will fill any empty value-type attributes on the dst struct using corresponding
+// src attributes if they themselves are not empty. dst and src must be valid same-type structs
+// and dst must be a pointer to struct.
+// It won't merge unexported (private) fields and will recursively merge any exported field.
+func Merge(dst, src interface{}, opts ...func(*Config)) error {
+ return merge(dst, src, opts...)
+}
+
+// MergeWithOverwrite will do the same as Merge except that non-empty dst attributes will be overridden by
+// non-empty src attribute values.
+// Deprecated: use Merge(…) with WithOverride
+func MergeWithOverwrite(dst, src interface{}, opts ...func(*Config)) error {
+ return merge(dst, src, append(opts, WithOverride)...)
+}
+
+// WithTransformers adds transformers to merge, allowing you to customize the merging of some types.
+func WithTransformers(transformers Transformers) func(*Config) {
+ return func(config *Config) {
+ config.Transformers = transformers
+ }
+}
+
+// WithOverride will make merge override non-empty dst attributes with non-empty src attribute values.
+func WithOverride(config *Config) {
+ config.Overwrite = true
+}
+
+// WithOverwriteWithEmptyValue will make merge override non-empty dst attributes with empty src attribute values.
+func WithOverwriteWithEmptyValue(config *Config) {
+ config.Overwrite = true
+ config.overwriteWithEmptyValue = true
+}
+
+// WithOverrideEmptySlice will make merge override empty dst slice with empty src slice.
+func WithOverrideEmptySlice(config *Config) {
+ config.overwriteSliceWithEmptyValue = true
+}
+
+// WithoutDereference prevents dereferencing pointers when evaluating whether they are empty
+// (i.e. a non-nil pointer is never considered empty).
+func WithoutDereference(config *Config) {
+ config.ShouldNotDereference = true
+}
+
+// WithAppendSlice will make merge append slices instead of overwriting them.
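+//
+// For example, a hedged sketch (Tags is an illustrative type, not part of this package):
+//
+//	type Tags struct{ Values []string }
+//	dst := Tags{Values: []string{"a"}}
+//	src := Tags{Values: []string{"b"}}
+//	_ = Merge(&dst, src, WithAppendSlice) // dst.Values == []string{"a", "b"}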
+func WithAppendSlice(config *Config) {
+ config.AppendSlice = true
+}
+
+// WithTypeCheck will make merge check types while overwriting them (must be used with WithOverride).
+func WithTypeCheck(config *Config) {
+ config.TypeCheck = true
+}
+
+// WithSliceDeepCopy will merge slice elements one by one, with the Overwrite flag set.
+func WithSliceDeepCopy(config *Config) {
+ config.sliceDeepCopy = true
+ config.Overwrite = true
+}
+
+func merge(dst, src interface{}, opts ...func(*Config)) error {
+ if dst != nil && reflect.ValueOf(dst).Kind() != reflect.Ptr {
+ return ErrNonPointerArgument
+ }
+ var (
+ vDst, vSrc reflect.Value
+ err error
+ )
+
+ config := &Config{}
+
+ for _, opt := range opts {
+ opt(config)
+ }
+
+ if vDst, vSrc, err = resolveValues(dst, src); err != nil {
+ return err
+ }
+ if vDst.Type() != vSrc.Type() {
+ return ErrDifferentArgumentsTypes
+ }
+ return deepMerge(vDst, vSrc, make(map[uintptr]*visit), 0, config)
+}
+
+// isReflectNil reports whether the provided reflect value is nil.
+func isReflectNil(v reflect.Value) bool {
+ k := v.Kind()
+ switch k {
+ case reflect.Interface, reflect.Slice, reflect.Chan, reflect.Func, reflect.Map, reflect.Ptr:
+ // Both interface and slice are nil if first word is 0.
+ // Both are always bigger than a word; assume flagIndir.
+ return v.IsNil()
+ default:
+ return false
+ }
+}
diff --git a/vendor/dario.cat/mergo/mergo.go b/vendor/dario.cat/mergo/mergo.go
new file mode 100644
index 000000000..0a721e2d8
--- /dev/null
+++ b/vendor/dario.cat/mergo/mergo.go
@@ -0,0 +1,81 @@
+// Copyright 2013 Dario Castañé. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Based on src/pkg/reflect/deepequal.go from official
+// golang's stdlib.
+
+package mergo
+
+import (
+ "errors"
+ "reflect"
+)
+
+// Errors reported by Mergo when it finds invalid arguments.
+var (
+ ErrNilArguments = errors.New("src and dst must not be nil")
+ ErrDifferentArgumentsTypes = errors.New("src and dst must be of same type")
+ ErrNotSupported = errors.New("only structs, maps, and slices are supported")
+ ErrExpectedMapAsDestination = errors.New("dst was expected to be a map")
+ ErrExpectedStructAsDestination = errors.New("dst was expected to be a struct")
+ ErrNonPointerArgument = errors.New("dst must be a pointer")
+)
+
+// During deepMerge, we must keep track of checks that are
+// in progress. The comparison algorithm assumes that all
+// checks in progress are true when it reencounters them.
+// Visited entries are stored in a map indexed by 17 * addr.
+type visit struct {
+ typ reflect.Type
+ next *visit
+ ptr uintptr
+}
+
+// From src/pkg/encoding/json/encode.go.
+func isEmptyValue(v reflect.Value, shouldDereference bool) bool {
+ switch v.Kind() {
+ case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
+ return v.Len() == 0
+ case reflect.Bool:
+ return !v.Bool()
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return v.Int() == 0
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ return v.Uint() == 0
+ case reflect.Float32, reflect.Float64:
+ return v.Float() == 0
+ case reflect.Interface, reflect.Ptr:
+ if v.IsNil() {
+ return true
+ }
+ if shouldDereference {
+ return isEmptyValue(v.Elem(), shouldDereference)
+ }
+ return false
+ case reflect.Func:
+ return v.IsNil()
+ case reflect.Invalid:
+ return true
+ }
+ return false
+}
+
+func resolveValues(dst, src interface{}) (vDst, vSrc reflect.Value, err error) {
+ if dst == nil || src == nil {
+ err = ErrNilArguments
+ return
+ }
+ vDst = reflect.ValueOf(dst).Elem()
+ if vDst.Kind() != reflect.Struct && vDst.Kind() != reflect.Map && vDst.Kind() != reflect.Slice {
+ err = ErrNotSupported
+ return
+ }
+ vSrc = reflect.ValueOf(src)
+ // We check if vSrc is a pointer to dereference it.
+ if vSrc.Kind() == reflect.Ptr {
+ vSrc = vSrc.Elem()
+ }
+ return
+}
diff --git a/vendor/github.com/AdaLogics/go-fuzz-headers/LICENSE b/vendor/github.com/AdaLogics/go-fuzz-headers/LICENSE
new file mode 100644
index 000000000..261eeb9e9
--- /dev/null
+++ b/vendor/github.com/AdaLogics/go-fuzz-headers/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/AdaLogics/go-fuzz-headers/README.md b/vendor/github.com/AdaLogics/go-fuzz-headers/README.md
new file mode 100644
index 000000000..0a0d60c74
--- /dev/null
+++ b/vendor/github.com/AdaLogics/go-fuzz-headers/README.md
@@ -0,0 +1,93 @@
+# go-fuzz-headers
+This repository contains various helper functions for Go fuzzing. It is mostly used in combination with [go-fuzz](https://github.com/dvyukov/go-fuzz), but fuzzing with the standard library will also be supported. Any coverage-guided fuzzing engine that provides an array or slice of bytes can be used with go-fuzz-headers.
+
+## Usage
+Using go-fuzz-headers is easy. First create a new consumer with the bytes provided by the fuzzing engine:
+
+```go
+import (
+ fuzz "github.com/AdaLogics/go-fuzz-headers"
+)
+data := []byte{'R', 'a', 'n', 'd', 'o', 'm'}
+f := fuzz.NewConsumer(data)
+```
+
+This creates a `Consumer` that consumes the bytes of the input as it uses them to fuzz different types.
+
+After that, `f` can be used to easily create fuzzed instances of different types. Below are some examples:
+
+### Structs
+One of the most useful features of go-fuzz-headers is its ability to fill structs with the data provided by the fuzzing engine. This is done with a single line:
+```go
+type Person struct {
+ Name string
+ Age int
+}
+p := Person{}
+// Fill p with values based on the data provided by the fuzzing engine:
+err := f.GenerateStruct(&p)
+```
+
+This includes nested structs too. In this example, the fuzz `Consumer` will also insert values into `p.BestFriend`:
+```go
+type PersonI struct {
+ Name string
+ Age int
+ BestFriend PersonII
+}
+type PersonII struct {
+ Name string
+ Age int
+}
+p := PersonI{}
+err := f.GenerateStruct(&p)
+```
+
+To make the consumer insert values for unexported fields as well as exported ones, enable it with:
+
+```go
+f.AllowUnexportedFields()
+```
+
+...and disabled with:
+
+```go
+f.DisallowUnexportedFields()
+```
+
+### Other types
+
+Other useful APIs:
+
+```go
+createdString, err := f.GetString() // Gets a string
+createdInt, err := f.GetInt() // Gets an integer
+createdByte, err := f.GetByte() // Gets a byte
+createdBytes, err := f.GetBytes() // Gets a byte slice
+createdBool, err := f.GetBool() // Gets a boolean
+err := f.FuzzMap(target_map) // Fills a map
+createdTarBytes, err := f.TarBytes() // Gets bytes of a valid tar archive
+err := f.CreateFiles(inThisDir) // Fills inThisDir with files
+createdString, err := f.GetStringFrom("anyCharInThisString", ofThisLength) // Gets a string that consists of chars from "anyCharInThisString" and has the exact length "ofThisLength"
+```
+
+Most APIs are added as they are needed.
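+
+As a minimal end-to-end sketch, here is what a fuzz target wiring the consumer into the standard library's fuzzing support could look like (the package, struct, and function names are illustrative):
+
+```go
+package mypkg
+
+import (
+	"testing"
+
+	fuzz "github.com/AdaLogics/go-fuzz-headers"
+)
+
+type Config struct {
+	Name    string
+	Retries int
+}
+
+func FuzzConfig(f *testing.F) {
+	f.Fuzz(func(t *testing.T, data []byte) {
+		fc := fuzz.NewConsumer(data)
+		var cfg Config
+		// Running out of input bytes is expected and not a bug: skip the input.
+		if err := fc.GenerateStruct(&cfg); err != nil {
+			return
+		}
+		// Exercise the code under test with the generated Config here.
+	})
+}
+```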
+
+## Projects that use go-fuzz-headers
+- [runC](https://github.com/opencontainers/runc)
+- [Istio](https://github.com/istio/istio)
+- [Vitess](https://github.com/vitessio/vitess)
+- [Containerd](https://github.com/containerd/containerd)
+
+Feel free to add your own project to the list if you use go-fuzz-headers to fuzz it.
+
+## Status
+The project is under development and will be updated regularly.
+
+## References
+go-fuzz-headers' approach to fuzzing structs is strongly inspired by [gofuzz](https://github.com/google/gofuzz).
\ No newline at end of file
diff --git a/vendor/github.com/AdaLogics/go-fuzz-headers/consumer.go b/vendor/github.com/AdaLogics/go-fuzz-headers/consumer.go
new file mode 100644
index 000000000..adfeedf5e
--- /dev/null
+++ b/vendor/github.com/AdaLogics/go-fuzz-headers/consumer.go
@@ -0,0 +1,914 @@
+// Copyright 2023 The go-fuzz-headers Authors.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package gofuzzheaders
+
+import (
+ "archive/tar"
+ "bytes"
+ "encoding/binary"
+ "errors"
+ "fmt"
+ "io"
+ "math"
+ "os"
+ "path/filepath"
+ "reflect"
+ "strconv"
+ "strings"
+ "time"
+ "unsafe"
+)
+
+var (
+ MaxTotalLen uint32 = 2000000
+ maxDepth = 100
+)
+
+func SetMaxTotalLen(newLen uint32) {
+ MaxTotalLen = newLen
+}
+
+type ConsumeFuzzer struct {
+ data []byte
+ dataTotal uint32
+ CommandPart []byte
+ RestOfArray []byte
+ NumberOfCalls int
+ position uint32
+ fuzzUnexportedFields bool
+ curDepth int
+ Funcs map[reflect.Type]reflect.Value
+}
+
+func IsDivisibleBy(n int, divisibleby int) bool {
+ return (n % divisibleby) == 0
+}
+
+func NewConsumer(fuzzData []byte) *ConsumeFuzzer {
+ return &ConsumeFuzzer{
+ data: fuzzData,
+ dataTotal: uint32(len(fuzzData)),
+ Funcs: make(map[reflect.Type]reflect.Value),
+ curDepth: 0,
+ }
+}
+
+func (f *ConsumeFuzzer) Split(minCalls, maxCalls int) error {
+ if f.dataTotal == 0 {
+ return errors.New("could not split")
+ }
+ numberOfCalls := int(f.data[0])
+ if numberOfCalls < minCalls || numberOfCalls > maxCalls {
+ return errors.New("bad number of calls")
+ }
+ if int(f.dataTotal) < numberOfCalls+numberOfCalls+1 {
+ return errors.New("length of data does not match required parameters")
+ }
+
+ // Define part 2 and 3 of the data array
+ commandPart := f.data[1 : numberOfCalls+1]
+ restOfArray := f.data[numberOfCalls+1:]
+
+ // Sanity check: commandPart must contain exactly one byte per call.
+ if len(commandPart) != numberOfCalls {
+ return errors.New("length of commandPart does not match number of calls")
+ }
+
+ // Check if restOfArray is divisible by numberOfCalls
+ if !IsDivisibleBy(len(restOfArray), numberOfCalls) {
+ return errors.New("length of commandPart does not match number of calls")
+ }
+ f.CommandPart = commandPart
+ f.RestOfArray = restOfArray
+ f.NumberOfCalls = numberOfCalls
+ return nil
+}
+
+func (f *ConsumeFuzzer) AllowUnexportedFields() {
+ f.fuzzUnexportedFields = true
+}
+
+func (f *ConsumeFuzzer) DisallowUnexportedFields() {
+ f.fuzzUnexportedFields = false
+}
+
+func (f *ConsumeFuzzer) GenerateStruct(targetStruct interface{}) error {
+ e := reflect.ValueOf(targetStruct).Elem()
+ return f.fuzzStruct(e, false)
+}
+
+func (f *ConsumeFuzzer) setCustom(v reflect.Value) error {
+ // First: see if we have a fuzz function for it.
+ doCustom, ok := f.Funcs[v.Type()]
+ if !ok {
+ return fmt.Errorf("could not find a custom function")
+ }
+
+ switch v.Kind() {
+ case reflect.Ptr:
+ if v.IsNil() {
+ if !v.CanSet() {
+ return fmt.Errorf("could not use a custom function")
+ }
+ v.Set(reflect.New(v.Type().Elem()))
+ }
+ case reflect.Map:
+ if v.IsNil() {
+ if !v.CanSet() {
+ return fmt.Errorf("could not use a custom function")
+ }
+ v.Set(reflect.MakeMap(v.Type()))
+ }
+ default:
+ return fmt.Errorf("could not use a custom function")
+ }
+
+ verr := doCustom.Call([]reflect.Value{v, reflect.ValueOf(Continue{
+ F: f,
+ })})
+
+ // Check whether the custom function returned an error.
+ if verr[0].IsNil() {
+ return nil
+ }
+ return fmt.Errorf("could not use a custom function")
+}
+
+func (f *ConsumeFuzzer) fuzzStruct(e reflect.Value, customFunctions bool) error {
+ if f.curDepth >= maxDepth {
+ // Stop recursing silently once the maximum depth is reached.
+ return nil
+ }
+ f.curDepth++
+ defer func() { f.curDepth-- }()
+
+ // Check for a registered custom function before anything else.
+ if customFunctions && e.IsValid() && e.CanAddr() {
+ err := f.setCustom(e.Addr())
+ if err != nil {
+ return err
+ }
+ }
+
+ switch e.Kind() {
+ case reflect.Struct:
+ for i := 0; i < e.NumField(); i++ {
+ var v reflect.Value
+ if !e.Field(i).CanSet() {
+ if f.fuzzUnexportedFields {
+ v = reflect.NewAt(e.Field(i).Type(), unsafe.Pointer(e.Field(i).UnsafeAddr())).Elem()
+ }
+ if err := f.fuzzStruct(v, customFunctions); err != nil {
+ return err
+ }
+ } else {
+ v = e.Field(i)
+ if err := f.fuzzStruct(v, customFunctions); err != nil {
+ return err
+ }
+ }
+ }
+ case reflect.String:
+ str, err := f.GetString()
+ if err != nil {
+ return err
+ }
+ if e.CanSet() {
+ e.SetString(str)
+ }
+ case reflect.Slice:
+ var maxElements uint32
+ // Byte slices should not be restricted
+ if e.Type().String() == "[]uint8" {
+ maxElements = 10000000
+ } else {
+ maxElements = 50
+ }
+
+ randQty, err := f.GetUint32()
+ if err != nil {
+ return err
+ }
+ numOfElements := randQty % maxElements
+ if (f.dataTotal - f.position) < numOfElements {
+ numOfElements = f.dataTotal - f.position
+ }
+
+ uu := reflect.MakeSlice(e.Type(), int(numOfElements), int(numOfElements))
+
+ for i := 0; i < int(numOfElements); i++ {
+ // If at least 10 elements have already been filled, accept the partial slice instead of failing.
+ if err := f.fuzzStruct(uu.Index(i), customFunctions); err != nil {
+ if i >= 10 {
+ if e.CanSet() {
+ e.Set(uu)
+ }
+ return nil
+ } else {
+ return err
+ }
+ }
+ }
+ if e.CanSet() {
+ e.Set(uu)
+ }
+ case reflect.Uint16:
+ newInt, err := f.GetUint16()
+ if err != nil {
+ return err
+ }
+ if e.CanSet() {
+ e.SetUint(uint64(newInt))
+ }
+ case reflect.Uint32:
+ newInt, err := f.GetUint32()
+ if err != nil {
+ return err
+ }
+ if e.CanSet() {
+ e.SetUint(uint64(newInt))
+ }
+ case reflect.Uint64:
+ newInt, err := f.GetInt()
+ if err != nil {
+ return err
+ }
+ if e.CanSet() {
+ e.SetUint(uint64(newInt))
+ }
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ newInt, err := f.GetInt()
+ if err != nil {
+ return err
+ }
+ if e.CanSet() {
+ e.SetInt(int64(newInt))
+ }
+ case reflect.Float32:
+ newFloat, err := f.GetFloat32()
+ if err != nil {
+ return err
+ }
+ if e.CanSet() {
+ e.SetFloat(float64(newFloat))
+ }
+ case reflect.Float64:
+ newFloat, err := f.GetFloat64()
+ if err != nil {
+ return err
+ }
+ if e.CanSet() {
+ e.SetFloat(float64(newFloat))
+ }
+ case reflect.Map:
+ if e.CanSet() {
+ e.Set(reflect.MakeMap(e.Type()))
+ const maxElements = 50
+ randQty, err := f.GetInt()
+ if err != nil {
+ return err
+ }
+ numOfElements := randQty % maxElements
+ for i := 0; i < numOfElements; i++ {
+ key := reflect.New(e.Type().Key()).Elem()
+ if err := f.fuzzStruct(key, customFunctions); err != nil {
+ return err
+ }
+ val := reflect.New(e.Type().Elem()).Elem()
+ if err = f.fuzzStruct(val, customFunctions); err != nil {
+ return err
+ }
+ e.SetMapIndex(key, val)
+ }
+ }
+ case reflect.Ptr:
+ if e.CanSet() {
+ e.Set(reflect.New(e.Type().Elem()))
+ if err := f.fuzzStruct(e.Elem(), customFunctions); err != nil {
+ return err
+ }
+ return nil
+ }
+ case reflect.Uint8:
+ b, err := f.GetByte()
+ if err != nil {
+ return err
+ }
+ if e.CanSet() {
+ e.SetUint(uint64(b))
+ }
+ }
+ return nil
+}
+
+func (f *ConsumeFuzzer) GetStringArray() (reflect.Value, error) {
+ // The max size of the array:
+ const max uint32 = 20
+
+ arraySize := f.position
+ if arraySize > max {
+ arraySize = max
+ }
+ stringArray := reflect.MakeSlice(reflect.SliceOf(reflect.TypeOf("string")), int(arraySize), int(arraySize))
+ if f.position+arraySize >= f.dataTotal {
+ return stringArray, errors.New("could not make string array")
+ }
+
+ for i := 0; i < int(arraySize); i++ {
+ stringSize := uint32(f.data[f.position])
+ if f.position+stringSize >= f.dataTotal {
+ return stringArray, nil
+ }
+ stringToAppend := string(f.data[f.position : f.position+stringSize])
+ strVal := reflect.ValueOf(stringToAppend)
+ stringArray = reflect.Append(stringArray, strVal)
+ f.position += stringSize
+ }
+ return stringArray, nil
+}
+
+func (f *ConsumeFuzzer) GetInt() (int, error) {
+ if f.position >= f.dataTotal {
+ return 0, errors.New("not enough bytes to create int")
+ }
+ returnInt := int(f.data[f.position])
+ f.position++
+ return returnInt, nil
+}
+
+func (f *ConsumeFuzzer) GetByte() (byte, error) {
+ if f.position >= f.dataTotal {
+ return 0x00, errors.New("not enough bytes to get byte")
+ }
+ returnByte := f.data[f.position]
+ f.position++
+ return returnByte, nil
+}
+
+func (f *ConsumeFuzzer) GetNBytes(numberOfBytes int) ([]byte, error) {
+ if f.position >= f.dataTotal {
+ return nil, errors.New("not enough bytes to get byte")
+ }
+ returnBytes := make([]byte, 0, numberOfBytes)
+ for i := 0; i < numberOfBytes; i++ {
+ newByte, err := f.GetByte()
+ if err != nil {
+ return nil, err
+ }
+ returnBytes = append(returnBytes, newByte)
+ }
+ return returnBytes, nil
+}
+
+func (f *ConsumeFuzzer) GetUint16() (uint16, error) {
+ u16, err := f.GetNBytes(2)
+ if err != nil {
+ return 0, err
+ }
+ littleEndian, err := f.GetBool()
+ if err != nil {
+ return 0, err
+ }
+ if littleEndian {
+ return binary.LittleEndian.Uint16(u16), nil
+ }
+ return binary.BigEndian.Uint16(u16), nil
+}
+
+func (f *ConsumeFuzzer) GetUint32() (uint32, error) {
+ u32, err := f.GetNBytes(4)
+ if err != nil {
+ return 0, err
+ }
+ return binary.BigEndian.Uint32(u32), nil
+}
+
+func (f *ConsumeFuzzer) GetUint64() (uint64, error) {
+ u64, err := f.GetNBytes(8)
+ if err != nil {
+ return 0, err
+ }
+ littleEndian, err := f.GetBool()
+ if err != nil {
+ return 0, err
+ }
+ if littleEndian {
+ return binary.LittleEndian.Uint64(u64), nil
+ }
+ return binary.BigEndian.Uint64(u64), nil
+}
+
+func (f *ConsumeFuzzer) GetBytes() ([]byte, error) {
+ length, err := f.GetUint32()
+ if err != nil {
+ return nil, errors.New("not enough bytes to create byte array")
+ }
+
+ if length == 0 {
+ length = 30
+ }
+ bytesLeft := f.dataTotal - f.position
+ if bytesLeft <= 0 {
+ return nil, errors.New("not enough bytes to create byte array")
+ }
+
+ // If the length is the same as bytes left, we will not overflow
+ // the remaining bytes.
+ if length != bytesLeft {
+ length = length % bytesLeft
+ }
+ byteBegin := f.position
+ if byteBegin+length < byteBegin {
+ return nil, errors.New("numbers overflow")
+ }
+ f.position = byteBegin + length
+ return f.data[byteBegin:f.position], nil
+}
+
+func (f *ConsumeFuzzer) GetString() (string, error) {
+ if f.position >= f.dataTotal {
+ return "nil", errors.New("not enough bytes to create string")
+ }
+ length, err := f.GetUint32()
+ if err != nil {
+ return "nil", errors.New("not enough bytes to create string")
+ }
+ if f.position > MaxTotalLen {
+ return "nil", errors.New("created too large a string")
+ }
+ byteBegin := f.position
+ if byteBegin >= f.dataTotal {
+ return "nil", errors.New("not enough bytes to create string")
+ }
+ if byteBegin+length > f.dataTotal {
+ return "nil", errors.New("not enough bytes to create string")
+ }
+ if byteBegin > byteBegin+length {
+ return "nil", errors.New("numbers overflow")
+ }
+ f.position = byteBegin + length
+ return string(f.data[byteBegin:f.position]), nil
+}
+
+func (f *ConsumeFuzzer) GetBool() (bool, error) {
+ if f.position >= f.dataTotal {
+ return false, errors.New("not enough bytes to create bool")
+ }
+ if IsDivisibleBy(int(f.data[f.position]), 2) {
+ f.position++
+ return true, nil
+ } else {
+ f.position++
+ return false, nil
+ }
+}
+
+func (f *ConsumeFuzzer) FuzzMap(m interface{}) error {
+ return f.GenerateStruct(m)
+}
+
+func returnTarBytes(buf []byte) ([]byte, error) {
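+ // NOTE: this early return makes the file-count check below unreachable.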
+ return buf, nil
+ // Count files
+ var fileCounter int
+ tr := tar.NewReader(bytes.NewReader(buf))
+ for {
+ _, err := tr.Next()
+ if err == io.EOF {
+ break
+ }
+ if err != nil {
+ return nil, err
+ }
+ fileCounter++
+ }
+ if fileCounter >= 1 {
+ return buf, nil
+ }
+ return nil, fmt.Errorf("not enough files were created\n")
+}
+
+func setTarHeaderFormat(hdr *tar.Header, f *ConsumeFuzzer) error {
+ ind, err := f.GetInt()
+ if err != nil {
+ // No more input bytes: fall back to the GNU format.
+ hdr.Format = tar.FormatGNU
+ return nil
+ }
+ switch ind % 4 {
+ case 0:
+ hdr.Format = tar.FormatUnknown
+ case 1:
+ hdr.Format = tar.FormatUSTAR
+ case 2:
+ hdr.Format = tar.FormatPAX
+ case 3:
+ hdr.Format = tar.FormatGNU
+ }
+ return nil
+}
+
+func setTarHeaderTypeflag(hdr *tar.Header, f *ConsumeFuzzer) error {
+ ind, err := f.GetInt()
+ if err != nil {
+ return err
+ }
+ switch ind % 13 {
+ case 0:
+ hdr.Typeflag = tar.TypeReg
+ case 1:
+ hdr.Typeflag = tar.TypeLink
+ linkname, err := f.GetString()
+ if err != nil {
+ return err
+ }
+ hdr.Linkname = linkname
+ case 2:
+ hdr.Typeflag = tar.TypeSymlink
+ linkname, err := f.GetString()
+ if err != nil {
+ return err
+ }
+ hdr.Linkname = linkname
+ case 3:
+ hdr.Typeflag = tar.TypeChar
+ case 4:
+ hdr.Typeflag = tar.TypeBlock
+ case 5:
+ hdr.Typeflag = tar.TypeDir
+ case 6:
+ hdr.Typeflag = tar.TypeFifo
+ case 7:
+ hdr.Typeflag = tar.TypeCont
+ case 8:
+ hdr.Typeflag = tar.TypeXHeader
+ case 9:
+ hdr.Typeflag = tar.TypeXGlobalHeader
+ case 10:
+ hdr.Typeflag = tar.TypeGNUSparse
+ case 11:
+ hdr.Typeflag = tar.TypeGNULongName
+ case 12:
+ hdr.Typeflag = tar.TypeGNULongLink
+ }
+ return nil
+}
+
+func (f *ConsumeFuzzer) createTarFileBody() ([]byte, error) {
+ return f.GetBytes()
+ /*length, err := f.GetUint32()
+ if err != nil {
+ return nil, errors.New("not enough bytes to create byte array")
+ }
+
+ // A bit of optimization to attempt to create a file body
+ // when we don't have as many bytes left as "length"
+ remainingBytes := f.dataTotal - f.position
+ if remainingBytes <= 0 {
+ return nil, errors.New("created too large a string")
+ }
+ if f.position+length > MaxTotalLen {
+ return nil, errors.New("created too large a string")
+ }
+ byteBegin := f.position
+ if byteBegin >= f.dataTotal {
+ return nil, errors.New("not enough bytes to create byte array")
+ }
+ if length == 0 {
+ return nil, errors.New("zero-length is not supported")
+ }
+ if byteBegin+length >= f.dataTotal {
+ return nil, errors.New("not enough bytes to create byte array")
+ }
+ if byteBegin+length < byteBegin {
+ return nil, errors.New("numbers overflow")
+ }
+ f.position = byteBegin + length
+ return f.data[byteBegin:f.position], nil*/
+}
+
+// getTarFilename returns a file name for a tar entry. It currently just
+// delegates to GetString(); the length-bounded implementation below is
+// kept commented out.
+func (f *ConsumeFuzzer) getTarFilename() (string, error) {
+ return f.GetString()
+ /*length, err := f.GetUint32()
+ if err != nil {
+ return "nil", errors.New("not enough bytes to create string")
+ }
+
+ // A bit of optimization to attempt to create a file name
+ // when we don't have as many bytes left as "length"
+ remainingBytes := f.dataTotal - f.position
+ if remainingBytes <= 0 {
+ return "nil", errors.New("created too large a string")
+ }
+ if f.position > MaxTotalLen {
+ return "nil", errors.New("created too large a string")
+ }
+ byteBegin := f.position
+ if byteBegin >= f.dataTotal {
+ return "nil", errors.New("not enough bytes to create string")
+ }
+ if byteBegin+length > f.dataTotal {
+ return "nil", errors.New("not enough bytes to create string")
+ }
+ if byteBegin > byteBegin+length {
+ return "nil", errors.New("numbers overflow")
+ }
+ f.position = byteBegin + length
+ return string(f.data[byteBegin:f.position]), nil*/
+}
+
+type TarFile struct {
+ Hdr *tar.Header
+ Body []byte
+}
+
+// TarBytes returns valid bytes for a tar archive
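+//
+// Usage sketch:
+//
+//	f := NewConsumer(data)
+//	archive, err := f.TarBytes()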
+func (f *ConsumeFuzzer) TarBytes() ([]byte, error) {
+ numberOfFiles, err := f.GetInt()
+ if err != nil {
+ return nil, err
+ }
+ tarFiles := make([]*TarFile, 0)
+
+ const maxNoOfFiles = 100
+ for i := 0; i < numberOfFiles%maxNoOfFiles; i++ {
+ var filename string
+ var filebody []byte
+ var sec, nsec int
+ var err error
+
+ filename, err = f.getTarFilename()
+ if err != nil {
+ var sb strings.Builder
+ sb.WriteString("file-")
+ sb.WriteString(strconv.Itoa(i))
+ filename = sb.String()
+ }
+ filebody, err = f.createTarFileBody()
+ if err != nil {
+ var sb strings.Builder
+ sb.WriteString("filebody-")
+ sb.WriteString(strconv.Itoa(i))
+ filebody = []byte(sb.String())
+ }
+
+ sec, err = f.GetInt()
+ if err != nil {
+ sec = 1672531200 // beginning of 2023
+ }
+ nsec, err = f.GetInt()
+ if err != nil {
+ nsec = 1703980800 // end of 2023
+ }
+
+ hdr := &tar.Header{
+ Name: filename,
+ Size: int64(len(filebody)),
+ Mode: 0o600,
+ ModTime: time.Unix(int64(sec), int64(nsec)),
+ }
+ if err := setTarHeaderTypeflag(hdr, f); err != nil {
+ return []byte(""), err
+ }
+ if err := setTarHeaderFormat(hdr, f); err != nil {
+ return []byte(""), err
+ }
+ tf := &TarFile{
+ Hdr: hdr,
+ Body: filebody,
+ }
+ tarFiles = append(tarFiles, tf)
+ }
+
+ var buf bytes.Buffer
+ tw := tar.NewWriter(&buf)
+ defer tw.Close()
+
+ for _, tf := range tarFiles {
+ tw.WriteHeader(tf.Hdr)
+ tw.Write(tf.Body)
+ }
+ return buf.Bytes(), nil
+}
+
+// TarFiles is similar to TarBytes, but it returns a series of
+// files instead of raw tar bytes. The advantage of this
+// API is that it is cheaper in terms of CPU time to
+// modify or check the files in the fuzzer with TarFiles(),
+// because it avoids creating a tar reader.
+func (f *ConsumeFuzzer) TarFiles() ([]*TarFile, error) {
+ numberOfFiles, err := f.GetInt()
+ if err != nil {
+ return nil, err
+ }
+ tarFiles := make([]*TarFile, 0)
+
+ const maxNoOfFiles = 100
+ for i := 0; i < numberOfFiles%maxNoOfFiles; i++ {
+ filename, err := f.getTarFilename()
+ if err != nil {
+ return tarFiles, err
+ }
+ filebody, err := f.createTarFileBody()
+ if err != nil {
+ return tarFiles, err
+ }
+
+ sec, err := f.GetInt()
+ if err != nil {
+ return tarFiles, err
+ }
+ nsec, err := f.GetInt()
+ if err != nil {
+ return tarFiles, err
+ }
+
+ hdr := &tar.Header{
+ Name: filename,
+ Size: int64(len(filebody)),
+ Mode: 0o600,
+ ModTime: time.Unix(int64(sec), int64(nsec)),
+ }
+ if err := setTarHeaderTypeflag(hdr, f); err != nil {
+ hdr.Typeflag = tar.TypeReg
+ }
+ if err := setTarHeaderFormat(hdr, f); err != nil {
+ return tarFiles, err // should not happen
+ }
+ tf := &TarFile{
+ Hdr: hdr,
+ Body: filebody,
+ }
+ tarFiles = append(tarFiles, tf)
+ }
+ return tarFiles, nil
+}
+
+// CreateFiles creates pseudo-random files in rootDir.
+// It creates subdirs and places the files there.
+// It is the caller's responsibility to ensure that
+// rootDir exists.
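+//
+// Usage sketch (the temporary directory here is illustrative):
+//
+//	dir, _ := os.MkdirTemp("", "fuzz")
+//	defer os.RemoveAll(dir)
+//	err := f.CreateFiles(dir)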
+func (f *ConsumeFuzzer) CreateFiles(rootDir string) error {
+ numberOfFiles, err := f.GetInt()
+ if err != nil {
+ return err
+ }
+ maxNumberOfFiles := numberOfFiles % 4000 // This is completely arbitrary
+ if maxNumberOfFiles == 0 {
+ return errors.New("maxNumberOfFiles is nil")
+ }
+
+ var noOfCreatedFiles int
+ for i := 0; i < maxNumberOfFiles; i++ {
+ // The file to create:
+ fileName, err := f.GetString()
+ if err != nil {
+ if noOfCreatedFiles > 0 {
+ // If files have been created, we don't return an error.
+ break
+ } else {
+ return errors.New("could not get fileName")
+ }
+ }
+ if strings.Contains(fileName, "..") || (len(fileName) > 0 && fileName[0] == '/') || strings.Contains(fileName, "\\") {
+ continue
+ }
+ fullFilePath := filepath.Join(rootDir, fileName)
+
+ // Find the subdirectory of the file
+ if subDir := filepath.Dir(fileName); subDir != "" && subDir != "." {
+ // create the dir first; avoid going outside the root dir
+ if strings.Contains(subDir, "../") || (len(subDir) > 0 && subDir[0] == '/') || strings.Contains(subDir, "\\") {
+ continue
+ }
+ dirPath := filepath.Join(rootDir, subDir)
+ if _, err := os.Stat(dirPath); os.IsNotExist(err) {
+ err2 := os.MkdirAll(dirPath, 0o777)
+ if err2 != nil {
+ continue
+ }
+ }
+ fullFilePath = filepath.Join(dirPath, fileName)
+ } else {
+ // Create symlink
+ createSymlink, err := f.GetBool()
+ if err != nil {
+ if noOfCreatedFiles > 0 {
+ break
+ } else {
+ return errors.New("could not create the symlink")
+ }
+ }
+ if createSymlink {
+ symlinkTarget, err := f.GetString()
+ if err != nil {
+ return err
+ }
+ err = os.Symlink(symlinkTarget, fullFilePath)
+ if err != nil {
+ return err
+ }
+ // Move on to the next file; a symlink needs no further action.
+ noOfCreatedFiles++
+ continue
+ }
+ // We create a normal file
+ fileContents, err := f.GetBytes()
+ if err != nil {
+ if noOfCreatedFiles > 0 {
+ break
+ } else {
+ return errors.New("could not create the file")
+ }
+ }
+ err = os.WriteFile(fullFilePath, fileContents, 0o666)
+ if err != nil {
+ continue
+ }
+ noOfCreatedFiles++
+ }
+ }
+ return nil
+}
+
+// GetStringFrom returns a string that can only consist of characters
+// included in possibleChars. It returns an error if the created string
+// does not have the specified length.
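+//
+// For example, GetStringFrom("abcdef123", 8) returns an 8-character string
+// consisting only of characters from "abcdef123".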
+func (f *ConsumeFuzzer) GetStringFrom(possibleChars string, length int) (string, error) {
+ if (f.dataTotal - f.position) < uint32(length) {
+ return "", errors.New("not enough bytes to create a string")
+ }
+ output := make([]byte, 0, length)
+ for i := 0; i < length; i++ {
+ charIndex, err := f.GetInt()
+ if err != nil {
+ return string(output), err
+ }
+ output = append(output, possibleChars[charIndex%len(possibleChars)])
+ }
+ return string(output), nil
+}
+
+func (f *ConsumeFuzzer) GetRune() ([]rune, error) {
+ stringToConvert, err := f.GetString()
+ if err != nil {
+ return []rune("nil"), err
+ }
+ return []rune(stringToConvert), nil
+}
+
+func (f *ConsumeFuzzer) GetFloat32() (float32, error) {
+ u32, err := f.GetNBytes(4)
+ if err != nil {
+ return 0, err
+ }
+ littleEndian, err := f.GetBool()
+ if err != nil {
+ return 0, err
+ }
+ if littleEndian {
+ u32LE := binary.LittleEndian.Uint32(u32)
+ return math.Float32frombits(u32LE), nil
+ }
+ u32BE := binary.BigEndian.Uint32(u32)
+ return math.Float32frombits(u32BE), nil
+}
+
+func (f *ConsumeFuzzer) GetFloat64() (float64, error) {
+ u64, err := f.GetNBytes(8)
+ if err != nil {
+ return 0, err
+ }
+ littleEndian, err := f.GetBool()
+ if err != nil {
+ return 0, err
+ }
+ if littleEndian {
+ u64LE := binary.LittleEndian.Uint64(u64)
+ return math.Float64frombits(u64LE), nil
+ }
+ u64BE := binary.BigEndian.Uint64(u64)
+ return math.Float64frombits(u64BE), nil
+}
+
+func (f *ConsumeFuzzer) CreateSlice(targetSlice interface{}) error {
+ return f.GenerateStruct(targetSlice)
+}
diff --git a/vendor/github.com/AdaLogics/go-fuzz-headers/funcs.go b/vendor/github.com/AdaLogics/go-fuzz-headers/funcs.go
new file mode 100644
index 000000000..8ca3a61b8
--- /dev/null
+++ b/vendor/github.com/AdaLogics/go-fuzz-headers/funcs.go
@@ -0,0 +1,62 @@
+// Copyright 2023 The go-fuzz-headers Authors.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package gofuzzheaders
+
+import (
+ "fmt"
+ "reflect"
+)
+
+type Continue struct {
+ F *ConsumeFuzzer
+}
+
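+// AddFuncs registers custom fuzz functions, keyed by the type of their
+// first parameter. Each function must take exactly two parameters and
+// return one result: the first parameter must be a pointer or map type,
+// the second must be Continue, and the result must be an error, e.g.
+// (MyType is illustrative):
+//
+//	func(t *MyType, c gofuzzheaders.Continue) error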
+func (f *ConsumeFuzzer) AddFuncs(fuzzFuncs []interface{}) {
+ for i := range fuzzFuncs {
+ v := reflect.ValueOf(fuzzFuncs[i])
+ if v.Kind() != reflect.Func {
+ panic("Need only funcs!")
+ }
+ t := v.Type()
+ if t.NumIn() != 2 || t.NumOut() != 1 {
+ fmt.Println(t.NumIn(), t.NumOut())
+
+ panic("Need 2 in and 1 out params. In must be the type. Out must be an error")
+ }
+ argT := t.In(0)
+ switch argT.Kind() {
+ case reflect.Ptr, reflect.Map:
+ default:
+ panic("fuzzFunc must take pointer or map type")
+ }
+ if t.In(1) != reflect.TypeOf(Continue{}) {
+ panic("fuzzFunc's second parameter must be type Continue")
+ }
+ f.Funcs[argT] = v
+ }
+}
+
+func (f *ConsumeFuzzer) GenerateWithCustom(targetStruct interface{}) error {
+ e := reflect.ValueOf(targetStruct).Elem()
+ return f.fuzzStruct(e, true)
+}
+
+func (c Continue) GenerateStruct(targetStruct interface{}) error {
+ return c.F.GenerateStruct(targetStruct)
+}
+
+func (c Continue) GenerateStructWithCustom(targetStruct interface{}) error {
+ return c.F.GenerateWithCustom(targetStruct)
+}
diff --git a/vendor/github.com/AdaLogics/go-fuzz-headers/sql.go b/vendor/github.com/AdaLogics/go-fuzz-headers/sql.go
new file mode 100644
index 000000000..2afd49f84
--- /dev/null
+++ b/vendor/github.com/AdaLogics/go-fuzz-headers/sql.go
@@ -0,0 +1,556 @@
+// Copyright 2023 The go-fuzz-headers Authors.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package gofuzzheaders
+
+import (
+ "fmt"
+ "strings"
+)
+
+// getKeyword returns a keyword by index.
+func getKeyword(f *ConsumeFuzzer) (string, error) {
+ index, err := f.GetInt()
+ if err != nil {
+ return keywords[0], err
+ }
+ if index < len(keywords) {
+ return keywords[index], nil
+ }
+ return keywords[0], fmt.Errorf("could not get a kw")
+}
+
+// containsString reports whether the string slice s contains e.
+func containsString(s []string, e string) bool {
+ for _, a := range s {
+ if a == e {
+ return true
+ }
+ }
+ return false
+}
+
+// These keywords are used specifically for fuzzing Vitess
+var keywords = []string{
+ "accessible", "action", "add", "after", "against", "algorithm",
+ "all", "alter", "always", "analyze", "and", "as", "asc", "asensitive",
+ "auto_increment", "avg_row_length", "before", "begin", "between",
+ "bigint", "binary", "_binary", "_utf8mb4", "_utf8", "_latin1", "bit",
+ "blob", "bool", "boolean", "both", "by", "call", "cancel", "cascade",
+ "cascaded", "case", "cast", "channel", "change", "char", "character",
+ "charset", "check", "checksum", "coalesce", "code", "collate", "collation",
+ "column", "columns", "comment", "committed", "commit", "compact", "complete",
+ "compressed", "compression", "condition", "connection", "constraint", "continue",
+ "convert", "copy", "cume_dist", "substr", "substring", "create", "cross",
+ "csv", "current_date", "current_time", "current_timestamp", "current_user",
+ "cursor", "data", "database", "databases", "day", "day_hour", "day_microsecond",
+ "day_minute", "day_second", "date", "datetime", "dec", "decimal", "declare",
+ "default", "definer", "delay_key_write", "delayed", "delete", "dense_rank",
+ "desc", "describe", "deterministic", "directory", "disable", "discard",
+ "disk", "distinct", "distinctrow", "div", "double", "do", "drop", "dumpfile",
+ "duplicate", "dynamic", "each", "else", "elseif", "empty", "enable",
+ "enclosed", "encryption", "end", "enforced", "engine", "engines", "enum",
+ "error", "escape", "escaped", "event", "exchange", "exclusive", "exists",
+ "exit", "explain", "expansion", "export", "extended", "extract", "false",
+ "fetch", "fields", "first", "first_value", "fixed", "float", "float4",
+ "float8", "flush", "for", "force", "foreign", "format", "from", "full",
+ "fulltext", "function", "general", "generated", "geometry", "geometrycollection",
+ "get", "global", "gtid_executed", "grant", "group", "grouping", "groups",
+ "group_concat", "having", "header", "high_priority", "hosts", "hour", "hour_microsecond",
+ "hour_minute", "hour_second", "if", "ignore", "import", "in", "index", "indexes",
+ "infile", "inout", "inner", "inplace", "insensitive", "insert", "insert_method",
+ "int", "int1", "int2", "int3", "int4", "int8", "integer", "interval",
+ "into", "io_after_gtids", "is", "isolation", "iterate", "invoker", "join",
+ "json", "json_table", "key", "keys", "keyspaces", "key_block_size", "kill", "lag",
+ "language", "last", "last_value", "last_insert_id", "lateral", "lead", "leading",
+ "leave", "left", "less", "level", "like", "limit", "linear", "lines",
+ "linestring", "load", "local", "localtime", "localtimestamp", "lock", "logs",
+ "long", "longblob", "longtext", "loop", "low_priority", "manifest",
+ "master_bind", "match", "max_rows", "maxvalue", "mediumblob", "mediumint",
+ "mediumtext", "memory", "merge", "microsecond", "middleint", "min_rows", "minute",
+ "minute_microsecond", "minute_second", "mod", "mode", "modify", "modifies",
+ "multilinestring", "multipoint", "multipolygon", "month", "name",
+ "names", "natural", "nchar", "next", "no", "none", "not", "no_write_to_binlog",
+ "nth_value", "ntile", "null", "numeric", "of", "off", "offset", "on",
+ "only", "open", "optimize", "optimizer_costs", "option", "optionally",
+ "or", "order", "out", "outer", "outfile", "over", "overwrite", "pack_keys",
+ "parser", "partition", "partitioning", "password", "percent_rank", "plugins",
+ "point", "polygon", "precision", "primary", "privileges", "processlist",
+ "procedure", "query", "quarter", "range", "rank", "read", "reads", "read_write",
+ "real", "rebuild", "recursive", "redundant", "references", "regexp", "relay",
+ "release", "remove", "rename", "reorganize", "repair", "repeat", "repeatable",
+ "replace", "require", "resignal", "restrict", "return", "retry", "revert",
+ "revoke", "right", "rlike", "rollback", "row", "row_format", "row_number",
+ "rows", "s3", "savepoint", "schema", "schemas", "second", "second_microsecond",
+ "security", "select", "sensitive", "separator", "sequence", "serializable",
+ "session", "set", "share", "shared", "show", "signal", "signed", "slow",
+ "smallint", "spatial", "specific", "sql", "sqlexception", "sqlstate",
+ "sqlwarning", "sql_big_result", "sql_cache", "sql_calc_found_rows",
+ "sql_no_cache", "sql_small_result", "ssl", "start", "starting",
+ "stats_auto_recalc", "stats_persistent", "stats_sample_pages", "status",
+ "storage", "stored", "straight_join", "stream", "system", "vstream",
+ "table", "tables", "tablespace", "temporary", "temptable", "terminated",
+ "text", "than", "then", "time", "timestamp", "timestampadd", "timestampdiff",
+ "tinyblob", "tinyint", "tinytext", "to", "trailing", "transaction", "tree",
+ "traditional", "trigger", "triggers", "true", "truncate", "uncommitted",
+ "undefined", "undo", "union", "unique", "unlock", "unsigned", "update",
+ "upgrade", "usage", "use", "user", "user_resources", "using", "utc_date",
+ "utc_time", "utc_timestamp", "validation", "values", "variables", "varbinary",
+ "varchar", "varcharacter", "varying", "vgtid_executed", "virtual", "vindex",
+ "vindexes", "view", "vitess", "vitess_keyspaces", "vitess_metadata",
+ "vitess_migration", "vitess_migrations", "vitess_replication_status",
+ "vitess_shards", "vitess_tablets", "vschema", "warnings", "when",
+ "where", "while", "window", "with", "without", "work", "write", "xor",
+ "year", "year_month", "zerofill",
+}
+
+// Keywords that may be followed by a custom string argument
+var needCustomString = []string{
+ "DISTINCTROW", "FROM", // Select keywords:
+ "GROUP BY", "HAVING", "WINDOW",
+ "FOR",
+ "ORDER BY", "LIMIT",
+ "INTO", "PARTITION", "AS", // Insert Keywords:
+ "ON DUPLICATE KEY UPDATE",
+ "WHERE", "LIMIT", // Delete keywords
+ "INFILE", "INTO TABLE", "CHARACTER SET", // Load keywords
+ "TERMINATED BY", "ENCLOSED BY",
+ "ESCAPED BY", "STARTING BY",
+ "TERMINATED BY", "STARTING BY",
+ "IGNORE",
+ "VALUE", "VALUES", // Replace tokens
+ "SET", // Update tokens
+ "ENGINE =", // Drop tokens
+ "DEFINER =", "ON SCHEDULE", "RENAME TO", // Alter tokens
+ "COMMENT", "DO", "INITIAL_SIZE = ", "OPTIONS",
+}
+
+var alterTableTokens = [][]string{
+ {"CUSTOM_FUZZ_STRING"},
+ {"CUSTOM_ALTTER_TABLE_OPTIONS"},
+ {"PARTITION_OPTIONS_FOR_ALTER_TABLE"},
+}
+
+var alterTokens = [][]string{
+ {
+ "DATABASE", "SCHEMA", "DEFINER = ", "EVENT", "FUNCTION", "INSTANCE",
+ "LOGFILE GROUP", "PROCEDURE", "SERVER",
+ },
+ {"CUSTOM_FUZZ_STRING"},
+ {
+ "ON SCHEDULE", "ON COMPLETION PRESERVE", "ON COMPLETION NOT PRESERVE",
+ "ADD UNDOFILE", "OPTIONS",
+ },
+ {"RENAME TO", "INITIAL_SIZE = "},
+ {"ENABLE", "DISABLE", "DISABLE ON SLAVE", "ENGINE"},
+ {"COMMENT"},
+ {"DO"},
+}
+
+var setTokens = [][]string{
+ {"CHARACTER SET", "CHARSET", "CUSTOM_FUZZ_STRING", "NAMES"},
+ {"CUSTOM_FUZZ_STRING", "DEFAULT", "="},
+ {"CUSTOM_FUZZ_STRING"},
+}
+
+var dropTokens = [][]string{
+ {"TEMPORARY", "UNDO"},
+ {
+ "DATABASE", "SCHEMA", "EVENT", "INDEX", "LOGFILE GROUP",
+ "PROCEDURE", "FUNCTION", "SERVER", "SPATIAL REFERENCE SYSTEM",
+ "TABLE", "TABLESPACE", "TRIGGER", "VIEW",
+ },
+ {"IF EXISTS"},
+ {"CUSTOM_FUZZ_STRING"},
+ {"ON", "ENGINE = ", "RESTRICT", "CASCADE"},
+}
+
+var renameTokens = [][]string{
+ {"TABLE"},
+ {"CUSTOM_FUZZ_STRING"},
+ {"TO"},
+ {"CUSTOM_FUZZ_STRING"},
+}
+
+var truncateTokens = [][]string{
+ {"TABLE"},
+ {"CUSTOM_FUZZ_STRING"},
+}
+
+var createTokens = [][]string{
+ {"OR REPLACE", "TEMPORARY", "UNDO"}, // For create spatial reference system
+ {
+ "UNIQUE", "FULLTEXT", "SPATIAL", "ALGORITHM = UNDEFINED", "ALGORITHM = MERGE",
+ "ALGORITHM = TEMPTABLE",
+ },
+ {
+ "DATABASE", "SCHEMA", "EVENT", "FUNCTION", "INDEX", "LOGFILE GROUP",
+ "PROCEDURE", "SERVER", "SPATIAL REFERENCE SYSTEM", "TABLE", "TABLESPACE",
+ "TRIGGER", "VIEW",
+ },
+ {"IF NOT EXISTS"},
+ {"CUSTOM_FUZZ_STRING"},
+}
+
+/*
+// For future use.
+var updateTokens = [][]string{
+ {"LOW_PRIORITY"},
+ {"IGNORE"},
+ {"SET"},
+ {"WHERE"},
+ {"ORDER BY"},
+ {"LIMIT"},
+}
+*/
+
+var replaceTokens = [][]string{
+ {"LOW_PRIORITY", "DELAYED"},
+ {"INTO"},
+ {"PARTITION"},
+ {"CUSTOM_FUZZ_STRING"},
+ {"VALUES", "VALUE"},
+}
+
+var loadTokens = [][]string{
+ {"DATA"},
+ {"LOW_PRIORITY", "CONCURRENT", "LOCAL"},
+ {"INFILE"},
+ {"REPLACE", "IGNORE"},
+ {"INTO TABLE"},
+ {"PARTITION"},
+ {"CHARACTER SET"},
+ {"FIELDS", "COLUMNS"},
+ {"TERMINATED BY"},
+ {"OPTIONALLY"},
+ {"ENCLOSED BY"},
+ {"ESCAPED BY"},
+ {"LINES"},
+ {"STARTING BY"},
+ {"TERMINATED BY"},
+ {"IGNORE"},
+ {"LINES", "ROWS"},
+ {"CUSTOM_FUZZ_STRING"},
+}
+
+// These are the tokens that can follow "INSERT"
+var insertTokens = [][]string{
+ {"LOW_PRIORITY", "DELAYED", "HIGH_PRIORITY", "IGNORE"},
+ {"INTO"},
+ {"PARTITION"},
+ {"CUSTOM_FUZZ_STRING"},
+ {"AS"},
+ {"ON DUPLICATE KEY UPDATE"},
+}
+
+// These are everything that comes after "SELECT"
+var selectTokens = [][]string{
+ {"*", "CUSTOM_FUZZ_STRING", "DISTINCTROW"},
+ {"HIGH_PRIORITY"},
+ {"STRAIGHT_JOIN"},
+ {"SQL_SMALL_RESULT", "SQL_BIG_RESULT", "SQL_BUFFER_RESULT"},
+ {"SQL_NO_CACHE", "SQL_CALC_FOUND_ROWS"},
+ {"CUSTOM_FUZZ_STRING"},
+ {"FROM"},
+ {"WHERE"},
+ {"GROUP BY"},
+ {"HAVING"},
+ {"WINDOW"},
+ {"ORDER BY"},
+ {"LIMIT"},
+ {"CUSTOM_FUZZ_STRING"},
+ {"FOR"},
+}
+
+// These are everything that comes after "DELETE"
+var deleteTokens = [][]string{
+ {"LOW_PRIORITY", "QUICK", "IGNORE", "FROM", "AS"},
+ {"PARTITION"},
+ {"WHERE"},
+ {"ORDER BY"},
+ {"LIMIT"},
+}
+
+var alter_table_options = []string{
+ "ADD", "COLUMN", "FIRST", "AFTER", "INDEX", "KEY", "FULLTEXT", "SPATIAL",
+ "CONSTRAINT", "UNIQUE", "FOREIGN KEY", "CHECK", "ENFORCED", "DROP", "ALTER",
+ "NOT", "INPLACE", "COPY", "SET", "VISIBLE", "INVISIBLE", "DEFAULT", "CHANGE",
+ "CHARACTER SET", "COLLATE", "DISABLE", "ENABLE", "KEYS", "TABLESPACE", "LOCK",
+ "FORCE", "MODIFY", "SHARED", "EXCLUSIVE", "NONE", "ORDER BY", "RENAME COLUMN",
+ "AS", "=", "ASC", "DESC", "WITH", "WITHOUT", "VALIDATION", "ADD PARTITION",
+ "DROP PARTITION", "DISCARD PARTITION", "IMPORT PARTITION", "TRUNCATE PARTITION",
+ "COALESCE PARTITION", "REORGANIZE PARTITION", "EXCHANGE PARTITION",
+ "ANALYZE PARTITION", "CHECK PARTITION", "OPTIMIZE PARTITION", "REBUILD PARTITION",
+ "REPAIR PARTITION", "REMOVE PARTITIONING", "USING", "BTREE", "HASH", "COMMENT",
+ "KEY_BLOCK_SIZE", "WITH PARSER", "AUTOEXTEND_SIZE", "AUTO_INCREMENT", "AVG_ROW_LENGTH",
+ "CHECKSUM", "INSERT_METHOD", "ROW_FORMAT", "DYNAMIC", "FIXED", "COMPRESSED", "REDUNDANT",
+ "COMPACT", "SECONDARY_ENGINE_ATTRIBUTE", "STATS_AUTO_RECALC", "STATS_PERSISTENT",
+ "STATS_SAMPLE_PAGES", "ZLIB", "LZ4", "ENGINE_ATTRIBUTE", "KEY_BLOCK_SIZE", "MAX_ROWS",
+ "MIN_ROWS", "PACK_KEYS", "PASSWORD", "COMPRESSION", "CONNECTION", "DIRECTORY",
+ "DELAY_KEY_WRITE", "ENCRYPTION", "STORAGE", "DISK", "MEMORY", "UNION",
+}
+
+// createAlterTableStmt creates an 'ALTER TABLE' statement. 'ALTER TABLE'
+// is an exception in that it has its own function; the majority of
+// statements are created by createStmt().
+func createAlterTableStmt(f *ConsumeFuzzer) (string, error) {
+ maxArgs, err := f.GetInt()
+ if err != nil {
+ return "", err
+ }
+ maxArgs = maxArgs % 30
+ if maxArgs == 0 {
+ return "", fmt.Errorf("could not create alter table stmt")
+ }
+
+ var stmt strings.Builder
+ stmt.WriteString("ALTER TABLE ")
+ for i := 0; i < maxArgs; i++ {
+ // Decide whether to use an existing token or a custom string.
+ tokenType, err := f.GetInt()
+ if err != nil {
+ return "", err
+ }
+ if tokenType%4 == 1 {
+ customString, err := f.GetString()
+ if err != nil {
+ return "", err
+ }
+ stmt.WriteString(" " + customString)
+ } else {
+ tokenIndex, err := f.GetInt()
+ if err != nil {
+ return "", err
+ }
+ stmt.WriteString(" " + alter_table_options[tokenIndex%len(alter_table_options)])
+ }
+ }
+ return stmt.String(), nil
+}
+
+func chooseToken(tokens []string, f *ConsumeFuzzer) (string, error) {
+ index, err := f.GetInt()
+ if err != nil {
+ return "", err
+ }
+ var token strings.Builder
+ token.WriteString(tokens[index%len(tokens)])
+ if token.String() == "CUSTOM_FUZZ_STRING" {
+ customFuzzString, err := f.GetString()
+ if err != nil {
+ return "", err
+ }
+ return customFuzzString, nil
+ }
+
+ // Check if token requires an argument
+ if containsString(needCustomString, token.String()) {
+ customFuzzString, err := f.GetString()
+ if err != nil {
+ return "", err
+ }
+ token.WriteString(" " + customFuzzString)
+ }
+ return token.String(), nil
+}
+
+var stmtTypes = map[string][][]string{
+ "DELETE": deleteTokens,
+ "INSERT": insertTokens,
+ "SELECT": selectTokens,
+ "LOAD": loadTokens,
+ "REPLACE": replaceTokens,
+ "CREATE": createTokens,
+ "DROP": dropTokens,
+ "RENAME": renameTokens,
+ "TRUNCATE": truncateTokens,
+ "SET": setTokens,
+ "ALTER": alterTokens,
+ "ALTER TABLE": alterTableTokens, // ALTER TABLE has its own set of tokens
+}
+
+var stmtTypeEnum = map[int]string{
+ 0: "DELETE",
+ 1: "INSERT",
+ 2: "SELECT",
+ 3: "LOAD",
+ 4: "REPLACE",
+ 5: "CREATE",
+ 6: "DROP",
+ 7: "RENAME",
+ 8: "TRUNCATE",
+ 9: "SET",
+ 10: "ALTER",
+ 11: "ALTER TABLE",
+}
+
+func createStmt(f *ConsumeFuzzer) (string, error) {
+ stmtIndex, err := f.GetInt()
+ if err != nil {
+ return "", err
+ }
+ stmtIndex = stmtIndex % len(stmtTypes)
+
+ queryType := stmtTypeEnum[stmtIndex]
+ tokens := stmtTypes[queryType]
+
+ // We have custom creator for ALTER TABLE
+ if queryType == "ALTER TABLE" {
+ query, err := createAlterTableStmt(f)
+ if err != nil {
+ return "", err
+ }
+ return query, nil
+ }
+
+ // Here we are creating a query that is not
+ // an 'alter table' query. For available
+ // queries, see "stmtTypes"
+
+ // First specify the first query keyword:
+ var query strings.Builder
+ query.WriteString(queryType)
+
+ // Next, create the arguments for the query:
+ queryArgs, err := createStmtArgs(tokens, f)
+ if err != nil {
+ return "", err
+ }
+ query.WriteString(" " + queryArgs)
+ return query.String(), nil
+}
+
+// createStmtArgs creates the arguments of a statement. In a select
+// statement that would be everything after "select".
+func createStmtArgs(tokenslice [][]string, f *ConsumeFuzzer) (string, error) {
+ var query, token strings.Builder
+
+ // We go through the tokens in the tokenslice,
+ // create the respective token and add it to
+ // "query"
+ for _, tokens := range tokenslice {
+ // For extra randomization, the fuzzer can
+ // choose to not include this token.
+ includeThisToken, err := f.GetBool()
+ if err != nil {
+ return "", err
+ }
+ if !includeThisToken {
+ continue
+ }
+
+ // There may be several tokens to choose from:
+ if len(tokens) > 1 {
+ chosenToken, err := chooseToken(tokens, f)
+ if err != nil {
+ return "", err
+ }
+ query.WriteString(" " + chosenToken)
+ } else {
+ token.WriteString(tokens[0])
+
+ // In case the token is "CUSTOM_FUZZ_STRING"
+ // we will then create a non-structured string
+ if token.String() == "CUSTOM_FUZZ_STRING" {
+ customFuzzString, err := f.GetString()
+ if err != nil {
+ return "", err
+ }
+ query.WriteString(" " + customFuzzString)
+ continue
+ }
+
+ // Check if token requires an argument.
+ // Tokens that take an argument can be found
+ // in 'needCustomString'. If so, we add a
+ // non-structured string to the token.
+ if containsString(needCustomString, token.String()) {
+ customFuzzString, err := f.GetString()
+ if err != nil {
+ return "", err
+ }
+ token.WriteString(fmt.Sprintf(" %s", customFuzzString))
+ }
+ query.WriteString(fmt.Sprintf(" %s", token.String()))
+ }
+ }
+ return query.String(), nil
+}
+
+// createQuery creates a semi-structured query: a string that combines
+// keywords and random strings.
+func createQuery(f *ConsumeFuzzer) (string, error) {
+ queryLen, err := f.GetInt()
+ if err != nil {
+ return "", err
+ }
+ maxLen := queryLen % 60
+ if maxLen == 0 {
+ return "", fmt.Errorf("could not create a query")
+ }
+ var query strings.Builder
+ for i := 0; i < maxLen; i++ {
+ // Get a new token:
+ useKeyword, err := f.GetBool()
+ if err != nil {
+ return "", err
+ }
+ if useKeyword {
+ keyword, err := getKeyword(f)
+ if err != nil {
+ return "", err
+ }
+ query.WriteString(" " + keyword)
+ } else {
+ customString, err := f.GetString()
+ if err != nil {
+ return "", err
+ }
+ query.WriteString(" " + customString)
+ }
+ }
+ if query.String() == "" {
+ return "", fmt.Errorf("could not create a query")
+ }
+ return query.String(), nil
+}
+
+// GetSQLString is the API that users interact with.
+//
+// Usage:
+//
+// f := NewConsumer(data)
+// sqlString, err := f.GetSQLString()
+func (f *ConsumeFuzzer) GetSQLString() (string, error) {
+ var query string
+ veryStructured, err := f.GetBool()
+ if err != nil {
+ return "", err
+ }
+ if veryStructured {
+ query, err = createStmt(f)
+ if err != nil {
+ return "", err
+ }
+ } else {
+ query, err = createQuery(f)
+ if err != nil {
+ return "", err
+ }
+ }
+ return query, nil
+}
diff --git a/vendor/github.com/BurntSushi/toml/.gitignore b/vendor/github.com/BurntSushi/toml/.gitignore
new file mode 100644
index 000000000..fe79e3add
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/.gitignore
@@ -0,0 +1,2 @@
+/toml.test
+/toml-test
diff --git a/vendor/github.com/BurntSushi/toml/COPYING b/vendor/github.com/BurntSushi/toml/COPYING
new file mode 100644
index 000000000..01b574320
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/COPYING
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2013 TOML authors
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/github.com/BurntSushi/toml/README.md b/vendor/github.com/BurntSushi/toml/README.md
new file mode 100644
index 000000000..3651cfa96
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/README.md
@@ -0,0 +1,120 @@
+TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
+reflection interface similar to Go's standard library `json` and `xml` packages.
+
+Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).
+
+Documentation: https://godocs.io/github.com/BurntSushi/toml
+
+See the [releases page](https://github.com/BurntSushi/toml/releases) for a
+changelog; this information is also in the git tag annotations (e.g. `git show
+v0.4.0`).
+
+This library requires Go 1.13 or newer; add it to your go.mod with:
+
+ % go get github.com/BurntSushi/toml@latest
+
+It also comes with a TOML validator CLI tool:
+
+ % go install github.com/BurntSushi/toml/cmd/tomlv@latest
+ % tomlv some-toml-file.toml
+
+### Examples
+For the simplest example, consider some TOML file as just a list of keys and
+values:
+
+```toml
+Age = 25
+Cats = [ "Cauchy", "Plato" ]
+Pi = 3.14
+Perfection = [ 6, 28, 496, 8128 ]
+DOB = 1987-07-05T05:45:00Z
+```
+
+Which can be decoded with:
+
+```go
+type Config struct {
+ Age int
+ Cats []string
+ Pi float64
+ Perfection []int
+ DOB time.Time
+}
+
+var conf Config
+_, err := toml.Decode(tomlData, &conf)
+```
+
+You can also use struct tags if your struct field name doesn't map to a TOML key
+value directly:
+
+```toml
+some_key_NAME = "wat"
+```
+
+```go
+type TOML struct {
+ ObscureKey string `toml:"some_key_NAME"`
+}
+```
+
+Beware that, like other decoders, **only exported fields** are considered when
+encoding and decoding; private fields are silently ignored.
+
+### Using the `Marshaler` and `encoding.TextUnmarshaler` interfaces
+Here's an example that automatically parses values in a `mail.Address`:
+
+```toml
+contacts = [
+ "Donald Duck <donald@duckburg.com>",
+ "Scrooge McDuck <scrooge@duckburg.com>",
+]
+```
+
+Can be decoded with:
+
+```go
+// Create address type which satisfies the encoding.TextUnmarshaler interface.
+type address struct {
+ *mail.Address
+}
+
+func (a *address) UnmarshalText(text []byte) error {
+ var err error
+ a.Address, err = mail.ParseAddress(string(text))
+ return err
+}
+
+// Decode it.
+func decode() {
+ blob := `
+ contacts = [
+ "Donald Duck <donald@duckburg.com>",
+ "Scrooge McDuck <scrooge@duckburg.com>",
+ ]
+ `
+
+ var contacts struct {
+ Contacts []address
+ }
+
+ _, err := toml.Decode(blob, &contacts)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ for _, c := range contacts.Contacts {
+ fmt.Printf("%#v\n", c.Address)
+ }
+
+ // Output:
+ // &mail.Address{Name:"Donald Duck", Address:"donald@duckburg.com"}
+ // &mail.Address{Name:"Scrooge McDuck", Address:"scrooge@duckburg.com"}
+}
+```
+
+To target TOML specifically you can implement the `UnmarshalTOML` interface in
+a similar way.
+
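+Here is a minimal sketch of that (the `duration` wrapper type is illustrative):
+
+```go
+type duration struct {
+	time.Duration
+}
+
+// UnmarshalTOML accepts the raw TOML value and parses strings like "5m".
+func (d *duration) UnmarshalTOML(v interface{}) error {
+	s, ok := v.(string)
+	if !ok {
+		return fmt.Errorf("expected string, got %T", v)
+	}
+	var err error
+	d.Duration, err = time.ParseDuration(s)
+	return err
+}
+```
+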
+### More complex usage
+See the [`_example/`](/_example) directory for a more complex example.
diff --git a/vendor/github.com/BurntSushi/toml/decode.go b/vendor/github.com/BurntSushi/toml/decode.go
new file mode 100644
index 000000000..4d38f3bfc
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/decode.go
@@ -0,0 +1,602 @@
+package toml
+
+import (
+ "bytes"
+ "encoding"
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "math"
+ "os"
+ "reflect"
+ "strconv"
+ "strings"
+ "time"
+)
+
+// Unmarshaler is the interface implemented by objects that can unmarshal a
+// TOML description of themselves.
+type Unmarshaler interface {
+ UnmarshalTOML(interface{}) error
+}
+
+// Unmarshal decodes the contents of data in TOML format into a pointer v.
+//
+// See [Decoder] for a description of the decoding process.
+func Unmarshal(data []byte, v interface{}) error {
+ _, err := NewDecoder(bytes.NewReader(data)).Decode(v)
+ return err
+}
+
+// Decode the TOML data into the pointer v.
+//
+// See [Decoder] for a description of the decoding process.
+func Decode(data string, v interface{}) (MetaData, error) {
+ return NewDecoder(strings.NewReader(data)).Decode(v)
+}
+
+// DecodeFile reads the contents of a file and decodes it with [Decode].
+func DecodeFile(path string, v interface{}) (MetaData, error) {
+ fp, err := os.Open(path)
+ if err != nil {
+ return MetaData{}, err
+ }
+ defer fp.Close()
+ return NewDecoder(fp).Decode(v)
+}
+
+// Primitive is a TOML value that hasn't been decoded into a Go value.
+//
+// This type can be used for any value, which will cause decoding to be delayed.
+// You can use [PrimitiveDecode] to "manually" decode these values.
+//
+// NOTE: The underlying representation of a `Primitive` value is subject to
+// change. Do not rely on it.
+//
+// NOTE: Primitive values are still parsed, so using them will only avoid the
+// overhead of reflection. They can be useful when you don't know the exact type
+// of TOML data until runtime.
+type Primitive struct {
+ undecoded interface{}
+ context Key
+}
+
+// The significand precision for float32 and float64 is 24 and 53 bits; this is
+// the range a natural number can be stored in a float without loss of data.
+const (
+ maxSafeFloat32Int = 16777215 // 2^24-1
+ maxSafeFloat64Int = int64(9007199254740991) // 2^53-1
+)
+
+// Decoder decodes TOML data.
+//
+// TOML tables correspond to Go structs or maps; they can be used
+// interchangeably, but structs offer better type safety.
+//
+// TOML table arrays correspond to either a slice of structs or a slice of maps.
+//
+// TOML datetimes correspond to [time.Time]. Local datetimes are parsed in the
+// local timezone.
+//
+// [time.Duration] types are treated as nanoseconds if the TOML value is an
+// integer, or they're parsed with time.ParseDuration() if they're strings.
+//
+// All other TOML types (float, string, int, bool and array) correspond to the
+// obvious Go types.
+//
+// An exception to the above rules is if a type implements the TextUnmarshaler
+// interface, in which case any primitive TOML value (floats, strings, integers,
+// booleans, datetimes) will be converted to a []byte and given to the value's
+// UnmarshalText method. See the Unmarshaler example for a demonstration with
+// email addresses.
+//
+// # Key mapping
+//
+// TOML keys can map to either keys in a Go map or field names in a Go struct.
+// The special `toml` struct tag can be used to map TOML keys to struct fields
+// that don't match the key name exactly (see the example). A case-insensitive
+// match to struct field names will be tried if an exact match can't be found.
+//
+// The mapping between TOML values and Go values is loose. That is, there may
+// exist TOML values that cannot be placed into your representation, and there
+// may be parts of your representation that do not correspond to TOML values.
+// This loose mapping can be made stricter by using the IsDefined and/or
+// Undecoded methods on the MetaData returned.
+//
+// This decoder does not handle cyclic types. Decode will not terminate if a
+// cyclic type is passed.
+type Decoder struct {
+ r io.Reader
+}
+
+// NewDecoder creates a new Decoder.
+func NewDecoder(r io.Reader) *Decoder {
+ return &Decoder{r: r}
+}
+
+var (
+ unmarshalToml = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
+ unmarshalText = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
+ primitiveType = reflect.TypeOf((*Primitive)(nil)).Elem()
+)
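+
+// The (*T)(nil) pattern above is the usual way to get the reflect.Type of an
+// interface: a typed nil pointer is constructed and Elem() yields the
+// pointed-to interface type, without allocating a value.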
+
+// Decode TOML data into the pointer `v`.
+func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
+ rv := reflect.ValueOf(v)
+ if rv.Kind() != reflect.Ptr {
+ s := "%q"
+ if reflect.TypeOf(v) == nil {
+ s = "%v"
+ }
+
+ return MetaData{}, fmt.Errorf("toml: cannot decode to non-pointer "+s, reflect.TypeOf(v))
+ }
+ if rv.IsNil() {
+ return MetaData{}, fmt.Errorf("toml: cannot decode to nil value of %q", reflect.TypeOf(v))
+ }
+
+ // Check if this is a supported type: struct, map, interface{}, or something
+ // that implements UnmarshalTOML or UnmarshalText.
+ rv = indirect(rv)
+ rt := rv.Type()
+ if rv.Kind() != reflect.Struct && rv.Kind() != reflect.Map &&
+ !(rv.Kind() == reflect.Interface && rv.NumMethod() == 0) &&
+ !rt.Implements(unmarshalToml) && !rt.Implements(unmarshalText) {
+ return MetaData{}, fmt.Errorf("toml: cannot decode to type %s", rt)
+ }
+
+ // TODO: parser should read from io.Reader? Or at the very least, make it
+ // read from []byte rather than string
+ data, err := ioutil.ReadAll(dec.r)
+ if err != nil {
+ return MetaData{}, err
+ }
+
+ p, err := parse(string(data))
+ if err != nil {
+ return MetaData{}, err
+ }
+
+ md := MetaData{
+ mapping: p.mapping,
+ keyInfo: p.keyInfo,
+ keys: p.ordered,
+ decoded: make(map[string]struct{}, len(p.ordered)),
+ context: nil,
+ data: data,
+ }
+ return md, md.unify(p.mapping, rv)
+}
+
+// PrimitiveDecode is just like the other Decode* functions, except it decodes a
+// TOML value that has already been parsed. Valid primitive values can *only* be
+// obtained from values filled by the decoder functions, including this method.
+// (i.e., v may contain more [Primitive] values.)
+//
+// Meta data for primitive values is included in the meta data returned by the
+// Decode* functions with one exception: keys returned by the Undecoded method
+// will only reflect keys that were decoded. Namely, any keys hidden behind a
+// Primitive will be considered undecoded. Executing this method will update the
+// undecoded keys in the meta data. (See the example.)
+func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
+ md.context = primValue.context
+ defer func() { md.context = nil }()
+ return md.unify(primValue.undecoded, rvalue(v))
+}
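+
+// A usage sketch for delayed decoding (field names hypothetical, for
+// illustration only):
+//
+//	var root struct {
+//		Server Primitive
+//	}
+//	md, err := Decode(blob, &root) // blob: a TOML string
+//	// ... decide on a concrete type at runtime, then:
+//	var srv map[string]interface{}
+//	err = md.PrimitiveDecode(root.Server, &srv)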
+
+// unify performs a sort of type unification based on the structure of `rv`,
+// which is the client representation.
+//
+// Any type mismatch produces an error. Finding a type that we don't know
+// how to handle produces an unsupported type error.
+func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
+ // Special case. Look for a `Primitive` value.
+ // TODO: #76 would make this superfluous after implemented.
+ if rv.Type() == primitiveType {
+ // Save the undecoded data and the key context into the primitive
+ // value.
+ context := make(Key, len(md.context))
+ copy(context, md.context)
+ rv.Set(reflect.ValueOf(Primitive{
+ undecoded: data,
+ context: context,
+ }))
+ return nil
+ }
+
+ rvi := rv.Interface()
+ if v, ok := rvi.(Unmarshaler); ok {
+ return v.UnmarshalTOML(data)
+ }
+ if v, ok := rvi.(encoding.TextUnmarshaler); ok {
+ return md.unifyText(data, v)
+ }
+
+ // TODO:
+ // The behavior here is incorrect whenever a Go type satisfies the
+ // encoding.TextUnmarshaler interface but also corresponds to a TOML hash or
+ // array. In particular, the unmarshaler should only be applied to primitive
+ // TOML values. But at this point, it will be applied to all kinds of values
+ // and produce an incorrect error whenever those values are hashes or arrays
+ // (including arrays of tables).
+
+ k := rv.Kind()
+
+ if k >= reflect.Int && k <= reflect.Uint64 {
+ return md.unifyInt(data, rv)
+ }
+ switch k {
+ case reflect.Ptr:
+ elem := reflect.New(rv.Type().Elem())
+ err := md.unify(data, reflect.Indirect(elem))
+ if err != nil {
+ return err
+ }
+ rv.Set(elem)
+ return nil
+ case reflect.Struct:
+ return md.unifyStruct(data, rv)
+ case reflect.Map:
+ return md.unifyMap(data, rv)
+ case reflect.Array:
+ return md.unifyArray(data, rv)
+ case reflect.Slice:
+ return md.unifySlice(data, rv)
+ case reflect.String:
+ return md.unifyString(data, rv)
+ case reflect.Bool:
+ return md.unifyBool(data, rv)
+ case reflect.Interface:
+ if rv.NumMethod() > 0 { /// Only empty interfaces are supported.
+ return md.e("unsupported type %s", rv.Type())
+ }
+ return md.unifyAnything(data, rv)
+ case reflect.Float32, reflect.Float64:
+ return md.unifyFloat64(data, rv)
+ }
+ return md.e("unsupported type %s", rv.Kind())
+}
+
+func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
+ tmap, ok := mapping.(map[string]interface{})
+ if !ok {
+ if mapping == nil {
+ return nil
+ }
+ return md.e("type mismatch for %s: expected table but found %T",
+ rv.Type().String(), mapping)
+ }
+
+ for key, datum := range tmap {
+ var f *field
+ fields := cachedTypeFields(rv.Type())
+ for i := range fields {
+ ff := &fields[i]
+ if ff.name == key {
+ f = ff
+ break
+ }
+ if f == nil && strings.EqualFold(ff.name, key) {
+ f = ff
+ }
+ }
+ if f != nil {
+ subv := rv
+ for _, i := range f.index {
+ subv = indirect(subv.Field(i))
+ }
+
+ if isUnifiable(subv) {
+ md.decoded[md.context.add(key).String()] = struct{}{}
+ md.context = append(md.context, key)
+
+ err := md.unify(datum, subv)
+ if err != nil {
+ return err
+ }
+ md.context = md.context[0 : len(md.context)-1]
+ } else if f.name != "" {
+ return md.e("cannot write unexported field %s.%s", rv.Type().String(), f.name)
+ }
+ }
+ }
+ return nil
+}
+
+func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
+ keyType := rv.Type().Key().Kind()
+ if keyType != reflect.String && keyType != reflect.Interface {
+ return fmt.Errorf("toml: cannot decode to a map with non-string key type (%s in %q)",
+ keyType, rv.Type())
+ }
+
+ tmap, ok := mapping.(map[string]interface{})
+ if !ok {
+ if tmap == nil {
+ return nil
+ }
+ return md.badtype("map", mapping)
+ }
+ if rv.IsNil() {
+ rv.Set(reflect.MakeMap(rv.Type()))
+ }
+ for k, v := range tmap {
+ md.decoded[md.context.add(k).String()] = struct{}{}
+ md.context = append(md.context, k)
+
+ rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
+
+ err := md.unify(v, indirect(rvval))
+ if err != nil {
+ return err
+ }
+ md.context = md.context[0 : len(md.context)-1]
+
+ rvkey := indirect(reflect.New(rv.Type().Key()))
+
+ switch keyType {
+ case reflect.Interface:
+ rvkey.Set(reflect.ValueOf(k))
+ case reflect.String:
+ rvkey.SetString(k)
+ }
+
+ rv.SetMapIndex(rvkey, rvval)
+ }
+ return nil
+}
+
+func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
+ datav := reflect.ValueOf(data)
+ if datav.Kind() != reflect.Slice {
+ if !datav.IsValid() {
+ return nil
+ }
+ return md.badtype("slice", data)
+ }
+ if l := datav.Len(); l != rv.Len() {
+ return md.e("expected array length %d; got TOML array of length %d", rv.Len(), l)
+ }
+ return md.unifySliceArray(datav, rv)
+}
+
+func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
+ datav := reflect.ValueOf(data)
+ if datav.Kind() != reflect.Slice {
+ if !datav.IsValid() {
+ return nil
+ }
+ return md.badtype("slice", data)
+ }
+ n := datav.Len()
+ if rv.IsNil() || rv.Cap() < n {
+ rv.Set(reflect.MakeSlice(rv.Type(), n, n))
+ }
+ rv.SetLen(n)
+ return md.unifySliceArray(datav, rv)
+}
+
+func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
+ l := data.Len()
+ for i := 0; i < l; i++ {
+ err := md.unify(data.Index(i).Interface(), indirect(rv.Index(i)))
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
+ _, ok := rv.Interface().(json.Number)
+ if ok {
+ if i, ok := data.(int64); ok {
+ rv.SetString(strconv.FormatInt(i, 10))
+ } else if f, ok := data.(float64); ok {
+ rv.SetString(strconv.FormatFloat(f, 'f', -1, 64))
+ } else {
+ return md.badtype("string", data)
+ }
+ return nil
+ }
+
+ if s, ok := data.(string); ok {
+ rv.SetString(s)
+ return nil
+ }
+ return md.badtype("string", data)
+}
+
+func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
+ rvk := rv.Kind()
+
+ if num, ok := data.(float64); ok {
+ switch rvk {
+ case reflect.Float32:
+ if num < -math.MaxFloat32 || num > math.MaxFloat32 {
+ return md.parseErr(errParseRange{i: num, size: rvk.String()})
+ }
+ fallthrough
+ case reflect.Float64:
+ rv.SetFloat(num)
+ default:
+ panic("bug")
+ }
+ return nil
+ }
+
+ if num, ok := data.(int64); ok {
+ if (rvk == reflect.Float32 && (num < -maxSafeFloat32Int || num > maxSafeFloat32Int)) ||
+ (rvk == reflect.Float64 && (num < -maxSafeFloat64Int || num > maxSafeFloat64Int)) {
+ return md.parseErr(errParseRange{i: num, size: rvk.String()})
+ }
+ rv.SetFloat(float64(num))
+ return nil
+ }
+
+ return md.badtype("float", data)
+}
+
+func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
+ _, ok := rv.Interface().(time.Duration)
+ if ok {
+ // Parse as string duration, and fall back to regular integer parsing
+ // (as nanosecond) if this is not a string.
+ if s, ok := data.(string); ok {
+ dur, err := time.ParseDuration(s)
+ if err != nil {
+ return md.parseErr(errParseDuration{s})
+ }
+ rv.SetInt(int64(dur))
+ return nil
+ }
+ }
+
+ num, ok := data.(int64)
+ if !ok {
+ return md.badtype("integer", data)
+ }
+
+ rvk := rv.Kind()
+ switch {
+ case rvk >= reflect.Int && rvk <= reflect.Int64:
+ if (rvk == reflect.Int8 && (num < math.MinInt8 || num > math.MaxInt8)) ||
+ (rvk == reflect.Int16 && (num < math.MinInt16 || num > math.MaxInt16)) ||
+ (rvk == reflect.Int32 && (num < math.MinInt32 || num > math.MaxInt32)) {
+ return md.parseErr(errParseRange{i: num, size: rvk.String()})
+ }
+ rv.SetInt(num)
+ case rvk >= reflect.Uint && rvk <= reflect.Uint64:
+ unum := uint64(num)
+ if rvk == reflect.Uint8 && (num < 0 || unum > math.MaxUint8) ||
+ rvk == reflect.Uint16 && (num < 0 || unum > math.MaxUint16) ||
+ rvk == reflect.Uint32 && (num < 0 || unum > math.MaxUint32) {
+ return md.parseErr(errParseRange{i: num, size: rvk.String()})
+ }
+ rv.SetUint(unum)
+ default:
+ panic("unreachable")
+ }
+ return nil
+}
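+
+// For example, both of these TOML values decode into a time.Duration field:
+//
+//	timeout = "5m10s"      # string, parsed with time.ParseDuration
+//	timeout = 310000000000 # plain integer, taken as nanoseconds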
+
+func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
+ if b, ok := data.(bool); ok {
+ rv.SetBool(b)
+ return nil
+ }
+ return md.badtype("boolean", data)
+}
+
+func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
+ rv.Set(reflect.ValueOf(data))
+ return nil
+}
+
+func (md *MetaData) unifyText(data interface{}, v encoding.TextUnmarshaler) error {
+ var s string
+ switch sdata := data.(type) {
+ case Marshaler:
+ text, err := sdata.MarshalTOML()
+ if err != nil {
+ return err
+ }
+ s = string(text)
+ case encoding.TextMarshaler:
+ text, err := sdata.MarshalText()
+ if err != nil {
+ return err
+ }
+ s = string(text)
+ case fmt.Stringer:
+ s = sdata.String()
+ case string:
+ s = sdata
+ case bool:
+ s = fmt.Sprintf("%v", sdata)
+ case int64:
+ s = fmt.Sprintf("%d", sdata)
+ case float64:
+ s = fmt.Sprintf("%f", sdata)
+ default:
+ return md.badtype("primitive (string-like)", data)
+ }
+ if err := v.UnmarshalText([]byte(s)); err != nil {
+ return err
+ }
+ return nil
+}
+
+func (md *MetaData) badtype(dst string, data interface{}) error {
+ return md.e("incompatible types: TOML value has type %T; destination has type %s", data, dst)
+}
+
+func (md *MetaData) parseErr(err error) error {
+ k := md.context.String()
+ return ParseError{
+ LastKey: k,
+ Position: md.keyInfo[k].pos,
+ Line: md.keyInfo[k].pos.Line,
+ err: err,
+ input: string(md.data),
+ }
+}
+
+func (md *MetaData) e(format string, args ...interface{}) error {
+ f := "toml: "
+ if len(md.context) > 0 {
+ f = fmt.Sprintf("toml: (last key %q): ", md.context)
+ p := md.keyInfo[md.context.String()].pos
+ if p.Line > 0 {
+ f = fmt.Sprintf("toml: line %d (last key %q): ", p.Line, md.context)
+ }
+ }
+ return fmt.Errorf(f+format, args...)
+}
+
+// rvalue returns a reflect.Value of `v`. All pointers are resolved.
+func rvalue(v interface{}) reflect.Value {
+ return indirect(reflect.ValueOf(v))
+}
+
+// indirect returns the value pointed to by a pointer.
+//
+// Pointers are followed until the value is not a pointer. New values are
+// allocated for each nil pointer.
+//
+// An exception to this rule is if the value satisfies an interface of interest
+// to us (like encoding.TextUnmarshaler).
+func indirect(v reflect.Value) reflect.Value {
+ if v.Kind() != reflect.Ptr {
+ if v.CanSet() {
+ pv := v.Addr()
+ pvi := pv.Interface()
+ if _, ok := pvi.(encoding.TextUnmarshaler); ok {
+ return pv
+ }
+ if _, ok := pvi.(Unmarshaler); ok {
+ return pv
+ }
+ }
+ return v
+ }
+ if v.IsNil() {
+ v.Set(reflect.New(v.Type().Elem()))
+ }
+ return indirect(reflect.Indirect(v))
+}
+
+func isUnifiable(rv reflect.Value) bool {
+ if rv.CanSet() {
+ return true
+ }
+ rvi := rv.Interface()
+ if _, ok := rvi.(encoding.TextUnmarshaler); ok {
+ return true
+ }
+ if _, ok := rvi.(Unmarshaler); ok {
+ return true
+ }
+ return false
+}
diff --git a/vendor/github.com/BurntSushi/toml/decode_go116.go b/vendor/github.com/BurntSushi/toml/decode_go116.go
new file mode 100644
index 000000000..086d0b686
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/decode_go116.go
@@ -0,0 +1,19 @@
+//go:build go1.16
+// +build go1.16
+
+package toml
+
+import (
+ "io/fs"
+)
+
+// DecodeFS reads the contents of a file from [fs.FS] and decodes it with
+// [Decode].
+func DecodeFS(fsys fs.FS, path string, v interface{}) (MetaData, error) {
+ fp, err := fsys.Open(path)
+ if err != nil {
+ return MetaData{}, err
+ }
+ defer fp.Close()
+ return NewDecoder(fp).Decode(v)
+}
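+
+// A usage sketch (os.DirFS is from the standard library; the path is
+// hypothetical):
+//
+//	var conf map[string]interface{}
+//	_, err := DecodeFS(os.DirFS("/etc/app"), "config.toml", &conf)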
diff --git a/vendor/github.com/BurntSushi/toml/deprecated.go b/vendor/github.com/BurntSushi/toml/deprecated.go
new file mode 100644
index 000000000..b9e309717
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/deprecated.go
@@ -0,0 +1,29 @@
+package toml
+
+import (
+ "encoding"
+ "io"
+)
+
+// TextMarshaler is an alias for encoding.TextMarshaler.
+//
+// Deprecated: use encoding.TextMarshaler
+type TextMarshaler encoding.TextMarshaler
+
+// TextUnmarshaler is an alias for encoding.TextUnmarshaler.
+//
+// Deprecated: use encoding.TextUnmarshaler
+type TextUnmarshaler encoding.TextUnmarshaler
+
+// PrimitiveDecode is an alias for MetaData.PrimitiveDecode().
+//
+// Deprecated: use MetaData.PrimitiveDecode.
+func PrimitiveDecode(primValue Primitive, v interface{}) error {
+ md := MetaData{decoded: make(map[string]struct{})}
+ return md.unify(primValue.undecoded, rvalue(v))
+}
+
+// DecodeReader is an alias for NewDecoder(r).Decode(v).
+//
+// Deprecated: use NewDecoder(reader).Decode(&value).
+func DecodeReader(r io.Reader, v interface{}) (MetaData, error) { return NewDecoder(r).Decode(v) }
diff --git a/vendor/github.com/BurntSushi/toml/doc.go b/vendor/github.com/BurntSushi/toml/doc.go
new file mode 100644
index 000000000..81a7c0fe9
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/doc.go
@@ -0,0 +1,11 @@
+// Package toml implements decoding and encoding of TOML files.
+//
+// This package supports TOML v1.0.0, as specified at https://toml.io
+//
+// There is also support for delaying decoding with the Primitive type, and
+// querying the set of keys in a TOML document with the MetaData type.
+//
+// The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
+// and can be used to verify whether a TOML document is valid. It can also be
+// used to print the type of each key.
+package toml
diff --git a/vendor/github.com/BurntSushi/toml/encode.go b/vendor/github.com/BurntSushi/toml/encode.go
new file mode 100644
index 000000000..9cd25d757
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/encode.go
@@ -0,0 +1,759 @@
+package toml
+
+import (
+ "bufio"
+ "encoding"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "io"
+ "math"
+ "reflect"
+ "sort"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/BurntSushi/toml/internal"
+)
+
+type tomlEncodeError struct{ error }
+
+var (
+ errArrayNilElement = errors.New("toml: cannot encode array with nil element")
+ errNonString = errors.New("toml: cannot encode a map with non-string key type")
+ errNoKey = errors.New("toml: top-level values must be Go maps or structs")
+ errAnything = errors.New("") // used in testing
+)
+
+var dblQuotedReplacer = strings.NewReplacer(
+ "\"", "\\\"",
+ "\\", "\\\\",
+ "\x00", `\u0000`,
+ "\x01", `\u0001`,
+ "\x02", `\u0002`,
+ "\x03", `\u0003`,
+ "\x04", `\u0004`,
+ "\x05", `\u0005`,
+ "\x06", `\u0006`,
+ "\x07", `\u0007`,
+ "\b", `\b`,
+ "\t", `\t`,
+ "\n", `\n`,
+ "\x0b", `\u000b`,
+ "\f", `\f`,
+ "\r", `\r`,
+ "\x0e", `\u000e`,
+ "\x0f", `\u000f`,
+ "\x10", `\u0010`,
+ "\x11", `\u0011`,
+ "\x12", `\u0012`,
+ "\x13", `\u0013`,
+ "\x14", `\u0014`,
+ "\x15", `\u0015`,
+ "\x16", `\u0016`,
+ "\x17", `\u0017`,
+ "\x18", `\u0018`,
+ "\x19", `\u0019`,
+ "\x1a", `\u001a`,
+ "\x1b", `\u001b`,
+ "\x1c", `\u001c`,
+ "\x1d", `\u001d`,
+ "\x1e", `\u001e`,
+ "\x1f", `\u001f`,
+ "\x7f", `\u007f`,
+)
+
+var (
+ marshalToml = reflect.TypeOf((*Marshaler)(nil)).Elem()
+ marshalText = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
+ timeType = reflect.TypeOf((*time.Time)(nil)).Elem()
+)
+
+// Marshaler is the interface implemented by types that can marshal themselves
+// into valid TOML.
+type Marshaler interface {
+ MarshalTOML() ([]byte, error)
+}
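+
+// A minimal sketch of a Marshaler implementation (the port type is
+// hypothetical, for illustration only); MarshalTOML must return valid raw
+// TOML, here a bare integer:
+//
+//	type port int
+//
+//	func (p port) MarshalTOML() ([]byte, error) {
+//		return []byte(strconv.Itoa(int(p))), nil
+//	}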
+
+// Encoder encodes a Go value to a TOML document.
+//
+// The mapping between Go values and TOML values should be precisely the same as
+// for [Decode].
+//
+// time.Time is encoded as an RFC 3339 string, and time.Duration as its string
+// representation.
+//
+// The [Marshaler] and [encoding.TextMarshaler] interfaces are supported, to
+// encode the value as custom TOML.
+//
+// If you want to write arbitrary binary data then you will need to use
+// something like base64 since TOML does not have any binary types.
+//
+// When encoding TOML hashes (Go maps or structs), keys without any sub-hashes
+// are encoded first.
+//
+// Go maps will be sorted alphabetically by key for deterministic output.
+//
+// The toml struct tag can be used to provide the key name; if omitted the
+// struct field name will be used. If the "omitempty" option is present, the
+// following values will be skipped:
+//
+// - arrays, slices, maps, and strings with a length of 0
+// - structs with all zero values
+// - bool false
+//
+// If the "omitzero" option is given, all int and float types with a value of
+// 0 will be skipped.
+//
+// Encoding Go values without a corresponding TOML representation will return an
+// error. Examples of this include maps with non-string keys, slices with nil
+// elements, embedded non-struct types, and nested slices containing maps or
+// structs. (e.g. [][]map[string]string is not allowed but []map[string]string
+// is okay, as is []map[string][]string).
+//
+// NOTE: only exported keys are encoded due to the use of reflection. Unexported
+// keys are silently discarded.
+type Encoder struct {
+ // String to use for a single indentation level; default is two spaces.
+ Indent string
+
+ w *bufio.Writer
+ hasWritten bool // written any output to w yet?
+}
+
+// NewEncoder creates a new Encoder.
+func NewEncoder(w io.Writer) *Encoder {
+ return &Encoder{
+ w: bufio.NewWriter(w),
+ Indent: " ",
+ }
+}
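+
+// A usage sketch (the bytes.Buffer and the map literal are hypothetical,
+// for illustration only):
+//
+//	buf := new(bytes.Buffer)
+//	enc := NewEncoder(buf)
+//	enc.Indent = "\t" // tabs instead of the default two spaces
+//	err := enc.Encode(map[string]string{"key": "value"})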
+
+// Encode writes a TOML representation of the Go value to the [Encoder]'s writer.
+//
+// An error is returned if the value given cannot be encoded to a valid TOML
+// document.
+func (enc *Encoder) Encode(v interface{}) error {
+ rv := eindirect(reflect.ValueOf(v))
+ err := enc.safeEncode(Key([]string{}), rv)
+ if err != nil {
+ return err
+ }
+ return enc.w.Flush()
+}
+
+func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
+ defer func() {
+ if r := recover(); r != nil {
+ if terr, ok := r.(tomlEncodeError); ok {
+ err = terr.error
+ return
+ }
+ panic(r)
+ }
+ }()
+ enc.encode(key, rv)
+ return nil
+}
+
+func (enc *Encoder) encode(key Key, rv reflect.Value) {
+ // If we can marshal the type to text, then we use that. This prevents the
+ // encoder from handling these types as generic structs (or whatever the
+ // underlying type of a TextMarshaler is).
+ switch {
+ case isMarshaler(rv):
+ enc.writeKeyValue(key, rv, false)
+ return
+ case rv.Type() == primitiveType: // TODO: #76 would make this superfluous after implemented.
+ enc.encode(key, reflect.ValueOf(rv.Interface().(Primitive).undecoded))
+ return
+ }
+
+ k := rv.Kind()
+ switch k {
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
+ reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
+ reflect.Uint64,
+ reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
+ enc.writeKeyValue(key, rv, false)
+ case reflect.Array, reflect.Slice:
+ if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
+ enc.eArrayOfTables(key, rv)
+ } else {
+ enc.writeKeyValue(key, rv, false)
+ }
+ case reflect.Interface:
+ if rv.IsNil() {
+ return
+ }
+ enc.encode(key, rv.Elem())
+ case reflect.Map:
+ if rv.IsNil() {
+ return
+ }
+ enc.eTable(key, rv)
+ case reflect.Ptr:
+ if rv.IsNil() {
+ return
+ }
+ enc.encode(key, rv.Elem())
+ case reflect.Struct:
+ enc.eTable(key, rv)
+ default:
+ encPanic(fmt.Errorf("unsupported type for key '%s': %s", key, k))
+ }
+}
+
+// eElement encodes any value that can be an array element.
+func (enc *Encoder) eElement(rv reflect.Value) {
+ switch v := rv.Interface().(type) {
+ case time.Time: // Using TextMarshaler adds extra quotes, which we don't want.
+ format := time.RFC3339Nano
+ switch v.Location() {
+ case internal.LocalDatetime:
+ format = "2006-01-02T15:04:05.999999999"
+ case internal.LocalDate:
+ format = "2006-01-02"
+ case internal.LocalTime:
+ format = "15:04:05.999999999"
+ }
+ switch v.Location() {
+ default:
+ enc.wf(v.Format(format))
+ case internal.LocalDatetime, internal.LocalDate, internal.LocalTime:
+ enc.wf(v.In(time.UTC).Format(format))
+ }
+ return
+ case Marshaler:
+ s, err := v.MarshalTOML()
+ if err != nil {
+ encPanic(err)
+ }
+ if s == nil {
+ encPanic(errors.New("MarshalTOML returned nil and no error"))
+ }
+ enc.w.Write(s)
+ return
+ case encoding.TextMarshaler:
+ s, err := v.MarshalText()
+ if err != nil {
+ encPanic(err)
+ }
+ if s == nil {
+ encPanic(errors.New("MarshalText returned nil and no error"))
+ }
+ enc.writeQuoted(string(s))
+ return
+ case time.Duration:
+ enc.writeQuoted(v.String())
+ return
+ case json.Number:
+ n, _ := rv.Interface().(json.Number)
+
+ if n == "" { /// Useful zero value.
+ enc.w.WriteByte('0')
+ return
+ } else if v, err := n.Int64(); err == nil {
+ enc.eElement(reflect.ValueOf(v))
+ return
+ } else if v, err := n.Float64(); err == nil {
+ enc.eElement(reflect.ValueOf(v))
+ return
+ }
+ encPanic(fmt.Errorf("unable to convert %q to int64 or float64", n))
+ }
+
+ switch rv.Kind() {
+ case reflect.Ptr:
+ enc.eElement(rv.Elem())
+ return
+ case reflect.String:
+ enc.writeQuoted(rv.String())
+ case reflect.Bool:
+ enc.wf(strconv.FormatBool(rv.Bool()))
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ enc.wf(strconv.FormatInt(rv.Int(), 10))
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+ enc.wf(strconv.FormatUint(rv.Uint(), 10))
+ case reflect.Float32:
+ f := rv.Float()
+ if math.IsNaN(f) {
+ enc.wf("nan")
+ } else if math.IsInf(f, 0) {
+ enc.wf("%cinf", map[bool]byte{true: '-', false: '+'}[math.Signbit(f)])
+ } else {
+ enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 32)))
+ }
+ case reflect.Float64:
+ f := rv.Float()
+ if math.IsNaN(f) {
+ enc.wf("nan")
+ } else if math.IsInf(f, 0) {
+ enc.wf("%cinf", map[bool]byte{true: '-', false: '+'}[math.Signbit(f)])
+ } else {
+ enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 64)))
+ }
+ case reflect.Array, reflect.Slice:
+ enc.eArrayOrSliceElement(rv)
+ case reflect.Struct:
+ enc.eStruct(nil, rv, true)
+ case reflect.Map:
+ enc.eMap(nil, rv, true)
+ case reflect.Interface:
+ enc.eElement(rv.Elem())
+ default:
+ encPanic(fmt.Errorf("unexpected type: %T", rv.Interface()))
+ }
+}
+
+// By the TOML spec, all floats must have a decimal with at least one number on
+// either side.
+func floatAddDecimal(fstr string) string {
+ if !strings.Contains(fstr, ".") {
+ return fstr + ".0"
+ }
+ return fstr
+}
+
+func (enc *Encoder) writeQuoted(s string) {
+ enc.wf("\"%s\"", dblQuotedReplacer.Replace(s))
+}
+
+func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
+ length := rv.Len()
+ enc.wf("[")
+ for i := 0; i < length; i++ {
+ elem := eindirect(rv.Index(i))
+ enc.eElement(elem)
+ if i != length-1 {
+ enc.wf(", ")
+ }
+ }
+ enc.wf("]")
+}
+
+func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
+ if len(key) == 0 {
+ encPanic(errNoKey)
+ }
+ for i := 0; i < rv.Len(); i++ {
+ trv := eindirect(rv.Index(i))
+ if isNil(trv) {
+ continue
+ }
+ enc.newline()
+ enc.wf("%s[[%s]]", enc.indentStr(key), key)
+ enc.newline()
+ enc.eMapOrStruct(key, trv, false)
+ }
+}
+
+func (enc *Encoder) eTable(key Key, rv reflect.Value) {
+ if len(key) == 1 {
+ // Output an extra newline between top-level tables.
+ // (The newline isn't written if nothing else has been written though.)
+ enc.newline()
+ }
+ if len(key) > 0 {
+ enc.wf("%s[%s]", enc.indentStr(key), key)
+ enc.newline()
+ }
+ enc.eMapOrStruct(key, rv, false)
+}
+
+func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value, inline bool) {
+ switch rv.Kind() {
+ case reflect.Map:
+ enc.eMap(key, rv, inline)
+ case reflect.Struct:
+ enc.eStruct(key, rv, inline)
+ default:
+ // Should never happen?
+ panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
+ }
+}
+
+func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
+ rt := rv.Type()
+ if rt.Key().Kind() != reflect.String {
+ encPanic(errNonString)
+ }
+
+ // Sort keys so that we have deterministic output. And write keys directly
+ // underneath this key first, before writing sub-structs or sub-maps.
+ var mapKeysDirect, mapKeysSub []string
+ for _, mapKey := range rv.MapKeys() {
+ k := mapKey.String()
+ if typeIsTable(tomlTypeOfGo(eindirect(rv.MapIndex(mapKey)))) {
+ mapKeysSub = append(mapKeysSub, k)
+ } else {
+ mapKeysDirect = append(mapKeysDirect, k)
+ }
+ }
+
+ var writeMapKeys = func(mapKeys []string, trailC bool) {
+ sort.Strings(mapKeys)
+ for i, mapKey := range mapKeys {
+ val := eindirect(rv.MapIndex(reflect.ValueOf(mapKey)))
+ if isNil(val) {
+ continue
+ }
+
+ if inline {
+ enc.writeKeyValue(Key{mapKey}, val, true)
+ if trailC || i != len(mapKeys)-1 {
+ enc.wf(", ")
+ }
+ } else {
+ enc.encode(key.add(mapKey), val)
+ }
+ }
+ }
+
+ if inline {
+ enc.wf("{")
+ }
+ writeMapKeys(mapKeysDirect, len(mapKeysSub) > 0)
+ writeMapKeys(mapKeysSub, false)
+ if inline {
+ enc.wf("}")
+ }
+}
+
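+// On 32-bit platforms ^uint(0)>>63 is 0 (the shift exceeds the word width,
+// so the result is zero) and 32<<0 == 32, making the comparison true; on
+// 64-bit it is 1, giving 32<<1 == 64, and the comparison is false.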
+const is32Bit = (32 << (^uint(0) >> 63)) == 32
+
+func pointerTo(t reflect.Type) reflect.Type {
+ if t.Kind() == reflect.Ptr {
+ return pointerTo(t.Elem())
+ }
+ return t
+}
+
+func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
+ // Write keys for fields directly under this key first, because if we write
+ // a field that creates a new table then all keys under it will be in that
+ // table (not the one we're writing here).
+ //
+ // Fields is a [][]int: for fieldsDirect this always has one entry (the
+ // struct index). For fieldsSub it contains two entries: the parent field
+ // index from tv, and the field indexes for the fields of the sub.
+ var (
+ rt = rv.Type()
+ fieldsDirect, fieldsSub [][]int
+ addFields func(rt reflect.Type, rv reflect.Value, start []int)
+ )
+ addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
+ for i := 0; i < rt.NumField(); i++ {
+ f := rt.Field(i)
+ isEmbed := f.Anonymous && pointerTo(f.Type).Kind() == reflect.Struct
+ if f.PkgPath != "" && !isEmbed { /// Skip unexported fields.
+ continue
+ }
+ opts := getOptions(f.Tag)
+ if opts.skip {
+ continue
+ }
+
+ frv := eindirect(rv.Field(i))
+
+ if is32Bit {
+ // Copy so it works correctly on 32-bit archs; not clear why this
+ // is needed. See #314, and https://www.reddit.com/r/golang/comments/pnx8v4
+ // This also works fine on 64-bit, but 32-bit archs are somewhat
+ // rare and this is a wee bit faster.
+ copyStart := make([]int, len(start))
+ copy(copyStart, start)
+ start = copyStart
+ }
+
+ // Treat anonymous struct fields with tag names as though they are
+ // not anonymous, like encoding/json does.
+ //
+ // Non-struct anonymous fields use the normal encoding logic.
+ if isEmbed {
+ if getOptions(f.Tag).name == "" && frv.Kind() == reflect.Struct {
+ addFields(frv.Type(), frv, append(start, f.Index...))
+ continue
+ }
+ }
+
+ if typeIsTable(tomlTypeOfGo(frv)) {
+ fieldsSub = append(fieldsSub, append(start, f.Index...))
+ } else {
+ fieldsDirect = append(fieldsDirect, append(start, f.Index...))
+ }
+ }
+ }
+ addFields(rt, rv, nil)
+
+ writeFields := func(fields [][]int) {
+ for _, fieldIndex := range fields {
+ fieldType := rt.FieldByIndex(fieldIndex)
+ fieldVal := rv.FieldByIndex(fieldIndex)
+
+ opts := getOptions(fieldType.Tag)
+ if opts.skip {
+ continue
+ }
+ if opts.omitempty && isEmpty(fieldVal) {
+ continue
+ }
+
+ fieldVal = eindirect(fieldVal)
+
+ if isNil(fieldVal) { /// Don't write anything for nil fields.
+ continue
+ }
+
+ keyName := fieldType.Name
+ if opts.name != "" {
+ keyName = opts.name
+ }
+
+ if opts.omitzero && isZero(fieldVal) {
+ continue
+ }
+
+ if inline {
+ enc.writeKeyValue(Key{keyName}, fieldVal, true)
+ if fieldIndex[0] != len(fields)-1 {
+ enc.wf(", ")
+ }
+ } else {
+ enc.encode(key.add(keyName), fieldVal)
+ }
+ }
+ }
+
+ if inline {
+ enc.wf("{")
+ }
+ writeFields(fieldsDirect)
+ writeFields(fieldsSub)
+ if inline {
+ enc.wf("}")
+ }
+}
+
+// tomlTypeOfGo returns the TOML type name of the Go value's type.
+//
+// It is used to determine whether the types of array elements are mixed (which
+// is forbidden). If the Go value is nil, then it is illegal for it to be an
+// array element, and valueIsNil is returned as true.
+//
+// The type may be `nil`, which means no concrete TOML type could be found.
+func tomlTypeOfGo(rv reflect.Value) tomlType {
+ if isNil(rv) || !rv.IsValid() {
+ return nil
+ }
+
+ if rv.Kind() == reflect.Struct {
+ if rv.Type() == timeType {
+ return tomlDatetime
+ }
+ if isMarshaler(rv) {
+ return tomlString
+ }
+ return tomlHash
+ }
+
+ if isMarshaler(rv) {
+ return tomlString
+ }
+
+ switch rv.Kind() {
+ case reflect.Bool:
+ return tomlBool
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
+ reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
+ reflect.Uint64:
+ return tomlInteger
+ case reflect.Float32, reflect.Float64:
+ return tomlFloat
+ case reflect.Array, reflect.Slice:
+ if isTableArray(rv) {
+ return tomlArrayHash
+ }
+ return tomlArray
+ case reflect.Ptr, reflect.Interface:
+ return tomlTypeOfGo(rv.Elem())
+ case reflect.String:
+ return tomlString
+ case reflect.Map:
+ return tomlHash
+ default:
+ encPanic(errors.New("unsupported type: " + rv.Kind().String()))
+ panic("unreachable")
+ }
+}
+
+func isMarshaler(rv reflect.Value) bool {
+ return rv.Type().Implements(marshalText) || rv.Type().Implements(marshalToml)
+}
+
+// isTableArray reports if all entries in the array or slice are a table.
+func isTableArray(arr reflect.Value) bool {
+ if isNil(arr) || !arr.IsValid() || arr.Len() == 0 {
+ return false
+ }
+
+ ret := true
+ for i := 0; i < arr.Len(); i++ {
+ tt := tomlTypeOfGo(eindirect(arr.Index(i)))
+ // Don't allow nil.
+ if tt == nil {
+ encPanic(errArrayNilElement)
+ }
+
+ if ret && !typeEqual(tomlHash, tt) {
+ ret = false
+ }
+ }
+ return ret
+}
+
+type tagOptions struct {
+ skip bool // "-"
+ name string
+ omitempty bool
+ omitzero bool
+}
+
+func getOptions(tag reflect.StructTag) tagOptions {
+ t := tag.Get("toml")
+ if t == "-" {
+ return tagOptions{skip: true}
+ }
+ var opts tagOptions
+ parts := strings.Split(t, ",")
+ opts.name = parts[0]
+ for _, s := range parts[1:] {
+ switch s {
+ case "omitempty":
+ opts.omitempty = true
+ case "omitzero":
+ opts.omitzero = true
+ }
+ }
+ return opts
+}
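+
+// The tags recognized above, as they would appear on a struct (a
+// hypothetical example):
+//
+//	type conf struct {
+//		Name   string `toml:"name,omitempty"`
+//		Count  int    `toml:"count,omitzero"`
+//		Secret string `toml:"-"` // never encoded
+//	}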
+
+func isZero(rv reflect.Value) bool {
+ switch rv.Kind() {
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return rv.Int() == 0
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+ return rv.Uint() == 0
+ case reflect.Float32, reflect.Float64:
+ return rv.Float() == 0.0
+ }
+ return false
+}
+
+func isEmpty(rv reflect.Value) bool {
+ switch rv.Kind() {
+ case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
+ return rv.Len() == 0
+ case reflect.Struct:
+ if rv.Type().Comparable() {
+ return reflect.Zero(rv.Type()).Interface() == rv.Interface()
+ }
+ // Need to also check if all the fields are empty, otherwise something
+ // like this with uncomparable types will always return true:
+ //
+ // type a struct{ field b }
+ // type b struct{ s []string }
+ // s := a{field: b{s: []string{"AAA"}}}
+ for i := 0; i < rv.NumField(); i++ {
+ if !isEmpty(rv.Field(i)) {
+ return false
+ }
+ }
+ return true
+ case reflect.Bool:
+ return !rv.Bool()
+ case reflect.Ptr:
+ return rv.IsNil()
+ }
+ return false
+}
+
+func (enc *Encoder) newline() {
+ if enc.hasWritten {
+ enc.wf("\n")
+ }
+}
+
+// Write a key/value pair:
+//
+// key = <value>
+//
+// This is also used for "k = v" in inline tables; so something like this will
+// be written in three calls:
+//
+// ┌───────────────────┐
+// │ ┌───┐ ┌────┐│
+// v v v v vv
+// key = {k = 1, k2 = 2}
+func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
+ /// Marshaler used on top-level document; call eElement() to just call
+ /// Marshal{TOML,Text}.
+ if len(key) == 0 {
+ enc.eElement(val)
+ return
+ }
+ enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
+ enc.eElement(val)
+ if !inline {
+ enc.newline()
+ }
+}
+
+func (enc *Encoder) wf(format string, v ...interface{}) {
+ _, err := fmt.Fprintf(enc.w, format, v...)
+ if err != nil {
+ encPanic(err)
+ }
+ enc.hasWritten = true
+}
+
+func (enc *Encoder) indentStr(key Key) string {
+ return strings.Repeat(enc.Indent, len(key)-1)
+}
+
+func encPanic(err error) {
+ panic(tomlEncodeError{err})
+}
+
+// Resolve any level of pointers to the actual value (e.g. **string → string).
+func eindirect(v reflect.Value) reflect.Value {
+ if v.Kind() != reflect.Ptr && v.Kind() != reflect.Interface {
+ if isMarshaler(v) {
+ return v
+ }
+ if v.CanAddr() { /// Special case for marshalers; see #358.
+ if pv := v.Addr(); isMarshaler(pv) {
+ return pv
+ }
+ }
+ return v
+ }
+
+ if v.IsNil() {
+ return v
+ }
+
+ return eindirect(v.Elem())
+}
+
+func isNil(rv reflect.Value) bool {
+ switch rv.Kind() {
+ case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
+ return rv.IsNil()
+ default:
+ return false
+ }
+}
diff --git a/vendor/github.com/BurntSushi/toml/error.go b/vendor/github.com/BurntSushi/toml/error.go
new file mode 100644
index 000000000..efd68865b
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/error.go
@@ -0,0 +1,279 @@
+package toml
+
+import (
+ "fmt"
+ "strings"
+)
+
+// ParseError is returned when there is an error parsing the TOML data, such
+// as invalid syntax, duplicate keys, etc.
+//
+// In addition to the error message itself, you can also print detailed location
+// information with context by using [ErrorWithPosition]:
+//
+// toml: error: Key 'fruit' was already created and cannot be used as an array.
+//
+// At line 4, column 2-7:
+//
+// 2 | fruit = []
+// 3 |
+// 4 | [[fruit]] # Not allowed
+// ^^^^^
+//
+// [ErrorWithUsage] can be used to print the above with some more detailed usage
+// guidance:
+//
+// toml: error: newlines not allowed within inline tables
+//
+// At line 1, column 18:
+//
+// 1 | x = [{ key = 42 #
+// ^
+//
+// Error help:
+//
+// Inline tables must always be on a single line:
+//
+// table = {key = 42, second = 43}
+//
+// It is invalid to split them over multiple lines like so:
+//
+// # INVALID
+// table = {
+// key = 42,
+// second = 43
+// }
+//
+// Use regular tables for this:
+//
+// [table]
+// key = 42
+// second = 43
+type ParseError struct {
+ Message string // Short technical message.
+ Usage string // Longer message with usage guidance; may be blank.
+ Position Position // Position of the error
+ LastKey string // Last parsed key, may be blank.
+
+ // Line the error occurred.
+ //
+ // Deprecated: use [Position].
+ Line int
+
+ err error
+ input string
+}
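+
+// A usage sketch for detailed error reporting (errors is the standard
+// library package; err is the error returned by a Decode* function):
+//
+//	var pe ParseError
+//	if errors.As(err, &pe) {
+//		fmt.Println(pe.ErrorWithPosition())
+//	}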
+
+// Position of an error.
+type Position struct {
+ Line int // Line number, starting at 1.
+ Start int // Start of error, as byte offset starting at 0.
+ Len int // Length in bytes.
+}
+
+func (pe ParseError) Error() string {
+ msg := pe.Message
+ if msg == "" { // Error from errorf()
+ msg = pe.err.Error()
+ }
+
+ if pe.LastKey == "" {
+ return fmt.Sprintf("toml: line %d: %s", pe.Position.Line, msg)
+ }
+ return fmt.Sprintf("toml: line %d (last key %q): %s",
+ pe.Position.Line, pe.LastKey, msg)
+}
+
+// ErrorWithPosition returns the error with detailed location context.
+//
+// See the documentation on [ParseError].
+func (pe ParseError) ErrorWithPosition() string {
+ if pe.input == "" { // Should never happen, but just in case.
+ return pe.Error()
+ }
+
+ var (
+ lines = strings.Split(pe.input, "\n")
+ col = pe.column(lines)
+ b = new(strings.Builder)
+ )
+
+ msg := pe.Message
+ if msg == "" {
+ msg = pe.err.Error()
+ }
+
+ // TODO: don't show control characters as literals? This may not show up
+ // well everywhere.
+
+ if pe.Position.Len == 1 {
+ fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d:\n\n",
+ msg, pe.Position.Line, col+1)
+ } else {
+ fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d-%d:\n\n",
+ msg, pe.Position.Line, col, col+pe.Position.Len)
+ }
+ if pe.Position.Line > 2 {
+ fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-2, lines[pe.Position.Line-3])
+ }
+ if pe.Position.Line > 1 {
+ fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-1, lines[pe.Position.Line-2])
+ }
+ fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line, lines[pe.Position.Line-1])
+ fmt.Fprintf(b, "% 10s%s%s\n", "", strings.Repeat(" ", col), strings.Repeat("^", pe.Position.Len))
+ return b.String()
+}
+
+// ErrorWithUsage returns the error with detailed location context and usage
+// guidance.
+//
+// See the documentation on [ParseError].
+func (pe ParseError) ErrorWithUsage() string {
+ m := pe.ErrorWithPosition()
+ if u, ok := pe.err.(interface{ Usage() string }); ok && u.Usage() != "" {
+ lines := strings.Split(strings.TrimSpace(u.Usage()), "\n")
+ for i := range lines {
+ if lines[i] != "" {
+ lines[i] = " " + lines[i]
+ }
+ }
+ return m + "Error help:\n\n" + strings.Join(lines, "\n") + "\n"
+ }
+ return m
+}
+
+func (pe ParseError) column(lines []string) int {
+ var pos, col int
+ for i := range lines {
+ ll := len(lines[i]) + 1 // +1 for the removed newline
+ if pos+ll >= pe.Position.Start {
+ col = pe.Position.Start - pos
+ if col < 0 { // Should never happen, but just in case.
+ col = 0
+ }
+ break
+ }
+ pos += ll
+ }
+
+ return col
+}
+
+type (
+ errLexControl struct{ r rune }
+ errLexEscape struct{ r rune }
+ errLexUTF8 struct{ b byte }
+ errLexInvalidNum struct{ v string }
+ errLexInvalidDate struct{ v string }
+ errLexInlineTableNL struct{}
+ errLexStringNL struct{}
+ errParseRange struct {
+ i interface{} // int or float
+ size string // "int64", "uint16", etc.
+ }
+ errParseDuration struct{ d string }
+)
+
+func (e errLexControl) Error() string {
+ return fmt.Sprintf("TOML files cannot contain control characters: '0x%02x'", e.r)
+}
+func (e errLexControl) Usage() string { return "" }
+
+func (e errLexEscape) Error() string { return fmt.Sprintf(`invalid escape in string '\%c'`, e.r) }
+func (e errLexEscape) Usage() string { return usageEscape }
+func (e errLexUTF8) Error() string { return fmt.Sprintf("invalid UTF-8 byte: 0x%02x", e.b) }
+func (e errLexUTF8) Usage() string { return "" }
+func (e errLexInvalidNum) Error() string { return fmt.Sprintf("invalid number: %q", e.v) }
+func (e errLexInvalidNum) Usage() string { return "" }
+func (e errLexInvalidDate) Error() string { return fmt.Sprintf("invalid date: %q", e.v) }
+func (e errLexInvalidDate) Usage() string { return "" }
+func (e errLexInlineTableNL) Error() string { return "newlines not allowed within inline tables" }
+func (e errLexInlineTableNL) Usage() string { return usageInlineNewline }
+func (e errLexStringNL) Error() string { return "strings cannot contain newlines" }
+func (e errLexStringNL) Usage() string { return usageStringNewline }
+func (e errParseRange) Error() string { return fmt.Sprintf("%v is out of range for %s", e.i, e.size) }
+func (e errParseRange) Usage() string { return usageIntOverflow }
+func (e errParseDuration) Error() string { return fmt.Sprintf("invalid duration: %q", e.d) }
+func (e errParseDuration) Usage() string { return usageDuration }
+
+const usageEscape = `
+A '\' inside a "-delimited string is interpreted as an escape character.
+
+The following escape sequences are supported:
+\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX
+
+To prevent a '\' from being recognized as an escape character, use either:
+
+- a ' or '''-delimited string; escape characters aren't processed in them; or
+- write two backslashes to get a single backslash: '\\'.
+
+If you're trying to add a Windows path (e.g. "C:\Users\martin") then using '/'
+instead of '\' will usually also work: "C:/Users/martin".
+`
+
+const usageInlineNewline = `
+Inline tables must always be on a single line:
+
+ table = {key = 42, second = 43}
+
+It is invalid to split them over multiple lines like so:
+
+ # INVALID
+ table = {
+ key = 42,
+ second = 43
+ }
+
+Use regular tables for this:
+
+ [table]
+ key = 42
+ second = 43
+`
+
+const usageStringNewline = `
+Strings must always be on a single line, and cannot span more than one line:
+
+ # INVALID
+ string = "Hello,
+ world!"
+
+Instead use """ or ''' to split strings over multiple lines:
+
+ string = """Hello,
+ world!"""
+`
+
+const usageIntOverflow = `
+This number is too large; this may be an error in the TOML, but it can also be
+a bug in the program that uses an integer type that is too small.
+
+The maximum and minimum values are:
+
+    size   │ lowest         │ highest
+    ───────┼────────────────┼───────────────
+    int8   │ -128           │ 127
+    int16  │ -32,768        │ 32,767
+    int32  │ -2,147,483,648 │ 2,147,483,647
+    int64  │ -9.2 × 10¹⁸    │ 9.2 × 10¹⁸
+    uint8  │ 0              │ 255
+    uint16 │ 0              │ 65,535
+    uint32 │ 0              │ 4,294,967,295
+    uint64 │ 0              │ 1.8 × 10¹⁹
+
+int refers to int32 on 32-bit systems and int64 on 64-bit systems.
+`
+
+const usageDuration = `
+A duration must be given as "number<unit>", without any spaces. Valid units are:
+
+ ns nanoseconds (billionth of a second)
+ us, µs microseconds (millionth of a second)
+ ms milliseconds (thousandth of a second)
+ s seconds
+ m minutes
+ h hours
+
+You can combine multiple units; for example "5m10s" for 5 minutes and 10
+seconds.
+`
diff --git a/vendor/github.com/BurntSushi/toml/internal/tz.go b/vendor/github.com/BurntSushi/toml/internal/tz.go
new file mode 100644
index 000000000..022f15bc2
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/internal/tz.go
@@ -0,0 +1,36 @@
+package internal
+
+import "time"
+
+// Timezones used for local datetime, date, and time TOML types.
+//
+// The exact way times and dates without a timezone should be interpreted is not
+// well-defined in the TOML specification and left to the implementation. These
+// default to the current local timezone offset of the computer, but this can
+// be changed by changing these variables before decoding.
+//
+// TODO:
+// Ideally we'd like to offer people the ability to configure the used timezone
+// by setting Decoder.Timezone and Encoder.Timezone; however, this is a bit
+// tricky: the reason we use three different variables for this is to support
+// round-tripping – without these specific TZ names we wouldn't know which
+// format to use.
+//
+// There isn't a good way to encode this right now though, and passing this sort
+// of information also ties in to various related issues such as string format
+// encoding, encoding of comments, etc.
+//
+// So, for the time being, just put this in internal until we can write a good
+// comprehensive API for doing all of this.
+//
+// The reason they're exported is that they're referred to from e.g.
+// internal/tag.
+//
+// Note that this behaviour is valid according to the TOML spec as the exact
+// behaviour is left up to implementations.
+var (
+ localOffset = func() int { _, o := time.Now().Zone(); return o }()
+ LocalDatetime = time.FixedZone("datetime-local", localOffset)
+ LocalDate = time.FixedZone("date-local", localOffset)
+ LocalTime = time.FixedZone("time-local", localOffset)
+)
diff --git a/vendor/github.com/BurntSushi/toml/lex.go b/vendor/github.com/BurntSushi/toml/lex.go
new file mode 100644
index 000000000..3545a6ad6
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/lex.go
@@ -0,0 +1,1283 @@
+package toml
+
+import (
+ "fmt"
+ "reflect"
+ "runtime"
+ "strings"
+ "unicode"
+ "unicode/utf8"
+)
+
+type itemType int
+
+const (
+ itemError itemType = iota
+ itemNIL // used in the parser to indicate no type
+ itemEOF
+ itemText
+ itemString
+ itemRawString
+ itemMultilineString
+ itemRawMultilineString
+ itemBool
+ itemInteger
+ itemFloat
+ itemDatetime
+ itemArray // the start of an array
+ itemArrayEnd
+ itemTableStart
+ itemTableEnd
+ itemArrayTableStart
+ itemArrayTableEnd
+ itemKeyStart
+ itemKeyEnd
+ itemCommentStart
+ itemInlineTableStart
+ itemInlineTableEnd
+)
+
+const eof = 0
+
+type stateFn func(lx *lexer) stateFn
+
+func (p Position) String() string {
+ return fmt.Sprintf("at line %d; start %d; length %d", p.Line, p.Start, p.Len)
+}
+
+type lexer struct {
+ input string
+ start int
+ pos int
+ line int
+ state stateFn
+ items chan item
+ tomlNext bool
+
+ // Allow for backing up up to 4 runes. This is necessary because TOML
+ // contains 3-rune tokens (""" and ''').
+ prevWidths [4]int
+ nprev int // how many of prevWidths are in use
+ atEOF bool // If we emit an eof, we can still back up, but it is not OK to call next again.
+
+ // A stack of state functions used to maintain context.
+ //
+ // The idea is to reuse parts of the state machine in various places. For
+ // example, values can appear at the top level or within arbitrarily nested
+ // arrays. The last state on the stack is used after a value has been lexed.
+ // Similarly for comments.
+ stack []stateFn
+}
+
+type item struct {
+ typ itemType
+ val string
+ err error
+ pos Position
+}
+
+func (lx *lexer) nextItem() item {
+ for {
+ select {
+ case item := <-lx.items:
+ return item
+ default:
+ lx.state = lx.state(lx)
+ //fmt.Printf(" STATE %-24s current: %-10s stack: %s\n", lx.state, lx.current(), lx.stack)
+ }
+ }
+}
+
+func lex(input string, tomlNext bool) *lexer {
+ lx := &lexer{
+ input: input,
+ state: lexTop,
+ items: make(chan item, 10),
+ stack: make([]stateFn, 0, 10),
+ line: 1,
+ tomlNext: tomlNext,
+ }
+ return lx
+}
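+
+// A sketch of the loop with which a caller (such as the parser) can drive the
+// lexer:
+//
+//	lx := lex(input, false)
+//	for {
+//		it := lx.nextItem()
+//		if it.typ == itemEOF || it.typ == itemError {
+//			break
+//		}
+//		// handle it.typ / it.val
+//	}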
+
+func (lx *lexer) push(state stateFn) {
+ lx.stack = append(lx.stack, state)
+}
+
+func (lx *lexer) pop() stateFn {
+ if len(lx.stack) == 0 {
+ return lx.errorf("BUG in lexer: no states to pop")
+ }
+ last := lx.stack[len(lx.stack)-1]
+ lx.stack = lx.stack[0 : len(lx.stack)-1]
+ return last
+}
+
+func (lx *lexer) current() string {
+ return lx.input[lx.start:lx.pos]
+}
+
+func (lx lexer) getPos() Position {
+ p := Position{
+ Line: lx.line,
+ Start: lx.start,
+ Len: lx.pos - lx.start,
+ }
+ if p.Len <= 0 {
+ p.Len = 1
+ }
+ return p
+}
+
+func (lx *lexer) emit(typ itemType) {
+ // Needed for multiline strings ending with an incomplete UTF-8 sequence.
+ if lx.start > lx.pos {
+ lx.error(errLexUTF8{lx.input[lx.pos]})
+ return
+ }
+ lx.items <- item{typ: typ, pos: lx.getPos(), val: lx.current()}
+ lx.start = lx.pos
+}
+
+func (lx *lexer) emitTrim(typ itemType) {
+ lx.items <- item{typ: typ, pos: lx.getPos(), val: strings.TrimSpace(lx.current())}
+ lx.start = lx.pos
+}
+
+func (lx *lexer) next() (r rune) {
+ if lx.atEOF {
+ panic("BUG in lexer: next called after EOF")
+ }
+ if lx.pos >= len(lx.input) {
+ lx.atEOF = true
+ return eof
+ }
+
+ if lx.input[lx.pos] == '\n' {
+ lx.line++
+ }
+ lx.prevWidths[3] = lx.prevWidths[2]
+ lx.prevWidths[2] = lx.prevWidths[1]
+ lx.prevWidths[1] = lx.prevWidths[0]
+ if lx.nprev < 4 {
+ lx.nprev++
+ }
+
+ r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
+ if r == utf8.RuneError {
+ lx.error(errLexUTF8{lx.input[lx.pos]})
+ return utf8.RuneError
+ }
+
+ // Note: don't use peek() here, as this calls next().
+ if isControl(r) || (r == '\r' && (len(lx.input)-1 == lx.pos || lx.input[lx.pos+1] != '\n')) {
+ lx.errorControlChar(r)
+ return utf8.RuneError
+ }
+
+ lx.prevWidths[0] = w
+ lx.pos += w
+ return r
+}
+
+// ignore skips over the pending input before this point.
+func (lx *lexer) ignore() {
+ lx.start = lx.pos
+}
+
+// backup steps back one rune. Can be called 4 times between calls to next.
+func (lx *lexer) backup() {
+ if lx.atEOF {
+ lx.atEOF = false
+ return
+ }
+ if lx.nprev < 1 {
+ panic("BUG in lexer: backed up too far")
+ }
+ w := lx.prevWidths[0]
+ lx.prevWidths[0] = lx.prevWidths[1]
+ lx.prevWidths[1] = lx.prevWidths[2]
+ lx.prevWidths[2] = lx.prevWidths[3]
+ lx.nprev--
+
+ lx.pos -= w
+ if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
+ lx.line--
+ }
+}
+
+// accept consumes the next rune if it's equal to `valid`.
+func (lx *lexer) accept(valid rune) bool {
+ if lx.next() == valid {
+ return true
+ }
+ lx.backup()
+ return false
+}
+
+// peek returns but does not consume the next rune in the input.
+func (lx *lexer) peek() rune {
+ r := lx.next()
+ lx.backup()
+ return r
+}
+
+// skip ignores all input that matches the given predicate.
+func (lx *lexer) skip(pred func(rune) bool) {
+ for {
+ r := lx.next()
+ if pred(r) {
+ continue
+ }
+ lx.backup()
+ lx.ignore()
+ return
+ }
+}
+
+// error stops all lexing by emitting an error and returning `nil`.
+//
+// Note that any character value in the error is escaped if it's a special
+// character (newlines, tabs, etc.).
+func (lx *lexer) error(err error) stateFn {
+ if lx.atEOF {
+ return lx.errorPrevLine(err)
+ }
+ lx.items <- item{typ: itemError, pos: lx.getPos(), err: err}
+ return nil
+}
+
+// errorPrevLine is like error(), but sets the position to the last column of
+// the previous line.
+//
+// This is so that unexpected EOF or NL errors don't show on a new blank line.
+func (lx *lexer) errorPrevLine(err error) stateFn {
+ pos := lx.getPos()
+ pos.Line--
+ pos.Len = 1
+ pos.Start = lx.pos - 1
+ lx.items <- item{typ: itemError, pos: pos, err: err}
+ return nil
+}
+
+// errorPos is like error(), but allows explicitly setting the position.
+func (lx *lexer) errorPos(start, length int, err error) stateFn {
+ pos := lx.getPos()
+ pos.Start = start
+ pos.Len = length
+ lx.items <- item{typ: itemError, pos: pos, err: err}
+ return nil
+}
+
+// errorf is like error, and creates a new error.
+func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
+ if lx.atEOF {
+ pos := lx.getPos()
+ pos.Line--
+ pos.Len = 1
+ pos.Start = lx.pos - 1
+ lx.items <- item{typ: itemError, pos: pos, err: fmt.Errorf(format, values...)}
+ return nil
+ }
+ lx.items <- item{typ: itemError, pos: lx.getPos(), err: fmt.Errorf(format, values...)}
+ return nil
+}
+
+func (lx *lexer) errorControlChar(cc rune) stateFn {
+ return lx.errorPos(lx.pos-1, 1, errLexControl{cc})
+}
+
+// lexTop consumes elements at the top level of TOML data.
+func lexTop(lx *lexer) stateFn {
+ r := lx.next()
+ if isWhitespace(r) || isNL(r) {
+ return lexSkip(lx, lexTop)
+ }
+ switch r {
+ case '#':
+ lx.push(lexTop)
+ return lexCommentStart
+ case '[':
+ return lexTableStart
+ case eof:
+ if lx.pos > lx.start {
+ return lx.errorf("unexpected EOF")
+ }
+ lx.emit(itemEOF)
+ return nil
+ }
+
+ // At this point, the only valid item can be a key, so we back up
+ // and let the key lexer do the rest.
+ lx.backup()
+ lx.push(lexTopEnd)
+ return lexKeyStart
+}
+
+// lexTopEnd is entered whenever a top-level item has been consumed. (A value
+// or a table.) It must see only whitespace, and will turn back to lexTop
+// upon a newline. If it sees EOF, it will quit the lexer successfully.
+func lexTopEnd(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case r == '#':
+ // a comment will read to a newline for us.
+ lx.push(lexTop)
+ return lexCommentStart
+ case isWhitespace(r):
+ return lexTopEnd
+ case isNL(r):
+ lx.ignore()
+ return lexTop
+ case r == eof:
+ lx.emit(itemEOF)
+ return nil
+ }
+ return lx.errorf(
+ "expected a top-level item to end with a newline, comment, or EOF, but got %q instead",
+ r)
+}
+
+// lexTableStart lexes the beginning of a table. Namely, it makes sure that
+// it starts with a character other than '.' and ']'.
+// It assumes that '[' has already been consumed.
+// It also handles the case that this is an item in an array of tables.
+// e.g., '[[name]]'.
+func lexTableStart(lx *lexer) stateFn {
+ if lx.peek() == '[' {
+ lx.next()
+ lx.emit(itemArrayTableStart)
+ lx.push(lexArrayTableEnd)
+ } else {
+ lx.emit(itemTableStart)
+ lx.push(lexTableEnd)
+ }
+ return lexTableNameStart
+}
+
+func lexTableEnd(lx *lexer) stateFn {
+ lx.emit(itemTableEnd)
+ return lexTopEnd
+}
+
+func lexArrayTableEnd(lx *lexer) stateFn {
+ if r := lx.next(); r != ']' {
+ return lx.errorf("expected end of table array name delimiter ']', but got %q instead", r)
+ }
+ lx.emit(itemArrayTableEnd)
+ return lexTopEnd
+}
+
+func lexTableNameStart(lx *lexer) stateFn {
+ lx.skip(isWhitespace)
+ switch r := lx.peek(); {
+ case r == ']' || r == eof:
+ return lx.errorf("unexpected end of table name (table names cannot be empty)")
+ case r == '.':
+ return lx.errorf("unexpected table separator (table names cannot be empty)")
+ case r == '"' || r == '\'':
+ lx.ignore()
+ lx.push(lexTableNameEnd)
+ return lexQuotedName
+ default:
+ lx.push(lexTableNameEnd)
+ return lexBareName
+ }
+}
+
+// lexTableNameEnd reads the end of a piece of a table name, optionally
+// consuming whitespace.
+func lexTableNameEnd(lx *lexer) stateFn {
+ lx.skip(isWhitespace)
+ switch r := lx.next(); {
+ case isWhitespace(r):
+ return lexTableNameEnd
+ case r == '.':
+ lx.ignore()
+ return lexTableNameStart
+ case r == ']':
+ return lx.pop()
+ default:
+ return lx.errorf("expected '.' or ']' to end table name, but got %q instead", r)
+ }
+}
+
+// lexBareName lexes one part of a key or table.
+//
+// It assumes that at least one valid character for the table has already been
+// read.
+//
+// Lexes only one part, e.g. only 'a' inside 'a.b'.
+func lexBareName(lx *lexer) stateFn {
+ r := lx.next()
+ if isBareKeyChar(r, lx.tomlNext) {
+ return lexBareName
+ }
+ lx.backup()
+ lx.emit(itemText)
+ return lx.pop()
+}
+
+// lexQuotedName lexes one quoted part of a key or table name.
+//
+// It assumes that at least one valid character for the table has already been
+// read.
+//
+// Lexes only one part, e.g. only '"a"' inside '"a".b'.
+func lexQuotedName(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r):
+ return lexSkip(lx, lexValue)
+ case r == '"':
+ lx.ignore() // ignore the '"'
+ return lexString
+ case r == '\'':
+ lx.ignore() // ignore the "'"
+ return lexRawString
+ case r == eof:
+ return lx.errorf("unexpected EOF; expected value")
+ default:
+ return lx.errorf("expected value but found %q instead", r)
+ }
+}
+
+// lexKeyStart consumes all key parts until a '='.
+func lexKeyStart(lx *lexer) stateFn {
+ lx.skip(isWhitespace)
+ switch r := lx.peek(); {
+ case r == '=' || r == eof:
+ return lx.errorf("unexpected '=': key name appears blank")
+ case r == '.':
+ return lx.errorf("unexpected '.': keys cannot start with a '.'")
+ case r == '"' || r == '\'':
+ lx.ignore()
+ fallthrough
+ default: // Bare key
+ lx.emit(itemKeyStart)
+ return lexKeyNameStart
+ }
+}
+
+func lexKeyNameStart(lx *lexer) stateFn {
+ lx.skip(isWhitespace)
+ switch r := lx.peek(); {
+ case r == '=' || r == eof:
+ return lx.errorf("unexpected '='")
+ case r == '.':
+ return lx.errorf("unexpected '.'")
+ case r == '"' || r == '\'':
+ lx.ignore()
+ lx.push(lexKeyEnd)
+ return lexQuotedName
+ default:
+ lx.push(lexKeyEnd)
+ return lexBareName
+ }
+}
+
+// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
+// separator).
+func lexKeyEnd(lx *lexer) stateFn {
+ lx.skip(isWhitespace)
+ switch r := lx.next(); {
+ case isWhitespace(r):
+ return lexSkip(lx, lexKeyEnd)
+ case r == eof:
+ return lx.errorf("unexpected EOF; expected key separator '='")
+ case r == '.':
+ lx.ignore()
+ return lexKeyNameStart
+ case r == '=':
+ lx.emit(itemKeyEnd)
+ return lexSkip(lx, lexValue)
+ default:
+ return lx.errorf("expected '.' or '=', but got %q instead", r)
+ }
+}
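+
+// For example, the dotted key in "a.b = 1" lexes as itemKeyStart,
+// itemText "a", itemText "b", and itemKeyEnd, after which lexValue takes
+// over and emits itemInteger for the "1".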
+
+// lexValue starts the consumption of a value anywhere a value is expected.
+// lexValue will ignore whitespace.
+// After a value is lexed, the last state on the stack is popped and returned.
+func lexValue(lx *lexer) stateFn {
+ // We allow whitespace to precede a value, but NOT newlines.
+ // In array syntax, the array states are responsible for ignoring newlines.
+ r := lx.next()
+ switch {
+ case isWhitespace(r):
+ return lexSkip(lx, lexValue)
+ case isDigit(r):
+ lx.backup() // avoid an extra state and use the same as above
+ return lexNumberOrDateStart
+ }
+ switch r {
+ case '[':
+ lx.ignore()
+ lx.emit(itemArray)
+ return lexArrayValue
+ case '{':
+ lx.ignore()
+ lx.emit(itemInlineTableStart)
+ return lexInlineTableValue
+ case '"':
+ if lx.accept('"') {
+ if lx.accept('"') {
+ lx.ignore() // Ignore """
+ return lexMultilineString
+ }
+ lx.backup()
+ }
+ lx.ignore() // ignore the '"'
+ return lexString
+ case '\'':
+ if lx.accept('\'') {
+ if lx.accept('\'') {
+ lx.ignore() // Ignore """
+ return lexMultilineRawString
+ }
+ lx.backup()
+ }
+ lx.ignore() // ignore the "'"
+ return lexRawString
+ case '.': // special error case, be kind to users
+ return lx.errorf("floats must start with a digit, not '.'")
+ case 'i', 'n':
+ if (lx.accept('n') && lx.accept('f')) || (lx.accept('a') && lx.accept('n')) {
+ lx.emit(itemFloat)
+ return lx.pop()
+ }
+ case '-', '+':
+ return lexDecimalNumberStart
+ }
+ if unicode.IsLetter(r) {
+ // Be permissive here; lexBool will give a nice error if the
+ // user wrote something like
+ // x = foo
+ // (i.e. not 'true' or 'false' but is something else word-like.)
+ lx.backup()
+ return lexBool
+ }
+ if r == eof {
+ return lx.errorf("unexpected EOF; expected value")
+ }
+ return lx.errorf("expected value but found %q instead", r)
+}
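+
+// To illustrate the dispatch above: at a value position, `"s"` goes to
+// lexString, `'s'` to lexRawString, `[1]` to lexArrayValue, `{a = 1}` to
+// lexInlineTableValue, `0.5` to lexNumberOrDateStart, `-3` to
+// lexDecimalNumberStart, `inf` is emitted directly as itemFloat, and
+// `true` falls through to lexBool.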
+
+// lexArrayValue consumes one value in an array. It assumes that '[' or ','
+// have already been consumed. All whitespace and newlines are ignored.
+func lexArrayValue(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r) || isNL(r):
+ return lexSkip(lx, lexArrayValue)
+ case r == '#':
+ lx.push(lexArrayValue)
+ return lexCommentStart
+ case r == ',':
+ return lx.errorf("unexpected comma")
+ case r == ']':
+ return lexArrayEnd
+ }
+
+ lx.backup()
+ lx.push(lexArrayValueEnd)
+ return lexValue
+}
+
+// lexArrayValueEnd consumes everything between the end of an array value and
+// the next value (or the end of the array): it ignores whitespace and newlines
+// and expects either a ',' or a ']'.
+func lexArrayValueEnd(lx *lexer) stateFn {
+ switch r := lx.next(); {
+ case isWhitespace(r) || isNL(r):
+ return lexSkip(lx, lexArrayValueEnd)
+ case r == '#':
+ lx.push(lexArrayValueEnd)
+ return lexCommentStart
+ case r == ',':
+ lx.ignore()
+ return lexArrayValue // move on to the next value
+ case r == ']':
+ return lexArrayEnd
+ default:
+ return lx.errorf("expected a comma (',') or array terminator (']'), but got %s", runeOrEOF(r))
+ }
+}
+
+// lexArrayEnd finishes the lexing of an array.
+// It assumes that a ']' has just been consumed.
+func lexArrayEnd(lx *lexer) stateFn {
+ lx.ignore()
+ lx.emit(itemArrayEnd)
+ return lx.pop()
+}
+
+// lexInlineTableValue consumes one key/value pair in an inline table.
+// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
+func lexInlineTableValue(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r):
+ return lexSkip(lx, lexInlineTableValue)
+ case isNL(r):
+ if lx.tomlNext {
+ return lexSkip(lx, lexInlineTableValue)
+ }
+ return lx.errorPrevLine(errLexInlineTableNL{})
+ case r == '#':
+ lx.push(lexInlineTableValue)
+ return lexCommentStart
+ case r == ',':
+ return lx.errorf("unexpected comma")
+ case r == '}':
+ return lexInlineTableEnd
+ }
+ lx.backup()
+ lx.push(lexInlineTableValueEnd)
+ return lexKeyStart
+}
+
+// lexInlineTableValueEnd consumes everything between the end of an inline table
+// key/value pair and the next pair (or the end of the table):
+// it ignores whitespace and expects either a ',' or a '}'.
+func lexInlineTableValueEnd(lx *lexer) stateFn {
+ switch r := lx.next(); {
+ case isWhitespace(r):
+ return lexSkip(lx, lexInlineTableValueEnd)
+ case isNL(r):
+ if lx.tomlNext {
+ return lexSkip(lx, lexInlineTableValueEnd)
+ }
+ return lx.errorPrevLine(errLexInlineTableNL{})
+ case r == '#':
+ lx.push(lexInlineTableValueEnd)
+ return lexCommentStart
+ case r == ',':
+ lx.ignore()
+ lx.skip(isWhitespace)
+ if lx.peek() == '}' {
+ if lx.tomlNext {
+ return lexInlineTableValueEnd
+ }
+ return lx.errorf("trailing comma not allowed in inline tables")
+ }
+ return lexInlineTableValue
+ case r == '}':
+ return lexInlineTableEnd
+ default:
+ return lx.errorf("expected a comma or an inline table terminator '}', but got %s instead", runeOrEOF(r))
+ }
+}
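+
+// For example, `{a = 1,}` fails with "trailing comma not allowed in inline
+// tables" unless tomlNext is set (the BURNTSUSHI_TOML_110 opt-in), which
+// also permits newlines between the braces.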
+
+func runeOrEOF(r rune) string {
+ if r == eof {
+ return "end of file"
+ }
+ return "'" + string(r) + "'"
+}
+
+// lexInlineTableEnd finishes the lexing of an inline table.
+// It assumes that a '}' has just been consumed.
+func lexInlineTableEnd(lx *lexer) stateFn {
+ lx.ignore()
+ lx.emit(itemInlineTableEnd)
+ return lx.pop()
+}
+
+// lexString consumes the inner contents of a string. It assumes that the
+// beginning '"' has already been consumed and ignored.
+func lexString(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case r == eof:
+ return lx.errorf(`unexpected EOF; expected '"'`)
+ case isNL(r):
+ return lx.errorPrevLine(errLexStringNL{})
+ case r == '\\':
+ lx.push(lexString)
+ return lexStringEscape
+ case r == '"':
+ lx.backup()
+ lx.emit(itemString)
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ return lexString
+}
+
+// lexMultilineString consumes the inner contents of a string. It assumes that
+// the beginning '"""' has already been consumed and ignored.
+func lexMultilineString(lx *lexer) stateFn {
+ r := lx.next()
+ switch r {
+ default:
+ return lexMultilineString
+ case eof:
+ return lx.errorf(`unexpected EOF; expected '"""'`)
+ case '\\':
+ return lexMultilineStringEscape
+ case '"':
+ /// Found " → try to read two more "".
+ if lx.accept('"') {
+ if lx.accept('"') {
+ /// Peek ahead: the string can contain " and "", including at the
+ /// end: """str"""""
+ /// 6 or more at the end, however, is an error.
+ if lx.peek() == '"' {
+ /// Check if we already lexed 5 '"'s; if so we have 6 now, and
+ /// that's just too many man!
+ ///
+ /// Second check is for the edge case:
+ ///
+ /// two quotes allowed.
+ /// vv
+ /// """lol \""""""
+ /// ^^ ^^^---- closing three
+ /// escaped
+ ///
+ /// A bit ugly, but it works.
+ if strings.HasSuffix(lx.current(), `"""""`) && !strings.HasSuffix(lx.current(), `\"""""`) {
+ return lx.errorf(`unexpected '""""""'`)
+ }
+ lx.backup()
+ lx.backup()
+ return lexMultilineString
+ }
+
+ lx.backup() /// backup: don't include the """ in the item.
+ lx.backup()
+ lx.backup()
+ lx.emit(itemMultilineString)
+ lx.next() /// Read over """ again and discard it.
+ lx.next()
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ lx.backup()
+ }
+ return lexMultilineString
+ }
+}
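+
+// For example, `"""a"""""` lexes as the string `a""`: the quote-counting
+// above treats the final three quotes as the delimiter, while six or more
+// closing quotes are rejected.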
+
+// lexRawString consumes a raw string. Nothing can be escaped in such a string.
+// It assumes that the beginning "'" has already been consumed and ignored.
+func lexRawString(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ default:
+ return lexRawString
+ case r == eof:
+ return lx.errorf(`unexpected EOF; expected "'"`)
+ case isNL(r):
+ return lx.errorPrevLine(errLexStringNL{})
+ case r == '\'':
+ lx.backup()
+ lx.emit(itemRawString)
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+}
+
+// lexMultilineRawString consumes a raw string. Nothing can be escaped in such a
+// string. It assumes that the beginning triple-' has already been consumed and
+// ignored.
+func lexMultilineRawString(lx *lexer) stateFn {
+ r := lx.next()
+ switch r {
+ default:
+ return lexMultilineRawString
+ case eof:
+ return lx.errorf(`unexpected EOF; expected "'''"`)
+ case '\'':
+ /// Found ' → try to read two more ''.
+ if lx.accept('\'') {
+ if lx.accept('\'') {
+ /// Peek ahead: the string can contain ' and '', including at the
+ /// end: '''str'''''
+ /// 6 or more at the end, however, is an error.
+ if lx.peek() == '\'' {
+ /// Check if we already lexed 5 's; if so we have 6 now, and
+ /// that's just too many man!
+ if strings.HasSuffix(lx.current(), "'''''") {
+ return lx.errorf(`unexpected "''''''"`)
+ }
+ lx.backup()
+ lx.backup()
+ return lexMultilineRawString
+ }
+
+ lx.backup() /// backup: don't include the ''' in the item.
+ lx.backup()
+ lx.backup()
+ lx.emit(itemRawMultilineString)
+ lx.next() /// Read over ''' again and discard it.
+ lx.next()
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ lx.backup()
+ }
+ return lexMultilineRawString
+ }
+}
+
+// lexMultilineStringEscape consumes an escaped character. It assumes that the
+// preceding '\\' has already been consumed.
+func lexMultilineStringEscape(lx *lexer) stateFn {
+ if isNL(lx.next()) { /// \ escaping newline.
+ return lexMultilineString
+ }
+ lx.backup()
+ lx.push(lexMultilineString)
+ return lexStringEscape(lx)
+}
+
+func lexStringEscape(lx *lexer) stateFn {
+ r := lx.next()
+ switch r {
+ case 'e':
+ if !lx.tomlNext {
+ return lx.error(errLexEscape{r})
+ }
+ fallthrough
+ case 'b':
+ fallthrough
+ case 't':
+ fallthrough
+ case 'n':
+ fallthrough
+ case 'f':
+ fallthrough
+ case 'r':
+ fallthrough
+ case '"':
+ fallthrough
+ case ' ', '\t':
+ // Inside """ .. """ strings you can use \ to escape newlines, and any
+ // amount of whitespace can be between the \ and \n.
+ fallthrough
+ case '\\':
+ return lx.pop()
+ case 'x':
+ if !lx.tomlNext {
+ return lx.error(errLexEscape{r})
+ }
+ return lexHexEscape
+ case 'u':
+ return lexShortUnicodeEscape
+ case 'U':
+ return lexLongUnicodeEscape
+ }
+ return lx.error(errLexEscape{r})
+}
+
+func lexHexEscape(lx *lexer) stateFn {
+ var r rune
+ for i := 0; i < 2; i++ {
+ r = lx.next()
+ if !isHexadecimal(r) {
+ return lx.errorf(
+ `expected two hexadecimal digits after '\x', but got %q instead`,
+ lx.current())
+ }
+ }
+ return lx.pop()
+}
+
+func lexShortUnicodeEscape(lx *lexer) stateFn {
+ var r rune
+ for i := 0; i < 4; i++ {
+ r = lx.next()
+ if !isHexadecimal(r) {
+ return lx.errorf(
+ `expected four hexadecimal digits after '\u', but got %q instead`,
+ lx.current())
+ }
+ }
+ return lx.pop()
+}
+
+func lexLongUnicodeEscape(lx *lexer) stateFn {
+ var r rune
+ for i := 0; i < 8; i++ {
+ r = lx.next()
+ if !isHexadecimal(r) {
+ return lx.errorf(
+ `expected eight hexadecimal digits after '\U', but got %q instead`,
+ lx.current())
+ }
+ }
+ return lx.pop()
+}
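+
+// For example, `\u00E9` must supply exactly four hex digits and
+// `\U0001F600` exactly eight; something like `\uD8G0` fails with the
+// "expected four hexadecimal digits" error.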
+
+// lexNumberOrDateStart processes the first character of a value which begins
+// with a digit. It exists to catch values starting with '0', so that
+// lexBaseNumberOrDate can differentiate base prefixed integers from other
+// types.
+func lexNumberOrDateStart(lx *lexer) stateFn {
+ r := lx.next()
+ switch r {
+ case '0':
+ return lexBaseNumberOrDate
+ }
+
+ if !isDigit(r) {
+ // The only way to reach this state is if the value starts
+ // with a digit, so specifically treat anything else as an
+ // error.
+ return lx.errorf("expected a digit but got %q", r)
+ }
+
+ return lexNumberOrDate
+}
+
+// lexNumberOrDate consumes either an integer, float or datetime.
+func lexNumberOrDate(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexNumberOrDate
+ }
+ switch r {
+ case '-', ':':
+ return lexDatetime
+ case '_':
+ return lexDecimalNumber
+ case '.', 'e', 'E':
+ return lexFloat
+ }
+
+ lx.backup()
+ lx.emit(itemInteger)
+ return lx.pop()
+}
+
+// lexDatetime consumes a Datetime, to a first approximation.
+// The parser validates that it matches one of the accepted formats.
+func lexDatetime(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexDatetime
+ }
+ switch r {
+ case '-', ':', 'T', 't', ' ', '.', 'Z', 'z', '+':
+ return lexDatetime
+ }
+
+ lx.backup()
+ lx.emitTrim(itemDatetime)
+ return lx.pop()
+}
+
+// lexHexInteger consumes a hexadecimal integer after seeing the '0x' prefix.
+func lexHexInteger(lx *lexer) stateFn {
+ r := lx.next()
+ if isHexadecimal(r) {
+ return lexHexInteger
+ }
+ switch r {
+ case '_':
+ return lexHexInteger
+ }
+
+ lx.backup()
+ lx.emit(itemInteger)
+ return lx.pop()
+}
+
+// lexOctalInteger consumes an octal integer after seeing the '0o' prefix.
+func lexOctalInteger(lx *lexer) stateFn {
+ r := lx.next()
+ if isOctal(r) {
+ return lexOctalInteger
+ }
+ switch r {
+ case '_':
+ return lexOctalInteger
+ }
+
+ lx.backup()
+ lx.emit(itemInteger)
+ return lx.pop()
+}
+
+// lexBinaryInteger consumes a binary integer after seeing the '0b' prefix.
+func lexBinaryInteger(lx *lexer) stateFn {
+ r := lx.next()
+ if isBinary(r) {
+ return lexBinaryInteger
+ }
+ switch r {
+ case '_':
+ return lexBinaryInteger
+ }
+
+ lx.backup()
+ lx.emit(itemInteger)
+ return lx.pop()
+}
+
+// lexDecimalNumber consumes a decimal float or integer.
+func lexDecimalNumber(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexDecimalNumber
+ }
+ switch r {
+ case '.', 'e', 'E':
+ return lexFloat
+ case '_':
+ return lexDecimalNumber
+ }
+
+ lx.backup()
+ lx.emit(itemInteger)
+ return lx.pop()
+}
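+
+// For example, `+1_000` ends up emitted as itemInteger, while `+1_000.5`
+// switches to lexFloat at the '.'.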
+
+// lexDecimalNumberStart consumes the first digit of a number beginning with a sign.
+// It assumes the sign has already been consumed. Values which start with a sign
+// are only allowed to be decimal integers or floats.
+//
+// The special "nan" and "inf" values are also recognized.
+func lexDecimalNumberStart(lx *lexer) stateFn {
+ r := lx.next()
+
+ // Special error cases to give users better error messages
+ switch r {
+ case 'i':
+ if !lx.accept('n') || !lx.accept('f') {
+ return lx.errorf("invalid float: '%s'", lx.current())
+ }
+ lx.emit(itemFloat)
+ return lx.pop()
+ case 'n':
+ if !lx.accept('a') || !lx.accept('n') {
+ return lx.errorf("invalid float: '%s'", lx.current())
+ }
+ lx.emit(itemFloat)
+ return lx.pop()
+ case '0':
+ p := lx.peek()
+ switch p {
+ case 'b', 'o', 'x':
+ return lx.errorf("cannot use sign with non-decimal numbers: '%s%c'", lx.current(), p)
+ }
+ case '.':
+ return lx.errorf("floats must start with a digit, not '.'")
+ }
+
+ if isDigit(r) {
+ return lexDecimalNumber
+ }
+
+ return lx.errorf("expected a digit but got %q", r)
+}
+
+// lexBaseNumberOrDate differentiates between the possible values which
+// start with '0'. It assumes that before reaching this state, the initial '0'
+// has been consumed.
+func lexBaseNumberOrDate(lx *lexer) stateFn {
+ r := lx.next()
+ // Note: All datetimes start with at least two digits, so we don't
+ // handle date characters (':', '-', etc.) here.
+ if isDigit(r) {
+ return lexNumberOrDate
+ }
+ switch r {
+ case '_':
+ // Can only be decimal, because there can't be an underscore
+ // between the '0' and the base designator, and dates can't
+ // contain underscores.
+ return lexDecimalNumber
+ case '.', 'e', 'E':
+ return lexFloat
+ case 'b':
+ r = lx.peek()
+ if !isBinary(r) {
+ lx.errorf("not a binary number: '%s%c'", lx.current(), r)
+ }
+ return lexBinaryInteger
+ case 'o':
+ r = lx.peek()
+ if !isOctal(r) {
+ lx.errorf("not an octal number: '%s%c'", lx.current(), r)
+ }
+ return lexOctalInteger
+ case 'x':
+ r = lx.peek()
+ if !isHexadecimal(r) {
+ lx.errorf("not a hexidecimal number: '%s%c'", lx.current(), r)
+ }
+ return lexHexInteger
+ }
+
+ lx.backup()
+ lx.emit(itemInteger)
+ return lx.pop()
+}
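+
+// For example, after the leading '0': `0x1F` routes to lexHexInteger,
+// `0o755` to lexOctalInteger, `0b101` to lexBinaryInteger, `0.5` to
+// lexFloat, and `0001-02-03` (another digit) back to lexNumberOrDate,
+// which spots the '-' and hands off to lexDatetime.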
+
+// lexFloat consumes the elements of a float. It allows any sequence of
+// float-like characters, so floats emitted by the lexer are only a first
+// approximation and must be validated by the parser.
+func lexFloat(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexFloat
+ }
+ switch r {
+ case '_', '.', '-', '+', 'e', 'E':
+ return lexFloat
+ }
+
+ lx.backup()
+ lx.emit(itemFloat)
+ return lx.pop()
+}
+
+// lexBool consumes a bool string: 'true' or 'false'.
+func lexBool(lx *lexer) stateFn {
+ var rs []rune
+ for {
+ r := lx.next()
+ if !unicode.IsLetter(r) {
+ lx.backup()
+ break
+ }
+ rs = append(rs, r)
+ }
+ s := string(rs)
+ switch s {
+ case "true", "false":
+ lx.emit(itemBool)
+ return lx.pop()
+ }
+ return lx.errorf("expected value but found %q instead", s)
+}
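+
+// For example, `true` emits itemBool, while a word like `truthy` is
+// collected in full and rejected with `expected value but found "truthy"
+// instead`.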
+
+// lexCommentStart begins the lexing of a comment. It will emit
+// itemCommentStart and consume no characters, passing control to lexComment.
+func lexCommentStart(lx *lexer) stateFn {
+ lx.ignore()
+ lx.emit(itemCommentStart)
+ return lexComment
+}
+
+// lexComment lexes an entire comment. It assumes that '#' has been consumed.
+// It will consume *up to* the first newline character, and pass control
+// back to the last state on the stack.
+func lexComment(lx *lexer) stateFn {
+ switch r := lx.next(); {
+ case isNL(r) || r == eof:
+ lx.backup()
+ lx.emit(itemText)
+ return lx.pop()
+ default:
+ return lexComment
+ }
+}
+
+// lexSkip ignores all slurped input and moves on to the next state.
+func lexSkip(lx *lexer, nextState stateFn) stateFn {
+ lx.ignore()
+ return nextState
+}
+
+func (s stateFn) String() string {
+ name := runtime.FuncForPC(reflect.ValueOf(s).Pointer()).Name()
+ if i := strings.LastIndexByte(name, '.'); i > -1 {
+ name = name[i+1:]
+ }
+ if s == nil {
+ name = ""
+ }
+ return name + "()"
+}
+
+func (itype itemType) String() string {
+ switch itype {
+ case itemError:
+ return "Error"
+ case itemNIL:
+ return "NIL"
+ case itemEOF:
+ return "EOF"
+ case itemText:
+ return "Text"
+ case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
+ return "String"
+ case itemBool:
+ return "Bool"
+ case itemInteger:
+ return "Integer"
+ case itemFloat:
+ return "Float"
+ case itemDatetime:
+ return "DateTime"
+ case itemTableStart:
+ return "TableStart"
+ case itemTableEnd:
+ return "TableEnd"
+ case itemKeyStart:
+ return "KeyStart"
+ case itemKeyEnd:
+ return "KeyEnd"
+ case itemArray:
+ return "Array"
+ case itemArrayEnd:
+ return "ArrayEnd"
+ case itemCommentStart:
+ return "CommentStart"
+ case itemInlineTableStart:
+ return "InlineTableStart"
+ case itemInlineTableEnd:
+ return "InlineTableEnd"
+ }
+ panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
+}
+
+func (item item) String() string {
+ return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
+}
+
+func isWhitespace(r rune) bool { return r == '\t' || r == ' ' }
+func isNL(r rune) bool { return r == '\n' || r == '\r' }
+func isControl(r rune) bool { // Control characters except \t, \r, \n
+ switch r {
+ case '\t', '\r', '\n':
+ return false
+ default:
+ return (r >= 0x00 && r <= 0x1f) || r == 0x7f
+ }
+}
+func isDigit(r rune) bool { return r >= '0' && r <= '9' }
+func isBinary(r rune) bool { return r == '0' || r == '1' }
+func isOctal(r rune) bool { return r >= '0' && r <= '7' }
+func isHexadecimal(r rune) bool {
+ return (r >= '0' && r <= '9') || (r >= 'a' && r <= 'f') || (r >= 'A' && r <= 'F')
+}
+
+func isBareKeyChar(r rune, tomlNext bool) bool {
+ if tomlNext {
+ return (r >= 'A' && r <= 'Z') ||
+ (r >= 'a' && r <= 'z') ||
+ (r >= '0' && r <= '9') ||
+ r == '_' || r == '-' ||
+ r == 0xb2 || r == 0xb3 || r == 0xb9 || (r >= 0xbc && r <= 0xbe) ||
+ (r >= 0xc0 && r <= 0xd6) || (r >= 0xd8 && r <= 0xf6) || (r >= 0xf8 && r <= 0x037d) ||
+ (r >= 0x037f && r <= 0x1fff) ||
+ (r >= 0x200c && r <= 0x200d) || (r >= 0x203f && r <= 0x2040) ||
+ (r >= 0x2070 && r <= 0x218f) || (r >= 0x2460 && r <= 0x24ff) ||
+ (r >= 0x2c00 && r <= 0x2fef) || (r >= 0x3001 && r <= 0xd7ff) ||
+ (r >= 0xf900 && r <= 0xfdcf) || (r >= 0xfdf0 && r <= 0xfffd) ||
+ (r >= 0x10000 && r <= 0xeffff)
+ }
+
+ return (r >= 'A' && r <= 'Z') ||
+ (r >= 'a' && r <= 'z') ||
+ (r >= '0' && r <= '9') ||
+ r == '_' || r == '-'
+}
diff --git a/vendor/github.com/BurntSushi/toml/meta.go b/vendor/github.com/BurntSushi/toml/meta.go
new file mode 100644
index 000000000..2e78b24e9
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/meta.go
@@ -0,0 +1,121 @@
+package toml
+
+import (
+ "strings"
+)
+
+// MetaData allows access to meta information about TOML data that's not
+// accessible otherwise.
+//
+// It allows checking if a key is defined in the TOML data, whether any keys
+// were undecoded, and the TOML type of a key.
+type MetaData struct {
+ context Key // Used only during decoding.
+
+ keyInfo map[string]keyInfo
+ mapping map[string]interface{}
+ keys []Key
+ decoded map[string]struct{}
+ data []byte // Input file; for errors.
+}
+
+// IsDefined reports if the key exists in the TOML data.
+//
+// The key should be specified hierarchically, for example to access the TOML
+// key "a.b.c" you would use IsDefined("a", "b", "c"). Keys are case sensitive.
+//
+// Returns false for an empty key.
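+//
+// A minimal usage sketch (illustrative input; Decode is defined elsewhere
+// in this package):
+//
+//	var v map[string]interface{}
+//	md, _ := toml.Decode("[a.b]\nc = 1", &v)
+//	md.IsDefined("a", "b", "c") // true
+//	md.IsDefined("a", "x")      // false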
+func (md *MetaData) IsDefined(key ...string) bool {
+ if len(key) == 0 {
+ return false
+ }
+
+ var (
+ hash map[string]interface{}
+ ok bool
+ hashOrVal interface{} = md.mapping
+ )
+ for _, k := range key {
+ if hash, ok = hashOrVal.(map[string]interface{}); !ok {
+ return false
+ }
+ if hashOrVal, ok = hash[k]; !ok {
+ return false
+ }
+ }
+ return true
+}
+
+// Type returns a string representation of the type of the key specified.
+//
+// Type will return the empty string if given an empty key or a key that does
+// not exist. Keys are case sensitive.
+func (md *MetaData) Type(key ...string) string {
+ if ki, ok := md.keyInfo[Key(key).String()]; ok {
+ return ki.tomlType.typeString()
+ }
+ return ""
+}
+
+// Keys returns a slice of every key in the TOML data, including key groups.
+//
+// Each key is itself a slice, where the first element is the top of the
+// hierarchy and the last is the most specific. The list will have the same
+// order as the keys appeared in the TOML data.
+//
+// All keys returned are non-empty.
+func (md *MetaData) Keys() []Key {
+ return md.keys
+}
+
+// Undecoded returns all keys that have not been decoded in the order in which
+// they appear in the original TOML document.
+//
+// This includes keys that haven't been decoded because of a [Primitive] value.
+// Once the Primitive value is decoded, the keys will be considered decoded.
+//
+// Also note that decoding into an empty interface will result in no decoding,
+// and so no keys will be considered decoded.
+//
+// In this sense, the Undecoded keys correspond to keys in the TOML document
+// that do not have a concrete type in your representation.
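+//
+// For example, decoding "a = 1\nb = 2" into a struct{ A int } decodes "a"
+// but not "b", so Undecoded returns just the key "b".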
+func (md *MetaData) Undecoded() []Key {
+ undecoded := make([]Key, 0, len(md.keys))
+ for _, key := range md.keys {
+ if _, ok := md.decoded[key.String()]; !ok {
+ undecoded = append(undecoded, key)
+ }
+ }
+ return undecoded
+}
+
+// Key represents any TOML key, including key groups. Use [MetaData.Keys] to get
+// values of this type.
+type Key []string
+
+func (k Key) String() string {
+ ss := make([]string, len(k))
+ for i := range k {
+ ss[i] = k.maybeQuoted(i)
+ }
+ return strings.Join(ss, ".")
+}
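+
+// For example, Key{"a", "b.c"}.String() returns `a."b.c"`: the second part
+// contains a '.', so maybeQuoted wraps it in double quotes.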
+
+func (k Key) maybeQuoted(i int) string {
+ if k[i] == "" {
+ return `""`
+ }
+ for _, c := range k[i] {
+ if !isBareKeyChar(c, false) {
+ return `"` + dblQuotedReplacer.Replace(k[i]) + `"`
+ }
+ }
+ return k[i]
+}
+
+func (k Key) add(piece string) Key {
+ newKey := make(Key, len(k)+1)
+ copy(newKey, k)
+ newKey[len(k)] = piece
+ return newKey
+}
diff --git a/vendor/github.com/BurntSushi/toml/parse.go b/vendor/github.com/BurntSushi/toml/parse.go
new file mode 100644
index 000000000..9c1915369
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/parse.go
@@ -0,0 +1,811 @@
+package toml
+
+import (
+ "fmt"
+ "os"
+ "strconv"
+ "strings"
+ "time"
+ "unicode/utf8"
+
+ "github.com/BurntSushi/toml/internal"
+)
+
+type parser struct {
+ lx *lexer
+ context Key // Full key for the current hash in scope.
+ currentKey string // Base key name for everything except hashes.
+ pos Position // Current position in the TOML file.
+ tomlNext bool
+
+ ordered []Key // List of keys in the order that they appear in the TOML data.
+
+ keyInfo map[string]keyInfo // Map keyname → info about the TOML key.
+ mapping map[string]interface{} // Map keyname → key value.
+ implicits map[string]struct{} // Record implicit keys (e.g. "key.group.names").
+}
+
+type keyInfo struct {
+ pos Position
+ tomlType tomlType
+}
+
+func parse(data string) (p *parser, err error) {
+ _, tomlNext := os.LookupEnv("BURNTSUSHI_TOML_110")
+
+ defer func() {
+ if r := recover(); r != nil {
+ if pErr, ok := r.(ParseError); ok {
+ pErr.input = data
+ err = pErr
+ return
+ }
+ panic(r)
+ }
+ }()
+
+ // Read over BOM; do this here as the lexer calls utf8.DecodeRuneInString()
+ // which mangles stuff. UTF-16 BOM isn't strictly valid, but some tools add
+ // it anyway.
+ if strings.HasPrefix(data, "\xff\xfe") || strings.HasPrefix(data, "\xfe\xff") { // UTF-16
+ data = data[2:]
+ } else if strings.HasPrefix(data, "\xef\xbb\xbf") { // UTF-8
+ data = data[3:]
+ }
+
+ // Examine the first few bytes for NULL bytes; these almost certainly
+ // mean it's a UTF-16 file (ASCII characters encoded as UTF-16 have a
+ // NULL high byte). Again, do this here to avoid having to deal with
+ // UTF-8/16 stuff in the lexer.
+ ex := 6
+ if len(data) < 6 {
+ ex = len(data)
+ }
+ if i := strings.IndexRune(data[:ex], 0); i > -1 {
+ return nil, ParseError{
+ Message: "files cannot contain NULL bytes; probably using UTF-16; TOML files must be UTF-8",
+ Position: Position{Line: 1, Start: i, Len: 1},
+ Line: 1,
+ input: data,
+ }
+ }
+
+ p = &parser{
+ keyInfo: make(map[string]keyInfo),
+ mapping: make(map[string]interface{}),
+ lx: lex(data, tomlNext),
+ ordered: make([]Key, 0),
+ implicits: make(map[string]struct{}),
+ tomlNext: tomlNext,
+ }
+ for {
+ item := p.next()
+ if item.typ == itemEOF {
+ break
+ }
+ p.topLevel(item)
+ }
+
+ return p, nil
+}
+
+func (p *parser) panicErr(it item, err error) {
+ panic(ParseError{
+ err: err,
+ Position: it.pos,
+ Line: it.pos.Line,
+ LastKey: p.current(),
+ })
+}
+
+func (p *parser) panicItemf(it item, format string, v ...interface{}) {
+ panic(ParseError{
+ Message: fmt.Sprintf(format, v...),
+ Position: it.pos,
+ Line: it.pos.Line,
+ LastKey: p.current(),
+ })
+}
+
+func (p *parser) panicf(format string, v ...interface{}) {
+ panic(ParseError{
+ Message: fmt.Sprintf(format, v...),
+ Position: p.pos,
+ Line: p.pos.Line,
+ LastKey: p.current(),
+ })
+}
+
+func (p *parser) next() item {
+ it := p.lx.nextItem()
+ //fmt.Printf("ITEM %-18s line %-3d │ %q\n", it.typ, it.pos.Line, it.val)
+ if it.typ == itemError {
+ if it.err != nil {
+ panic(ParseError{
+ Position: it.pos,
+ Line: it.pos.Line,
+ LastKey: p.current(),
+ err: it.err,
+ })
+ }
+
+ p.panicItemf(it, "%s", it.val)
+ }
+ return it
+}
+
+func (p *parser) nextPos() item {
+ it := p.next()
+ p.pos = it.pos
+ return it
+}
+
+func (p *parser) bug(format string, v ...interface{}) {
+ panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
+}
+
+func (p *parser) expect(typ itemType) item {
+ it := p.next()
+ p.assertEqual(typ, it.typ)
+ return it
+}
+
+func (p *parser) assertEqual(expected, got itemType) {
+ if expected != got {
+ p.bug("Expected '%s' but got '%s'.", expected, got)
+ }
+}
+
+func (p *parser) topLevel(item item) {
+ switch item.typ {
+ case itemCommentStart: // # ..
+ p.expect(itemText)
+ case itemTableStart: // [ .. ]
+ name := p.nextPos()
+
+ var key Key
+ for ; name.typ != itemTableEnd && name.typ != itemEOF; name = p.next() {
+ key = append(key, p.keyString(name))
+ }
+ p.assertEqual(itemTableEnd, name.typ)
+
+ p.addContext(key, false)
+ p.setType("", tomlHash, item.pos)
+ p.ordered = append(p.ordered, key)
+ case itemArrayTableStart: // [[ .. ]]
+ name := p.nextPos()
+
+ var key Key
+ for ; name.typ != itemArrayTableEnd && name.typ != itemEOF; name = p.next() {
+ key = append(key, p.keyString(name))
+ }
+ p.assertEqual(itemArrayTableEnd, name.typ)
+
+ p.addContext(key, true)
+ p.setType("", tomlArrayHash, item.pos)
+ p.ordered = append(p.ordered, key)
+ case itemKeyStart: // key = ..
+ outerContext := p.context
+ /// Read all the key parts (e.g. 'a' and 'b' in 'a.b')
+ k := p.nextPos()
+ var key Key
+ for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
+ key = append(key, p.keyString(k))
+ }
+ p.assertEqual(itemKeyEnd, k.typ)
+
+ /// The current key is the last part.
+ p.currentKey = key[len(key)-1]
+
+ /// All the other parts (if any) are the context; need to set each part
+ /// as implicit.
+ context := key[:len(key)-1]
+ for i := range context {
+ p.addImplicitContext(append(p.context, context[i:i+1]...))
+ }
+ p.ordered = append(p.ordered, p.context.add(p.currentKey))
+
+ /// Set value.
+ vItem := p.next()
+ val, typ := p.value(vItem, false)
+ p.set(p.currentKey, val, typ, vItem.pos)
+
+ /// Remove the context we added (preserving any context from [tbl] lines).
+ p.context = outerContext
+ p.currentKey = ""
+ default:
+ p.bug("Unexpected type at top level: %s", item.typ)
+ }
+}
+
+// Gets a string for a key (or part of a key in a table name).
+func (p *parser) keyString(it item) string {
+ switch it.typ {
+ case itemText:
+ return it.val
+ case itemString, itemMultilineString,
+ itemRawString, itemRawMultilineString:
+ s, _ := p.value(it, false)
+ return s.(string)
+ default:
+ p.bug("Unexpected key type: %s", it.typ)
+ }
+ panic("unreachable")
+}
+
+var datetimeRepl = strings.NewReplacer(
+ "z", "Z",
+ "t", "T",
+ " ", "T")
+
+// value translates an expected value from the lexer into a Go value wrapped
+// as an empty interface.
+func (p *parser) value(it item, parentIsArray bool) (interface{}, tomlType) {
+ switch it.typ {
+ case itemString:
+ return p.replaceEscapes(it, it.val), p.typeOfPrimitive(it)
+ case itemMultilineString:
+ return p.replaceEscapes(it, p.stripEscapedNewlines(stripFirstNewline(it.val))), p.typeOfPrimitive(it)
+ case itemRawString:
+ return it.val, p.typeOfPrimitive(it)
+ case itemRawMultilineString:
+ return stripFirstNewline(it.val), p.typeOfPrimitive(it)
+ case itemInteger:
+ return p.valueInteger(it)
+ case itemFloat:
+ return p.valueFloat(it)
+ case itemBool:
+ switch it.val {
+ case "true":
+ return true, p.typeOfPrimitive(it)
+ case "false":
+ return false, p.typeOfPrimitive(it)
+ default:
+ p.bug("Expected boolean value, but got '%s'.", it.val)
+ }
+ case itemDatetime:
+ return p.valueDatetime(it)
+ case itemArray:
+ return p.valueArray(it)
+ case itemInlineTableStart:
+ return p.valueInlineTable(it, parentIsArray)
+ default:
+ p.bug("Unexpected value type: %s", it.typ)
+ }
+ panic("unreachable")
+}
+
+func (p *parser) valueInteger(it item) (interface{}, tomlType) {
+ if !numUnderscoresOK(it.val) {
+ p.panicItemf(it, "Invalid integer %q: underscores must be surrounded by digits", it.val)
+ }
+ if numHasLeadingZero(it.val) {
+ p.panicItemf(it, "Invalid integer %q: cannot have leading zeroes", it.val)
+ }
+
+ num, err := strconv.ParseInt(it.val, 0, 64)
+ if err != nil {
+ // Distinguish integer values. Normally, it'd be a bug if the lexer
+ // provides an invalid integer, but it's possible that the number is
+ // out of range of valid values (which the lexer cannot determine).
+ // So mark the former as a bug but the latter as a legitimate user
+ // error.
+ if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
+ p.panicErr(it, errParseRange{i: it.val, size: "int64"})
+ } else {
+ p.bug("Expected integer value, but got '%s'.", it.val)
+ }
+ }
+ return num, p.typeOfPrimitive(it)
+}
+
+func (p *parser) valueFloat(it item) (interface{}, tomlType) {
+ parts := strings.FieldsFunc(it.val, func(r rune) bool {
+ switch r {
+ case '.', 'e', 'E':
+ return true
+ }
+ return false
+ })
+ for _, part := range parts {
+ if !numUnderscoresOK(part) {
+ p.panicItemf(it, "Invalid float %q: underscores must be surrounded by digits", it.val)
+ }
+ }
+ if len(parts) > 0 && numHasLeadingZero(parts[0]) {
+ p.panicItemf(it, "Invalid float %q: cannot have leading zeroes", it.val)
+ }
+ if !numPeriodsOK(it.val) {
+ // As a special case, numbers like '123.' or '1.e2',
+ // which are valid as far as Go/strconv are concerned,
+ // must be rejected because TOML says that a fractional
+ // part consists of '.' followed by 1+ digits.
+ p.panicItemf(it, "Invalid float %q: '.' must be followed by one or more digits", it.val)
+ }
+ val := strings.Replace(it.val, "_", "", -1)
+ if val == "+nan" || val == "-nan" { // Go doesn't support this, but TOML spec does.
+ val = "nan"
+ }
+ num, err := strconv.ParseFloat(val, 64)
+ if err != nil {
+ if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
+ p.panicErr(it, errParseRange{i: it.val, size: "float64"})
+ } else {
+ p.panicItemf(it, "Invalid float value: %q", it.val)
+ }
+ }
+ return num, p.typeOfPrimitive(it)
+}
+
+var dtTypes = []struct {
+ fmt string
+ zone *time.Location
+ next bool
+}{
+ {time.RFC3339Nano, time.Local, false},
+ {"2006-01-02T15:04:05.999999999", internal.LocalDatetime, false},
+ {"2006-01-02", internal.LocalDate, false},
+ {"15:04:05.999999999", internal.LocalTime, false},
+
+ // tomlNext
+ {"2006-01-02T15:04Z07:00", time.Local, true},
+ {"2006-01-02T15:04", internal.LocalDatetime, true},
+ {"15:04", internal.LocalTime, true},
+}
+
+func (p *parser) valueDatetime(it item) (interface{}, tomlType) {
+ it.val = datetimeRepl.Replace(it.val)
+ var (
+ t time.Time
+ ok bool
+ err error
+ )
+ for _, dt := range dtTypes {
+ if dt.next && !p.tomlNext {
+ continue
+ }
+ t, err = time.ParseInLocation(dt.fmt, it.val, dt.zone)
+ if err == nil {
+ ok = true
+ break
+ }
+ }
+ if !ok {
+ p.panicItemf(it, "Invalid TOML Datetime: %q.", it.val)
+ }
+ return t, p.typeOfPrimitive(it)
+}
+
+func (p *parser) valueArray(it item) (interface{}, tomlType) {
+ p.setType(p.currentKey, tomlArray, it.pos)
+
+ var (
+ types []tomlType
+
+ // Initialize to a non-nil empty slice. This makes it consistent with
+ // how S = [] decodes into a non-nil slice inside something like struct
+ // { S []string }. See #338
+ array = []interface{}{}
+ )
+ for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
+ if it.typ == itemCommentStart {
+ p.expect(itemText)
+ continue
+ }
+
+ val, typ := p.value(it, true)
+ array = append(array, val)
+ types = append(types, typ)
+
+ // XXX: types isn't used here, we need it to record the accurate type
+ // information.
+ //
+ // Not entirely sure how to best store this; could use "key[0]",
+ // "key[1]" notation, or maybe store it on the Array type?
+ _ = types
+ }
+ return array, tomlArray
+}
+
+func (p *parser) valueInlineTable(it item, parentIsArray bool) (interface{}, tomlType) {
+ var (
+ hash = make(map[string]interface{})
+ outerContext = p.context
+ outerKey = p.currentKey
+ )
+
+ p.context = append(p.context, p.currentKey)
+ prevContext := p.context
+ p.currentKey = ""
+
+ p.addImplicit(p.context)
+ p.addContext(p.context, parentIsArray)
+
+ /// Loop over all table key/value pairs.
+ for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
+ if it.typ == itemCommentStart {
+ p.expect(itemText)
+ continue
+ }
+
+ /// Read all key parts.
+ k := p.nextPos()
+ var key Key
+ for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
+ key = append(key, p.keyString(k))
+ }
+ p.assertEqual(itemKeyEnd, k.typ)
+
+ /// The current key is the last part.
+ p.currentKey = key[len(key)-1]
+
+ /// All the other parts (if any) are the context; need to set each part
+ /// as implicit.
+ context := key[:len(key)-1]
+ for i := range context {
+ p.addImplicitContext(append(p.context, context[i:i+1]...))
+ }
+ p.ordered = append(p.ordered, p.context.add(p.currentKey))
+
+ /// Set the value.
+ val, typ := p.value(p.next(), false)
+ p.set(p.currentKey, val, typ, it.pos)
+ hash[p.currentKey] = val
+
+ /// Restore context.
+ p.context = prevContext
+ }
+ p.context = outerContext
+ p.currentKey = outerKey
+ return hash, tomlHash
+}
+
+// numHasLeadingZero checks if this number has leading zeroes, allowing for '0',
+// +/- signs, and base prefixes.
+func numHasLeadingZero(s string) bool {
+ if len(s) > 1 && s[0] == '0' && !(s[1] == 'b' || s[1] == 'o' || s[1] == 'x') { // Allow 0b, 0o, 0x
+ return true
+ }
+ if len(s) > 2 && (s[0] == '-' || s[0] == '+') && s[1] == '0' {
+ return true
+ }
+ return false
+}
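+
+// For example: numHasLeadingZero("03") and numHasLeadingZero("+03") are
+// true, while "0", "-0", and "0x1" (a base prefix) are all fine.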
+
+// numUnderscoresOK checks whether each underscore in s is surrounded by
+// characters that are not underscores.
+func numUnderscoresOK(s string) bool {
+ switch s {
+ case "nan", "+nan", "-nan", "inf", "-inf", "+inf":
+ return true
+ }
+ accept := false
+ for _, r := range s {
+ if r == '_' {
+ if !accept {
+ return false
+ }
+ }
+
+ // isHexadecimal is a superset of all the permissible characters
+ // surrounding an underscore.
+ accept = isHexadecimal(r)
+ }
+ return accept
+}
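+
+// For example: "1_000" is accepted, while "_1" (leading), "1__0"
+// (adjacent), and "1_" (trailing) are all rejected.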
+
+// numPeriodsOK checks whether every period in s is followed by a digit.
+func numPeriodsOK(s string) bool {
+ period := false
+ for _, r := range s {
+ if period && !isDigit(r) {
+ return false
+ }
+ period = r == '.'
+ }
+ return !period
+}
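+
+// For example: "3.14" is accepted, while "3." (trailing period) and
+// "3.e2" (period not followed by a digit) are rejected.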
+
+// Set the current context of the parser, where the context is either a hash or
+// an array of hashes, depending on the value of the `array` parameter.
+//
+// Establishing the context also makes sure that the key isn't a duplicate, and
+// will create implicit hashes automatically.
+func (p *parser) addContext(key Key, array bool) {
+ var ok bool
+
+ // Always start at the top level and drill down for our context.
+ hashContext := p.mapping
+ keyContext := make(Key, 0)
+
+ // We only need implicit hashes for key[0:-1]
+ for _, k := range key[0 : len(key)-1] {
+ _, ok = hashContext[k]
+ keyContext = append(keyContext, k)
+
+ // No key? Make an implicit hash and move on.
+ if !ok {
+ p.addImplicit(keyContext)
+ hashContext[k] = make(map[string]interface{})
+ }
+
+ // If the hash context is actually an array of tables, then set
+ // the hash context to the last element in that array.
+ //
+ // Otherwise, it better be a table, since this MUST be a key group (by
+ // virtue of it not being the last element in a key).
+ switch t := hashContext[k].(type) {
+ case []map[string]interface{}:
+ hashContext = t[len(t)-1]
+ case map[string]interface{}:
+ hashContext = t
+ default:
+ p.panicf("Key '%s' was already created as a hash.", keyContext)
+ }
+ }
+
+ p.context = keyContext
+ if array {
+ // If this is the first element for this array, then allocate a new
+ // list of tables for it.
+ k := key[len(key)-1]
+ if _, ok := hashContext[k]; !ok {
+ hashContext[k] = make([]map[string]interface{}, 0, 4)
+ }
+
+ // Add a new table. But make sure the key hasn't already been used
+ // for something else.
+ if hash, ok := hashContext[k].([]map[string]interface{}); ok {
+ hashContext[k] = append(hash, make(map[string]interface{}))
+ } else {
+ p.panicf("Key '%s' was already created and cannot be used as an array.", key)
+ }
+ } else {
+ p.setValue(key[len(key)-1], make(map[string]interface{}))
+ }
+ p.context = append(p.context, key[len(key)-1])
+}
+
+// set calls setValue and setType.
+func (p *parser) set(key string, val interface{}, typ tomlType, pos Position) {
+ p.setValue(key, val)
+ p.setType(key, typ, pos)
+}
+
+// setValue sets the given key to the given value in the current context.
+// It will make sure that the key hasn't already been defined, accounting
+// for implicit key groups.
+func (p *parser) setValue(key string, value interface{}) {
+ var (
+ tmpHash interface{}
+ ok bool
+ hash = p.mapping
+ keyContext Key
+ )
+ for _, k := range p.context {
+ keyContext = append(keyContext, k)
+ if tmpHash, ok = hash[k]; !ok {
+ p.bug("Context for key '%s' has not been established.", keyContext)
+ }
+ switch t := tmpHash.(type) {
+ case []map[string]interface{}:
+ // The context is a table of hashes. Pick the most recent table
+ // defined as the current hash.
+ hash = t[len(t)-1]
+ case map[string]interface{}:
+ hash = t
+ default:
+ p.panicf("Key '%s' has already been defined.", keyContext)
+ }
+ }
+ keyContext = append(keyContext, key)
+
+ if _, ok := hash[key]; ok {
+ // Normally redefining keys isn't allowed, but the key could have been
+ // defined implicitly and it's allowed to be redefined concretely. (See
+ // the `valid/implicit-and-explicit-after.toml` in toml-test)
+ //
+ // But we have to make sure to stop marking it as an implicit. (So that
+ // another redefinition provokes an error.)
+ //
+ // Note that since it has already been defined (as a hash), we don't
+ // want to overwrite it. So our business is done.
+ if p.isArray(keyContext) {
+ p.removeImplicit(keyContext)
+ hash[key] = value
+ return
+ }
+ if p.isImplicit(keyContext) {
+ p.removeImplicit(keyContext)
+ return
+ }
+
+ // Otherwise, we have a concrete key trying to override a previous
+ // key, which is *always* wrong.
+ p.panicf("Key '%s' has already been defined.", keyContext)
+ }
+
+ hash[key] = value
+}
+
+// setType sets the type of a particular value at a given key. It should be
+// called immediately AFTER setValue.
+//
+// Note that if `key` is empty, then the type given will be applied to the
+// current context (which is either a table or an array of tables).
+func (p *parser) setType(key string, typ tomlType, pos Position) {
+ keyContext := make(Key, 0, len(p.context)+1)
+ keyContext = append(keyContext, p.context...)
+ if len(key) > 0 { // allow type setting for hashes
+ keyContext = append(keyContext, key)
+ }
+ // Special case to make empty keys ("" = 1) work.
+ // Without it, it will set "" rather than `""`.
+ // TODO: why is this needed? And why is this only needed here?
+ if len(keyContext) == 0 {
+ keyContext = Key{""}
+ }
+ p.keyInfo[keyContext.String()] = keyInfo{tomlType: typ, pos: pos}
+}
+
+// Implicit keys need to be created when tables are implied in "a.b.c.d = 1" and
+// "[a.b.c]" (the "a", "b", and "c" hashes are never created explicitly).
+func (p *parser) addImplicit(key Key) { p.implicits[key.String()] = struct{}{} }
+func (p *parser) removeImplicit(key Key) { delete(p.implicits, key.String()) }
+func (p *parser) isImplicit(key Key) bool { _, ok := p.implicits[key.String()]; return ok }
+func (p *parser) isArray(key Key) bool { return p.keyInfo[key.String()].tomlType == tomlArray }
+func (p *parser) addImplicitContext(key Key) { p.addImplicit(key); p.addContext(key, false) }
+
+// current returns the full key name of the current context.
+func (p *parser) current() string {
+ if len(p.currentKey) == 0 {
+ return p.context.String()
+ }
+ if len(p.context) == 0 {
+ return p.currentKey
+ }
+ return fmt.Sprintf("%s.%s", p.context, p.currentKey)
+}
+
+func stripFirstNewline(s string) string {
+ if len(s) > 0 && s[0] == '\n' {
+ return s[1:]
+ }
+ if len(s) > 1 && s[0] == '\r' && s[1] == '\n' {
+ return s[2:]
+ }
+ return s
+}
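+
+// For example, stripFirstNewline("\nabc") and stripFirstNewline("\r\nabc")
+// both return "abc"; any other leading byte is left untouched.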
+
+// stripEscapedNewlines removes whitespace after line-ending backslashes in
+// multiline strings.
+//
+// A line-ending backslash is an unescaped \ followed only by whitespace until
+// the next newline. After a line-ending backslash, all whitespace is removed
+// until the next non-whitespace character.
+func (p *parser) stripEscapedNewlines(s string) string {
+ var b strings.Builder
+ var i int
+ for {
+ ix := strings.Index(s[i:], `\`)
+ if ix < 0 {
+ b.WriteString(s)
+ return b.String()
+ }
+ i += ix
+
+ if len(s) > i+1 && s[i+1] == '\\' {
+ // Escaped backslash.
+ i += 2
+ continue
+ }
+ // Scan until the next non-whitespace.
+ j := i + 1
+ whitespaceLoop:
+ for ; j < len(s); j++ {
+ switch s[j] {
+ case ' ', '\t', '\r', '\n':
+ default:
+ break whitespaceLoop
+ }
+ }
+ if j == i+1 {
+ // Not a whitespace escape.
+ i++
+ continue
+ }
+ if !strings.Contains(s[i:j], "\n") {
+ // This is not a line-ending backslash.
+ // (It's a bad escape sequence, but we can let
+ // replaceEscapes catch it.)
+ i++
+ continue
+ }
+ b.WriteString(s[:i])
+ s = s[j:]
+ i = 0
+ }
+}
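+
+// For example, the body "a \\\n   b" (a backslash ending the line)
+// collapses to "a b": the backslash, the newline, and the indentation
+// before 'b' are all removed.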
+
+func (p *parser) replaceEscapes(it item, str string) string {
+ replaced := make([]rune, 0, len(str))
+ s := []byte(str)
+ r := 0
+ for r < len(s) {
+ if s[r] != '\\' {
+ c, size := utf8.DecodeRune(s[r:])
+ r += size
+ replaced = append(replaced, c)
+ continue
+ }
+ r += 1
+ if r >= len(s) {
+ p.bug("Escape sequence at end of string.")
+ return ""
+ }
+ switch s[r] {
+ default:
+ p.bug("Expected valid escape code after \\, but got %q.", s[r])
+ case ' ', '\t':
+ p.panicItemf(it, "invalid escape: '\\%c'", s[r])
+ case 'b':
+ replaced = append(replaced, rune(0x0008))
+ r += 1
+ case 't':
+ replaced = append(replaced, rune(0x0009))
+ r += 1
+ case 'n':
+ replaced = append(replaced, rune(0x000A))
+ r += 1
+ case 'f':
+ replaced = append(replaced, rune(0x000C))
+ r += 1
+ case 'r':
+ replaced = append(replaced, rune(0x000D))
+ r += 1
+ case 'e':
+ if p.tomlNext {
+ replaced = append(replaced, rune(0x001B))
+ r += 1
+ }
+ case '"':
+ replaced = append(replaced, rune(0x0022))
+ r += 1
+ case '\\':
+ replaced = append(replaced, rune(0x005C))
+ r += 1
+ case 'x':
+ if p.tomlNext {
+ escaped := p.asciiEscapeToUnicode(it, s[r+1:r+3])
+ replaced = append(replaced, escaped)
+ r += 3
+ }
+ case 'u':
+ // At this point, we know we have a Unicode escape of the form
+ // `uXXXX` at [r, r+5). (Because the lexer guarantees this
+ // for us.)
+ escaped := p.asciiEscapeToUnicode(it, s[r+1:r+5])
+ replaced = append(replaced, escaped)
+ r += 5
+ case 'U':
+ // At this point, we know we have a Unicode escape of the form
+ // `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
+ // for us.)
+ escaped := p.asciiEscapeToUnicode(it, s[r+1:r+9])
+ replaced = append(replaced, escaped)
+ r += 9
+ }
+ }
+ return string(replaced)
+}
+
+func (p *parser) asciiEscapeToUnicode(it item, bs []byte) rune {
+ s := string(bs)
+ hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
+ if err != nil {
+ p.bug("Could not parse '%s' as a hexadecimal number, but the lexer claims it's OK: %s", s, err)
+ }
+ if !utf8.ValidRune(rune(hex)) {
+ p.panicItemf(it, "Escaped character '\\u%s' is not valid UTF-8.", s)
+ }
+ return rune(hex)
+}
diff --git a/vendor/github.com/BurntSushi/toml/type_fields.go b/vendor/github.com/BurntSushi/toml/type_fields.go
new file mode 100644
index 000000000..254ca82e5
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/type_fields.go
@@ -0,0 +1,242 @@
+package toml
+
+// Struct field handling is adapted from code in encoding/json:
+//
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the Go distribution.
+
+import (
+ "reflect"
+ "sort"
+ "sync"
+)
+
+// A field represents a single field found in a struct.
+type field struct {
+ name string // the name of the field (`toml` tag included)
+ tag bool // whether field has a `toml` tag
+ index []int // represents the depth of an anonymous field
+ typ reflect.Type // the type of the field
+}
+
+// byName sorts field by name, breaking ties with depth,
+// then breaking ties with "name came from toml tag", then
+// breaking ties with index sequence.
+type byName []field
+
+func (x byName) Len() int { return len(x) }
+
+func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
+
+func (x byName) Less(i, j int) bool {
+ if x[i].name != x[j].name {
+ return x[i].name < x[j].name
+ }
+ if len(x[i].index) != len(x[j].index) {
+ return len(x[i].index) < len(x[j].index)
+ }
+ if x[i].tag != x[j].tag {
+ return x[i].tag
+ }
+ return byIndex(x).Less(i, j)
+}
+
+// byIndex sorts field by index sequence.
+type byIndex []field
+
+func (x byIndex) Len() int { return len(x) }
+
+func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
+
+func (x byIndex) Less(i, j int) bool {
+ for k, xik := range x[i].index {
+ if k >= len(x[j].index) {
+ return false
+ }
+ if xik != x[j].index[k] {
+ return xik < x[j].index[k]
+ }
+ }
+ return len(x[i].index) < len(x[j].index)
+}
+
+// typeFields returns a list of fields that TOML should recognize for the given
+// type. The algorithm is breadth-first search over the set of structs to
+// include - the top struct and then any reachable anonymous structs.
+func typeFields(t reflect.Type) []field {
+ // Anonymous fields to explore at the current level and the next.
+ current := []field{}
+ next := []field{{typ: t}}
+
+ // Count of queued names for current level and the next.
+ var count map[reflect.Type]int
+ var nextCount map[reflect.Type]int
+
+ // Types already visited at an earlier level.
+ visited := map[reflect.Type]bool{}
+
+ // Fields found.
+ var fields []field
+
+ for len(next) > 0 {
+ current, next = next, current[:0]
+ count, nextCount = nextCount, map[reflect.Type]int{}
+
+ for _, f := range current {
+ if visited[f.typ] {
+ continue
+ }
+ visited[f.typ] = true
+
+ // Scan f.typ for fields to include.
+ for i := 0; i < f.typ.NumField(); i++ {
+ sf := f.typ.Field(i)
+ if sf.PkgPath != "" && !sf.Anonymous { // unexported
+ continue
+ }
+ opts := getOptions(sf.Tag)
+ if opts.skip {
+ continue
+ }
+ index := make([]int, len(f.index)+1)
+ copy(index, f.index)
+ index[len(f.index)] = i
+
+ ft := sf.Type
+ if ft.Name() == "" && ft.Kind() == reflect.Ptr {
+ // Follow pointer.
+ ft = ft.Elem()
+ }
+
+ // Record found field and index sequence.
+ if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
+ tagged := opts.name != ""
+ name := opts.name
+ if name == "" {
+ name = sf.Name
+ }
+ fields = append(fields, field{name, tagged, index, ft})
+ if count[f.typ] > 1 {
+ // If there were multiple instances, add a second,
+ // so that the annihilation code will see a duplicate.
+ // It only cares about the distinction between 1 or 2,
+ // so don't bother generating any more copies.
+ fields = append(fields, fields[len(fields)-1])
+ }
+ continue
+ }
+
+ // Record new anonymous struct to explore in next round.
+ nextCount[ft]++
+ if nextCount[ft] == 1 {
+ f := field{name: ft.Name(), index: index, typ: ft}
+ next = append(next, f)
+ }
+ }
+ }
+ }
+
+ sort.Sort(byName(fields))
+
+ // Delete all fields that are hidden by the Go rules for embedded fields,
+ // except that fields with TOML tags are promoted.
+
+ // The fields are sorted in primary order of name, secondary order
+ // of field index length. Loop over names; for each name, delete
+ // hidden fields by choosing the one dominant field that survives.
+ out := fields[:0]
+ for advance, i := 0, 0; i < len(fields); i += advance {
+ // One iteration per name.
+ // Find the sequence of fields with the name of this first field.
+ fi := fields[i]
+ name := fi.name
+ for advance = 1; i+advance < len(fields); advance++ {
+ fj := fields[i+advance]
+ if fj.name != name {
+ break
+ }
+ }
+ if advance == 1 { // Only one field with this name
+ out = append(out, fi)
+ continue
+ }
+ dominant, ok := dominantField(fields[i : i+advance])
+ if ok {
+ out = append(out, dominant)
+ }
+ }
+
+ fields = out
+ sort.Sort(byIndex(fields))
+
+ return fields
+}
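+
+// As a sketch of the resolution rules (hypothetical types):
+//
+//	type Inner struct{ A int }
+//	type Outer struct {
+//		Inner     // embedded: "A" is promoted to the top level
+//		B int `toml:"b"`
+//	}
+//
+// typeFields(reflect.TypeOf(Outer{})) yields "A" (index [0 0]) and "b"
+// (index [1], tagged).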
+
+// dominantField looks through the fields, all of which are known to
+// have the same name, to find the single field that dominates the
+// others using Go's embedding rules, modified by the presence of
+// TOML tags. If there are multiple top-level fields, the boolean
+// will be false: This condition is an error in Go and we skip all
+// the fields.
+func dominantField(fields []field) (field, bool) {
+ // The fields are sorted in increasing index-length order. The winner
+ // must therefore be one with the shortest index length. Drop all
+ // longer entries, which is easy: just truncate the slice.
+ length := len(fields[0].index)
+ tagged := -1 // Index of first tagged field.
+ for i, f := range fields {
+ if len(f.index) > length {
+ fields = fields[:i]
+ break
+ }
+ if f.tag {
+ if tagged >= 0 {
+ // Multiple tagged fields at the same level: conflict.
+ // Return no field.
+ return field{}, false
+ }
+ tagged = i
+ }
+ }
+ if tagged >= 0 {
+ return fields[tagged], true
+ }
+ // All remaining fields have the same length. If there's more than one,
+ // we have a conflict (two fields named "X" at the same level) and we
+ // return no field.
+ if len(fields) > 1 {
+ return field{}, false
+ }
+ return fields[0], true
+}
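+
+// For example, if two embedded structs each contribute an untagged field
+// named "X" at the same depth, dominantField reports a conflict and the
+// field is dropped entirely, mirroring encoding/json.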
+
+var fieldCache struct {
+ sync.RWMutex
+ m map[reflect.Type][]field
+}
+
+// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
+func cachedTypeFields(t reflect.Type) []field {
+ fieldCache.RLock()
+ f := fieldCache.m[t]
+ fieldCache.RUnlock()
+ if f != nil {
+ return f
+ }
+
+ // Compute fields without lock.
+ // Might duplicate effort but won't hold other computations back.
+ f = typeFields(t)
+ if f == nil {
+ f = []field{}
+ }
+
+ fieldCache.Lock()
+ if fieldCache.m == nil {
+ fieldCache.m = map[reflect.Type][]field{}
+ }
+ fieldCache.m[t] = f
+ fieldCache.Unlock()
+ return f
+}
diff --git a/vendor/github.com/BurntSushi/toml/type_toml.go b/vendor/github.com/BurntSushi/toml/type_toml.go
new file mode 100644
index 000000000..4e90d7737
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/type_toml.go
@@ -0,0 +1,70 @@
+package toml
+
+// tomlType represents any Go type that corresponds to a TOML type.
+// While the first draft of the TOML spec has a simplistic type system that
+// probably doesn't need this level of sophistication, we seem to be moving
+// toward adding real composite types.
+type tomlType interface {
+ typeString() string
+}
+
+// typeEqual accepts any two types and returns true if they are equal.
+func typeEqual(t1, t2 tomlType) bool {
+ if t1 == nil || t2 == nil {
+ return false
+ }
+ return t1.typeString() == t2.typeString()
+}
+
+func typeIsTable(t tomlType) bool {
+ return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
+}
+
+type tomlBaseType string
+
+func (btype tomlBaseType) typeString() string {
+ return string(btype)
+}
+
+func (btype tomlBaseType) String() string {
+ return btype.typeString()
+}
+
+var (
+ tomlInteger tomlBaseType = "Integer"
+ tomlFloat tomlBaseType = "Float"
+ tomlDatetime tomlBaseType = "Datetime"
+ tomlString tomlBaseType = "String"
+ tomlBool tomlBaseType = "Bool"
+ tomlArray tomlBaseType = "Array"
+ tomlHash tomlBaseType = "Hash"
+ tomlArrayHash tomlBaseType = "ArrayHash"
+)
+
+// typeOfPrimitive returns a tomlType of any primitive value in TOML.
+// Primitive values are: Integer, Float, Datetime, String and Bool.
+//
+// Passing a lexer item other than the following will cause a BUG message
+// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
+func (p *parser) typeOfPrimitive(lexItem item) tomlType {
+ switch lexItem.typ {
+ case itemInteger:
+ return tomlInteger
+ case itemFloat:
+ return tomlFloat
+ case itemDatetime:
+ return tomlDatetime
+ case itemString:
+ return tomlString
+ case itemMultilineString:
+ return tomlString
+ case itemRawString:
+ return tomlString
+ case itemRawMultilineString:
+ return tomlString
+ case itemBool:
+ return tomlBool
+ }
+ p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
+ panic("unreachable")
+}
diff --git a/vendor/github.com/Masterminds/goutils/.travis.yml b/vendor/github.com/Masterminds/goutils/.travis.yml
new file mode 100644
index 000000000..4025e01ec
--- /dev/null
+++ b/vendor/github.com/Masterminds/goutils/.travis.yml
@@ -0,0 +1,18 @@
+language: go
+
+go:
+ - 1.6
+ - 1.7
+ - 1.8
+ - tip
+
+script:
+ - go test -v
+
+notifications:
+ webhooks:
+ urls:
+ - https://webhooks.gitter.im/e/06e3328629952dabe3e0
+ on_success: change # options: [always|never|change] default: always
+ on_failure: always # options: [always|never|change] default: always
+ on_start: never # options: [always|never|change] default: always
diff --git a/vendor/github.com/Masterminds/goutils/CHANGELOG.md b/vendor/github.com/Masterminds/goutils/CHANGELOG.md
new file mode 100644
index 000000000..d700ec47f
--- /dev/null
+++ b/vendor/github.com/Masterminds/goutils/CHANGELOG.md
@@ -0,0 +1,8 @@
+# 1.0.1 (2017-05-31)
+
+## Fixed
+- #21: Fix generation of alphanumeric strings (thanks @dbarranco)
+
+# 1.0.0 (2014-04-30)
+
+- Initial release.
diff --git a/vendor/github.com/Masterminds/goutils/LICENSE.txt b/vendor/github.com/Masterminds/goutils/LICENSE.txt
new file mode 100644
index 000000000..d64569567
--- /dev/null
+++ b/vendor/github.com/Masterminds/goutils/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/Masterminds/goutils/README.md b/vendor/github.com/Masterminds/goutils/README.md
new file mode 100644
index 000000000..163ffe72a
--- /dev/null
+++ b/vendor/github.com/Masterminds/goutils/README.md
@@ -0,0 +1,70 @@
+GoUtils
+===========
+[![Stability: Maintenance](https://masterminds.github.io/stability/maintenance.svg)](https://masterminds.github.io/stability/maintenance.html)
+[![GoDoc](https://godoc.org/github.com/Masterminds/goutils?status.png)](https://godoc.org/github.com/Masterminds/goutils) [![Build Status](https://travis-ci.org/Masterminds/goutils.svg?branch=master)](https://travis-ci.org/Masterminds/goutils) [![Build status](https://ci.appveyor.com/api/projects/status/sc2b1ew0m7f0aiju?svg=true)](https://ci.appveyor.com/project/mattfarina/goutils)
+
+
+GoUtils provides utility functions to manipulate strings in various ways. It is a Go implementation of some of the
+string manipulation libraries from Java's Apache Commons. GoUtils includes the following Apache Commons classes:
+* WordUtils
+* RandomStringUtils
+* StringUtils (partial implementation)
+
+## Installation
+If you have Go set up on your system, from the GOPATH directory within the command line/terminal, enter this:
+
+ go get github.com/Masterminds/goutils
+
+If you do not have Go set up on your system, please follow the [Go installation directions from the documentation](http://golang.org/doc/install), and then follow the instructions above to install GoUtils.
+
+
+## Documentation
+GoUtils doc is available here: [![GoDoc](https://godoc.org/github.com/Masterminds/goutils?status.png)](https://godoc.org/github.com/Masterminds/goutils)
+
+
+## Usage
+The code snippets below show examples of how to use GoUtils. Some functions return errors while others do not. The first instance below, which does not return an error, is the `Initials` function (located within the `wordutils.go` file).
+
+ package main
+
+ import (
+ "fmt"
+ "github.com/Masterminds/goutils"
+ )
+
+ func main() {
+
+ // EXAMPLE 1: A goutils function which returns no errors
+ fmt.Println(goutils.Initials("John Doe Foo")) // Prints out "JDF"
+
+ }
+Some functions return errors, mainly due to illegal arguments passed as parameters. The code example below illustrates how to handle a function that returns an error. In this instance, the function is the `Random` function (located within the `randomstringutils.go` file).
+
+ package main
+
+ import (
+ "fmt"
+ "github.com/Masterminds/goutils"
+ )
+
+ func main() {
+
+ // EXAMPLE 2: A goutils function which returns an error
+ rand1, err1 := goutils.Random(-1, 0, 0, true, true)
+
+ if err1 != nil {
+ fmt.Println(err1) // Prints out error message because -1 was entered as the first parameter in goutils.Random(...)
+ } else {
+ fmt.Println(rand1)
+ }
+
+ }
+
+## License
+GoUtils is licensed under the Apache License, Version 2.0. Please check the LICENSE.txt file or visit http://www.apache.org/licenses/LICENSE-2.0 for a copy of the license.
+
+## Issue Reporting
+Make suggestions or report issues using the Git issue tracker: https://github.com/Masterminds/goutils/issues
+
+## Website
+* [GoUtils webpage](http://Masterminds.github.io/goutils/)
diff --git a/vendor/github.com/Masterminds/goutils/appveyor.yml b/vendor/github.com/Masterminds/goutils/appveyor.yml
new file mode 100644
index 000000000..657564a84
--- /dev/null
+++ b/vendor/github.com/Masterminds/goutils/appveyor.yml
@@ -0,0 +1,21 @@
+version: build-{build}.{branch}
+
+clone_folder: C:\gopath\src\github.com\Masterminds\goutils
+shallow_clone: true
+
+environment:
+ GOPATH: C:\gopath
+
+platform:
+ - x64
+
+build: off
+
+install:
+ - go version
+ - go env
+
+test_script:
+ - go test -v
+
+deploy: off
diff --git a/vendor/github.com/Masterminds/goutils/cryptorandomstringutils.go b/vendor/github.com/Masterminds/goutils/cryptorandomstringutils.go
new file mode 100644
index 000000000..8dbd92485
--- /dev/null
+++ b/vendor/github.com/Masterminds/goutils/cryptorandomstringutils.go
@@ -0,0 +1,230 @@
+/*
+Copyright 2014 Alexander Okoli
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package goutils
+
+import (
+ "crypto/rand"
+ "fmt"
+ "math"
+ "math/big"
+ "unicode"
+)
+
+/*
+CryptoRandomNonAlphaNumeric creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of all characters (ASCII/Unicode values between 0 and 2,147,483,647 (math.MaxInt32)).
+
+Parameter:
+ count - the length of random string to create
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
+*/
+func CryptoRandomNonAlphaNumeric(count int) (string, error) {
+ return CryptoRandomAlphaNumericCustom(count, false, false)
+}
+
+/*
+CryptoRandomAscii creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of characters whose ASCII value is between 32 and 126 (inclusive).
+
+Parameter:
+ count - the length of random string to create
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
+*/
+func CryptoRandomAscii(count int) (string, error) {
+ return CryptoRandom(count, 32, 127, false, false)
+}
+
+/*
+CryptoRandomNumeric creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of numeric characters.
+
+Parameter:
+ count - the length of random string to create
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
+*/
+func CryptoRandomNumeric(count int) (string, error) {
+ return CryptoRandom(count, 0, 0, false, true)
+}
+
+/*
+CryptoRandomAlphabetic creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of alphabetic characters.
+
+Parameter:
+ count - the length of random string to create
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
+*/
+func CryptoRandomAlphabetic(count int) (string, error) {
+ return CryptoRandom(count, 0, 0, true, false)
+}
+
+/*
+CryptoRandomAlphaNumeric creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of alpha-numeric characters.
+
+Parameter:
+ count - the length of random string to create
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
+*/
+func CryptoRandomAlphaNumeric(count int) (string, error) {
+ return CryptoRandom(count, 0, 0, true, true)
+}
+
+/*
+CryptoRandomAlphaNumericCustom creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of alpha-numeric characters as indicated by the arguments.
+
+Parameters:
+ count - the length of random string to create
+ letters - if true, generated string may include alphabetic characters
+ numbers - if true, generated string may include numeric characters
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
+*/
+func CryptoRandomAlphaNumericCustom(count int, letters bool, numbers bool) (string, error) {
+ return CryptoRandom(count, 0, 0, letters, numbers)
+}
+
+/*
+CryptoRandom creates a random string based on a variety of options, using golang's crypto/rand source of randomness.
+If the parameters start and end are both 0, they default to ' ' and 'z' respectively, meaning the ASCII printable characters will be used,
+unless letters and numbers are both false, in which case, start and end are set to 0 and math.MaxInt32, respectively.
+If chars is not nil, characters stored in chars that are between start and end are chosen.
+
+Parameters:
+ count - the length of random string to create
+ start - the position in set of chars (ASCII/Unicode int) to start at
+ end - the position in set of chars (ASCII/Unicode int) to end before
+ letters - if true, generated string may include alphabetic characters
+ numbers - if true, generated string may include numeric characters
+ chars - the set of chars to choose random characters from. If nil, the set of all chars is used.
+
+Returns:
+ string - the random string
+ error - an error stemming from invalid parameters: if count < 0; or the provided chars array is empty; or end <= start; or end > len(chars)
+*/
+func CryptoRandom(count int, start int, end int, letters bool, numbers bool, chars ...rune) (string, error) {
+ if count == 0 {
+ return "", nil
+ } else if count < 0 {
+ err := fmt.Errorf("randomstringutils illegal argument: Requested random string length %v is less than 0.", count) // equiv to err := errors.New("...")
+ return "", err
+ }
+ if chars != nil && len(chars) == 0 {
+ err := fmt.Errorf("randomstringutils illegal argument: The chars array must not be empty")
+ return "", err
+ }
+
+ if start == 0 && end == 0 {
+ if chars != nil {
+ end = len(chars)
+ } else {
+ if !letters && !numbers {
+ end = math.MaxInt32
+ } else {
+ end = 'z' + 1
+ start = ' '
+ }
+ }
+ } else {
+ if end <= start {
+ err := fmt.Errorf("randomstringutils illegal argument: Parameter end (%v) must be greater than start (%v)", end, start)
+ return "", err
+ }
+
+ if chars != nil && end > len(chars) {
+ err := fmt.Errorf("randomstringutils illegal argument: Parameter end (%v) cannot be greater than len(chars) (%v)", end, len(chars))
+ return "", err
+ }
+ }
+
+ buffer := make([]rune, count)
+ gap := end - start
+
+ // high-surrogates range, (\uD800-\uDBFF) = 55296 - 56319
+ // low-surrogates range, (\uDC00-\uDFFF) = 56320 - 57343
+
+ for count != 0 {
+ count--
+ var ch rune
+ if chars == nil {
+ ch = rune(getCryptoRandomInt(gap) + int64(start))
+ } else {
+ ch = chars[getCryptoRandomInt(gap)+int64(start)]
+ }
+
+ if letters && unicode.IsLetter(ch) || numbers && unicode.IsDigit(ch) || !letters && !numbers {
+ if ch >= 56320 && ch <= 57343 { // low surrogate range
+ if count == 0 {
+ count++
+ } else {
+ // Insert low surrogate
+ buffer[count] = ch
+ count--
+ // Insert high surrogate
+ buffer[count] = rune(55296 + getCryptoRandomInt(128))
+ }
+ } else if ch >= 55296 && ch <= 56191 { // High surrogates range (Partial)
+ if count == 0 {
+ count++
+ } else {
+ // Insert low surrogate
+ buffer[count] = rune(56320 + getCryptoRandomInt(128))
+ count--
+ // Insert high surrogate
+ buffer[count] = ch
+ }
+ } else if ch >= 56192 && ch <= 56319 {
+ // private high surrogate, skip it
+ count++
+ } else {
+ // not one of the surrogates*
+ buffer[count] = ch
+ }
+ } else {
+ count++
+ }
+ }
+ return string(buffer), nil
+}
+
+func getCryptoRandomInt(count int) int64 {
+ nBig, err := rand.Int(rand.Reader, big.NewInt(int64(count)))
+ if err != nil {
+ panic(err)
+ }
+ return nBig.Int64()
+}
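
A short usage sketch of the crypto-backed helpers above; the lengths are arbitrary choices:

    package main

    import (
    	"fmt"

    	"github.com/Masterminds/goutils"
    )

    func main() {
    	// 16-character alphanumeric token drawn from crypto/rand.
    	token, err := goutils.CryptoRandomAlphaNumeric(16)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(token)

    	// 10 printable-ASCII characters (codes 32-126).
    	ascii, err := goutils.CryptoRandomAscii(10)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ascii)
    }
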
diff --git a/vendor/github.com/Masterminds/goutils/randomstringutils.go b/vendor/github.com/Masterminds/goutils/randomstringutils.go
new file mode 100644
index 000000000..272670231
--- /dev/null
+++ b/vendor/github.com/Masterminds/goutils/randomstringutils.go
@@ -0,0 +1,248 @@
+/*
+Copyright 2014 Alexander Okoli
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package goutils
+
+import (
+ "fmt"
+ "math"
+ "math/rand"
+ "time"
+ "unicode"
+)
+
+// RANDOM provides the time-based seed used to generate random numbers
+var RANDOM = rand.New(rand.NewSource(time.Now().UnixNano()))
+
+/*
+RandomNonAlphaNumeric creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of all characters (ASCII/Unicode values between 0 and 2,147,483,647 (math.MaxInt32)).
+
+Parameter:
+ count - the length of random string to create
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
+*/
+func RandomNonAlphaNumeric(count int) (string, error) {
+ return RandomAlphaNumericCustom(count, false, false)
+}
+
+/*
+RandomAscii creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of characters whose ASCII value is between 32 and 126 (inclusive).
+
+Parameter:
+ count - the length of random string to create
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
+*/
+func RandomAscii(count int) (string, error) {
+ return Random(count, 32, 127, false, false)
+}
+
+/*
+RandomNumeric creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of numeric characters.
+
+Parameter:
+ count - the length of random string to create
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
+*/
+func RandomNumeric(count int) (string, error) {
+ return Random(count, 0, 0, false, true)
+}
+
+/*
+RandomAlphabetic creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of alphabetic characters.
+
+Parameters:
+ count - the length of random string to create
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
+*/
+func RandomAlphabetic(count int) (string, error) {
+ return Random(count, 0, 0, true, false)
+}
+
+/*
+RandomAlphaNumeric creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of alpha-numeric characters.
+
+Parameter:
+ count - the length of random string to create
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
+*/
+func RandomAlphaNumeric(count int) (string, error) {
+ return Random(count, 0, 0, true, true)
+}
+
+/*
+RandomAlphaNumericCustom creates a random string whose length is the number of characters specified.
+Characters will be chosen from the set of alpha-numeric characters as indicated by the arguments.
+
+Parameters:
+ count - the length of random string to create
+ letters - if true, generated string may include alphabetic characters
+ numbers - if true, generated string may include numeric characters
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
+*/
+func RandomAlphaNumericCustom(count int, letters bool, numbers bool) (string, error) {
+ return Random(count, 0, 0, letters, numbers)
+}
+
+/*
+Random creates a random string based on a variety of options, using default source of randomness.
+This method has exactly the same semantics as RandomSeed(int, int, int, bool, bool, []rune, *rand.Rand), but
+instead of using an externally supplied source of randomness, it uses the internal *rand.Rand instance.
+
+Parameters:
+ count - the length of random string to create
+ start - the position in set of chars (ASCII/Unicode int) to start at
+ end - the position in set of chars (ASCII/Unicode int) to end before
+ letters - if true, generated string may include alphabetic characters
+ numbers - if true, generated string may include numeric characters
+ chars - the set of chars to choose random characters from. If nil, the set of all chars is used.
+
+Returns:
+ string - the random string
+ error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
+*/
+func Random(count int, start int, end int, letters bool, numbers bool, chars ...rune) (string, error) {
+ return RandomSeed(count, start, end, letters, numbers, chars, RANDOM)
+}
+
+/*
+RandomSeed creates a random string based on a variety of options, using supplied source of randomness.
+If the parameters start and end are both 0, they default to ' ' and 'z' respectively, meaning the ASCII printable characters will be used,
+unless letters and numbers are both false, in which case, start and end are set to 0 and math.MaxInt32, respectively.
+If chars is not nil, characters stored in chars that are between start and end are chosen.
+This method accepts a user-supplied *rand.Rand instance to use as a source of randomness. By seeding a single *rand.Rand instance
+with a fixed seed and using it for each call, the same random sequence of strings can be generated repeatedly and predictably.
+
+Parameters:
+ count - the length of random string to create
+ start - the position in set of chars (ASCII/Unicode decimals) to start at
+ end - the position in set of chars (ASCII/Unicode decimals) to end before
+ letters - if true, generated string may include alphabetic characters
+ numbers - if true, generated string may include numeric characters
+ chars - the set of chars to choose random characters from. If nil, the set of all chars is used.
+ random - a source of randomness.
+
+Returns:
+ string - the random string
+ error - an error stemming from invalid parameters: if count < 0; or the provided chars array is empty; or end <= start; or end > len(chars)
+*/
+func RandomSeed(count int, start int, end int, letters bool, numbers bool, chars []rune, random *rand.Rand) (string, error) {
+
+ if count == 0 {
+ return "", nil
+ } else if count < 0 {
+ err := fmt.Errorf("randomstringutils illegal argument: Requested random string length %v is less than 0.", count) // equiv to err := errors.New("...")
+ return "", err
+ }
+ if chars != nil && len(chars) == 0 {
+ err := fmt.Errorf("randomstringutils illegal argument: The chars array must not be empty")
+ return "", err
+ }
+
+ if start == 0 && end == 0 {
+ if chars != nil {
+ end = len(chars)
+ } else {
+ if !letters && !numbers {
+ end = math.MaxInt32
+ } else {
+ end = 'z' + 1
+ start = ' '
+ }
+ }
+ } else {
+ if end <= start {
+ err := fmt.Errorf("randomstringutils illegal argument: Parameter end (%v) must be greater than start (%v)", end, start)
+ return "", err
+ }
+
+ if chars != nil && end > len(chars) {
+ err := fmt.Errorf("randomstringutils illegal argument: Parameter end (%v) cannot be greater than len(chars) (%v)", end, len(chars))
+ return "", err
+ }
+ }
+
+ buffer := make([]rune, count)
+ gap := end - start
+
+ // high-surrogates range, (\uD800-\uDBFF) = 55296 - 56319
+ // low-surrogates range, (\uDC00-\uDFFF) = 56320 - 57343
+
+ for count != 0 {
+ count--
+ var ch rune
+ if chars == nil {
+ ch = rune(random.Intn(gap) + start)
+ } else {
+ ch = chars[random.Intn(gap)+start]
+ }
+
+ if letters && unicode.IsLetter(ch) || numbers && unicode.IsDigit(ch) || !letters && !numbers {
+ if ch >= 56320 && ch <= 57343 { // low surrogate range
+ if count == 0 {
+ count++
+ } else {
+ // Insert low surrogate
+ buffer[count] = ch
+ count--
+ // Insert high surrogate
+ buffer[count] = rune(55296 + random.Intn(128))
+ }
+ } else if ch >= 55296 && ch <= 56191 { // High surrogates range (Partial)
+ if count == 0 {
+ count++
+ } else {
+ // Insert low surrogate
+ buffer[count] = rune(56320 + random.Intn(128))
+ count--
+ // Insert high surrogate
+ buffer[count] = ch
+ }
+ } else if ch >= 56192 && ch <= 56319 {
+ // private high surrogate, skip it
+ count++
+ } else {
+ // not one of the surrogates*
+ buffer[count] = ch
+ }
+ } else {
+ count++
+ }
+ }
+ return string(buffer), nil
+}
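
As the RandomSeed documentation notes, supplying a fixed-seed *rand.Rand makes the output reproducible across runs. A minimal sketch; the seed 42 and length 8 are arbitrary:

    package main

    import (
    	"fmt"
    	"math/rand"

    	"github.com/Masterminds/goutils"
    )

    func main() {
    	r := rand.New(rand.NewSource(42))
    	// Prints the same 8-character alphanumeric string on every run,
    	// because the source of randomness is deterministic.
    	s, err := goutils.RandomSeed(8, 0, 0, true, true, nil, r)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(s)
    }
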
diff --git a/vendor/github.com/Masterminds/goutils/stringutils.go b/vendor/github.com/Masterminds/goutils/stringutils.go
new file mode 100644
index 000000000..741bb530e
--- /dev/null
+++ b/vendor/github.com/Masterminds/goutils/stringutils.go
@@ -0,0 +1,240 @@
+/*
+Copyright 2014 Alexander Okoli
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package goutils
+
+import (
+ "bytes"
+ "fmt"
+ "strings"
+ "unicode"
+)
+
+// Typically returned by functions where a searched item cannot be found
+const INDEX_NOT_FOUND = -1
+
+/*
+Abbreviate abbreviates a string using ellipses. This will turn the string "Now is the time for all good men" into "Now is the time for..."
+
+Specifically, the algorithm is as follows:
+
+ - If str is no more than maxWidth characters long, return it.
+ - Else abbreviate it to (str[0:maxWidth - 3] + "...").
+ - If maxWidth is less than 4, return an illegal argument error.
+ - In no case will it return a string of length greater than maxWidth.
+
+Parameters:
+ str - the string to check
+ maxWidth - maximum length of result string, must be at least 4
+
+Returns:
+ string - abbreviated string
+ error - if the width is too small
+*/
+func Abbreviate(str string, maxWidth int) (string, error) {
+ return AbbreviateFull(str, 0, maxWidth)
+}
+
+/*
+AbbreviateFull abbreviates a string using ellipses. This will turn the string "Now is the time for all good men" into "...is the time for..."
+This function works like Abbreviate(string, int), but allows you to specify a "left edge" offset. Note that this left edge is not
+necessarily going to be the leftmost character in the result, or the first character following the ellipses, but it will appear
+somewhere in the result.
+In no case will it return a string of length greater than maxWidth.
+
+Parameters:
+ str - the string to check
+ offset - left edge of source string
+ maxWidth - maximum length of result string, must be at least 4
+
+Returns:
+ string - abbreviated string
+ error - if the width is too small
+*/
+func AbbreviateFull(str string, offset int, maxWidth int) (string, error) {
+ if str == "" {
+ return "", nil
+ }
+ if maxWidth < 4 {
+ err := fmt.Errorf("stringutils illegal argument: Minimum abbreviation width is 4")
+ return "", err
+ }
+ if len(str) <= maxWidth {
+ return str, nil
+ }
+ if offset > len(str) {
+ offset = len(str)
+ }
+ if len(str)-offset < (maxWidth - 3) { // 15 - 5 < 10 - 3 = 10 < 7
+ offset = len(str) - (maxWidth - 3)
+ }
+ abrevMarker := "..."
+ if offset <= 4 {
+ return str[0:maxWidth-3] + abrevMarker, nil // str.substring(0, maxWidth - 3) + abrevMarker;
+ }
+ if maxWidth < 7 {
+ err := fmt.Errorf("stringutils illegal argument: Minimum abbreviation width with offset is 7")
+ return "", err
+ }
+ if (offset + maxWidth - 3) < len(str) { // 5 + (10-3) < 15 = 12 < 15
+ abrevStr, _ := Abbreviate(str[offset:len(str)], (maxWidth - 3))
+ return abrevMarker + abrevStr, nil // abrevMarker + abbreviate(str.substring(offset), maxWidth - 3);
+ }
+ return abrevMarker + str[(len(str)-(maxWidth-3)):len(str)], nil // abrevMarker + str.substring(str.length() - (maxWidth - 3));
+}
+
+/*
+DeleteWhiteSpace deletes all whitespaces from a string as defined by unicode.IsSpace(rune).
+It returns the string without whitespaces.
+
+Parameter:
+ str - the string to delete whitespace from, may be nil
+
+Returns:
+ the string without whitespaces
+*/
+func DeleteWhiteSpace(str string) string {
+ if str == "" {
+ return str
+ }
+ sz := len(str)
+ var chs bytes.Buffer
+ count := 0
+ for i := 0; i < sz; i++ {
+ ch := rune(str[i])
+ if !unicode.IsSpace(ch) {
+ chs.WriteRune(ch)
+ count++
+ }
+ }
+ if count == sz {
+ return str
+ }
+ return chs.String()
+}
+
+/*
+IndexOfDifference compares two strings, and returns the index at which the strings begin to differ.
+
+Parameters:
+ str1 - the first string
+ str2 - the second string
+
+Returns:
+ the index where str1 and str2 begin to differ; -1 if they are equal
+*/
+func IndexOfDifference(str1 string, str2 string) int {
+ if str1 == str2 {
+ return INDEX_NOT_FOUND
+ }
+ if IsEmpty(str1) || IsEmpty(str2) {
+ return 0
+ }
+ var i int
+ for i = 0; i < len(str1) && i < len(str2); i++ {
+ if rune(str1[i]) != rune(str2[i]) {
+ break
+ }
+ }
+ if i < len(str2) || i < len(str1) {
+ return i
+ }
+ return INDEX_NOT_FOUND
+}
+
+/*
+IsBlank checks if a string is whitespace or empty (""). Observe the following behavior:
+
+ goutils.IsBlank("") = true
+ goutils.IsBlank(" ") = true
+ goutils.IsBlank("bob") = false
+ goutils.IsBlank(" bob ") = false
+
+Parameter:
+ str - the string to check
+
+Returns:
+ true - if the string is whitespace or empty ("")
+*/
+func IsBlank(str string) bool {
+ strLen := len(str)
+ if strLen == 0 {
+ return true
+ }
+ for i := 0; i < strLen; i++ {
+ if !unicode.IsSpace(rune(str[i])) {
+ return false
+ }
+ }
+ return true
+}
+
+/*
+IndexOf returns the index of the first instance of sub in str, with the search beginning from the
+index start point specified. -1 is returned if sub is not present in str.
+
+An empty string ("") will return -1 (INDEX_NOT_FOUND). A negative start position is treated as zero.
+A start position greater than the string length returns -1.
+
+Parameters:
+ str - the string to check
+ sub - the substring to find
+ start - the start position; negative treated as zero
+
+Returns:
+ the first index where the sub string was found (always >= start)
+*/
+func IndexOf(str string, sub string, start int) int {
+
+ if start < 0 {
+ start = 0
+ }
+
+ if len(str) < start {
+ return INDEX_NOT_FOUND
+ }
+
+ if IsEmpty(str) || IsEmpty(sub) {
+ return INDEX_NOT_FOUND
+ }
+
+ partialIndex := strings.Index(str[start:len(str)], sub)
+ if partialIndex == -1 {
+ return INDEX_NOT_FOUND
+ }
+ return partialIndex + start
+}
+
+// IsEmpty checks if a string is empty (""). Returns true if empty, and false otherwise.
+func IsEmpty(str string) bool {
+ return len(str) == 0
+}
+
+// DefaultString returns either the passed in string, or, if the string is empty, the value of defaultStr.
+func DefaultString(str string, defaultStr string) string {
+ if IsEmpty(str) {
+ return defaultStr
+ }
+ return str
+}
+
+// DefaultIfBlank returns either the passed in string, or, if the string is whitespace or empty (""), the value of defaultStr.
+func DefaultIfBlank(str string, defaultStr string) string {
+ if IsBlank(str) {
+ return defaultStr
+ }
+ return str
+}
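
A quick tour of the helpers above; the inputs are illustrative:

    package main

    import (
    	"fmt"

    	"github.com/Masterminds/goutils"
    )

    func main() {
    	s, err := goutils.Abbreviate("Now is the time for all good men", 20)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(s) // "Now is the time f..."

    	fmt.Println(goutils.IsBlank("   "))                  // true
    	fmt.Println(goutils.IndexOf("abcabc", "b", 2))       // 4
    	fmt.Println(goutils.DefaultIfBlank(" ", "fallback")) // "fallback"
    }
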
diff --git a/vendor/github.com/Masterminds/goutils/wordutils.go b/vendor/github.com/Masterminds/goutils/wordutils.go
new file mode 100644
index 000000000..034cad8e2
--- /dev/null
+++ b/vendor/github.com/Masterminds/goutils/wordutils.go
@@ -0,0 +1,357 @@
+/*
+Copyright 2014 Alexander Okoli
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+/*
+Package goutils provides utility functions to manipulate strings in various ways.
+The code snippets below show examples of how to use goutils. Some functions return
+errors while others do not, so usage would vary as a result.
+
+Example:
+
+ package main
+
+ import (
+ "fmt"
+ "github.com/aokoli/goutils"
+ )
+
+ func main() {
+
+ // EXAMPLE 1: A goutils function which returns no errors
+ fmt.Println(goutils.Initials("John Doe Foo")) // Prints out "JDF"
+
+
+
+ // EXAMPLE 2: A goutils function which returns an error
+ rand1, err1 := goutils.Random(-1, 0, 0, true, true)
+
+ if err1 != nil {
+ fmt.Println(err1) // Prints out error message because -1 was entered as the first parameter in goutils.Random(...)
+ } else {
+ fmt.Println(rand1)
+ }
+ }
+*/
+package goutils
+
+import (
+ "bytes"
+ "strings"
+ "unicode"
+)
+
+// VERSION indicates the current version of goutils
+const VERSION = "1.0.0"
+
+/*
+Wrap wraps a single line of text, identifying words by ' '.
+New lines will be separated by '\n'. Very long words, such as URLs, will not be wrapped.
+Leading spaces on a new line are stripped. Trailing spaces are not stripped.
+
+Parameters:
+ str - the string to be word wrapped
+ wrapLength - the column (a column can fit only one character) to wrap the words at; less than 1 is treated as 1
+
+Returns:
+ a line with newlines inserted
+*/
+func Wrap(str string, wrapLength int) string {
+ return WrapCustom(str, wrapLength, "", false)
+}
+
+/*
+WrapCustom wraps a single line of text, identifying words by ' '.
+Leading spaces on a new line are stripped. Trailing spaces are not stripped.
+
+Parameters:
+ str - the string to be word wrapped
+ wrapLength - the column number (a column can fit only one character) to wrap the words at; less than 1 is treated as 1
+ newLineStr - the string to insert for a new line, "" uses '\n'
+ wrapLongWords - true if long words (such as URLs) should be wrapped
+
+Returns:
+ a line with newlines inserted
+*/
+func WrapCustom(str string, wrapLength int, newLineStr string, wrapLongWords bool) string {
+
+ if str == "" {
+ return ""
+ }
+ if newLineStr == "" {
+ newLineStr = "\n" // TODO Assumes "\n" is the separator. Explore SystemUtils.LINE_SEPARATOR from Apache Commons
+ }
+ if wrapLength < 1 {
+ wrapLength = 1
+ }
+
+ inputLineLength := len(str)
+ offset := 0
+
+ var wrappedLine bytes.Buffer
+
+ for inputLineLength-offset > wrapLength {
+
+ if rune(str[offset]) == ' ' {
+ offset++
+ continue
+ }
+
+ end := wrapLength + offset + 1
+ spaceToWrapAt := strings.LastIndex(str[offset:end], " ") + offset
+
+ if spaceToWrapAt >= offset {
+ // normal word (not longer than wrapLength)
+ wrappedLine.WriteString(str[offset:spaceToWrapAt])
+ wrappedLine.WriteString(newLineStr)
+ offset = spaceToWrapAt + 1
+
+ } else {
+ // long word or URL
+ if wrapLongWords {
+ end := wrapLength + offset
+ // long words are wrapped one line at a time
+ wrappedLine.WriteString(str[offset:end])
+ wrappedLine.WriteString(newLineStr)
+ offset += wrapLength
+ } else {
+ // long words aren't wrapped, just extended beyond limit
+ end := wrapLength + offset
+ index := strings.IndexRune(str[end:len(str)], ' ')
+ if index == -1 {
+ wrappedLine.WriteString(str[offset:len(str)])
+ offset = inputLineLength
+ } else {
+ spaceToWrapAt = index + end
+ wrappedLine.WriteString(str[offset:spaceToWrapAt])
+ wrappedLine.WriteString(newLineStr)
+ offset = spaceToWrapAt + 1
+ }
+ }
+ }
+ }
+
+ wrappedLine.WriteString(str[offset:len(str)])
+
+ return wrappedLine.String()
+
+}
+
+/*
+Capitalize capitalizes all the delimiter separated words in a string. Only the first letter of each word is changed.
+To convert the rest of each word to lowercase at the same time, use CapitalizeFully(str string, delimiters ...rune).
+The delimiters represent a set of characters understood to separate words. The first string character
+and the first non-delimiter character after a delimiter will be capitalized. A "" input string returns "".
+Capitalization uses the Unicode title case, normally equivalent to upper case.
+
+Parameters:
+ str - the string to capitalize
+ delimiters - set of characters to determine capitalization; if omitted, whitespace is used as the delimiter
+
+Returns:
+ capitalized string
+*/
+func Capitalize(str string, delimiters ...rune) string {
+
+ var delimLen int
+
+ if delimiters == nil {
+ delimLen = -1
+ } else {
+ delimLen = len(delimiters)
+ }
+
+ if str == "" || delimLen == 0 {
+ return str
+ }
+
+ buffer := []rune(str)
+ capitalizeNext := true
+ for i := 0; i < len(buffer); i++ {
+ ch := buffer[i]
+ if isDelimiter(ch, delimiters...) {
+ capitalizeNext = true
+ } else if capitalizeNext {
+ buffer[i] = unicode.ToTitle(ch)
+ capitalizeNext = false
+ }
+ }
+ return string(buffer)
+
+}
+
+/*
+CapitalizeFully converts all the delimiter separated words in a string into capitalized words, that is each word is made up of a
+titlecase character and then a series of lowercase characters. The delimiters represent a set of characters understood
+to separate words. The first string character and the first non-delimiter character after a delimiter will be capitalized.
+Capitalization uses the Unicode title case, normally equivalent to upper case.
+
+Parameters:
+ str - the string to capitalize fully
+ delimiters - set of characters to determine capitalization; if omitted, whitespace is used as the delimiter
+
+Returns:
+ capitalized string
+*/
+func CapitalizeFully(str string, delimiters ...rune) string {
+
+ var delimLen int
+
+ if delimiters == nil {
+ delimLen = -1
+ } else {
+ delimLen = len(delimiters)
+ }
+
+ if str == "" || delimLen == 0 {
+ return str
+ }
+ str = strings.ToLower(str)
+ return Capitalize(str, delimiters...)
+}
+
+/*
+Uncapitalize uncapitalizes all the whitespace separated words in a string. Only the first letter of each word is changed.
+The delimiters represent a set of characters understood to separate words. The first string character and the first non-delimiter
+character after a delimiter will be uncapitalized. Whitespace is defined by unicode.IsSpace(char).
+
+Parameters:
+ str - the string to uncapitalize fully
+ delimiters - set of characters to determine capitalization; if omitted, whitespace is used as the delimiter
+
+Returns:
+ uncapitalized string
+*/
+func Uncapitalize(str string, delimiters ...rune) string {
+
+ var delimLen int
+
+ if delimiters == nil {
+ delimLen = -1
+ } else {
+ delimLen = len(delimiters)
+ }
+
+ if str == "" || delimLen == 0 {
+ return str
+ }
+
+ buffer := []rune(str)
+ uncapitalizeNext := true // TODO Always makes capitalize/un apply to first char.
+ for i := 0; i < len(buffer); i++ {
+ ch := buffer[i]
+ if isDelimiter(ch, delimiters...) {
+ uncapitalizeNext = true
+ } else if uncapitalizeNext {
+ buffer[i] = unicode.ToLower(ch)
+ uncapitalizeNext = false
+ }
+ }
+ return string(buffer)
+}
+
+/*
+SwapCase swaps the case of a string using a word-based algorithm.
+
+Conversion algorithm:
+
+ Upper case character converts to Lower case
+ Title case character converts to Lower case
+ Lower case character after Whitespace or at start converts to Title case
+ Other Lower case character converts to Upper case
+ Whitespace is defined by unicode.IsSpace(char).
+
+Parameters:
+ str - the string to swap case
+
+Returns:
+ the changed string
+*/
+func SwapCase(str string) string {
+ if str == "" {
+ return str
+ }
+ buffer := []rune(str)
+
+ whitespace := true
+
+ for i := 0; i < len(buffer); i++ {
+ ch := buffer[i]
+ if unicode.IsUpper(ch) {
+ buffer[i] = unicode.ToLower(ch)
+ whitespace = false
+ } else if unicode.IsTitle(ch) {
+ buffer[i] = unicode.ToLower(ch)
+ whitespace = false
+ } else if unicode.IsLower(ch) {
+ if whitespace {
+ buffer[i] = unicode.ToTitle(ch)
+ whitespace = false
+ } else {
+ buffer[i] = unicode.ToUpper(ch)
+ }
+ } else {
+ whitespace = unicode.IsSpace(ch)
+ }
+ }
+ return string(buffer)
+}
+
+/*
+Initials extracts the initial letters from each word in the string. The first letter of the string and all first
+letters after the defined delimiters are returned as a new string. Their case is not changed. If the delimiters
+parameter is excluded, then whitespace is used. Whitespace is defined by unicode.IsSpace(char). An empty delimiter array returns an empty string.
+
+Parameters:
+ str - the string to get initials from
+ delimiters - set of characters to determine words; if omitted, whitespace is used as the delimiter
+Returns:
+ string of initial letters
+*/
+func Initials(str string, delimiters ...rune) string {
+ if str == "" {
+ return str
+ }
+ if delimiters != nil && len(delimiters) == 0 {
+ return ""
+ }
+ strLen := len(str)
+ var buf bytes.Buffer
+ lastWasGap := true
+ for i := 0; i < strLen; i++ {
+ ch := rune(str[i])
+
+ if isDelimiter(ch, delimiters...) {
+ lastWasGap = true
+ } else if lastWasGap {
+ buf.WriteRune(ch)
+ lastWasGap = false
+ }
+ }
+ return buf.String()
+}
+
+// private function (lower case func name)
+func isDelimiter(ch rune, delimiters ...rune) bool {
+ if delimiters == nil {
+ return unicode.IsSpace(ch)
+ }
+ for _, delimiter := range delimiters {
+ if ch == delimiter {
+ return true
+ }
+ }
+ return false
+}
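
A short sketch exercising the word-oriented helpers above; the inputs and the wrap width of 15 are illustrative:

    package main

    import (
    	"fmt"

    	"github.com/Masterminds/goutils"
    )

    func main() {
    	fmt.Println(goutils.Wrap("The quick brown fox jumps over the lazy dog", 15))
    	// The quick brown
    	// fox jumps over
    	// the lazy dog

    	fmt.Println(goutils.Capitalize("hello world")) // "Hello World"
    	fmt.Println(goutils.SwapCase("Hello World"))   // "hELLO wORLD"
    	fmt.Println(goutils.Initials("John Doe Foo"))  // "JDF"
    }
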
diff --git a/vendor/github.com/Masterminds/semver/v3/.gitignore b/vendor/github.com/Masterminds/semver/v3/.gitignore
new file mode 100644
index 000000000..6b061e617
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/.gitignore
@@ -0,0 +1 @@
+_fuzz/
\ No newline at end of file
diff --git a/vendor/github.com/Masterminds/semver/v3/.golangci.yml b/vendor/github.com/Masterminds/semver/v3/.golangci.yml
new file mode 100644
index 000000000..fbc633259
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/.golangci.yml
@@ -0,0 +1,27 @@
+run:
+ deadline: 2m
+
+linters:
+ disable-all: true
+ enable:
+ - misspell
+ - govet
+ - staticcheck
+ - errcheck
+ - unparam
+ - ineffassign
+ - nakedret
+ - gocyclo
+ - dupl
+ - goimports
+ - revive
+ - gosec
+ - gosimple
+ - typecheck
+ - unused
+
+linters-settings:
+ gofmt:
+ simplify: true
+ dupl:
+ threshold: 600
diff --git a/vendor/github.com/Masterminds/semver/v3/CHANGELOG.md b/vendor/github.com/Masterminds/semver/v3/CHANGELOG.md
new file mode 100644
index 000000000..f95a504fe
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/CHANGELOG.md
@@ -0,0 +1,242 @@
+# Changelog
+
+## 3.3.0 (2024-08-27)
+
+### Added
+
+- #238: Add LessThanEqual and GreaterThanEqual functions (thanks @grosser)
+- #213: nil version equality checking (thanks @KnutZuidema)
+
+### Changed
+
+- #241: Simplify StrictNewVersion parsing (thanks @grosser)
+- Testing support up through Go 1.23
+- Minimum version set to 1.21 as this is what's tested now
+- Fuzz testing now supports caching
+
+## 3.2.1 (2023-04-10)
+
+### Changed
+
+- #198: Improved testing around pre-release names
+- #200: Improved code scanning with addition of CodeQL
+- #201: Testing now includes Go 1.20. Go 1.17 has been dropped
+- #202: Migrated Fuzz testing to Go built-in Fuzzing. CI runs daily
+- #203: Docs updated for security details
+
+### Fixed
+
+- #199: Fixed issue with range transformations
+
+## 3.2.0 (2022-11-28)
+
+### Added
+
+- #190: Added text marshaling and unmarshaling
+- #167: Added JSON marshalling for constraints (thanks @SimonTheLeg)
+- #173: Implement encoding.TextMarshaler and encoding.TextUnmarshaler on Version (thanks @MarkRosemaker)
+- #179: Added New() version constructor (thanks @kazhuravlev)
+
+### Changed
+
+- #182/#183: Updated CI testing setup
+
+### Fixed
+
+- #186: Fixing issue where validation of constraint section gave false positives
+- #176: Fix constraints check with *-0 (thanks @mtt0)
+- #181: Fixed Caret operator (^) gives unexpected results when the minor version in constraint is 0 (thanks @arshchimni)
+- #161: Fixed godoc (thanks @afirth)
+
+## 3.1.1 (2020-11-23)
+
+### Fixed
+
+- #158: Fixed issue with generated regex operation order that could cause problem
+
+## 3.1.0 (2020-04-15)
+
+### Added
+
+- #131: Add support for serializing/deserializing SQL (thanks @ryancurrah)
+
+### Changed
+
+- #148: More accurate validation messages on constraints
+
+## 3.0.3 (2019-12-13)
+
+### Fixed
+
+- #141: Fixed issue with <= comparison
+
+## 3.0.2 (2019-11-14)
+
+### Fixed
+
+- #134: Fixed broken constraint checking with ^0.0 (thanks @krmichelos)
+
+## 3.0.1 (2019-09-13)
+
+### Fixed
+
+- #125: Fixes issue with module path for v3
+
+## 3.0.0 (2019-09-12)
+
+This is a major release of the semver package which includes API changes. The Go
+API is compatible with ^1. The Go API was not changed because many people are using
+`go get` without Go modules for their applications, and API-breaking changes would cause
+errors that we would then have to support.
+
+The changes in this release are the handling based on the data passed into the
+functions. These are described in the added and changed sections below.
+
+### Added
+
+- StrictNewVersion function. This is similar to NewVersion but will return an
+ error if the version passed in is not a strict semantic version. For example,
+ 1.2.3 would pass but v1.2.3 or 1.2 would fail because they are not strictly
+ speaking semantic versions. This function is faster, performs fewer operations,
+ and uses fewer allocations than NewVersion.
+- Fuzzing has been performed on NewVersion, StrictNewVersion, and NewConstraint.
+ The Makefile contains the operations used. For more information you can start
+ with the Wikipedia article at https://en.wikipedia.org/wiki/Fuzzing
+- Now using Go modules
+
+### Changed
+
+- NewVersion has proper prerelease and metadata validation with error messages
+ to signal an issue with either of them
+- ^ now operates using a similar set of rules to npm/js and Rust/Cargo. If the
+ version is >=1, a ^ range works the same as in v1. For a major version of 0
+ the rules have changed. The minor version is treated as the stable version
+ unless a patch is specified, in which case ^ is equivalent to =. One
+ difference from npm/js is that there a prerelease applies only to a specific
+ version (e.g., 1.2.3). Prereleases here span multiple versions and follow
+ semantic version ordering rules. This pattern now follows the handling of
+ this package that numerous users expected and requested; a short sketch of
+ the constraint API follows below.
+
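A minimal sketch of the caret behavior described above, using the package's public v3 API (NewConstraint, NewVersion, Check); the specific versions checked are illustrative:

    package main

    import (
    	"fmt"

    	"github.com/Masterminds/semver/v3"
    )

    func main() {
    	// Under v3 rules, ^0.2.3 means >=0.2.3 <0.3.0: with a major version
    	// of 0, the minor version acts as the stable boundary.
    	c, err := semver.NewConstraint("^0.2.3")
    	if err != nil {
    		panic(err)
    	}
    	for _, s := range []string{"0.2.4", "0.3.0", "1.0.0"} {
    		v, err := semver.NewVersion(s)
    		if err != nil {
    			panic(err)
    		}
    		fmt.Println(s, c.Check(v)) // 0.2.4 true, 0.3.0 false, 1.0.0 false
    	}
    }
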
+## 1.5.0 (2019-09-11)
+
+### Added
+
+- #103: Add basic fuzzing for `NewVersion()` (thanks @jesse-c)
+
+### Changed
+
+- #82: Clarify wildcard meaning in range constraints and update tests for it (thanks @greysteil)
+- #83: Clarify caret operator range for pre-1.0.0 dependencies (thanks @greysteil)
+- #72: Adding docs comment pointing to vert for a cli
+- #71: Update the docs on pre-release comparator handling
+- #89: Test with new go versions (thanks @thedevsaddam)
+- #87: Added $ to ValidPrerelease for better validation (thanks @jeremycarroll)
+
+### Fixed
+
+- #78: Fix unchecked error in example code (thanks @ravron)
+- #70: Fix the handling of pre-releases and the 0.0.0 release edge case
+- #97: Fixed copyright file for proper display on GitHub
+- #107: Fix handling prerelease when sorting alphanum and num
+- #109: Fixed where Validate sometimes returns wrong message on error
+
+## 1.4.2 (2018-04-10)
+
+### Changed
+
+- #72: Updated the docs to point to vert for a console application
+- #71: Update the docs on pre-release comparator handling
+
+### Fixed
+
+- #70: Fix the handling of pre-releases and the 0.0.0 release edge case
+
+## 1.4.1 (2018-04-02)
+
+### Fixed
+
+- Fixed #64: Fix pre-release precedence issue (thanks @uudashr)
+
+## 1.4.0 (2017-10-04)
+
+### Changed
+
+- #61: Update NewVersion to parse ints with a 64bit int size (thanks @zknill)
+
+## 1.3.1 (2017-07-10)
+
+### Fixed
+
+- Fixed #57: number comparisons in prerelease sometimes inaccurate
+
+## 1.3.0 (2017-05-02)
+
+### Added
+
+- #45: Added json (un)marshaling support (thanks @mh-cbon)
+- Stability marker. See https://masterminds.github.io/stability/
+
+### Fixed
+
+- #51: Fix handling of single digit tilde constraint (thanks @dgodd)
+
+### Changed
+
+- #55: The godoc icon moved from png to svg
+
+## 1.2.3 (2017-04-03)
+
+### Fixed
+
+- #46: Fixed 0.x.x and 0.0.x in constraints being treated as *
+
+## Release 1.2.2 (2016-12-13)
+
+### Fixed
+
+- #34: Fixed issue where hyphen range was not working with pre-release parsing.
+
+## Release 1.2.1 (2016-11-28)
+
+### Fixed
+
+- #24: Fixed edge case issue where constraint "> 0" does not handle "0.0.1-alpha"
+ properly.
+
+## Release 1.2.0 (2016-11-04)
+
+### Added
+
+- #20: Added MustParse function for versions (thanks @adamreese)
+- #15: Added increment methods on versions (thanks @mh-cbon)
+
+### Fixed
+
+- Issue #21: Per the SemVer spec (section 9) a pre-release is unstable and
+ might not satisfy the intended compatibility. The change here ignores pre-releases
+ on constraint checks (e.g., ~ or ^) when a pre-release is not part of the
+ constraint. For example, `^1.2.3` will ignore pre-releases while
+ `^1.2.3-alpha` will include them.
+
+## Release 1.1.1 (2016-06-30)
+
+### Changed
+
+- Issue #9: Speed up version comparison performance (thanks @sdboyer)
+- Issue #8: Added benchmarks (thanks @sdboyer)
+- Updated Go Report Card URL to new location
+- Updated Readme to add code snippet formatting (thanks @mh-cbon)
+- Updating tagging to v[SemVer] structure for compatibility with other tools.
+
+## Release 1.1.0 (2016-03-11)
+
+- Issue #2: Implemented validation to provide reasons a version failed a
+ constraint.
+
+## Release 1.0.1 (2015-12-31)
+
+- Fixed #1: * constraint failing on valid versions.
+
+## Release 1.0.0 (2015-10-20)
+
+- Initial release
diff --git a/vendor/github.com/Masterminds/semver/v3/LICENSE.txt b/vendor/github.com/Masterminds/semver/v3/LICENSE.txt
new file mode 100644
index 000000000..9ff7da9c4
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/LICENSE.txt
@@ -0,0 +1,19 @@
+Copyright (C) 2014-2019, Matt Butcher and Matt Farina
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/github.com/Masterminds/semver/v3/Makefile b/vendor/github.com/Masterminds/semver/v3/Makefile
new file mode 100644
index 000000000..9ca87a2c7
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/Makefile
@@ -0,0 +1,31 @@
+GOPATH=$(shell go env GOPATH)
+GOLANGCI_LINT=$(GOPATH)/bin/golangci-lint
+
+.PHONY: lint
+lint: $(GOLANGCI_LINT)
+ @echo "==> Linting codebase"
+ @$(GOLANGCI_LINT) run
+
+.PHONY: test
+test:
+ @echo "==> Running tests"
+ GO111MODULE=on go test -v
+
+.PHONY: test-cover
+test-cover:
+ @echo "==> Running tests with coverage"
+ GO111MODULE=on go test -cover .
+
+.PHONY: fuzz
+fuzz:
+ @echo "==> Running Fuzz Tests"
+ go env GOCACHE
+ go test -fuzz=FuzzNewVersion -fuzztime=15s .
+ go test -fuzz=FuzzStrictNewVersion -fuzztime=15s .
+ go test -fuzz=FuzzNewConstraint -fuzztime=15s .
+
+$(GOLANGCI_LINT):
+ # Install golangci-lint. The configuration for it is in the .golangci.yml
+ # file in the root of the repository
+ echo ${GOPATH}
+ curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(GOPATH)/bin v1.56.2
diff --git a/vendor/github.com/Masterminds/semver/v3/README.md b/vendor/github.com/Masterminds/semver/v3/README.md
new file mode 100644
index 000000000..ed5693608
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/README.md
@@ -0,0 +1,258 @@
+# SemVer
+
+The `semver` package provides the ability to work with [Semantic Versions](http://semver.org) in Go. Specifically it provides the ability to:
+
+* Parse semantic versions
+* Sort semantic versions
+* Check if a semantic version fits within a set of constraints
+* Optionally work with a `v` prefix
+
+[![Stability:
+Active](https://masterminds.github.io/stability/active.svg)](https://masterminds.github.io/stability/active.html)
+[![](https://github.com/Masterminds/semver/workflows/Tests/badge.svg)](https://github.com/Masterminds/semver/actions)
+[![GoDoc](https://img.shields.io/static/v1?label=godoc&message=reference&color=blue)](https://pkg.go.dev/github.com/Masterminds/semver/v3)
+[![Go Report Card](https://goreportcard.com/badge/github.com/Masterminds/semver)](https://goreportcard.com/report/github.com/Masterminds/semver)
+
+## Package Versions
+
+Note, import `github.com/Masterminds/semver/v3` to use the latest version.
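+
+For example:
+
+```go
+import "github.com/Masterminds/semver/v3"
+```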
+
+There are three major versions of the `semver` package.
+
+* 3.x.x is the stable and active version. This version is focused on constraint
+ compatibility for range handling in other tools from other languages. It has
+ a similar API to the v1 releases. The development of this version is on the master
+ branch. The documentation for this version is below.
+* 2.x was developed primarily for [dep](https://github.com/golang/dep). There are
+ no tagged releases and the development was performed by [@sdboyer](https://github.com/sdboyer).
+ There are API breaking changes from v1. This version lives on the [2.x branch](https://github.com/Masterminds/semver/tree/2.x).
+* 1.x.x is the original release. It is no longer maintained. You should use the
+ v3 release instead. You can read the documentation for the 1.x.x release
+ [here](https://github.com/Masterminds/semver/blob/release-1/README.md).
+
+## Parsing Semantic Versions
+
+There are two functions that can parse semantic versions. The `StrictNewVersion`
+function only parses valid version 2 semantic versions as outlined in the
+specification. The `NewVersion` function attempts to coerce a version into a
+semantic version and parse it. For example, if there is a leading v or a version
+listed without all 3 parts (e.g. `v1.2`) it will attempt to coerce it into a valid
+semantic version (e.g., 1.2.0). In both cases a `Version` object is returned
+that can be sorted, compared, and used in constraints.
+
+When parsing a version an error is returned if there is an issue parsing the
+version. For example,
+
+ v, err := semver.NewVersion("1.2.3-beta.1+build345")
+
+The version object has methods to get the parts of the version, compare it to
+other versions, convert the version back into a string, and get the original
+string. Getting the original string is useful if the semantic version was coerced
+into a valid form.
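+
+For example, a minimal sketch of the difference between the coerced and the
+original form:
+
+```go
+// NewVersion coerces "v1.2" into the semantic version 1.2.0.
+v, err := semver.NewVersion("v1.2")
+if err != nil {
+ // Handle version not being parsable.
+}
+
+fmt.Println(v.String())   // 1.2.0
+fmt.Println(v.Original()) // v1.2
+```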
+
+## Sorting Semantic Versions
+
+A set of versions can be sorted using the `sort` package from the standard library.
+For example,
+
+```go
+raw := []string{"1.2.3", "1.0", "1.3", "2", "0.4.2",}
+vs := make([]*semver.Version, len(raw))
+for i, r := range raw {
+ v, err := semver.NewVersion(r)
+ if err != nil {
+ t.Errorf("Error parsing version: %s", err)
+ }
+
+ vs[i] = v
+}
+
+sort.Sort(semver.Collection(vs))
+```
+
+## Checking Version Constraints
+
+There are two methods for comparing versions. One uses comparison methods on
+`Version` instances and the other uses `Constraints`. There are some important
+differences to note between these two methods of comparison.
+
+1. When two versions are compared using functions such as `Compare`, `LessThan`,
+ and others it will follow the specification and always include pre-releases
+ within the comparison. It will provide an answer that is consistent with the
+ comparison section of the spec at https://semver.org/#spec-item-11 (see the
+ example after this list).
+2. When constraint checking is used for checks or validation it will follow a
+ different set of rules that are common for ranges with tools like npm/js
+ and Rust/Cargo. This includes considering pre-releases to be invalid if the
+ range does not include one. If you want to have it include pre-releases a
+ simple solution is to include `-0` in your range.
+3. Constraint ranges can have some complex rules including the shorthand use of
+ ~ and ^. For more details on those see the options below.
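+
+A minimal sketch of the first method, comparing `Version` instances directly:
+
+```go
+v1 := semver.MustParse("1.2.3-beta.1")
+v2 := semver.MustParse("1.2.3")
+
+// Per the spec, a pre-release sorts before its associated release.
+fmt.Println(v1.LessThan(v2)) // true
+fmt.Println(v1.Compare(v2))  // -1
+```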
+
+There are differences between the two methods of checking versions because the
+comparison methods on `Version` follow the specification while comparison ranges
+are not part of the specification. Different packages and tools have taken it
+upon themselves to come up with range rules. This has resulted in differences.
+For example, npm/js and Cargo/Rust follow similar patterns while PHP has a
+different pattern for ^. The comparison features in this package follow the
+npm/js and Cargo/Rust lead because applications using it have followed similar
+patterns with their versions.
+
+Checking a version against version constraints is one of the most featureful
+parts of the package.
+
+```go
+c, err := semver.NewConstraint(">= 1.2.3")
+if err != nil {
+ // Handle constraint not being parsable.
+}
+
+v, err := semver.NewVersion("1.3")
+if err != nil {
+ // Handle version not being parsable.
+}
+// Check if the version meets the constraints. The variable a will be true.
+a := c.Check(v)
+```
+
+### Basic Comparisons
+
+There are two elements to the comparisons. First, a comparison string is a list
+of space or comma separated AND comparisons. These are then separated by || (OR)
+comparisons. For example, `">= 1.2 < 3.0.0 || >= 4.2.3"` is looking for a
+comparison that's greater than or equal to 1.2 and less than 3.0.0 or is
+greater than or equal to 4.2.3.
+
+The basic comparisons are:
+
+* `=`: equal (aliased to no operator)
+* `!=`: not equal
+* `>`: greater than
+* `<`: less than
+* `>=`: greater than or equal to
+* `<=`: less than or equal to
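+
+For example, a minimal sketch combining AND and OR comparisons:
+
+```go
+c, err := semver.NewConstraint(">= 1.2, < 3.0.0 || >= 4.2.3")
+if err != nil {
+ // Handle constraint not being parsable.
+}
+
+fmt.Println(c.Check(semver.MustParse("1.3.0"))) // true: within the first range
+fmt.Println(c.Check(semver.MustParse("3.1.0"))) // false: outside both ranges
+fmt.Println(c.Check(semver.MustParse("4.2.3"))) // true: matches the second range
+```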
+
+### Working With Prerelease Versions
+
+Pre-releases, for those not familiar with them, are used for software releases
+prior to stable or generally available releases. Examples of pre-releases include
+development, alpha, beta, and release candidate releases. A pre-release may be
+a version such as `1.2.3-beta.1` while the stable release would be `1.2.3`. In the
+order of precedence, pre-releases come before their associated releases. In this
+example `1.2.3-beta.1 < 1.2.3`.
+
+According to the Semantic Version specification, pre-releases may not be
+API compliant with their release counterpart. It says,
+
+> A pre-release version indicates that the version is unstable and might not satisfy the intended compatibility requirements as denoted by its associated normal version.
+
+SemVer's comparisons using constraints without a pre-release comparator will skip
+pre-release versions. For example, `>=1.2.3` will skip pre-releases when looking
+at a list of releases while `>=1.2.3-0` will evaluate and find pre-releases.
+
+The reason for the `0` as a pre-release version in the example comparison is
+that pre-releases can only contain ASCII alphanumerics and hyphens (along with
+`.` separators), per the spec. Sorting happens in ASCII sort order, again per the
+spec. The lowest character is a `0` in ASCII sort order
+(see an [ASCII Table](http://www.asciitable.com/)).
+
+Understanding ASCII sort ordering is important because A-Z comes before a-z. That
+means `>=1.2.3-BETA` will match `1.2.3-alpha`. What you might expect from case
+sensitivity doesn't apply here. This is due to ASCII sort ordering, which is what
+the spec specifies.
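+
+A minimal sketch of the `-0` technique described above:
+
+```go
+cStable, _ := semver.NewConstraint(">=1.2.3")
+cPre, _ := semver.NewConstraint(">=1.2.3-0")
+
+v := semver.MustParse("1.2.4-beta.1")
+
+fmt.Println(cStable.Check(v)) // false: pre-releases are skipped
+fmt.Println(cPre.Check(v))    // true: the -0 comparator opts in to pre-releases
+```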
+
+### Hyphen Range Comparisons
+
+There are multiple methods to handle ranges and the first is hyphen ranges.
+These look like:
+
+* `1.2 - 1.4.5` which is equivalent to `>= 1.2 <= 1.4.5`
+* `2.3.4 - 4.5` which is equivalent to `>= 2.3.4 <= 4.5`
+
+Note that `1.2-1.4.5` without whitespace is parsed completely differently; it's
+parsed as a single constraint `1.2.0` with _prerelease_ `1.4.5`.
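+
+For example, a minimal sketch of a hyphen range:
+
+```go
+c, err := semver.NewConstraint("1.2 - 1.4.5")
+if err != nil {
+ // Handle constraint not being parsable.
+}
+
+fmt.Println(c.Check(semver.MustParse("1.3.7"))) // true: within >= 1.2, <= 1.4.5
+fmt.Println(c.Check(semver.MustParse("1.5.0"))) // false: above the upper bound
+```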
+
+### Wildcards In Comparisons
+
+The `x`, `X`, and `*` characters can be used as wildcard characters. This works
+for all comparison operators. When used on the `=` operator it falls
+back to the patch level comparison (see tilde below). For example,
+
+* `1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`
+* `>= 1.2.x` is equivalent to `>= 1.2.0`
+* `<= 2.x` is equivalent to `< 3`
+* `*` is equivalent to `>= 0.0.0`
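+
+For example, a minimal sketch of a wildcard check:
+
+```go
+c, _ := semver.NewConstraint("1.2.x")
+
+fmt.Println(c.Check(semver.MustParse("1.2.9"))) // true:  >= 1.2.0, < 1.3.0
+fmt.Println(c.Check(semver.MustParse("1.3.0"))) // false: outside the patch range
+```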
+
+### Tilde Range Comparisons (Patch)
+
+The tilde (`~`) comparison operator is for patch level ranges when a minor
+version is specified and major level changes when the minor number is missing.
+For example,
+
+* `~1.2.3` is equivalent to `>= 1.2.3, < 1.3.0`
+* `~1` is equivalent to `>= 1, < 2`
+* `~2.3` is equivalent to `>= 2.3, < 2.4`
+* `~1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`
+* `~1.x` is equivalent to `>= 1, < 2`
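+
+A minimal sketch of a tilde range:
+
+```go
+c, _ := semver.NewConstraint("~2.3")
+
+fmt.Println(c.Check(semver.MustParse("2.3.9"))) // true:  >= 2.3, < 2.4
+fmt.Println(c.Check(semver.MustParse("2.4.0"))) // false: the minor bump escapes the range
+```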
+
+### Caret Range Comparisons (Major)
+
+The caret (`^`) comparison operator is for major level changes once a stable
+(1.0.0) release has occurred. Prior to a 1.0.0 release the minor version acts
+as the API stability level. This is useful for comparisons of API versions, as a
+major change is API breaking. For example,
+
+* `^1.2.3` is equivalent to `>= 1.2.3, < 2.0.0`
+* `^1.2.x` is equivalent to `>= 1.2.0, < 2.0.0`
+* `^2.3` is equivalent to `>= 2.3, < 3`
+* `^2.x` is equivalent to `>= 2.0.0, < 3`
+* `^0.2.3` is equivalent to `>=0.2.3 <0.3.0`
+* `^0.2` is equivalent to `>=0.2.0 <0.3.0`
+* `^0.0.3` is equivalent to `>=0.0.3 <0.0.4`
+* `^0.0` is equivalent to `>=0.0.0 <0.1.0`
+* `^0` is equivalent to `>=0.0.0 <1.0.0`
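+
+A minimal sketch of a pre-1.0.0 caret range:
+
+```go
+c, _ := semver.NewConstraint("^0.2.3")
+
+fmt.Println(c.Check(semver.MustParse("0.2.9"))) // true:  >= 0.2.3, < 0.3.0
+fmt.Println(c.Check(semver.MustParse("0.3.0"))) // false: before 1.0.0 the minor acts as major
+```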
+
+## Validation
+
+In addition to testing a version against a constraint, a version can be validated
+against a constraint. When validation fails a slice of errors containing why a
+version didn't meet the constraint is returned. For example,
+
+```go
+c, err := semver.NewConstraint("<= 1.2.3, >= 1.4")
+if err != nil {
+ // Handle constraint not being parseable.
+}
+
+v, err := semver.NewVersion("1.3")
+if err != nil {
+ // Handle version not being parseable.
+}
+
+// Validate a version against a constraint.
+a, msgs := c.Validate(v)
+// a is false
+for _, m := range msgs {
+ fmt.Println(m)
+
+ // Loops over the errors which would read
+ // "1.3 is greater than 1.2.3"
+ // "1.3 is less than 1.4"
+}
+```
+
+## Contribute
+
+If you find an issue or want to contribute please file an [issue](https://github.com/Masterminds/semver/issues)
+or [create a pull request](https://github.com/Masterminds/semver/pulls).
+
+## Security
+
+Security is an important consideration for this project. The project currently
+uses the following tools to help discover security issues:
+
+* [CodeQL](https://github.com/Masterminds/semver)
+* [gosec](https://github.com/securego/gosec)
+* Daily Fuzz testing
+
+If you believe you have found a security vulnerability you can privately disclose
+it through the [GitHub security page](https://github.com/Masterminds/semver/security).
diff --git a/vendor/github.com/Masterminds/semver/v3/SECURITY.md b/vendor/github.com/Masterminds/semver/v3/SECURITY.md
new file mode 100644
index 000000000..a30a66b1f
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/SECURITY.md
@@ -0,0 +1,19 @@
+# Security Policy
+
+## Supported Versions
+
+The following versions of semver are currently supported:
+
+| Version | Supported |
+| ------- | ------------------ |
+| 3.x | :white_check_mark: |
+| 2.x | :x: |
+| 1.x | :x: |
+
+Fixes are only released for the latest minor version in the form of a patch release.
+
+## Reporting a Vulnerability
+
+You can privately disclose a vulnerability through GitHub's
+[private vulnerability reporting](https://github.com/Masterminds/semver/security/advisories)
+mechanism.
diff --git a/vendor/github.com/Masterminds/semver/v3/collection.go b/vendor/github.com/Masterminds/semver/v3/collection.go
new file mode 100644
index 000000000..a78235895
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/collection.go
@@ -0,0 +1,24 @@
+package semver
+
+// Collection is a collection of Version instances and implements the sort
+// interface. See the sort package for more details.
+// https://golang.org/pkg/sort/
+type Collection []*Version
+
+// Len returns the length of a collection, i.e. the number of Version
+// instances in the slice.
+func (c Collection) Len() int {
+ return len(c)
+}
+
+// Less is needed for the sort interface to compare two Version objects on the
+// slice. It checks if one is less than the other.
+func (c Collection) Less(i, j int) bool {
+ return c[i].LessThan(c[j])
+}
+
+// Swap is needed for the sort interface to replace the Version objects
+// at two different positions in the slice.
+func (c Collection) Swap(i, j int) {
+ c[i], c[j] = c[j], c[i]
+}
diff --git a/vendor/github.com/Masterminds/semver/v3/constraints.go b/vendor/github.com/Masterminds/semver/v3/constraints.go
new file mode 100644
index 000000000..8461c7ed9
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/constraints.go
@@ -0,0 +1,594 @@
+package semver
+
+import (
+ "bytes"
+ "errors"
+ "fmt"
+ "regexp"
+ "strings"
+)
+
+// Constraints is one or more constraints that a semantic version can be
+// checked against.
+type Constraints struct {
+ constraints [][]*constraint
+}
+
+// NewConstraint returns a Constraints instance that a Version instance can
+// be checked against. If there is a parse error it will be returned.
+func NewConstraint(c string) (*Constraints, error) {
+
+ // Rewrite - ranges into a comparison operation.
+ c = rewriteRange(c)
+
+ ors := strings.Split(c, "||")
+ or := make([][]*constraint, len(ors))
+ for k, v := range ors {
+
+ // TODO: Find a way to validate and fetch all the constraints in a simpler form
+
+ // Validate the segment
+ if !validConstraintRegex.MatchString(v) {
+ return nil, fmt.Errorf("improper constraint: %s", v)
+ }
+
+ cs := findConstraintRegex.FindAllString(v, -1)
+ if cs == nil {
+ cs = append(cs, v)
+ }
+ result := make([]*constraint, len(cs))
+ for i, s := range cs {
+ pc, err := parseConstraint(s)
+ if err != nil {
+ return nil, err
+ }
+
+ result[i] = pc
+ }
+ or[k] = result
+ }
+
+ o := &Constraints{constraints: or}
+ return o, nil
+}
+
+// Check tests if a version satisfies the constraints.
+func (cs Constraints) Check(v *Version) bool {
+ // TODO(mattfarina): For v4 of this library consolidate the Check and Validate
+ // functions as the underlying functions make that possible now.
+ // loop over the ORs and check the inner ANDs
+ for _, o := range cs.constraints {
+ joy := true
+ for _, c := range o {
+ if check, _ := c.check(v); !check {
+ joy = false
+ break
+ }
+ }
+
+ if joy {
+ return true
+ }
+ }
+
+ return false
+}
+
+// Validate checks if a version satisfies a constraint. If not a slice of
+// reasons for the failure are returned in addition to a bool.
+func (cs Constraints) Validate(v *Version) (bool, []error) {
+ // loop over the ORs and check the inner ANDs
+ var e []error
+
+ // Capture the prerelease message only once. When it happens the first time
+ // this var is marked.
+ var prereleaseSeen bool
+ for _, o := range cs.constraints {
+ joy := true
+ for _, c := range o {
+ // Before running the check, handle the case where the version is
+ // a prerelease and the check is not searching for prereleases.
+ if c.con.pre == "" && v.pre != "" {
+ if !prereleaseSeen {
+ em := fmt.Errorf("%s is a prerelease version and the constraint is only looking for release versions", v)
+ e = append(e, em)
+ prereleaseSeen = true
+ }
+ joy = false
+
+ } else {
+
+ if _, err := c.check(v); err != nil {
+ e = append(e, err)
+ joy = false
+ }
+ }
+ }
+
+ if joy {
+ return true, []error{}
+ }
+ }
+
+ return false, e
+}
+
+func (cs Constraints) String() string {
+ buf := make([]string, len(cs.constraints))
+ var tmp bytes.Buffer
+
+ for k, v := range cs.constraints {
+ tmp.Reset()
+ vlen := len(v)
+ for kk, c := range v {
+ tmp.WriteString(c.string())
+
+ // Space separate the AND conditions
+ if vlen > 1 && kk < vlen-1 {
+ tmp.WriteString(" ")
+ }
+ }
+ buf[k] = tmp.String()
+ }
+
+ return strings.Join(buf, " || ")
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface.
+func (cs *Constraints) UnmarshalText(text []byte) error {
+ temp, err := NewConstraint(string(text))
+ if err != nil {
+ return err
+ }
+
+ *cs = *temp
+
+ return nil
+}
+
+// MarshalText implements the encoding.TextMarshaler interface.
+func (cs Constraints) MarshalText() ([]byte, error) {
+ return []byte(cs.String()), nil
+}
+
+var constraintOps map[string]cfunc
+var constraintRegex *regexp.Regexp
+var constraintRangeRegex *regexp.Regexp
+
+// Used to find individual constraints within a multi-constraint string
+var findConstraintRegex *regexp.Regexp
+
+// Used to validate that a segment of ANDs is valid
+var validConstraintRegex *regexp.Regexp
+
+const cvRegex string = `v?([0-9|x|X|\*]+)(\.[0-9|x|X|\*]+)?(\.[0-9|x|X|\*]+)?` +
+ `(-([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?` +
+ `(\+([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?`
+
+func init() {
+ constraintOps = map[string]cfunc{
+ "": constraintTildeOrEqual,
+ "=": constraintTildeOrEqual,
+ "!=": constraintNotEqual,
+ ">": constraintGreaterThan,
+ "<": constraintLessThan,
+ ">=": constraintGreaterThanEqual,
+ "=>": constraintGreaterThanEqual,
+ "<=": constraintLessThanEqual,
+ "=<": constraintLessThanEqual,
+ "~": constraintTilde,
+ "~>": constraintTilde,
+ "^": constraintCaret,
+ }
+
+ ops := `=||!=|>|<|>=|=>|<=|=<|~|~>|\^`
+
+ constraintRegex = regexp.MustCompile(fmt.Sprintf(
+ `^\s*(%s)\s*(%s)\s*$`,
+ ops,
+ cvRegex))
+
+ constraintRangeRegex = regexp.MustCompile(fmt.Sprintf(
+ `\s*(%s)\s+-\s+(%s)\s*`,
+ cvRegex, cvRegex))
+
+ findConstraintRegex = regexp.MustCompile(fmt.Sprintf(
+ `(%s)\s*(%s)`,
+ ops,
+ cvRegex))
+
+ // The first constraint in a string looks slightly different from the ones
+ // that follow it, due to the leading space or comma that separates
+ // subsequent constraints.
+ validConstraintRegex = regexp.MustCompile(fmt.Sprintf(
+ `^(\s*(%s)\s*(%s)\s*)((?:\s+|,\s*)(%s)\s*(%s)\s*)*$`,
+ ops,
+ cvRegex,
+ ops,
+ cvRegex))
+}
+
+// An individual constraint
+type constraint struct {
+ // The version used in the constraint check. For example, if a constraint
+ // is '<= 2.0.0' then con holds a version instance representing 2.0.0.
+ con *Version
+
+ // The original parsed version (e.g., 4.x from != 4.x)
+ orig string
+
+ // The original operator for the constraint
+ origfunc string
+
+ // When an x is used as part of the version (e.g., 1.x)
+ minorDirty bool
+ dirty bool
+ patchDirty bool
+}
+
+// Check if a version meets the constraint
+func (c *constraint) check(v *Version) (bool, error) {
+ return constraintOps[c.origfunc](v, c)
+}
+
+// String prints an individual constraint into a string
+func (c *constraint) string() string {
+ return c.origfunc + c.orig
+}
+
+type cfunc func(v *Version, c *constraint) (bool, error)
+
+func parseConstraint(c string) (*constraint, error) {
+ if len(c) > 0 {
+ m := constraintRegex.FindStringSubmatch(c)
+ if m == nil {
+ return nil, fmt.Errorf("improper constraint: %s", c)
+ }
+
+ cs := &constraint{
+ orig: m[2],
+ origfunc: m[1],
+ }
+
+ ver := m[2]
+ minorDirty := false
+ patchDirty := false
+ dirty := false
+ if isX(m[3]) || m[3] == "" {
+ ver = fmt.Sprintf("0.0.0%s", m[6])
+ dirty = true
+ } else if isX(strings.TrimPrefix(m[4], ".")) || m[4] == "" {
+ minorDirty = true
+ dirty = true
+ ver = fmt.Sprintf("%s.0.0%s", m[3], m[6])
+ } else if isX(strings.TrimPrefix(m[5], ".")) || m[5] == "" {
+ dirty = true
+ patchDirty = true
+ ver = fmt.Sprintf("%s%s.0%s", m[3], m[4], m[6])
+ }
+
+ con, err := NewVersion(ver)
+ if err != nil {
+
+ // The constraintRegex should catch any regex parsing errors. So,
+ // we should never get here.
+ return nil, errors.New("constraint Parser Error")
+ }
+
+ cs.con = con
+ cs.minorDirty = minorDirty
+ cs.patchDirty = patchDirty
+ cs.dirty = dirty
+
+ return cs, nil
+ }
+
+ // The rest is the special case where an empty string was passed in which
+ // is equivalent to * or >=0.0.0
+ con, err := StrictNewVersion("0.0.0")
+ if err != nil {
+
+ // The constraintRegex should catch any regex parsing errors. So,
+ // we should never get here.
+ return nil, errors.New("constraint Parser Error")
+ }
+
+ cs := &constraint{
+ con: con,
+ orig: c,
+ origfunc: "",
+ minorDirty: false,
+ patchDirty: false,
+ dirty: true,
+ }
+ return cs, nil
+}
+
+// Constraint functions
+func constraintNotEqual(v *Version, c *constraint) (bool, error) {
+ if c.dirty {
+
+ // If there is a pre-release on the version but the constraint isn't looking
+ // for them assume that pre-releases are not compatible. See issue 21 for
+ // more details.
+ if v.Prerelease() != "" && c.con.Prerelease() == "" {
+ return false, fmt.Errorf("%s is a prerelease version and the constraint is only looking for release versions", v)
+ }
+
+ if c.con.Major() != v.Major() {
+ return true, nil
+ }
+ if c.con.Minor() != v.Minor() && !c.minorDirty {
+ return true, nil
+ } else if c.minorDirty {
+ return false, fmt.Errorf("%s is equal to %s", v, c.orig)
+ } else if c.con.Patch() != v.Patch() && !c.patchDirty {
+ return true, nil
+ } else if c.patchDirty {
+ // Need to handle prereleases if present
+ if v.Prerelease() != "" || c.con.Prerelease() != "" {
+ eq := comparePrerelease(v.Prerelease(), c.con.Prerelease()) != 0
+ if eq {
+ return true, nil
+ }
+ return false, fmt.Errorf("%s is equal to %s", v, c.orig)
+ }
+ return false, fmt.Errorf("%s is equal to %s", v, c.orig)
+ }
+ }
+
+ eq := v.Equal(c.con)
+ if eq {
+ return false, fmt.Errorf("%s is equal to %s", v, c.orig)
+ }
+
+ return true, nil
+}
+
+func constraintGreaterThan(v *Version, c *constraint) (bool, error) {
+
+ // If there is a pre-release on the version but the constraint isn't looking
+ // for them assume that pre-releases are not compatible. See issue 21 for
+ // more details.
+ if v.Prerelease() != "" && c.con.Prerelease() == "" {
+ return false, fmt.Errorf("%s is a prerelease version and the constraint is only looking for release versions", v)
+ }
+
+ var eq bool
+
+ if !c.dirty {
+ eq = v.Compare(c.con) == 1
+ if eq {
+ return true, nil
+ }
+ return false, fmt.Errorf("%s is less than or equal to %s", v, c.orig)
+ }
+
+ if v.Major() > c.con.Major() {
+ return true, nil
+ } else if v.Major() < c.con.Major() {
+ return false, fmt.Errorf("%s is less than or equal to %s", v, c.orig)
+ } else if c.minorDirty {
+ // This is a range case such as >11. When the version is something like
+ // 11.1.0 it is not > 11. For that we would need 12 or higher.
+ return false, fmt.Errorf("%s is less than or equal to %s", v, c.orig)
+ } else if c.patchDirty {
+ // This is for ranges such as >11.1. A version of 11.1.1 is not greater,
+ // while one of 11.2.1 is.
+ eq = v.Minor() > c.con.Minor()
+ if eq {
+ return true, nil
+ }
+ return false, fmt.Errorf("%s is less than or equal to %s", v, c.orig)
+ }
+
+ // If we have gotten here we are not comparing pre-releases and can use the
+ // Compare function to accomplish that.
+ eq = v.Compare(c.con) == 1
+ if eq {
+ return true, nil
+ }
+ return false, fmt.Errorf("%s is less than or equal to %s", v, c.orig)
+}
+
+func constraintLessThan(v *Version, c *constraint) (bool, error) {
+ // If there is a pre-release on the version but the constraint isn't looking
+ // for them assume that pre-releases are not compatible. See issue 21 for
+ // more details.
+ if v.Prerelease() != "" && c.con.Prerelease() == "" {
+ return false, fmt.Errorf("%s is a prerelease version and the constraint is only looking for release versions", v)
+ }
+
+ eq := v.Compare(c.con) < 0
+ if eq {
+ return true, nil
+ }
+ return false, fmt.Errorf("%s is greater than or equal to %s", v, c.orig)
+}
+
+func constraintGreaterThanEqual(v *Version, c *constraint) (bool, error) {
+
+ // If there is a pre-release on the version but the constraint isn't looking
+ // for them assume that pre-releases are not compatible. See issue 21 for
+ // more details.
+ if v.Prerelease() != "" && c.con.Prerelease() == "" {
+ return false, fmt.Errorf("%s is a prerelease version and the constraint is only looking for release versions", v)
+ }
+
+ eq := v.Compare(c.con) >= 0
+ if eq {
+ return true, nil
+ }
+ return false, fmt.Errorf("%s is less than %s", v, c.orig)
+}
+
+func constraintLessThanEqual(v *Version, c *constraint) (bool, error) {
+ // If there is a pre-release on the version but the constraint isn't looking
+ // for them assume that pre-releases are not compatible. See issue 21 for
+ // more details.
+ if v.Prerelease() != "" && c.con.Prerelease() == "" {
+ return false, fmt.Errorf("%s is a prerelease version and the constraint is only looking for release versions", v)
+ }
+
+ var eq bool
+
+ if !c.dirty {
+ eq = v.Compare(c.con) <= 0
+ if eq {
+ return true, nil
+ }
+ return false, fmt.Errorf("%s is greater than %s", v, c.orig)
+ }
+
+ if v.Major() > c.con.Major() {
+ return false, fmt.Errorf("%s is greater than %s", v, c.orig)
+ } else if v.Major() == c.con.Major() && v.Minor() > c.con.Minor() && !c.minorDirty {
+ return false, fmt.Errorf("%s is greater than %s", v, c.orig)
+ }
+
+ return true, nil
+}
+
+// ~*, ~>* --> >= 0.0.0 (any)
+// ~2, ~2.x, ~2.x.x, ~>2, ~>2.x ~>2.x.x --> >=2.0.0, <3.0.0
+// ~2.0, ~2.0.x, ~>2.0, ~>2.0.x --> >=2.0.0, <2.1.0
+// ~1.2, ~1.2.x, ~>1.2, ~>1.2.x --> >=1.2.0, <1.3.0
+// ~1.2.3, ~>1.2.3 --> >=1.2.3, <1.3.0
+// ~1.2.0, ~>1.2.0 --> >=1.2.0, <1.3.0
+func constraintTilde(v *Version, c *constraint) (bool, error) {
+ // If there is a pre-release on the version but the constraint isn't looking
+ // for them assume that pre-releases are not compatible. See issue 21 for
+ // more details.
+ if v.Prerelease() != "" && c.con.Prerelease() == "" {
+ return false, fmt.Errorf("%s is a prerelease version and the constraint is only looking for release versions", v)
+ }
+
+ if v.LessThan(c.con) {
+ return false, fmt.Errorf("%s is less than %s", v, c.orig)
+ }
+
+ // ~0.0.0 is a special case where all versions are accepted. It's
+ // equivalent to >= 0.0.0.
+ if c.con.Major() == 0 && c.con.Minor() == 0 && c.con.Patch() == 0 &&
+ !c.minorDirty && !c.patchDirty {
+ return true, nil
+ }
+
+ if v.Major() != c.con.Major() {
+ return false, fmt.Errorf("%s does not have same major version as %s", v, c.orig)
+ }
+
+ if v.Minor() != c.con.Minor() && !c.minorDirty {
+ return false, fmt.Errorf("%s does not have same major and minor version as %s", v, c.orig)
+ }
+
+ return true, nil
+}
+
+// When there is a .x (dirty) status it automatically opts in to ~. Otherwise
+// it's a straight =
+func constraintTildeOrEqual(v *Version, c *constraint) (bool, error) {
+ // If there is a pre-release on the version but the constraint isn't looking
+ // for them assume that pre-releases are not compatible. See issue 21 for
+ // more details.
+ if v.Prerelease() != "" && c.con.Prerelease() == "" {
+ return false, fmt.Errorf("%s is a prerelease version and the constraint is only looking for release versions", v)
+ }
+
+ if c.dirty {
+ return constraintTilde(v, c)
+ }
+
+ eq := v.Equal(c.con)
+ if eq {
+ return true, nil
+ }
+
+ return false, fmt.Errorf("%s is not equal to %s", v, c.orig)
+}
+
+// ^* --> (any)
+// ^1.2.3 --> >=1.2.3 <2.0.0
+// ^1.2 --> >=1.2.0 <2.0.0
+// ^1 --> >=1.0.0 <2.0.0
+// ^0.2.3 --> >=0.2.3 <0.3.0
+// ^0.2 --> >=0.2.0 <0.3.0
+// ^0.0.3 --> >=0.0.3 <0.0.4
+// ^0.0 --> >=0.0.0 <0.1.0
+// ^0 --> >=0.0.0 <1.0.0
+func constraintCaret(v *Version, c *constraint) (bool, error) {
+ // If there is a pre-release on the version but the constraint isn't looking
+ // for them assume that pre-releases are not compatible. See issue 21 for
+ // more details.
+ if v.Prerelease() != "" && c.con.Prerelease() == "" {
+ return false, fmt.Errorf("%s is a prerelease version and the constraint is only looking for release versions", v)
+ }
+
+ // This less-than check handles prereleases
+ if v.LessThan(c.con) {
+ return false, fmt.Errorf("%s is less than %s", v, c.orig)
+ }
+
+ var eq bool
+
+ // ^ when the major > 0 is >=x.y.z < x+1
+ if c.con.Major() > 0 || c.minorDirty {
+
+ // ^ has to be within a major range for > 0. Everything less than was
+ // filtered out with the LessThan call above. This filters out those
+ // that are greater but not within the same major range.
+ eq = v.Major() == c.con.Major()
+ if eq {
+ return true, nil
+ }
+ return false, fmt.Errorf("%s does not have same major version as %s", v, c.orig)
+ }
+
+ // ^ when the major is 0 and minor > 0 is >=0.y.z < 0.y+1
+ if c.con.Major() == 0 && v.Major() > 0 {
+ return false, fmt.Errorf("%s does not have same major version as %s", v, c.orig)
+ }
+ // If the con Minor is > 0 it is not dirty
+ if c.con.Minor() > 0 || c.patchDirty {
+ eq = v.Minor() == c.con.Minor()
+ if eq {
+ return true, nil
+ }
+ return false, fmt.Errorf("%s does not have same minor version as %s. Expected minor versions to match when constraint major version is 0", v, c.orig)
+ }
+ // ^ when the major is 0 and the minor is 0 is =0.0.z
+ if c.con.Minor() == 0 && v.Minor() > 0 {
+ return false, fmt.Errorf("%s does not have same minor version as %s", v, c.orig)
+ }
+
+ // At this point the major is 0 and the minor is 0 and not dirty. The patch
+ // is not dirty so we need to check if they are equal. If they are not equal,
+ // an error is returned.
+ eq = c.con.Patch() == v.Patch()
+ if eq {
+ return true, nil
+ }
+ return false, fmt.Errorf("%s does not equal %s. Expect version and constraint to equal when major and minor versions are 0", v, c.orig)
+}
+
+func isX(x string) bool {
+ switch x {
+ case "x", "*", "X":
+ return true
+ default:
+ return false
+ }
+}
+
+func rewriteRange(i string) string {
+ m := constraintRangeRegex.FindAllStringSubmatch(i, -1)
+ if m == nil {
+ return i
+ }
+ o := i
+ for _, v := range m {
+ t := fmt.Sprintf(">= %s, <= %s ", v[1], v[11])
+ o = strings.Replace(o, v[0], t, 1)
+ }
+
+ return o
+}
diff --git a/vendor/github.com/Masterminds/semver/v3/doc.go b/vendor/github.com/Masterminds/semver/v3/doc.go
new file mode 100644
index 000000000..74f97caa5
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/doc.go
@@ -0,0 +1,184 @@
+/*
+Package semver provides the ability to work with Semantic Versions (http://semver.org) in Go.
+
+Specifically it provides the ability to:
+
+ - Parse semantic versions
+ - Sort semantic versions
+ - Check if a semantic version fits within a set of constraints
+ - Optionally work with a `v` prefix
+
+# Parsing Semantic Versions
+
+There are two functions that can parse semantic versions. The `StrictNewVersion`
+function only parses valid version 2 semantic versions as outlined in the
+specification. The `NewVersion` function attempts to coerce a version into a
+semantic version and parse it. For example, if there is a leading v or a version
+listed without all 3 parts (e.g. 1.2) it will attempt to coerce it into a valid
+semantic version (e.g., 1.2.0). In both cases a `Version` object is returned
+that can be sorted, compared, and used in constraints.
+
+When parsing a version an error is returned if there is an issue parsing the
+version. For example,
+
+ v, err := semver.NewVersion("1.2.3-beta.1+b345")
+
+The version object has methods to get the parts of the version, compare it to
+other versions, convert the version back into a string, and get the original
+string. For more details please see the documentation
+at https://godoc.org/github.com/Masterminds/semver.
+
+# Sorting Semantic Versions
+
+A set of versions can be sorted using the `sort` package from the standard library.
+For example,
+
+ raw := []string{"1.2.3", "1.0", "1.3", "2", "0.4.2",}
+ vs := make([]*semver.Version, len(raw))
+ for i, r := range raw {
+ v, err := semver.NewVersion(r)
+ if err != nil {
+ t.Errorf("Error parsing version: %s", err)
+ }
+
+ vs[i] = v
+ }
+
+ sort.Sort(semver.Collection(vs))
+
+# Checking Version Constraints and Comparing Versions
+
+There are two methods for comparing versions. One uses comparison methods on
+`Version` instances and the other uses Constraints. There are some important
+differences to note between these two methods of comparison.
+
+ 1. When two versions are compared using functions such as `Compare`, `LessThan`,
+ and others it will follow the specification and always include prereleases
+ within the comparison. It will provide an answer consistent with the
+ comparison section of the spec at https://semver.org/#spec-item-11
+ 2. When constraint checking is used for checks or validation it will follow a
+ different set of rules that are common for ranges with tools like npm/js
+ and Rust/Cargo. This includes considering prereleases to be invalid if the
+ range does not include one. If you want to have it include pre-releases a
+ simple solution is to include `-0` in your range.
+ 3. Constraint ranges can have some complex rules including the shorthand use of
+ ~ and ^. For more details on those see the options below.
+
+There are differences between the two methods of checking versions because the
+comparison methods on `Version` follow the specification while comparison ranges
+are not part of the specification. Different packages and tools have taken it
+upon themselves to come up with range rules. This has resulted in differences.
+For example, npm/js and Cargo/Rust follow similar patterns while PHP has a
+different pattern for ^. The comparison features in this package follow the
+npm/js and Cargo/Rust lead because applications using it have followed similar
+patterns with their versions.
+
+Checking a version against version constraints is one of the most featureful
+parts of the package.
+
+ c, err := semver.NewConstraint(">= 1.2.3")
+ if err != nil {
+ // Handle constraint not being parsable.
+ }
+
+ v, err := semver.NewVersion("1.3")
+ if err != nil {
+ // Handle version not being parsable.
+ }
+ // Check if the version meets the constraints. The variable a will be true.
+ a := c.Check(v)
+
+# Basic Comparisons
+
+There are two elements to the comparisons. First, a comparison string is a list
+of comma or space separated AND comparisons. These are then separated by || (OR)
+comparisons. For example, `">= 1.2 < 3.0.0 || >= 4.2.3"` is looking for a
+comparison that's greater than or equal to 1.2 and less than 3.0.0 or is
+greater than or equal to 4.2.3. This can also be written as
+`">= 1.2, < 3.0.0 || >= 4.2.3"`
+
+The basic comparisons are:
+
+ - `=`: equal (aliased to no operator)
+ - `!=`: not equal
+ - `>`: greater than
+ - `<`: less than
+ - `>=`: greater than or equal to
+ - `<=`: less than or equal to
+
+# Hyphen Range Comparisons
+
+There are multiple methods to handle ranges and the first is hyphens ranges.
+These look like:
+
+ - `1.2 - 1.4.5` which is equivalent to `>= 1.2, <= 1.4.5`
+ - `2.3.4 - 4.5` which is equivalent to `>= 2.3.4 <= 4.5`
+
+# Wildcards In Comparisons
+
+The `x`, `X`, and `*` characters can be used as a wildcard character. This works
+for all comparison operators. When used on the `=` operator it falls
+back to the tilde operation. For example,
+
+ - `1.2.x` is equivalent to `>= 1.2.0 < 1.3.0`
+ - `>= 1.2.x` is equivalent to `>= 1.2.0`
+ - `<= 2.x` is equivalent to `< 3`
+ - `*` is equivalent to `>= 0.0.0`
+
+# Tilde Range Comparisons (Patch)
+
+The tilde (`~`) comparison operator is for patch level ranges when a minor
+version is specified and major level changes when the minor number is missing.
+For example,
+
+ - `~1.2.3` is equivalent to `>= 1.2.3 < 1.3.0`
+ - `~1` is equivalent to `>= 1, < 2`
+ - `~2.3` is equivalent to `>= 2.3 < 2.4`
+ - `~1.2.x` is equivalent to `>= 1.2.0 < 1.3.0`
+ - `~1.x` is equivalent to `>= 1 < 2`
+
+# Caret Range Comparisons (Major)
+
+The caret (`^`) comparison operator is for major level changes once a stable
+(1.0.0) release has occurred. Prior to a 1.0.0 release the minor version acts
+as the API stability level. This is useful for comparisons of API versions, as a
+major change is API breaking. For example,
+
+ - `^1.2.3` is equivalent to `>= 1.2.3, < 2.0.0`
+ - `^1.2.x` is equivalent to `>= 1.2.0, < 2.0.0`
+ - `^2.3` is equivalent to `>= 2.3, < 3`
+ - `^2.x` is equivalent to `>= 2.0.0, < 3`
+ - `^0.2.3` is equivalent to `>=0.2.3 <0.3.0`
+ - `^0.2` is equivalent to `>=0.2.0 <0.3.0`
+ - `^0.0.3` is equivalent to `>=0.0.3 <0.0.4`
+ - `^0.0` is equivalent to `>=0.0.0 <0.1.0`
+ - `^0` is equivalent to `>=0.0.0 <1.0.0`
+
+# Validation
+
+In addition to testing a version against a constraint, a version can be validated
+against a constraint. When validation fails a slice of errors containing why a
+version didn't meet the constraint is returned. For example,
+
+ c, err := semver.NewConstraint("<= 1.2.3, >= 1.4")
+ if err != nil {
+ // Handle constraint not being parseable.
+ }
+
+ v, err := semver.NewVersion("1.3")
+ if err != nil {
+ // Handle version not being parseable.
+ }
+
+ // Validate a version against a constraint.
+ a, msgs := c.Validate(v)
+ // a is false
+ for _, m := range msgs {
+ fmt.Println(m)
+
+ // Loops over the errors which would read
+ // "1.3 is greater than 1.2.3"
+ // "1.3 is less than 1.4"
+ }
+*/
+package semver
diff --git a/vendor/github.com/Masterminds/semver/v3/version.go b/vendor/github.com/Masterminds/semver/v3/version.go
new file mode 100644
index 000000000..ff499fb66
--- /dev/null
+++ b/vendor/github.com/Masterminds/semver/v3/version.go
@@ -0,0 +1,639 @@
+package semver
+
+import (
+ "bytes"
+ "database/sql/driver"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "regexp"
+ "strconv"
+ "strings"
+)
+
+// The compiled version of the regex created at init() is cached here so it
+// only needs to be created once.
+var versionRegex *regexp.Regexp
+
+var (
+ // ErrInvalidSemVer is returned when a version is found to be invalid
+ // while being parsed.
+ ErrInvalidSemVer = errors.New("Invalid Semantic Version")
+
+ // ErrEmptyString is returned when an empty string is passed in for parsing.
+ ErrEmptyString = errors.New("Version string empty")
+
+ // ErrInvalidCharacters is returned when invalid characters are found as
+ // part of a version
+ ErrInvalidCharacters = errors.New("Invalid characters in version")
+
+ // ErrSegmentStartsZero is returned when a version segment starts with 0.
+ // This is invalid in SemVer.
+ ErrSegmentStartsZero = errors.New("Version segment starts with 0")
+
+ // ErrInvalidMetadata is returned when the metadata is an invalid format
+ ErrInvalidMetadata = errors.New("Invalid Metadata string")
+
+ // ErrInvalidPrerelease is returned when the pre-release is an invalid format
+ ErrInvalidPrerelease = errors.New("Invalid Prerelease string")
+)
+
+// semVerRegex is the regular expression used to parse a semantic version.
+const semVerRegex string = `v?([0-9]+)(\.[0-9]+)?(\.[0-9]+)?` +
+ `(-([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?` +
+ `(\+([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?`
+
+// Version represents a single semantic version.
+type Version struct {
+ major, minor, patch uint64
+ pre string
+ metadata string
+ original string
+}
+
+func init() {
+ versionRegex = regexp.MustCompile("^" + semVerRegex + "$")
+}
+
+const (
+ num string = "0123456789"
+ allowed string = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-" + num
+)
+
+// StrictNewVersion parses a given version and returns an instance of Version or
+// an error if unable to parse the version. Only parses valid semantic versions.
+// Performs checking that can find errors within the version.
+// If you want to coerce a version such as 1 or 1.2 and parse it as the 1.x
+// releases of semver did, use the NewVersion() function.
+func StrictNewVersion(v string) (*Version, error) {
+ // Parsing here does not use RegEx in order to increase performance and reduce
+ // allocations.
+
+ if len(v) == 0 {
+ return nil, ErrEmptyString
+ }
+
+ // Split the parts into [0]major, [1]minor, and [2]patch,prerelease,build
+ parts := strings.SplitN(v, ".", 3)
+ if len(parts) != 3 {
+ return nil, ErrInvalidSemVer
+ }
+
+ sv := &Version{
+ original: v,
+ }
+
+ // Extract build metadata
+ if strings.Contains(parts[2], "+") {
+ extra := strings.SplitN(parts[2], "+", 2)
+ sv.metadata = extra[1]
+ parts[2] = extra[0]
+ if err := validateMetadata(sv.metadata); err != nil {
+ return nil, err
+ }
+ }
+
+ // Extract build prerelease
+ if strings.Contains(parts[2], "-") {
+ extra := strings.SplitN(parts[2], "-", 2)
+ sv.pre = extra[1]
+ parts[2] = extra[0]
+ if err := validatePrerelease(sv.pre); err != nil {
+ return nil, err
+ }
+ }
+
+ // Validate the number segments are valid. This includes only having positive
+ // numbers and no leading 0's.
+ for _, p := range parts {
+ if !containsOnly(p, num) {
+ return nil, ErrInvalidCharacters
+ }
+
+ if len(p) > 1 && p[0] == '0' {
+ return nil, ErrSegmentStartsZero
+ }
+ }
+
+ // Extract major, minor, and patch
+ var err error
+ sv.major, err = strconv.ParseUint(parts[0], 10, 64)
+ if err != nil {
+ return nil, err
+ }
+
+ sv.minor, err = strconv.ParseUint(parts[1], 10, 64)
+ if err != nil {
+ return nil, err
+ }
+
+ sv.patch, err = strconv.ParseUint(parts[2], 10, 64)
+ if err != nil {
+ return nil, err
+ }
+
+ return sv, nil
+}
+
+// NewVersion parses a given version and returns an instance of Version or
+// an error if unable to parse the version. If the version is SemVer-ish it
+// attempts to convert it to SemVer. If you want to validate it was a strict
+// semantic version at parse time see StrictNewVersion().
+func NewVersion(v string) (*Version, error) {
+ m := versionRegex.FindStringSubmatch(v)
+ if m == nil {
+ return nil, ErrInvalidSemVer
+ }
+
+ sv := &Version{
+ metadata: m[8],
+ pre: m[5],
+ original: v,
+ }
+
+ var err error
+ sv.major, err = strconv.ParseUint(m[1], 10, 64)
+ if err != nil {
+ return nil, fmt.Errorf("Error parsing version segment: %s", err)
+ }
+
+ if m[2] != "" {
+ sv.minor, err = strconv.ParseUint(strings.TrimPrefix(m[2], "."), 10, 64)
+ if err != nil {
+ return nil, fmt.Errorf("Error parsing version segment: %s", err)
+ }
+ } else {
+ sv.minor = 0
+ }
+
+ if m[3] != "" {
+ sv.patch, err = strconv.ParseUint(strings.TrimPrefix(m[3], "."), 10, 64)
+ if err != nil {
+ return nil, fmt.Errorf("Error parsing version segment: %s", err)
+ }
+ } else {
+ sv.patch = 0
+ }
+
+ // Perform some basic due diligence on the extra parts to ensure they are
+ // valid.
+
+ if sv.pre != "" {
+ if err = validatePrerelease(sv.pre); err != nil {
+ return nil, err
+ }
+ }
+
+ if sv.metadata != "" {
+ if err = validateMetadata(sv.metadata); err != nil {
+ return nil, err
+ }
+ }
+
+ return sv, nil
+}
+
+// New creates a new instance of Version with each of the parts passed in as
+// arguments instead of parsing a version string.
+func New(major, minor, patch uint64, pre, metadata string) *Version {
+ v := Version{
+ major: major,
+ minor: minor,
+ patch: patch,
+ pre: pre,
+ metadata: metadata,
+ original: "",
+ }
+
+ v.original = v.String()
+
+ return &v
+}
+
+// MustParse parses a given version and panics on error.
+func MustParse(v string) *Version {
+ sv, err := NewVersion(v)
+ if err != nil {
+ panic(err)
+ }
+ return sv
+}
+
+// String converts a Version object to a string.
+// Note, if the original version contained a leading v this version will not.
+// See the Original() method to retrieve the original value. Semantic Versions
+// don't contain a leading v per the spec. Instead, it's optional for
+// implementations.
+func (v Version) String() string {
+ var buf bytes.Buffer
+
+ fmt.Fprintf(&buf, "%d.%d.%d", v.major, v.minor, v.patch)
+ if v.pre != "" {
+ fmt.Fprintf(&buf, "-%s", v.pre)
+ }
+ if v.metadata != "" {
+ fmt.Fprintf(&buf, "+%s", v.metadata)
+ }
+
+ return buf.String()
+}
+
+// Original returns the original value passed in to be parsed.
+func (v *Version) Original() string {
+ return v.original
+}
+
+// Major returns the major version.
+func (v Version) Major() uint64 {
+ return v.major
+}
+
+// Minor returns the minor version.
+func (v Version) Minor() uint64 {
+ return v.minor
+}
+
+// Patch returns the patch version.
+func (v Version) Patch() uint64 {
+ return v.patch
+}
+
+// Prerelease returns the pre-release version.
+func (v Version) Prerelease() string {
+ return v.pre
+}
+
+// Metadata returns the metadata on the version.
+func (v Version) Metadata() string {
+ return v.metadata
+}
+
+// originalVPrefix returns the original 'v' prefix if any.
+func (v Version) originalVPrefix() string {
+ // Note, only lowercase v is supported as a prefix by the parser.
+ if v.original != "" && v.original[:1] == "v" {
+ return v.original[:1]
+ }
+ return ""
+}
+
+// IncPatch produces the next patch version.
+// If the current version does not have prerelease/metadata information,
+// it unsets metadata and prerelease values and increments the patch number.
+// If the current version has any prerelease or metadata information,
+// it unsets both values and keeps the current patch value.
+func (v Version) IncPatch() Version {
+ vNext := v
+ // according to http://semver.org/#spec-item-9
+ // Pre-release versions have a lower precedence than the associated normal version.
+ // according to http://semver.org/#spec-item-10
+ // Build metadata SHOULD be ignored when determining version precedence.
+ if v.pre != "" {
+ vNext.metadata = ""
+ vNext.pre = ""
+ } else {
+ vNext.metadata = ""
+ vNext.pre = ""
+ vNext.patch = v.patch + 1
+ }
+ vNext.original = v.originalVPrefix() + "" + vNext.String()
+ return vNext
+}
+
+// IncMinor produces the next minor version.
+// Sets patch to 0.
+// Increments minor number.
+// Unsets metadata.
+// Unsets prerelease status.
+func (v Version) IncMinor() Version {
+ vNext := v
+ vNext.metadata = ""
+ vNext.pre = ""
+ vNext.patch = 0
+ vNext.minor = v.minor + 1
+ vNext.original = v.originalVPrefix() + "" + vNext.String()
+ return vNext
+}
+
+// IncMajor produces the next major version.
+// Sets patch to 0.
+// Sets minor to 0.
+// Increments major number.
+// Unsets metadata.
+// Unsets prerelease status.
+func (v Version) IncMajor() Version {
+ vNext := v
+ vNext.metadata = ""
+ vNext.pre = ""
+ vNext.patch = 0
+ vNext.minor = 0
+ vNext.major = v.major + 1
+ vNext.original = v.originalVPrefix() + "" + vNext.String()
+ return vNext
+}
+
+// SetPrerelease defines the prerelease value.
+// Value must not include the required 'hyphen' prefix.
+func (v Version) SetPrerelease(prerelease string) (Version, error) {
+ vNext := v
+ if len(prerelease) > 0 {
+ if err := validatePrerelease(prerelease); err != nil {
+ return vNext, err
+ }
+ }
+ vNext.pre = prerelease
+ vNext.original = v.originalVPrefix() + "" + vNext.String()
+ return vNext, nil
+}
+
+// SetMetadata defines metadata value.
+// Value must not include the required 'plus' prefix.
+func (v Version) SetMetadata(metadata string) (Version, error) {
+ vNext := v
+ if len(metadata) > 0 {
+ if err := validateMetadata(metadata); err != nil {
+ return vNext, err
+ }
+ }
+ vNext.metadata = metadata
+ vNext.original = v.originalVPrefix() + "" + vNext.String()
+ return vNext, nil
+}
+
+// LessThan tests if one version is less than another one.
+func (v *Version) LessThan(o *Version) bool {
+ return v.Compare(o) < 0
+}
+
+// LessThanEqual tests if one version is less than or equal to another one.
+func (v *Version) LessThanEqual(o *Version) bool {
+ return v.Compare(o) <= 0
+}
+
+// GreaterThan tests if one version is greater than another one.
+func (v *Version) GreaterThan(o *Version) bool {
+ return v.Compare(o) > 0
+}
+
+// GreaterThanEqual tests if one version is greater than or equal to another one.
+func (v *Version) GreaterThanEqual(o *Version) bool {
+ return v.Compare(o) >= 0
+}
+
+// Equal tests if two versions are equal to each other.
+// Note, versions can be equal with different metadata since metadata
+// is not considered part of the comparable version.
+func (v *Version) Equal(o *Version) bool {
+ if v == o {
+ return true
+ }
+ if v == nil || o == nil {
+ return false
+ }
+ return v.Compare(o) == 0
+}
+
+// Compare compares this version to another one. It returns -1, 0, or 1 if
+// the version is smaller than, equal to, or larger than the other version.
+//
+// Versions are compared by X.Y.Z. Build metadata is ignored. Prerelease is
+// lower than the version without a prerelease. Compare always takes into account
+// prereleases. If you want to work with ranges using typical range syntaxes that
+// skip prereleases if the range is not looking for them use constraints.
+func (v *Version) Compare(o *Version) int {
+ // Compare the major, minor, and patch version for differences. If a
+ // difference is found return the comparison.
+ if d := compareSegment(v.Major(), o.Major()); d != 0 {
+ return d
+ }
+ if d := compareSegment(v.Minor(), o.Minor()); d != 0 {
+ return d
+ }
+ if d := compareSegment(v.Patch(), o.Patch()); d != 0 {
+ return d
+ }
+
+ // At this point the major, minor, and patch versions are the same.
+ ps := v.pre
+ po := o.Prerelease()
+
+ if ps == "" && po == "" {
+ return 0
+ }
+ if ps == "" {
+ return 1
+ }
+ if po == "" {
+ return -1
+ }
+
+ return comparePrerelease(ps, po)
+}
+
+// UnmarshalJSON implements the json.Unmarshaler interface.
+func (v *Version) UnmarshalJSON(b []byte) error {
+ var s string
+ if err := json.Unmarshal(b, &s); err != nil {
+ return err
+ }
+ temp, err := NewVersion(s)
+ if err != nil {
+ return err
+ }
+ v.major = temp.major
+ v.minor = temp.minor
+ v.patch = temp.patch
+ v.pre = temp.pre
+ v.metadata = temp.metadata
+ v.original = temp.original
+ return nil
+}
+
+// MarshalJSON implements the json.Marshaler interface.
+func (v Version) MarshalJSON() ([]byte, error) {
+ return json.Marshal(v.String())
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface.
+func (v *Version) UnmarshalText(text []byte) error {
+ temp, err := NewVersion(string(text))
+ if err != nil {
+ return err
+ }
+
+ *v = *temp
+
+ return nil
+}
+
+// MarshalText implements the encoding.TextMarshaler interface.
+func (v Version) MarshalText() ([]byte, error) {
+ return []byte(v.String()), nil
+}
+
+// Scan implements the sql.Scanner interface.
+func (v *Version) Scan(value interface{}) error {
+ var s string
+ s, _ = value.(string)
+ temp, err := NewVersion(s)
+ if err != nil {
+ return err
+ }
+ v.major = temp.major
+ v.minor = temp.minor
+ v.patch = temp.patch
+ v.pre = temp.pre
+ v.metadata = temp.metadata
+ v.original = temp.original
+ return nil
+}
+
+// Value implements the driver.Valuer interface.
+func (v Version) Value() (driver.Value, error) {
+ return v.String(), nil
+}
+
+func compareSegment(v, o uint64) int {
+ if v < o {
+ return -1
+ }
+ if v > o {
+ return 1
+ }
+
+ return 0
+}
+
+func comparePrerelease(v, o string) int {
+ // Split the prerelease versions by their parts. The separator, per the
+ // spec, is a period.
+ sparts := strings.Split(v, ".")
+ oparts := strings.Split(o, ".")
+
+ // Find the longer length of the parts to know how many loop iterations to
+ // go through.
+ slen := len(sparts)
+ olen := len(oparts)
+
+ l := slen
+ if olen > slen {
+ l = olen
+ }
+
+ // Iterate over each part of the prereleases to compare the differences.
+ for i := 0; i < l; i++ {
+ // Since the length of the parts can be different we need to create
+ // a placeholder. This is to avoid out of bounds issues.
+ stemp := ""
+ if i < slen {
+ stemp = sparts[i]
+ }
+
+ otemp := ""
+ if i < olen {
+ otemp = oparts[i]
+ }
+
+ d := comparePrePart(stemp, otemp)
+ if d != 0 {
+ return d
+ }
+ }
+
+ // Reaching here means two versions are of equal value but have different
+ // metadata (the part following a +). They are not identical in string form
+ // but the version comparison finds them to be equal.
+ return 0
+}
+
+func comparePrePart(s, o string) int {
+ // Fastpath if they are equal
+ if s == o {
+ return 0
+ }
+
+ // When s or o are empty we can use the other in an attempt to determine
+ // the response.
+ if s == "" {
+ if o != "" {
+ return -1
+ }
+ return 1
+ }
+
+ if o == "" {
+ if s != "" {
+ return 1
+ }
+ return -1
+ }
+
+ // When comparing strings "99" is greater than "103". To handle
+ // cases like this we need to detect numbers and compare them numerically.
+ // According to the semver spec, numbers are always positive. If there is
+ // a - at the start like -99 this is to be evaluated as an alphanum. Per
+ // the spec, numeric identifiers always have lower precedence than
+ // alphanumeric ones. Parsing as Uints because negative numbers are ignored.
+
+ oi, n1 := strconv.ParseUint(o, 10, 64)
+ si, n2 := strconv.ParseUint(s, 10, 64)
+
+ // The case where both are strings compare the strings
+ if n1 != nil && n2 != nil {
+ if s > o {
+ return 1
+ }
+ return -1
+ } else if n1 != nil {
+ // o is a string and s is a number
+ return -1
+ } else if n2 != nil {
+ // s is a string and o is a number
+ return 1
+ }
+ // Both are numbers
+ if si > oi {
+ return 1
+ }
+ return -1
+}
+
+// Like strings.ContainsAny, but checks that s contains only runes from comp.
+func containsOnly(s string, comp string) bool {
+ return strings.IndexFunc(s, func(r rune) bool {
+ return !strings.ContainsRune(comp, r)
+ }) == -1
+}
+
+// From the spec, "Identifiers MUST comprise only
+// ASCII alphanumerics and hyphen [0-9A-Za-z-]. Identifiers MUST NOT be empty.
+// Numeric identifiers MUST NOT include leading zeroes.". These segments can
+// be dot separated.
+func validatePrerelease(p string) error {
+ eparts := strings.Split(p, ".")
+ for _, p := range eparts {
+ if containsOnly(p, num) {
+ if len(p) > 1 && p[0] == '0' {
+ return ErrSegmentStartsZero
+ }
+ } else if !containsOnly(p, allowed) {
+ return ErrInvalidPrerelease
+ }
+ }
+
+ return nil
+}
+
+// From the spec, "Build metadata MAY be denoted by
+// appending a plus sign and a series of dot separated identifiers immediately
+// following the patch or pre-release version. Identifiers MUST comprise only
+// ASCII alphanumerics and hyphen [0-9A-Za-z-]. Identifiers MUST NOT be empty."
+func validateMetadata(m string) error {
+ eparts := strings.Split(m, ".")
+ for _, p := range eparts {
+ if !containsOnly(p, allowed) {
+ return ErrInvalidMetadata
+ }
+ }
+ return nil
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/.gitignore b/vendor/github.com/Masterminds/sprig/v3/.gitignore
new file mode 100644
index 000000000..5e3002f88
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/.gitignore
@@ -0,0 +1,2 @@
+vendor/
+/.glide
diff --git a/vendor/github.com/Masterminds/sprig/v3/CHANGELOG.md b/vendor/github.com/Masterminds/sprig/v3/CHANGELOG.md
new file mode 100644
index 000000000..b5ef766a7
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/CHANGELOG.md
@@ -0,0 +1,401 @@
+# Changelog
+
+## Release 3.3.0 (2024-08-29)
+
+### Added
+
+- #400: added sha512sum function (thanks @itzik-elayev)
+
+### Changed
+
+- #407: Removed duplicate documentation (functions were documented in two places)
+- #290: Corrected copy/paste error in math documentation (thanks @zzhu41)
+- #369: Corrected template reference in docs (thanks @chey)
+- #375: Added link to URL documentation (thanks @carlpett)
+- #406: Updated the mergo dependency which had a breaking change (which was accounted for)
+- #376: Fixed documentation error (thanks @jheyduk)
+- #404: Updated dependency tree
+- #391: Fixed misspelling (thanks @chrishalbert)
+- #405: Updated Go versions used in testing
+
+## Release 3.2.3 (2022-11-29)
+
+### Changed
+
+- Updated docs (thanks @book987 @aJetHorn @neelayu @pellizzetti @apricote @SaigyoujiYuyuko233 @AlekSi)
+- #348: Updated huandu/xstrings which fixed a snake case bug (thanks @yxxhero)
+- #353: Updated masterminds/semver which included bug fixes
+- #354: Updated golang.org/x/crypto which included bug fixes
+
+## Release 3.2.2 (2021-02-04)
+
+This is a re-release of 3.2.1 to work around an issue with the Go module system.
+
+## Release 3.2.1 (2021-02-04)
+
+### Changed
+
+- Upgraded `Masterminds/goutils` to `v1.1.1`. See the [Security Advisory](https://github.com/Masterminds/goutils/security/advisories/GHSA-xg2h-wx96-xgxr)
+
+## Release 3.2.0 (2020-12-14)
+
+### Added
+
+- #211: Added randInt function (thanks @kochurovro)
+- #223: Added fromJson and mustFromJson functions (thanks @mholt)
+- #242: Added a bcrypt function (thanks @robbiet480)
+- #253: Added randBytes function (thanks @MikaelSmith)
+- #254: Added dig function for dicts (thanks @nyarly)
+- #257: Added regexQuoteMeta for quoting regex metadata (thanks @rheaton)
+- #261: Added filepath functions osBase, osDir, osExt, osClean, osIsAbs (thanks @zugl)
+- #268: Added and and all functions for testing conditions (thanks @phuslu)
+- #181: Added float64 arithmetic addf, add1f, subf, divf, mulf, maxf, and minf
+ (thanks @andrewmostello)
+- #265: Added chunk function to split array into smaller arrays (thanks @karelbilek)
+- #270: Extend certificate functions to handle non-RSA keys + add support for
+ ed25519 keys (thanks @misberner)
+
+### Changed
+
+- Removed testing and support for Go 1.12. ed25519 support requires Go 1.13 or newer
+- Using semver 3.1.1 and mergo 0.3.11
+
+### Fixed
+
+- #249: Fix htmlDateInZone example (thanks @spawnia)
+
+NOTE: The dependency github.com/imdario/mergo reverted the breaking change in
+0.3.9 via 0.3.10 release.
+
+## Release 3.1.0 (2020-04-16)
+
+NOTE: The dependency github.com/imdario/mergo made a behavior change in 0.3.9
+that impacts sprig functionality. Do not use sprig with a version newer than 0.3.8.
+
+### Added
+
+- #225: Added support for generating htpasswd hash (thanks @rustycl0ck)
+- #224: Added duration filter (thanks @frebib)
+- #205: Added `seq` function (thanks @thadc23)
+
+### Changed
+
+- #203: Unlambda functions with correct signature (thanks @muesli)
+- #236: Updated the license formatting for GitHub display purposes
+- #238: Updated package dependency versions. Note, mergo not updated to 0.3.9
+ as it causes a breaking change for sprig. That issue is tracked at
+ https://github.com/imdario/mergo/issues/139
+
+### Fixed
+
+- #229: Fix `seq` example in docs (thanks @kalmant)
+
+## Release 3.0.2 (2019-12-13)
+
+### Fixed
+
+- #220: Updating to semver v3.0.3 to fix issue with <= ranges
+- #218: fix typo elyptical->elliptic in ecdsa key description (thanks @laverya)
+
+## Release 3.0.1 (2019-12-08)
+
+### Fixed
+
+- #212: Updated semver fixing broken constraint checking with ^0.0
+
+## Release 3.0.0 (2019-10-02)
+
+### Added
+
+- #187: Added durationRound function (thanks @yjp20)
+- #189: Added numerous template functions that return errors rather than panic (thanks @nrvnrvn)
+- #193: Added toRawJson support (thanks @Dean-Coakley)
+- #197: Added get support to dicts (thanks @Dean-Coakley)
+
+### Changed
+
+- #186: Moving dependency management to Go modules
+- #186: Updated semver to v3. This has changes in the way ^ is handled
+- #194: Updated documentation on merging and how it copies. Added example using deepCopy
+- #196: trunc now supports negative values (thanks @Dean-Coakley)
+
+## Release 2.22.0 (2019-10-02)
+
+### Added
+
+- #173: Added getHostByName function to resolve dns names to ips (thanks @fcgravalos)
+- #195: Added deepCopy function for use with dicts
+
+### Changed
+
+- Updated merge and mergeOverwrite documentation to explain copying and how to
+ use deepCopy with it
+
+## Release 2.21.0 (2019-09-18)
+
+### Added
+
+- #122: Added encryptAES/decryptAES functions (thanks @n0madic)
+- #128: Added toDecimal support (thanks @Dean-Coakley)
+- #169: Added list concat (thanks @astorath)
+- #174: Added deepEqual function (thanks @bonifaido)
+- #170: Added url parse and join functions (thanks @astorath)
+
+### Changed
+
+- #171: Updated glide config for Google UUID to v1 and to add ranges to semver and testify
+
+### Fixed
+
+- #172: Fix semver wildcard example (thanks @piepmatz)
+- #175: Fix dateInZone doc example (thanks @s3than)
+
+## Release 2.20.0 (2019-06-18)
+
+### Added
+
+- #164: Adding function to get unix epoch for a time (@mattfarina)
+- #166: Adding tests for date_in_zone (@mattfarina)
+
+### Changed
+
+- #144: Fix function comments based on best practices from Effective Go (@CodeLingoTeam)
+- #150: Handles pointer type for time.Time in "htmlDate" (@mapreal19)
+- #161, #157, #160, #153, #158, #156, #155, #159, #152 documentation updates (@badeadan)
+
+## Release 2.19.0 (2019-03-02)
+
+IMPORTANT: This release reverts a change from 2.18.0
+
+In the previous release (2.18), we prematurely merged a partial change to the crypto functions that led to creating two sets of crypto functions (I blame @technosophos -- since that's me). This release rolls back that change, and does what was originally intended: It alters the existing crypto functions to use secure random.
+
+We debated whether this classifies as a change worthy of major revision, but given the proximity to the last release, we have decided that treating 2.18 as a faulty release is the correct course of action. We apologize for any inconvenience.
+
+### Changed
+
+- Fix substr panic 35fb796 (Alexey igrychev)
+- Remove extra period 1eb7729 (Matthew Lorimor)
+- Make random string functions use crypto by default 6ceff26 (Matthew Lorimor)
+- README edits/fixes/suggestions 08fe136 (Lauri Apple)
+
+## Release 2.18.0 (2019-02-12)
+
+### Added
+
+- Added mergeOverwrite function
+- cryptographic functions that use secure random (see fe1de12)
+
+### Changed
+
+- Improve documentation of regexMatch function, resolves #139 90b89ce (Jan Tagscherer)
+- Handle has for nil list 9c10885 (Daniel Cohen)
+- Document behaviour of mergeOverwrite fe0dbe9 (Lukas Rieder)
+- doc: adds missing documentation. 4b871e6 (Fernandez Ludovic)
+- Replace outdated goutils imports 01893d2 (Matthew Lorimor)
+- Surface crypto secure random strings from goutils fe1de12 (Matthew Lorimor)
+- Handle untyped nil values as parameters to string functions 2b2ec8f (Morten Torkildsen)
+
+### Fixed
+
+- Fix dict merge issue and provide mergeOverwrite .dst .src1 to overwrite from src -> dst 4c59c12 (Lukas Rieder)
+- Fix substr var names and comments d581f80 (Dean Coakley)
+- Fix substr documentation 2737203 (Dean Coakley)
+
+## Release 2.17.1 (2019-01-03)
+
+### Fixed
+
+The 2.17.0 release did not have a version pinned for xstrings, which caused compilation failures when xstrings < 1.2 was used. This adds the correct version string to glide.yaml.
+
+## Release 2.17.0 (2019-01-03)
+
+### Added
+
+- adds adler32sum function and test 6908fc2 (marshallford)
+- Added kebabcase function ca331a1 (Ilyes512)
+
+### Changed
+
+- Update goutils to 1.1.0 4e1125d (Matt Butcher)
+
+### Fixed
+
+- Fix 'has' documentation e3f2a85 (dean-coakley)
+- docs(dict): fix typo in pick example dc424f9 (Dustin Specker)
+- fixes spelling errors... not sure how that happened 4cf188a (marshallford)
+
+## Release 2.16.0 (2018-08-13)
+
+### Added
+
+- add splitn function fccb0b0 (Helgi Þorbjörnsson)
+- Add slice func df28ca7 (gongdo)
+- Generate serial number a3bdffd (Cody Coons)
+- Extract values of dict with values function df39312 (Lawrence Jones)
+
+### Changed
+
+- Modify panic message for list.slice ae38335 (gongdo)
+- Minor improvement in code quality - Removed an unreachable piece of code at defaults.go#L26:6 - Resolve formatting issues. 5834241 (Abhishek Kashyap)
+- Remove duplicated documentation 1d97af1 (Matthew Fisher)
+- Test on go 1.11 49df809 (Helgi Þormar Þorbjörnsson)
+
+### Fixed
+
+- Fix file permissions c5f40b5 (gongdo)
+- Fix example for buildCustomCert 7779e0d (Tin Lam)
+
+## Release 2.15.0 (2018-04-02)
+
+### Added
+
+- #68 and #69: Add json helpers to docs (thanks @arunvelsriram)
+- #66: Add ternary function (thanks @binoculars)
+- #67: Allow keys function to take multiple dicts (thanks @binoculars)
+- #89: Added sha1sum to crypto function (thanks @benkeil)
+- #81: Allow customizing Root CA that used by genSignedCert (thanks @chenzhiwei)
+- #92: Add travis testing for go 1.10
+- #93: Adding appveyor config for windows testing
+
+### Changed
+
+- #90: Updating to more recent dependencies
+- #73: replace satori/go.uuid with google/uuid (thanks @petterw)
+
+### Fixed
+
+- #76: Fixed documentation typos (thanks @Thiht)
+- Fixed rounding issue on the `ago` function. Note, this removes support for Go 1.8 and older
+
+## Release 2.14.1 (2017-12-01)
+
+### Fixed
+
+- #60: Fix typo in function name documentation (thanks @neil-ca-moore)
+- #61: Removing line with {{ due to blocking github pages generation
+- #64: Update the list functions to handle int, string, and other slices for compatibility
+
+## Release 2.14.0 (2017-10-06)
+
+This new version of Sprig adds a set of functions for generating and working with SSL certificates.
+
+- `genCA` generates an SSL Certificate Authority
+- `genSelfSignedCert` generates an SSL self-signed certificate
+- `genSignedCert` generates an SSL certificate and key based on a given CA
+
+## Release 2.13.0 (2017-09-18)
+
+This release adds new functions, including:
+
+- `regexMatch`, `regexFindAll`, `regexFind`, `regexReplaceAll`, `regexReplaceAllLiteral`, and `regexSplit` to work with regular expressions
+- `floor`, `ceil`, and `round` math functions
+- `toDate` converts a string to a date
+- `nindent` is just like `indent` but also prepends a new line
+- `ago` returns the time from `time.Now`
+
+### Added
+
+- #40: Added basic regex functionality (thanks @alanquillin)
+- #41: Added ceil floor and round functions (thanks @alanquillin)
+- #48: Added toDate function (thanks @andreynering)
+- #50: Added nindent function (thanks @binoculars)
+- #46: Added ago function (thanks @slayer)
+
+### Changed
+
+- #51: Updated godocs to include new string functions (thanks @curtisallen)
+- #49: Added ability to merge multiple dicts (thanks @binoculars)
+
+## Release 2.12.0 (2017-05-17)
+
+- `snakecase`, `camelcase`, and `shuffle` are three new string functions
+- `fail` allows you to bail out of a template render when conditions are not met
+
+## Release 2.11.0 (2017-05-02)
+
+- Added `toJson` and `toPrettyJson`
+- Added `merge`
+- Refactored documentation
+
+## Release 2.10.0 (2017-03-15)
+
+- Added `semver` and `semverCompare` for Semantic Versions
+- `list` replaces `tuple`
+- Fixed issue with `join`
+- Added `first`, `last`, `initial`, `rest`, `prepend`, `append`, `toString`, `toStrings`, `sortAlpha`, `reverse`, `coalesce`, `pluck`, `pick`, `compact`, `keys`, `omit`, `uniq`, `has`, `without`
+
+## Release 2.9.0 (2017-02-23)
+
+- Added `splitList` to split a list
+- Added crypto functions of `genPrivateKey` and `derivePassword`
+
+## Release 2.8.0 (2016-12-21)
+
+- Added access to several path functions (`base`, `dir`, `clean`, `ext`, and `abs`)
+- Added functions for _mutating_ dictionaries (`set`, `unset`, `hasKey`)
+
+## Release 2.7.0 (2016-12-01)
+
+- Added `sha256sum` to generate a hash of an input
+- Added functions to convert a numeric or string to `int`, `int64`, `float64`
+
+## Release 2.6.0 (2016-10-03)
+
+- Added a `uuidv4` template function for generating UUIDs inside of a template.
+
+## Release 2.5.0 (2016-08-19)
+
+- New `trimSuffix`, `trimPrefix`, `hasSuffix`, and `hasPrefix` functions
+- New aliases have been added for a few functions that didn't follow the naming conventions (`trimAll` and `abbrevBoth`)
+- `trimall` and `abbrevboth` (notice the case) are deprecated and will be removed in 3.0.0
+
+## Release 2.4.0 (2016-08-16)
+
+- Adds two functions: `until` and `untilStep`
+
+## Release 2.3.0 (2016-06-21)
+
+- cat: Concatenate strings with whitespace separators.
+- replace: Replace parts of a string: `replace " " "-" "Me First"` renders "Me-First"
+- plural: Format plurals: `len "foo" | plural "one foo" "many foos"` renders "many foos"
+- indent: Indent blocks of text in a way that is sensitive to "\n" characters.
+
+## Release 2.2.0 (2016-04-21)
+
+- Added a `genPrivateKey` function (Thanks @bacongobbler)
+
+## Release 2.1.0 (2016-03-30)
+
+- `default` now prints the default value when it does not receive a value down the pipeline. It is much safer now to do `{{.Foo | default "bar"}}`.
+- Added accessors for "hermetic" functions. These return only functions that, when given the same input, produce the same output.
+
+## Release 2.0.0 (2016-03-29)
+
+Because we switched from `int` to `int64` as the return value for all integer math functions, the library's major version number has been incremented.
+
+- `min` complements `max` (formerly `biggest`)
+- `empty` indicates that a value is the empty value for its type
+- `tuple` creates a tuple inside of a template: `{{$t := tuple "a" "b" "c"}}`
+- `dict` creates a dictionary inside of a template `{{$d := dict "key1" "val1" "key2" "val2"}}`
+- Date formatters have been added for HTML dates (as used in `date` input fields)
+- Integer math functions can convert from a number of types, including `string` (via `strconv.ParseInt`).
+
+## Release 1.2.0 (2016-02-01)
+
+- Added quote and squote
+- Added b32enc and b32dec
+- add now takes varargs
+- biggest now takes varargs
+
+## Release 1.1.0 (2015-12-29)
+
+- Added #4: Added contains function. strings.Contains, but with the arguments
+ switched to simplify common pipelines. (thanks krancour)
+- Added Travis-CI testing support
+
+## Release 1.0.0 (2015-12-23)
+
+- Initial release
diff --git a/vendor/github.com/Masterminds/sprig/v3/LICENSE.txt b/vendor/github.com/Masterminds/sprig/v3/LICENSE.txt
new file mode 100644
index 000000000..f311b1eaa
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/LICENSE.txt
@@ -0,0 +1,19 @@
+Copyright (C) 2013-2020 Masterminds
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/github.com/Masterminds/sprig/v3/Makefile b/vendor/github.com/Masterminds/sprig/v3/Makefile
new file mode 100644
index 000000000..78d409cde
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/Makefile
@@ -0,0 +1,9 @@
+.PHONY: test
+test:
+ @echo "==> Running tests"
+ GO111MODULE=on go test -v
+
+.PHONY: test-cover
+test-cover:
+ @echo "==> Running Tests with coverage"
+ GO111MODULE=on go test -cover .
diff --git a/vendor/github.com/Masterminds/sprig/v3/README.md b/vendor/github.com/Masterminds/sprig/v3/README.md
new file mode 100644
index 000000000..3e22c60e1
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/README.md
@@ -0,0 +1,100 @@
+# Sprig: Template functions for Go templates
+
+[![GoDoc](https://img.shields.io/static/v1?label=godoc&message=reference&color=blue)](https://pkg.go.dev/github.com/Masterminds/sprig/v3)
+[![Go Report Card](https://goreportcard.com/badge/github.com/Masterminds/sprig)](https://goreportcard.com/report/github.com/Masterminds/sprig)
+[![Stability: Sustained](https://masterminds.github.io/stability/sustained.svg)](https://masterminds.github.io/stability/sustained.html)
+[![](https://github.com/Masterminds/sprig/workflows/Tests/badge.svg)](https://github.com/Masterminds/sprig/actions)
+
+The Go language comes with a [built-in template
+language](http://golang.org/pkg/text/template/), but relatively few
+template functions. Sprig is a library that provides more than 100 commonly
+used template functions.
+
+It is inspired by the template functions found in
+[Twig](http://twig.sensiolabs.org/documentation) and in various
+JavaScript libraries, such as [underscore.js](http://underscorejs.org/).
+
+## IMPORTANT NOTES
+
+Sprig leverages [mergo](https://github.com/imdario/mergo) to handle merges. In
+its v0.3.9 release, there was a behavior change that impacts merging template
+functions in sprig. It is currently recommended to use v0.3.10 or later of that package.
+Using v0.3.9 will cause sprig tests to fail.
+
+## Package Versions
+
+There are two active major versions of the `sprig` package.
+
+* v3 is the current stable release series on the `master` branch. Its Go API
+  remains compatible with v2, the previous stable series. Behavior changes
+  behind some functions are the reason for the new major version.
+* v2 is the previous stable release series. It has been more than three years since
+ the initial release of v2. You can read the documentation and see the code
+ on the [release-2](https://github.com/Masterminds/sprig/tree/release-2) branch.
+ Bug fixes to this major version will continue for some time.
+
+## Usage
+
+**Template developers**: Please use Sprig's [function documentation](http://masterminds.github.io/sprig/) for
+detailed instructions and code snippets for the >100 template functions available.
+
+**Go developers**: If you'd like to include Sprig as a library in your program,
+our API documentation is available [at GoDoc.org](http://godoc.org/github.com/Masterminds/sprig).
+
+For standard usage, read on.
+
+### Load the Sprig library
+
+To load the Sprig `FuncMap`:
+
+```go
+import (
+ "html/template"
+
+ "github.com/Masterminds/sprig/v3"
+)
+
+// This example illustrates that the FuncMap *must* be set before the
+// templates themselves are loaded.
+tpl := template.Must(
+ template.New("base").Funcs(sprig.FuncMap()).ParseGlob("*.html"),
+)
+```
+
+### Calling the functions inside of templates
+
+By convention, all functions are lowercase. This seems to follow the Go
+idiom for template functions (as opposed to template methods, which are
+TitleCase). For example, this:
+
+```
+{{ "hello!" | upper | repeat 5 }}
+```
+
+produces this:
+
+```
+HELLO!HELLO!HELLO!HELLO!HELLO!
+```
+
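+As a minimal end-to-end sketch (the template name and data here are only
+illustrative), the two pieces fit together like this:
+
+```go
+package main
+
+import (
+ "os"
+ "text/template"
+
+ "github.com/Masterminds/sprig/v3"
+)
+
+func main() {
+ // Attach the FuncMap before parsing, then pipe data through the functions.
+ tpl := template.Must(
+  template.New("demo").Funcs(sprig.TxtFuncMap()).Parse(`{{ .Name | upper | repeat 2 }}`),
+ )
+ _ = tpl.Execute(os.Stdout, map[string]string{"Name": "go"}) // prints "GOGO"
+}
+```
+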
+## Principles Driving Our Function Selection
+
+We followed these principles to decide which functions to add and how to implement them:
+
+- Use template functions to build layout. The following
+ types of operations are within the domain of template functions:
+ - Formatting
+ - Layout
+ - Simple type conversions
+ - Utilities that assist in handling common formatting and layout needs (e.g. arithmetic)
+- Template functions should not return errors unless there is no way to print
+ a sensible value. For example, converting a string to an integer should not
+ produce an error if conversion fails. Instead, it should display a default
+ value.
+- Simple math is necessary for grid layouts, pagers, and so on. Complex math
+ (anything other than arithmetic) should be done outside of templates.
+- Template functions only deal with the data passed into them. They never retrieve
+ data from a source.
+- Finally, do not override core Go template functions.
diff --git a/vendor/github.com/Masterminds/sprig/v3/crypto.go b/vendor/github.com/Masterminds/sprig/v3/crypto.go
new file mode 100644
index 000000000..75fe027e4
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/crypto.go
@@ -0,0 +1,659 @@
+package sprig
+
+import (
+ "bytes"
+ "crypto"
+ "crypto/aes"
+ "crypto/cipher"
+ "crypto/dsa"
+ "crypto/ecdsa"
+ "crypto/ed25519"
+ "crypto/elliptic"
+ "crypto/hmac"
+ "crypto/rand"
+ "crypto/rsa"
+ "crypto/sha1"
+ "crypto/sha256"
+ "crypto/sha512"
+ "crypto/x509"
+ "crypto/x509/pkix"
+ "encoding/asn1"
+ "encoding/base64"
+ "encoding/binary"
+ "encoding/hex"
+ "encoding/pem"
+ "errors"
+ "fmt"
+ "hash/adler32"
+ "io"
+ "math/big"
+ "net"
+ "time"
+
+ "strings"
+
+ "github.com/google/uuid"
+ bcrypt_lib "golang.org/x/crypto/bcrypt"
+ "golang.org/x/crypto/scrypt"
+)
+
+func sha512sum(input string) string {
+ hash := sha512.Sum512([]byte(input))
+ return hex.EncodeToString(hash[:])
+}
+
+func sha256sum(input string) string {
+ hash := sha256.Sum256([]byte(input))
+ return hex.EncodeToString(hash[:])
+}
+
+func sha1sum(input string) string {
+ hash := sha1.Sum([]byte(input))
+ return hex.EncodeToString(hash[:])
+}
+
+func adler32sum(input string) string {
+ hash := adler32.Checksum([]byte(input))
+ return fmt.Sprintf("%d", hash)
+}
+
+func bcrypt(input string) string {
+ hash, err := bcrypt_lib.GenerateFromPassword([]byte(input), bcrypt_lib.DefaultCost)
+ if err != nil {
+ return fmt.Sprintf("failed to encrypt string with bcrypt: %s", err)
+ }
+
+ return string(hash)
+}
+
+func htpasswd(username string, password string) string {
+ if strings.Contains(username, ":") {
+ return fmt.Sprintf("invalid username: %s", username)
+ }
+ return fmt.Sprintf("%s:%s", username, bcrypt(password))
+}
+
+func randBytes(count int) (string, error) {
+ buf := make([]byte, count)
+ if _, err := rand.Read(buf); err != nil {
+ return "", err
+ }
+ return base64.StdEncoding.EncodeToString(buf), nil
+}
+
+// uuidv4 provides a safe and secure UUID v4 implementation
+func uuidv4() string {
+ return uuid.New().String()
+}
+
+var masterPasswordSeed = "com.lyndir.masterpassword"
+
+var passwordTypeTemplates = map[string][][]byte{
+ "maximum": {[]byte("anoxxxxxxxxxxxxxxxxx"), []byte("axxxxxxxxxxxxxxxxxno")},
+ "long": {[]byte("CvcvnoCvcvCvcv"), []byte("CvcvCvcvnoCvcv"), []byte("CvcvCvcvCvcvno"), []byte("CvccnoCvcvCvcv"), []byte("CvccCvcvnoCvcv"),
+ []byte("CvccCvcvCvcvno"), []byte("CvcvnoCvccCvcv"), []byte("CvcvCvccnoCvcv"), []byte("CvcvCvccCvcvno"), []byte("CvcvnoCvcvCvcc"),
+ []byte("CvcvCvcvnoCvcc"), []byte("CvcvCvcvCvccno"), []byte("CvccnoCvccCvcv"), []byte("CvccCvccnoCvcv"), []byte("CvccCvccCvcvno"),
+ []byte("CvcvnoCvccCvcc"), []byte("CvcvCvccnoCvcc"), []byte("CvcvCvccCvccno"), []byte("CvccnoCvcvCvcc"), []byte("CvccCvcvnoCvcc"),
+ []byte("CvccCvcvCvccno")},
+ "medium": {[]byte("CvcnoCvc"), []byte("CvcCvcno")},
+ "short": {[]byte("Cvcn")},
+ "basic": {[]byte("aaanaaan"), []byte("aannaaan"), []byte("aaannaaa")},
+ "pin": {[]byte("nnnn")},
+}
+
+var templateCharacters = map[byte]string{
+ 'V': "AEIOU",
+ 'C': "BCDFGHJKLMNPQRSTVWXYZ",
+ 'v': "aeiou",
+ 'c': "bcdfghjklmnpqrstvwxyz",
+ 'A': "AEIOUBCDFGHJKLMNPQRSTVWXYZ",
+ 'a': "AEIOUaeiouBCDFGHJKLMNPQRSTVWXYZbcdfghjklmnpqrstvwxyz",
+ 'n': "0123456789",
+ 'o': "@&%?,=[]_:-+*$#!'^~;()/.",
+ 'x': "AEIOUaeiouBCDFGHJKLMNPQRSTVWXYZbcdfghjklmnpqrstvwxyz0123456789!@#$%^&*()",
+}
+
+func derivePassword(counter uint32, passwordType, password, user, site string) string {
+ var templates = passwordTypeTemplates[passwordType]
+ if templates == nil {
+ return fmt.Sprintf("cannot find password template %s", passwordType)
+ }
+
+ var buffer bytes.Buffer
+ buffer.WriteString(masterPasswordSeed)
+ binary.Write(&buffer, binary.BigEndian, uint32(len(user)))
+ buffer.WriteString(user)
+
+ salt := buffer.Bytes()
+ key, err := scrypt.Key([]byte(password), salt, 32768, 8, 2, 64)
+ if err != nil {
+ return fmt.Sprintf("failed to derive password: %s", err)
+ }
+
+ buffer.Truncate(len(masterPasswordSeed))
+ binary.Write(&buffer, binary.BigEndian, uint32(len(site)))
+ buffer.WriteString(site)
+ binary.Write(&buffer, binary.BigEndian, counter)
+
+ var hmacv = hmac.New(sha256.New, key)
+ hmacv.Write(buffer.Bytes())
+ var seed = hmacv.Sum(nil)
+ var temp = templates[int(seed[0])%len(templates)]
+
+ buffer.Truncate(0)
+ for i, element := range temp {
+ passChars := templateCharacters[element]
+ passChar := passChars[int(seed[i+1])%len(passChars)]
+ buffer.WriteByte(passChar)
+ }
+
+ return buffer.String()
+}
+
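+// For illustration (not part of the upstream source): the derivation is
+// deterministic, so repeated calls with the same arguments yield the same
+// password, e.g.
+//
+//  derivePassword(1, "long", "masterpassword", "user", "example.com")
+//
+// always returns the same 14-character password shaped by one of the
+// "long" templates above.
+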
+func generatePrivateKey(typ string) string {
+ var priv interface{}
+ var err error
+ switch typ {
+ case "", "rsa":
+ // good enough for government work
+ priv, err = rsa.GenerateKey(rand.Reader, 4096)
+ case "dsa":
+ key := new(dsa.PrivateKey)
+ // again, good enough for government work
+ if err = dsa.GenerateParameters(&key.Parameters, rand.Reader, dsa.L2048N256); err != nil {
+ return fmt.Sprintf("failed to generate dsa params: %s", err)
+ }
+ err = dsa.GenerateKey(key, rand.Reader)
+ priv = key
+ case "ecdsa":
+ // again, good enough for government work
+ priv, err = ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
+ case "ed25519":
+ _, priv, err = ed25519.GenerateKey(rand.Reader)
+ default:
+ return "Unknown type " + typ
+ }
+ if err != nil {
+ return fmt.Sprintf("failed to generate private key: %s", err)
+ }
+
+ return string(pem.EncodeToMemory(pemBlockForKey(priv)))
+}
+
+// DSAKeyFormat stores the format for DSA keys.
+// Used by pemBlockForKey
+type DSAKeyFormat struct {
+ Version int
+ P, Q, G, Y, X *big.Int
+}
+
+func pemBlockForKey(priv interface{}) *pem.Block {
+ switch k := priv.(type) {
+ case *rsa.PrivateKey:
+ return &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(k)}
+ case *dsa.PrivateKey:
+ val := DSAKeyFormat{
+ P: k.P, Q: k.Q, G: k.G,
+ Y: k.Y, X: k.X,
+ }
+ bytes, _ := asn1.Marshal(val)
+ return &pem.Block{Type: "DSA PRIVATE KEY", Bytes: bytes}
+ case *ecdsa.PrivateKey:
+ b, _ := x509.MarshalECPrivateKey(k)
+ return &pem.Block{Type: "EC PRIVATE KEY", Bytes: b}
+ default:
+ // attempt PKCS#8 format for all other keys
+ b, err := x509.MarshalPKCS8PrivateKey(k)
+ if err != nil {
+ return nil
+ }
+ return &pem.Block{Type: "PRIVATE KEY", Bytes: b}
+ }
+}
+
+func parsePrivateKeyPEM(pemBlock string) (crypto.PrivateKey, error) {
+ block, _ := pem.Decode([]byte(pemBlock))
+ if block == nil {
+ return nil, errors.New("no PEM data in input")
+ }
+
+ if block.Type == "PRIVATE KEY" {
+ priv, err := x509.ParsePKCS8PrivateKey(block.Bytes)
+ if err != nil {
+ return nil, fmt.Errorf("decoding PEM as PKCS#8: %s", err)
+ }
+ return priv, nil
+ } else if !strings.HasSuffix(block.Type, " PRIVATE KEY") {
+ return nil, fmt.Errorf("no private key data in PEM block of type %s", block.Type)
+ }
+
+ switch block.Type[:len(block.Type)-12] { // strip " PRIVATE KEY"
+ case "RSA":
+ priv, err := x509.ParsePKCS1PrivateKey(block.Bytes)
+ if err != nil {
+ return nil, fmt.Errorf("parsing RSA private key from PEM: %s", err)
+ }
+ return priv, nil
+ case "EC":
+ priv, err := x509.ParseECPrivateKey(block.Bytes)
+ if err != nil {
+ return nil, fmt.Errorf("parsing EC private key from PEM: %s", err)
+ }
+ return priv, nil
+ case "DSA":
+ var k DSAKeyFormat
+ _, err := asn1.Unmarshal(block.Bytes, &k)
+ if err != nil {
+ return nil, fmt.Errorf("parsing DSA private key from PEM: %s", err)
+ }
+ priv := &dsa.PrivateKey{
+ PublicKey: dsa.PublicKey{
+ Parameters: dsa.Parameters{
+ P: k.P, Q: k.Q, G: k.G,
+ },
+ Y: k.Y,
+ },
+ X: k.X,
+ }
+ return priv, nil
+ default:
+ return nil, fmt.Errorf("invalid private key type %s", block.Type)
+ }
+}
+
+func getPublicKey(priv crypto.PrivateKey) (crypto.PublicKey, error) {
+ switch k := priv.(type) {
+ case interface{ Public() crypto.PublicKey }:
+ return k.Public(), nil
+ case *dsa.PrivateKey:
+ return &k.PublicKey, nil
+ default:
+ return nil, fmt.Errorf("unable to get public key for type %T", priv)
+ }
+}
+
+type certificate struct {
+ Cert string
+ Key string
+}
+
+func buildCustomCertificate(b64cert string, b64key string) (certificate, error) {
+ crt := certificate{}
+
+ cert, err := base64.StdEncoding.DecodeString(b64cert)
+ if err != nil {
+ return crt, errors.New("unable to decode base64 certificate")
+ }
+
+ key, err := base64.StdEncoding.DecodeString(b64key)
+ if err != nil {
+ return crt, errors.New("unable to decode base64 private key")
+ }
+
+ decodedCert, _ := pem.Decode(cert)
+ if decodedCert == nil {
+ return crt, errors.New("unable to decode certificate")
+ }
+ _, err = x509.ParseCertificate(decodedCert.Bytes)
+ if err != nil {
+ return crt, fmt.Errorf(
+ "error parsing certificate: decodedCert.Bytes: %s",
+ err,
+ )
+ }
+
+ _, err = parsePrivateKeyPEM(string(key))
+ if err != nil {
+ return crt, fmt.Errorf(
+ "error parsing private key: %s",
+ err,
+ )
+ }
+
+ crt.Cert = string(cert)
+ crt.Key = string(key)
+
+ return crt, nil
+}
+
+func generateCertificateAuthority(
+ cn string,
+ daysValid int,
+) (certificate, error) {
+ priv, err := rsa.GenerateKey(rand.Reader, 2048)
+ if err != nil {
+ return certificate{}, fmt.Errorf("error generating rsa key: %s", err)
+ }
+
+ return generateCertificateAuthorityWithKeyInternal(cn, daysValid, priv)
+}
+
+func generateCertificateAuthorityWithPEMKey(
+ cn string,
+ daysValid int,
+ privPEM string,
+) (certificate, error) {
+ priv, err := parsePrivateKeyPEM(privPEM)
+ if err != nil {
+ return certificate{}, fmt.Errorf("parsing private key: %s", err)
+ }
+ return generateCertificateAuthorityWithKeyInternal(cn, daysValid, priv)
+}
+
+func generateCertificateAuthorityWithKeyInternal(
+ cn string,
+ daysValid int,
+ priv crypto.PrivateKey,
+) (certificate, error) {
+ ca := certificate{}
+
+ template, err := getBaseCertTemplate(cn, nil, nil, daysValid)
+ if err != nil {
+ return ca, err
+ }
+ // Override KeyUsage and IsCA
+ template.KeyUsage = x509.KeyUsageKeyEncipherment |
+ x509.KeyUsageDigitalSignature |
+ x509.KeyUsageCertSign
+ template.IsCA = true
+
+ ca.Cert, ca.Key, err = getCertAndKey(template, priv, template, priv)
+
+ return ca, err
+}
+
+func generateSelfSignedCertificate(
+ cn string,
+ ips []interface{},
+ alternateDNS []interface{},
+ daysValid int,
+) (certificate, error) {
+ priv, err := rsa.GenerateKey(rand.Reader, 2048)
+ if err != nil {
+ return certificate{}, fmt.Errorf("error generating rsa key: %s", err)
+ }
+ return generateSelfSignedCertificateWithKeyInternal(cn, ips, alternateDNS, daysValid, priv)
+}
+
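+// For illustration (not part of the upstream source), the template-facing
+// wrapper is typically invoked as
+//
+//  {{ $cert := genSelfSignedCert "example.com" (list "10.0.0.1") (list "www.example.com") 365 }}
+//
+// which yields an object exposing PEM strings via .Cert and .Key.
+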
+func generateSelfSignedCertificateWithPEMKey(
+ cn string,
+ ips []interface{},
+ alternateDNS []interface{},
+ daysValid int,
+ privPEM string,
+) (certificate, error) {
+ priv, err := parsePrivateKeyPEM(privPEM)
+ if err != nil {
+ return certificate{}, fmt.Errorf("parsing private key: %s", err)
+ }
+ return generateSelfSignedCertificateWithKeyInternal(cn, ips, alternateDNS, daysValid, priv)
+}
+
+func generateSelfSignedCertificateWithKeyInternal(
+ cn string,
+ ips []interface{},
+ alternateDNS []interface{},
+ daysValid int,
+ priv crypto.PrivateKey,
+) (certificate, error) {
+ cert := certificate{}
+
+ template, err := getBaseCertTemplate(cn, ips, alternateDNS, daysValid)
+ if err != nil {
+ return cert, err
+ }
+
+ cert.Cert, cert.Key, err = getCertAndKey(template, priv, template, priv)
+
+ return cert, err
+}
+
+func generateSignedCertificate(
+ cn string,
+ ips []interface{},
+ alternateDNS []interface{},
+ daysValid int,
+ ca certificate,
+) (certificate, error) {
+ priv, err := rsa.GenerateKey(rand.Reader, 2048)
+ if err != nil {
+ return certificate{}, fmt.Errorf("error generating rsa key: %s", err)
+ }
+ return generateSignedCertificateWithKeyInternal(cn, ips, alternateDNS, daysValid, ca, priv)
+}
+
+func generateSignedCertificateWithPEMKey(
+ cn string,
+ ips []interface{},
+ alternateDNS []interface{},
+ daysValid int,
+ ca certificate,
+ privPEM string,
+) (certificate, error) {
+ priv, err := parsePrivateKeyPEM(privPEM)
+ if err != nil {
+ return certificate{}, fmt.Errorf("parsing private key: %s", err)
+ }
+ return generateSignedCertificateWithKeyInternal(cn, ips, alternateDNS, daysValid, ca, priv)
+}
+
+func generateSignedCertificateWithKeyInternal(
+ cn string,
+ ips []interface{},
+ alternateDNS []interface{},
+ daysValid int,
+ ca certificate,
+ priv crypto.PrivateKey,
+) (certificate, error) {
+ cert := certificate{}
+
+ decodedSignerCert, _ := pem.Decode([]byte(ca.Cert))
+ if decodedSignerCert == nil {
+ return cert, errors.New("unable to decode certificate")
+ }
+ signerCert, err := x509.ParseCertificate(decodedSignerCert.Bytes)
+ if err != nil {
+ return cert, fmt.Errorf(
+ "error parsing certificate: decodedSignerCert.Bytes: %s",
+ err,
+ )
+ }
+ signerKey, err := parsePrivateKeyPEM(ca.Key)
+ if err != nil {
+ return cert, fmt.Errorf(
+ "error parsing private key: %s",
+ err,
+ )
+ }
+
+ template, err := getBaseCertTemplate(cn, ips, alternateDNS, daysValid)
+ if err != nil {
+ return cert, err
+ }
+
+ cert.Cert, cert.Key, err = getCertAndKey(
+ template,
+ priv,
+ signerCert,
+ signerKey,
+ )
+
+ return cert, err
+}
+
+func getCertAndKey(
+ template *x509.Certificate,
+ signeeKey crypto.PrivateKey,
+ parent *x509.Certificate,
+ signingKey crypto.PrivateKey,
+) (string, string, error) {
+ signeePubKey, err := getPublicKey(signeeKey)
+ if err != nil {
+ return "", "", fmt.Errorf("error retrieving public key from signee key: %s", err)
+ }
+ derBytes, err := x509.CreateCertificate(
+ rand.Reader,
+ template,
+ parent,
+ signeePubKey,
+ signingKey,
+ )
+ if err != nil {
+ return "", "", fmt.Errorf("error creating certificate: %s", err)
+ }
+
+ certBuffer := bytes.Buffer{}
+ if err := pem.Encode(
+ &certBuffer,
+ &pem.Block{Type: "CERTIFICATE", Bytes: derBytes},
+ ); err != nil {
+ return "", "", fmt.Errorf("error pem-encoding certificate: %s", err)
+ }
+
+ keyBuffer := bytes.Buffer{}
+ if err := pem.Encode(
+ &keyBuffer,
+ pemBlockForKey(signeeKey),
+ ); err != nil {
+ return "", "", fmt.Errorf("error pem-encoding key: %s", err)
+ }
+
+ return certBuffer.String(), keyBuffer.String(), nil
+}
+
+func getBaseCertTemplate(
+ cn string,
+ ips []interface{},
+ alternateDNS []interface{},
+ daysValid int,
+) (*x509.Certificate, error) {
+ ipAddresses, err := getNetIPs(ips)
+ if err != nil {
+ return nil, err
+ }
+ dnsNames, err := getAlternateDNSStrs(alternateDNS)
+ if err != nil {
+ return nil, err
+ }
+ serialNumberUpperBound := new(big.Int).Lsh(big.NewInt(1), 128)
+ serialNumber, err := rand.Int(rand.Reader, serialNumberUpperBound)
+ if err != nil {
+ return nil, err
+ }
+ return &x509.Certificate{
+ SerialNumber: serialNumber,
+ Subject: pkix.Name{
+ CommonName: cn,
+ },
+ IPAddresses: ipAddresses,
+ DNSNames: dnsNames,
+ NotBefore: time.Now(),
+ NotAfter: time.Now().Add(time.Hour * 24 * time.Duration(daysValid)),
+ KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
+ ExtKeyUsage: []x509.ExtKeyUsage{
+ x509.ExtKeyUsageServerAuth,
+ x509.ExtKeyUsageClientAuth,
+ },
+ BasicConstraintsValid: true,
+ }, nil
+}
+
+func getNetIPs(ips []interface{}) ([]net.IP, error) {
+ if ips == nil {
+ return []net.IP{}, nil
+ }
+ var ipStr string
+ var ok bool
+ var netIP net.IP
+ netIPs := make([]net.IP, len(ips))
+ for i, ip := range ips {
+ ipStr, ok = ip.(string)
+ if !ok {
+ return nil, fmt.Errorf("error parsing ip: %v is not a string", ip)
+ }
+ netIP = net.ParseIP(ipStr)
+ if netIP == nil {
+ return nil, fmt.Errorf("error parsing ip: %s", ipStr)
+ }
+ netIPs[i] = netIP
+ }
+ return netIPs, nil
+}
+
+func getAlternateDNSStrs(alternateDNS []interface{}) ([]string, error) {
+ if alternateDNS == nil {
+ return []string{}, nil
+ }
+ var dnsStr string
+ var ok bool
+ alternateDNSStrs := make([]string, len(alternateDNS))
+ for i, dns := range alternateDNS {
+ dnsStr, ok = dns.(string)
+ if !ok {
+ return nil, fmt.Errorf(
+ "error processing alternate dns name: %v is not a string",
+ dns,
+ )
+ }
+ alternateDNSStrs[i] = dnsStr
+ }
+ return alternateDNSStrs, nil
+}
+
+func encryptAES(password string, plaintext string) (string, error) {
+ if plaintext == "" {
+ return "", nil
+ }
+
+ key := make([]byte, 32)
+ copy(key, []byte(password))
+ block, err := aes.NewCipher(key)
+ if err != nil {
+ return "", err
+ }
+
+ content := []byte(plaintext)
+ blockSize := block.BlockSize()
+ padding := blockSize - len(content)%blockSize
+ padtext := bytes.Repeat([]byte{byte(padding)}, padding)
+ content = append(content, padtext...)
+
+ ciphertext := make([]byte, aes.BlockSize+len(content))
+
+ iv := ciphertext[:aes.BlockSize]
+ if _, err := io.ReadFull(rand.Reader, iv); err != nil {
+ return "", err
+ }
+
+ mode := cipher.NewCBCEncrypter(block, iv)
+ mode.CryptBlocks(ciphertext[aes.BlockSize:], content)
+
+ return base64.StdEncoding.EncodeToString(ciphertext), nil
+}
+
+func decryptAES(password string, crypt64 string) (string, error) {
+ if crypt64 == "" {
+ return "", nil
+ }
+
+ key := make([]byte, 32)
+ copy(key, []byte(password))
+
+ crypt, err := base64.StdEncoding.DecodeString(crypt64)
+ if err != nil {
+ return "", err
+ }
+
+ block, err := aes.NewCipher(key)
+ if err != nil {
+ return "", err
+ }
+
+ iv := crypt[:aes.BlockSize]
+ crypt = crypt[aes.BlockSize:]
+ decrypted := make([]byte, len(crypt))
+ mode := cipher.NewCBCDecrypter(block, iv)
+ mode.CryptBlocks(decrypted, crypt)
+
+ return string(decrypted[:len(decrypted)-int(decrypted[len(decrypted)-1])]), nil
+}
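+
+// For illustration (not part of the upstream source): the two functions
+// round-trip, since decryptAES strips the IV prefix and the padding that
+// encryptAES added, e.g.
+//
+//  enc, _ := encryptAES("secret", "hello")
+//  dec, _ := decryptAES("secret", enc) // dec == "hello"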
diff --git a/vendor/github.com/Masterminds/sprig/v3/date.go b/vendor/github.com/Masterminds/sprig/v3/date.go
new file mode 100644
index 000000000..ed022ddac
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/date.go
@@ -0,0 +1,152 @@
+package sprig
+
+import (
+ "strconv"
+ "time"
+)
+
+// date formats a date according to the given format string.
+//
+// The date can be a `time.Time` or an `int`, `int32`, or `int64`.
+// In the latter cases, it is treated as seconds since the UNIX
+// epoch.
+func date(fmt string, date interface{}) string {
+ return dateInZone(fmt, date, "Local")
+}
+
+func htmlDate(date interface{}) string {
+ return dateInZone("2006-01-02", date, "Local")
+}
+
+func htmlDateInZone(date interface{}, zone string) string {
+ return dateInZone("2006-01-02", date, zone)
+}
+
+func dateInZone(fmt string, date interface{}, zone string) string {
+ var t time.Time
+ switch date := date.(type) {
+ default:
+ t = time.Now()
+ case time.Time:
+ t = date
+ case *time.Time:
+ t = *date
+ case int64:
+ t = time.Unix(date, 0)
+ case int:
+ t = time.Unix(int64(date), 0)
+ case int32:
+ t = time.Unix(int64(date), 0)
+ }
+
+ loc, err := time.LoadLocation(zone)
+ if err != nil {
+ loc, _ = time.LoadLocation("UTC")
+ }
+
+ return t.In(loc).Format(fmt)
+}
+
+func dateModify(fmt string, date time.Time) time.Time {
+ d, err := time.ParseDuration(fmt)
+ if err != nil {
+ return date
+ }
+ return date.Add(d)
+}
+
+func mustDateModify(fmt string, date time.Time) (time.Time, error) {
+ d, err := time.ParseDuration(fmt)
+ if err != nil {
+ return time.Time{}, err
+ }
+ return date.Add(d), nil
+}
+
+func dateAgo(date interface{}) string {
+ var t time.Time
+
+ switch date := date.(type) {
+ default:
+ t = time.Now()
+ case time.Time:
+ t = date
+ case int64:
+ t = time.Unix(date, 0)
+ case int:
+ t = time.Unix(int64(date), 0)
+ }
+ // Drop resolution to seconds
+ duration := time.Since(t).Round(time.Second)
+ return duration.String()
+}
+
+func duration(sec interface{}) string {
+ var n int64
+ switch value := sec.(type) {
+ default:
+ n = 0
+ case string:
+ n, _ = strconv.ParseInt(value, 10, 64)
+ case int64:
+ n = value
+ }
+ return (time.Duration(n) * time.Second).String()
+}
+
+func durationRound(duration interface{}) string {
+ var d time.Duration
+ switch duration := duration.(type) {
+ default:
+ d = 0
+ case string:
+ d, _ = time.ParseDuration(duration)
+ case int64:
+ d = time.Duration(duration)
+ case time.Time:
+ d = time.Since(duration)
+ }
+
+ u := uint64(d)
+ neg := d < 0
+ if neg {
+ u = -u
+ }
+
+ var (
+ year = uint64(time.Hour) * 24 * 365
+ month = uint64(time.Hour) * 24 * 30
+ day = uint64(time.Hour) * 24
+ hour = uint64(time.Hour)
+ minute = uint64(time.Minute)
+ second = uint64(time.Second)
+ )
+ switch {
+ case u > year:
+ return strconv.FormatUint(u/year, 10) + "y"
+ case u > month:
+ return strconv.FormatUint(u/month, 10) + "mo"
+ case u > day:
+ return strconv.FormatUint(u/day, 10) + "d"
+ case u > hour:
+ return strconv.FormatUint(u/hour, 10) + "h"
+ case u > minute:
+ return strconv.FormatUint(u/minute, 10) + "m"
+ case u > second:
+ return strconv.FormatUint(u/second, 10) + "s"
+ }
+ return "0s"
+}
+
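+// For illustration (not part of the upstream source): durationRound keeps
+// only the most significant unit, e.g.
+//
+//  durationRound("2h10m5s") // "2h"
+//  durationRound("90s")     // "1m"
+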
+func toDate(fmt, str string) time.Time {
+ t, _ := time.ParseInLocation(fmt, str, time.Local)
+ return t
+}
+
+func mustToDate(fmt, str string) (time.Time, error) {
+ return time.ParseInLocation(fmt, str, time.Local)
+}
+
+func unixEpoch(date time.Time) string {
+ return strconv.FormatInt(date.Unix(), 10)
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/defaults.go b/vendor/github.com/Masterminds/sprig/v3/defaults.go
new file mode 100644
index 000000000..b9f979666
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/defaults.go
@@ -0,0 +1,163 @@
+package sprig
+
+import (
+ "bytes"
+ "encoding/json"
+ "math/rand"
+ "reflect"
+ "strings"
+ "time"
+)
+
+func init() {
+ rand.Seed(time.Now().UnixNano())
+}
+
+// dfault checks whether `given` is set, and returns default if not set.
+//
+// This returns `d` if `given` appears not to be set, and `given` otherwise.
+//
+// For numeric types 0 is unset.
+// For strings, maps, arrays, and slices, len() = 0 is considered unset.
+// For bool, false is unset.
+// Structs are never considered unset.
+//
+// For everything else, including pointers, a nil value is unset.
+func dfault(d interface{}, given ...interface{}) interface{} {
+ if empty(given) || empty(given[0]) {
+ return d
+ }
+ return given[0]
+}
+
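+// For illustration (not part of the upstream source):
+//
+//  dfault("fallback")        // "fallback": no value given
+//  dfault("fallback", "")    // "fallback": empty string counts as unset
+//  dfault("fallback", "set") // "set"
+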
+// empty returns true if the given value has the zero value for its type.
+func empty(given interface{}) bool {
+ g := reflect.ValueOf(given)
+ if !g.IsValid() {
+ return true
+ }
+
+ // Basically adapted from text/template.isTrue
+ switch g.Kind() {
+ default:
+ return g.IsNil()
+ case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
+ return g.Len() == 0
+ case reflect.Bool:
+ return !g.Bool()
+ case reflect.Complex64, reflect.Complex128:
+ return g.Complex() == 0
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return g.Int() == 0
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ return g.Uint() == 0
+ case reflect.Float32, reflect.Float64:
+ return g.Float() == 0
+ case reflect.Struct:
+ return false
+ }
+}
+
+// coalesce returns the first non-empty value.
+func coalesce(v ...interface{}) interface{} {
+ for _, val := range v {
+ if !empty(val) {
+ return val
+ }
+ }
+ return nil
+}
+
+// all returns true if empty(x) is false for all values x in the list.
+// If the list is empty, return true.
+func all(v ...interface{}) bool {
+ for _, val := range v {
+ if empty(val) {
+ return false
+ }
+ }
+ return true
+}
+
+// any returns true if empty(x) is false for any x in the list.
+// If the list is empty, return false.
+func any(v ...interface{}) bool {
+ for _, val := range v {
+ if !empty(val) {
+ return true
+ }
+ }
+ return false
+}
+
+// fromJson decodes JSON into a structured value, ignoring errors.
+func fromJson(v string) interface{} {
+ output, _ := mustFromJson(v)
+ return output
+}
+
+// mustFromJson decodes JSON into a structured value, returning errors.
+func mustFromJson(v string) (interface{}, error) {
+ var output interface{}
+ err := json.Unmarshal([]byte(v), &output)
+ return output, err
+}
+
+// toJson encodes an item into a JSON string
+func toJson(v interface{}) string {
+ output, _ := json.Marshal(v)
+ return string(output)
+}
+
+func mustToJson(v interface{}) (string, error) {
+ output, err := json.Marshal(v)
+ if err != nil {
+ return "", err
+ }
+ return string(output), nil
+}
+
+// toPrettyJson encodes an item into a pretty (indented) JSON string
+func toPrettyJson(v interface{}) string {
+ output, _ := json.MarshalIndent(v, "", " ")
+ return string(output)
+}
+
+func mustToPrettyJson(v interface{}) (string, error) {
+ output, err := json.MarshalIndent(v, "", " ")
+ if err != nil {
+ return "", err
+ }
+ return string(output), nil
+}
+
+// toRawJson encodes an item into a JSON string with no escaping of HTML characters.
+func toRawJson(v interface{}) string {
+ output, err := mustToRawJson(v)
+ if err != nil {
+ panic(err)
+ }
+ return string(output)
+}
+
+// mustToRawJson encodes an item into a JSON string with no escaping of HTML characters.
+func mustToRawJson(v interface{}) (string, error) {
+ buf := new(bytes.Buffer)
+ enc := json.NewEncoder(buf)
+ enc.SetEscapeHTML(false)
+ err := enc.Encode(&v)
+ if err != nil {
+ return "", err
+ }
+ return strings.TrimSuffix(buf.String(), "\n"), nil
+}
+
+// ternary returns the first value if the last value is true, otherwise returns the second value.
+func ternary(vt interface{}, vf interface{}, v bool) interface{} {
+ if v {
+ return vt
+ }
+
+ return vf
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/dict.go b/vendor/github.com/Masterminds/sprig/v3/dict.go
new file mode 100644
index 000000000..4315b3542
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/dict.go
@@ -0,0 +1,174 @@
+package sprig
+
+import (
+ "dario.cat/mergo"
+ "github.com/mitchellh/copystructure"
+)
+
+func get(d map[string]interface{}, key string) interface{} {
+ if val, ok := d[key]; ok {
+ return val
+ }
+ return ""
+}
+
+func set(d map[string]interface{}, key string, value interface{}) map[string]interface{} {
+ d[key] = value
+ return d
+}
+
+func unset(d map[string]interface{}, key string) map[string]interface{} {
+ delete(d, key)
+ return d
+}
+
+func hasKey(d map[string]interface{}, key string) bool {
+ _, ok := d[key]
+ return ok
+}
+
+func pluck(key string, d ...map[string]interface{}) []interface{} {
+ res := []interface{}{}
+ for _, dict := range d {
+ if val, ok := dict[key]; ok {
+ res = append(res, val)
+ }
+ }
+ return res
+}
+
+func keys(dicts ...map[string]interface{}) []string {
+ k := []string{}
+ for _, dict := range dicts {
+ for key := range dict {
+ k = append(k, key)
+ }
+ }
+ return k
+}
+
+func pick(dict map[string]interface{}, keys ...string) map[string]interface{} {
+ res := map[string]interface{}{}
+ for _, k := range keys {
+ if v, ok := dict[k]; ok {
+ res[k] = v
+ }
+ }
+ return res
+}
+
+func omit(dict map[string]interface{}, keys ...string) map[string]interface{} {
+ res := map[string]interface{}{}
+
+ omit := make(map[string]bool, len(keys))
+ for _, k := range keys {
+ omit[k] = true
+ }
+
+ for k, v := range dict {
+ if _, ok := omit[k]; !ok {
+ res[k] = v
+ }
+ }
+ return res
+}
+
+func dict(v ...interface{}) map[string]interface{} {
+ dict := map[string]interface{}{}
+ lenv := len(v)
+ for i := 0; i < lenv; i += 2 {
+ key := strval(v[i])
+ if i+1 >= lenv {
+ dict[key] = ""
+ continue
+ }
+ dict[key] = v[i+1]
+ }
+ return dict
+}
+
+func merge(dst map[string]interface{}, srcs ...map[string]interface{}) interface{} {
+ for _, src := range srcs {
+ if err := mergo.Merge(&dst, src); err != nil {
+ // Swallow errors inside of a template.
+ return ""
+ }
+ }
+ return dst
+}
+
+func mustMerge(dst map[string]interface{}, srcs ...map[string]interface{}) (interface{}, error) {
+ for _, src := range srcs {
+ if err := mergo.Merge(&dst, src); err != nil {
+ return nil, err
+ }
+ }
+ return dst, nil
+}
+
+func mergeOverwrite(dst map[string]interface{}, srcs ...map[string]interface{}) interface{} {
+ for _, src := range srcs {
+ if err := mergo.MergeWithOverwrite(&dst, src); err != nil {
+ // Swallow errors inside of a template.
+ return ""
+ }
+ }
+ return dst
+}
+
+func mustMergeOverwrite(dst map[string]interface{}, srcs ...map[string]interface{}) (interface{}, error) {
+ for _, src := range srcs {
+ if err := mergo.MergeWithOverwrite(&dst, src); err != nil {
+ return nil, err
+ }
+ }
+ return dst, nil
+}
+
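+// For illustration (not part of the upstream source): in a template, merge
+// keeps values already present in the destination, while mergeOverwrite lets
+// later sources win:
+//
+//  {{ merge .dst .src }}          // keys in .dst take precedence
+//  {{ mergeOverwrite .dst .src }} // keys in .src overwrite .dst
+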
+func values(dict map[string]interface{}) []interface{} {
+ values := []interface{}{}
+ for _, value := range dict {
+ values = append(values, value)
+ }
+
+ return values
+}
+
+func deepCopy(i interface{}) interface{} {
+ c, err := mustDeepCopy(i)
+ if err != nil {
+ panic("deepCopy error: " + err.Error())
+ }
+
+ return c
+}
+
+func mustDeepCopy(i interface{}) (interface{}, error) {
+ return copystructure.Copy(i)
+}
+
+func dig(ps ...interface{}) (interface{}, error) {
+ if len(ps) < 3 {
+ panic("dig needs at least three arguments")
+ }
+ dict := ps[len(ps)-1].(map[string]interface{})
+ def := ps[len(ps)-2]
+ ks := make([]string, len(ps)-2)
+ for i := 0; i < len(ks); i++ {
+ ks[i] = ps[i].(string)
+ }
+
+ return digFromDict(dict, def, ks)
+}
+
+func digFromDict(dict map[string]interface{}, d interface{}, ks []string) (interface{}, error) {
+ k, ns := ks[0], ks[1:len(ks)]
+ step, has := dict[k]
+ if !has {
+ return d, nil
+ }
+ if len(ns) == 0 {
+ return step, nil
+ }
+ return digFromDict(step.(map[string]interface{}), d, ns)
+}
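+
+// For illustration (not part of the upstream source): in a template,
+//
+//  {{ dig "a" "b" "missing" .Values }}
+//
+// walks .Values["a"]["b"] (where .Values is any nested dict) and yields
+// "missing" if either key is absent.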
diff --git a/vendor/github.com/Masterminds/sprig/v3/doc.go b/vendor/github.com/Masterminds/sprig/v3/doc.go
new file mode 100644
index 000000000..91031d6d1
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/doc.go
@@ -0,0 +1,19 @@
+/*
+Package sprig provides template functions for Go.
+
+This package contains a number of utility functions for working with data
+inside of Go `html/template` and `text/template` files.
+
+To add these functions, use the `template.Funcs()` method:
+
+ t := template.New("foo").Funcs(sprig.FuncMap())
+
+Note that you should add the function map before you parse any template files.
+
+In several cases, Sprig reverses the order of arguments from the way they
+appear in the standard library. This is to make it easier to pipe
+arguments into functions.
+
+See http://masterminds.github.io/sprig/ for more detailed documentation on each of the available functions.
+*/
+package sprig
diff --git a/vendor/github.com/Masterminds/sprig/v3/functions.go b/vendor/github.com/Masterminds/sprig/v3/functions.go
new file mode 100644
index 000000000..cda47d26f
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/functions.go
@@ -0,0 +1,385 @@
+package sprig
+
+import (
+ "errors"
+ "html/template"
+ "math/rand"
+ "os"
+ "path"
+ "path/filepath"
+ "reflect"
+ "strconv"
+ "strings"
+ ttemplate "text/template"
+ "time"
+
+ util "github.com/Masterminds/goutils"
+ "github.com/huandu/xstrings"
+ "github.com/shopspring/decimal"
+)
+
+// FuncMap produces the function map.
+//
+// Use this to pass the functions into the template engine:
+//
+// tpl := template.New("foo").Funcs(sprig.FuncMap())
+func FuncMap() template.FuncMap {
+ return HtmlFuncMap()
+}
+
+// HermeticTxtFuncMap returns a 'text/template'.FuncMap with only repeatable functions.
+func HermeticTxtFuncMap() ttemplate.FuncMap {
+ r := TxtFuncMap()
+ for _, name := range nonhermeticFunctions {
+ delete(r, name)
+ }
+ return r
+}
+
+// HermeticHtmlFuncMap returns an 'html/template'.Funcmap with only repeatable functions.
+func HermeticHtmlFuncMap() template.FuncMap {
+ r := HtmlFuncMap()
+ for _, name := range nonhermeticFunctions {
+ delete(r, name)
+ }
+ return r
+}
+
+// TxtFuncMap returns a 'text/template'.FuncMap
+func TxtFuncMap() ttemplate.FuncMap {
+ return ttemplate.FuncMap(GenericFuncMap())
+}
+
+// HtmlFuncMap returns an 'html/template'.Funcmap
+func HtmlFuncMap() template.FuncMap {
+ return template.FuncMap(GenericFuncMap())
+}
+
+// GenericFuncMap returns a copy of the basic function map as a map[string]interface{}.
+func GenericFuncMap() map[string]interface{} {
+ gfm := make(map[string]interface{}, len(genericMap))
+ for k, v := range genericMap {
+ gfm[k] = v
+ }
+ return gfm
+}
+
+// These functions are not guaranteed to evaluate to the same result for a given
+// input, because they refer to the environment or global state.
+var nonhermeticFunctions = []string{
+ // Date functions
+ "date",
+ "date_in_zone",
+ "date_modify",
+ "now",
+ "htmlDate",
+ "htmlDateInZone",
+ "dateInZone",
+ "dateModify",
+
+ // Strings
+ "randAlphaNum",
+ "randAlpha",
+ "randAscii",
+ "randNumeric",
+ "randBytes",
+ "uuidv4",
+
+ // OS
+ "env",
+ "expandenv",
+
+ // Network
+ "getHostByName",
+}
+
+var genericMap = map[string]interface{}{
+ "hello": func() string { return "Hello!" },
+
+ // Date functions
+ "ago": dateAgo,
+ "date": date,
+ "date_in_zone": dateInZone,
+ "date_modify": dateModify,
+ "dateInZone": dateInZone,
+ "dateModify": dateModify,
+ "duration": duration,
+ "durationRound": durationRound,
+ "htmlDate": htmlDate,
+ "htmlDateInZone": htmlDateInZone,
+ "must_date_modify": mustDateModify,
+ "mustDateModify": mustDateModify,
+ "mustToDate": mustToDate,
+ "now": time.Now,
+ "toDate": toDate,
+ "unixEpoch": unixEpoch,
+
+ // Strings
+ "abbrev": abbrev,
+ "abbrevboth": abbrevboth,
+ "trunc": trunc,
+ "trim": strings.TrimSpace,
+ "upper": strings.ToUpper,
+ "lower": strings.ToLower,
+ "title": strings.Title,
+ "untitle": untitle,
+ "substr": substring,
+ // Switch order so that "foo" | repeat 5
+ "repeat": func(count int, str string) string { return strings.Repeat(str, count) },
+ // Deprecated: Use trimAll.
+ "trimall": func(a, b string) string { return strings.Trim(b, a) },
+ // Switch order so that "$foo" | trimall "$"
+ "trimAll": func(a, b string) string { return strings.Trim(b, a) },
+ "trimSuffix": func(a, b string) string { return strings.TrimSuffix(b, a) },
+ "trimPrefix": func(a, b string) string { return strings.TrimPrefix(b, a) },
+ "nospace": util.DeleteWhiteSpace,
+ "initials": initials,
+ "randAlphaNum": randAlphaNumeric,
+ "randAlpha": randAlpha,
+ "randAscii": randAscii,
+ "randNumeric": randNumeric,
+ "swapcase": util.SwapCase,
+ "shuffle": xstrings.Shuffle,
+ "snakecase": xstrings.ToSnakeCase,
+ // camelcase used to call xstrings.ToCamelCase, but that function had a breaking change in version
+ // 1.5 that moved it from upper camel case to lower camel case. This is a breaking change for sprig.
+ // A new xstrings.ToPascalCase function was added that provided upper camel case.
+ "camelcase": xstrings.ToPascalCase,
+ "kebabcase": xstrings.ToKebabCase,
+ "wrap": func(l int, s string) string { return util.Wrap(s, l) },
+ "wrapWith": func(l int, sep, str string) string { return util.WrapCustom(str, l, sep, true) },
+ // Switch order so that "foobar" | contains "foo"
+ "contains": func(substr string, str string) bool { return strings.Contains(str, substr) },
+ "hasPrefix": func(substr string, str string) bool { return strings.HasPrefix(str, substr) },
+ "hasSuffix": func(substr string, str string) bool { return strings.HasSuffix(str, substr) },
+ "quote": quote,
+ "squote": squote,
+ "cat": cat,
+ "indent": indent,
+ "nindent": nindent,
+ "replace": replace,
+ "plural": plural,
+ "sha1sum": sha1sum,
+ "sha256sum": sha256sum,
+ "sha512sum": sha512sum,
+ "adler32sum": adler32sum,
+ "toString": strval,
+
+ // Wrap Atoi to stop errors.
+ "atoi": func(a string) int { i, _ := strconv.Atoi(a); return i },
+ "int64": toInt64,
+ "int": toInt,
+ "float64": toFloat64,
+ "seq": seq,
+ "toDecimal": toDecimal,
+
+ //"gt": func(a, b int) bool {return a > b},
+ //"gte": func(a, b int) bool {return a >= b},
+ //"lt": func(a, b int) bool {return a < b},
+ //"lte": func(a, b int) bool {return a <= b},
+
+ // split "/" foo/bar returns a map with keys _0: foo and _1: bar
+ "split": split,
+ "splitList": func(sep, orig string) []string { return strings.Split(orig, sep) },
+ // splitn "/" 2 foo/bar/fuu returns a map with keys _0: foo and _1: bar/fuu
+ "splitn": splitn,
+ "toStrings": strslice,
+
+ "until": until,
+ "untilStep": untilStep,
+
+ // VERY basic arithmetic.
+ "add1": func(i interface{}) int64 { return toInt64(i) + 1 },
+ "add": func(i ...interface{}) int64 {
+ var a int64 = 0
+ for _, b := range i {
+ a += toInt64(b)
+ }
+ return a
+ },
+ "sub": func(a, b interface{}) int64 { return toInt64(a) - toInt64(b) },
+ "div": func(a, b interface{}) int64 { return toInt64(a) / toInt64(b) },
+ "mod": func(a, b interface{}) int64 { return toInt64(a) % toInt64(b) },
+ "mul": func(a interface{}, v ...interface{}) int64 {
+ val := toInt64(a)
+ for _, b := range v {
+ val = val * toInt64(b)
+ }
+ return val
+ },
+ "randInt": func(min, max int) int { return rand.Intn(max-min) + min },
+ "add1f": func(i interface{}) float64 {
+ return execDecimalOp(i, []interface{}{1}, func(d1, d2 decimal.Decimal) decimal.Decimal { return d1.Add(d2) })
+ },
+ "addf": func(i ...interface{}) float64 {
+ a := interface{}(float64(0))
+ return execDecimalOp(a, i, func(d1, d2 decimal.Decimal) decimal.Decimal { return d1.Add(d2) })
+ },
+ "subf": func(a interface{}, v ...interface{}) float64 {
+ return execDecimalOp(a, v, func(d1, d2 decimal.Decimal) decimal.Decimal { return d1.Sub(d2) })
+ },
+ "divf": func(a interface{}, v ...interface{}) float64 {
+ return execDecimalOp(a, v, func(d1, d2 decimal.Decimal) decimal.Decimal { return d1.Div(d2) })
+ },
+ "mulf": func(a interface{}, v ...interface{}) float64 {
+ return execDecimalOp(a, v, func(d1, d2 decimal.Decimal) decimal.Decimal { return d1.Mul(d2) })
+ },
+ "biggest": max,
+ "max": max,
+ "min": min,
+ "maxf": maxf,
+ "minf": minf,
+ "ceil": ceil,
+ "floor": floor,
+ "round": round,
+
+	// String slices. Note that we reverse the argument order because that
+	// reads better in template pipelines.
+ "join": join,
+ "sortAlpha": sortAlpha,
+
+ // Defaults
+ "default": dfault,
+ "empty": empty,
+ "coalesce": coalesce,
+ "all": all,
+ "any": any,
+ "compact": compact,
+ "mustCompact": mustCompact,
+ "fromJson": fromJson,
+ "toJson": toJson,
+ "toPrettyJson": toPrettyJson,
+ "toRawJson": toRawJson,
+ "mustFromJson": mustFromJson,
+ "mustToJson": mustToJson,
+ "mustToPrettyJson": mustToPrettyJson,
+ "mustToRawJson": mustToRawJson,
+ "ternary": ternary,
+ "deepCopy": deepCopy,
+ "mustDeepCopy": mustDeepCopy,
+
+ // Reflection
+ "typeOf": typeOf,
+ "typeIs": typeIs,
+ "typeIsLike": typeIsLike,
+ "kindOf": kindOf,
+ "kindIs": kindIs,
+ "deepEqual": reflect.DeepEqual,
+
+ // OS:
+ "env": os.Getenv,
+ "expandenv": os.ExpandEnv,
+
+ // Network:
+ "getHostByName": getHostByName,
+
+ // Paths:
+ "base": path.Base,
+ "dir": path.Dir,
+ "clean": path.Clean,
+ "ext": path.Ext,
+ "isAbs": path.IsAbs,
+
+ // Filepaths:
+ "osBase": filepath.Base,
+ "osClean": filepath.Clean,
+ "osDir": filepath.Dir,
+ "osExt": filepath.Ext,
+ "osIsAbs": filepath.IsAbs,
+
+ // Encoding:
+ "b64enc": base64encode,
+ "b64dec": base64decode,
+ "b32enc": base32encode,
+ "b32dec": base32decode,
+
+ // Data Structures:
+ "tuple": list, // FIXME: with the addition of append/prepend these are no longer immutable.
+ "list": list,
+ "dict": dict,
+ "get": get,
+ "set": set,
+ "unset": unset,
+ "hasKey": hasKey,
+ "pluck": pluck,
+ "keys": keys,
+ "pick": pick,
+ "omit": omit,
+ "merge": merge,
+ "mergeOverwrite": mergeOverwrite,
+ "mustMerge": mustMerge,
+ "mustMergeOverwrite": mustMergeOverwrite,
+ "values": values,
+
+ "append": push, "push": push,
+ "mustAppend": mustPush, "mustPush": mustPush,
+ "prepend": prepend,
+ "mustPrepend": mustPrepend,
+ "first": first,
+ "mustFirst": mustFirst,
+ "rest": rest,
+ "mustRest": mustRest,
+ "last": last,
+ "mustLast": mustLast,
+ "initial": initial,
+ "mustInitial": mustInitial,
+ "reverse": reverse,
+ "mustReverse": mustReverse,
+ "uniq": uniq,
+ "mustUniq": mustUniq,
+ "without": without,
+ "mustWithout": mustWithout,
+ "has": has,
+ "mustHas": mustHas,
+ "slice": slice,
+ "mustSlice": mustSlice,
+ "concat": concat,
+ "dig": dig,
+ "chunk": chunk,
+ "mustChunk": mustChunk,
+
+ // Crypto:
+ "bcrypt": bcrypt,
+ "htpasswd": htpasswd,
+ "genPrivateKey": generatePrivateKey,
+ "derivePassword": derivePassword,
+ "buildCustomCert": buildCustomCertificate,
+ "genCA": generateCertificateAuthority,
+ "genCAWithKey": generateCertificateAuthorityWithPEMKey,
+ "genSelfSignedCert": generateSelfSignedCertificate,
+ "genSelfSignedCertWithKey": generateSelfSignedCertificateWithPEMKey,
+ "genSignedCert": generateSignedCertificate,
+ "genSignedCertWithKey": generateSignedCertificateWithPEMKey,
+ "encryptAES": encryptAES,
+ "decryptAES": decryptAES,
+ "randBytes": randBytes,
+
+ // UUIDs:
+ "uuidv4": uuidv4,
+
+ // SemVer:
+ "semver": semver,
+ "semverCompare": semverCompare,
+
+ // Flow Control:
+ "fail": func(msg string) (string, error) { return "", errors.New(msg) },
+
+ // Regex
+ "regexMatch": regexMatch,
+ "mustRegexMatch": mustRegexMatch,
+ "regexFindAll": regexFindAll,
+ "mustRegexFindAll": mustRegexFindAll,
+ "regexFind": regexFind,
+ "mustRegexFind": mustRegexFind,
+ "regexReplaceAll": regexReplaceAll,
+ "mustRegexReplaceAll": mustRegexReplaceAll,
+ "regexReplaceAllLiteral": regexReplaceAllLiteral,
+ "mustRegexReplaceAllLiteral": mustRegexReplaceAllLiteral,
+ "regexSplit": regexSplit,
+ "mustRegexSplit": mustRegexSplit,
+ "regexQuoteMeta": regexQuoteMeta,
+
+ // URLs:
+ "urlParse": urlParse,
+ "urlJoin": urlJoin,
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/list.go b/vendor/github.com/Masterminds/sprig/v3/list.go
new file mode 100644
index 000000000..ca0fbb789
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/list.go
@@ -0,0 +1,464 @@
+package sprig
+
+import (
+ "fmt"
+ "math"
+ "reflect"
+ "sort"
+)
+
+// Reflection is used in these functions so that slices and arrays of strings,
+// ints, and other types not implementing []interface{} can be worked with.
+// For example, this is useful if you need to work on the output of regexes.
+
+func list(v ...interface{}) []interface{} {
+ return v
+}
+
+func push(list interface{}, v interface{}) []interface{} {
+ l, err := mustPush(list, v)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustPush(list interface{}, v interface{}) ([]interface{}, error) {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
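+		// Copy the reflected elements into a fresh []interface{} so the
+		// original list is never mutated by the append below.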
+ l := l2.Len()
+ nl := make([]interface{}, l)
+ for i := 0; i < l; i++ {
+ nl[i] = l2.Index(i).Interface()
+ }
+
+ return append(nl, v), nil
+
+ default:
+ return nil, fmt.Errorf("Cannot push on type %s", tp)
+ }
+}
+
+func prepend(list interface{}, v interface{}) []interface{} {
+ l, err := mustPrepend(list, v)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustPrepend(list interface{}, v interface{}) ([]interface{}, error) {
+ //return append([]interface{}{v}, list...)
+
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
+ l := l2.Len()
+ nl := make([]interface{}, l)
+ for i := 0; i < l; i++ {
+ nl[i] = l2.Index(i).Interface()
+ }
+
+ return append([]interface{}{v}, nl...), nil
+
+ default:
+ return nil, fmt.Errorf("Cannot prepend on type %s", tp)
+ }
+}
+
+func chunk(size int, list interface{}) [][]interface{} {
+ l, err := mustChunk(size, list)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustChunk(size int, list interface{}) ([][]interface{}, error) {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
+ l := l2.Len()
+
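+		// The number of chunks is ceil(l / size); the final chunk holds the
+		// remainder (or a full size elements when l divides evenly).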
+ cs := int(math.Floor(float64(l-1)/float64(size)) + 1)
+ nl := make([][]interface{}, cs)
+
+ for i := 0; i < cs; i++ {
+ clen := size
+ if i == cs-1 {
+ clen = int(math.Floor(math.Mod(float64(l), float64(size))))
+ if clen == 0 {
+ clen = size
+ }
+ }
+
+ nl[i] = make([]interface{}, clen)
+
+ for j := 0; j < clen; j++ {
+ ix := i*size + j
+ nl[i][j] = l2.Index(ix).Interface()
+ }
+ }
+
+ return nl, nil
+
+ default:
+ return nil, fmt.Errorf("Cannot chunk type %s", tp)
+ }
+}
+
+func last(list interface{}) interface{} {
+ l, err := mustLast(list)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustLast(list interface{}) (interface{}, error) {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
+ l := l2.Len()
+ if l == 0 {
+ return nil, nil
+ }
+
+ return l2.Index(l - 1).Interface(), nil
+ default:
+ return nil, fmt.Errorf("Cannot find last on type %s", tp)
+ }
+}
+
+func first(list interface{}) interface{} {
+ l, err := mustFirst(list)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustFirst(list interface{}) (interface{}, error) {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
+ l := l2.Len()
+ if l == 0 {
+ return nil, nil
+ }
+
+ return l2.Index(0).Interface(), nil
+ default:
+ return nil, fmt.Errorf("Cannot find first on type %s", tp)
+ }
+}
+
+func rest(list interface{}) []interface{} {
+ l, err := mustRest(list)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustRest(list interface{}) ([]interface{}, error) {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
+ l := l2.Len()
+ if l == 0 {
+ return nil, nil
+ }
+
+ nl := make([]interface{}, l-1)
+ for i := 1; i < l; i++ {
+ nl[i-1] = l2.Index(i).Interface()
+ }
+
+ return nl, nil
+ default:
+ return nil, fmt.Errorf("Cannot find rest on type %s", tp)
+ }
+}
+
+func initial(list interface{}) []interface{} {
+ l, err := mustInitial(list)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustInitial(list interface{}) ([]interface{}, error) {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
+ l := l2.Len()
+ if l == 0 {
+ return nil, nil
+ }
+
+ nl := make([]interface{}, l-1)
+ for i := 0; i < l-1; i++ {
+ nl[i] = l2.Index(i).Interface()
+ }
+
+ return nl, nil
+ default:
+ return nil, fmt.Errorf("Cannot find initial on type %s", tp)
+ }
+}
+
+func sortAlpha(list interface{}) []string {
+ k := reflect.Indirect(reflect.ValueOf(list)).Kind()
+ switch k {
+ case reflect.Slice, reflect.Array:
+ a := strslice(list)
+ s := sort.StringSlice(a)
+ s.Sort()
+ return s
+ }
+ return []string{strval(list)}
+}
+
+func reverse(v interface{}) []interface{} {
+ l, err := mustReverse(v)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustReverse(v interface{}) ([]interface{}, error) {
+ tp := reflect.TypeOf(v).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(v)
+
+ l := l2.Len()
+ // We do not sort in place because the incoming array should not be altered.
+ nl := make([]interface{}, l)
+ for i := 0; i < l; i++ {
+ nl[l-i-1] = l2.Index(i).Interface()
+ }
+
+ return nl, nil
+ default:
+ return nil, fmt.Errorf("Cannot find reverse on type %s", tp)
+ }
+}
+
+func compact(list interface{}) []interface{} {
+ l, err := mustCompact(list)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustCompact(list interface{}) ([]interface{}, error) {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
+ l := l2.Len()
+ nl := []interface{}{}
+ var item interface{}
+ for i := 0; i < l; i++ {
+ item = l2.Index(i).Interface()
+ if !empty(item) {
+ nl = append(nl, item)
+ }
+ }
+
+ return nl, nil
+ default:
+ return nil, fmt.Errorf("Cannot compact on type %s", tp)
+ }
+}
+
+func uniq(list interface{}) []interface{} {
+ l, err := mustUniq(list)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustUniq(list interface{}) ([]interface{}, error) {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
+ l := l2.Len()
+ dest := []interface{}{}
+ var item interface{}
+ for i := 0; i < l; i++ {
+ item = l2.Index(i).Interface()
+ if !inList(dest, item) {
+ dest = append(dest, item)
+ }
+ }
+
+ return dest, nil
+ default:
+ return nil, fmt.Errorf("Cannot find uniq on type %s", tp)
+ }
+}
+
+func inList(haystack []interface{}, needle interface{}) bool {
+ for _, h := range haystack {
+ if reflect.DeepEqual(needle, h) {
+ return true
+ }
+ }
+ return false
+}
+
+func without(list interface{}, omit ...interface{}) []interface{} {
+ l, err := mustWithout(list, omit...)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustWithout(list interface{}, omit ...interface{}) ([]interface{}, error) {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
+ l := l2.Len()
+ res := []interface{}{}
+ var item interface{}
+ for i := 0; i < l; i++ {
+ item = l2.Index(i).Interface()
+ if !inList(omit, item) {
+ res = append(res, item)
+ }
+ }
+
+ return res, nil
+ default:
+ return nil, fmt.Errorf("Cannot find without on type %s", tp)
+ }
+}
+
+func has(needle interface{}, haystack interface{}) bool {
+ l, err := mustHas(needle, haystack)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustHas(needle interface{}, haystack interface{}) (bool, error) {
+ if haystack == nil {
+ return false, nil
+ }
+ tp := reflect.TypeOf(haystack).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(haystack)
+ var item interface{}
+ l := l2.Len()
+ for i := 0; i < l; i++ {
+ item = l2.Index(i).Interface()
+ if reflect.DeepEqual(needle, item) {
+ return true, nil
+ }
+ }
+
+ return false, nil
+ default:
+ return false, fmt.Errorf("Cannot find has on type %s", tp)
+ }
+}
+
+// $list := [1, 2, 3, 4, 5]
+// slice $list -> list[0:5] = list[:]
+// slice $list 0 3 -> list[0:3] = list[:3]
+// slice $list 3 5 -> list[3:5]
+// slice $list 3 -> list[3:5] = list[3:]
+func slice(list interface{}, indices ...interface{}) interface{} {
+ l, err := mustSlice(list, indices...)
+ if err != nil {
+ panic(err)
+ }
+
+ return l
+}
+
+func mustSlice(list interface{}, indices ...interface{}) (interface{}, error) {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+
+ l := l2.Len()
+ if l == 0 {
+ return nil, nil
+ }
+
+ var start, end int
+ if len(indices) > 0 {
+ start = toInt(indices[0])
+ }
+ if len(indices) < 2 {
+ end = l
+ } else {
+ end = toInt(indices[1])
+ }
+
+ return l2.Slice(start, end).Interface(), nil
+ default:
+		return nil, fmt.Errorf("list should be a slice or an array, but got %s", tp)
+ }
+}
+
+func concat(lists ...interface{}) interface{} {
+ var res []interface{}
+ for _, list := range lists {
+ tp := reflect.TypeOf(list).Kind()
+ switch tp {
+ case reflect.Slice, reflect.Array:
+ l2 := reflect.ValueOf(list)
+ for i := 0; i < l2.Len(); i++ {
+ res = append(res, l2.Index(i).Interface())
+ }
+ default:
+ panic(fmt.Sprintf("Cannot concat type %s as list", tp))
+ }
+ }
+ return res
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/network.go b/vendor/github.com/Masterminds/sprig/v3/network.go
new file mode 100644
index 000000000..108d78a94
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/network.go
@@ -0,0 +1,12 @@
+package sprig
+
+import (
+ "math/rand"
+ "net"
+)
+
+func getHostByName(name string) string {
+ addrs, _ := net.LookupHost(name)
+	//TODO: add error handling when release v3 comes out
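+	// Note: this panics when the lookup fails or returns no addresses,
+	// because indexing into the empty addrs slice (rand.Intn(0)) panics.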
+ return addrs[rand.Intn(len(addrs))]
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/numeric.go b/vendor/github.com/Masterminds/sprig/v3/numeric.go
new file mode 100644
index 000000000..f68e4182e
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/numeric.go
@@ -0,0 +1,186 @@
+package sprig
+
+import (
+ "fmt"
+ "math"
+ "strconv"
+ "strings"
+
+	"github.com/shopspring/decimal"
+	"github.com/spf13/cast"
+)
+
+// toFloat64 converts the given value to a float64
+func toFloat64(v interface{}) float64 {
+ return cast.ToFloat64(v)
+}
+
+func toInt(v interface{}) int {
+ return cast.ToInt(v)
+}
+
+// toInt64 converts integer types to 64-bit integers
+func toInt64(v interface{}) int64 {
+ return cast.ToInt64(v)
+}
+
+func max(a interface{}, i ...interface{}) int64 {
+ aa := toInt64(a)
+ for _, b := range i {
+ bb := toInt64(b)
+ if bb > aa {
+ aa = bb
+ }
+ }
+ return aa
+}
+
+func maxf(a interface{}, i ...interface{}) float64 {
+ aa := toFloat64(a)
+ for _, b := range i {
+ bb := toFloat64(b)
+ aa = math.Max(aa, bb)
+ }
+ return aa
+}
+
+func min(a interface{}, i ...interface{}) int64 {
+ aa := toInt64(a)
+ for _, b := range i {
+ bb := toInt64(b)
+ if bb < aa {
+ aa = bb
+ }
+ }
+ return aa
+}
+
+func minf(a interface{}, i ...interface{}) float64 {
+ aa := toFloat64(a)
+ for _, b := range i {
+ bb := toFloat64(b)
+ aa = math.Min(aa, bb)
+ }
+ return aa
+}
+
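+// until counts from 0 toward count, exclusive; until(5) yields [0 1 2 3 4]
+// and until(-3) counts down to yield [0 -1 -2].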
+func until(count int) []int {
+ step := 1
+ if count < 0 {
+ step = -1
+ }
+ return untilStep(0, count, step)
+}
+
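+// untilStep generates an []int from start toward stop (exclusive), moving by
+// step; e.g. untilStep(0, 6, 2) yields [0 2 4] and untilStep(3, 0, -1)
+// yields [3 2 1]. A step whose sign does not match the range yields an
+// empty slice.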
+func untilStep(start, stop, step int) []int {
+ v := []int{}
+
+ if stop < start {
+ if step >= 0 {
+ return v
+ }
+ for i := start; i > stop; i += step {
+ v = append(v, i)
+ }
+ return v
+ }
+
+ if step <= 0 {
+ return v
+ }
+ for i := start; i < stop; i += step {
+ v = append(v, i)
+ }
+ return v
+}
+
+func floor(a interface{}) float64 {
+ aa := toFloat64(a)
+ return math.Floor(aa)
+}
+
+func ceil(a interface{}) float64 {
+ aa := toFloat64(a)
+ return math.Ceil(aa)
+}
+
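+// round rounds a to p decimal places, e.g. round(123.5555, 3) == 123.556.
+// An optional third argument overrides the default 0.5 rounding threshold.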
+func round(a interface{}, p int, rOpt ...float64) float64 {
+ roundOn := .5
+ if len(rOpt) > 0 {
+ roundOn = rOpt[0]
+ }
+ val := toFloat64(a)
+ places := toFloat64(p)
+
+ var round float64
+ pow := math.Pow(10, places)
+ digit := pow * val
+ _, div := math.Modf(digit)
+ if div >= roundOn {
+ round = math.Ceil(digit)
+ } else {
+ round = math.Floor(digit)
+ }
+ return round / pow
+}
+
+// toDecimal converts an octal value (such as a Unix file mode) to decimal;
+// e.g. toDecimal "0777" returns 511.
+func toDecimal(v interface{}) int64 {
+ result, err := strconv.ParseInt(fmt.Sprint(v), 8, 64)
+ if err != nil {
+ return 0
+ }
+ return result
+}
+
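+// seq renders a space-separated integer sequence in the style of the Unix
+// seq command; e.g. seq(3) == "1 2 3" and seq(0, 2, 10) == "0 2 4 6 8 10".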
+func seq(params ...int) string {
+ increment := 1
+ switch len(params) {
+ case 0:
+ return ""
+ case 1:
+ start := 1
+ end := params[0]
+ if end < start {
+ increment = -1
+ }
+ return intArrayToString(untilStep(start, end+increment, increment), " ")
+ case 3:
+ start := params[0]
+ end := params[2]
+ step := params[1]
+ if end < start {
+ increment = -1
+ if step > 0 {
+ return ""
+ }
+ }
+ return intArrayToString(untilStep(start, end+increment, step), " ")
+ case 2:
+ start := params[0]
+ end := params[1]
+ step := 1
+ if end < start {
+ step = -1
+ }
+ return intArrayToString(untilStep(start, end+step, step), " ")
+ default:
+ return ""
+ }
+}
+
+func intArrayToString(slice []int, delimiter string) string {
+	return strings.Trim(strings.Join(strings.Fields(fmt.Sprint(slice)), delimiter), "[]")
+}
+
+// execDecimalOp converts its inputs to decimal.Decimal (via float64) and
+// folds the mathematical operation f over a and each value in b
+func execDecimalOp(a interface{}, b []interface{}, f func(d1, d2 decimal.Decimal) decimal.Decimal) float64 {
+ prt := decimal.NewFromFloat(toFloat64(a))
+ for _, x := range b {
+ dx := decimal.NewFromFloat(toFloat64(x))
+ prt = f(prt, dx)
+ }
+ rslt, _ := prt.Float64()
+ return rslt
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/reflect.go b/vendor/github.com/Masterminds/sprig/v3/reflect.go
new file mode 100644
index 000000000..8a65c132f
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/reflect.go
@@ -0,0 +1,28 @@
+package sprig
+
+import (
+ "fmt"
+ "reflect"
+)
+
+// typeIs returns true if the src is the type named in target.
+func typeIs(target string, src interface{}) bool {
+ return target == typeOf(src)
+}
+
+func typeIsLike(target string, src interface{}) bool {
+ t := typeOf(src)
+ return target == t || "*"+target == t
+}
+
+func typeOf(src interface{}) string {
+ return fmt.Sprintf("%T", src)
+}
+
+func kindIs(target string, src interface{}) bool {
+ return target == kindOf(src)
+}
+
+func kindOf(src interface{}) string {
+ return reflect.ValueOf(src).Kind().String()
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/regex.go b/vendor/github.com/Masterminds/sprig/v3/regex.go
new file mode 100644
index 000000000..fab551018
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/regex.go
@@ -0,0 +1,83 @@
+package sprig
+
+import (
+ "regexp"
+)
+
+func regexMatch(regex string, s string) bool {
+ match, _ := regexp.MatchString(regex, s)
+ return match
+}
+
+func mustRegexMatch(regex string, s string) (bool, error) {
+ return regexp.MatchString(regex, s)
+}
+
+func regexFindAll(regex string, s string, n int) []string {
+ r := regexp.MustCompile(regex)
+ return r.FindAllString(s, n)
+}
+
+func mustRegexFindAll(regex string, s string, n int) ([]string, error) {
+ r, err := regexp.Compile(regex)
+ if err != nil {
+ return []string{}, err
+ }
+ return r.FindAllString(s, n), nil
+}
+
+func regexFind(regex string, s string) string {
+ r := regexp.MustCompile(regex)
+ return r.FindString(s)
+}
+
+func mustRegexFind(regex string, s string) (string, error) {
+ r, err := regexp.Compile(regex)
+ if err != nil {
+ return "", err
+ }
+ return r.FindString(s), nil
+}
+
+func regexReplaceAll(regex string, s string, repl string) string {
+ r := regexp.MustCompile(regex)
+ return r.ReplaceAllString(s, repl)
+}
+
+func mustRegexReplaceAll(regex string, s string, repl string) (string, error) {
+ r, err := regexp.Compile(regex)
+ if err != nil {
+ return "", err
+ }
+ return r.ReplaceAllString(s, repl), nil
+}
+
+func regexReplaceAllLiteral(regex string, s string, repl string) string {
+ r := regexp.MustCompile(regex)
+ return r.ReplaceAllLiteralString(s, repl)
+}
+
+func mustRegexReplaceAllLiteral(regex string, s string, repl string) (string, error) {
+ r, err := regexp.Compile(regex)
+ if err != nil {
+ return "", err
+ }
+ return r.ReplaceAllLiteralString(s, repl), nil
+}
+
+func regexSplit(regex string, s string, n int) []string {
+ r := regexp.MustCompile(regex)
+ return r.Split(s, n)
+}
+
+func mustRegexSplit(regex string, s string, n int) ([]string, error) {
+ r, err := regexp.Compile(regex)
+ if err != nil {
+ return []string{}, err
+ }
+ return r.Split(s, n), nil
+}
+
+func regexQuoteMeta(s string) string {
+ return regexp.QuoteMeta(s)
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/semver.go b/vendor/github.com/Masterminds/sprig/v3/semver.go
new file mode 100644
index 000000000..3fbe08aa6
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/semver.go
@@ -0,0 +1,23 @@
+package sprig
+
+import (
+ sv2 "github.com/Masterminds/semver/v3"
+)
+
+func semverCompare(constraint, version string) (bool, error) {
+ c, err := sv2.NewConstraint(constraint)
+ if err != nil {
+ return false, err
+ }
+
+ v, err := sv2.NewVersion(version)
+ if err != nil {
+ return false, err
+ }
+
+ return c.Check(v), nil
+}
+
+func semver(version string) (*sv2.Version, error) {
+ return sv2.NewVersion(version)
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/strings.go b/vendor/github.com/Masterminds/sprig/v3/strings.go
new file mode 100644
index 000000000..e0ae628c8
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/strings.go
@@ -0,0 +1,236 @@
+package sprig
+
+import (
+ "encoding/base32"
+ "encoding/base64"
+ "fmt"
+ "reflect"
+ "strconv"
+ "strings"
+
+ util "github.com/Masterminds/goutils"
+)
+
+func base64encode(v string) string {
+ return base64.StdEncoding.EncodeToString([]byte(v))
+}
+
+func base64decode(v string) string {
+ data, err := base64.StdEncoding.DecodeString(v)
+ if err != nil {
+ return err.Error()
+ }
+ return string(data)
+}
+
+func base32encode(v string) string {
+ return base32.StdEncoding.EncodeToString([]byte(v))
+}
+
+func base32decode(v string) string {
+ data, err := base32.StdEncoding.DecodeString(v)
+ if err != nil {
+ return err.Error()
+ }
+ return string(data)
+}
+
+func abbrev(width int, s string) string {
+ if width < 4 {
+ return s
+ }
+ r, _ := util.Abbreviate(s, width)
+ return r
+}
+
+func abbrevboth(left, right int, s string) string {
+ if right < 4 || left > 0 && right < 7 {
+ return s
+ }
+ r, _ := util.AbbreviateFull(s, left, right)
+ return r
+}
+
+func initials(s string) string {
+ // Wrap this just to eliminate the var args, which templates don't do well.
+ return util.Initials(s)
+}
+
+func randAlphaNumeric(count int) string {
+ // It is not possible, it appears, to actually generate an error here.
+ r, _ := util.CryptoRandomAlphaNumeric(count)
+ return r
+}
+
+func randAlpha(count int) string {
+ r, _ := util.CryptoRandomAlphabetic(count)
+ return r
+}
+
+func randAscii(count int) string {
+ r, _ := util.CryptoRandomAscii(count)
+ return r
+}
+
+func randNumeric(count int) string {
+ r, _ := util.CryptoRandomNumeric(count)
+ return r
+}
+
+func untitle(str string) string {
+ return util.Uncapitalize(str)
+}
+
+func quote(str ...interface{}) string {
+ out := make([]string, 0, len(str))
+ for _, s := range str {
+ if s != nil {
+ out = append(out, fmt.Sprintf("%q", strval(s)))
+ }
+ }
+ return strings.Join(out, " ")
+}
+
+func squote(str ...interface{}) string {
+ out := make([]string, 0, len(str))
+ for _, s := range str {
+ if s != nil {
+ out = append(out, fmt.Sprintf("'%v'", s))
+ }
+ }
+ return strings.Join(out, " ")
+}
+
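+// cat concatenates its arguments into one space-separated string, skipping
+// any nil values.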
+func cat(v ...interface{}) string {
+ v = removeNilElements(v)
+ r := strings.TrimSpace(strings.Repeat("%v ", len(v)))
+ return fmt.Sprintf(r, v...)
+}
+
+func indent(spaces int, v string) string {
+ pad := strings.Repeat(" ", spaces)
+ return pad + strings.Replace(v, "\n", "\n"+pad, -1)
+}
+
+func nindent(spaces int, v string) string {
+ return "\n" + indent(spaces, v)
+}
+
+func replace(old, new, src string) string {
+ return strings.Replace(src, old, new, -1)
+}
+
+func plural(one, many string, count int) string {
+ if count == 1 {
+ return one
+ }
+ return many
+}
+
+func strslice(v interface{}) []string {
+ switch v := v.(type) {
+ case []string:
+ return v
+ case []interface{}:
+ b := make([]string, 0, len(v))
+ for _, s := range v {
+ if s != nil {
+ b = append(b, strval(s))
+ }
+ }
+ return b
+ default:
+ val := reflect.ValueOf(v)
+ switch val.Kind() {
+ case reflect.Array, reflect.Slice:
+ l := val.Len()
+ b := make([]string, 0, l)
+ for i := 0; i < l; i++ {
+ value := val.Index(i).Interface()
+ if value != nil {
+ b = append(b, strval(value))
+ }
+ }
+ return b
+ default:
+ if v == nil {
+ return []string{}
+ }
+
+ return []string{strval(v)}
+ }
+ }
+}
+
+func removeNilElements(v []interface{}) []interface{} {
+ newSlice := make([]interface{}, 0, len(v))
+ for _, i := range v {
+ if i != nil {
+ newSlice = append(newSlice, i)
+ }
+ }
+ return newSlice
+}
+
+func strval(v interface{}) string {
+ switch v := v.(type) {
+ case string:
+ return v
+ case []byte:
+ return string(v)
+ case error:
+ return v.Error()
+ case fmt.Stringer:
+ return v.String()
+ default:
+ return fmt.Sprintf("%v", v)
+ }
+}
+
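+// trunc truncates s to c characters; a negative c keeps the last |c|
+// characters instead, e.g. trunc(5, "hello world") == "hello" and
+// trunc(-5, "hello world") == "world".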
+func trunc(c int, s string) string {
+ if c < 0 && len(s)+c > 0 {
+ return s[len(s)+c:]
+ }
+ if c >= 0 && len(s) > c {
+ return s[:c]
+ }
+ return s
+}
+
+func join(sep string, v interface{}) string {
+ return strings.Join(strslice(v), sep)
+}
+
+func split(sep, orig string) map[string]string {
+ parts := strings.Split(orig, sep)
+ res := make(map[string]string, len(parts))
+ for i, v := range parts {
+ res["_"+strconv.Itoa(i)] = v
+ }
+ return res
+}
+
+func splitn(sep string, n int, orig string) map[string]string {
+ parts := strings.SplitN(orig, sep, n)
+ res := make(map[string]string, len(parts))
+ for i, v := range parts {
+ res["_"+strconv.Itoa(i)] = v
+ }
+ return res
+}
+
+// substring creates a substring of the given string.
+//
+// If start is < 0, this calls string[:end].
+//
+// If start is >= 0 and end < 0 or end is greater than the length of s, this calls string[start:]
+//
+// Otherwise, this calls string[start:end].
+func substring(start, end int, s string) string {
+ if start < 0 {
+ return s[:end]
+ }
+ if end < 0 || end > len(s) {
+ return s[start:]
+ }
+ return s[start:end]
+}
diff --git a/vendor/github.com/Masterminds/sprig/v3/url.go b/vendor/github.com/Masterminds/sprig/v3/url.go
new file mode 100644
index 000000000..b8e120e19
--- /dev/null
+++ b/vendor/github.com/Masterminds/sprig/v3/url.go
@@ -0,0 +1,66 @@
+package sprig
+
+import (
+ "fmt"
+ "net/url"
+ "reflect"
+)
+
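+// dictGetOrEmpty returns dict[key] as a string, or "" when the key is
+// absent; it panics if a value is present but is not a string.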
+func dictGetOrEmpty(dict map[string]interface{}, key string) string {
+ value, ok := dict[key]
+ if !ok {
+ return ""
+ }
+ tp := reflect.TypeOf(value).Kind()
+ if tp != reflect.String {
+ panic(fmt.Sprintf("unable to parse %s key, must be of type string, but %s found", key, tp.String()))
+ }
+ return reflect.ValueOf(value).String()
+}
+
+// urlParse parses the given URL and returns its components as a dict
+func urlParse(v string) map[string]interface{} {
+ dict := map[string]interface{}{}
+ parsedURL, err := url.Parse(v)
+ if err != nil {
+ panic(fmt.Sprintf("unable to parse url: %s", err))
+ }
+ dict["scheme"] = parsedURL.Scheme
+ dict["host"] = parsedURL.Host
+ dict["hostname"] = parsedURL.Hostname()
+ dict["path"] = parsedURL.Path
+ dict["query"] = parsedURL.RawQuery
+ dict["opaque"] = parsedURL.Opaque
+ dict["fragment"] = parsedURL.Fragment
+ if parsedURL.User != nil {
+ dict["userinfo"] = parsedURL.User.String()
+ } else {
+ dict["userinfo"] = ""
+ }
+
+ return dict
+}
+
+// urlJoin assembles the given dict back into a URL string
+func urlJoin(d map[string]interface{}) string {
+ resURL := url.URL{
+ Scheme: dictGetOrEmpty(d, "scheme"),
+ Host: dictGetOrEmpty(d, "host"),
+ Path: dictGetOrEmpty(d, "path"),
+ RawQuery: dictGetOrEmpty(d, "query"),
+ Opaque: dictGetOrEmpty(d, "opaque"),
+ Fragment: dictGetOrEmpty(d, "fragment"),
+ }
+ userinfo := dictGetOrEmpty(d, "userinfo")
+ var user *url.Userinfo
+ if userinfo != "" {
+ tempURL, err := url.Parse(fmt.Sprintf("proto://%s@host", userinfo))
+ if err != nil {
+ panic(fmt.Sprintf("unable to parse userinfo in dict: %s", err))
+ }
+ user = tempURL.User
+ }
+
+ resURL.User = user
+ return resURL.String()
+}
diff --git a/vendor/github.com/Masterminds/squirrel/.gitignore b/vendor/github.com/Masterminds/squirrel/.gitignore
new file mode 100644
index 000000000..4a0699f0b
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/.gitignore
@@ -0,0 +1 @@
+squirrel.test
\ No newline at end of file
diff --git a/vendor/github.com/Masterminds/squirrel/.travis.yml b/vendor/github.com/Masterminds/squirrel/.travis.yml
new file mode 100644
index 000000000..7bb6da487
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/.travis.yml
@@ -0,0 +1,30 @@
+language: go
+
+go:
+ - 1.11.x
+ - 1.12.x
+ - 1.13.x
+
+services:
+ - mysql
+ - postgresql
+
+# Setting sudo access to false will let Travis CI use containers rather than
+# VMs to run the tests. For more details see:
+# - http://docs.travis-ci.com/user/workers/container-based-infrastructure/
+# - http://docs.travis-ci.com/user/workers/standard-infrastructure/
+sudo: false
+
+before_script:
+ - mysql -e 'CREATE DATABASE squirrel;'
+ - psql -c 'CREATE DATABASE squirrel;' -U postgres
+
+script:
+ - go test
+ - cd integration
+ - go test -args -driver sqlite3
+ - go test -args -driver mysql -dataSource travis@/squirrel
+ - go test -args -driver postgres -dataSource 'postgres://postgres@localhost/squirrel?sslmode=disable'
+
+notifications:
+ irc: "irc.freenode.net#masterminds"
diff --git a/vendor/github.com/Masterminds/squirrel/LICENSE b/vendor/github.com/Masterminds/squirrel/LICENSE
new file mode 100644
index 000000000..b459007fd
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/LICENSE
@@ -0,0 +1,23 @@
+MIT License
+
+Squirrel: The Masterminds
+Copyright (c) 2014-2015, Lann Martin. Copyright (C) 2015-2016, Google. Copyright (C) 2015, Matt Farina and Matt Butcher.
+
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/vendor/github.com/Masterminds/squirrel/README.md b/vendor/github.com/Masterminds/squirrel/README.md
new file mode 100644
index 000000000..1d37f282f
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/README.md
@@ -0,0 +1,142 @@
+[![Stability: Maintenance](https://masterminds.github.io/stability/maintenance.svg)](https://masterminds.github.io/stability/maintenance.html)
+### Squirrel is "complete".
+Bug fixes will still be merged (slowly). Bug reports are welcome, but I will not necessarily respond to them. If another fork (or substantially similar project) actively improves on what Squirrel does, let me know and I may link to it here.
+
+
+# Squirrel - fluent SQL generator for Go
+
+```go
+import "github.com/Masterminds/squirrel"
+```
+
+
+[![GoDoc](https://godoc.org/github.com/Masterminds/squirrel?status.png)](https://godoc.org/github.com/Masterminds/squirrel)
+[![Build Status](https://api.travis-ci.org/Masterminds/squirrel.svg?branch=master)](https://travis-ci.org/Masterminds/squirrel)
+
+**Squirrel is not an ORM.** For an application of Squirrel, check out
+[structable, a table-struct mapper](https://github.com/Masterminds/structable).
+
+
+Squirrel helps you build SQL queries from composable parts:
+
+```go
+import sq "github.com/Masterminds/squirrel"
+
+users := sq.Select("*").From("users").Join("emails USING (email_id)")
+
+active := users.Where(sq.Eq{"deleted_at": nil})
+
+sql, args, err := active.ToSql()
+
+sql == "SELECT * FROM users JOIN emails USING (email_id) WHERE deleted_at IS NULL"
+```
+
+```go
+sql, args, err := sq.
+ Insert("users").Columns("name", "age").
+ Values("moe", 13).Values("larry", sq.Expr("? + 5", 12)).
+ ToSql()
+
+sql == "INSERT INTO users (name,age) VALUES (?,?),(?,? + 5)"
+```
+
+Squirrel can also execute queries directly:
+
+```go
+stooges := users.Where(sq.Eq{"username": []string{"moe", "larry", "curly", "shemp"}})
+three_stooges := stooges.Limit(3)
+rows, err := three_stooges.RunWith(db).Query()
+
+// Behaves like:
+rows, err := db.Query("SELECT * FROM users WHERE username IN (?,?,?,?) LIMIT 3",
+ "moe", "larry", "curly", "shemp")
+```
+
+Squirrel makes conditional query building a breeze:
+
+```go
+if len(q) > 0 {
+ users = users.Where("name LIKE ?", fmt.Sprint("%", q, "%"))
+}
+```
+
+Squirrel wants to make your life easier:
+
+```go
+// StmtCache caches Prepared Stmts for you
+dbCache := sq.NewStmtCache(db)
+
+// StatementBuilder keeps your syntax neat
+mydb := sq.StatementBuilder.RunWith(dbCache)
+select_users := mydb.Select("*").From("users")
+```
+
+Squirrel loves PostgreSQL:
+
+```go
+psql := sq.StatementBuilder.PlaceholderFormat(sq.Dollar)
+
+// You use question marks for placeholders...
+sql, _, _ := psql.Select("*").From("elephants").Where("name IN (?,?)", "Dumbo", "Verna").ToSql()
+
+// ...squirrel replaces them using PlaceholderFormat.
+sql == "SELECT * FROM elephants WHERE name IN ($1,$2)"
+
+
+// You can retrieve the id ...
+query := sq.Insert("nodes").
+ Columns("uuid", "type", "data").
+ Values(node.Uuid, node.Type, node.Data).
+ Suffix("RETURNING \"id\"").
+ RunWith(m.db).
+ PlaceholderFormat(sq.Dollar)
+
+query.QueryRow().Scan(&node.id)
+```
+
+You can escape question marks by inserting two question marks:
+
+```sql
+SELECT * FROM nodes WHERE meta->'format' ??| array[?,?]
+```
+
+which, with the Dollar placeholder format, generates:
+
+```sql
+SELECT * FROM nodes WHERE meta->'format' ?| array[$1,$2]
+```
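+
+For example, here is a sketch using the `psql` builder defined above (the JSON
+operator and values are only illustrative):
+
+```go
+sql, args, _ := psql.Select("*").From("nodes").
+    Where("meta->'format' ??| array[?,?]", "img", "png").ToSql()
+
+// sql == "SELECT * FROM nodes WHERE meta->'format' ?| array[$1,$2]"
+// args == []interface{}{"img", "png"}
+```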
+
+## FAQ
+
+* **How can I build an IN query on composite keys / tuples, e.g. `WHERE (col1, col2) IN ((1,2),(3,4))`? ([#104](https://github.com/Masterminds/squirrel/issues/104))**
+
+ Squirrel does not explicitly support tuples, but you can get the same effect with e.g.:
+
+ ```go
+ sq.Or{
+ sq.Eq{"col1": 1, "col2": 2},
+ sq.Eq{"col1": 3, "col2": 4}}
+ ```
+
+ ```sql
+ WHERE (col1 = 1 AND col2 = 2) OR (col1 = 3 AND col2 = 4)
+ ```
+
+ (which should produce the same query plan as the tuple version)
+
+* **Why doesn't `Eq{"mynumber": []uint8{1,2,3}}` turn into an `IN` query? ([#114](https://github.com/Masterminds/squirrel/issues/114))**
+
+ Values of type `[]byte` are handled specially by `database/sql`. In Go, [`byte` is just an alias of `uint8`](https://golang.org/pkg/builtin/#byte), so there is no way to distinguish `[]uint8` from `[]byte`.
+
+* **Some features are poorly documented!**
+
+ This isn't a frequent complaints section!
+
+* **Some features are poorly documented?**
+
+ Yes. The tests should be considered a part of the documentation; take a look at those for ideas on how to express more complex queries.
+
+## License
+
+Squirrel is released under the
+[MIT License](http://www.opensource.org/licenses/MIT).
diff --git a/vendor/github.com/Masterminds/squirrel/case.go b/vendor/github.com/Masterminds/squirrel/case.go
new file mode 100644
index 000000000..299e14b9d
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/case.go
@@ -0,0 +1,128 @@
+package squirrel
+
+import (
+ "bytes"
+ "errors"
+
+ "github.com/lann/builder"
+)
+
+func init() {
+ builder.Register(CaseBuilder{}, caseData{})
+}
+
+// sqlizerBuffer is a helper that allows many Sqlizers to be written one by
+// one without checking for errors after each write
+type sqlizerBuffer struct {
+ bytes.Buffer
+ args []interface{}
+ err error
+}
+
+// WriteSql converts a Sqlizer to its SQL string and writes it to the buffer
+func (b *sqlizerBuffer) WriteSql(item Sqlizer) {
+ if b.err != nil {
+ return
+ }
+
+ var str string
+ var args []interface{}
+ str, args, b.err = nestedToSql(item)
+
+ if b.err != nil {
+ return
+ }
+
+ b.WriteString(str)
+ b.WriteByte(' ')
+ b.args = append(b.args, args...)
+}
+
+func (b *sqlizerBuffer) ToSql() (string, []interface{}, error) {
+ return b.String(), b.args, b.err
+}
+
+// whenPart is a helper structure describing SQL's "WHEN ... THEN ..." expression
+type whenPart struct {
+ when Sqlizer
+ then Sqlizer
+}
+
+func newWhenPart(when interface{}, then interface{}) whenPart {
+ return whenPart{newPart(when), newPart(then)}
+}
+
+// caseData holds all the data required to build a CASE SQL construct
+type caseData struct {
+ What Sqlizer
+ WhenParts []whenPart
+ Else Sqlizer
+}
+
+// ToSql implements Sqlizer
+func (d *caseData) ToSql() (sqlStr string, args []interface{}, err error) {
+ if len(d.WhenParts) == 0 {
+		err = errors.New("case expression must contain at least one WHEN clause")
+
+ return
+ }
+
+ sql := sqlizerBuffer{}
+
+ sql.WriteString("CASE ")
+ if d.What != nil {
+ sql.WriteSql(d.What)
+ }
+
+ for _, p := range d.WhenParts {
+ sql.WriteString("WHEN ")
+ sql.WriteSql(p.when)
+ sql.WriteString("THEN ")
+ sql.WriteSql(p.then)
+ }
+
+ if d.Else != nil {
+ sql.WriteString("ELSE ")
+ sql.WriteSql(d.Else)
+ }
+
+ sql.WriteString("END")
+
+ return sql.ToSql()
+}
+
+// CaseBuilder builds a SQL CASE construct which can be used as part of a query.
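+//
+// Ex (a sketch; the Case starter function lives elsewhere in this package):
+//     Case("number").When("1", "one").When("2", "two").Else("big")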
+type CaseBuilder builder.Builder
+
+// ToSql builds the query into a SQL string and bound args.
+func (b CaseBuilder) ToSql() (string, []interface{}, error) {
+ data := builder.GetStruct(b).(caseData)
+ return data.ToSql()
+}
+
+// MustSql builds the query into a SQL string and bound args.
+// It panics if there are any errors.
+func (b CaseBuilder) MustSql() (string, []interface{}) {
+ sql, args, err := b.ToSql()
+ if err != nil {
+ panic(err)
+ }
+ return sql, args
+}
+
+// what sets the optional value for the CASE construct "CASE [value] ..."
+func (b CaseBuilder) what(expr interface{}) CaseBuilder {
+ return builder.Set(b, "What", newPart(expr)).(CaseBuilder)
+}
+
+// When adds "WHEN ... THEN ..." part to CASE construct
+func (b CaseBuilder) When(when interface{}, then interface{}) CaseBuilder {
+ // TODO: performance hint: replace slice of WhenPart with just slice of parts
+ // where even indices of the slice belong to "when"s and odd indices belong to "then"s
+ return builder.Append(b, "WhenParts", newWhenPart(when, then)).(CaseBuilder)
+}
+
+// Else sets the optional "ELSE ..." part for the CASE construct
+func (b CaseBuilder) Else(expr interface{}) CaseBuilder {
+ return builder.Set(b, "Else", newPart(expr)).(CaseBuilder)
+}
diff --git a/vendor/github.com/Masterminds/squirrel/delete.go b/vendor/github.com/Masterminds/squirrel/delete.go
new file mode 100644
index 000000000..f3f31e63e
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/delete.go
@@ -0,0 +1,191 @@
+package squirrel
+
+import (
+ "bytes"
+ "database/sql"
+ "fmt"
+ "strings"
+
+ "github.com/lann/builder"
+)
+
+type deleteData struct {
+ PlaceholderFormat PlaceholderFormat
+ RunWith BaseRunner
+ Prefixes []Sqlizer
+ From string
+ WhereParts []Sqlizer
+ OrderBys []string
+ Limit string
+ Offset string
+ Suffixes []Sqlizer
+}
+
+func (d *deleteData) Exec() (sql.Result, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ return ExecWith(d.RunWith, d)
+}
+
+func (d *deleteData) ToSql() (sqlStr string, args []interface{}, err error) {
+ if len(d.From) == 0 {
+ err = fmt.Errorf("delete statements must specify a From table")
+ return
+ }
+
+ sql := &bytes.Buffer{}
+
+ if len(d.Prefixes) > 0 {
+ args, err = appendToSql(d.Prefixes, sql, " ", args)
+ if err != nil {
+ return
+ }
+
+ sql.WriteString(" ")
+ }
+
+ sql.WriteString("DELETE FROM ")
+ sql.WriteString(d.From)
+
+ if len(d.WhereParts) > 0 {
+ sql.WriteString(" WHERE ")
+ args, err = appendToSql(d.WhereParts, sql, " AND ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ if len(d.OrderBys) > 0 {
+ sql.WriteString(" ORDER BY ")
+ sql.WriteString(strings.Join(d.OrderBys, ", "))
+ }
+
+ if len(d.Limit) > 0 {
+ sql.WriteString(" LIMIT ")
+ sql.WriteString(d.Limit)
+ }
+
+ if len(d.Offset) > 0 {
+ sql.WriteString(" OFFSET ")
+ sql.WriteString(d.Offset)
+ }
+
+ if len(d.Suffixes) > 0 {
+ sql.WriteString(" ")
+ args, err = appendToSql(d.Suffixes, sql, " ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ sqlStr, err = d.PlaceholderFormat.ReplacePlaceholders(sql.String())
+ return
+}
+
+// Builder
+
+// DeleteBuilder builds SQL DELETE statements.
+type DeleteBuilder builder.Builder
+
+func init() {
+ builder.Register(DeleteBuilder{}, deleteData{})
+}
+
+// Format methods
+
+// PlaceholderFormat sets PlaceholderFormat (e.g. Question or Dollar) for the
+// query.
+func (b DeleteBuilder) PlaceholderFormat(f PlaceholderFormat) DeleteBuilder {
+ return builder.Set(b, "PlaceholderFormat", f).(DeleteBuilder)
+}
+
+// Runner methods
+
+// RunWith sets a Runner (like database/sql.DB) to be used with e.g. Exec.
+func (b DeleteBuilder) RunWith(runner BaseRunner) DeleteBuilder {
+ return setRunWith(b, runner).(DeleteBuilder)
+}
+
+// Exec builds and Execs the query with the Runner set by RunWith.
+func (b DeleteBuilder) Exec() (sql.Result, error) {
+ data := builder.GetStruct(b).(deleteData)
+ return data.Exec()
+}
+
+// SQL methods
+
+// ToSql builds the query into a SQL string and bound args.
+func (b DeleteBuilder) ToSql() (string, []interface{}, error) {
+ data := builder.GetStruct(b).(deleteData)
+ return data.ToSql()
+}
+
+// MustSql builds the query into a SQL string and bound args.
+// It panics if there are any errors.
+func (b DeleteBuilder) MustSql() (string, []interface{}) {
+ sql, args, err := b.ToSql()
+ if err != nil {
+ panic(err)
+ }
+ return sql, args
+}
+
+// Prefix adds an expression to the beginning of the query
+func (b DeleteBuilder) Prefix(sql string, args ...interface{}) DeleteBuilder {
+ return b.PrefixExpr(Expr(sql, args...))
+}
+
+// PrefixExpr adds an expression to the very beginning of the query
+func (b DeleteBuilder) PrefixExpr(expr Sqlizer) DeleteBuilder {
+ return builder.Append(b, "Prefixes", expr).(DeleteBuilder)
+}
+
+// From sets the table to be deleted from.
+func (b DeleteBuilder) From(from string) DeleteBuilder {
+ return builder.Set(b, "From", from).(DeleteBuilder)
+}
+
+// Where adds WHERE expressions to the query.
+//
+// See SelectBuilder.Where for more information.
+func (b DeleteBuilder) Where(pred interface{}, args ...interface{}) DeleteBuilder {
+ return builder.Append(b, "WhereParts", newWherePart(pred, args...)).(DeleteBuilder)
+}
+
+// OrderBy adds ORDER BY expressions to the query.
+func (b DeleteBuilder) OrderBy(orderBys ...string) DeleteBuilder {
+ return builder.Extend(b, "OrderBys", orderBys).(DeleteBuilder)
+}
+
+// Limit sets a LIMIT clause on the query.
+func (b DeleteBuilder) Limit(limit uint64) DeleteBuilder {
+ return builder.Set(b, "Limit", fmt.Sprintf("%d", limit)).(DeleteBuilder)
+}
+
+// Offset sets an OFFSET clause on the query.
+func (b DeleteBuilder) Offset(offset uint64) DeleteBuilder {
+ return builder.Set(b, "Offset", fmt.Sprintf("%d", offset)).(DeleteBuilder)
+}
+
+// Suffix adds an expression to the end of the query
+func (b DeleteBuilder) Suffix(sql string, args ...interface{}) DeleteBuilder {
+ return b.SuffixExpr(Expr(sql, args...))
+}
+
+// SuffixExpr adds an expression to the end of the query
+func (b DeleteBuilder) SuffixExpr(expr Sqlizer) DeleteBuilder {
+ return builder.Append(b, "Suffixes", expr).(DeleteBuilder)
+}
+
+func (b DeleteBuilder) Query() (*sql.Rows, error) {
+ data := builder.GetStruct(b).(deleteData)
+ return data.Query()
+}
+
+func (d *deleteData) Query() (*sql.Rows, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ return QueryWith(d.RunWith, d)
+}
diff --git a/vendor/github.com/Masterminds/squirrel/delete_ctx.go b/vendor/github.com/Masterminds/squirrel/delete_ctx.go
new file mode 100644
index 000000000..de83c55df
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/delete_ctx.go
@@ -0,0 +1,69 @@
+// +build go1.8
+
+package squirrel
+
+import (
+ "context"
+ "database/sql"
+
+ "github.com/lann/builder"
+)
+
+func (d *deleteData) ExecContext(ctx context.Context) (sql.Result, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ ctxRunner, ok := d.RunWith.(ExecerContext)
+ if !ok {
+ return nil, NoContextSupport
+ }
+ return ExecContextWith(ctx, ctxRunner, d)
+}
+
+func (d *deleteData) QueryContext(ctx context.Context) (*sql.Rows, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ ctxRunner, ok := d.RunWith.(QueryerContext)
+ if !ok {
+ return nil, NoContextSupport
+ }
+ return QueryContextWith(ctx, ctxRunner, d)
+}
+
+func (d *deleteData) QueryRowContext(ctx context.Context) RowScanner {
+ if d.RunWith == nil {
+ return &Row{err: RunnerNotSet}
+ }
+ queryRower, ok := d.RunWith.(QueryRowerContext)
+ if !ok {
+ if _, ok := d.RunWith.(QueryerContext); !ok {
+ return &Row{err: RunnerNotQueryRunner}
+ }
+ return &Row{err: NoContextSupport}
+ }
+ return QueryRowContextWith(ctx, queryRower, d)
+}
+
+// ExecContext builds and ExecContexts the query with the Runner set by RunWith.
+func (b DeleteBuilder) ExecContext(ctx context.Context) (sql.Result, error) {
+ data := builder.GetStruct(b).(deleteData)
+ return data.ExecContext(ctx)
+}
+
+// QueryContext builds and QueryContexts the query with the Runner set by RunWith.
+func (b DeleteBuilder) QueryContext(ctx context.Context) (*sql.Rows, error) {
+ data := builder.GetStruct(b).(deleteData)
+ return data.QueryContext(ctx)
+}
+
+// QueryRowContext builds and QueryRowContexts the query with the Runner set by RunWith.
+func (b DeleteBuilder) QueryRowContext(ctx context.Context) RowScanner {
+ data := builder.GetStruct(b).(deleteData)
+ return data.QueryRowContext(ctx)
+}
+
+// ScanContext is a shortcut for QueryRowContext().Scan.
+func (b DeleteBuilder) ScanContext(ctx context.Context, dest ...interface{}) error {
+ return b.QueryRowContext(ctx).Scan(dest...)
+}
diff --git a/vendor/github.com/Masterminds/squirrel/expr.go b/vendor/github.com/Masterminds/squirrel/expr.go
new file mode 100644
index 000000000..eba1b4579
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/expr.go
@@ -0,0 +1,419 @@
+package squirrel
+
+import (
+ "bytes"
+ "database/sql/driver"
+ "fmt"
+ "reflect"
+ "sort"
+ "strings"
+)
+
+const (
+ // Portable true/false literals.
+ sqlTrue = "(1=1)"
+ sqlFalse = "(1=0)"
+)
+
+type expr struct {
+ sql string
+ args []interface{}
+}
+
+// Expr builds an expression from a SQL fragment and arguments.
+//
+// Ex:
+// Expr("FROM_UNIXTIME(?)", t)
+func Expr(sql string, args ...interface{}) Sqlizer {
+ return expr{sql: sql, args: args}
+}
+
+func (e expr) ToSql() (sql string, args []interface{}, err error) {
+ simple := true
+ for _, arg := range e.args {
+ if _, ok := arg.(Sqlizer); ok {
+ simple = false
+ }
+ }
+ if simple {
+ return e.sql, e.args, nil
+ }
+
+ buf := &bytes.Buffer{}
+ ap := e.args
+ sp := e.sql
+
+ var isql string
+ var iargs []interface{}
+
+ for err == nil && len(ap) > 0 && len(sp) > 0 {
+ i := strings.Index(sp, "?")
+ if i < 0 {
+ // no more placeholders
+ break
+ }
+ if len(sp) > i+1 && sp[i+1:i+2] == "?" {
+ // escaped "??"; append it and step past
+ buf.WriteString(sp[:i+2])
+ sp = sp[i+2:]
+ continue
+ }
+
+ if as, ok := ap[0].(Sqlizer); ok {
+ // sqlizer argument; expand it and append the result
+ isql, iargs, err = as.ToSql()
+ buf.WriteString(sp[:i])
+ buf.WriteString(isql)
+ args = append(args, iargs...)
+ } else {
+ // normal argument; append it and the placeholder
+ buf.WriteString(sp[:i+1])
+ args = append(args, ap[0])
+ }
+
+ // step past the argument and placeholder
+ ap = ap[1:]
+ sp = sp[i+1:]
+ }
+
+ // append the remaining sql and arguments
+ buf.WriteString(sp)
+ return buf.String(), append(args, ap...), err
+}
+
+type concatExpr []interface{}
+
+func (ce concatExpr) ToSql() (sql string, args []interface{}, err error) {
+ for _, part := range ce {
+ switch p := part.(type) {
+ case string:
+ sql += p
+ case Sqlizer:
+ pSql, pArgs, err := p.ToSql()
+ if err != nil {
+ return "", nil, err
+ }
+ sql += pSql
+ args = append(args, pArgs...)
+ default:
+ return "", nil, fmt.Errorf("%#v is not a string or Sqlizer", part)
+ }
+ }
+ return
+}
+
+// ConcatExpr builds an expression by concatenating strings and other expressions.
+//
+// Ex:
+// name_expr := Expr("CONCAT(?, ' ', ?)", firstName, lastName)
+// ConcatExpr("COALESCE(full_name,", name_expr, ")")
+func ConcatExpr(parts ...interface{}) concatExpr {
+ return concatExpr(parts)
+}
+
+// aliasExpr helps to alias part of SQL query generated with underlying "expr"
+type aliasExpr struct {
+ expr Sqlizer
+ alias string
+}
+
+// Alias allows an alias to be defined for a column in a SelectBuilder. Useful
+// when a column is defined as a complex expression like IF or CASE
+// Ex:
+// .Column(Alias(caseStmt, "case_column"))
+func Alias(expr Sqlizer, alias string) aliasExpr {
+ return aliasExpr{expr, alias}
+}
+
+func (e aliasExpr) ToSql() (sql string, args []interface{}, err error) {
+ sql, args, err = e.expr.ToSql()
+ if err == nil {
+ sql = fmt.Sprintf("(%s) AS %s", sql, e.alias)
+ }
+ return
+}
+
+// Eq is syntactic sugar for use with Where/Having/Set methods.
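+// Ex:
+//     .Where(Eq{"id": 1}) == "id = 1"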
+type Eq map[string]interface{}
+
+func (eq Eq) toSQL(useNotOpr bool) (sql string, args []interface{}, err error) {
+ if len(eq) == 0 {
+ // Empty Sql{} evaluates to true.
+ sql = sqlTrue
+ return
+ }
+
+ var (
+ exprs []string
+ equalOpr = "="
+ inOpr = "IN"
+ nullOpr = "IS"
+ inEmptyExpr = sqlFalse
+ )
+
+ if useNotOpr {
+ equalOpr = "<>"
+ inOpr = "NOT IN"
+ nullOpr = "IS NOT"
+ inEmptyExpr = sqlTrue
+ }
+
+ sortedKeys := getSortedKeys(eq)
+ for _, key := range sortedKeys {
+ var expr string
+ val := eq[key]
+
+ switch v := val.(type) {
+ case driver.Valuer:
+ if val, err = v.Value(); err != nil {
+ return
+ }
+ }
+
+ r := reflect.ValueOf(val)
+ if r.Kind() == reflect.Ptr {
+ if r.IsNil() {
+ val = nil
+ } else {
+ val = r.Elem().Interface()
+ }
+ }
+
+ if val == nil {
+ expr = fmt.Sprintf("%s %s NULL", key, nullOpr)
+ } else {
+ if isListType(val) {
+ valVal := reflect.ValueOf(val)
+ if valVal.Len() == 0 {
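+					// An empty IN list can never match, so emit the portable
+					// FALSE literal (or TRUE for NOT IN).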
+ expr = inEmptyExpr
+ if args == nil {
+ args = []interface{}{}
+ }
+ } else {
+ for i := 0; i < valVal.Len(); i++ {
+ args = append(args, valVal.Index(i).Interface())
+ }
+ expr = fmt.Sprintf("%s %s (%s)", key, inOpr, Placeholders(valVal.Len()))
+ }
+ } else {
+ expr = fmt.Sprintf("%s %s ?", key, equalOpr)
+ args = append(args, val)
+ }
+ }
+ exprs = append(exprs, expr)
+ }
+ sql = strings.Join(exprs, " AND ")
+ return
+}
+
+func (eq Eq) ToSql() (sql string, args []interface{}, err error) {
+ return eq.toSQL(false)
+}
+
+// NotEq is syntactic sugar for use with Where/Having/Set methods.
+// Ex:
+// .Where(NotEq{"id": 1}) == "id <> 1"
+type NotEq Eq
+
+func (neq NotEq) ToSql() (sql string, args []interface{}, err error) {
+ return Eq(neq).toSQL(true)
+}
+
+// Like is syntactic sugar for use with LIKE conditions.
+// Ex:
+// .Where(Like{"name": "%irrel"})
+type Like map[string]interface{}
+
+func (lk Like) toSql(opr string) (sql string, args []interface{}, err error) {
+ var exprs []string
+ for key, val := range lk {
+ expr := ""
+
+ switch v := val.(type) {
+ case driver.Valuer:
+ if val, err = v.Value(); err != nil {
+ return
+ }
+ }
+
+ if val == nil {
+ err = fmt.Errorf("cannot use null with like operators")
+ return
+ } else {
+ if isListType(val) {
+ err = fmt.Errorf("cannot use array or slice with like operators")
+ return
+ } else {
+ expr = fmt.Sprintf("%s %s ?", key, opr)
+ args = append(args, val)
+ }
+ }
+ exprs = append(exprs, expr)
+ }
+ sql = strings.Join(exprs, " AND ")
+ return
+}
+
+func (lk Like) ToSql() (sql string, args []interface{}, err error) {
+ return lk.toSql("LIKE")
+}
+
+// NotLike is syntactic sugar for use with LIKE conditions.
+// Ex:
+// .Where(NotLike{"name": "%irrel"})
+type NotLike Like
+
+func (nlk NotLike) ToSql() (sql string, args []interface{}, err error) {
+ return Like(nlk).toSql("NOT LIKE")
+}
+
+// ILike is syntactic sugar for use with ILIKE conditions.
+// Ex:
+// .Where(ILike{"name": "sq%"})
+type ILike Like
+
+func (ilk ILike) ToSql() (sql string, args []interface{}, err error) {
+ return Like(ilk).toSql("ILIKE")
+}
+
+// NotILike is syntactic sugar for use with ILIKE conditions.
+// Ex:
+// .Where(NotILike{"name": "sq%"})
+type NotILike Like
+
+func (nilk NotILike) ToSql() (sql string, args []interface{}, err error) {
+ return Like(nilk).toSql("NOT ILIKE")
+}
+
+// Lt is syntactic sugar for use with Where/Having/Set methods.
+// Ex:
+// .Where(Lt{"id": 1})
+type Lt map[string]interface{}
+
+func (lt Lt) toSql(opposite, orEq bool) (sql string, args []interface{}, err error) {
+ var (
+ exprs []string
+ opr = "<"
+ )
+
+ if opposite {
+ opr = ">"
+ }
+
+ if orEq {
+ opr = fmt.Sprintf("%s%s", opr, "=")
+ }
+
+ sortedKeys := getSortedKeys(lt)
+ for _, key := range sortedKeys {
+ var expr string
+ val := lt[key]
+
+ switch v := val.(type) {
+ case driver.Valuer:
+ if val, err = v.Value(); err != nil {
+ return
+ }
+ }
+
+ if val == nil {
+ err = fmt.Errorf("cannot use null with less than or greater than operators")
+ return
+ }
+ if isListType(val) {
+ err = fmt.Errorf("cannot use array or slice with less than or greater than operators")
+ return
+ }
+ expr = fmt.Sprintf("%s %s ?", key, opr)
+ args = append(args, val)
+
+ exprs = append(exprs, expr)
+ }
+ sql = strings.Join(exprs, " AND ")
+ return
+}
+
+func (lt Lt) ToSql() (sql string, args []interface{}, err error) {
+ return lt.toSql(false, false)
+}
+
+// LtOrEq is syntactic sugar for use with Where/Having/Set methods.
+// Ex:
+// .Where(LtOrEq{"id": 1}) == "id <= 1"
+type LtOrEq Lt
+
+func (ltOrEq LtOrEq) ToSql() (sql string, args []interface{}, err error) {
+ return Lt(ltOrEq).toSql(false, true)
+}
+
+// Gt is syntactic sugar for use with Where/Having/Set methods.
+// Ex:
+// .Where(Gt{"id": 1}) == "id > 1"
+type Gt Lt
+
+func (gt Gt) ToSql() (sql string, args []interface{}, err error) {
+ return Lt(gt).toSql(true, false)
+}
+
+// GtOrEq is syntactic sugar for use with Where/Having/Set methods.
+// Ex:
+// .Where(GtOrEq{"id": 1}) == "id >= 1"
+type GtOrEq Lt
+
+func (gtOrEq GtOrEq) ToSql() (sql string, args []interface{}, err error) {
+ return Lt(gtOrEq).toSql(true, true)
+}
+
+type conj []Sqlizer
+
+func (c conj) join(sep, defaultExpr string) (sql string, args []interface{}, err error) {
+ if len(c) == 0 {
+ return defaultExpr, []interface{}{}, nil
+ }
+ var sqlParts []string
+ for _, sqlizer := range c {
+ partSQL, partArgs, err := nestedToSql(sqlizer)
+ if err != nil {
+ return "", nil, err
+ }
+ if partSQL != "" {
+ sqlParts = append(sqlParts, partSQL)
+ args = append(args, partArgs...)
+ }
+ }
+ if len(sqlParts) > 0 {
+ sql = fmt.Sprintf("(%s)", strings.Join(sqlParts, sep))
+ }
+ return
+}
+
+// And conjunction Sqlizers
+type And conj
+
+func (a And) ToSql() (string, []interface{}, error) {
+ return conj(a).join(" AND ", sqlTrue)
+}
+
+// Or conjunction Sqlizers
+type Or conj
+
+func (o Or) ToSql() (string, []interface{}, error) {
+ return conj(o).join(" OR ", sqlFalse)
+}
+
+func getSortedKeys(exp map[string]interface{}) []string {
+ sortedKeys := make([]string, 0, len(exp))
+ for k := range exp {
+ sortedKeys = append(sortedKeys, k)
+ }
+ sort.Strings(sortedKeys)
+ return sortedKeys
+}
+
+func isListType(val interface{}) bool {
+ if driver.IsValue(val) {
+ return false
+ }
+ valVal := reflect.ValueOf(val)
+ return valVal.Kind() == reflect.Array || valVal.Kind() == reflect.Slice
+}
diff --git a/vendor/github.com/Masterminds/squirrel/insert.go b/vendor/github.com/Masterminds/squirrel/insert.go
new file mode 100644
index 000000000..c23a57935
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/insert.go
@@ -0,0 +1,298 @@
+package squirrel
+
+import (
+ "bytes"
+ "database/sql"
+ "errors"
+ "fmt"
+ "io"
+ "sort"
+ "strings"
+
+ "github.com/lann/builder"
+)
+
+type insertData struct {
+ PlaceholderFormat PlaceholderFormat
+ RunWith BaseRunner
+ Prefixes []Sqlizer
+ StatementKeyword string
+ Options []string
+ Into string
+ Columns []string
+ Values [][]interface{}
+ Suffixes []Sqlizer
+ Select *SelectBuilder
+}
+
+func (d *insertData) Exec() (sql.Result, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ return ExecWith(d.RunWith, d)
+}
+
+func (d *insertData) Query() (*sql.Rows, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ return QueryWith(d.RunWith, d)
+}
+
+func (d *insertData) QueryRow() RowScanner {
+ if d.RunWith == nil {
+ return &Row{err: RunnerNotSet}
+ }
+ queryRower, ok := d.RunWith.(QueryRower)
+ if !ok {
+ return &Row{err: RunnerNotQueryRunner}
+ }
+ return QueryRowWith(queryRower, d)
+}
+
+func (d *insertData) ToSql() (sqlStr string, args []interface{}, err error) {
+ if len(d.Into) == 0 {
+ err = errors.New("insert statements must specify a table")
+ return
+ }
+ if len(d.Values) == 0 && d.Select == nil {
+ err = errors.New("insert statements must have at least one set of values or select clause")
+ return
+ }
+
+ sql := &bytes.Buffer{}
+
+ if len(d.Prefixes) > 0 {
+ args, err = appendToSql(d.Prefixes, sql, " ", args)
+ if err != nil {
+ return
+ }
+
+ sql.WriteString(" ")
+ }
+
+ if d.StatementKeyword == "" {
+ sql.WriteString("INSERT ")
+ } else {
+ sql.WriteString(d.StatementKeyword)
+ sql.WriteString(" ")
+ }
+
+ if len(d.Options) > 0 {
+ sql.WriteString(strings.Join(d.Options, " "))
+ sql.WriteString(" ")
+ }
+
+ sql.WriteString("INTO ")
+ sql.WriteString(d.Into)
+ sql.WriteString(" ")
+
+ if len(d.Columns) > 0 {
+ sql.WriteString("(")
+ sql.WriteString(strings.Join(d.Columns, ","))
+ sql.WriteString(") ")
+ }
+
+ if d.Select != nil {
+ args, err = d.appendSelectToSQL(sql, args)
+ } else {
+ args, err = d.appendValuesToSQL(sql, args)
+ }
+ if err != nil {
+ return
+ }
+
+ if len(d.Suffixes) > 0 {
+ sql.WriteString(" ")
+ args, err = appendToSql(d.Suffixes, sql, " ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ sqlStr, err = d.PlaceholderFormat.ReplacePlaceholders(sql.String())
+ return
+}
+
+func (d *insertData) appendValuesToSQL(w io.Writer, args []interface{}) ([]interface{}, error) {
+ if len(d.Values) == 0 {
+ return args, errors.New("values for insert statements are not set")
+ }
+
+ io.WriteString(w, "VALUES ")
+
+ valuesStrings := make([]string, len(d.Values))
+ for r, row := range d.Values {
+ valueStrings := make([]string, len(row))
+ for v, val := range row {
+ if vs, ok := val.(Sqlizer); ok {
+ vsql, vargs, err := vs.ToSql()
+ if err != nil {
+ return nil, err
+ }
+ valueStrings[v] = vsql
+ args = append(args, vargs...)
+ } else {
+ valueStrings[v] = "?"
+ args = append(args, val)
+ }
+ }
+ valuesStrings[r] = fmt.Sprintf("(%s)", strings.Join(valueStrings, ","))
+ }
+
+ io.WriteString(w, strings.Join(valuesStrings, ","))
+
+ return args, nil
+}
+
+func (d *insertData) appendSelectToSQL(w io.Writer, args []interface{}) ([]interface{}, error) {
+ if d.Select == nil {
+ return args, errors.New("select clause for insert statements are not set")
+ }
+
+ selectClause, sArgs, err := d.Select.ToSql()
+ if err != nil {
+ return args, err
+ }
+
+ io.WriteString(w, selectClause)
+ args = append(args, sArgs...)
+
+ return args, nil
+}
+
+// Builder
+
+// InsertBuilder builds SQL INSERT statements.
+type InsertBuilder builder.Builder
+
+func init() {
+ builder.Register(InsertBuilder{}, insertData{})
+}
+
+// Format methods
+
+// PlaceholderFormat sets PlaceholderFormat (e.g. Question or Dollar) for the
+// query.
+func (b InsertBuilder) PlaceholderFormat(f PlaceholderFormat) InsertBuilder {
+ return builder.Set(b, "PlaceholderFormat", f).(InsertBuilder)
+}
+
+// Runner methods
+
+// RunWith sets a Runner (like database/sql.DB) to be used with e.g. Exec.
+func (b InsertBuilder) RunWith(runner BaseRunner) InsertBuilder {
+ return setRunWith(b, runner).(InsertBuilder)
+}
+
+// Exec builds and Execs the query with the Runner set by RunWith.
+func (b InsertBuilder) Exec() (sql.Result, error) {
+ data := builder.GetStruct(b).(insertData)
+ return data.Exec()
+}
+
+// Query builds and Querys the query with the Runner set by RunWith.
+func (b InsertBuilder) Query() (*sql.Rows, error) {
+ data := builder.GetStruct(b).(insertData)
+ return data.Query()
+}
+
+// QueryRow builds and QueryRows the query with the Runner set by RunWith.
+func (b InsertBuilder) QueryRow() RowScanner {
+ data := builder.GetStruct(b).(insertData)
+ return data.QueryRow()
+}
+
+// Scan is a shortcut for QueryRow().Scan.
+func (b InsertBuilder) Scan(dest ...interface{}) error {
+ return b.QueryRow().Scan(dest...)
+}
+
+// SQL methods
+
+// ToSql builds the query into a SQL string and bound args.
+func (b InsertBuilder) ToSql() (string, []interface{}, error) {
+ data := builder.GetStruct(b).(insertData)
+ return data.ToSql()
+}
+
+// MustSql builds the query into a SQL string and bound args.
+// It panics if there are any errors.
+func (b InsertBuilder) MustSql() (string, []interface{}) {
+ sql, args, err := b.ToSql()
+ if err != nil {
+ panic(err)
+ }
+ return sql, args
+}
+
+// Prefix adds an expression to the beginning of the query
+func (b InsertBuilder) Prefix(sql string, args ...interface{}) InsertBuilder {
+ return b.PrefixExpr(Expr(sql, args...))
+}
+
+// PrefixExpr adds an expression to the very beginning of the query
+func (b InsertBuilder) PrefixExpr(expr Sqlizer) InsertBuilder {
+ return builder.Append(b, "Prefixes", expr).(InsertBuilder)
+}
+
+// Options adds keyword options before the INTO clause of the query.
+func (b InsertBuilder) Options(options ...string) InsertBuilder {
+ return builder.Extend(b, "Options", options).(InsertBuilder)
+}
+
+// Into sets the INTO clause of the query.
+func (b InsertBuilder) Into(from string) InsertBuilder {
+ return builder.Set(b, "Into", from).(InsertBuilder)
+}
+
+// Columns adds insert columns to the query.
+func (b InsertBuilder) Columns(columns ...string) InsertBuilder {
+ return builder.Extend(b, "Columns", columns).(InsertBuilder)
+}
+
+// Values adds a single row's values to the query.
+func (b InsertBuilder) Values(values ...interface{}) InsertBuilder {
+ return builder.Append(b, "Values", values).(InsertBuilder)
+}
+
+// Suffix adds an expression to the end of the query
+func (b InsertBuilder) Suffix(sql string, args ...interface{}) InsertBuilder {
+ return b.SuffixExpr(Expr(sql, args...))
+}
+
+// SuffixExpr adds an expression to the end of the query
+func (b InsertBuilder) SuffixExpr(expr Sqlizer) InsertBuilder {
+ return builder.Append(b, "Suffixes", expr).(InsertBuilder)
+}
+
+// SetMap sets the columns and values for the insert builder from a map of column names to values.
+// Note that it resets any columns and values previously set on the builder.
+func (b InsertBuilder) SetMap(clauses map[string]interface{}) InsertBuilder {
+ // Keep the columns in a consistent order by sorting the column key string.
+ cols := make([]string, 0, len(clauses))
+ for col := range clauses {
+ cols = append(cols, col)
+ }
+ sort.Strings(cols)
+
+ vals := make([]interface{}, 0, len(clauses))
+ for _, col := range cols {
+ vals = append(vals, clauses[col])
+ }
+
+ b = builder.Set(b, "Columns", cols).(InsertBuilder)
+ b = builder.Set(b, "Values", [][]interface{}{vals}).(InsertBuilder)
+
+ return b
+}
+
+// Select sets a SELECT clause for the insert query.
+// If both Values and Select are set, Select takes precedence.
+func (b InsertBuilder) Select(sb SelectBuilder) InsertBuilder {
+ return builder.Set(b, "Select", &sb).(InsertBuilder)
+}
+
+func (b InsertBuilder) statementKeyword(keyword string) InsertBuilder {
+ return builder.Set(b, "StatementKeyword", keyword).(InsertBuilder)
+}
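A short usage sketch for InsertBuilder (illustrative only, assuming `sq` is github.com/Masterminds/squirrel): Sqlizer values such as sq.Expr are spliced into the VALUES list, and SetMap sorts column names for deterministic output.

```go
package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	// Multi-row insert; the sq.Expr value is inlined and its args appended in order.
	sql, args, _ := sq.Insert("users").
		Columns("name", "age").
		Values("moe", 13).
		Values("larry", sq.Expr("? + 5", 12)).
		ToSql()
	fmt.Println(sql)  // INSERT INTO users (name,age) VALUES (?,?),(?,? + 5)
	fmt.Println(args) // [moe 13 larry 12]

	// SetMap replaces any previously set columns/values and sorts the keys.
	sql, args, _ = sq.Insert("users").SetMap(sq.Eq{"age": 30, "name": "curly"}).ToSql()
	fmt.Println(sql)  // INSERT INTO users (age,name) VALUES (?,?)
	fmt.Println(args) // [30 curly]
}
```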
diff --git a/vendor/github.com/Masterminds/squirrel/insert_ctx.go b/vendor/github.com/Masterminds/squirrel/insert_ctx.go
new file mode 100644
index 000000000..4541c2fed
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/insert_ctx.go
@@ -0,0 +1,69 @@
+// +build go1.8
+
+package squirrel
+
+import (
+ "context"
+ "database/sql"
+
+ "github.com/lann/builder"
+)
+
+func (d *insertData) ExecContext(ctx context.Context) (sql.Result, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ ctxRunner, ok := d.RunWith.(ExecerContext)
+ if !ok {
+ return nil, NoContextSupport
+ }
+ return ExecContextWith(ctx, ctxRunner, d)
+}
+
+func (d *insertData) QueryContext(ctx context.Context) (*sql.Rows, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ ctxRunner, ok := d.RunWith.(QueryerContext)
+ if !ok {
+ return nil, NoContextSupport
+ }
+ return QueryContextWith(ctx, ctxRunner, d)
+}
+
+func (d *insertData) QueryRowContext(ctx context.Context) RowScanner {
+ if d.RunWith == nil {
+ return &Row{err: RunnerNotSet}
+ }
+ queryRower, ok := d.RunWith.(QueryRowerContext)
+ if !ok {
+ if _, ok := d.RunWith.(QueryerContext); !ok {
+ return &Row{err: RunnerNotQueryRunner}
+ }
+ return &Row{err: NoContextSupport}
+ }
+ return QueryRowContextWith(ctx, queryRower, d)
+}
+
+// ExecContext builds and ExecContexts the query with the Runner set by RunWith.
+func (b InsertBuilder) ExecContext(ctx context.Context) (sql.Result, error) {
+ data := builder.GetStruct(b).(insertData)
+ return data.ExecContext(ctx)
+}
+
+// QueryContext builds and QueryContexts the query with the Runner set by RunWith.
+func (b InsertBuilder) QueryContext(ctx context.Context) (*sql.Rows, error) {
+ data := builder.GetStruct(b).(insertData)
+ return data.QueryContext(ctx)
+}
+
+// QueryRowContext builds and QueryRowContexts the query with the Runner set by RunWith.
+func (b InsertBuilder) QueryRowContext(ctx context.Context) RowScanner {
+ data := builder.GetStruct(b).(insertData)
+ return data.QueryRowContext(ctx)
+}
+
+// ScanContext is a shortcut for QueryRowContext().Scan.
+func (b InsertBuilder) ScanContext(ctx context.Context, dest ...interface{}) error {
+ return b.QueryRowContext(ctx).Scan(dest...)
+}
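A hedged sketch of the context-aware execution path added above: RunWith accepts a *sql.DB (which satisfies StdSqlCtx), so ExecContext honors the caller's deadline. The package, table, and column names are hypothetical.

```go
package example

import (
	"context"
	"database/sql"
	"time"

	sq "github.com/Masterminds/squirrel"
)

// insertUser runs an INSERT with a 2-second deadline via ExecContext.
func insertUser(db *sql.DB, name string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	_, err := sq.Insert("users").
		Columns("name").
		Values(name).
		RunWith(db).
		ExecContext(ctx)
	return err
}
```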
diff --git a/vendor/github.com/Masterminds/squirrel/part.go b/vendor/github.com/Masterminds/squirrel/part.go
new file mode 100644
index 000000000..c58f68f1a
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/part.go
@@ -0,0 +1,63 @@
+package squirrel
+
+import (
+ "fmt"
+ "io"
+)
+
+type part struct {
+ pred interface{}
+ args []interface{}
+}
+
+func newPart(pred interface{}, args ...interface{}) Sqlizer {
+ return &part{pred, args}
+}
+
+func (p part) ToSql() (sql string, args []interface{}, err error) {
+ switch pred := p.pred.(type) {
+ case nil:
+ // no-op
+ case Sqlizer:
+ sql, args, err = nestedToSql(pred)
+ case string:
+ sql = pred
+ args = p.args
+ default:
+ err = fmt.Errorf("expected string or Sqlizer, not %T", pred)
+ }
+ return
+}
+
+func nestedToSql(s Sqlizer) (string, []interface{}, error) {
+ if raw, ok := s.(rawSqlizer); ok {
+ return raw.toSqlRaw()
+ } else {
+ return s.ToSql()
+ }
+}
+
+func appendToSql(parts []Sqlizer, w io.Writer, sep string, args []interface{}) ([]interface{}, error) {
+ for i, p := range parts {
+ partSql, partArgs, err := nestedToSql(p)
+ if err != nil {
+ return nil, err
+ } else if len(partSql) == 0 {
+ continue
+ }
+
+ if i > 0 {
+ _, err := io.WriteString(w, sep)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ _, err = io.WriteString(w, partSql)
+ if err != nil {
+ return nil, err
+ }
+ args = append(args, partArgs...)
+ }
+ return args, nil
+}
diff --git a/vendor/github.com/Masterminds/squirrel/placeholder.go b/vendor/github.com/Masterminds/squirrel/placeholder.go
new file mode 100644
index 000000000..8e97a6c62
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/placeholder.go
@@ -0,0 +1,114 @@
+package squirrel
+
+import (
+ "bytes"
+ "fmt"
+ "strings"
+)
+
+// PlaceholderFormat is the interface that wraps the ReplacePlaceholders method.
+//
+// ReplacePlaceholders takes a SQL statement and replaces each question mark
+// placeholder with a (possibly different) SQL placeholder.
+type PlaceholderFormat interface {
+ ReplacePlaceholders(sql string) (string, error)
+}
+
+type placeholderDebugger interface {
+ debugPlaceholder() string
+}
+
+var (
+ // Question is a PlaceholderFormat instance that leaves placeholders as
+ // question marks.
+ Question = questionFormat{}
+
+ // Dollar is a PlaceholderFormat instance that replaces placeholders with
+ // dollar-prefixed positional placeholders (e.g. $1, $2, $3).
+ Dollar = dollarFormat{}
+
+ // Colon is a PlaceholderFormat instance that replaces placeholders with
+ // colon-prefixed positional placeholders (e.g. :1, :2, :3).
+ Colon = colonFormat{}
+
+ // AtP is a PlaceholderFormat instance that replaces placeholders with
+ // "@p"-prefixed positional placeholders (e.g. @p1, @p2, @p3).
+ AtP = atpFormat{}
+)
+
+type questionFormat struct{}
+
+func (questionFormat) ReplacePlaceholders(sql string) (string, error) {
+ return sql, nil
+}
+
+func (questionFormat) debugPlaceholder() string {
+ return "?"
+}
+
+type dollarFormat struct{}
+
+func (dollarFormat) ReplacePlaceholders(sql string) (string, error) {
+ return replacePositionalPlaceholders(sql, "$")
+}
+
+func (dollarFormat) debugPlaceholder() string {
+ return "$"
+}
+
+type colonFormat struct{}
+
+func (colonFormat) ReplacePlaceholders(sql string) (string, error) {
+ return replacePositionalPlaceholders(sql, ":")
+}
+
+func (colonFormat) debugPlaceholder() string {
+ return ":"
+}
+
+type atpFormat struct{}
+
+func (atpFormat) ReplacePlaceholders(sql string) (string, error) {
+ return replacePositionalPlaceholders(sql, "@p")
+}
+
+func (atpFormat) debugPlaceholder() string {
+ return "@p"
+}
+
+// Placeholders returns a string with count ? placeholders joined with commas.
+func Placeholders(count int) string {
+ if count < 1 {
+ return ""
+ }
+
+ return strings.Repeat(",?", count)[1:]
+}
+
+func replacePositionalPlaceholders(sql, prefix string) (string, error) {
+ buf := &bytes.Buffer{}
+ i := 0
+ for {
+ p := strings.Index(sql, "?")
+ if p == -1 {
+ break
+ }
+
+ if len(sql[p:]) > 1 && sql[p:p+2] == "??" { // escape ?? => ?
+ buf.WriteString(sql[:p])
+ buf.WriteString("?")
+ if len(sql[p:]) == 1 {
+ break
+ }
+ sql = sql[p+2:]
+ } else {
+ i++
+ buf.WriteString(sql[:p])
+ fmt.Fprintf(buf, "%s%d", prefix, i)
+ sql = sql[p+1:]
+ }
+ }
+
+ buf.WriteString(sql)
+ return buf.String(), nil
+}
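To illustrate the placeholder machinery (a sketch, not vendored code): Dollar rewrites each ? positionally, a doubled ?? escapes to a literal ?, and Placeholders builds the comma-joined lists used in IN (...) expressions.

```go
package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	out, _ := sq.Dollar.ReplacePlaceholders("SELECT a ?? b FROM t WHERE x = ? AND y = ?")
	fmt.Println(out) // SELECT a ? b FROM t WHERE x = $1 AND y = $2

	fmt.Println(sq.Placeholders(3)) // ?,?,?
}
```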
diff --git a/vendor/github.com/Masterminds/squirrel/row.go b/vendor/github.com/Masterminds/squirrel/row.go
new file mode 100644
index 000000000..74ffda92b
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/row.go
@@ -0,0 +1,22 @@
+package squirrel
+
+// RowScanner is the interface that wraps the Scan method.
+//
+// Scan behaves like database/sql.Row.Scan.
+type RowScanner interface {
+ Scan(...interface{}) error
+}
+
+// Row wraps database/sql.Row to let squirrel return new errors on Scan.
+type Row struct {
+ RowScanner
+ err error
+}
+
+// Scan returns Row.err or calls RowScanner.Scan.
+func (r *Row) Scan(dest ...interface{}) error {
+ if r.err != nil {
+ return r.err
+ }
+ return r.RowScanner.Scan(dest...)
+}
diff --git a/vendor/github.com/Masterminds/squirrel/select.go b/vendor/github.com/Masterminds/squirrel/select.go
new file mode 100644
index 000000000..d55ce4c74
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/select.go
@@ -0,0 +1,403 @@
+package squirrel
+
+import (
+ "bytes"
+ "database/sql"
+ "fmt"
+ "strings"
+
+ "github.com/lann/builder"
+)
+
+type selectData struct {
+ PlaceholderFormat PlaceholderFormat
+ RunWith BaseRunner
+ Prefixes []Sqlizer
+ Options []string
+ Columns []Sqlizer
+ From Sqlizer
+ Joins []Sqlizer
+ WhereParts []Sqlizer
+ GroupBys []string
+ HavingParts []Sqlizer
+ OrderByParts []Sqlizer
+ Limit string
+ Offset string
+ Suffixes []Sqlizer
+}
+
+func (d *selectData) Exec() (sql.Result, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ return ExecWith(d.RunWith, d)
+}
+
+func (d *selectData) Query() (*sql.Rows, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ return QueryWith(d.RunWith, d)
+}
+
+func (d *selectData) QueryRow() RowScanner {
+ if d.RunWith == nil {
+ return &Row{err: RunnerNotSet}
+ }
+ queryRower, ok := d.RunWith.(QueryRower)
+ if !ok {
+ return &Row{err: RunnerNotQueryRunner}
+ }
+ return QueryRowWith(queryRower, d)
+}
+
+func (d *selectData) ToSql() (sqlStr string, args []interface{}, err error) {
+ sqlStr, args, err = d.toSqlRaw()
+ if err != nil {
+ return
+ }
+
+ sqlStr, err = d.PlaceholderFormat.ReplacePlaceholders(sqlStr)
+ return
+}
+
+func (d *selectData) toSqlRaw() (sqlStr string, args []interface{}, err error) {
+ if len(d.Columns) == 0 {
+ err = fmt.Errorf("select statements must have at least one result column")
+ return
+ }
+
+ sql := &bytes.Buffer{}
+
+ if len(d.Prefixes) > 0 {
+ args, err = appendToSql(d.Prefixes, sql, " ", args)
+ if err != nil {
+ return
+ }
+
+ sql.WriteString(" ")
+ }
+
+ sql.WriteString("SELECT ")
+
+ if len(d.Options) > 0 {
+ sql.WriteString(strings.Join(d.Options, " "))
+ sql.WriteString(" ")
+ }
+
+ if len(d.Columns) > 0 {
+ args, err = appendToSql(d.Columns, sql, ", ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ if d.From != nil {
+ sql.WriteString(" FROM ")
+ args, err = appendToSql([]Sqlizer{d.From}, sql, "", args)
+ if err != nil {
+ return
+ }
+ }
+
+ if len(d.Joins) > 0 {
+ sql.WriteString(" ")
+ args, err = appendToSql(d.Joins, sql, " ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ if len(d.WhereParts) > 0 {
+ sql.WriteString(" WHERE ")
+ args, err = appendToSql(d.WhereParts, sql, " AND ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ if len(d.GroupBys) > 0 {
+ sql.WriteString(" GROUP BY ")
+ sql.WriteString(strings.Join(d.GroupBys, ", "))
+ }
+
+ if len(d.HavingParts) > 0 {
+ sql.WriteString(" HAVING ")
+ args, err = appendToSql(d.HavingParts, sql, " AND ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ if len(d.OrderByParts) > 0 {
+ sql.WriteString(" ORDER BY ")
+ args, err = appendToSql(d.OrderByParts, sql, ", ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ if len(d.Limit) > 0 {
+ sql.WriteString(" LIMIT ")
+ sql.WriteString(d.Limit)
+ }
+
+ if len(d.Offset) > 0 {
+ sql.WriteString(" OFFSET ")
+ sql.WriteString(d.Offset)
+ }
+
+ if len(d.Suffixes) > 0 {
+ sql.WriteString(" ")
+
+ args, err = appendToSql(d.Suffixes, sql, " ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ sqlStr = sql.String()
+ return
+}
+
+// Builder
+
+// SelectBuilder builds SQL SELECT statements.
+type SelectBuilder builder.Builder
+
+func init() {
+ builder.Register(SelectBuilder{}, selectData{})
+}
+
+// Format methods
+
+// PlaceholderFormat sets PlaceholderFormat (e.g. Question or Dollar) for the
+// query.
+func (b SelectBuilder) PlaceholderFormat(f PlaceholderFormat) SelectBuilder {
+ return builder.Set(b, "PlaceholderFormat", f).(SelectBuilder)
+}
+
+// Runner methods
+
+// RunWith sets a Runner (like database/sql.DB) to be used with e.g. Exec.
+// In most cases, runner will be a database connection.
+//
+// Internally we use this to mock out the database connection for testing.
+func (b SelectBuilder) RunWith(runner BaseRunner) SelectBuilder {
+ return setRunWith(b, runner).(SelectBuilder)
+}
+
+// Exec builds and Execs the query with the Runner set by RunWith.
+func (b SelectBuilder) Exec() (sql.Result, error) {
+ data := builder.GetStruct(b).(selectData)
+ return data.Exec()
+}
+
+// Query builds and Querys the query with the Runner set by RunWith.
+func (b SelectBuilder) Query() (*sql.Rows, error) {
+ data := builder.GetStruct(b).(selectData)
+ return data.Query()
+}
+
+// QueryRow builds and QueryRows the query with the Runner set by RunWith.
+func (b SelectBuilder) QueryRow() RowScanner {
+ data := builder.GetStruct(b).(selectData)
+ return data.QueryRow()
+}
+
+// Scan is a shortcut for QueryRow().Scan.
+func (b SelectBuilder) Scan(dest ...interface{}) error {
+ return b.QueryRow().Scan(dest...)
+}
+
+// SQL methods
+
+// ToSql builds the query into a SQL string and bound args.
+func (b SelectBuilder) ToSql() (string, []interface{}, error) {
+ data := builder.GetStruct(b).(selectData)
+ return data.ToSql()
+}
+
+func (b SelectBuilder) toSqlRaw() (string, []interface{}, error) {
+ data := builder.GetStruct(b).(selectData)
+ return data.toSqlRaw()
+}
+
+// MustSql builds the query into a SQL string and bound args.
+// It panics if there are any errors.
+func (b SelectBuilder) MustSql() (string, []interface{}) {
+ sql, args, err := b.ToSql()
+ if err != nil {
+ panic(err)
+ }
+ return sql, args
+}
+
+// Prefix adds an expression to the beginning of the query
+func (b SelectBuilder) Prefix(sql string, args ...interface{}) SelectBuilder {
+ return b.PrefixExpr(Expr(sql, args...))
+}
+
+// PrefixExpr adds an expression to the very beginning of the query
+func (b SelectBuilder) PrefixExpr(expr Sqlizer) SelectBuilder {
+ return builder.Append(b, "Prefixes", expr).(SelectBuilder)
+}
+
+// Distinct adds a DISTINCT clause to the query.
+func (b SelectBuilder) Distinct() SelectBuilder {
+ return b.Options("DISTINCT")
+}
+
+// Options adds select options to the query.
+func (b SelectBuilder) Options(options ...string) SelectBuilder {
+ return builder.Extend(b, "Options", options).(SelectBuilder)
+}
+
+// Columns adds result columns to the query.
+func (b SelectBuilder) Columns(columns ...string) SelectBuilder {
+ parts := make([]interface{}, 0, len(columns))
+ for _, str := range columns {
+ parts = append(parts, newPart(str))
+ }
+ return builder.Extend(b, "Columns", parts).(SelectBuilder)
+}
+
+// RemoveColumns removes all columns from the query.
+// A new column must be added with the Column or Columns methods before
+// building, otherwise ToSql returns an error.
+func (b SelectBuilder) RemoveColumns() SelectBuilder {
+ return builder.Delete(b, "Columns").(SelectBuilder)
+}
+
+// Column adds a result column to the query.
+// Unlike Columns, Column accepts args which will be bound to placeholders in
+// the columns string, for example:
+// Column("IF(col IN ("+squirrel.Placeholders(3)+"), 1, 0) as col", 1, 2, 3)
+func (b SelectBuilder) Column(column interface{}, args ...interface{}) SelectBuilder {
+ return builder.Append(b, "Columns", newPart(column, args...)).(SelectBuilder)
+}
+
+// From sets the FROM clause of the query.
+func (b SelectBuilder) From(from string) SelectBuilder {
+ return builder.Set(b, "From", newPart(from)).(SelectBuilder)
+}
+
+// FromSelect sets a subquery into the FROM clause of the query.
+func (b SelectBuilder) FromSelect(from SelectBuilder, alias string) SelectBuilder {
+ // Prevent misnumbered parameters in nested selects (#183).
+ from = from.PlaceholderFormat(Question)
+ return builder.Set(b, "From", Alias(from, alias)).(SelectBuilder)
+}
+
+// JoinClause adds a join clause to the query.
+func (b SelectBuilder) JoinClause(pred interface{}, args ...interface{}) SelectBuilder {
+ return builder.Append(b, "Joins", newPart(pred, args...)).(SelectBuilder)
+}
+
+// Join adds a JOIN clause to the query.
+func (b SelectBuilder) Join(join string, rest ...interface{}) SelectBuilder {
+ return b.JoinClause("JOIN "+join, rest...)
+}
+
+// LeftJoin adds a LEFT JOIN clause to the query.
+func (b SelectBuilder) LeftJoin(join string, rest ...interface{}) SelectBuilder {
+ return b.JoinClause("LEFT JOIN "+join, rest...)
+}
+
+// RightJoin adds a RIGHT JOIN clause to the query.
+func (b SelectBuilder) RightJoin(join string, rest ...interface{}) SelectBuilder {
+ return b.JoinClause("RIGHT JOIN "+join, rest...)
+}
+
+// InnerJoin adds an INNER JOIN clause to the query.
+func (b SelectBuilder) InnerJoin(join string, rest ...interface{}) SelectBuilder {
+ return b.JoinClause("INNER JOIN "+join, rest...)
+}
+
+// CrossJoin adds a CROSS JOIN clause to the query.
+func (b SelectBuilder) CrossJoin(join string, rest ...interface{}) SelectBuilder {
+ return b.JoinClause("CROSS JOIN "+join, rest...)
+}
+
+// Where adds an expression to the WHERE clause of the query.
+//
+// Expressions are ANDed together in the generated SQL.
+//
+// Where accepts several types for its pred argument:
+//
+// nil OR "" - ignored.
+//
+// string - SQL expression.
+// If the expression has SQL placeholders then a set of arguments must be passed
+// as well, one for each placeholder.
+//
+// map[string]interface{} OR Eq - map of SQL expressions to values. Each key is
+// transformed into an expression like "<key> = ?", with the corresponding value
+// bound to the placeholder. If the value is nil, the expression will be
+// "<key> IS NULL". If the value is an array or slice, the expression will be
+// "<key> IN (?,?,...)", with one placeholder for each item in the value. These
+// expressions are ANDed together.
+//
+// Where will panic if pred isn't any of the above types.
+func (b SelectBuilder) Where(pred interface{}, args ...interface{}) SelectBuilder {
+ if pred == nil || pred == "" {
+ return b
+ }
+ return builder.Append(b, "WhereParts", newWherePart(pred, args...)).(SelectBuilder)
+}
+
+// GroupBy adds GROUP BY expressions to the query.
+func (b SelectBuilder) GroupBy(groupBys ...string) SelectBuilder {
+ return builder.Extend(b, "GroupBys", groupBys).(SelectBuilder)
+}
+
+// Having adds an expression to the HAVING clause of the query.
+//
+// See Where.
+func (b SelectBuilder) Having(pred interface{}, rest ...interface{}) SelectBuilder {
+ return builder.Append(b, "HavingParts", newWherePart(pred, rest...)).(SelectBuilder)
+}
+
+// OrderByClause adds an ORDER BY clause to the query.
+func (b SelectBuilder) OrderByClause(pred interface{}, args ...interface{}) SelectBuilder {
+ return builder.Append(b, "OrderByParts", newPart(pred, args...)).(SelectBuilder)
+}
+
+// OrderBy adds ORDER BY expressions to the query.
+func (b SelectBuilder) OrderBy(orderBys ...string) SelectBuilder {
+ for _, orderBy := range orderBys {
+ b = b.OrderByClause(orderBy)
+ }
+
+ return b
+}
+
+// Limit sets a LIMIT clause on the query.
+func (b SelectBuilder) Limit(limit uint64) SelectBuilder {
+ return builder.Set(b, "Limit", fmt.Sprintf("%d", limit)).(SelectBuilder)
+}
+
+// RemoveLimit removes the LIMIT clause from the query, allowing all records to be accessed.
+func (b SelectBuilder) RemoveLimit() SelectBuilder {
+ return builder.Delete(b, "Limit").(SelectBuilder)
+}
+
+// Offset sets an OFFSET clause on the query.
+func (b SelectBuilder) Offset(offset uint64) SelectBuilder {
+ return builder.Set(b, "Offset", fmt.Sprintf("%d", offset)).(SelectBuilder)
+}
+
+// RemoveOffset removes OFFSET clause.
+func (b SelectBuilder) RemoveOffset() SelectBuilder {
+ return builder.Delete(b, "Offset").(SelectBuilder)
+}
+
+// Suffix adds an expression to the end of the query
+func (b SelectBuilder) Suffix(sql string, args ...interface{}) SelectBuilder {
+ return b.SuffixExpr(Expr(sql, args...))
+}
+
+// SuffixExpr adds an expression to the end of the query
+func (b SelectBuilder) SuffixExpr(expr Sqlizer) SelectBuilder {
+ return builder.Append(b, "Suffixes", expr).(SelectBuilder)
+}
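An end-to-end SelectBuilder sketch showing the clause ordering produced by toSqlRaw (illustrative; the schema names are made up):

```go
package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	sql, args, err := sq.Select("u.id", "u.name").
		From("users u").
		LeftJoin("orders o ON o.user_id = u.id").
		Where(sq.Eq{"u.active": true}).
		GroupBy("u.id", "u.name").
		Having("COUNT(o.id) > ?", 5).
		OrderBy("u.name ASC").
		Limit(10).
		Offset(20).
		ToSql()
	if err != nil {
		panic(err)
	}
	fmt.Println(sql)
	// SELECT u.id, u.name FROM users u LEFT JOIN orders o ON o.user_id = u.id
	// WHERE u.active = ? GROUP BY u.id, u.name HAVING COUNT(o.id) > ?
	// ORDER BY u.name ASC LIMIT 10 OFFSET 20
	fmt.Println(args) // [true 5]
}
```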
diff --git a/vendor/github.com/Masterminds/squirrel/select_ctx.go b/vendor/github.com/Masterminds/squirrel/select_ctx.go
new file mode 100644
index 000000000..4c42c13f4
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/select_ctx.go
@@ -0,0 +1,69 @@
+// +build go1.8
+
+package squirrel
+
+import (
+ "context"
+ "database/sql"
+
+ "github.com/lann/builder"
+)
+
+func (d *selectData) ExecContext(ctx context.Context) (sql.Result, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ ctxRunner, ok := d.RunWith.(ExecerContext)
+ if !ok {
+ return nil, NoContextSupport
+ }
+ return ExecContextWith(ctx, ctxRunner, d)
+}
+
+func (d *selectData) QueryContext(ctx context.Context) (*sql.Rows, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ ctxRunner, ok := d.RunWith.(QueryerContext)
+ if !ok {
+ return nil, NoContextSupport
+ }
+ return QueryContextWith(ctx, ctxRunner, d)
+}
+
+func (d *selectData) QueryRowContext(ctx context.Context) RowScanner {
+ if d.RunWith == nil {
+ return &Row{err: RunnerNotSet}
+ }
+ queryRower, ok := d.RunWith.(QueryRowerContext)
+ if !ok {
+ if _, ok := d.RunWith.(QueryerContext); !ok {
+ return &Row{err: RunnerNotQueryRunner}
+ }
+ return &Row{err: NoContextSupport}
+ }
+ return QueryRowContextWith(ctx, queryRower, d)
+}
+
+// ExecContext builds and ExecContexts the query with the Runner set by RunWith.
+func (b SelectBuilder) ExecContext(ctx context.Context) (sql.Result, error) {
+ data := builder.GetStruct(b).(selectData)
+ return data.ExecContext(ctx)
+}
+
+// QueryContext builds and QueryContexts the query with the Runner set by RunWith.
+func (b SelectBuilder) QueryContext(ctx context.Context) (*sql.Rows, error) {
+ data := builder.GetStruct(b).(selectData)
+ return data.QueryContext(ctx)
+}
+
+// QueryRowContext builds and QueryRowContexts the query with the Runner set by RunWith.
+func (b SelectBuilder) QueryRowContext(ctx context.Context) RowScanner {
+ data := builder.GetStruct(b).(selectData)
+ return data.QueryRowContext(ctx)
+}
+
+// ScanContext is a shortcut for QueryRowContext().Scan.
+func (b SelectBuilder) ScanContext(ctx context.Context, dest ...interface{}) error {
+ return b.QueryRowContext(ctx).Scan(dest...)
+}
diff --git a/vendor/github.com/Masterminds/squirrel/squirrel.go b/vendor/github.com/Masterminds/squirrel/squirrel.go
new file mode 100644
index 000000000..46d456eb6
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/squirrel.go
@@ -0,0 +1,183 @@
+// Package squirrel provides a fluent SQL generator.
+//
+// See https://github.com/Masterminds/squirrel for examples.
+package squirrel
+
+import (
+ "bytes"
+ "database/sql"
+ "fmt"
+ "strings"
+
+ "github.com/lann/builder"
+)
+
+// Sqlizer is the interface that wraps the ToSql method.
+//
+// ToSql returns a SQL representation of the Sqlizer, along with a slice of args
+// as passed to e.g. database/sql.Exec. It can also return an error.
+type Sqlizer interface {
+ ToSql() (string, []interface{}, error)
+}
+
+// rawSqlizer is expected to do what Sqlizer does, but without finalizing placeholders.
+// This is useful for nested queries.
+type rawSqlizer interface {
+ toSqlRaw() (string, []interface{}, error)
+}
+
+// Execer is the interface that wraps the Exec method.
+//
+// Exec executes the given query as implemented by database/sql.Exec.
+type Execer interface {
+ Exec(query string, args ...interface{}) (sql.Result, error)
+}
+
+// Queryer is the interface that wraps the Query method.
+//
+// Query executes the given query as implemented by database/sql.Query.
+type Queryer interface {
+ Query(query string, args ...interface{}) (*sql.Rows, error)
+}
+
+// QueryRower is the interface that wraps the QueryRow method.
+//
+// QueryRow executes the given query as implemented by database/sql.QueryRow.
+type QueryRower interface {
+ QueryRow(query string, args ...interface{}) RowScanner
+}
+
+// BaseRunner groups the Execer and Queryer interfaces.
+type BaseRunner interface {
+ Execer
+ Queryer
+}
+
+// Runner groups the Execer, Queryer, and QueryRower interfaces.
+type Runner interface {
+ Execer
+ Queryer
+ QueryRower
+}
+
+// WrapStdSql wraps a type implementing the standard SQL interface with methods that
+// squirrel expects.
+func WrapStdSql(stdSql StdSql) Runner {
+ return &stdsqlRunner{stdSql}
+}
+
+// StdSql encompasses the standard methods of the *sql.DB type, and other types that
+// wrap these methods.
+type StdSql interface {
+ Query(string, ...interface{}) (*sql.Rows, error)
+ QueryRow(string, ...interface{}) *sql.Row
+ Exec(string, ...interface{}) (sql.Result, error)
+}
+
+type stdsqlRunner struct {
+ StdSql
+}
+
+func (r *stdsqlRunner) QueryRow(query string, args ...interface{}) RowScanner {
+ return r.StdSql.QueryRow(query, args...)
+}
+
+func setRunWith(b interface{}, runner BaseRunner) interface{} {
+ switch r := runner.(type) {
+ case StdSqlCtx:
+ runner = WrapStdSqlCtx(r)
+ case StdSql:
+ runner = WrapStdSql(r)
+ }
+ return builder.Set(b, "RunWith", runner)
+}
+
+// RunnerNotSet is returned by methods that need a Runner if it isn't set.
+var RunnerNotSet = fmt.Errorf("cannot run; no Runner set (RunWith)")
+
+// RunnerNotQueryRunner is returned by QueryRow if the RunWith value doesn't implement QueryRower.
+var RunnerNotQueryRunner = fmt.Errorf("cannot QueryRow; Runner is not a QueryRower")
+
+// ExecWith Execs the SQL returned by s with db.
+func ExecWith(db Execer, s Sqlizer) (res sql.Result, err error) {
+ query, args, err := s.ToSql()
+ if err != nil {
+ return
+ }
+ return db.Exec(query, args...)
+}
+
+// QueryWith Querys the SQL returned by s with db.
+func QueryWith(db Queryer, s Sqlizer) (rows *sql.Rows, err error) {
+ query, args, err := s.ToSql()
+ if err != nil {
+ return
+ }
+ return db.Query(query, args...)
+}
+
+// QueryRowWith QueryRows the SQL returned by s with db.
+func QueryRowWith(db QueryRower, s Sqlizer) RowScanner {
+ query, args, err := s.ToSql()
+ return &Row{RowScanner: db.QueryRow(query, args...), err: err}
+}
+
+// DebugSqlizer calls ToSql on s and shows the approximate SQL to be executed
+//
+// If ToSql returns an error, the result of this method will look like:
+// "[ToSql error: %s]" or "[DebugSqlizer error: %s]"
+//
+// IMPORTANT: As its name suggests, this function should only be used for
+// debugging. While the string result *might* be valid SQL, this function does
+// not try very hard to ensure it. Additionally, executing the output of this
+// function with any untrusted user input is certainly insecure.
+func DebugSqlizer(s Sqlizer) string {
+ sql, args, err := s.ToSql()
+ if err != nil {
+ return fmt.Sprintf("[ToSql error: %s]", err)
+ }
+
+ var placeholder string
+ downCast, ok := s.(placeholderDebugger)
+ if !ok {
+ placeholder = "?"
+ } else {
+ placeholder = downCast.debugPlaceholder()
+ }
+ // TODO: dedupe this with placeholder.go
+ buf := &bytes.Buffer{}
+ i := 0
+ for {
+ p := strings.Index(sql, placeholder)
+ if p == -1 {
+ break
+ }
+ if len(sql[p:]) > 1 && sql[p:p+2] == "??" { // escape ?? => ?
+ buf.WriteString(sql[:p])
+ buf.WriteString("?")
+ if len(sql[p:]) == 1 {
+ break
+ }
+ sql = sql[p+2:]
+ } else {
+ if i+1 > len(args) {
+ return fmt.Sprintf(
+ "[DebugSqlizer error: too many placeholders in %#v for %d args]",
+ sql, len(args))
+ }
+ buf.WriteString(sql[:p])
+ fmt.Fprintf(buf, "'%v'", args[i])
+ // advance our sql string "cursor" beyond the arg we placed
+ sql = sql[p+1:]
+ i++
+ }
+ }
+ if i < len(args) {
+ return fmt.Sprintf(
+ "[DebugSqlizer error: not enough placeholders in %#v for %d args]",
+ sql, len(args))
+ }
+ // "append" any remaning sql that won't need interpolating
+ buf.WriteString(sql)
+ return buf.String()
+}
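DebugSqlizer in practice (a sketch): it interpolates args into the SQL for logging only; as the doc comment warns, never execute its output.

```go
package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	q := sq.Select("*").From("users").Where(sq.Eq{"id": 42})
	fmt.Println(sq.DebugSqlizer(q)) // SELECT * FROM users WHERE id = '42'
}
```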
diff --git a/vendor/github.com/Masterminds/squirrel/squirrel_ctx.go b/vendor/github.com/Masterminds/squirrel/squirrel_ctx.go
new file mode 100644
index 000000000..c20148ad3
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/squirrel_ctx.go
@@ -0,0 +1,93 @@
+// +build go1.8
+
+package squirrel
+
+import (
+ "context"
+ "database/sql"
+ "errors"
+)
+
+// NoContextSupport is returned if a db doesn't support Context.
+var NoContextSupport = errors.New("DB does not support Context")
+
+// ExecerContext is the interface that wraps the ExecContext method.
+//
+// Exec executes the given query as implemented by database/sql.ExecContext.
+type ExecerContext interface {
+ ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
+}
+
+// QueryerContext is the interface that wraps the QueryContext method.
+//
+// QueryContext executes the given query as implemented by database/sql.QueryContext.
+type QueryerContext interface {
+ QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
+}
+
+// QueryRowerContext is the interface that wraps the QueryRowContext method.
+//
+// QueryRowContext executes the given query as implemented by database/sql.QueryRowContext.
+type QueryRowerContext interface {
+ QueryRowContext(ctx context.Context, query string, args ...interface{}) RowScanner
+}
+
+// RunnerContext groups the Runner interface, along with the Context versions of each of
+// its methods
+type RunnerContext interface {
+ Runner
+ QueryerContext
+ QueryRowerContext
+ ExecerContext
+}
+
+// WrapStdSqlCtx wraps a type implementing the standard SQL interface plus the context
+// versions of the methods with methods that squirrel expects.
+func WrapStdSqlCtx(stdSqlCtx StdSqlCtx) RunnerContext {
+ return &stdsqlCtxRunner{stdSqlCtx}
+}
+
+// StdSqlCtx encompasses the standard methods of the *sql.DB type, along with the Context
+// versions of those methods, and other types that wrap these methods.
+type StdSqlCtx interface {
+ StdSql
+ QueryContext(context.Context, string, ...interface{}) (*sql.Rows, error)
+ QueryRowContext(context.Context, string, ...interface{}) *sql.Row
+ ExecContext(context.Context, string, ...interface{}) (sql.Result, error)
+}
+
+type stdsqlCtxRunner struct {
+ StdSqlCtx
+}
+
+func (r *stdsqlCtxRunner) QueryRow(query string, args ...interface{}) RowScanner {
+ return r.StdSqlCtx.QueryRow(query, args...)
+}
+
+func (r *stdsqlCtxRunner) QueryRowContext(ctx context.Context, query string, args ...interface{}) RowScanner {
+ return r.StdSqlCtx.QueryRowContext(ctx, query, args...)
+}
+
+// ExecContextWith ExecContexts the SQL returned by s with db.
+func ExecContextWith(ctx context.Context, db ExecerContext, s Sqlizer) (res sql.Result, err error) {
+ query, args, err := s.ToSql()
+ if err != nil {
+ return
+ }
+ return db.ExecContext(ctx, query, args...)
+}
+
+// QueryContextWith QueryContexts the SQL returned by s with db.
+func QueryContextWith(ctx context.Context, db QueryerContext, s Sqlizer) (rows *sql.Rows, err error) {
+ query, args, err := s.ToSql()
+ if err != nil {
+ return
+ }
+ return db.QueryContext(ctx, query, args...)
+}
+
+// QueryRowContextWith QueryRowContexts the SQL returned by s with db.
+func QueryRowContextWith(ctx context.Context, db QueryRowerContext, s Sqlizer) RowScanner {
+ query, args, err := s.ToSql()
+ return &Row{RowScanner: db.QueryRowContext(ctx, query, args...), err: err}
+}
diff --git a/vendor/github.com/Masterminds/squirrel/statement.go b/vendor/github.com/Masterminds/squirrel/statement.go
new file mode 100644
index 000000000..9420c67f8
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/statement.go
@@ -0,0 +1,104 @@
+package squirrel
+
+import "github.com/lann/builder"
+
+// StatementBuilderType is the type of StatementBuilder.
+type StatementBuilderType builder.Builder
+
+// Select returns a SelectBuilder for this StatementBuilderType.
+func (b StatementBuilderType) Select(columns ...string) SelectBuilder {
+ return SelectBuilder(b).Columns(columns...)
+}
+
+// Insert returns an InsertBuilder for this StatementBuilderType.
+func (b StatementBuilderType) Insert(into string) InsertBuilder {
+ return InsertBuilder(b).Into(into)
+}
+
+// Replace returns an InsertBuilder for this StatementBuilderType with the
+// statement keyword set to "REPLACE".
+func (b StatementBuilderType) Replace(into string) InsertBuilder {
+ return InsertBuilder(b).statementKeyword("REPLACE").Into(into)
+}
+
+// Update returns an UpdateBuilder for this StatementBuilderType.
+func (b StatementBuilderType) Update(table string) UpdateBuilder {
+ return UpdateBuilder(b).Table(table)
+}
+
+// Delete returns a DeleteBuilder for this StatementBuilderType.
+func (b StatementBuilderType) Delete(from string) DeleteBuilder {
+ return DeleteBuilder(b).From(from)
+}
+
+// PlaceholderFormat sets the PlaceholderFormat field for any child builders.
+func (b StatementBuilderType) PlaceholderFormat(f PlaceholderFormat) StatementBuilderType {
+ return builder.Set(b, "PlaceholderFormat", f).(StatementBuilderType)
+}
+
+// RunWith sets the RunWith field for any child builders.
+func (b StatementBuilderType) RunWith(runner BaseRunner) StatementBuilderType {
+ return setRunWith(b, runner).(StatementBuilderType)
+}
+
+// Where adds WHERE expressions to the query.
+//
+// See SelectBuilder.Where for more information.
+func (b StatementBuilderType) Where(pred interface{}, args ...interface{}) StatementBuilderType {
+ return builder.Append(b, "WhereParts", newWherePart(pred, args...)).(StatementBuilderType)
+}
+
+// StatementBuilder is a parent builder for other builders, e.g. SelectBuilder.
+var StatementBuilder = StatementBuilderType(builder.EmptyBuilder).PlaceholderFormat(Question)
+
+// Select returns a new SelectBuilder, optionally setting some result columns.
+//
+// See SelectBuilder.Columns.
+func Select(columns ...string) SelectBuilder {
+ return StatementBuilder.Select(columns...)
+}
+
+// Insert returns a new InsertBuilder with the given table name.
+//
+// See InsertBuilder.Into.
+func Insert(into string) InsertBuilder {
+ return StatementBuilder.Insert(into)
+}
+
+// Replace returns a new InsertBuilder with the statement keyword set to
+// "REPLACE" and with the given table name.
+//
+// See InsertBuilder.Into.
+func Replace(into string) InsertBuilder {
+ return StatementBuilder.Replace(into)
+}
+
+// Update returns a new UpdateBuilder with the given table name.
+//
+// See UpdateBuilder.Table.
+func Update(table string) UpdateBuilder {
+ return StatementBuilder.Update(table)
+}
+
+// Delete returns a new DeleteBuilder with the given table name.
+//
+// See DeleteBuilder.From.
+func Delete(from string) DeleteBuilder {
+ return StatementBuilder.Delete(from)
+}
+
+// Case returns a new CaseBuilder.
+// "what" represents the case value.
+func Case(what ...interface{}) CaseBuilder {
+ b := CaseBuilder(builder.EmptyBuilder)
+
+ switch len(what) {
+ case 0:
+ case 1:
+ b = b.what(what[0])
+ default:
+ b = b.what(newPart(what[0], what[1:]...))
+ }
+ return b
+}
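StatementBuilder lets cross-cutting settings be fixed once on a parent and inherited by child builders, as in this sketch:

```go
package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	// A PostgreSQL-flavored parent builder; children inherit the Dollar format.
	psql := sq.StatementBuilder.PlaceholderFormat(sq.Dollar)

	sql, args, _ := psql.Select("id").From("events").Where(sq.Eq{"kind": "click"}).ToSql()
	fmt.Println(sql)  // SELECT id FROM events WHERE kind = $1
	fmt.Println(args) // [click]
}
```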
diff --git a/vendor/github.com/Masterminds/squirrel/stmtcacher.go b/vendor/github.com/Masterminds/squirrel/stmtcacher.go
new file mode 100644
index 000000000..5bf267a13
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/stmtcacher.go
@@ -0,0 +1,121 @@
+package squirrel
+
+import (
+ "database/sql"
+ "fmt"
+ "sync"
+)
+
+// Preparer is the interface that wraps the Prepare method.
+//
+// Prepare executes the given query as implemented by database/sql.Prepare.
+type Preparer interface {
+ Prepare(query string) (*sql.Stmt, error)
+}
+
+// DBProxy groups the Execer, Queryer, QueryRower, and Preparer interfaces.
+type DBProxy interface {
+ Execer
+ Queryer
+ QueryRower
+ Preparer
+}
+
+// NOTE: NewStmtCache is defined in stmtcacher_ctx.go (Go >= 1.8) or stmtcacher_noctx.go (Go < 1.8).
+
+// StmtCache wraps and delegates down to a Preparer type.
+//
+// It automatically prepares all statements sent to the underlying Preparer
+// for Exec, Query, and QueryRow calls, and caches the returned *sql.Stmt
+// using the provided query as the key so that each statement can be reused.
+type StmtCache struct {
+ prep Preparer
+ cache map[string]*sql.Stmt
+ mu sync.Mutex
+}
+
+// Prepare delegates down to the underlying Preparer and caches the result
+// using the provided query as a key
+func (sc *StmtCache) Prepare(query string) (*sql.Stmt, error) {
+ sc.mu.Lock()
+ defer sc.mu.Unlock()
+
+ stmt, ok := sc.cache[query]
+ if ok {
+ return stmt, nil
+ }
+ stmt, err := sc.prep.Prepare(query)
+ if err == nil {
+ sc.cache[query] = stmt
+ }
+ return stmt, err
+}
+
+// Exec delegates down to the underlying Preparer using a prepared statement
+func (sc *StmtCache) Exec(query string, args ...interface{}) (res sql.Result, err error) {
+ stmt, err := sc.Prepare(query)
+ if err != nil {
+ return
+ }
+ return stmt.Exec(args...)
+}
+
+// Query delegates down to the underlying Preparer using a prepared statement
+func (sc *StmtCache) Query(query string, args ...interface{}) (rows *sql.Rows, err error) {
+ stmt, err := sc.Prepare(query)
+ if err != nil {
+ return
+ }
+ return stmt.Query(args...)
+}
+
+// QueryRow delegates down to the underlying Preparer using a prepared statement
+func (sc *StmtCache) QueryRow(query string, args ...interface{}) RowScanner {
+ stmt, err := sc.Prepare(query)
+ if err != nil {
+ return &Row{err: err}
+ }
+ return stmt.QueryRow(args...)
+}
+
+// Clear removes and closes all the currently cached prepared statements
+func (sc *StmtCache) Clear() (err error) {
+ sc.mu.Lock()
+ defer sc.mu.Unlock()
+
+ for key, stmt := range sc.cache {
+ delete(sc.cache, key)
+
+ if stmt == nil {
+ continue
+ }
+
+ if cerr := stmt.Close(); cerr != nil {
+ err = cerr
+ }
+ }
+
+ if err != nil {
+ return fmt.Errorf("one or more Stmt.Close failed; last error: %v", err)
+ }
+
+ return
+}
+
+type DBProxyBeginner interface {
+ DBProxy
+ Begin() (*sql.Tx, error)
+}
+
+type stmtCacheProxy struct {
+ DBProxy
+ db *sql.DB
+}
+
+func NewStmtCacheProxy(db *sql.DB) DBProxyBeginner {
+ return &stmtCacheProxy{DBProxy: NewStmtCache(db), db: db}
+}
+
+func (sp *stmtCacheProxy) Begin() (*sql.Tx, error) {
+ return sp.db.Begin()
+}
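A sketch of StmtCache as a runner (package and table names hypothetical): repeated executions of the same query reuse one prepared *sql.Stmt instead of re-preparing each time.

```go
package example

import (
	"database/sql"

	sq "github.com/Masterminds/squirrel"
)

// countUsers routes the query through a StmtCache; *sql.DB satisfies
// PreparerContext, so NewStmtCache accepts it directly.
func countUsers(db *sql.DB) (int, error) {
	cache := sq.NewStmtCache(db)

	var n int
	err := sq.Select("COUNT(*)").From("users").RunWith(cache).QueryRow().Scan(&n)
	return n, err
}
```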
diff --git a/vendor/github.com/Masterminds/squirrel/stmtcacher_ctx.go b/vendor/github.com/Masterminds/squirrel/stmtcacher_ctx.go
new file mode 100644
index 000000000..53603cf4c
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/stmtcacher_ctx.go
@@ -0,0 +1,86 @@
+// +build go1.8
+
+package squirrel
+
+import (
+ "context"
+ "database/sql"
+)
+
+// PreparerContext is the interface that wraps the Prepare and PrepareContext methods.
+//
+// Prepare executes the given query as implemented by database/sql.Prepare.
+// PrepareContext executes the given query as implemented by database/sql.PrepareContext.
+type PreparerContext interface {
+ Preparer
+ PrepareContext(ctx context.Context, query string) (*sql.Stmt, error)
+}
+
+// DBProxyContext groups the Execer, Queryer, QueryRower and PreparerContext interfaces.
+type DBProxyContext interface {
+ Execer
+ Queryer
+ QueryRower
+ PreparerContext
+}
+
+// NewStmtCache returns a *StmtCache wrapping a PreparerContext that caches Prepared Stmts.
+//
+// Stmts are cached based on the string value of their queries.
+func NewStmtCache(prep PreparerContext) *StmtCache {
+ return &StmtCache{prep: prep, cache: make(map[string]*sql.Stmt)}
+}
+
+// NewStmtCacher is deprecated.
+//
+// Deprecated: Use NewStmtCache instead.
+func NewStmtCacher(prep PreparerContext) DBProxyContext {
+ return NewStmtCache(prep)
+}
+
+// PrepareContext delegates down to the underlying PreparerContext and caches the result
+// using the provided query as a key
+func (sc *StmtCache) PrepareContext(ctx context.Context, query string) (*sql.Stmt, error) {
+ ctxPrep, ok := sc.prep.(PreparerContext)
+ if !ok {
+ return nil, NoContextSupport
+ }
+ sc.mu.Lock()
+ defer sc.mu.Unlock()
+ stmt, ok := sc.cache[query]
+ if ok {
+ return stmt, nil
+ }
+ stmt, err := ctxPrep.PrepareContext(ctx, query)
+ if err == nil {
+ sc.cache[query] = stmt
+ }
+ return stmt, err
+}
+
+// ExecContext delegates down to the underlying PreparerContext using a prepared statement
+func (sc *StmtCache) ExecContext(ctx context.Context, query string, args ...interface{}) (res sql.Result, err error) {
+ stmt, err := sc.PrepareContext(ctx, query)
+ if err != nil {
+ return
+ }
+ return stmt.ExecContext(ctx, args...)
+}
+
+// QueryContext delegates down to the underlying PreparerContext using a prepared statement
+func (sc *StmtCache) QueryContext(ctx context.Context, query string, args ...interface{}) (rows *sql.Rows, err error) {
+ stmt, err := sc.PrepareContext(ctx, query)
+ if err != nil {
+ return
+ }
+ return stmt.QueryContext(ctx, args...)
+}
+
+// QueryRowContext delegates down to the underlying PreparerContext using a prepared statement
+func (sc *StmtCache) QueryRowContext(ctx context.Context, query string, args ...interface{}) RowScanner {
+ stmt, err := sc.PrepareContext(ctx, query)
+ if err != nil {
+ return &Row{err: err}
+ }
+ return stmt.QueryRowContext(ctx, args...)
+}
diff --git a/vendor/github.com/Masterminds/squirrel/stmtcacher_noctx.go b/vendor/github.com/Masterminds/squirrel/stmtcacher_noctx.go
new file mode 100644
index 000000000..deac96777
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/stmtcacher_noctx.go
@@ -0,0 +1,21 @@
+// +build !go1.8
+
+package squirrel
+
+import (
+ "database/sql"
+)
+
+// NewStmtCache returns a *StmtCache wrapping prep that caches Prepared Stmts.
+//
+// Stmts are cached based on the string value of their queries.
+func NewStmtCache(prep Preparer) *StmtCache {
+ return &StmtCache{prep: prep, cache: make(map[string]*sql.Stmt)}
+}
+
+// NewStmtCacher is deprecated.
+//
+// Deprecated: Use NewStmtCache instead.
+func NewStmtCacher(prep Preparer) DBProxy {
+ return NewStmtCache(prep)
+}
diff --git a/vendor/github.com/Masterminds/squirrel/update.go b/vendor/github.com/Masterminds/squirrel/update.go
new file mode 100644
index 000000000..eb2a9c4dd
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/update.go
@@ -0,0 +1,288 @@
+package squirrel
+
+import (
+ "bytes"
+ "database/sql"
+ "fmt"
+ "sort"
+ "strings"
+
+ "github.com/lann/builder"
+)
+
+type updateData struct {
+ PlaceholderFormat PlaceholderFormat
+ RunWith BaseRunner
+ Prefixes []Sqlizer
+ Table string
+ SetClauses []setClause
+ From Sqlizer
+ WhereParts []Sqlizer
+ OrderBys []string
+ Limit string
+ Offset string
+ Suffixes []Sqlizer
+}
+
+type setClause struct {
+ column string
+ value interface{}
+}
+
+func (d *updateData) Exec() (sql.Result, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ return ExecWith(d.RunWith, d)
+}
+
+func (d *updateData) Query() (*sql.Rows, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ return QueryWith(d.RunWith, d)
+}
+
+func (d *updateData) QueryRow() RowScanner {
+ if d.RunWith == nil {
+ return &Row{err: RunnerNotSet}
+ }
+ queryRower, ok := d.RunWith.(QueryRower)
+ if !ok {
+ return &Row{err: RunnerNotQueryRunner}
+ }
+ return QueryRowWith(queryRower, d)
+}
+
+func (d *updateData) ToSql() (sqlStr string, args []interface{}, err error) {
+ if len(d.Table) == 0 {
+ err = fmt.Errorf("update statements must specify a table")
+ return
+ }
+ if len(d.SetClauses) == 0 {
+ err = fmt.Errorf("update statements must have at least one Set clause")
+ return
+ }
+
+ sql := &bytes.Buffer{}
+
+ if len(d.Prefixes) > 0 {
+ args, err = appendToSql(d.Prefixes, sql, " ", args)
+ if err != nil {
+ return
+ }
+
+ sql.WriteString(" ")
+ }
+
+ sql.WriteString("UPDATE ")
+ sql.WriteString(d.Table)
+
+ sql.WriteString(" SET ")
+ setSqls := make([]string, len(d.SetClauses))
+ for i, setClause := range d.SetClauses {
+ var valSql string
+ if vs, ok := setClause.value.(Sqlizer); ok {
+ vsql, vargs, err := vs.ToSql()
+ if err != nil {
+ return "", nil, err
+ }
+ if _, ok := vs.(SelectBuilder); ok {
+ valSql = fmt.Sprintf("(%s)", vsql)
+ } else {
+ valSql = vsql
+ }
+ args = append(args, vargs...)
+ } else {
+ valSql = "?"
+ args = append(args, setClause.value)
+ }
+ setSqls[i] = fmt.Sprintf("%s = %s", setClause.column, valSql)
+ }
+ sql.WriteString(strings.Join(setSqls, ", "))
+
+ if d.From != nil {
+ sql.WriteString(" FROM ")
+ args, err = appendToSql([]Sqlizer{d.From}, sql, "", args)
+ if err != nil {
+ return
+ }
+ }
+
+ if len(d.WhereParts) > 0 {
+ sql.WriteString(" WHERE ")
+ args, err = appendToSql(d.WhereParts, sql, " AND ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ if len(d.OrderBys) > 0 {
+ sql.WriteString(" ORDER BY ")
+ sql.WriteString(strings.Join(d.OrderBys, ", "))
+ }
+
+ if len(d.Limit) > 0 {
+ sql.WriteString(" LIMIT ")
+ sql.WriteString(d.Limit)
+ }
+
+ if len(d.Offset) > 0 {
+ sql.WriteString(" OFFSET ")
+ sql.WriteString(d.Offset)
+ }
+
+ if len(d.Suffixes) > 0 {
+ sql.WriteString(" ")
+ args, err = appendToSql(d.Suffixes, sql, " ", args)
+ if err != nil {
+ return
+ }
+ }
+
+ sqlStr, err = d.PlaceholderFormat.ReplacePlaceholders(sql.String())
+ return
+}
+
+// Builder
+
+// UpdateBuilder builds SQL UPDATE statements.
+type UpdateBuilder builder.Builder
+
+func init() {
+ builder.Register(UpdateBuilder{}, updateData{})
+}
+
+// Format methods
+
+// PlaceholderFormat sets PlaceholderFormat (e.g. Question or Dollar) for the
+// query.
+func (b UpdateBuilder) PlaceholderFormat(f PlaceholderFormat) UpdateBuilder {
+ return builder.Set(b, "PlaceholderFormat", f).(UpdateBuilder)
+}
+
+// Runner methods
+
+// RunWith sets a Runner (like database/sql.DB) to be used with e.g. Exec.
+func (b UpdateBuilder) RunWith(runner BaseRunner) UpdateBuilder {
+ return setRunWith(b, runner).(UpdateBuilder)
+}
+
+// Exec builds and Execs the query with the Runner set by RunWith.
+func (b UpdateBuilder) Exec() (sql.Result, error) {
+ data := builder.GetStruct(b).(updateData)
+ return data.Exec()
+}
+
+func (b UpdateBuilder) Query() (*sql.Rows, error) {
+ data := builder.GetStruct(b).(updateData)
+ return data.Query()
+}
+
+func (b UpdateBuilder) QueryRow() RowScanner {
+ data := builder.GetStruct(b).(updateData)
+ return data.QueryRow()
+}
+
+func (b UpdateBuilder) Scan(dest ...interface{}) error {
+ return b.QueryRow().Scan(dest...)
+}
+
+// SQL methods
+
+// ToSql builds the query into a SQL string and bound args.
+func (b UpdateBuilder) ToSql() (string, []interface{}, error) {
+ data := builder.GetStruct(b).(updateData)
+ return data.ToSql()
+}
+
+// MustSql builds the query into a SQL string and bound args.
+// It panics if there are any errors.
+func (b UpdateBuilder) MustSql() (string, []interface{}) {
+ sql, args, err := b.ToSql()
+ if err != nil {
+ panic(err)
+ }
+ return sql, args
+}
+
+// Prefix adds an expression to the beginning of the query
+func (b UpdateBuilder) Prefix(sql string, args ...interface{}) UpdateBuilder {
+ return b.PrefixExpr(Expr(sql, args...))
+}
+
+// PrefixExpr adds an expression to the very beginning of the query
+func (b UpdateBuilder) PrefixExpr(expr Sqlizer) UpdateBuilder {
+ return builder.Append(b, "Prefixes", expr).(UpdateBuilder)
+}
+
+// Table sets the table to be updated.
+func (b UpdateBuilder) Table(table string) UpdateBuilder {
+ return builder.Set(b, "Table", table).(UpdateBuilder)
+}
+
+// Set adds SET clauses to the query.
+func (b UpdateBuilder) Set(column string, value interface{}) UpdateBuilder {
+ return builder.Append(b, "SetClauses", setClause{column: column, value: value}).(UpdateBuilder)
+}
+
+// SetMap is a convenience method which calls .Set for each key/value pair in clauses.
+func (b UpdateBuilder) SetMap(clauses map[string]interface{}) UpdateBuilder {
+ keys := make([]string, len(clauses))
+ i := 0
+ for key := range clauses {
+ keys[i] = key
+ i++
+ }
+ sort.Strings(keys)
+ for _, key := range keys {
+ val := clauses[key]
+ b = b.Set(key, val)
+ }
+ return b
+}
+
+// From adds a FROM clause to the query.
+// FROM is a valid construct in PostgreSQL only.
+func (b UpdateBuilder) From(from string) UpdateBuilder {
+ return builder.Set(b, "From", newPart(from)).(UpdateBuilder)
+}
+
+// FromSelect sets a subquery into the FROM clause of the query.
+func (b UpdateBuilder) FromSelect(from SelectBuilder, alias string) UpdateBuilder {
+ // Prevent misnumbered parameters in nested selects (#183).
+ from = from.PlaceholderFormat(Question)
+ return builder.Set(b, "From", Alias(from, alias)).(UpdateBuilder)
+}
+
+// Where adds WHERE expressions to the query.
+//
+// See SelectBuilder.Where for more information.
+func (b UpdateBuilder) Where(pred interface{}, args ...interface{}) UpdateBuilder {
+ return builder.Append(b, "WhereParts", newWherePart(pred, args...)).(UpdateBuilder)
+}
+
+// OrderBy adds ORDER BY expressions to the query.
+func (b UpdateBuilder) OrderBy(orderBys ...string) UpdateBuilder {
+ return builder.Extend(b, "OrderBys", orderBys).(UpdateBuilder)
+}
+
+// Limit sets a LIMIT clause on the query.
+func (b UpdateBuilder) Limit(limit uint64) UpdateBuilder {
+ return builder.Set(b, "Limit", fmt.Sprintf("%d", limit)).(UpdateBuilder)
+}
+
+// Offset sets an OFFSET clause on the query.
+func (b UpdateBuilder) Offset(offset uint64) UpdateBuilder {
+ return builder.Set(b, "Offset", fmt.Sprintf("%d", offset)).(UpdateBuilder)
+}
+
+// Suffix adds an expression to the end of the query
+func (b UpdateBuilder) Suffix(sql string, args ...interface{}) UpdateBuilder {
+ return b.SuffixExpr(Expr(sql, args...))
+}
+
+// SuffixExpr adds an expression to the end of the query
+func (b UpdateBuilder) SuffixExpr(expr Sqlizer) UpdateBuilder {
+ return builder.Append(b, "Suffixes", expr).(UpdateBuilder)
+}
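An UpdateBuilder sketch (illustrative schema): Sqlizer values in Set are emitted verbatim, so expressions like NOW() are not bound as args.

```go
package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	sql, args, _ := sq.Update("users").
		Set("name", "moe").
		Set("updated_at", sq.Expr("NOW()")).
		Where(sq.Eq{"id": 7}).
		ToSql()
	fmt.Println(sql)  // UPDATE users SET name = ?, updated_at = NOW() WHERE id = ?
	fmt.Println(args) // [moe 7]
}
```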
diff --git a/vendor/github.com/Masterminds/squirrel/update_ctx.go b/vendor/github.com/Masterminds/squirrel/update_ctx.go
new file mode 100644
index 000000000..ad479f96f
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/update_ctx.go
@@ -0,0 +1,69 @@
+// +build go1.8
+
+package squirrel
+
+import (
+ "context"
+ "database/sql"
+
+ "github.com/lann/builder"
+)
+
+func (d *updateData) ExecContext(ctx context.Context) (sql.Result, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ ctxRunner, ok := d.RunWith.(ExecerContext)
+ if !ok {
+ return nil, NoContextSupport
+ }
+ return ExecContextWith(ctx, ctxRunner, d)
+}
+
+func (d *updateData) QueryContext(ctx context.Context) (*sql.Rows, error) {
+ if d.RunWith == nil {
+ return nil, RunnerNotSet
+ }
+ ctxRunner, ok := d.RunWith.(QueryerContext)
+ if !ok {
+ return nil, NoContextSupport
+ }
+ return QueryContextWith(ctx, ctxRunner, d)
+}
+
+func (d *updateData) QueryRowContext(ctx context.Context) RowScanner {
+ if d.RunWith == nil {
+ return &Row{err: RunnerNotSet}
+ }
+ queryRower, ok := d.RunWith.(QueryRowerContext)
+ if !ok {
+ if _, ok := d.RunWith.(QueryerContext); !ok {
+ return &Row{err: RunnerNotQueryRunner}
+ }
+ return &Row{err: NoContextSupport}
+ }
+ return QueryRowContextWith(ctx, queryRower, d)
+}
+
+// ExecContext builds the query and calls ExecContext on the Runner set by RunWith.
+func (b UpdateBuilder) ExecContext(ctx context.Context) (sql.Result, error) {
+ data := builder.GetStruct(b).(updateData)
+ return data.ExecContext(ctx)
+}
+
+// QueryContext builds the query and calls QueryContext on the Runner set by RunWith.
+func (b UpdateBuilder) QueryContext(ctx context.Context) (*sql.Rows, error) {
+ data := builder.GetStruct(b).(updateData)
+ return data.QueryContext(ctx)
+}
+
+// QueryRowContext builds the query and calls QueryRowContext on the Runner set by RunWith.
+func (b UpdateBuilder) QueryRowContext(ctx context.Context) RowScanner {
+ data := builder.GetStruct(b).(updateData)
+ return data.QueryRowContext(ctx)
+}
+
+// ScanContext is a shortcut for QueryRowContext().Scan.
+func (b UpdateBuilder) ScanContext(ctx context.Context, dest ...interface{}) error {
+ return b.QueryRowContext(ctx).Scan(dest...)
+}
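+
+// A minimal usage sketch (illustrative; db and ctx are assumed to exist, and
+// the runner passed to RunWith must support context, e.g. *sql.DB):
+//
+//   res, err := Update("users").
+//       Set("active", false).
+//       Where(Eq{"id": 1}).
+//       RunWith(db).
+//       ExecContext(ctx)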
diff --git a/vendor/github.com/Masterminds/squirrel/where.go b/vendor/github.com/Masterminds/squirrel/where.go
new file mode 100644
index 000000000..976b63ace
--- /dev/null
+++ b/vendor/github.com/Masterminds/squirrel/where.go
@@ -0,0 +1,30 @@
+package squirrel
+
+import (
+ "fmt"
+)
+
+type wherePart part
+
+func newWherePart(pred interface{}, args ...interface{}) Sqlizer {
+ return &wherePart{pred: pred, args: args}
+}
+
+func (p wherePart) ToSql() (sql string, args []interface{}, err error) {
+ switch pred := p.pred.(type) {
+ case nil:
+ // no-op
+ case rawSqlizer:
+ return pred.toSqlRaw()
+ case Sqlizer:
+ return pred.ToSql()
+ case map[string]interface{}:
+ return Eq(pred).ToSql()
+ case string:
+ sql = pred
+ args = p.args
+ default:
+ err = fmt.Errorf("expected string-keyed map or string, not %T", pred)
+ }
+ return
+}
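+
+// The predicate forms accepted above, side by side (illustrative):
+//
+//   Where("id = ?", 1)                     // raw string plus args
+//   Where(Eq{"id": 1})                     // any Sqlizer
+//   Where(map[string]interface{}{"id": 1}) // map, treated as Eq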
diff --git a/vendor/github.com/asaskevich/govalidator/.gitignore b/vendor/github.com/asaskevich/govalidator/.gitignore
new file mode 100644
index 000000000..8d69a9418
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/.gitignore
@@ -0,0 +1,15 @@
+bin/
+.idea/
+# Binaries for programs and plugins
+*.exe
+*.exe~
+*.dll
+*.so
+*.dylib
+
+# Test binary, built with `go test -c`
+*.test
+
+# Output of the go coverage tool, specifically when used with LiteIDE
+*.out
+
diff --git a/vendor/github.com/asaskevich/govalidator/.travis.yml b/vendor/github.com/asaskevich/govalidator/.travis.yml
new file mode 100644
index 000000000..bb83c6670
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/.travis.yml
@@ -0,0 +1,12 @@
+language: go
+dist: xenial
+go:
+ - '1.10'
+ - '1.11'
+ - '1.12'
+ - '1.13'
+ - 'tip'
+
+script:
+ - go test -coverpkg=./... -coverprofile=coverage.info -timeout=5s
+ - bash <(curl -s https://codecov.io/bash)
diff --git a/vendor/github.com/asaskevich/govalidator/CODE_OF_CONDUCT.md b/vendor/github.com/asaskevich/govalidator/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..4b462b0d8
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/CODE_OF_CONDUCT.md
@@ -0,0 +1,43 @@
+# Contributor Code of Conduct
+
+This project adheres to [The Code Manifesto](http://codemanifesto.com)
+as its guidelines for contributor interactions.
+
+## The Code Manifesto
+
+We want to work in an ecosystem that empowers developers to reach their
+potential — one that encourages growth and effective collaboration. A space
+that is safe for all.
+
+A space such as this benefits everyone that participates in it. It encourages
+new developers to enter our field. It is through discussion and collaboration
+that we grow, and through growth that we improve.
+
+In the effort to create such a place, we hold to these values:
+
+1. **Discrimination limits us.** This includes discrimination on the basis of
+ race, gender, sexual orientation, gender identity, age, nationality,
+ technology and any other arbitrary exclusion of a group of people.
+2. **Boundaries honor us.** Your comfort levels are not everyone’s comfort
+ levels. Remember that, and if brought to your attention, heed it.
+3. **We are our biggest assets.** None of us were born masters of our trade.
+ Each of us has been helped along the way. Return that favor, when and where
+ you can.
+4. **We are resources for the future.** As an extension of #3, share what you
+ know. Make yourself a resource to help those that come after you.
+5. **Respect defines us.** Treat others as you wish to be treated. Make your
+ discussions, criticisms and debates from a position of respectfulness. Ask
+ yourself, is it true? Is it necessary? Is it constructive? Anything less is
+ unacceptable.
+6. **Reactions require grace.** Angry responses are valid, but abusive language
+ and vindictive actions are toxic. When something happens that offends you,
+ handle it assertively, but be respectful. Escalate reasonably, and try to
+ allow the offender an opportunity to explain themselves, and possibly
+ correct the issue.
+7. **Opinions are just that: opinions.** Each and every one of us, due to our
+ background and upbringing, have varying opinions. That is perfectly
+ acceptable. Remember this: if you respect your own opinions, you should
+ respect the opinions of others.
+8. **To err is human.** You might not intend it, but mistakes do happen and
+ contribute to build experience. Tolerate honest mistakes, and don't
+ hesitate to apologize if you make one yourself.
diff --git a/vendor/github.com/asaskevich/govalidator/CONTRIBUTING.md b/vendor/github.com/asaskevich/govalidator/CONTRIBUTING.md
new file mode 100644
index 000000000..7ed268a1e
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/CONTRIBUTING.md
@@ -0,0 +1,63 @@
+#### Support
+If you do have a contribution to the package, feel free to create a Pull Request or an Issue.
+
+#### What to contribute
+If you don't know where to start, here are some features and functions that still need work:
+
+- [ ] Refactor code
+- [ ] Edit the docs and [README](https://github.com/asaskevich/govalidator/README.md): spellcheck, grammar and typo check
+- [ ] Create an up-to-date list of contributors and projects that are currently using this package
+- [ ] Resolve [issues and bugs](https://github.com/asaskevich/govalidator/issues)
+- [ ] Keep the [list of functions](https://github.com/asaskevich/govalidator#list-of-functions) up to date
+- [ ] Update the [list of validators](https://github.com/asaskevich/govalidator#validatestruct-2) that are available for `ValidateStruct` and add new ones
+- [ ] Implement new validators: `IsFQDN`, `IsIMEI`, `IsPostalCode`, `IsISIN`, `IsISRC` etc
+- [x] Implement [validation by maps](https://github.com/asaskevich/govalidator/issues/224)
+- [ ] Implement fuzzing testing
+- [ ] Implement some struct/map/array utilities
+- [ ] Implement map/array validation
+- [ ] Implement benchmarking
+- [ ] Implement batch of examples
+- [ ] Look at forks for new features and fixes
+
+#### Advice
+Feel free to create what you want, but keep in mind when you implement new features:
+- Code must be clear and readable, and the names of variables/constants must clearly describe what they are for
+- Public functions must be documented and described in the source file and added to the list of available functions in README.md
+- There must be unit tests for any new functions and improvements
+
+## Financial contributions
+
+We also welcome financial contributions in full transparency on our [open collective](https://opencollective.com/govalidator).
+Anyone can file an expense. If the expense makes sense for the development of the community, it will be "merged" in the ledger of our open collective by the core contributors and the person who filed the expense will be reimbursed.
+
+
+## Credits
+
+
+### Contributors
+
+Thank you to all the people who have already contributed to govalidator!
+
+
+
+### Backers
+
+Thank you to all our backers! [[Become a backer](https://opencollective.com/govalidator#backer)]
+
+
+
+
+### Sponsors
+
+Thank you to all our sponsors! (please ask your company to also support this open source project by [becoming a sponsor](https://opencollective.com/govalidator#sponsor))
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/vendor/github.com/asaskevich/govalidator/LICENSE b/vendor/github.com/asaskevich/govalidator/LICENSE
new file mode 100644
index 000000000..cacba9102
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2014-2020 Alex Saskevich
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
\ No newline at end of file
diff --git a/vendor/github.com/asaskevich/govalidator/README.md b/vendor/github.com/asaskevich/govalidator/README.md
new file mode 100644
index 000000000..2c3fc35eb
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/README.md
@@ -0,0 +1,622 @@
+govalidator
+===========
+[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/asaskevich/govalidator?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) [![GoDoc](https://godoc.org/github.com/asaskevich/govalidator?status.png)](https://godoc.org/github.com/asaskevich/govalidator)
+[![Build Status](https://travis-ci.org/asaskevich/govalidator.svg?branch=master)](https://travis-ci.org/asaskevich/govalidator)
+[![Coverage](https://codecov.io/gh/asaskevich/govalidator/branch/master/graph/badge.svg)](https://codecov.io/gh/asaskevich/govalidator) [![Go Report Card](https://goreportcard.com/badge/github.com/asaskevich/govalidator)](https://goreportcard.com/report/github.com/asaskevich/govalidator) [![GoSearch](http://go-search.org/badge?id=github.com%2Fasaskevich%2Fgovalidator)](http://go-search.org/view?id=github.com%2Fasaskevich%2Fgovalidator) [![Backers on Open Collective](https://opencollective.com/govalidator/backers/badge.svg)](#backers) [![Sponsors on Open Collective](https://opencollective.com/govalidator/sponsors/badge.svg)](#sponsors) [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fasaskevich%2Fgovalidator.svg?type=shield)](https://app.fossa.io/projects/git%2Bgithub.com%2Fasaskevich%2Fgovalidator?ref=badge_shield)
+
+A package of validators and sanitizers for strings, structs and collections. Based on [validator.js](https://github.com/chriso/validator.js).
+
+#### Installation
+Make sure that Go is installed on your computer.
+Type the following command in your terminal:
+
+ go get github.com/asaskevich/govalidator
+
+or you can get a specific release of the package with `gopkg.in`:
+
+ go get gopkg.in/asaskevich/govalidator.v10
+
+After that, the package is ready to use.
+
+
+#### Import package in your project
+Add the following line to your `*.go` file:
+```go
+import "github.com/asaskevich/govalidator"
+```
+If you'd rather not use the long `govalidator` name, you can alias the import like this:
+```go
+import (
+ valid "github.com/asaskevich/govalidator"
+)
+```
+
+#### Activate behavior to require all fields to have a validation tag by default
+`SetFieldsRequiredByDefault` causes validation to fail when struct fields do not include validations or are not explicitly marked as exempt (using `valid:"-"` or `valid:"email,optional"`). A good place to activate this is a package init function or the main() function.
+
+`SetNilPtrAllowedByRequired` causes validation to pass when struct fields marked by `required` are set to nil. This is disabled by default for consistency, but some packages that need to distinguish between `nil` and `zero value` states can use this. If disabled, both `nil` and `zero` values cause validation errors.
+
+```go
+import "github.com/asaskevich/govalidator"
+
+func init() {
+ govalidator.SetFieldsRequiredByDefault(true)
+}
+```
+
+Here's some code to explain it:
+```go
+// this struct definition will fail govalidator.ValidateStruct() (and the field values do not matter):
+type exampleStruct struct {
+ Name string ``
+ Email string `valid:"email"`
+}
+
+// this, however, will only fail when Email is empty or an invalid email address:
+type exampleStruct2 struct {
+ Name string `valid:"-"`
+ Email string `valid:"email"`
+}
+
+// lastly, this will only fail when Email is an invalid email address but not when it's empty:
+type exampleStruct3 struct {
+ Name string `valid:"-"`
+ Email string `valid:"email,optional"`
+}
+```
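+
+`SetNilPtrAllowedByRequired` can be switched on the same way. A minimal sketch of the behavior described above (the struct and field names are illustrative):
+```go
+type Account struct {
+ Owner *string `valid:"required"`
+}
+
+func init() {
+ // allow nil pointers marked `required` to pass validation
+ govalidator.SetNilPtrAllowedByRequired(true)
+}
+```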
+
+#### Recent breaking changes (see [#123](https://github.com/asaskevich/govalidator/pull/123))
+##### Custom validator function signature
+A context was added as the second parameter; for structs this is the object being validated, which makes dependent validation possible.
+```go
+import "github.com/asaskevich/govalidator"
+
+// old signature
+func(i interface{}) bool
+
+// new signature
+func(i interface{}, o interface{}) bool
+```
+
+##### Adding a custom validator
+This was changed to prevent data races when accessing custom validators.
+```go
+import "github.com/asaskevich/govalidator"
+
+// before
+govalidator.CustomTypeTagMap["customByteArrayValidator"] = func(i interface{}, o interface{}) bool {
+ // ...
+}
+
+// after
+govalidator.CustomTypeTagMap.Set("customByteArrayValidator", func(i interface{}, o interface{}) bool {
+ // ...
+})
+```
+
+#### List of functions:
+```go
+func Abs(value float64) float64
+func BlackList(str, chars string) string
+func ByteLength(str string, params ...string) bool
+func CamelCaseToUnderscore(str string) string
+func Contains(str, substring string) bool
+func Count(array []interface{}, iterator ConditionIterator) int
+func Each(array []interface{}, iterator Iterator)
+func ErrorByField(e error, field string) string
+func ErrorsByField(e error) map[string]string
+func Filter(array []interface{}, iterator ConditionIterator) []interface{}
+func Find(array []interface{}, iterator ConditionIterator) interface{}
+func GetLine(s string, index int) (string, error)
+func GetLines(s string) []string
+func HasLowerCase(str string) bool
+func HasUpperCase(str string) bool
+func HasWhitespace(str string) bool
+func HasWhitespaceOnly(str string) bool
+func InRange(value interface{}, left interface{}, right interface{}) bool
+func InRangeFloat32(value, left, right float32) bool
+func InRangeFloat64(value, left, right float64) bool
+func InRangeInt(value, left, right interface{}) bool
+func IsASCII(str string) bool
+func IsAlpha(str string) bool
+func IsAlphanumeric(str string) bool
+func IsBase64(str string) bool
+func IsByteLength(str string, min, max int) bool
+func IsCIDR(str string) bool
+func IsCRC32(str string) bool
+func IsCRC32b(str string) bool
+func IsCreditCard(str string) bool
+func IsDNSName(str string) bool
+func IsDataURI(str string) bool
+func IsDialString(str string) bool
+func IsDivisibleBy(str, num string) bool
+func IsEmail(str string) bool
+func IsExistingEmail(email string) bool
+func IsFilePath(str string) (bool, int)
+func IsFloat(str string) bool
+func IsFullWidth(str string) bool
+func IsHalfWidth(str string) bool
+func IsHash(str string, algorithm string) bool
+func IsHexadecimal(str string) bool
+func IsHexcolor(str string) bool
+func IsHost(str string) bool
+func IsIP(str string) bool
+func IsIPv4(str string) bool
+func IsIPv6(str string) bool
+func IsISBN(str string, version int) bool
+func IsISBN10(str string) bool
+func IsISBN13(str string) bool
+func IsISO3166Alpha2(str string) bool
+func IsISO3166Alpha3(str string) bool
+func IsISO4217(str string) bool
+func IsISO693Alpha2(str string) bool
+func IsISO693Alpha3b(str string) bool
+func IsIn(str string, params ...string) bool
+func IsInRaw(str string, params ...string) bool
+func IsInt(str string) bool
+func IsJSON(str string) bool
+func IsLatitude(str string) bool
+func IsLongitude(str string) bool
+func IsLowerCase(str string) bool
+func IsMAC(str string) bool
+func IsMD4(str string) bool
+func IsMD5(str string) bool
+func IsMagnetURI(str string) bool
+func IsMongoID(str string) bool
+func IsMultibyte(str string) bool
+func IsNatural(value float64) bool
+func IsNegative(value float64) bool
+func IsNonNegative(value float64) bool
+func IsNonPositive(value float64) bool
+func IsNotNull(str string) bool
+func IsNull(str string) bool
+func IsNumeric(str string) bool
+func IsPort(str string) bool
+func IsPositive(value float64) bool
+func IsPrintableASCII(str string) bool
+func IsRFC3339(str string) bool
+func IsRFC3339WithoutZone(str string) bool
+func IsRGBcolor(str string) bool
+func IsRegex(str string) bool
+func IsRequestURI(rawurl string) bool
+func IsRequestURL(rawurl string) bool
+func IsRipeMD128(str string) bool
+func IsRipeMD160(str string) bool
+func IsRsaPub(str string, params ...string) bool
+func IsRsaPublicKey(str string, keylen int) bool
+func IsSHA1(str string) bool
+func IsSHA256(str string) bool
+func IsSHA384(str string) bool
+func IsSHA512(str string) bool
+func IsSSN(str string) bool
+func IsSemver(str string) bool
+func IsTiger128(str string) bool
+func IsTiger160(str string) bool
+func IsTiger192(str string) bool
+func IsTime(str string, format string) bool
+func IsType(v interface{}, params ...string) bool
+func IsURL(str string) bool
+func IsUTFDigit(str string) bool
+func IsUTFLetter(str string) bool
+func IsUTFLetterNumeric(str string) bool
+func IsUTFNumeric(str string) bool
+func IsUUID(str string) bool
+func IsUUIDv3(str string) bool
+func IsUUIDv4(str string) bool
+func IsUUIDv5(str string) bool
+func IsULID(str string) bool
+func IsUnixTime(str string) bool
+func IsUpperCase(str string) bool
+func IsVariableWidth(str string) bool
+func IsWhole(value float64) bool
+func LeftTrim(str, chars string) string
+func Map(array []interface{}, iterator ResultIterator) []interface{}
+func Matches(str, pattern string) bool
+func MaxStringLength(str string, params ...string) bool
+func MinStringLength(str string, params ...string) bool
+func NormalizeEmail(str string) (string, error)
+func PadBoth(str string, padStr string, padLen int) string
+func PadLeft(str string, padStr string, padLen int) string
+func PadRight(str string, padStr string, padLen int) string
+func PrependPathToErrors(err error, path string) error
+func Range(str string, params ...string) bool
+func RemoveTags(s string) string
+func ReplacePattern(str, pattern, replace string) string
+func Reverse(s string) string
+func RightTrim(str, chars string) string
+func RuneLength(str string, params ...string) bool
+func SafeFileName(str string) string
+func SetFieldsRequiredByDefault(value bool)
+func SetNilPtrAllowedByRequired(value bool)
+func Sign(value float64) float64
+func StringLength(str string, params ...string) bool
+func StringMatches(s string, params ...string) bool
+func StripLow(str string, keepNewLines bool) string
+func ToBoolean(str string) (bool, error)
+func ToFloat(str string) (float64, error)
+func ToInt(value interface{}) (res int64, err error)
+func ToJSON(obj interface{}) (string, error)
+func ToString(obj interface{}) string
+func Trim(str, chars string) string
+func Truncate(str string, length int, ending string) string
+func TruncatingErrorf(str string, args ...interface{}) error
+func UnderscoreToCamelCase(s string) string
+func ValidateMap(inputMap map[string]interface{}, validationMap map[string]interface{}) (bool, error)
+func ValidateStruct(s interface{}) (bool, error)
+func WhiteList(str, chars string) string
+type ConditionIterator
+type CustomTypeValidator
+type Error
+func (e Error) Error() string
+type Errors
+func (es Errors) Error() string
+func (es Errors) Errors() []error
+type ISO3166Entry
+type ISO693Entry
+type InterfaceParamValidator
+type Iterator
+type ParamValidator
+type ResultIterator
+type UnsupportedTypeError
+func (e *UnsupportedTypeError) Error() string
+type Validator
+```
+
+#### Examples
+###### IsURL
+```go
+println(govalidator.IsURL(`http://user:pass@domain.com/path/page`))
+```
+###### IsType
+```go
+println(govalidator.IsType("Bob", "string"))
+println(govalidator.IsType(1, "int"))
+i := 1
+println(govalidator.IsType(&i, "*int"))
+```
+
+IsType can be used through the tag `type`, which is essential for map validation:
+```go
+type User struct {
+ Name string `valid:"type(string)"`
+ Age int `valid:"type(int)"`
+ Meta interface{} `valid:"type(string)"`
+}
+result, err := govalidator.ValidateStruct(User{"Bob", 20, "meta"})
+if err != nil {
+ println("error: " + err.Error())
+}
+println(result)
+```
+###### ToString
+```go
+type User struct {
+ FirstName string
+ LastName string
+}
+
+str := govalidator.ToString(&User{"John", "Juan"})
+println(str)
+```
+###### Each, Map, Filter, Count for slices
+Each iterates over the slice/array and calls Iterator for every item
+```go
+data := []interface{}{1, 2, 3, 4, 5}
+var fn govalidator.Iterator = func(value interface{}, index int) {
+ println(value.(int))
+}
+govalidator.Each(data, fn)
+```
+```go
+data := []interface{}{1, 2, 3, 4, 5}
+var fn govalidator.ResultIterator = func(value interface{}, index int) interface{} {
+ return value.(int) * 3
+}
+_ = govalidator.Map(data, fn) // result = []interface{}{3, 6, 9, 12, 15}
+```
+```go
+data := []interface{}{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
+var fn govalidator.ConditionIterator = func(value interface{}, index int) bool {
+ return value.(int)%2 == 0
+}
+_ = govalidator.Filter(data, fn) // result = []interface{}{2, 4, 6, 8, 10}
+_ = govalidator.Count(data, fn) // result = 5
+```
+###### ValidateStruct [#2](https://github.com/asaskevich/govalidator/pull/2)
+If you want to validate structs, you can use the `valid` tag for any field in your structure. All validators used with a field in one tag are separated by commas. If you want to skip validation, place `-` in your tag. If you need a validator that is not on the list below, you can add it like this:
+```go
+govalidator.TagMap["duck"] = govalidator.Validator(func(str string) bool {
+ return str == "duck"
+})
+```
+For completely custom validators (interface-based), see below.
+
+Here is a list of available validators for struct fields (validator - used function):
+```go
+"email": IsEmail,
+"url": IsURL,
+"dialstring": IsDialString,
+"requrl": IsRequestURL,
+"requri": IsRequestURI,
+"alpha": IsAlpha,
+"utfletter": IsUTFLetter,
+"alphanum": IsAlphanumeric,
+"utfletternum": IsUTFLetterNumeric,
+"numeric": IsNumeric,
+"utfnumeric": IsUTFNumeric,
+"utfdigit": IsUTFDigit,
+"hexadecimal": IsHexadecimal,
+"hexcolor": IsHexcolor,
+"rgbcolor": IsRGBcolor,
+"lowercase": IsLowerCase,
+"uppercase": IsUpperCase,
+"int": IsInt,
+"float": IsFloat,
+"null": IsNull,
+"uuid": IsUUID,
+"uuidv3": IsUUIDv3,
+"uuidv4": IsUUIDv4,
+"uuidv5": IsUUIDv5,
+"creditcard": IsCreditCard,
+"isbn10": IsISBN10,
+"isbn13": IsISBN13,
+"json": IsJSON,
+"multibyte": IsMultibyte,
+"ascii": IsASCII,
+"printableascii": IsPrintableASCII,
+"fullwidth": IsFullWidth,
+"halfwidth": IsHalfWidth,
+"variablewidth": IsVariableWidth,
+"base64": IsBase64,
+"datauri": IsDataURI,
+"ip": IsIP,
+"port": IsPort,
+"ipv4": IsIPv4,
+"ipv6": IsIPv6,
+"dns": IsDNSName,
+"host": IsHost,
+"mac": IsMAC,
+"latitude": IsLatitude,
+"longitude": IsLongitude,
+"ssn": IsSSN,
+"semver": IsSemver,
+"rfc3339": IsRFC3339,
+"rfc3339WithoutZone": IsRFC3339WithoutZone,
+"ISO3166Alpha2": IsISO3166Alpha2,
+"ISO3166Alpha3": IsISO3166Alpha3,
+"ulid": IsULID,
+```
+Validators with parameters
+
+```go
+"range(min|max)": Range,
+"length(min|max)": ByteLength,
+"runelength(min|max)": RuneLength,
+"stringlength(min|max)": StringLength,
+"matches(pattern)": StringMatches,
+"in(string1|string2|...|stringN)": IsIn,
+"rsapub(keylength)" : IsRsaPub,
+"minstringlength(int): MinStringLength,
+"maxstringlength(int): MaxStringLength,
+```
+Validators with parameters for any type
+
+```go
+"type(type)": IsType,
+```
+
+And here is a small example of usage:
+```go
+type Post struct {
+ Title string `valid:"alphanum,required"`
+ Message string `valid:"duck,ascii"`
+ Message2 string `valid:"animal(dog)"`
+ AuthorIP string `valid:"ipv4"`
+ Date string `valid:"-"`
+}
+post := &Post{
+ Title: "My Example Post",
+ Message: "duck",
+ Message2: "dog",
+ AuthorIP: "123.234.54.3",
+}
+
+// Add your own struct validation tags
+govalidator.TagMap["duck"] = govalidator.Validator(func(str string) bool {
+ return str == "duck"
+})
+
+// Add your own struct validation tags with parameter
+govalidator.ParamTagMap["animal"] = govalidator.ParamValidator(func(str string, params ...string) bool {
+ species := params[0]
+ return str == species
+})
+govalidator.ParamTagRegexMap["animal"] = regexp.MustCompile("^animal\\((\\w+)\\)$")
+
+result, err := govalidator.ValidateStruct(post)
+if err != nil {
+ println("error: " + err.Error())
+}
+println(result)
+```
+###### ValidateMap [#338](https://github.com/asaskevich/govalidator/pull/338)
+If you want to validate maps, you can use the map to be validated and a validation map that contains the same tags used in ValidateStruct; both maps have to be of the form `map[string]interface{}`.
+
+So here is a small example of usage:
+```go
+var mapTemplate = map[string]interface{}{
+ "name":"required,alpha",
+ "family":"required,alpha",
+ "email":"required,email",
+ "cell-phone":"numeric",
+ "address":map[string]interface{}{
+ "line1":"required,alphanum",
+ "line2":"alphanum",
+ "postal-code":"numeric",
+ },
+}
+
+var inputMap = map[string]interface{}{
+ "name":"Bob",
+ "family":"Smith",
+ "email":"foo@bar.baz",
+ "address":map[string]interface{}{
+ "line1":"",
+ "line2":"",
+ "postal-code":"",
+ },
+}
+
+result, err := govalidator.ValidateMap(inputMap, mapTemplate)
+if err != nil {
+ println("error: " + err.Error())
+}
+println(result)
+```
+
+###### WhiteList
+```go
+// Remove all characters from the string except those between "a" and "z"
+println(govalidator.WhiteList("a3a43a5a4a3a2a23a4a5a4a3a4", "a-z") == "aaaaaaaaaaaa")
+```
+
+###### Custom validation functions
+Custom validation using your own domain-specific validators is also available – here's an example of how to use it:
+```go
+import "github.com/asaskevich/govalidator"
+
+type CustomByteArray [6]byte // custom types are supported and can be validated
+
+type StructWithCustomByteArray struct {
+ ID CustomByteArray `valid:"customByteArrayValidator,customMinLengthValidator"` // multiple custom validators are possible as well and will be evaluated in sequence
+ Email string `valid:"email"`
+ CustomMinLength int `valid:"-"`
+}
+
+govalidator.CustomTypeTagMap.Set("customByteArrayValidator", func(i interface{}, context interface{}) bool {
+ switch context.(type) { // you can type switch on the context interface being validated
+ case StructWithCustomByteArray:
+ // you can check and validate against some other field in the context,
+ // return early or not validate against the context at all – your choice
+ case SomeOtherType:
+ // ...
+ default:
+ // expecting some other type? Throw/panic here or continue
+ }
+
+ switch v := i.(type) { // type switch on the struct field being validated
+ case CustomByteArray:
+ for _, e := range v { // this validator checks that the byte array is not empty, i.e. not all zeroes
+ if e != 0 {
+ return true
+ }
+ }
+ }
+ return false
+})
+govalidator.CustomTypeTagMap.Set("customMinLengthValidator", func(i interface{}, context interface{}) bool {
+ switch v := context.(type) { // this validates a field against the value in another field, i.e. dependent validation
+ case StructWithCustomByteArray:
+ return len(v.ID) >= v.CustomMinLength
+ }
+ return false
+})
+```
+
+###### Loop over Error()
+By default, .Error() returns all errors as a single string. To access each error you can do this:
+```go
+ if err != nil {
+ errs := err.(govalidator.Errors).Errors()
+ for _, e := range errs {
+ fmt.Println(e.Error())
+ }
+ }
+```
+
+###### Custom error messages
+Custom error messages are supported via annotations by adding the `~` separator - here's an example of how to use it:
+```go
+type Ticket struct {
+ Id int64 `json:"id"`
+ FirstName string `json:"firstname" valid:"required~First name is blank"`
+}
+```
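+
+A minimal sketch of the resulting message (output shown as a comment; the values are illustrative):
+```go
+_, err := govalidator.ValidateStruct(Ticket{Id: 1})
+if err != nil {
+ println(err.Error()) // "First name is blank"
+}
+```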
+
+#### Notes
+Documentation is available here: [godoc.org](https://godoc.org/github.com/asaskevich/govalidator).
+Full information about code coverage is also available here: [govalidator on gocover.io](http://gocover.io/github.com/asaskevich/govalidator).
+
+#### Support
+If you do have a contribution to the package, feel free to create a Pull Request or an Issue.
+
+#### What to contribute
+If you don't know where to start, here are some features and functions that still need work:
+
+- [ ] Refactor code
+- [ ] Edit the docs and [README](https://github.com/asaskevich/govalidator/README.md): spellcheck, grammar and typo check
+- [ ] Create an up-to-date list of contributors and projects that are currently using this package
+- [ ] Resolve [issues and bugs](https://github.com/asaskevich/govalidator/issues)
+- [ ] Keep the [list of functions](https://github.com/asaskevich/govalidator#list-of-functions) up to date
+- [ ] Update the [list of validators](https://github.com/asaskevich/govalidator#validatestruct-2) that are available for `ValidateStruct` and add new ones
+- [ ] Implement new validators: `IsFQDN`, `IsIMEI`, `IsPostalCode`, `IsISIN`, `IsISRC` etc
+- [x] Implement [validation by maps](https://github.com/asaskevich/govalidator/issues/224)
+- [ ] Implement fuzzing testing
+- [ ] Implement some struct/map/array utilities
+- [ ] Implement map/array validation
+- [ ] Implement benchmarking
+- [ ] Implement batch of examples
+- [ ] Look at forks for new features and fixes
+
+#### Advice
+Feel free to create what you want, but keep in mind when you implement new features:
+- Code must be clear and readable, and the names of variables/constants must clearly describe what they are for
+- Public functions must be documented and described in the source file and added to the list of available functions in README.md
+- There must be unit tests for any new functions and improvements
+
+## Credits
+### Contributors
+
+This project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].
+
+#### Special thanks to [contributors](https://github.com/asaskevich/govalidator/graphs/contributors)
+* [Daniel Lohse](https://github.com/annismckenzie)
+* [Attila Oláh](https://github.com/attilaolah)
+* [Daniel Korner](https://github.com/Dadie)
+* [Steven Wilkin](https://github.com/stevenwilkin)
+* [Deiwin Sarjas](https://github.com/deiwin)
+* [Noah Shibley](https://github.com/slugmobile)
+* [Nathan Davies](https://github.com/nathj07)
+* [Matt Sanford](https://github.com/mzsanford)
+* [Simon ccl1115](https://github.com/ccl1115)
+
+
+
+
+### Backers
+
+Thank you to all our backers! 🙏 [[Become a backer](https://opencollective.com/govalidator#backer)]
+
+
+
+
+### Sponsors
+
+Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [[Become a sponsor](https://opencollective.com/govalidator#sponsor)]
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+## License
+[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fasaskevich%2Fgovalidator.svg?type=large)](https://app.fossa.io/projects/git%2Bgithub.com%2Fasaskevich%2Fgovalidator?ref=badge_large)
diff --git a/vendor/github.com/asaskevich/govalidator/arrays.go b/vendor/github.com/asaskevich/govalidator/arrays.go
new file mode 100644
index 000000000..3e1da7cb4
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/arrays.go
@@ -0,0 +1,87 @@
+package govalidator
+
+// Iterator is a function that accepts an element of a slice/array and its index
+type Iterator func(interface{}, int)
+
+// ResultIterator is a function that accepts an element of a slice/array and its index and returns any result
+type ResultIterator func(interface{}, int) interface{}
+
+// ConditionIterator is a function that accepts an element of a slice/array and its index and returns a boolean
+type ConditionIterator func(interface{}, int) bool
+
+// ReduceIterator is a function that accepts two elements of a slice/array and returns the result of merging those values
+type ReduceIterator func(interface{}, interface{}) interface{}
+
+// Some validates that at least one item of the array corresponds to ConditionIterator. Returns boolean.
+func Some(array []interface{}, iterator ConditionIterator) bool {
+ res := false
+ for index, data := range array {
+ res = res || iterator(data, index)
+ }
+ return res
+}
+
+// Every validates that every item of the array corresponds to ConditionIterator. Returns boolean.
+func Every(array []interface{}, iterator ConditionIterator) bool {
+ res := true
+ for index, data := range array {
+ res = res && iterator(data, index)
+ }
+ return res
+}
+
+// Reduce boils down a list of values into a single value by ReduceIterator
+func Reduce(array []interface{}, iterator ReduceIterator, initialValue interface{}) interface{} {
+ for _, data := range array {
+ initialValue = iterator(initialValue, data)
+ }
+ return initialValue
+}
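+
+// For example (illustrative):
+//
+//   sum := Reduce([]interface{}{1, 2, 3}, func(acc, v interface{}) interface{} {
+//       return acc.(int) + v.(int)
+//   }, 0)
+//   // sum == 6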
+
+// Each iterates over the slice and applies Iterator to every item
+func Each(array []interface{}, iterator Iterator) {
+ for index, data := range array {
+ iterator(data, index)
+ }
+}
+
+// Map iterates over the slice and applies ResultIterator to every item. Returns a new slice as a result.
+func Map(array []interface{}, iterator ResultIterator) []interface{} {
+ var result = make([]interface{}, len(array))
+ for index, data := range array {
+ result[index] = iterator(data, index)
+ }
+ return result
+}
+
+// Find iterates over the slice and applies ConditionIterator to every item. Returns the first item that meets ConditionIterator, or nil otherwise.
+func Find(array []interface{}, iterator ConditionIterator) interface{} {
+ for index, data := range array {
+ if iterator(data, index) {
+ return data
+ }
+ }
+ return nil
+}
+
+// Filter iterates over the slice and applies ConditionIterator to every item. Returns a new slice.
+func Filter(array []interface{}, iterator ConditionIterator) []interface{} {
+ var result = make([]interface{}, 0)
+ for index, data := range array {
+ if iterator(data, index) {
+ result = append(result, data)
+ }
+ }
+ return result
+}
+
+// Count iterates over the slice and applies ConditionIterator to every item. Returns the count of items that meet ConditionIterator.
+func Count(array []interface{}, iterator ConditionIterator) int {
+ count := 0
+ for index, data := range array {
+ if iterator(data, index) {
+ count = count + 1
+ }
+ }
+ return count
+}
diff --git a/vendor/github.com/asaskevich/govalidator/converter.go b/vendor/github.com/asaskevich/govalidator/converter.go
new file mode 100644
index 000000000..d68e990fc
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/converter.go
@@ -0,0 +1,81 @@
+package govalidator
+
+import (
+ "encoding/json"
+ "fmt"
+ "reflect"
+ "strconv"
+)
+
+// ToString converts the input to a string.
+func ToString(obj interface{}) string {
+ res := fmt.Sprintf("%v", obj)
+ return res
+}
+
+// ToJSON converts the input to a valid JSON string
+func ToJSON(obj interface{}) (string, error) {
+ res, err := json.Marshal(obj)
+ if err != nil {
+ res = []byte("")
+ }
+ return string(res), err
+}
+
+// ToFloat converts the input to a float64, or 0.0 and an error if the input cannot be converted.
+func ToFloat(value interface{}) (res float64, err error) {
+ val := reflect.ValueOf(value)
+
+ switch value.(type) {
+ case int, int8, int16, int32, int64:
+ res = float64(val.Int())
+ case uint, uint8, uint16, uint32, uint64:
+ res = float64(val.Uint())
+ case float32, float64:
+ res = val.Float()
+ case string:
+ res, err = strconv.ParseFloat(val.String(), 64)
+ if err != nil {
+ res = 0
+ }
+ default:
+ err = fmt.Errorf("ToInt: unknown interface type %T", value)
+ res = 0
+ }
+
+ return
+}
+
+// ToInt converts the input string or any numeric type to an int64, or 0 and an error if the input cannot be converted.
+func ToInt(value interface{}) (res int64, err error) {
+ val := reflect.ValueOf(value)
+
+ switch value.(type) {
+ case int, int8, int16, int32, int64:
+ res = val.Int()
+ case uint, uint8, uint16, uint32, uint64:
+ res = int64(val.Uint())
+ case float32, float64:
+ res = int64(val.Float())
+ case string:
+ if IsInt(val.String()) {
+ res, err = strconv.ParseInt(val.String(), 0, 64)
+ if err != nil {
+ res = 0
+ }
+ } else {
+ err = fmt.Errorf("ToInt: invalid numeric format %g", value)
+ res = 0
+ }
+ default:
+ err = fmt.Errorf("ToInt: unknown interface type %T", value)
+ res = 0
+ }
+
+ return
+}
+
+// ToBoolean converts the input string to a boolean.
+func ToBoolean(str string) (bool, error) {
+ return strconv.ParseBool(str)
+}
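+
+// A few illustrative conversions:
+//
+//   f, _ := ToFloat("3.14")   // 3.14
+//   i, _ := ToInt("42")       // int64(42)
+//   b, _ := ToBoolean("true") // true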
diff --git a/vendor/github.com/asaskevich/govalidator/doc.go b/vendor/github.com/asaskevich/govalidator/doc.go
new file mode 100644
index 000000000..55dce62dc
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/doc.go
@@ -0,0 +1,3 @@
+// Package govalidator is a package of validators and sanitizers for strings,
+// structures and collections.
+package govalidator
diff --git a/vendor/github.com/asaskevich/govalidator/error.go b/vendor/github.com/asaskevich/govalidator/error.go
new file mode 100644
index 000000000..1da2336f4
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/error.go
@@ -0,0 +1,47 @@
+package govalidator
+
+import (
+ "sort"
+ "strings"
+)
+
+// Errors is an array of multiple errors and conforms to the error interface.
+type Errors []error
+
+// Errors returns itself.
+func (es Errors) Errors() []error {
+ return es
+}
+
+func (es Errors) Error() string {
+ var errs []string
+ for _, e := range es {
+ errs = append(errs, e.Error())
+ }
+ sort.Strings(errs)
+ return strings.Join(errs, ";")
+}
+
+// Error encapsulates a name, an error and whether there's a custom error message or not.
+type Error struct {
+ Name string
+ Err error
+ CustomErrorMessageExists bool
+
+ // Validator indicates the name of the validator that failed
+ Validator string
+ Path []string
+}
+
+func (e Error) Error() string {
+ if e.CustomErrorMessageExists {
+ return e.Err.Error()
+ }
+
+ errName := e.Name
+ if len(e.Path) > 0 {
+ errName = strings.Join(append(e.Path, e.Name), ".")
+ }
+
+ return errName + ": " + e.Err.Error()
+}
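+
+// Example rendering (illustrative): a failure on a nested field joins the
+// path with dots,
+//
+//   e := Error{Name: "Email", Err: errors.New("invalid email"), Path: []string{"Profile"}}
+//   e.Error() // "Profile.Email: invalid email"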
diff --git a/vendor/github.com/asaskevich/govalidator/numerics.go b/vendor/github.com/asaskevich/govalidator/numerics.go
new file mode 100644
index 000000000..5041d9e86
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/numerics.go
@@ -0,0 +1,100 @@
+package govalidator
+
+import (
+ "math"
+)
+
+// Abs returns the absolute value of the number
+func Abs(value float64) float64 {
+ return math.Abs(value)
+}
+
+// Sign returns the signum of the number: 1 if value > 0, -1 if value < 0, 0 otherwise
+func Sign(value float64) float64 {
+ if value > 0 {
+ return 1
+ } else if value < 0 {
+ return -1
+ } else {
+ return 0
+ }
+}
+
+// IsNegative returns true if value < 0
+func IsNegative(value float64) bool {
+ return value < 0
+}
+
+// IsPositive returns true if value > 0
+func IsPositive(value float64) bool {
+ return value > 0
+}
+
+// IsNonNegative returns true if value >= 0
+func IsNonNegative(value float64) bool {
+ return value >= 0
+}
+
+// IsNonPositive returns true if value <= 0
+func IsNonPositive(value float64) bool {
+ return value <= 0
+}
+
+// InRangeInt returns true if value lies between the left and right borders (inclusive)
+func InRangeInt(value, left, right interface{}) bool {
+ value64, _ := ToInt(value)
+ left64, _ := ToInt(left)
+ right64, _ := ToInt(right)
+ if left64 > right64 {
+ left64, right64 = right64, left64
+ }
+ return value64 >= left64 && value64 <= right64
+}
+
+// InRangeFloat32 returns true if value lies between the left and right borders (inclusive)
+func InRangeFloat32(value, left, right float32) bool {
+ if left > right {
+ left, right = right, left
+ }
+ return value >= left && value <= right
+}
+
+// InRangeFloat64 returns true if value lies between the left and right borders (inclusive)
+func InRangeFloat64(value, left, right float64) bool {
+ if left > right {
+ left, right = right, left
+ }
+ return value >= left && value <= right
+}
+
+// InRange returns true if value lies between the left and right borders; it is generic and handles int, float32, float64 and string.
+// All values must be of the same type.
+// Returns false if the value doesn't lie in the range, or if the values are incompatible or not comparable.
+func InRange(value interface{}, left interface{}, right interface{}) bool {
+ switch value.(type) {
+ case int:
+ intValue, _ := ToInt(value)
+ intLeft, _ := ToInt(left)
+ intRight, _ := ToInt(right)
+ return InRangeInt(intValue, intLeft, intRight)
+ case float32, float64:
+ floatValue, _ := ToFloat(value)
+ floatLeft, _ := ToFloat(left)
+ floatRight, _ := ToFloat(right)
+ return InRangeFloat64(floatValue, floatLeft, floatRight)
+ case string:
+ return value.(string) >= left.(string) && value.(string) <= right.(string)
+ default:
+ return false
+ }
+}
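+
+// A few illustrative calls:
+//
+//   InRange(5, 1, 10)       // true
+//   InRange(5.0, 10.0, 1.0) // true: reversed borders are swapped
+//   InRange("b", "a", "c")  // true: strings compare lexicographically
+//   InRange(5, "a", "c")    // false: the values are not comparable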
+
+// IsWhole returns true if value is a whole number
+func IsWhole(value float64) bool {
+ return math.Remainder(value, 1) == 0
+}
+
+// IsNatural returns true if value is a natural number (positive and whole)
+func IsNatural(value float64) bool {
+ return IsWhole(value) && IsPositive(value)
+}
diff --git a/vendor/github.com/asaskevich/govalidator/patterns.go b/vendor/github.com/asaskevich/govalidator/patterns.go
new file mode 100644
index 000000000..bafc3765e
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/patterns.go
@@ -0,0 +1,113 @@
+package govalidator
+
+import "regexp"
+
+// Basic regular expressions for validating strings
+const (
+ Email string = "^(((([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|((\\x22)((((\\x20|\\x09)*(\\x0d\\x0a))?(\\x20|\\x09)+)?(([\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(\\([\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(((\\x20|\\x09)*(\\x0d\\x0a))?(\\x20|\\x09)+)?(\\x22)))@((([a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(([a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])([a-zA-Z]|\\d|-|\\.|_|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*([a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(([a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(([a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])([a-zA-Z]|\\d|-|_|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*([a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$"
+ CreditCard string = "^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|(222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\\d{3})\\d{11}|6[27][0-9]{14})$"
+ ISBN10 string = "^(?:[0-9]{9}X|[0-9]{10})$"
+ ISBN13 string = "^(?:[0-9]{13})$"
+ UUID3 string = "^[0-9a-f]{8}-[0-9a-f]{4}-3[0-9a-f]{3}-[0-9a-f]{4}-[0-9a-f]{12}$"
+ UUID4 string = "^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$"
+ UUID5 string = "^[0-9a-f]{8}-[0-9a-f]{4}-5[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$"
+ UUID string = "^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
+ Alpha string = "^[a-zA-Z]+$"
+ Alphanumeric string = "^[a-zA-Z0-9]+$"
+ Numeric string = "^[0-9]+$"
+ Int string = "^(?:[-+]?(?:0|[1-9][0-9]*))$"
+ Float string = "^(?:[-+]?(?:[0-9]+))?(?:\\.[0-9]*)?(?:[eE][\\+\\-]?(?:[0-9]+))?$"
+ Hexadecimal string = "^[0-9a-fA-F]+$"
+ Hexcolor string = "^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$"
+ RGBcolor string = "^rgb\\(\\s*(0|[1-9]\\d?|1\\d\\d?|2[0-4]\\d|25[0-5])\\s*,\\s*(0|[1-9]\\d?|1\\d\\d?|2[0-4]\\d|25[0-5])\\s*,\\s*(0|[1-9]\\d?|1\\d\\d?|2[0-4]\\d|25[0-5])\\s*\\)$"
+ ASCII string = "^[\x00-\x7F]+$"
+ Multibyte string = "[^\x00-\x7F]"
+ FullWidth string = "[^\u0020-\u007E\uFF61-\uFF9F\uFFA0-\uFFDC\uFFE8-\uFFEE0-9a-zA-Z]"
+ HalfWidth string = "[\u0020-\u007E\uFF61-\uFF9F\uFFA0-\uFFDC\uFFE8-\uFFEE0-9a-zA-Z]"
+ Base64 string = "^(?:[A-Za-z0-9+\\/]{4})*(?:[A-Za-z0-9+\\/]{2}==|[A-Za-z0-9+\\/]{3}=|[A-Za-z0-9+\\/]{4})$"
+ PrintableASCII string = "^[\x20-\x7E]+$"
+ DataURI string = "^data:.+\\/(.+);base64$"
+ MagnetURI string = "^magnet:\\?xt=urn:[a-zA-Z0-9]+:[a-zA-Z0-9]{32,40}&dn=.+&tr=.+$"
+ Latitude string = "^[-+]?([1-8]?\\d(\\.\\d+)?|90(\\.0+)?)$"
+ Longitude string = "^[-+]?(180(\\.0+)?|((1[0-7]\\d)|([1-9]?\\d))(\\.\\d+)?)$"
+ DNSName string = `^([a-zA-Z0-9_]{1}[a-zA-Z0-9_-]{0,62}){1}(\.[a-zA-Z0-9_]{1}[a-zA-Z0-9_-]{0,62})*[\._]?$`
+ IP string = `(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))`
+ URLSchema string = `((ftp|tcp|udp|wss?|https?):\/\/)`
+ URLUsername string = `(\S+(:\S*)?@)`
+ URLPath string = `((\/|\?|#)[^\s]*)`
+ URLPort string = `(:(\d{1,5}))`
+ URLIP string = `([1-9]\d?|1\d\d|2[01]\d|22[0-3]|24\d|25[0-5])(\.(\d{1,2}|1\d\d|2[0-4]\d|25[0-5])){2}(?:\.([0-9]\d?|1\d\d|2[0-4]\d|25[0-5]))`
+ URLSubdomain string = `((www\.)|([a-zA-Z0-9]+([-_\.]?[a-zA-Z0-9])*[a-zA-Z0-9]\.[a-zA-Z0-9]+))`
+ URL = `^` + URLSchema + `?` + URLUsername + `?` + `((` + URLIP + `|(\[` + IP + `\])|(([a-zA-Z0-9]([a-zA-Z0-9-_]+)?[a-zA-Z0-9]([-\.][a-zA-Z0-9]+)*)|(` + URLSubdomain + `?))?(([a-zA-Z\x{00a1}-\x{ffff}0-9]+-?-?)*[a-zA-Z\x{00a1}-\x{ffff}0-9]+)(?:\.([a-zA-Z\x{00a1}-\x{ffff}]{1,}))?))\.?` + URLPort + `?` + URLPath + `?$`
+ SSN string = `^\d{3}[- ]?\d{2}[- ]?\d{4}$`
+ WinPath string = `^[a-zA-Z]:\\(?:[^\\/:*?"<>|\r\n]+\\)*[^\\/:*?"<>|\r\n]*$`
+ UnixPath string = `^(/[^/\x00]*)+/?$`
+ WinARPath string = `^(?:(?:[a-zA-Z]:|\\\\[a-z0-9_.$●-]+\\[a-z0-9_.$●-]+)\\|\\?[^\\/:*?"<>|\r\n]+\\?)(?:[^\\/:*?"<>|\r\n]+\\)*[^\\/:*?"<>|\r\n]*$`
+ UnixARPath string = `^((\.{0,2}/)?([^/\x00]*))+/?$`
+ Semver string = "^v?(?:0|[1-9]\\d*)\\.(?:0|[1-9]\\d*)\\.(?:0|[1-9]\\d*)(-(0|[1-9]\\d*|\\d*[a-zA-Z-][0-9a-zA-Z-]*)(\\.(0|[1-9]\\d*|\\d*[a-zA-Z-][0-9a-zA-Z-]*))*)?(\\+[0-9a-zA-Z-]+(\\.[0-9a-zA-Z-]+)*)?$"
+ tagName string = "valid"
+ hasLowerCase string = ".*[[:lower:]]"
+ hasUpperCase string = ".*[[:upper:]]"
+ hasWhitespace string = ".*[[:space:]]"
+ hasWhitespaceOnly string = "^[[:space:]]+$"
+ IMEI string = "^[0-9a-f]{14}$|^\\d{15}$|^\\d{18}$"
+ IMSI string = "^\\d{14,15}$"
+ E164 string = `^\+?[1-9]\d{1,14}$`
+)
+
+// Used by IsFilePath func
+const (
+ // Unknown is unresolved OS type
+ Unknown = iota
+ // Win is Windows type
+ Win
+ // Unix is *nix OS types
+ Unix
+)
+
+var (
+ userRegexp = regexp.MustCompile("^[a-zA-Z0-9!#$%&'*+/=?^_`{|}~.-]+$")
+ hostRegexp = regexp.MustCompile("^[^\\s]+\\.[^\\s]+$")
+ userDotRegexp = regexp.MustCompile("(^[.]{1})|([.]{1}$)|([.]{2,})")
+ rxEmail = regexp.MustCompile(Email)
+ rxCreditCard = regexp.MustCompile(CreditCard)
+ rxISBN10 = regexp.MustCompile(ISBN10)
+ rxISBN13 = regexp.MustCompile(ISBN13)
+ rxUUID3 = regexp.MustCompile(UUID3)
+ rxUUID4 = regexp.MustCompile(UUID4)
+ rxUUID5 = regexp.MustCompile(UUID5)
+ rxUUID = regexp.MustCompile(UUID)
+ rxAlpha = regexp.MustCompile(Alpha)
+ rxAlphanumeric = regexp.MustCompile(Alphanumeric)
+ rxNumeric = regexp.MustCompile(Numeric)
+ rxInt = regexp.MustCompile(Int)
+ rxFloat = regexp.MustCompile(Float)
+ rxHexadecimal = regexp.MustCompile(Hexadecimal)
+ rxHexcolor = regexp.MustCompile(Hexcolor)
+ rxRGBcolor = regexp.MustCompile(RGBcolor)
+ rxASCII = regexp.MustCompile(ASCII)
+ rxPrintableASCII = regexp.MustCompile(PrintableASCII)
+ rxMultibyte = regexp.MustCompile(Multibyte)
+ rxFullWidth = regexp.MustCompile(FullWidth)
+ rxHalfWidth = regexp.MustCompile(HalfWidth)
+ rxBase64 = regexp.MustCompile(Base64)
+ rxDataURI = regexp.MustCompile(DataURI)
+ rxMagnetURI = regexp.MustCompile(MagnetURI)
+ rxLatitude = regexp.MustCompile(Latitude)
+ rxLongitude = regexp.MustCompile(Longitude)
+ rxDNSName = regexp.MustCompile(DNSName)
+ rxURL = regexp.MustCompile(URL)
+ rxSSN = regexp.MustCompile(SSN)
+ rxWinPath = regexp.MustCompile(WinPath)
+ rxUnixPath = regexp.MustCompile(UnixPath)
+ rxARWinPath = regexp.MustCompile(WinARPath)
+ rxARUnixPath = regexp.MustCompile(UnixARPath)
+ rxSemver = regexp.MustCompile(Semver)
+ rxHasLowerCase = regexp.MustCompile(hasLowerCase)
+ rxHasUpperCase = regexp.MustCompile(hasUpperCase)
+ rxHasWhitespace = regexp.MustCompile(hasWhitespace)
+ rxHasWhitespaceOnly = regexp.MustCompile(hasWhitespaceOnly)
+ rxIMEI = regexp.MustCompile(IMEI)
+ rxIMSI = regexp.MustCompile(IMSI)
+ rxE164 = regexp.MustCompile(E164)
+)
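+
+// The exported patterns can also be compiled by callers directly, e.g.
+// (illustrative):
+//
+//   rx := regexp.MustCompile(govalidator.Semver)
+//   rx.MatchString("v1.2.3") // true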
diff --git a/vendor/github.com/asaskevich/govalidator/types.go b/vendor/github.com/asaskevich/govalidator/types.go
new file mode 100644
index 000000000..c573abb51
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/types.go
@@ -0,0 +1,656 @@
+package govalidator
+
+import (
+ "reflect"
+ "regexp"
+ "sort"
+ "sync"
+)
+
+// Validator is a wrapper for a validator function that returns bool and accepts a string.
+type Validator func(str string) bool
+
+// CustomTypeValidator is a wrapper for validator functions that return bool and accept any type.
+// The second parameter should be the context (in the case of validating a struct: the whole object being validated).
+type CustomTypeValidator func(i interface{}, o interface{}) bool
+
+// ParamValidator is a wrapper for validator functions that accept additional parameters.
+type ParamValidator func(str string, params ...string) bool
+
+// InterfaceParamValidator is a wrapper for functions that accept additional parameters for an interface value
+type InterfaceParamValidator func(in interface{}, params ...string) bool
+type tagOptionsMap map[string]tagOption
+
+func (t tagOptionsMap) orderedKeys() []string {
+ var keys []string
+ for k := range t {
+ keys = append(keys, k)
+ }
+
+ sort.Slice(keys, func(a, b int) bool {
+ return t[keys[a]].order < t[keys[b]].order
+ })
+
+ return keys
+}
+
+type tagOption struct {
+ name string
+ customErrorMessage string
+ order int
+}
+
+// UnsupportedTypeError is a wrapper for reflect.Type
+type UnsupportedTypeError struct {
+ Type reflect.Type
+}
+
+// stringValues is a slice of reflect.Value holding *reflect.StringValue.
+// It implements the methods to sort by string.
+type stringValues []reflect.Value
+
+// InterfaceParamTagMap is a map of functions that accept additional parameters for an interface value
+var InterfaceParamTagMap = map[string]InterfaceParamValidator{
+ "type": IsType,
+}
+
+// InterfaceParamTagRegexMap maps interface param tags to their respective regexes.
+var InterfaceParamTagRegexMap = map[string]*regexp.Regexp{
+ "type": regexp.MustCompile(`^type\((.*)\)$`),
+}
+
+// ParamTagMap is a map of functions that accept additional parameters
+var ParamTagMap = map[string]ParamValidator{
+ "length": ByteLength,
+ "range": Range,
+ "runelength": RuneLength,
+ "stringlength": StringLength,
+ "matches": StringMatches,
+ "in": IsInRaw,
+ "rsapub": IsRsaPub,
+ "minstringlength": MinStringLength,
+ "maxstringlength": MaxStringLength,
+}
+
+// ParamTagRegexMap maps param tags to their respective regexes.
+var ParamTagRegexMap = map[string]*regexp.Regexp{
+ "range": regexp.MustCompile("^range\\((\\d+)\\|(\\d+)\\)$"),
+ "length": regexp.MustCompile("^length\\((\\d+)\\|(\\d+)\\)$"),
+ "runelength": regexp.MustCompile("^runelength\\((\\d+)\\|(\\d+)\\)$"),
+ "stringlength": regexp.MustCompile("^stringlength\\((\\d+)\\|(\\d+)\\)$"),
+ "in": regexp.MustCompile(`^in\((.*)\)`),
+ "matches": regexp.MustCompile(`^matches\((.+)\)$`),
+ "rsapub": regexp.MustCompile("^rsapub\\((\\d+)\\)$"),
+ "minstringlength": regexp.MustCompile("^minstringlength\\((\\d+)\\)$"),
+ "maxstringlength": regexp.MustCompile("^maxstringlength\\((\\d+)\\)$"),
+}
+
+type customTypeTagMap struct {
+ validators map[string]CustomTypeValidator
+
+ sync.RWMutex
+}
+
+func (tm *customTypeTagMap) Get(name string) (CustomTypeValidator, bool) {
+ tm.RLock()
+ defer tm.RUnlock()
+ v, ok := tm.validators[name]
+ return v, ok
+}
+
+func (tm *customTypeTagMap) Set(name string, ctv CustomTypeValidator) {
+ tm.Lock()
+ defer tm.Unlock()
+ tm.validators[name] = ctv
+}
+
+// CustomTypeTagMap is a map of functions that can be used as tags for the ValidateStruct function.
+// Use this to validate compound or custom types that need to be handled as a whole, e.g.
+// `type UUID [16]byte` (this would be handled as an array of bytes).
+var CustomTypeTagMap = &customTypeTagMap{validators: make(map[string]CustomTypeValidator)}
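+
+// For example (illustrative; UUID is the custom type from the comment above):
+//
+//   CustomTypeTagMap.Set("uuidNotEmpty", func(i interface{}, o interface{}) bool {
+//       v, ok := i.(UUID)
+//       return ok && v != UUID{}
+//   })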
+
+// TagMap is a map of functions that can be used as tags for the ValidateStruct function.
+var TagMap = map[string]Validator{
+ "email": IsEmail,
+ "url": IsURL,
+ "dialstring": IsDialString,
+ "requrl": IsRequestURL,
+ "requri": IsRequestURI,
+ "alpha": IsAlpha,
+ "utfletter": IsUTFLetter,
+ "alphanum": IsAlphanumeric,
+ "utfletternum": IsUTFLetterNumeric,
+ "numeric": IsNumeric,
+ "utfnumeric": IsUTFNumeric,
+ "utfdigit": IsUTFDigit,
+ "hexadecimal": IsHexadecimal,
+ "hexcolor": IsHexcolor,
+ "rgbcolor": IsRGBcolor,
+ "lowercase": IsLowerCase,
+ "uppercase": IsUpperCase,
+ "int": IsInt,
+ "float": IsFloat,
+ "null": IsNull,
+ "notnull": IsNotNull,
+ "uuid": IsUUID,
+ "uuidv3": IsUUIDv3,
+ "uuidv4": IsUUIDv4,
+ "uuidv5": IsUUIDv5,
+ "creditcard": IsCreditCard,
+ "isbn10": IsISBN10,
+ "isbn13": IsISBN13,
+ "json": IsJSON,
+ "multibyte": IsMultibyte,
+ "ascii": IsASCII,
+ "printableascii": IsPrintableASCII,
+ "fullwidth": IsFullWidth,
+ "halfwidth": IsHalfWidth,
+ "variablewidth": IsVariableWidth,
+ "base64": IsBase64,
+ "datauri": IsDataURI,
+ "ip": IsIP,
+ "port": IsPort,
+ "ipv4": IsIPv4,
+ "ipv6": IsIPv6,
+ "dns": IsDNSName,
+ "host": IsHost,
+ "mac": IsMAC,
+ "latitude": IsLatitude,
+ "longitude": IsLongitude,
+ "ssn": IsSSN,
+ "semver": IsSemver,
+ "rfc3339": IsRFC3339,
+ "rfc3339WithoutZone": IsRFC3339WithoutZone,
+ "ISO3166Alpha2": IsISO3166Alpha2,
+ "ISO3166Alpha3": IsISO3166Alpha3,
+ "ISO4217": IsISO4217,
+ "IMEI": IsIMEI,
+ "ulid": IsULID,
+}
+
+// ISO3166Entry stores country codes
+type ISO3166Entry struct {
+ EnglishShortName string
+ FrenchShortName string
+ Alpha2Code string
+ Alpha3Code string
+ Numeric string
+}
+
+// ISO3166List is based on https://www.iso.org/obp/ui/#search/code/ Code Type "Officially Assigned Codes"
+var ISO3166List = []ISO3166Entry{
+ {"Afghanistan", "Afghanistan (l')", "AF", "AFG", "004"},
+ {"Albania", "Albanie (l')", "AL", "ALB", "008"},
+ {"Antarctica", "Antarctique (l')", "AQ", "ATA", "010"},
+ {"Algeria", "Algérie (l')", "DZ", "DZA", "012"},
+ {"American Samoa", "Samoa américaines (les)", "AS", "ASM", "016"},
+ {"Andorra", "Andorre (l')", "AD", "AND", "020"},
+ {"Angola", "Angola (l')", "AO", "AGO", "024"},
+ {"Antigua and Barbuda", "Antigua-et-Barbuda", "AG", "ATG", "028"},
+ {"Azerbaijan", "Azerbaïdjan (l')", "AZ", "AZE", "031"},
+ {"Argentina", "Argentine (l')", "AR", "ARG", "032"},
+ {"Australia", "Australie (l')", "AU", "AUS", "036"},
+ {"Austria", "Autriche (l')", "AT", "AUT", "040"},
+ {"Bahamas (the)", "Bahamas (les)", "BS", "BHS", "044"},
+ {"Bahrain", "Bahreïn", "BH", "BHR", "048"},
+ {"Bangladesh", "Bangladesh (le)", "BD", "BGD", "050"},
+ {"Armenia", "Arménie (l')", "AM", "ARM", "051"},
+ {"Barbados", "Barbade (la)", "BB", "BRB", "052"},
+ {"Belgium", "Belgique (la)", "BE", "BEL", "056"},
+ {"Bermuda", "Bermudes (les)", "BM", "BMU", "060"},
+ {"Bhutan", "Bhoutan (le)", "BT", "BTN", "064"},
+ {"Bolivia (Plurinational State of)", "Bolivie (État plurinational de)", "BO", "BOL", "068"},
+ {"Bosnia and Herzegovina", "Bosnie-Herzégovine (la)", "BA", "BIH", "070"},
+ {"Botswana", "Botswana (le)", "BW", "BWA", "072"},
+ {"Bouvet Island", "Bouvet (l'Île)", "BV", "BVT", "074"},
+ {"Brazil", "Brésil (le)", "BR", "BRA", "076"},
+ {"Belize", "Belize (le)", "BZ", "BLZ", "084"},
+ {"British Indian Ocean Territory (the)", "Indien (le Territoire britannique de l'océan)", "IO", "IOT", "086"},
+ {"Solomon Islands", "Salomon (Îles)", "SB", "SLB", "090"},
+ {"Virgin Islands (British)", "Vierges britanniques (les Îles)", "VG", "VGB", "092"},
+ {"Brunei Darussalam", "Brunéi Darussalam (le)", "BN", "BRN", "096"},
+ {"Bulgaria", "Bulgarie (la)", "BG", "BGR", "100"},
+ {"Myanmar", "Myanmar (le)", "MM", "MMR", "104"},
+ {"Burundi", "Burundi (le)", "BI", "BDI", "108"},
+ {"Belarus", "Bélarus (le)", "BY", "BLR", "112"},
+ {"Cambodia", "Cambodge (le)", "KH", "KHM", "116"},
+ {"Cameroon", "Cameroun (le)", "CM", "CMR", "120"},
+ {"Canada", "Canada (le)", "CA", "CAN", "124"},
+ {"Cabo Verde", "Cabo Verde", "CV", "CPV", "132"},
+ {"Cayman Islands (the)", "Caïmans (les Îles)", "KY", "CYM", "136"},
+ {"Central African Republic (the)", "République centrafricaine (la)", "CF", "CAF", "140"},
+ {"Sri Lanka", "Sri Lanka", "LK", "LKA", "144"},
+ {"Chad", "Tchad (le)", "TD", "TCD", "148"},
+ {"Chile", "Chili (le)", "CL", "CHL", "152"},
+ {"China", "Chine (la)", "CN", "CHN", "156"},
+ {"Taiwan (Province of China)", "Taïwan (Province de Chine)", "TW", "TWN", "158"},
+ {"Christmas Island", "Christmas (l'Île)", "CX", "CXR", "162"},
+ {"Cocos (Keeling) Islands (the)", "Cocos (les Îles)/ Keeling (les Îles)", "CC", "CCK", "166"},
+ {"Colombia", "Colombie (la)", "CO", "COL", "170"},
+ {"Comoros (the)", "Comores (les)", "KM", "COM", "174"},
+ {"Mayotte", "Mayotte", "YT", "MYT", "175"},
+ {"Congo (the)", "Congo (le)", "CG", "COG", "178"},
+ {"Congo (the Democratic Republic of the)", "Congo (la République démocratique du)", "CD", "COD", "180"},
+ {"Cook Islands (the)", "Cook (les Îles)", "CK", "COK", "184"},
+ {"Costa Rica", "Costa Rica (le)", "CR", "CRI", "188"},
+ {"Croatia", "Croatie (la)", "HR", "HRV", "191"},
+ {"Cuba", "Cuba", "CU", "CUB", "192"},
+ {"Cyprus", "Chypre", "CY", "CYP", "196"},
+ {"Czech Republic (the)", "tchèque (la République)", "CZ", "CZE", "203"},
+ {"Benin", "Bénin (le)", "BJ", "BEN", "204"},
+ {"Denmark", "Danemark (le)", "DK", "DNK", "208"},
+ {"Dominica", "Dominique (la)", "DM", "DMA", "212"},
+ {"Dominican Republic (the)", "dominicaine (la République)", "DO", "DOM", "214"},
+ {"Ecuador", "Équateur (l')", "EC", "ECU", "218"},
+ {"El Salvador", "El Salvador", "SV", "SLV", "222"},
+ {"Equatorial Guinea", "Guinée équatoriale (la)", "GQ", "GNQ", "226"},
+ {"Ethiopia", "Éthiopie (l')", "ET", "ETH", "231"},
+ {"Eritrea", "Érythrée (l')", "ER", "ERI", "232"},
+ {"Estonia", "Estonie (l')", "EE", "EST", "233"},
+ {"Faroe Islands (the)", "Féroé (les Îles)", "FO", "FRO", "234"},
+ {"Falkland Islands (the) [Malvinas]", "Falkland (les Îles)/Malouines (les Îles)", "FK", "FLK", "238"},
+ {"South Georgia and the South Sandwich Islands", "Géorgie du Sud-et-les Îles Sandwich du Sud (la)", "GS", "SGS", "239"},
+ {"Fiji", "Fidji (les)", "FJ", "FJI", "242"},
+ {"Finland", "Finlande (la)", "FI", "FIN", "246"},
+ {"Åland Islands", "Åland(les Îles)", "AX", "ALA", "248"},
+ {"France", "France (la)", "FR", "FRA", "250"},
+ {"French Guiana", "Guyane française (la )", "GF", "GUF", "254"},
+ {"French Polynesia", "Polynésie française (la)", "PF", "PYF", "258"},
+ {"French Southern Territories (the)", "Terres australes françaises (les)", "TF", "ATF", "260"},
+ {"Djibouti", "Djibouti", "DJ", "DJI", "262"},
+ {"Gabon", "Gabon (le)", "GA", "GAB", "266"},
+ {"Georgia", "Géorgie (la)", "GE", "GEO", "268"},
+ {"Gambia (the)", "Gambie (la)", "GM", "GMB", "270"},
+ {"Palestine, State of", "Palestine, État de", "PS", "PSE", "275"},
+ {"Germany", "Allemagne (l')", "DE", "DEU", "276"},
+ {"Ghana", "Ghana (le)", "GH", "GHA", "288"},
+ {"Gibraltar", "Gibraltar", "GI", "GIB", "292"},
+ {"Kiribati", "Kiribati", "KI", "KIR", "296"},
+ {"Greece", "Grèce (la)", "GR", "GRC", "300"},
+ {"Greenland", "Groenland (le)", "GL", "GRL", "304"},
+ {"Grenada", "Grenade (la)", "GD", "GRD", "308"},
+ {"Guadeloupe", "Guadeloupe (la)", "GP", "GLP", "312"},
+ {"Guam", "Guam", "GU", "GUM", "316"},
+ {"Guatemala", "Guatemala (le)", "GT", "GTM", "320"},
+ {"Guinea", "Guinée (la)", "GN", "GIN", "324"},
+ {"Guyana", "Guyana (le)", "GY", "GUY", "328"},
+ {"Haiti", "Haïti", "HT", "HTI", "332"},
+ {"Heard Island and McDonald Islands", "Heard-et-Îles MacDonald (l'Île)", "HM", "HMD", "334"},
+ {"Holy See (the)", "Saint-Siège (le)", "VA", "VAT", "336"},
+ {"Honduras", "Honduras (le)", "HN", "HND", "340"},
+ {"Hong Kong", "Hong Kong", "HK", "HKG", "344"},
+ {"Hungary", "Hongrie (la)", "HU", "HUN", "348"},
+ {"Iceland", "Islande (l')", "IS", "ISL", "352"},
+ {"India", "Inde (l')", "IN", "IND", "356"},
+ {"Indonesia", "Indonésie (l')", "ID", "IDN", "360"},
+ {"Iran (Islamic Republic of)", "Iran (République Islamique d')", "IR", "IRN", "364"},
+ {"Iraq", "Iraq (l')", "IQ", "IRQ", "368"},
+ {"Ireland", "Irlande (l')", "IE", "IRL", "372"},
+ {"Israel", "Israël", "IL", "ISR", "376"},
+ {"Italy", "Italie (l')", "IT", "ITA", "380"},
+ {"Côte d'Ivoire", "Côte d'Ivoire (la)", "CI", "CIV", "384"},
+ {"Jamaica", "Jamaïque (la)", "JM", "JAM", "388"},
+ {"Japan", "Japon (le)", "JP", "JPN", "392"},
+ {"Kazakhstan", "Kazakhstan (le)", "KZ", "KAZ", "398"},
+ {"Jordan", "Jordanie (la)", "JO", "JOR", "400"},
+ {"Kenya", "Kenya (le)", "KE", "KEN", "404"},
+ {"Korea (the Democratic People's Republic of)", "Corée (la République populaire démocratique de)", "KP", "PRK", "408"},
+ {"Korea (the Republic of)", "Corée (la République de)", "KR", "KOR", "410"},
+ {"Kuwait", "Koweït (le)", "KW", "KWT", "414"},
+ {"Kyrgyzstan", "Kirghizistan (le)", "KG", "KGZ", "417"},
+ {"Lao People's Democratic Republic (the)", "Lao, République démocratique populaire", "LA", "LAO", "418"},
+ {"Lebanon", "Liban (le)", "LB", "LBN", "422"},
+ {"Lesotho", "Lesotho (le)", "LS", "LSO", "426"},
+ {"Latvia", "Lettonie (la)", "LV", "LVA", "428"},
+ {"Liberia", "Libéria (le)", "LR", "LBR", "430"},
+ {"Libya", "Libye (la)", "LY", "LBY", "434"},
+ {"Liechtenstein", "Liechtenstein (le)", "LI", "LIE", "438"},
+ {"Lithuania", "Lituanie (la)", "LT", "LTU", "440"},
+ {"Luxembourg", "Luxembourg (le)", "LU", "LUX", "442"},
+ {"Macao", "Macao", "MO", "MAC", "446"},
+ {"Madagascar", "Madagascar", "MG", "MDG", "450"},
+ {"Malawi", "Malawi (le)", "MW", "MWI", "454"},
+ {"Malaysia", "Malaisie (la)", "MY", "MYS", "458"},
+ {"Maldives", "Maldives (les)", "MV", "MDV", "462"},
+ {"Mali", "Mali (le)", "ML", "MLI", "466"},
+ {"Malta", "Malte", "MT", "MLT", "470"},
+ {"Martinique", "Martinique (la)", "MQ", "MTQ", "474"},
+ {"Mauritania", "Mauritanie (la)", "MR", "MRT", "478"},
+ {"Mauritius", "Maurice", "MU", "MUS", "480"},
+ {"Mexico", "Mexique (le)", "MX", "MEX", "484"},
+ {"Monaco", "Monaco", "MC", "MCO", "492"},
+ {"Mongolia", "Mongolie (la)", "MN", "MNG", "496"},
+ {"Moldova (the Republic of)", "Moldova , République de", "MD", "MDA", "498"},
+ {"Montenegro", "Monténégro (le)", "ME", "MNE", "499"},
+ {"Montserrat", "Montserrat", "MS", "MSR", "500"},
+ {"Morocco", "Maroc (le)", "MA", "MAR", "504"},
+ {"Mozambique", "Mozambique (le)", "MZ", "MOZ", "508"},
+ {"Oman", "Oman", "OM", "OMN", "512"},
+ {"Namibia", "Namibie (la)", "NA", "NAM", "516"},
+ {"Nauru", "Nauru", "NR", "NRU", "520"},
+ {"Nepal", "Népal (le)", "NP", "NPL", "524"},
+ {"Netherlands (the)", "Pays-Bas (les)", "NL", "NLD", "528"},
+ {"Curaçao", "Curaçao", "CW", "CUW", "531"},
+ {"Aruba", "Aruba", "AW", "ABW", "533"},
+ {"Sint Maarten (Dutch part)", "Saint-Martin (partie néerlandaise)", "SX", "SXM", "534"},
+ {"Bonaire, Sint Eustatius and Saba", "Bonaire, Saint-Eustache et Saba", "BQ", "BES", "535"},
+ {"New Caledonia", "Nouvelle-Calédonie (la)", "NC", "NCL", "540"},
+ {"Vanuatu", "Vanuatu (le)", "VU", "VUT", "548"},
+ {"New Zealand", "Nouvelle-Zélande (la)", "NZ", "NZL", "554"},
+ {"Nicaragua", "Nicaragua (le)", "NI", "NIC", "558"},
+ {"Niger (the)", "Niger (le)", "NE", "NER", "562"},
+ {"Nigeria", "Nigéria (le)", "NG", "NGA", "566"},
+ {"Niue", "Niue", "NU", "NIU", "570"},
+ {"Norfolk Island", "Norfolk (l'Île)", "NF", "NFK", "574"},
+ {"Norway", "Norvège (la)", "NO", "NOR", "578"},
+ {"Northern Mariana Islands (the)", "Mariannes du Nord (les Îles)", "MP", "MNP", "580"},
+ {"United States Minor Outlying Islands (the)", "Îles mineures éloignées des États-Unis (les)", "UM", "UMI", "581"},
+ {"Micronesia (Federated States of)", "Micronésie (États fédérés de)", "FM", "FSM", "583"},
+ {"Marshall Islands (the)", "Marshall (Îles)", "MH", "MHL", "584"},
+ {"Palau", "Palaos (les)", "PW", "PLW", "585"},
+ {"Pakistan", "Pakistan (le)", "PK", "PAK", "586"},
+ {"Panama", "Panama (le)", "PA", "PAN", "591"},
+ {"Papua New Guinea", "Papouasie-Nouvelle-Guinée (la)", "PG", "PNG", "598"},
+ {"Paraguay", "Paraguay (le)", "PY", "PRY", "600"},
+ {"Peru", "Pérou (le)", "PE", "PER", "604"},
+ {"Philippines (the)", "Philippines (les)", "PH", "PHL", "608"},
+ {"Pitcairn", "Pitcairn", "PN", "PCN", "612"},
+ {"Poland", "Pologne (la)", "PL", "POL", "616"},
+ {"Portugal", "Portugal (le)", "PT", "PRT", "620"},
+ {"Guinea-Bissau", "Guinée-Bissau (la)", "GW", "GNB", "624"},
+ {"Timor-Leste", "Timor-Leste (le)", "TL", "TLS", "626"},
+ {"Puerto Rico", "Porto Rico", "PR", "PRI", "630"},
+ {"Qatar", "Qatar (le)", "QA", "QAT", "634"},
+ {"Réunion", "Réunion (La)", "RE", "REU", "638"},
+ {"Romania", "Roumanie (la)", "RO", "ROU", "642"},
+ {"Russian Federation (the)", "Russie (la Fédération de)", "RU", "RUS", "643"},
+ {"Rwanda", "Rwanda (le)", "RW", "RWA", "646"},
+ {"Saint Barthélemy", "Saint-Barthélemy", "BL", "BLM", "652"},
+ {"Saint Helena, Ascension and Tristan da Cunha", "Sainte-Hélène, Ascension et Tristan da Cunha", "SH", "SHN", "654"},
+ {"Saint Kitts and Nevis", "Saint-Kitts-et-Nevis", "KN", "KNA", "659"},
+ {"Anguilla", "Anguilla", "AI", "AIA", "660"},
+ {"Saint Lucia", "Sainte-Lucie", "LC", "LCA", "662"},
+ {"Saint Martin (French part)", "Saint-Martin (partie française)", "MF", "MAF", "663"},
+ {"Saint Pierre and Miquelon", "Saint-Pierre-et-Miquelon", "PM", "SPM", "666"},
+ {"Saint Vincent and the Grenadines", "Saint-Vincent-et-les Grenadines", "VC", "VCT", "670"},
+ {"San Marino", "Saint-Marin", "SM", "SMR", "674"},
+ {"Sao Tome and Principe", "Sao Tomé-et-Principe", "ST", "STP", "678"},
+ {"Saudi Arabia", "Arabie saoudite (l')", "SA", "SAU", "682"},
+ {"Senegal", "Sénégal (le)", "SN", "SEN", "686"},
+ {"Serbia", "Serbie (la)", "RS", "SRB", "688"},
+ {"Seychelles", "Seychelles (les)", "SC", "SYC", "690"},
+ {"Sierra Leone", "Sierra Leone (la)", "SL", "SLE", "694"},
+ {"Singapore", "Singapour", "SG", "SGP", "702"},
+ {"Slovakia", "Slovaquie (la)", "SK", "SVK", "703"},
+ {"Viet Nam", "Viet Nam (le)", "VN", "VNM", "704"},
+ {"Slovenia", "Slovénie (la)", "SI", "SVN", "705"},
+ {"Somalia", "Somalie (la)", "SO", "SOM", "706"},
+ {"South Africa", "Afrique du Sud (l')", "ZA", "ZAF", "710"},
+ {"Zimbabwe", "Zimbabwe (le)", "ZW", "ZWE", "716"},
+ {"Spain", "Espagne (l')", "ES", "ESP", "724"},
+ {"South Sudan", "Soudan du Sud (le)", "SS", "SSD", "728"},
+ {"Sudan (the)", "Soudan (le)", "SD", "SDN", "729"},
+ {"Western Sahara*", "Sahara occidental (le)*", "EH", "ESH", "732"},
+ {"Suriname", "Suriname (le)", "SR", "SUR", "740"},
+ {"Svalbard and Jan Mayen", "Svalbard et l'Île Jan Mayen (le)", "SJ", "SJM", "744"},
+ {"Swaziland", "Swaziland (le)", "SZ", "SWZ", "748"},
+ {"Sweden", "Suède (la)", "SE", "SWE", "752"},
+ {"Switzerland", "Suisse (la)", "CH", "CHE", "756"},
+ {"Syrian Arab Republic", "République arabe syrienne (la)", "SY", "SYR", "760"},
+ {"Tajikistan", "Tadjikistan (le)", "TJ", "TJK", "762"},
+ {"Thailand", "Thaïlande (la)", "TH", "THA", "764"},
+ {"Togo", "Togo (le)", "TG", "TGO", "768"},
+ {"Tokelau", "Tokelau (les)", "TK", "TKL", "772"},
+ {"Tonga", "Tonga (les)", "TO", "TON", "776"},
+ {"Trinidad and Tobago", "Trinité-et-Tobago (la)", "TT", "TTO", "780"},
+ {"United Arab Emirates (the)", "Émirats arabes unis (les)", "AE", "ARE", "784"},
+ {"Tunisia", "Tunisie (la)", "TN", "TUN", "788"},
+ {"Turkey", "Turquie (la)", "TR", "TUR", "792"},
+ {"Turkmenistan", "Turkménistan (le)", "TM", "TKM", "795"},
+ {"Turks and Caicos Islands (the)", "Turks-et-Caïcos (les Îles)", "TC", "TCA", "796"},
+ {"Tuvalu", "Tuvalu (les)", "TV", "TUV", "798"},
+ {"Uganda", "Ouganda (l')", "UG", "UGA", "800"},
+ {"Ukraine", "Ukraine (l')", "UA", "UKR", "804"},
+ {"Macedonia (the former Yugoslav Republic of)", "Macédoine (l'ex‑République yougoslave de)", "MK", "MKD", "807"},
+ {"Egypt", "Égypte (l')", "EG", "EGY", "818"},
+ {"United Kingdom of Great Britain and Northern Ireland (the)", "Royaume-Uni de Grande-Bretagne et d'Irlande du Nord (le)", "GB", "GBR", "826"},
+ {"Guernsey", "Guernesey", "GG", "GGY", "831"},
+ {"Jersey", "Jersey", "JE", "JEY", "832"},
+ {"Isle of Man", "Île de Man", "IM", "IMN", "833"},
+ {"Tanzania, United Republic of", "Tanzanie, République-Unie de", "TZ", "TZA", "834"},
+ {"United States of America (the)", "États-Unis d'Amérique (les)", "US", "USA", "840"},
+ {"Virgin Islands (U.S.)", "Vierges des États-Unis (les Îles)", "VI", "VIR", "850"},
+ {"Burkina Faso", "Burkina Faso (le)", "BF", "BFA", "854"},
+ {"Uruguay", "Uruguay (l')", "UY", "URY", "858"},
+ {"Uzbekistan", "Ouzbékistan (l')", "UZ", "UZB", "860"},
+ {"Venezuela (Bolivarian Republic of)", "Venezuela (République bolivarienne du)", "VE", "VEN", "862"},
+ {"Wallis and Futuna", "Wallis-et-Futuna", "WF", "WLF", "876"},
+ {"Samoa", "Samoa (le)", "WS", "WSM", "882"},
+ {"Yemen", "Yémen (le)", "YE", "YEM", "887"},
+ {"Zambia", "Zambie (la)", "ZM", "ZMB", "894"},
+}
+
+// ISO4217List is the list of ISO currency codes
+var ISO4217List = []string{
+ "AED", "AFN", "ALL", "AMD", "ANG", "AOA", "ARS", "AUD", "AWG", "AZN",
+ "BAM", "BBD", "BDT", "BGN", "BHD", "BIF", "BMD", "BND", "BOB", "BOV", "BRL", "BSD", "BTN", "BWP", "BYN", "BZD",
+ "CAD", "CDF", "CHE", "CHF", "CHW", "CLF", "CLP", "CNY", "COP", "COU", "CRC", "CUC", "CUP", "CVE", "CZK",
+ "DJF", "DKK", "DOP", "DZD",
+ "EGP", "ERN", "ETB", "EUR",
+ "FJD", "FKP",
+ "GBP", "GEL", "GHS", "GIP", "GMD", "GNF", "GTQ", "GYD",
+ "HKD", "HNL", "HRK", "HTG", "HUF",
+ "IDR", "ILS", "INR", "IQD", "IRR", "ISK",
+ "JMD", "JOD", "JPY",
+ "KES", "KGS", "KHR", "KMF", "KPW", "KRW", "KWD", "KYD", "KZT",
+ "LAK", "LBP", "LKR", "LRD", "LSL", "LYD",
+ "MAD", "MDL", "MGA", "MKD", "MMK", "MNT", "MOP", "MRO", "MUR", "MVR", "MWK", "MXN", "MXV", "MYR", "MZN",
+ "NAD", "NGN", "NIO", "NOK", "NPR", "NZD",
+ "OMR",
+ "PAB", "PEN", "PGK", "PHP", "PKR", "PLN", "PYG",
+ "QAR",
+ "RON", "RSD", "RUB", "RWF",
+ "SAR", "SBD", "SCR", "SDG", "SEK", "SGD", "SHP", "SLL", "SOS", "SRD", "SSP", "STD", "STN", "SVC", "SYP", "SZL",
+ "THB", "TJS", "TMT", "TND", "TOP", "TRY", "TTD", "TWD", "TZS",
+ "UAH", "UGX", "USD", "USN", "UYI", "UYU", "UYW", "UZS",
+ "VEF", "VES", "VND", "VUV",
+ "WST",
+ "XAF", "XAG", "XAU", "XBA", "XBB", "XBC", "XBD", "XCD", "XDR", "XOF", "XPD", "XPF", "XPT", "XSU", "XTS", "XUA", "XXX",
+ "YER",
+ "ZAR", "ZMW", "ZWL",
+}
+
+// ISO693Entry stores ISO language codes
+type ISO693Entry struct {
+ Alpha3bCode string
+ Alpha2Code string
+ English string
+}
+
+// ISO693List based on http://data.okfn.org/data/core/language-codes/r/language-codes-3b2.json
+var ISO693List = []ISO693Entry{
+ {Alpha3bCode: "aar", Alpha2Code: "aa", English: "Afar"},
+ {Alpha3bCode: "abk", Alpha2Code: "ab", English: "Abkhazian"},
+ {Alpha3bCode: "afr", Alpha2Code: "af", English: "Afrikaans"},
+ {Alpha3bCode: "aka", Alpha2Code: "ak", English: "Akan"},
+ {Alpha3bCode: "alb", Alpha2Code: "sq", English: "Albanian"},
+ {Alpha3bCode: "amh", Alpha2Code: "am", English: "Amharic"},
+ {Alpha3bCode: "ara", Alpha2Code: "ar", English: "Arabic"},
+ {Alpha3bCode: "arg", Alpha2Code: "an", English: "Aragonese"},
+ {Alpha3bCode: "arm", Alpha2Code: "hy", English: "Armenian"},
+ {Alpha3bCode: "asm", Alpha2Code: "as", English: "Assamese"},
+ {Alpha3bCode: "ava", Alpha2Code: "av", English: "Avaric"},
+ {Alpha3bCode: "ave", Alpha2Code: "ae", English: "Avestan"},
+ {Alpha3bCode: "aym", Alpha2Code: "ay", English: "Aymara"},
+ {Alpha3bCode: "aze", Alpha2Code: "az", English: "Azerbaijani"},
+ {Alpha3bCode: "bak", Alpha2Code: "ba", English: "Bashkir"},
+ {Alpha3bCode: "bam", Alpha2Code: "bm", English: "Bambara"},
+ {Alpha3bCode: "baq", Alpha2Code: "eu", English: "Basque"},
+ {Alpha3bCode: "bel", Alpha2Code: "be", English: "Belarusian"},
+ {Alpha3bCode: "ben", Alpha2Code: "bn", English: "Bengali"},
+ {Alpha3bCode: "bih", Alpha2Code: "bh", English: "Bihari languages"},
+ {Alpha3bCode: "bis", Alpha2Code: "bi", English: "Bislama"},
+ {Alpha3bCode: "bos", Alpha2Code: "bs", English: "Bosnian"},
+ {Alpha3bCode: "bre", Alpha2Code: "br", English: "Breton"},
+ {Alpha3bCode: "bul", Alpha2Code: "bg", English: "Bulgarian"},
+ {Alpha3bCode: "bur", Alpha2Code: "my", English: "Burmese"},
+ {Alpha3bCode: "cat", Alpha2Code: "ca", English: "Catalan; Valencian"},
+ {Alpha3bCode: "cha", Alpha2Code: "ch", English: "Chamorro"},
+ {Alpha3bCode: "che", Alpha2Code: "ce", English: "Chechen"},
+ {Alpha3bCode: "chi", Alpha2Code: "zh", English: "Chinese"},
+ {Alpha3bCode: "chu", Alpha2Code: "cu", English: "Church Slavic; Old Slavonic; Church Slavonic; Old Bulgarian; Old Church Slavonic"},
+ {Alpha3bCode: "chv", Alpha2Code: "cv", English: "Chuvash"},
+ {Alpha3bCode: "cor", Alpha2Code: "kw", English: "Cornish"},
+ {Alpha3bCode: "cos", Alpha2Code: "co", English: "Corsican"},
+ {Alpha3bCode: "cre", Alpha2Code: "cr", English: "Cree"},
+ {Alpha3bCode: "cze", Alpha2Code: "cs", English: "Czech"},
+ {Alpha3bCode: "dan", Alpha2Code: "da", English: "Danish"},
+ {Alpha3bCode: "div", Alpha2Code: "dv", English: "Divehi; Dhivehi; Maldivian"},
+ {Alpha3bCode: "dut", Alpha2Code: "nl", English: "Dutch; Flemish"},
+ {Alpha3bCode: "dzo", Alpha2Code: "dz", English: "Dzongkha"},
+ {Alpha3bCode: "eng", Alpha2Code: "en", English: "English"},
+ {Alpha3bCode: "epo", Alpha2Code: "eo", English: "Esperanto"},
+ {Alpha3bCode: "est", Alpha2Code: "et", English: "Estonian"},
+ {Alpha3bCode: "ewe", Alpha2Code: "ee", English: "Ewe"},
+ {Alpha3bCode: "fao", Alpha2Code: "fo", English: "Faroese"},
+ {Alpha3bCode: "fij", Alpha2Code: "fj", English: "Fijian"},
+ {Alpha3bCode: "fin", Alpha2Code: "fi", English: "Finnish"},
+ {Alpha3bCode: "fre", Alpha2Code: "fr", English: "French"},
+ {Alpha3bCode: "fry", Alpha2Code: "fy", English: "Western Frisian"},
+ {Alpha3bCode: "ful", Alpha2Code: "ff", English: "Fulah"},
+ {Alpha3bCode: "geo", Alpha2Code: "ka", English: "Georgian"},
+ {Alpha3bCode: "ger", Alpha2Code: "de", English: "German"},
+ {Alpha3bCode: "gla", Alpha2Code: "gd", English: "Gaelic; Scottish Gaelic"},
+ {Alpha3bCode: "gle", Alpha2Code: "ga", English: "Irish"},
+ {Alpha3bCode: "glg", Alpha2Code: "gl", English: "Galician"},
+ {Alpha3bCode: "glv", Alpha2Code: "gv", English: "Manx"},
+ {Alpha3bCode: "gre", Alpha2Code: "el", English: "Greek, Modern (1453-)"},
+ {Alpha3bCode: "grn", Alpha2Code: "gn", English: "Guarani"},
+ {Alpha3bCode: "guj", Alpha2Code: "gu", English: "Gujarati"},
+ {Alpha3bCode: "hat", Alpha2Code: "ht", English: "Haitian; Haitian Creole"},
+ {Alpha3bCode: "hau", Alpha2Code: "ha", English: "Hausa"},
+ {Alpha3bCode: "heb", Alpha2Code: "he", English: "Hebrew"},
+ {Alpha3bCode: "her", Alpha2Code: "hz", English: "Herero"},
+ {Alpha3bCode: "hin", Alpha2Code: "hi", English: "Hindi"},
+ {Alpha3bCode: "hmo", Alpha2Code: "ho", English: "Hiri Motu"},
+ {Alpha3bCode: "hrv", Alpha2Code: "hr", English: "Croatian"},
+ {Alpha3bCode: "hun", Alpha2Code: "hu", English: "Hungarian"},
+ {Alpha3bCode: "ibo", Alpha2Code: "ig", English: "Igbo"},
+ {Alpha3bCode: "ice", Alpha2Code: "is", English: "Icelandic"},
+ {Alpha3bCode: "ido", Alpha2Code: "io", English: "Ido"},
+ {Alpha3bCode: "iii", Alpha2Code: "ii", English: "Sichuan Yi; Nuosu"},
+ {Alpha3bCode: "iku", Alpha2Code: "iu", English: "Inuktitut"},
+ {Alpha3bCode: "ile", Alpha2Code: "ie", English: "Interlingue; Occidental"},
+ {Alpha3bCode: "ina", Alpha2Code: "ia", English: "Interlingua (International Auxiliary Language Association)"},
+ {Alpha3bCode: "ind", Alpha2Code: "id", English: "Indonesian"},
+ {Alpha3bCode: "ipk", Alpha2Code: "ik", English: "Inupiaq"},
+ {Alpha3bCode: "ita", Alpha2Code: "it", English: "Italian"},
+ {Alpha3bCode: "jav", Alpha2Code: "jv", English: "Javanese"},
+ {Alpha3bCode: "jpn", Alpha2Code: "ja", English: "Japanese"},
+ {Alpha3bCode: "kal", Alpha2Code: "kl", English: "Kalaallisut; Greenlandic"},
+ {Alpha3bCode: "kan", Alpha2Code: "kn", English: "Kannada"},
+ {Alpha3bCode: "kas", Alpha2Code: "ks", English: "Kashmiri"},
+ {Alpha3bCode: "kau", Alpha2Code: "kr", English: "Kanuri"},
+ {Alpha3bCode: "kaz", Alpha2Code: "kk", English: "Kazakh"},
+ {Alpha3bCode: "khm", Alpha2Code: "km", English: "Central Khmer"},
+ {Alpha3bCode: "kik", Alpha2Code: "ki", English: "Kikuyu; Gikuyu"},
+ {Alpha3bCode: "kin", Alpha2Code: "rw", English: "Kinyarwanda"},
+ {Alpha3bCode: "kir", Alpha2Code: "ky", English: "Kirghiz; Kyrgyz"},
+ {Alpha3bCode: "kom", Alpha2Code: "kv", English: "Komi"},
+ {Alpha3bCode: "kon", Alpha2Code: "kg", English: "Kongo"},
+ {Alpha3bCode: "kor", Alpha2Code: "ko", English: "Korean"},
+ {Alpha3bCode: "kua", Alpha2Code: "kj", English: "Kuanyama; Kwanyama"},
+ {Alpha3bCode: "kur", Alpha2Code: "ku", English: "Kurdish"},
+ {Alpha3bCode: "lao", Alpha2Code: "lo", English: "Lao"},
+ {Alpha3bCode: "lat", Alpha2Code: "la", English: "Latin"},
+ {Alpha3bCode: "lav", Alpha2Code: "lv", English: "Latvian"},
+ {Alpha3bCode: "lim", Alpha2Code: "li", English: "Limburgan; Limburger; Limburgish"},
+ {Alpha3bCode: "lin", Alpha2Code: "ln", English: "Lingala"},
+ {Alpha3bCode: "lit", Alpha2Code: "lt", English: "Lithuanian"},
+ {Alpha3bCode: "ltz", Alpha2Code: "lb", English: "Luxembourgish; Letzeburgesch"},
+ {Alpha3bCode: "lub", Alpha2Code: "lu", English: "Luba-Katanga"},
+ {Alpha3bCode: "lug", Alpha2Code: "lg", English: "Ganda"},
+ {Alpha3bCode: "mac", Alpha2Code: "mk", English: "Macedonian"},
+ {Alpha3bCode: "mah", Alpha2Code: "mh", English: "Marshallese"},
+ {Alpha3bCode: "mal", Alpha2Code: "ml", English: "Malayalam"},
+ {Alpha3bCode: "mao", Alpha2Code: "mi", English: "Maori"},
+ {Alpha3bCode: "mar", Alpha2Code: "mr", English: "Marathi"},
+ {Alpha3bCode: "may", Alpha2Code: "ms", English: "Malay"},
+ {Alpha3bCode: "mlg", Alpha2Code: "mg", English: "Malagasy"},
+ {Alpha3bCode: "mlt", Alpha2Code: "mt", English: "Maltese"},
+ {Alpha3bCode: "mon", Alpha2Code: "mn", English: "Mongolian"},
+ {Alpha3bCode: "nau", Alpha2Code: "na", English: "Nauru"},
+ {Alpha3bCode: "nav", Alpha2Code: "nv", English: "Navajo; Navaho"},
+ {Alpha3bCode: "nbl", Alpha2Code: "nr", English: "Ndebele, South; South Ndebele"},
+ {Alpha3bCode: "nde", Alpha2Code: "nd", English: "Ndebele, North; North Ndebele"},
+ {Alpha3bCode: "ndo", Alpha2Code: "ng", English: "Ndonga"},
+ {Alpha3bCode: "nep", Alpha2Code: "ne", English: "Nepali"},
+ {Alpha3bCode: "nno", Alpha2Code: "nn", English: "Norwegian Nynorsk; Nynorsk, Norwegian"},
+ {Alpha3bCode: "nob", Alpha2Code: "nb", English: "Bokmål, Norwegian; Norwegian Bokmål"},
+ {Alpha3bCode: "nor", Alpha2Code: "no", English: "Norwegian"},
+ {Alpha3bCode: "nya", Alpha2Code: "ny", English: "Chichewa; Chewa; Nyanja"},
+ {Alpha3bCode: "oci", Alpha2Code: "oc", English: "Occitan (post 1500); Provençal"},
+ {Alpha3bCode: "oji", Alpha2Code: "oj", English: "Ojibwa"},
+ {Alpha3bCode: "ori", Alpha2Code: "or", English: "Oriya"},
+ {Alpha3bCode: "orm", Alpha2Code: "om", English: "Oromo"},
+ {Alpha3bCode: "oss", Alpha2Code: "os", English: "Ossetian; Ossetic"},
+ {Alpha3bCode: "pan", Alpha2Code: "pa", English: "Panjabi; Punjabi"},
+ {Alpha3bCode: "per", Alpha2Code: "fa", English: "Persian"},
+ {Alpha3bCode: "pli", Alpha2Code: "pi", English: "Pali"},
+ {Alpha3bCode: "pol", Alpha2Code: "pl", English: "Polish"},
+ {Alpha3bCode: "por", Alpha2Code: "pt", English: "Portuguese"},
+ {Alpha3bCode: "pus", Alpha2Code: "ps", English: "Pushto; Pashto"},
+ {Alpha3bCode: "que", Alpha2Code: "qu", English: "Quechua"},
+ {Alpha3bCode: "roh", Alpha2Code: "rm", English: "Romansh"},
+ {Alpha3bCode: "rum", Alpha2Code: "ro", English: "Romanian; Moldavian; Moldovan"},
+ {Alpha3bCode: "run", Alpha2Code: "rn", English: "Rundi"},
+ {Alpha3bCode: "rus", Alpha2Code: "ru", English: "Russian"},
+ {Alpha3bCode: "sag", Alpha2Code: "sg", English: "Sango"},
+ {Alpha3bCode: "san", Alpha2Code: "sa", English: "Sanskrit"},
+ {Alpha3bCode: "sin", Alpha2Code: "si", English: "Sinhala; Sinhalese"},
+ {Alpha3bCode: "slo", Alpha2Code: "sk", English: "Slovak"},
+ {Alpha3bCode: "slv", Alpha2Code: "sl", English: "Slovenian"},
+ {Alpha3bCode: "sme", Alpha2Code: "se", English: "Northern Sami"},
+ {Alpha3bCode: "smo", Alpha2Code: "sm", English: "Samoan"},
+ {Alpha3bCode: "sna", Alpha2Code: "sn", English: "Shona"},
+ {Alpha3bCode: "snd", Alpha2Code: "sd", English: "Sindhi"},
+ {Alpha3bCode: "som", Alpha2Code: "so", English: "Somali"},
+ {Alpha3bCode: "sot", Alpha2Code: "st", English: "Sotho, Southern"},
+ {Alpha3bCode: "spa", Alpha2Code: "es", English: "Spanish; Castilian"},
+ {Alpha3bCode: "srd", Alpha2Code: "sc", English: "Sardinian"},
+ {Alpha3bCode: "srp", Alpha2Code: "sr", English: "Serbian"},
+ {Alpha3bCode: "ssw", Alpha2Code: "ss", English: "Swati"},
+ {Alpha3bCode: "sun", Alpha2Code: "su", English: "Sundanese"},
+ {Alpha3bCode: "swa", Alpha2Code: "sw", English: "Swahili"},
+ {Alpha3bCode: "swe", Alpha2Code: "sv", English: "Swedish"},
+ {Alpha3bCode: "tah", Alpha2Code: "ty", English: "Tahitian"},
+ {Alpha3bCode: "tam", Alpha2Code: "ta", English: "Tamil"},
+ {Alpha3bCode: "tat", Alpha2Code: "tt", English: "Tatar"},
+ {Alpha3bCode: "tel", Alpha2Code: "te", English: "Telugu"},
+ {Alpha3bCode: "tgk", Alpha2Code: "tg", English: "Tajik"},
+ {Alpha3bCode: "tgl", Alpha2Code: "tl", English: "Tagalog"},
+ {Alpha3bCode: "tha", Alpha2Code: "th", English: "Thai"},
+ {Alpha3bCode: "tib", Alpha2Code: "bo", English: "Tibetan"},
+ {Alpha3bCode: "tir", Alpha2Code: "ti", English: "Tigrinya"},
+ {Alpha3bCode: "ton", Alpha2Code: "to", English: "Tonga (Tonga Islands)"},
+ {Alpha3bCode: "tsn", Alpha2Code: "tn", English: "Tswana"},
+ {Alpha3bCode: "tso", Alpha2Code: "ts", English: "Tsonga"},
+ {Alpha3bCode: "tuk", Alpha2Code: "tk", English: "Turkmen"},
+ {Alpha3bCode: "tur", Alpha2Code: "tr", English: "Turkish"},
+ {Alpha3bCode: "twi", Alpha2Code: "tw", English: "Twi"},
+ {Alpha3bCode: "uig", Alpha2Code: "ug", English: "Uighur; Uyghur"},
+ {Alpha3bCode: "ukr", Alpha2Code: "uk", English: "Ukrainian"},
+ {Alpha3bCode: "urd", Alpha2Code: "ur", English: "Urdu"},
+ {Alpha3bCode: "uzb", Alpha2Code: "uz", English: "Uzbek"},
+ {Alpha3bCode: "ven", Alpha2Code: "ve", English: "Venda"},
+ {Alpha3bCode: "vie", Alpha2Code: "vi", English: "Vietnamese"},
+ {Alpha3bCode: "vol", Alpha2Code: "vo", English: "Volapük"},
+ {Alpha3bCode: "wel", Alpha2Code: "cy", English: "Welsh"},
+ {Alpha3bCode: "wln", Alpha2Code: "wa", English: "Walloon"},
+ {Alpha3bCode: "wol", Alpha2Code: "wo", English: "Wolof"},
+ {Alpha3bCode: "xho", Alpha2Code: "xh", English: "Xhosa"},
+ {Alpha3bCode: "yid", Alpha2Code: "yi", English: "Yiddish"},
+ {Alpha3bCode: "yor", Alpha2Code: "yo", English: "Yoruba"},
+ {Alpha3bCode: "zha", Alpha2Code: "za", English: "Zhuang; Chuang"},
+ {Alpha3bCode: "zul", Alpha2Code: "zu", English: "Zulu"},
+}
diff --git a/vendor/github.com/asaskevich/govalidator/utils.go b/vendor/github.com/asaskevich/govalidator/utils.go
new file mode 100644
index 000000000..f4c30f824
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/utils.go
@@ -0,0 +1,270 @@
+package govalidator
+
+import (
+ "errors"
+ "fmt"
+ "html"
+ "math"
+ "path"
+ "regexp"
+ "strings"
+ "unicode"
+ "unicode/utf8"
+)
+
+// Contains checks if the string contains the substring.
+func Contains(str, substring string) bool {
+ return strings.Contains(str, substring)
+}
+
+// Matches checks if the string matches the pattern (the pattern is a regular expression).
+// In case of error it returns false.
+func Matches(str, pattern string) bool {
+ match, _ := regexp.MatchString(pattern, str)
+ return match
+}
+
+// LeftTrim trims characters from the left side of the input.
+// If the second argument is empty, it removes leading spaces.
+func LeftTrim(str, chars string) string {
+ if chars == "" {
+ return strings.TrimLeftFunc(str, unicode.IsSpace)
+ }
+ r, _ := regexp.Compile("^[" + chars + "]+")
+ return r.ReplaceAllString(str, "")
+}
+
+// RightTrim trims characters from the right side of the input.
+// If the second argument is empty, it removes trailing spaces.
+func RightTrim(str, chars string) string {
+ if chars == "" {
+ return strings.TrimRightFunc(str, unicode.IsSpace)
+ }
+ r, _ := regexp.Compile("[" + chars + "]+$")
+ return r.ReplaceAllString(str, "")
+}
+
+// Trim trims characters from both sides of the input.
+// If the second argument is empty, it removes spaces.
+func Trim(str, chars string) string {
+ return LeftTrim(RightTrim(str, chars), chars)
+}
+
+// WhiteList removes characters that do not appear in the whitelist.
+func WhiteList(str, chars string) string {
+ pattern := "[^" + chars + "]+"
+ r, _ := regexp.Compile(pattern)
+ return r.ReplaceAllString(str, "")
+}
+
+// BlackList removes characters that appear in the blacklist.
+func BlackList(str, chars string) string {
+ pattern := "[" + chars + "]+"
+ r, _ := regexp.Compile(pattern)
+ return r.ReplaceAllString(str, "")
+}
+
+// StripLow removes characters with a numerical value < 32 or equal to 127, mostly control characters.
+// If keepNewLines is true, newline characters are preserved (\n and \r, hex 0xA and 0xD).
+func StripLow(str string, keepNewLines bool) string {
+ chars := ""
+ if keepNewLines {
+ chars = "\x00-\x09\x0B\x0C\x0E-\x1F\x7F"
+ } else {
+ chars = "\x00-\x1F\x7F"
+ }
+ return BlackList(str, chars)
+}
+
+// ReplacePattern replaces all matches of the regular expression pattern in str with the replace string.
+func ReplacePattern(str, pattern, replace string) string {
+ r, _ := regexp.Compile(pattern)
+ return r.ReplaceAllString(str, replace)
+}
+
+// Escape replaces <, >, & and " with HTML entities.
+var Escape = html.EscapeString
+
+func addSegment(inrune, segment []rune) []rune {
+ if len(segment) == 0 {
+ return inrune
+ }
+ if len(inrune) != 0 {
+ inrune = append(inrune, '_')
+ }
+ inrune = append(inrune, segment...)
+ return inrune
+}
+
+// UnderscoreToCamelCase converts from underscore separated form to camel case form.
+// Ex.: my_func => MyFunc
+func UnderscoreToCamelCase(s string) string {
+ return strings.Replace(strings.Title(strings.Replace(strings.ToLower(s), "_", " ", -1)), " ", "", -1)
+}
+
+// CamelCaseToUnderscore converts from camel case form to underscore separated form.
+// Ex.: MyFunc => my_func
+func CamelCaseToUnderscore(str string) string {
+ var output []rune
+ var segment []rune
+ for _, r := range str {
+
+ // do not treat numbers as separate segments
+ if !unicode.IsLower(r) && string(r) != "_" && !unicode.IsNumber(r) {
+ output = addSegment(output, segment)
+ segment = nil
+ }
+ segment = append(segment, unicode.ToLower(r))
+ }
+ output = addSegment(output, segment)
+ return string(output)
+}
+
+// Reverse returns the reversed string.
+func Reverse(s string) string {
+ r := []rune(s)
+ for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
+ r[i], r[j] = r[j], r[i]
+ }
+ return string(r)
+}
+
+// GetLines splits the string by "\n" and returns a slice of lines.
+func GetLines(s string) []string {
+ return strings.Split(s, "\n")
+}
+
+// GetLine returns the specified line of a multiline string.
+func GetLine(s string, index int) (string, error) {
+ lines := GetLines(s)
+ if index < 0 || index >= len(lines) {
+ return "", errors.New("line index out of bounds")
+ }
+ return lines[index], nil
+}
+
+// RemoveTags removes all tags from an HTML string.
+func RemoveTags(s string) string {
+ return ReplacePattern(s, "<[^>]*>", "")
+}
+
+// SafeFileName returns a safe string that can be used in file names.
+func SafeFileName(str string) string {
+ name := strings.ToLower(str)
+ name = path.Clean(path.Base(name))
+ name = strings.Trim(name, " ")
+ separators, err := regexp.Compile(`[ &_=+:]`)
+ if err == nil {
+ name = separators.ReplaceAllString(name, "-")
+ }
+ legal, err := regexp.Compile(`[^[:alnum:]-.]`)
+ if err == nil {
+ name = legal.ReplaceAllString(name, "")
+ }
+ for strings.Contains(name, "--") {
+ name = strings.Replace(name, "--", "-", -1)
+ }
+ return name
+}
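+
+// For example (illustrative, assuming the regular expressions above compile as written):
+//
+//     SafeFileName("My File+Name.TXT") // "my-file-name.txt"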
+
+// NormalizeEmail canonicalizes an email address.
+// The local part and the hostname are lowercased for all domains.
+// Normalization follows special rules for known providers: currently, GMail
+// addresses have dots removed in the local part and are stripped of tags
+// (e.g. some.one+tag@gmail.com becomes someone@gmail.com), and all
+// @googlemail.com addresses are normalized to @gmail.com.
+func NormalizeEmail(str string) (string, error) {
+ if !IsEmail(str) {
+ return "", fmt.Errorf("%s is not an email", str)
+ }
+ parts := strings.Split(str, "@")
+ parts[0] = strings.ToLower(parts[0])
+ parts[1] = strings.ToLower(parts[1])
+ if parts[1] == "gmail.com" || parts[1] == "googlemail.com" {
+ parts[1] = "gmail.com"
+ parts[0] = strings.Split(ReplacePattern(parts[0], `\.`, ""), "+")[0]
+ }
+ return strings.Join(parts, "@"), nil
+}
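+
+// For example: NormalizeEmail("some.one+tag@googlemail.com") returns
+// "someone@gmail.com".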
+
+// Truncate truncates a string to the closest word boundary around the given length, appending the ending string.
+func Truncate(str string, length int, ending string) string {
+ var aftstr, befstr string
+ if len(str) > length {
+ words := strings.Fields(str)
+ before, present := 0, 0
+ for i := range words {
+ befstr = aftstr
+ before = present
+ aftstr = aftstr + words[i] + " "
+ present = len(aftstr)
+ if present > length && i != 0 {
+ if (length - before) < (present - length) {
+ return Trim(befstr, " /\\.,\"'#!?&@+-") + ending
+ }
+ return Trim(aftstr, " /\\.,\"'#!?&@+-") + ending
+ }
+ }
+ }
+
+ return str
+}
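+
+// For example: Truncate("The quick brown fox jumps over the lazy dog", 25, "...")
+// cuts at the word boundary closest to 25 characters and returns
+// "The quick brown fox jumps...".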
+
+// PadLeft pads the left side of a string if the size of the string is less than the indicated pad length.
+func PadLeft(str string, padStr string, padLen int) string {
+ return buildPadStr(str, padStr, padLen, true, false)
+}
+
+// PadRight pads the right side of a string if the size of the string is less than the indicated pad length.
+func PadRight(str string, padStr string, padLen int) string {
+ return buildPadStr(str, padStr, padLen, false, true)
+}
+
+// PadBoth pads both sides of a string if the size of the string is less than the indicated pad length.
+func PadBoth(str string, padStr string, padLen int) string {
+ return buildPadStr(str, padStr, padLen, true, true)
+}
+
+// buildPadStr pads a string on the left, right, or both sides.
+// Note that the padding string can be unicode and more than one character.
+func buildPadStr(str string, padStr string, padLen int, padLeft bool, padRight bool) string {
+
+ // When the padded length is less than the current string size
+ if padLen < utf8.RuneCountInString(str) {
+ return str
+ }
+
+ padLen -= utf8.RuneCountInString(str)
+
+ targetLen := padLen
+
+ targetLenLeft := targetLen
+ targetLenRight := targetLen
+ if padLeft && padRight {
+ targetLenLeft = padLen / 2
+ targetLenRight = padLen - targetLenLeft
+ }
+
+ strToRepeatLen := utf8.RuneCountInString(padStr)
+
+ repeatTimes := int(math.Ceil(float64(targetLen) / float64(strToRepeatLen)))
+ repeatedString := strings.Repeat(padStr, repeatTimes)
+
+ leftSide := ""
+ if padLeft {
+ leftSide = repeatedString[0:targetLenLeft]
+ }
+
+ rightSide := ""
+ if padRight {
+ rightSide = repeatedString[0:targetLenRight]
+ }
+
+ return leftSide + str + rightSide
+}
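+
+// For example:
+//
+//     PadLeft("ab", "*", 5)  // "***ab"
+//     PadRight("ab", "*", 5) // "ab***"
+//     PadBoth("ab", "*", 5)  // "*ab**"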
+
+// TruncatingErrorf drops extra arguments passed to fmt.Errorf that are not consumed by format verbs in str (only %s verbs are counted).
+func TruncatingErrorf(str string, args ...interface{}) error {
+ n := strings.Count(str, "%s")
+ return fmt.Errorf(str, args[:n]...)
+}
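+
+// For example: TruncatingErrorf("%s failed", "validation", "unused") ignores
+// "unused" because the format string contains a single %s verb.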
diff --git a/vendor/github.com/asaskevich/govalidator/validator.go b/vendor/github.com/asaskevich/govalidator/validator.go
new file mode 100644
index 000000000..c9c4fac06
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/validator.go
@@ -0,0 +1,1768 @@
+// Package govalidator is a package of validators and sanitizers for strings, structs and collections.
+package govalidator
+
+import (
+ "bytes"
+ "crypto/rsa"
+ "crypto/x509"
+ "encoding/base64"
+ "encoding/json"
+ "encoding/pem"
+ "fmt"
+ "io/ioutil"
+ "net"
+ "net/url"
+ "reflect"
+ "regexp"
+ "sort"
+ "strconv"
+ "strings"
+ "time"
+ "unicode"
+ "unicode/utf8"
+)
+
+var (
+ fieldsRequiredByDefault bool
+ nilPtrAllowedByRequired = false
+ notNumberRegexp = regexp.MustCompile("[^0-9]+")
+ whiteSpacesAndMinus = regexp.MustCompile(`[\s-]+`)
+ paramsRegexp = regexp.MustCompile(`\(.*\)$`)
+)
+
+const maxURLRuneCount = 2083
+const minURLRuneCount = 3
+const rfc3339WithoutZone = "2006-01-02T15:04:05"
+
+// SetFieldsRequiredByDefault causes validation to fail when struct fields
+// do not include validations or are not explicitly marked as exempt (using `valid:"-"` or `valid:"email,optional"`).
+// This struct definition will fail govalidator.ValidateStruct() (and the field values do not matter):
+//     type exampleStruct struct {
+//         Name  string ``
+//         Email string `valid:"email"`
+//     }
+// This, however, will only fail when Email is empty or an invalid email address:
+//     type exampleStruct2 struct {
+//         Name  string `valid:"-"`
+//         Email string `valid:"email"`
+//     }
+// Lastly, this will only fail when Email is an invalid email address but not when it's empty:
+//     type exampleStruct2 struct {
+//         Name  string `valid:"-"`
+//         Email string `valid:"email,optional"`
+//     }
+func SetFieldsRequiredByDefault(value bool) {
+ fieldsRequiredByDefault = value
+}
+
+// SetNilPtrAllowedByRequired causes validation to pass for nil ptrs when a field is set to required.
+// The validation will still reject ptr fields in their zero value state. Example with this enabled:
+//     type exampleStruct struct {
+//         Name *string `valid:"required"`
+//     }
+// With `Name` set to "", this will be considered invalid input and will cause a validation error.
+// With `Name` set to nil, this will be considered valid by validation.
+// By default this is disabled.
+func SetNilPtrAllowedByRequired(value bool) {
+ nilPtrAllowedByRequired = value
+}
+
+// IsEmail checks if the string is an email.
+func IsEmail(str string) bool {
+ // TODO uppercase letters are not supported
+ return rxEmail.MatchString(str)
+}
+
+// IsExistingEmail checks if the string is an email of existing domain
+func IsExistingEmail(email string) bool {
+
+ if len(email) < 6 || len(email) > 254 {
+ return false
+ }
+ at := strings.LastIndex(email, "@")
+ if at <= 0 || at > len(email)-3 {
+ return false
+ }
+ user := email[:at]
+ host := email[at+1:]
+ if len(user) > 64 {
+ return false
+ }
+ switch host {
+ case "localhost", "example.com":
+ return true
+ }
+ if userDotRegexp.MatchString(user) || !userRegexp.MatchString(user) || !hostRegexp.MatchString(host) {
+ return false
+ }
+ if _, err := net.LookupMX(host); err != nil {
+ if _, err := net.LookupIP(host); err != nil {
+ return false
+ }
+ }
+
+ return true
+}
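+
+// Note that IsExistingEmail performs live DNS lookups (MX, then IP) for the
+// host part, so its result depends on network availability.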
+
+// IsURL checks if the string is a URL.
+func IsURL(str string) bool {
+ if str == "" || utf8.RuneCountInString(str) >= maxURLRuneCount || len(str) <= minURLRuneCount || strings.HasPrefix(str, ".") {
+ return false
+ }
+ strTemp := str
+ if strings.Contains(str, ":") && !strings.Contains(str, "://") {
+ // support URLs with no indicated scheme but with a colon for the port number;
+ // http:// is appended so url.Parse will succeed, strTemp is used so it does not impact rxURL.MatchString
+ strTemp = "http://" + str
+ }
+ u, err := url.Parse(strTemp)
+ if err != nil {
+ return false
+ }
+ if strings.HasPrefix(u.Host, ".") {
+ return false
+ }
+ if u.Host == "" && (u.Path != "" && !strings.Contains(u.Path, ".")) {
+ return false
+ }
+ return rxURL.MatchString(str)
+}
+
+// IsRequestURL checks if the string rawurl, assuming
+// it was received in an HTTP request, is a valid
+// URL conforming to RFC 3986.
+func IsRequestURL(rawurl string) bool {
+ url, err := url.ParseRequestURI(rawurl)
+ if err != nil {
+ return false // couldn't even parse the rawurl
+ }
+ if len(url.Scheme) == 0 {
+ return false // no scheme found
+ }
+ return true
+}
+
+// IsRequestURI checks if the string rawurl, assuming
+// it was received in an HTTP request, is an
+// absolute URI or an absolute path.
+func IsRequestURI(rawurl string) bool {
+ _, err := url.ParseRequestURI(rawurl)
+ return err == nil
+}
+
+// IsAlpha checks if the string contains only letters (a-zA-Z). Empty string is valid.
+func IsAlpha(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxAlpha.MatchString(str)
+}
+
+// IsUTFLetter checks if the string contains only unicode letter characters.
+// Similar to IsAlpha but for all languages. Empty string is valid.
+func IsUTFLetter(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+
+ for _, c := range str {
+ if !unicode.IsLetter(c) {
+ return false
+ }
+ }
+ return true
+
+}
+
+// IsAlphanumeric checks if the string contains only letters and numbers. Empty string is valid.
+func IsAlphanumeric(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxAlphanumeric.MatchString(str)
+}
+
+// IsUTFLetterNumeric checks if the string contains only unicode letters and numbers. Empty string is valid.
+func IsUTFLetterNumeric(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ for _, c := range str {
+ if !unicode.IsLetter(c) && !unicode.IsNumber(c) { //letters && numbers are ok
+ return false
+ }
+ }
+ return true
+
+}
+
+// IsNumeric checks if the string contains only numbers. Empty string is valid.
+func IsNumeric(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxNumeric.MatchString(str)
+}
+
+// IsUTFNumeric checks if the string contains only unicode numbers of any kind.
+// Numbers can be 0-9 but also Fractions ¾, Roman Ⅸ and Hangzhou 〩. Empty string is valid.
+func IsUTFNumeric(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ if strings.IndexAny(str, "+-") > 0 {
+ return false
+ }
+ if len(str) > 1 {
+ str = strings.TrimPrefix(str, "-")
+ str = strings.TrimPrefix(str, "+")
+ }
+ for _, c := range str {
+ if !unicode.IsNumber(c) { //numbers && minus sign are ok
+ return false
+ }
+ }
+ return true
+
+}
+
+// IsUTFDigit checks if the string contains only unicode radix-10 decimal digits. Empty string is valid.
+func IsUTFDigit(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ if strings.IndexAny(str, "+-") > 0 {
+ return false
+ }
+ if len(str) > 1 {
+ str = strings.TrimPrefix(str, "-")
+ str = strings.TrimPrefix(str, "+")
+ }
+ for _, c := range str {
+ if !unicode.IsDigit(c) { //digits && minus sign are ok
+ return false
+ }
+ }
+ return true
+
+}
+
+// IsHexadecimal checks if the string is a hexadecimal number.
+func IsHexadecimal(str string) bool {
+ return rxHexadecimal.MatchString(str)
+}
+
+// IsHexcolor checks if the string is a hexadecimal color.
+func IsHexcolor(str string) bool {
+ return rxHexcolor.MatchString(str)
+}
+
+// IsRGBcolor checks if the string is a valid RGB color in form rgb(RRR, GGG, BBB).
+func IsRGBcolor(str string) bool {
+ return rxRGBcolor.MatchString(str)
+}
+
+// IsLowerCase checks if the string is lowercase. Empty string is valid.
+func IsLowerCase(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return str == strings.ToLower(str)
+}
+
+// IsUpperCase checks if the string is uppercase. Empty string is valid.
+func IsUpperCase(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return str == strings.ToUpper(str)
+}
+
+// HasLowerCase checks if the string contains at least 1 lowercase. Empty string is valid.
+func HasLowerCase(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxHasLowerCase.MatchString(str)
+}
+
+// HasUpperCase checks if the string contains at least 1 uppercase. Empty string is valid.
+func HasUpperCase(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxHasUpperCase.MatchString(str)
+}
+
+// IsInt checks if the string is an integer. Empty string is valid.
+func IsInt(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxInt.MatchString(str)
+}
+
+// IsFloat checks if the string is a float.
+func IsFloat(str string) bool {
+ return str != "" && rxFloat.MatchString(str)
+}
+
+// IsDivisibleBy checks if the string is a number that's divisible by another.
+// If the second argument is not a valid integer or is zero, it returns false.
+// Otherwise, if the first argument is not a valid integer or is zero, it returns true (an invalid string converts to zero).
+func IsDivisibleBy(str, num string) bool {
+ f, _ := ToFloat(str)
+ p := int64(f)
+ q, _ := ToInt(num)
+ if q == 0 {
+ return false
+ }
+ return (p == 0) || (p%q == 0)
+}
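+
+// For example: IsDivisibleBy("6", "3") == true, IsDivisibleBy("6", "4") == false,
+// and IsDivisibleBy("", "1") == true, since an invalid string converts to zero.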
+
+// IsNull checks if the string is null.
+func IsNull(str string) bool {
+ return len(str) == 0
+}
+
+// IsNotNull checks if the string is not null.
+func IsNotNull(str string) bool {
+ return !IsNull(str)
+}
+
+// HasWhitespaceOnly checks if the string contains only whitespace.
+func HasWhitespaceOnly(str string) bool {
+ return len(str) > 0 && rxHasWhitespaceOnly.MatchString(str)
+}
+
+// HasWhitespace checks if the string contains any whitespace
+func HasWhitespace(str string) bool {
+ return len(str) > 0 && rxHasWhitespace.MatchString(str)
+}
+
+// IsByteLength checks if the string's length (in bytes) falls in a range.
+func IsByteLength(str string, min, max int) bool {
+ return len(str) >= min && len(str) <= max
+}
+
+// IsUUIDv3 checks if the string is a UUID version 3.
+func IsUUIDv3(str string) bool {
+ return rxUUID3.MatchString(str)
+}
+
+// IsUUIDv4 checks if the string is a UUID version 4.
+func IsUUIDv4(str string) bool {
+ return rxUUID4.MatchString(str)
+}
+
+// IsUUIDv5 checks if the string is a UUID version 5.
+func IsUUIDv5(str string) bool {
+ return rxUUID5.MatchString(str)
+}
+
+// IsUUID checks if the string is a UUID (version 3, 4 or 5).
+func IsUUID(str string) bool {
+ return rxUUID.MatchString(str)
+}
+
+// Byte to index table for O(1) lookups when unmarshaling.
+// We use 0xFF as sentinel value for invalid indexes.
+var ulidDec = [...]byte{
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x01,
+ 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E,
+ 0x0F, 0x10, 0x11, 0xFF, 0x12, 0x13, 0xFF, 0x14, 0x15, 0xFF,
+ 0x16, 0x17, 0x18, 0x19, 0x1A, 0xFF, 0x1B, 0x1C, 0x1D, 0x1E,
+ 0x1F, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x0A, 0x0B, 0x0C,
+ 0x0D, 0x0E, 0x0F, 0x10, 0x11, 0xFF, 0x12, 0x13, 0xFF, 0x14,
+ 0x15, 0xFF, 0x16, 0x17, 0x18, 0x19, 0x1A, 0xFF, 0x1B, 0x1C,
+ 0x1D, 0x1E, 0x1F, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+}
+
+// ulidEncodedSize is the length of a text encoded ULID.
+const ulidEncodedSize = 26
+
+// IsULID checks if the string is a ULID.
+//
+// Implementation taken from:
+// https://github.com/oklog/ulid (Apache-2.0 License)
+//
+func IsULID(str string) bool {
+ // Check if a base32 encoded ULID is the right length.
+ if len(str) != ulidEncodedSize {
+ return false
+ }
+
+ // Check if all the characters in a base32 encoded ULID are part of the
+ // expected base32 character set.
+ if ulidDec[str[0]] == 0xFF ||
+ ulidDec[str[1]] == 0xFF ||
+ ulidDec[str[2]] == 0xFF ||
+ ulidDec[str[3]] == 0xFF ||
+ ulidDec[str[4]] == 0xFF ||
+ ulidDec[str[5]] == 0xFF ||
+ ulidDec[str[6]] == 0xFF ||
+ ulidDec[str[7]] == 0xFF ||
+ ulidDec[str[8]] == 0xFF ||
+ ulidDec[str[9]] == 0xFF ||
+ ulidDec[str[10]] == 0xFF ||
+ ulidDec[str[11]] == 0xFF ||
+ ulidDec[str[12]] == 0xFF ||
+ ulidDec[str[13]] == 0xFF ||
+ ulidDec[str[14]] == 0xFF ||
+ ulidDec[str[15]] == 0xFF ||
+ ulidDec[str[16]] == 0xFF ||
+ ulidDec[str[17]] == 0xFF ||
+ ulidDec[str[18]] == 0xFF ||
+ ulidDec[str[19]] == 0xFF ||
+ ulidDec[str[20]] == 0xFF ||
+ ulidDec[str[21]] == 0xFF ||
+ ulidDec[str[22]] == 0xFF ||
+ ulidDec[str[23]] == 0xFF ||
+ ulidDec[str[24]] == 0xFF ||
+ ulidDec[str[25]] == 0xFF {
+ return false
+ }
+
+ // Check if the first character in a base32 encoded ULID will overflow. This
+ // happens because the base32 representation encodes 130 bits, while the
+ // ULID is only 128 bits.
+ //
+ // See https://github.com/oklog/ulid/issues/9 for details.
+ if str[0] > '7' {
+ return false
+ }
+ return true
+}
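+
+// For example, the canonical sample ULID from the oklog/ulid README is accepted:
+//
+//     IsULID("01ARZ3NDEKTSV4RRFFQ69G5FAV") // true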
+
+// IsCreditCard checks if the string is a credit card.
+func IsCreditCard(str string) bool {
+ sanitized := whiteSpacesAndMinus.ReplaceAllString(str, "")
+ if !rxCreditCard.MatchString(sanitized) {
+ return false
+ }
+
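+ // Luhn checksum: split off the check digit, double every second digit moving
+ // left from it, subtract 9 from any doubled value greater than 9, and require
+ // the resulting sum plus the check digit to be divisible by 10.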
+ number, _ := ToInt(sanitized)
+ number, lastDigit := number / 10, number % 10
+
+ var sum int64
+ for i := 0; number > 0; i++ {
+ digit := number % 10
+
+ if i%2 == 0 {
+ digit *= 2
+ if digit > 9 {
+ digit -= 9
+ }
+ }
+
+ sum += digit
+ number = number / 10
+ }
+
+ return (sum+lastDigit)%10 == 0
+}
+
+// IsISBN10 checks if the string is an ISBN version 10.
+func IsISBN10(str string) bool {
+ return IsISBN(str, 10)
+}
+
+// IsISBN13 checks if the string is an ISBN version 13.
+func IsISBN13(str string) bool {
+ return IsISBN(str, 13)
+}
+
+// IsISBN checks if the string is an ISBN (version 10 or 13).
+// If the version value is not equal to 10 or 13, it checks both variants.
+func IsISBN(str string, version int) bool {
+ sanitized := whiteSpacesAndMinus.ReplaceAllString(str, "")
+ var checksum int32
+ var i int32
+ if version == 10 {
+ if !rxISBN10.MatchString(sanitized) {
+ return false
+ }
+ for i = 0; i < 9; i++ {
+ checksum += (i + 1) * int32(sanitized[i]-'0')
+ }
+ if sanitized[9] == 'X' {
+ checksum += 10 * 10
+ } else {
+ checksum += 10 * int32(sanitized[9]-'0')
+ }
+ if checksum%11 == 0 {
+ return true
+ }
+ return false
+ } else if version == 13 {
+ if !rxISBN13.MatchString(sanitized) {
+ return false
+ }
+ factor := []int32{1, 3}
+ for i = 0; i < 12; i++ {
+ checksum += factor[i%2] * int32(sanitized[i]-'0')
+ }
+ return (int32(sanitized[12]-'0'))-((10-(checksum%10))%10) == 0
+ }
+ return IsISBN(str, 10) || IsISBN(str, 13)
+}
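+
+// For example, the well-known sample ISBN validates in both versions:
+//
+//     IsISBN10("3-16-148410-X")     // true
+//     IsISBN13("978-3-16-148410-0") // true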
+
+// IsJSON checks if the string is valid JSON (note: uses json.Unmarshal).
+func IsJSON(str string) bool {
+ var js json.RawMessage
+ return json.Unmarshal([]byte(str), &js) == nil
+}
+
+// IsMultibyte checks if the string contains one or more multibyte chars. Empty string is valid.
+func IsMultibyte(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxMultibyte.MatchString(str)
+}
+
+// IsASCII checks if the string contains ASCII chars only. Empty string is valid.
+func IsASCII(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxASCII.MatchString(str)
+}
+
+// IsPrintableASCII checks if the string contains printable ASCII chars only. Empty string is valid.
+func IsPrintableASCII(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxPrintableASCII.MatchString(str)
+}
+
+// IsFullWidth checks if the string contains any full-width chars. Empty string is valid.
+func IsFullWidth(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxFullWidth.MatchString(str)
+}
+
+// IsHalfWidth checks if the string contains any half-width chars. Empty string is valid.
+func IsHalfWidth(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxHalfWidth.MatchString(str)
+}
+
+// IsVariableWidth checks if the string contains a mixture of full and half-width chars. Empty string is valid.
+func IsVariableWidth(str string) bool {
+ if IsNull(str) {
+ return true
+ }
+ return rxHalfWidth.MatchString(str) && rxFullWidth.MatchString(str)
+}
+
+// IsBase64 checks if a string is base64 encoded.
+func IsBase64(str string) bool {
+ return rxBase64.MatchString(str)
+}
+
+// IsFilePath checks if a string is a Windows or Unix file path and returns its type.
+func IsFilePath(str string) (bool, int) {
+ if rxWinPath.MatchString(str) {
+ // check windows path limit, see:
+ // http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx#maxpath
+ if len(str[3:]) > 32767 {
+ return false, Win
+ }
+ return true, Win
+ } else if rxUnixPath.MatchString(str) {
+ return true, Unix
+ }
+ return false, Unknown
+}
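+
+// For example (illustrative, assuming the path regexps match as expected):
+//
+//     ok, pathType := IsFilePath(`c:\dir\file.txt`) // true, Win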
+
+// IsWinFilePath checks both relative & absolute paths in Windows
+func IsWinFilePath(str string) bool {
+ if rxARWinPath.MatchString(str) {
+ // check windows path limit, see:
+ // http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx#maxpath
+ if len(str[3:]) > 32767 {
+ return false
+ }
+ return true
+ }
+ return false
+}
+
+// IsUnixFilePath checks both relative & absolute paths in Unix
+func IsUnixFilePath(str string) bool {
+ return rxARUnixPath.MatchString(str)
+}
+
+// IsDataURI checks if a string is a base64 encoded data URI, such as an image.
+func IsDataURI(str string) bool {
+ dataURI := strings.Split(str, ",")
+ // guard against inputs without a comma, which would make dataURI[1] panic
+ if len(dataURI) < 2 || !rxDataURI.MatchString(dataURI[0]) {
+ return false
+ }
+ return IsBase64(dataURI[1])
+}
+
+// IsMagnetURI checks if a string is a valid magnet URI
+func IsMagnetURI(str string) bool {
+ return rxMagnetURI.MatchString(str)
+}
+
+// IsISO3166Alpha2 checks if a string is a valid two-letter country code
+func IsISO3166Alpha2(str string) bool {
+ for _, entry := range ISO3166List {
+ if str == entry.Alpha2Code {
+ return true
+ }
+ }
+ return false
+}
+
+// IsISO3166Alpha3 checks if a string is a valid three-letter country code
+func IsISO3166Alpha3(str string) bool {
+ for _, entry := range ISO3166List {
+ if str == entry.Alpha3Code {
+ return true
+ }
+ }
+ return false
+}
+
+// IsISO693Alpha2 checks if a string is a valid two-letter language code
+func IsISO693Alpha2(str string) bool {
+ for _, entry := range ISO693List {
+ if str == entry.Alpha2Code {
+ return true
+ }
+ }
+ return false
+}
+
+// IsISO693Alpha3b checks if a string is a valid three-letter language code
+func IsISO693Alpha3b(str string) bool {
+ for _, entry := range ISO693List {
+ if str == entry.Alpha3bCode {
+ return true
+ }
+ }
+ return false
+}
+
+// IsDNSName will validate the given string as a DNS name
+func IsDNSName(str string) bool {
+ if str == "" || len(strings.Replace(str, ".", "", -1)) > 255 {
+ // constraints already violated
+ return false
+ }
+ return !IsIP(str) && rxDNSName.MatchString(str)
+}
+
+// IsHash checks if a string is a hash of type algorithm.
+// Algorithm is one of ['md4', 'md5', 'sha1', 'sha256', 'sha384', 'sha512', 'sha3-224', 'sha3-256', 'sha3-384', 'sha3-512', 'ripemd128', 'ripemd160', 'tiger128', 'tiger160', 'tiger192', 'crc32', 'crc32b']
+func IsHash(str string, algorithm string) bool {
+ var hashLength string
+ algo := strings.ToLower(algorithm)
+
+ if algo == "crc32" || algo == "crc32b" {
+ hashLength = "8"
+ } else if algo == "md5" || algo == "md4" || algo == "ripemd128" || algo == "tiger128" {
+ hashLength = "32"
+ } else if algo == "sha1" || algo == "ripemd160" || algo == "tiger160" {
+ hashLength = "40"
+ } else if algo == "tiger192" {
+ hashLength = "48"
+ } else if algo == "sha3-224" {
+ hashLength = "56"
+ } else if algo == "sha256" || algo == "sha3-256" {
+ hashLength = "64"
+ } else if algo == "sha384" || algo == "sha3-384" {
+ hashLength = "96"
+ } else if algo == "sha512" || algo == "sha3-512" {
+ hashLength = "128"
+ } else {
+ return false
+ }
+
+ return Matches(str, "^[a-f0-9]{"+hashLength+"}$")
+}
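+
+// For example, the MD5 digest of the empty string is accepted:
+//
+//     IsHash("d41d8cd98f00b204e9800998ecf8427e", "md5") // true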
+
+// IsSHA3224 checks if a string is a SHA3-224 hash. Alias for `IsHash(str, "sha3-224")`
+func IsSHA3224(str string) bool {
+ return IsHash(str, "sha3-224")
+}
+
+// IsSHA3256 checks if a string is a SHA3-256 hash. Alias for `IsHash(str, "sha3-256")`
+func IsSHA3256(str string) bool {
+ return IsHash(str, "sha3-256")
+}
+
+// IsSHA3384 checks if a string is a SHA3-384 hash. Alias for `IsHash(str, "sha3-384")`
+func IsSHA3384(str string) bool {
+ return IsHash(str, "sha3-384")
+}
+
+// IsSHA3512 checks if a string is a SHA3-512 hash. Alias for `IsHash(str, "sha3-512")`
+func IsSHA3512(str string) bool {
+ return IsHash(str, "sha3-512")
+}
+
+// IsSHA512 checks if a string is a SHA512 hash. Alias for `IsHash(str, "sha512")`
+func IsSHA512(str string) bool {
+ return IsHash(str, "sha512")
+}
+
+// IsSHA384 checks if a string is a SHA384 hash. Alias for `IsHash(str, "sha384")`
+func IsSHA384(str string) bool {
+ return IsHash(str, "sha384")
+}
+
+// IsSHA256 checks if a string is a SHA256 hash. Alias for `IsHash(str, "sha256")`
+func IsSHA256(str string) bool {
+ return IsHash(str, "sha256")
+}
+
+// IsTiger192 checks if a string is a Tiger192 hash. Alias for `IsHash(str, "tiger192")`
+func IsTiger192(str string) bool {
+ return IsHash(str, "tiger192")
+}
+
+// IsTiger160 checks if a string is a Tiger160 hash. Alias for `IsHash(str, "tiger160")`
+func IsTiger160(str string) bool {
+ return IsHash(str, "tiger160")
+}
+
+// IsRipeMD160 checks if a string is a RipeMD160 hash. Alias for `IsHash(str, "ripemd160")`
+func IsRipeMD160(str string) bool {
+ return IsHash(str, "ripemd160")
+}
+
+// IsSHA1 checks if a string is a SHA-1 hash. Alias for `IsHash(str, "sha1")`
+func IsSHA1(str string) bool {
+ return IsHash(str, "sha1")
+}
+
+// IsTiger128 checks if a string is a Tiger128 hash. Alias for `IsHash(str, "tiger128")`
+func IsTiger128(str string) bool {
+ return IsHash(str, "tiger128")
+}
+
+// IsRipeMD128 checks if a string is a RipeMD128 hash. Alias for `IsHash(str, "ripemd128")`
+func IsRipeMD128(str string) bool {
+ return IsHash(str, "ripemd128")
+}
+
+// IsCRC32 checks if a string is a CRC32 hash. Alias for `IsHash(str, "crc32")`
+func IsCRC32(str string) bool {
+ return IsHash(str, "crc32")
+}
+
+// IsCRC32b checks if a string is a CRC32b hash. Alias for `IsHash(str, "crc32b")`
+func IsCRC32b(str string) bool {
+ return IsHash(str, "crc32b")
+}
+
+// IsMD5 checks if a string is an MD5 hash. Alias for `IsHash(str, "md5")`
+func IsMD5(str string) bool {
+ return IsHash(str, "md5")
+}
+
+// IsMD4 checks if a string is an MD4 hash. Alias for `IsHash(str, "md4")`
+func IsMD4(str string) bool {
+ return IsHash(str, "md4")
+}
+
+// IsDialString validates the given string for usage with the various Dial() functions
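+//
+// For example:
+//
+// IsDialString("golang.org:80") // true
+// IsDialString("127.0.0.1:8080") // true
+// IsDialString("golang.org") // false: no port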
+func IsDialString(str string) bool {
+ if h, p, err := net.SplitHostPort(str); err == nil && h != "" && p != "" && (IsDNSName(h) || IsIP(h)) && IsPort(p) {
+ return true
+ }
+
+ return false
+}
+
+// IsIP checks if a string is either IP version 4 or 6. Alias for `net.ParseIP`
+func IsIP(str string) bool {
+ return net.ParseIP(str) != nil
+}
+
+// IsPort checks if a string represents a valid port
+func IsPort(str string) bool {
+ if i, err := strconv.Atoi(str); err == nil && i > 0 && i < 65536 {
+ return true
+ }
+ return false
+}
+
+// IsIPv4 checks if the string is an IP version 4.
+func IsIPv4(str string) bool {
+ ip := net.ParseIP(str)
+ return ip != nil && strings.Contains(str, ".")
+}
+
+// IsIPv6 checks if the string is an IP version 6.
+func IsIPv6(str string) bool {
+ ip := net.ParseIP(str)
+ return ip != nil && strings.Contains(str, ":")
+}
+
+// IsCIDR checks if the string is a valid CIDR notation (IPv4 and IPv6)
+func IsCIDR(str string) bool {
+ _, _, err := net.ParseCIDR(str)
+ return err == nil
+}
+
+// IsMAC checks if a string is a valid MAC address.
+// Possible MAC formats:
+// 01:23:45:67:89:ab
+// 01:23:45:67:89:ab:cd:ef
+// 01-23-45-67-89-ab
+// 01-23-45-67-89-ab-cd-ef
+// 0123.4567.89ab
+// 0123.4567.89ab.cdef
+func IsMAC(str string) bool {
+ _, err := net.ParseMAC(str)
+ return err == nil
+}
+
+// IsHost checks if the string is a valid IP (both v4 and v6) or a valid DNS name
+func IsHost(str string) bool {
+ return IsIP(str) || IsDNSName(str)
+}
+
+// IsMongoID checks if the string is a valid hex-encoded representation of a MongoDB ObjectId.
+func IsMongoID(str string) bool {
+ return rxHexadecimal.MatchString(str) && (len(str) == 24)
+}
+
+// IsLatitude checks if a string is a valid latitude.
+func IsLatitude(str string) bool {
+ return rxLatitude.MatchString(str)
+}
+
+// IsLongitude checks if a string is a valid longitude.
+func IsLongitude(str string) bool {
+ return rxLongitude.MatchString(str)
+}
+
+// IsIMEI checks if a string is a valid IMEI
+func IsIMEI(str string) bool {
+ return rxIMEI.MatchString(str)
+}
+
+// IsIMSI checks if a string is a valid IMSI
+func IsIMSI(str string) bool {
+ if !rxIMSI.MatchString(str) {
+ return false
+ }
+
+ mcc, err := strconv.ParseInt(str[0:3], 10, 32)
+ if err != nil {
+ return false
+ }
+
+ switch mcc {
+ case 202, 204, 206, 208, 212, 213, 214, 216, 218, 219:
+ case 220, 221, 222, 226, 228, 230, 231, 232, 234, 235:
+ case 238, 240, 242, 244, 246, 247, 248, 250, 255, 257:
+ case 259, 260, 262, 266, 268, 270, 272, 274, 276, 278:
+ case 280, 282, 283, 284, 286, 288, 289, 290, 292, 293:
+ case 294, 295, 297, 302, 308, 310, 311, 312, 313, 314:
+ case 315, 316, 330, 332, 334, 338, 340, 342, 344, 346:
+ case 348, 350, 352, 354, 356, 358, 360, 362, 363, 364:
+ case 365, 366, 368, 370, 372, 374, 376, 400, 401, 402:
+ case 404, 405, 406, 410, 412, 413, 414, 415, 416, 417:
+ case 418, 419, 420, 421, 422, 424, 425, 426, 427, 428:
+ case 429, 430, 431, 432, 434, 436, 437, 438, 440, 441:
+ case 450, 452, 454, 455, 456, 457, 460, 461, 466, 467:
+ case 470, 472, 502, 505, 510, 514, 515, 520, 525, 528:
+ case 530, 536, 537, 539, 540, 541, 542, 543, 544, 545:
+ case 546, 547, 548, 549, 550, 551, 552, 553, 554, 555:
+ case 602, 603, 604, 605, 606, 607, 608, 609, 610, 611:
+ case 612, 613, 614, 615, 616, 617, 618, 619, 620, 621:
+ case 622, 623, 624, 625, 626, 627, 628, 629, 630, 631:
+ case 632, 633, 634, 635, 636, 637, 638, 639, 640, 641:
+ case 642, 643, 645, 646, 647, 648, 649, 650, 651, 652:
+ case 653, 654, 655, 657, 658, 659, 702, 704, 706, 708:
+ case 710, 712, 714, 716, 722, 724, 730, 732, 734, 736:
+ case 738, 740, 742, 744, 746, 748, 750, 995:
+ return true
+ default:
+ return false
+ }
+ return true
+}
+
+// IsRsaPublicKey checks if a string is a valid RSA public key with the provided key length in bits
+func IsRsaPublicKey(str string, keylen int) bool {
+ bb := bytes.NewBufferString(str)
+ pemBytes, err := ioutil.ReadAll(bb)
+ if err != nil {
+ return false
+ }
+ block, _ := pem.Decode(pemBytes)
+ if block != nil && block.Type != "PUBLIC KEY" {
+ return false
+ }
+ var der []byte
+
+ if block != nil {
+ der = block.Bytes
+ } else {
+ der, err = base64.StdEncoding.DecodeString(str)
+ if err != nil {
+ return false
+ }
+ }
+
+ key, err := x509.ParsePKIXPublicKey(der)
+ if err != nil {
+ return false
+ }
+ pubkey, ok := key.(*rsa.PublicKey)
+ if !ok {
+ return false
+ }
+ bitlen := len(pubkey.N.Bytes()) * 8
+ return bitlen == int(keylen)
+}
+
+// IsRegex checks if a given string is a valid regular expression (RE2 syntax)
+func IsRegex(str string) bool {
+ _, err := regexp.Compile(str)
+ return err == nil
+}
+
+func toJSONName(tag string) string {
+ if tag == "" {
+ return ""
+ }
+
+ // JSON name always comes first. If there are no options then split[0] is
+ // the JSON name; if the JSON name is not set, then split[0] is an empty string.
+ split := strings.SplitN(tag, ",", 2)
+
+ name := split[0]
+
+ // However, it is possible that the field is skipped when
+ // (de-)serializing from/to JSON, in which case assume that there is no
+ // tag name to use
+ if name == "-" {
+ return ""
+ }
+ return name
+}
+
+func prependPathToErrors(err error, path string) error {
+ switch err2 := err.(type) {
+ case Error:
+ err2.Path = append([]string{path}, err2.Path...)
+ return err2
+ case Errors:
+ errors := err2.Errors()
+ for i, err3 := range errors {
+ errors[i] = prependPathToErrors(err3, path)
+ }
+ return err2
+ }
+ return err
+}
+
+// ValidateArray performs validation according to a condition iterator that validates every element of the array
+func ValidateArray(array []interface{}, iterator ConditionIterator) bool {
+ return Every(array, iterator)
+}
+
+// ValidateMap validates map fields against a validation map.
+// The result will be `false` if there are any errors.
+// s is the map containing the data to be validated.
+// m is the validation map in the form:
+// map[string]interface{}{"name":"required,alpha","address":map[string]interface{}{"line1":"required,alphanum"}}
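+//
+// A minimal usage sketch (the field names and rules are illustrative):
+//
+// data := map[string]interface{}{"name": "John"}
+// rules := map[string]interface{}{"name": "required,alpha"}
+// ok, err := ValidateMap(data, rules) // ok == true, err == nil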
+func ValidateMap(s map[string]interface{}, m map[string]interface{}) (bool, error) {
+ if s == nil {
+ return true, nil
+ }
+ result := true
+ var err error
+ var errs Errors
+ var index int
+ val := reflect.ValueOf(s)
+ for key, value := range s {
+ presentResult := true
+ validator, ok := m[key]
+ if !ok {
+ presentResult = false
+ var err error
+ err = fmt.Errorf("all map keys have to be present in the validation map; got %s", key)
+ err = prependPathToErrors(err, key)
+ errs = append(errs, err)
+ }
+ valueField := reflect.ValueOf(value)
+ mapResult := true
+ typeResult := true
+ structResult := true
+ resultField := true
+ switch subValidator := validator.(type) {
+ case map[string]interface{}:
+ var err error
+ if v, ok := value.(map[string]interface{}); !ok {
+ mapResult = false
+ err = fmt.Errorf("map validator has to be for the map type only; got %s", valueField.Type().String())
+ err = prependPathToErrors(err, key)
+ errs = append(errs, err)
+ } else {
+ mapResult, err = ValidateMap(v, subValidator)
+ if err != nil {
+ mapResult = false
+ err = prependPathToErrors(err, key)
+ errs = append(errs, err)
+ }
+ }
+ case string:
+ if (valueField.Kind() == reflect.Struct ||
+ (valueField.Kind() == reflect.Ptr && valueField.Elem().Kind() == reflect.Struct)) &&
+ subValidator != "-" {
+ var err error
+ structResult, err = ValidateStruct(valueField.Interface())
+ if err != nil {
+ err = prependPathToErrors(err, key)
+ errs = append(errs, err)
+ }
+ }
+ resultField, err = typeCheck(valueField, reflect.StructField{
+ Name: key,
+ PkgPath: "",
+ Type: val.Type(),
+ Tag: reflect.StructTag(fmt.Sprintf("%s:%q", tagName, subValidator)),
+ Offset: 0,
+ Index: []int{index},
+ Anonymous: false,
+ }, val, nil)
+ if err != nil {
+ errs = append(errs, err)
+ }
+ case nil:
+ // already handled by the presence check above
+ default:
+ typeResult = false
+ err = fmt.Errorf("map validator has to be either map[string]interface{} or string; got %s", valueField.Type().String())
+ err = prependPathToErrors(err, key)
+ errs = append(errs, err)
+ }
+ result = result && presentResult && typeResult && resultField && structResult && mapResult
+ index++
+ }
+ // checks required keys
+ requiredResult := true
+ for key, value := range m {
+ if schema, ok := value.(string); ok {
+ tags := parseTagIntoMap(schema)
+ if required, ok := tags["required"]; ok {
+ if _, ok := s[key]; !ok {
+ requiredResult = false
+ if required.customErrorMessage != "" {
+ err = Error{key, fmt.Errorf(required.customErrorMessage), true, "required", []string{}}
+ } else {
+ err = Error{key, fmt.Errorf("required field missing"), false, "required", []string{}}
+ }
+ errs = append(errs, err)
+ }
+ }
+ }
+ }
+
+ if len(errs) > 0 {
+ err = errs
+ }
+ return result && requiredResult, err
+}
+
+// ValidateStruct validates struct fields based on their tags.
+// The result will be `false` if there are any errors.
+// TODO: currently there is no guarantee that errors will be returned in a predictable order (tests may fail)
+func ValidateStruct(s interface{}) (bool, error) {
+ if s == nil {
+ return true, nil
+ }
+ result := true
+ var err error
+ val := reflect.ValueOf(s)
+ if val.Kind() == reflect.Interface || val.Kind() == reflect.Ptr {
+ val = val.Elem()
+ }
+ // we only accept structs
+ if val.Kind() != reflect.Struct {
+ return false, fmt.Errorf("function only accepts structs; got %s", val.Kind())
+ }
+ var errs Errors
+ for i := 0; i < val.NumField(); i++ {
+ valueField := val.Field(i)
+ typeField := val.Type().Field(i)
+ if typeField.PkgPath != "" {
+ continue // Private field
+ }
+ structResult := true
+ if valueField.Kind() == reflect.Interface {
+ valueField = valueField.Elem()
+ }
+ if (valueField.Kind() == reflect.Struct ||
+ (valueField.Kind() == reflect.Ptr && valueField.Elem().Kind() == reflect.Struct)) &&
+ typeField.Tag.Get(tagName) != "-" {
+ var err error
+ structResult, err = ValidateStruct(valueField.Interface())
+ if err != nil {
+ err = prependPathToErrors(err, typeField.Name)
+ errs = append(errs, err)
+ }
+ }
+ resultField, err2 := typeCheck(valueField, typeField, val, nil)
+ if err2 != nil {
+
+ // Replace structure name with JSON name if there is a tag on the variable
+ jsonTag := toJSONName(typeField.Tag.Get("json"))
+ if jsonTag != "" {
+ switch jsonError := err2.(type) {
+ case Error:
+ jsonError.Name = jsonTag
+ err2 = jsonError
+ case Errors:
+ for i2, err3 := range jsonError {
+ switch customErr := err3.(type) {
+ case Error:
+ customErr.Name = jsonTag
+ jsonError[i2] = customErr
+ }
+ }
+
+ err2 = jsonError
+ }
+ }
+
+ errs = append(errs, err2)
+ }
+ result = result && resultField && structResult
+ }
+ if len(errs) > 0 {
+ err = errs
+ }
+ return result, err
+}
+
+// ValidateStructAsync performs async validation of the struct and returns results through the channels
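+//
+// A usage sketch; both channels are unbuffered and the goroutine sends the
+// bool first, so receive in that order to avoid a deadlock:
+//
+// okCh, errCh := ValidateStructAsync(s)
+// ok := <-okCh
+// err := <-errCh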
+func ValidateStructAsync(s interface{}) (<-chan bool, <-chan error) {
+ res := make(chan bool)
+ errors := make(chan error)
+
+ go func() {
+ defer close(res)
+ defer close(errors)
+
+ isValid, err := ValidateStruct(s)
+
+ res <- isValid
+ errors <- err
+ }()
+
+ return res, errors
+}
+
+// ValidateMapAsync performs async validation of the map and returns results through the channels
+func ValidateMapAsync(s map[string]interface{}, m map[string]interface{}) (<-chan bool, <-chan error) {
+ res := make(chan bool)
+ errors := make(chan error)
+
+ go func() {
+ defer close(res)
+ defer close(errors)
+
+ isValid, err := ValidateMap(s, m)
+
+ res <- isValid
+ errors <- err
+ }()
+
+ return res, errors
+}
+
+// parseTagIntoMap parses a struct tag `valid:required~Some error message,length(2|3)` into map[string]string{"required": "Some error message", "length(2|3)": ""}
+func parseTagIntoMap(tag string) tagOptionsMap {
+ optionsMap := make(tagOptionsMap)
+ options := strings.Split(tag, ",")
+
+ for i, option := range options {
+ option = strings.TrimSpace(option)
+
+ validationOptions := strings.Split(option, "~")
+ if !isValidTag(validationOptions[0]) {
+ continue
+ }
+ if len(validationOptions) == 2 {
+ optionsMap[validationOptions[0]] = tagOption{validationOptions[0], validationOptions[1], i}
+ } else {
+ optionsMap[validationOptions[0]] = tagOption{validationOptions[0], "", i}
+ }
+ }
+ return optionsMap
+}
+
+func isValidTag(s string) bool {
+ if s == "" {
+ return false
+ }
+ for _, c := range s {
+ switch {
+ case strings.ContainsRune("\\'\"!#$%&()*+-./:<=>?@[]^_{|}~ ", c):
+ // Backslash and quote chars are reserved, but
+ // otherwise any punctuation chars are allowed
+ // in a tag name.
+ default:
+ if !unicode.IsLetter(c) && !unicode.IsDigit(c) {
+ return false
+ }
+ }
+ }
+ return true
+}
+
+// IsSSN will validate the given string as a U.S. Social Security Number
+func IsSSN(str string) bool {
+ if len(str) != 11 {
+ return false
+ }
+ return rxSSN.MatchString(str)
+}
+
+// IsSemver checks if a string is a valid semantic version
+func IsSemver(str string) bool {
+ return rxSemver.MatchString(str)
+}
+
+// IsType checks if an interface value is of the given type
+func IsType(v interface{}, params ...string) bool {
+ if len(params) == 1 {
+ typ := params[0]
+ return strings.Replace(reflect.TypeOf(v).String(), " ", "", -1) == strings.Replace(typ, " ", "", -1)
+ }
+ return false
+}
+
+// IsTime checks if a string is valid according to a given time format
+func IsTime(str string, format string) bool {
+ _, err := time.Parse(format, str)
+ return err == nil
+}
+
+// IsUnixTime checks if a string is a valid Unix timestamp value
+func IsUnixTime(str string) bool {
+ _, err := strconv.Atoi(str)
+ return err == nil
+}
+
+// IsRFC3339 checks if a string is a valid timestamp per RFC 3339
+func IsRFC3339(str string) bool {
+ return IsTime(str, time.RFC3339)
+}
+
+// IsRFC3339WithoutZone checks if a string is a valid RFC 3339 timestamp with the timezone omitted.
+func IsRFC3339WithoutZone(str string) bool {
+ return IsTime(str, rfc3339WithoutZone)
+}
+
+// IsISO4217 checks if a string is a valid ISO 4217 currency code
+func IsISO4217(str string) bool {
+ for _, currency := range ISO4217List {
+ if str == currency {
+ return true
+ }
+ }
+
+ return false
+}
+
+// ByteLength checks a string's length in bytes
+func ByteLength(str string, params ...string) bool {
+ if len(params) == 2 {
+ min, _ := ToInt(params[0])
+ max, _ := ToInt(params[1])
+ return len(str) >= int(min) && len(str) <= int(max)
+ }
+
+ return false
+}
+
+// RuneLength checks a string's length in runes.
+// Alias for StringLength
+func RuneLength(str string, params ...string) bool {
+ return StringLength(str, params...)
+}
+
+// IsRsaPub checks whether string is valid RSA key
+// Alias for IsRsaPublicKey
+func IsRsaPub(str string, params ...string) bool {
+ if len(params) == 1 {
+ keyLen, _ := ToInt(params[0])
+ return IsRsaPublicKey(str, int(keyLen))
+ }
+
+ return false
+}
+
+// StringMatches checks if a string matches a given pattern.
+func StringMatches(s string, params ...string) bool {
+ if len(params) == 1 {
+ pattern := params[0]
+ return Matches(s, pattern)
+ }
+ return false
+}
+
+// StringLength checks a string's length in runes (counting multi-byte characters correctly)
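+// For example, StringLength("日本語", "3", "3") is true: the string is three
+// runes long, even though it is nine bytes.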
+func StringLength(str string, params ...string) bool {
+
+ if len(params) == 2 {
+ strLength := utf8.RuneCountInString(str)
+ min, _ := ToInt(params[0])
+ max, _ := ToInt(params[1])
+ return strLength >= int(min) && strLength <= int(max)
+ }
+
+ return false
+}
+
+// MinStringLength checks a string's minimum length in runes
+func MinStringLength(str string, params ...string) bool {
+
+ if len(params) == 1 {
+ strLength := utf8.RuneCountInString(str)
+ min, _ := ToInt(params[0])
+ return strLength >= int(min)
+ }
+
+ return false
+}
+
+// MaxStringLength checks a string's maximum length in runes
+func MaxStringLength(str string, params ...string) bool {
+
+ if len(params) == 1 {
+ strLength := utf8.RuneCountInString(str)
+ max, _ := ToInt(params[0])
+ return strLength <= int(max)
+ }
+
+ return false
+}
+
+// Range checks if a string's numeric value is within a given range
+func Range(str string, params ...string) bool {
+ if len(params) == 2 {
+ value, _ := ToFloat(str)
+ min, _ := ToFloat(params[0])
+ max, _ := ToFloat(params[1])
+ return InRange(value, min, max)
+ }
+
+ return false
+}
+
+// IsInRaw checks if a string is in a list of allowed values separated by "|"
+func IsInRaw(str string, params ...string) bool {
+ if len(params) == 1 {
+ rawParams := params[0]
+
+ parsedParams := strings.Split(rawParams, "|")
+
+ return IsIn(str, parsedParams...)
+ }
+
+ return false
+}
+
+// IsIn checks if string str is a member of the set of strings params
+func IsIn(str string, params ...string) bool {
+ for _, param := range params {
+ if str == param {
+ return true
+ }
+ }
+
+ return false
+}
+
+func checkRequired(v reflect.Value, t reflect.StructField, options tagOptionsMap) (bool, error) {
+ if nilPtrAllowedByRequired {
+ k := v.Kind()
+ if (k == reflect.Ptr || k == reflect.Interface) && v.IsNil() {
+ return true, nil
+ }
+ }
+
+ if requiredOption, isRequired := options["required"]; isRequired {
+ if len(requiredOption.customErrorMessage) > 0 {
+ return false, Error{t.Name, fmt.Errorf(requiredOption.customErrorMessage), true, "required", []string{}}
+ }
+ return false, Error{t.Name, fmt.Errorf("non zero value required"), false, "required", []string{}}
+ } else if _, isOptional := options["optional"]; fieldsRequiredByDefault && !isOptional {
+ return false, Error{t.Name, fmt.Errorf("Missing required field"), false, "required", []string{}}
+ }
+ // not required and empty is valid
+ return true, nil
+}
+
+func typeCheck(v reflect.Value, t reflect.StructField, o reflect.Value, options tagOptionsMap) (isValid bool, resultErr error) {
+ if !v.IsValid() {
+ return false, nil
+ }
+
+ tag := t.Tag.Get(tagName)
+
+ // checks if the field should be ignored
+ switch tag {
+ case "":
+ if v.Kind() != reflect.Slice && v.Kind() != reflect.Map {
+ if !fieldsRequiredByDefault {
+ return true, nil
+ }
+ return false, Error{t.Name, fmt.Errorf("All fields are required to at least have one validation defined"), false, "required", []string{}}
+ }
+ case "-":
+ return true, nil
+ }
+
+ isRootType := false
+ if options == nil {
+ isRootType = true
+ options = parseTagIntoMap(tag)
+ }
+
+ if isEmptyValue(v) {
+ // an empty value is not validated, checks only required
+ isValid, resultErr = checkRequired(v, t, options)
+ for key := range options {
+ delete(options, key)
+ }
+ return isValid, resultErr
+ }
+
+ var customTypeErrors Errors
+ optionsOrder := options.orderedKeys()
+ for _, validatorName := range optionsOrder {
+ validatorStruct := options[validatorName]
+ if validatefunc, ok := CustomTypeTagMap.Get(validatorName); ok {
+ delete(options, validatorName)
+
+ if result := validatefunc(v.Interface(), o.Interface()); !result {
+ if len(validatorStruct.customErrorMessage) > 0 {
+ customTypeErrors = append(customTypeErrors, Error{Name: t.Name, Err: TruncatingErrorf(validatorStruct.customErrorMessage, fmt.Sprint(v), validatorName), CustomErrorMessageExists: true, Validator: stripParams(validatorName)})
+ continue
+ }
+ customTypeErrors = append(customTypeErrors, Error{Name: t.Name, Err: fmt.Errorf("%s does not validate as %s", fmt.Sprint(v), validatorName), CustomErrorMessageExists: false, Validator: stripParams(validatorName)})
+ }
+ }
+ }
+
+ if len(customTypeErrors.Errors()) > 0 {
+ return false, customTypeErrors
+ }
+
+ if isRootType {
+ // Ensure that we've checked the value against all specified validators before reporting that it is valid
+ defer func() {
+ delete(options, "optional")
+ delete(options, "required")
+
+ if isValid && resultErr == nil && len(options) != 0 {
+ optionsOrder := options.orderedKeys()
+ for _, validator := range optionsOrder {
+ isValid = false
+ resultErr = Error{t.Name, fmt.Errorf(
+ "The following validator is invalid or can't be applied to the field: %q", validator), false, stripParams(validator), []string{}}
+ return
+ }
+ }
+ }()
+ }
+
+ for _, validatorSpec := range optionsOrder {
+ validatorStruct := options[validatorSpec]
+ var negate bool
+ validator := validatorSpec
+ customMsgExists := len(validatorStruct.customErrorMessage) > 0
+
+ // checks whether the tag looks like '!something' or 'something'
+ if validator[0] == '!' {
+ validator = validator[1:]
+ negate = true
+ }
+
+ // checks for interface param validators
+ for key, value := range InterfaceParamTagRegexMap {
+ ps := value.FindStringSubmatch(validator)
+ if len(ps) == 0 {
+ continue
+ }
+
+ validatefunc, ok := InterfaceParamTagMap[key]
+ if !ok {
+ continue
+ }
+
+ delete(options, validatorSpec)
+
+ field := fmt.Sprint(v)
+ if result := validatefunc(v.Interface(), ps[1:]...); (!result && !negate) || (result && negate) {
+ if customMsgExists {
+ return false, Error{t.Name, TruncatingErrorf(validatorStruct.customErrorMessage, field, validator), customMsgExists, stripParams(validatorSpec), []string{}}
+ }
+ if negate {
+ return false, Error{t.Name, fmt.Errorf("%s does validate as %s", field, validator), customMsgExists, stripParams(validatorSpec), []string{}}
+ }
+ return false, Error{t.Name, fmt.Errorf("%s does not validate as %s", field, validator), customMsgExists, stripParams(validatorSpec), []string{}}
+ }
+ }
+ }
+
+ switch v.Kind() {
+ case reflect.Bool,
+ reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr,
+ reflect.Float32, reflect.Float64,
+ reflect.String:
+ // for each tag option checks the map of validator functions
+ for _, validatorSpec := range optionsOrder {
+ validatorStruct := options[validatorSpec]
+ var negate bool
+ validator := validatorSpec
+ customMsgExists := len(validatorStruct.customErrorMessage) > 0
+
+ // checks whether the tag looks like '!something' or 'something'
+ if validator[0] == '!' {
+ validator = validator[1:]
+ negate = true
+ }
+
+ // checks for param validators
+ for key, value := range ParamTagRegexMap {
+ ps := value.FindStringSubmatch(validator)
+ if len(ps) == 0 {
+ continue
+ }
+
+ validatefunc, ok := ParamTagMap[key]
+ if !ok {
+ continue
+ }
+
+ delete(options, validatorSpec)
+
+ switch v.Kind() {
+ case reflect.String,
+ reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64,
+ reflect.Float32, reflect.Float64:
+
+ field := fmt.Sprint(v) // make value into string, then validate with regex
+ if result := validatefunc(field, ps[1:]...); (!result && !negate) || (result && negate) {
+ if customMsgExists {
+ return false, Error{t.Name, TruncatingErrorf(validatorStruct.customErrorMessage, field, validator), customMsgExists, stripParams(validatorSpec), []string{}}
+ }
+ if negate {
+ return false, Error{t.Name, fmt.Errorf("%s does validate as %s", field, validator), customMsgExists, stripParams(validatorSpec), []string{}}
+ }
+ return false, Error{t.Name, fmt.Errorf("%s does not validate as %s", field, validator), customMsgExists, stripParams(validatorSpec), []string{}}
+ }
+ default:
+ // type not yet supported, fail
+ return false, Error{t.Name, fmt.Errorf("Validator %s doesn't support kind %s", validator, v.Kind()), false, stripParams(validatorSpec), []string{}}
+ }
+ }
+
+ if validatefunc, ok := TagMap[validator]; ok {
+ delete(options, validatorSpec)
+
+ switch v.Kind() {
+ case reflect.String,
+ reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64,
+ reflect.Float32, reflect.Float64:
+ field := fmt.Sprint(v) // make value into string, then validate with regex
+ if result := validatefunc(field); !result && !negate || result && negate {
+ if customMsgExists {
+ return false, Error{t.Name, TruncatingErrorf(validatorStruct.customErrorMessage, field, validator), customMsgExists, stripParams(validatorSpec), []string{}}
+ }
+ if negate {
+ return false, Error{t.Name, fmt.Errorf("%s does validate as %s", field, validator), customMsgExists, stripParams(validatorSpec), []string{}}
+ }
+ return false, Error{t.Name, fmt.Errorf("%s does not validate as %s", field, validator), customMsgExists, stripParams(validatorSpec), []string{}}
+ }
+ default:
+ // type not yet supported, fail
+ err := fmt.Errorf("Validator %s doesn't support kind %s for value %v", validator, v.Kind(), v)
+ return false, Error{t.Name, err, false, stripParams(validatorSpec), []string{}}
+ }
+ }
+ }
+ return true, nil
+ case reflect.Map:
+ if v.Type().Key().Kind() != reflect.String {
+ return false, &UnsupportedTypeError{v.Type()}
+ }
+ sv := stringValues(v.MapKeys())
+ sort.Sort(sv)
+ result := true
+ for i, k := range sv {
+ var resultItem bool
+ var err error
+ if v.MapIndex(k).Kind() != reflect.Struct {
+ resultItem, err = typeCheck(v.MapIndex(k), t, o, options)
+ if err != nil {
+ return false, err
+ }
+ } else {
+ resultItem, err = ValidateStruct(v.MapIndex(k).Interface())
+ if err != nil {
+ err = prependPathToErrors(err, t.Name+"."+sv[i].Interface().(string))
+ return false, err
+ }
+ }
+ result = result && resultItem
+ }
+ return result, nil
+ case reflect.Slice, reflect.Array:
+ result := true
+ for i := 0; i < v.Len(); i++ {
+ var resultItem bool
+ var err error
+ if v.Index(i).Kind() != reflect.Struct {
+ resultItem, err = typeCheck(v.Index(i), t, o, options)
+ if err != nil {
+ return false, err
+ }
+ } else {
+ resultItem, err = ValidateStruct(v.Index(i).Interface())
+ if err != nil {
+ err = prependPathToErrors(err, t.Name+"."+strconv.Itoa(i))
+ return false, err
+ }
+ }
+ result = result && resultItem
+ }
+ return result, nil
+ case reflect.Interface:
+ // If the value is an interface then validate its element
+ if v.IsNil() {
+ return true, nil
+ }
+ return ValidateStruct(v.Interface())
+ case reflect.Ptr:
+ // If the value is a pointer then check its element
+ if v.IsNil() {
+ return true, nil
+ }
+ return typeCheck(v.Elem(), t, o, options)
+ case reflect.Struct:
+ return true, nil
+ default:
+ return false, &UnsupportedTypeError{v.Type()}
+ }
+}
+
+func stripParams(validatorString string) string {
+ return paramsRegexp.ReplaceAllString(validatorString, "")
+}
+
+// isEmptyValue checks whether a value is empty
+func isEmptyValue(v reflect.Value) bool {
+ switch v.Kind() {
+ case reflect.String, reflect.Array:
+ return v.Len() == 0
+ case reflect.Map, reflect.Slice:
+ return v.Len() == 0 || v.IsNil()
+ case reflect.Bool:
+ return !v.Bool()
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return v.Int() == 0
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ return v.Uint() == 0
+ case reflect.Float32, reflect.Float64:
+ return v.Float() == 0
+ case reflect.Interface, reflect.Ptr:
+ return v.IsNil()
+ }
+
+ return reflect.DeepEqual(v.Interface(), reflect.Zero(v.Type()).Interface())
+}
+
+// ErrorByField returns the error for the specified field of the struct
+// validated by ValidateStruct, or an empty string if there are no errors
+// or the field doesn't exist or doesn't have any errors.
+func ErrorByField(e error, field string) string {
+ if e == nil {
+ return ""
+ }
+ return ErrorsByField(e)[field]
+}
+
+// ErrorsByField returns a map of errors for the struct validated
+// by ValidateStruct, or an empty map if there are no errors.
+func ErrorsByField(e error) map[string]string {
+ m := make(map[string]string)
+ if e == nil {
+ return m
+ }
+
+ switch e := e.(type) {
+ case Error:
+ m[e.Name] = e.Err.Error()
+ case Errors:
+ for _, item := range e.Errors() {
+ n := ErrorsByField(item)
+ for k, v := range n {
+ m[k] = v
+ }
+ }
+ }
+
+ return m
+}
+
+// Error returns string equivalent for reflect.Type
+func (e *UnsupportedTypeError) Error() string {
+ return "validator: unsupported type: " + e.Type.String()
+}
+
+func (sv stringValues) Len() int { return len(sv) }
+func (sv stringValues) Swap(i, j int) { sv[i], sv[j] = sv[j], sv[i] }
+func (sv stringValues) Less(i, j int) bool { return sv.get(i) < sv.get(j) }
+func (sv stringValues) get(i int) string { return sv[i].String() }
+
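+// IsE164 checks if a string is a valid E.164 formatted phone number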
+func IsE164(str string) bool {
+ return rxE164.MatchString(str)
+}
diff --git a/vendor/github.com/asaskevich/govalidator/wercker.yml b/vendor/github.com/asaskevich/govalidator/wercker.yml
new file mode 100644
index 000000000..bc5f7b086
--- /dev/null
+++ b/vendor/github.com/asaskevich/govalidator/wercker.yml
@@ -0,0 +1,15 @@
+box: golang
+build:
+ steps:
+ - setup-go-workspace
+
+ - script:
+ name: go get
+ code: |
+ go version
+ go get -t ./...
+
+ - script:
+ name: go test
+ code: |
+ go test -race -v ./...
diff --git a/vendor/github.com/blang/semver/v4/LICENSE b/vendor/github.com/blang/semver/v4/LICENSE
new file mode 100644
index 000000000..5ba5c86fc
--- /dev/null
+++ b/vendor/github.com/blang/semver/v4/LICENSE
@@ -0,0 +1,22 @@
+The MIT License
+
+Copyright (c) 2014 Benedikt Lang
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+
diff --git a/vendor/github.com/blang/semver/v4/json.go b/vendor/github.com/blang/semver/v4/json.go
new file mode 100644
index 000000000..a74bf7c44
--- /dev/null
+++ b/vendor/github.com/blang/semver/v4/json.go
@@ -0,0 +1,23 @@
+package semver
+
+import (
+ "encoding/json"
+)
+
+// MarshalJSON implements the encoding/json.Marshaler interface.
+func (v Version) MarshalJSON() ([]byte, error) {
+ return json.Marshal(v.String())
+}
+
+// UnmarshalJSON implements the encoding/json.Unmarshaler interface.
+func (v *Version) UnmarshalJSON(data []byte) (err error) {
+ var versionString string
+
+ if err = json.Unmarshal(data, &versionString); err != nil {
+ return
+ }
+
+ *v, err = Parse(versionString)
+
+ return
+}
diff --git a/vendor/github.com/blang/semver/v4/range.go b/vendor/github.com/blang/semver/v4/range.go
new file mode 100644
index 000000000..95f7139b9
--- /dev/null
+++ b/vendor/github.com/blang/semver/v4/range.go
@@ -0,0 +1,416 @@
+package semver
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+ "unicode"
+)
+
+type wildcardType int
+
+const (
+ noneWildcard wildcardType = iota
+ majorWildcard wildcardType = 1
+ minorWildcard wildcardType = 2
+ patchWildcard wildcardType = 3
+)
+
+func wildcardTypefromInt(i int) wildcardType {
+ switch i {
+ case 1:
+ return majorWildcard
+ case 2:
+ return minorWildcard
+ case 3:
+ return patchWildcard
+ default:
+ return noneWildcard
+ }
+}
+
+type comparator func(Version, Version) bool
+
+var (
+ compEQ comparator = func(v1 Version, v2 Version) bool {
+ return v1.Compare(v2) == 0
+ }
+ compNE = func(v1 Version, v2 Version) bool {
+ return v1.Compare(v2) != 0
+ }
+ compGT = func(v1 Version, v2 Version) bool {
+ return v1.Compare(v2) == 1
+ }
+ compGE = func(v1 Version, v2 Version) bool {
+ return v1.Compare(v2) >= 0
+ }
+ compLT = func(v1 Version, v2 Version) bool {
+ return v1.Compare(v2) == -1
+ }
+ compLE = func(v1 Version, v2 Version) bool {
+ return v1.Compare(v2) <= 0
+ }
+)
+
+type versionRange struct {
+ v Version
+ c comparator
+}
+
+// rangeFunc creates a Range from the given versionRange.
+func (vr *versionRange) rangeFunc() Range {
+ return Range(func(v Version) bool {
+ return vr.c(v, vr.v)
+ })
+}
+
+// Range represents a range of versions.
+// A Range can be used to check if a Version satisfies it:
+//
+// r, err := semver.ParseRange(">1.0.0 <2.0.0")
+// r(semver.MustParse("1.1.1")) // returns true
+type Range func(Version) bool
+
+// OR combines the existing Range with another Range using logical OR.
+func (rf Range) OR(f Range) Range {
+ return Range(func(v Version) bool {
+ return rf(v) || f(v)
+ })
+}
+
+// AND combines the existing Range with another Range using logical AND.
+func (rf Range) AND(f Range) Range {
+ return Range(func(v Version) bool {
+ return rf(v) && f(v)
+ })
+}
+
+// ParseRange parses a range and returns a Range.
+// If the range could not be parsed an error is returned.
+//
+// Valid ranges are:
+// - "<1.0.0"
+// - "<=1.0.0"
+// - ">1.0.0"
+// - ">=1.0.0"
+// - "1.0.0", "=1.0.0", "==1.0.0"
+// - "!1.0.0", "!=1.0.0"
+//
+// A Range can consist of multiple ranges separated by space:
+// Ranges can be linked by logical AND:
+// - ">1.0.0 <2.0.0" would match between both ranges, so "1.1.1" and "1.8.7" but not "1.0.0" or "2.0.0"
+// - ">1.0.0 <3.0.0 !2.0.3-beta.2" would match every version between 1.0.0 and 3.0.0 except 2.0.3-beta.2
+//
+// Ranges can also be linked by logical OR:
+// - "<2.0.0 || >=3.0.0" would match "1.x.x" and "3.x.x" but not "2.x.x"
+//
+// AND has a higher precedence than OR. It's not possible to use brackets.
+//
+// Ranges can be combined by both AND and OR
+//
+// - `>1.0.0 <2.0.0 || >3.0.0 !4.2.1` would match `1.2.3`, `1.9.9`, `3.1.1`, but not `4.2.1`, `2.1.1`
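+//
+// A short usage sketch:
+//
+// r, err := ParseRange(">1.0.0 <2.0.0 || >=3.0.0")
+// if err == nil {
+// r(MustParse("1.2.3")) // true
+// r(MustParse("2.1.0")) // false
+// }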
+func ParseRange(s string) (Range, error) {
+ parts := splitAndTrim(s)
+ orParts, err := splitORParts(parts)
+ if err != nil {
+ return nil, err
+ }
+ expandedParts, err := expandWildcardVersion(orParts)
+ if err != nil {
+ return nil, err
+ }
+ var orFn Range
+ for _, p := range expandedParts {
+ var andFn Range
+ for _, ap := range p {
+ opStr, vStr, err := splitComparatorVersion(ap)
+ if err != nil {
+ return nil, err
+ }
+ vr, err := buildVersionRange(opStr, vStr)
+ if err != nil {
+ return nil, fmt.Errorf("Could not parse Range %q: %s", ap, err)
+ }
+ rf := vr.rangeFunc()
+
+ // Set function
+ if andFn == nil {
+ andFn = rf
+ } else { // Combine with existing function
+ andFn = andFn.AND(rf)
+ }
+ }
+ if orFn == nil {
+ orFn = andFn
+ } else {
+ orFn = orFn.OR(andFn)
+ }
+
+ }
+ return orFn, nil
+}
+
+// splitORParts splits the already cleaned parts by '||'.
+// Checks for invalid positions of the operator and returns an
+// error if found.
+func splitORParts(parts []string) ([][]string, error) {
+ var ORparts [][]string
+ last := 0
+ for i, p := range parts {
+ if p == "||" {
+ if i == 0 {
+ return nil, fmt.Errorf("First element in range is '||'")
+ }
+ ORparts = append(ORparts, parts[last:i])
+ last = i + 1
+ }
+ }
+ if last == len(parts) {
+ return nil, fmt.Errorf("Last element in range is '||'")
+ }
+ ORparts = append(ORparts, parts[last:])
+ return ORparts, nil
+}
+
+// buildVersionRange takes an operator and a version string
+// and builds a versionRange, or returns an error if parsing fails.
+func buildVersionRange(opStr, vStr string) (*versionRange, error) {
+ c := parseComparator(opStr)
+ if c == nil {
+ return nil, fmt.Errorf("Could not parse comparator %q in %q", opStr, strings.Join([]string{opStr, vStr}, ""))
+ }
+ v, err := Parse(vStr)
+ if err != nil {
+ return nil, fmt.Errorf("Could not parse version %q in %q: %s", vStr, strings.Join([]string{opStr, vStr}, ""), err)
+ }
+
+ return &versionRange{
+ v: v,
+ c: c,
+ }, nil
+
+}
+
+// inArray checks if a byte is contained in an array of bytes
+func inArray(s byte, list []byte) bool {
+ for _, el := range list {
+ if el == s {
+ return true
+ }
+ }
+ return false
+}
+
+// splitAndTrim splits a range string by spaces and cleans whitespace
+func splitAndTrim(s string) (result []string) {
+ last := 0
+ var lastChar byte
+ excludeFromSplit := []byte{'>', '<', '='}
+ for i := 0; i < len(s); i++ {
+ if s[i] == ' ' && !inArray(lastChar, excludeFromSplit) {
+ if last < i-1 {
+ result = append(result, s[last:i])
+ }
+ last = i + 1
+ } else if s[i] != ' ' {
+ lastChar = s[i]
+ }
+ }
+ if last < len(s)-1 {
+ result = append(result, s[last:])
+ }
+
+ for i, v := range result {
+ result[i] = strings.Replace(v, " ", "", -1)
+ }
+
+ return
+}
+
+// splitComparatorVersion splits the comparator from the version.
+// Input must be free of leading or trailing spaces.
+func splitComparatorVersion(s string) (string, string, error) {
+ i := strings.IndexFunc(s, unicode.IsDigit)
+ if i == -1 {
+ return "", "", fmt.Errorf("Could not get version from string: %q", s)
+ }
+ return strings.TrimSpace(s[0:i]), s[i:], nil
+}
+
+// getWildcardType will return the type of wildcard that the
+// passed version contains
+func getWildcardType(vStr string) wildcardType {
+ parts := strings.Split(vStr, ".")
+ nparts := len(parts)
+ wildcard := parts[nparts-1]
+
+ possibleWildcardType := wildcardTypefromInt(nparts)
+ if wildcard == "x" {
+ return possibleWildcardType
+ }
+
+ return noneWildcard
+}
+
+// createVersionFromWildcard will convert a wildcard version
+// into a regular version, replacing 'x's with '0's, handling
+// special cases like '1.x.x' and '1.x'
+func createVersionFromWildcard(vStr string) string {
+ // handle 1.x.x
+ vStr2 := strings.Replace(vStr, ".x.x", ".x", 1)
+ vStr2 = strings.Replace(vStr2, ".x", ".0", 1)
+ parts := strings.Split(vStr2, ".")
+
+ // handle 1.x
+ if len(parts) == 2 {
+ return vStr2 + ".0"
+ }
+
+ return vStr2
+}
+
+// incrementMajorVersion will increment the major version
+// of the passed version
+func incrementMajorVersion(vStr string) (string, error) {
+ parts := strings.Split(vStr, ".")
+ i, err := strconv.Atoi(parts[0])
+ if err != nil {
+ return "", err
+ }
+ parts[0] = strconv.Itoa(i + 1)
+
+ return strings.Join(parts, "."), nil
+}
+
+// incrementMinorVersion will increment the minor version
+// of the passed version
+func incrementMinorVersion(vStr string) (string, error) {
+ parts := strings.Split(vStr, ".")
+ i, err := strconv.Atoi(parts[1])
+ if err != nil {
+ return "", err
+ }
+ parts[1] = strconv.Itoa(i + 1)
+
+ return strings.Join(parts, "."), nil
+}
+
+// expandWildcardVersion will expand wildcards inside versions
+// following these rules:
+//
+// * when dealing with patch wildcards:
+// >= 1.2.x will become >= 1.2.0
+// <= 1.2.x will become < 1.3.0
+// > 1.2.x will become >= 1.3.0
+// < 1.2.x will become < 1.2.0
+// != 1.2.x will become < 1.2.0 >= 1.3.0
+//
+// * when dealing with minor wildcards:
+// >= 1.x will become >= 1.0.0
+// <= 1.x will become < 2.0.0
+// > 1.x will become >= 2.0.0
+// < 1.x will become < 1.0.0
+// != 1.x will become < 1.0.0 >= 2.0.0
+//
+// * when dealing with wildcards without
+// version operator:
+// 1.2.x will become >= 1.2.0 < 1.3.0
+// 1.x will become >= 1.0.0 < 2.0.0
+func expandWildcardVersion(parts [][]string) ([][]string, error) {
+ var expandedParts [][]string
+ for _, p := range parts {
+ var newParts []string
+ for _, ap := range p {
+ if strings.Contains(ap, "x") {
+ opStr, vStr, err := splitComparatorVersion(ap)
+ if err != nil {
+ return nil, err
+ }
+
+ versionWildcardType := getWildcardType(vStr)
+ flatVersion := createVersionFromWildcard(vStr)
+
+ var resultOperator string
+ var shouldIncrementVersion bool
+ switch opStr {
+ case ">":
+ resultOperator = ">="
+ shouldIncrementVersion = true
+ case ">=":
+ resultOperator = ">="
+ case "<":
+ resultOperator = "<"
+ case "<=":
+ resultOperator = "<"
+ shouldIncrementVersion = true
+ case "", "=", "==":
+ newParts = append(newParts, ">="+flatVersion)
+ resultOperator = "<"
+ shouldIncrementVersion = true
+ case "!=", "!":
+ newParts = append(newParts, "<"+flatVersion)
+ resultOperator = ">="
+ shouldIncrementVersion = true
+ }
+
+ var resultVersion string
+ if shouldIncrementVersion {
+ switch versionWildcardType {
+ case patchWildcard:
+ resultVersion, _ = incrementMinorVersion(flatVersion)
+ case minorWildcard:
+ resultVersion, _ = incrementMajorVersion(flatVersion)
+ }
+ } else {
+ resultVersion = flatVersion
+ }
+
+ ap = resultOperator + resultVersion
+ }
+ newParts = append(newParts, ap)
+ }
+ expandedParts = append(expandedParts, newParts)
+ }
+
+ return expandedParts, nil
+}
+
+func parseComparator(s string) comparator {
+ switch s {
+ case "==":
+ fallthrough
+ case "":
+ fallthrough
+ case "=":
+ return compEQ
+ case ">":
+ return compGT
+ case ">=":
+ return compGE
+ case "<":
+ return compLT
+ case "<=":
+ return compLE
+ case "!":
+ fallthrough
+ case "!=":
+ return compNE
+ }
+
+ return nil
+}
+
+// MustParseRange is like ParseRange but panics if the range cannot be parsed.
+func MustParseRange(s string) Range {
+ r, err := ParseRange(s)
+ if err != nil {
+ panic(`semver: ParseRange(` + s + `): ` + err.Error())
+ }
+ return r
+}
diff --git a/vendor/github.com/blang/semver/v4/semver.go b/vendor/github.com/blang/semver/v4/semver.go
new file mode 100644
index 000000000..307de610f
--- /dev/null
+++ b/vendor/github.com/blang/semver/v4/semver.go
@@ -0,0 +1,476 @@
+package semver
+
+import (
+ "errors"
+ "fmt"
+ "strconv"
+ "strings"
+)
+
+const (
+ numbers string = "0123456789"
+ alphas = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-"
+ alphanum = alphas + numbers
+)
+
+// SpecVersion is the latest fully supported spec version of semver
+var SpecVersion = Version{
+ Major: 2,
+ Minor: 0,
+ Patch: 0,
+}
+
+// Version represents a semver compatible version
+type Version struct {
+ Major uint64
+ Minor uint64
+ Patch uint64
+ Pre []PRVersion
+ Build []string //No Precedence
+}
+
+// String returns the string representation of the Version
+func (v Version) String() string {
+ b := make([]byte, 0, 5)
+ b = strconv.AppendUint(b, v.Major, 10)
+ b = append(b, '.')
+ b = strconv.AppendUint(b, v.Minor, 10)
+ b = append(b, '.')
+ b = strconv.AppendUint(b, v.Patch, 10)
+
+ if len(v.Pre) > 0 {
+ b = append(b, '-')
+ b = append(b, v.Pre[0].String()...)
+
+ for _, pre := range v.Pre[1:] {
+ b = append(b, '.')
+ b = append(b, pre.String()...)
+ }
+ }
+
+ if len(v.Build) > 0 {
+ b = append(b, '+')
+ b = append(b, v.Build[0]...)
+
+ for _, build := range v.Build[1:] {
+ b = append(b, '.')
+ b = append(b, build...)
+ }
+ }
+
+ return string(b)
+}
+
+// FinalizeVersion discards prerelease and build number and only returns
+// major, minor and patch number.
+func (v Version) FinalizeVersion() string {
+ b := make([]byte, 0, 5)
+ b = strconv.AppendUint(b, v.Major, 10)
+ b = append(b, '.')
+ b = strconv.AppendUint(b, v.Minor, 10)
+ b = append(b, '.')
+ b = strconv.AppendUint(b, v.Patch, 10)
+ return string(b)
+}
+
+// Equals checks if v is equal to o.
+func (v Version) Equals(o Version) bool {
+ return (v.Compare(o) == 0)
+}
+
+// EQ checks if v is equal to o.
+func (v Version) EQ(o Version) bool {
+ return (v.Compare(o) == 0)
+}
+
+// NE checks if v is not equal to o.
+func (v Version) NE(o Version) bool {
+ return (v.Compare(o) != 0)
+}
+
+// GT checks if v is greater than o.
+func (v Version) GT(o Version) bool {
+ return (v.Compare(o) == 1)
+}
+
+// GTE checks if v is greater than or equal to o.
+func (v Version) GTE(o Version) bool {
+ return (v.Compare(o) >= 0)
+}
+
+// GE checks if v is greater than or equal to o.
+func (v Version) GE(o Version) bool {
+ return (v.Compare(o) >= 0)
+}
+
+// LT checks if v is less than o.
+func (v Version) LT(o Version) bool {
+ return (v.Compare(o) == -1)
+}
+
+// LTE checks if v is less than or equal to o.
+func (v Version) LTE(o Version) bool {
+ return (v.Compare(o) <= 0)
+}
+
+// LE checks if v is less than or equal to o.
+func (v Version) LE(o Version) bool {
+ return (v.Compare(o) <= 0)
+}
+
+// Compare compares Versions v to o:
+// -1 == v is less than o
+// 0 == v is equal to o
+// 1 == v is greater than o
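+//
+// For example, a release version compares greater than its prereleases:
+//
+// MustParse("1.0.0").Compare(MustParse("1.0.0-alpha")) // 1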
+func (v Version) Compare(o Version) int {
+ if v.Major != o.Major {
+ if v.Major > o.Major {
+ return 1
+ }
+ return -1
+ }
+ if v.Minor != o.Minor {
+ if v.Minor > o.Minor {
+ return 1
+ }
+ return -1
+ }
+ if v.Patch != o.Patch {
+ if v.Patch > o.Patch {
+ return 1
+ }
+ return -1
+ }
+
+ // Quick comparison if a version has no prerelease versions
+ if len(v.Pre) == 0 && len(o.Pre) == 0 {
+ return 0
+ } else if len(v.Pre) == 0 && len(o.Pre) > 0 {
+ return 1
+ } else if len(v.Pre) > 0 && len(o.Pre) == 0 {
+ return -1
+ }
+
+ i := 0
+ for ; i < len(v.Pre) && i < len(o.Pre); i++ {
+ if comp := v.Pre[i].Compare(o.Pre[i]); comp == 0 {
+ continue
+ } else if comp == 1 {
+ return 1
+ } else {
+ return -1
+ }
+ }
+
+ // If all compared prerelease identifiers are equal, the version with more identifiers is greater
+ if i == len(v.Pre) && i == len(o.Pre) {
+ return 0
+ } else if i == len(v.Pre) && i < len(o.Pre) {
+ return -1
+ } else {
+ return 1
+ }
+
+}
+
+// IncrementPatch increments the patch version
+func (v *Version) IncrementPatch() error {
+ v.Patch++
+ return nil
+}
+
+// IncrementMinor increments the minor version
+func (v *Version) IncrementMinor() error {
+ v.Minor++
+ v.Patch = 0
+ return nil
+}
+
+// IncrementMajor increments the major version
+func (v *Version) IncrementMajor() error {
+ v.Major++
+ v.Minor = 0
+ v.Patch = 0
+ return nil
+}
+
+// Validate validates v and returns an error if the version is invalid
+func (v Version) Validate() error {
+ // Major, Minor, Patch already validated using uint64
+
+ for _, pre := range v.Pre {
+ if !pre.IsNum { //Numeric prerelease versions already uint64
+ if len(pre.VersionStr) == 0 {
+ return fmt.Errorf("Prerelease can not be empty %q", pre.VersionStr)
+ }
+ if !containsOnly(pre.VersionStr, alphanum) {
+ return fmt.Errorf("Invalid character(s) found in prerelease %q", pre.VersionStr)
+ }
+ }
+ }
+
+ for _, build := range v.Build {
+ if len(build) == 0 {
+ return fmt.Errorf("Build meta data can not be empty %q", build)
+ }
+ if !containsOnly(build, alphanum) {
+ return fmt.Errorf("Invalid character(s) found in build meta data %q", build)
+ }
+ }
+
+ return nil
+}
+
+// New parses a version string and returns a pointer to the validated Version, or an error
+func New(s string) (*Version, error) {
+ v, err := Parse(s)
+ vp := &v
+ return vp, err
+}
+
+// Make is an alias for Parse: it parses a version string and returns a validated Version or an error
+func Make(s string) (Version, error) {
+ return Parse(s)
+}
+
+// ParseTolerant allows for certain version specifications that do not strictly adhere to semver
+// specs to be parsed by this library. It does so by normalizing versions before passing them to
+// Parse(). It currently trims spaces, removes a "v" prefix, adds a 0 patch number to versions
+// with only major and minor components specified, and removes leading 0s.
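+//
+// For example:
+//
+// v, _ := ParseTolerant("v1.2") // yields 1.2.0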
+func ParseTolerant(s string) (Version, error) {
+ s = strings.TrimSpace(s)
+ s = strings.TrimPrefix(s, "v")
+
+ // Split into major.minor.(patch+pr+meta)
+ parts := strings.SplitN(s, ".", 3)
+ // Remove leading zeros.
+ for i, p := range parts {
+ if len(p) > 1 {
+ p = strings.TrimLeft(p, "0")
+ if len(p) == 0 || !strings.ContainsAny(p[0:1], "0123456789") {
+ p = "0" + p
+ }
+ parts[i] = p
+ }
+ }
+ // Fill up shortened versions.
+ if len(parts) < 3 {
+ if strings.ContainsAny(parts[len(parts)-1], "+-") {
+ return Version{}, errors.New("Short version cannot contain PreRelease/Build meta data")
+ }
+ for len(parts) < 3 {
+ parts = append(parts, "0")
+ }
+ }
+ s = strings.Join(parts, ".")
+
+ return Parse(s)
+}
+
+// Parse parses a version string and returns a validated Version, or an error
+func Parse(s string) (Version, error) {
+ if len(s) == 0 {
+ return Version{}, errors.New("Version string empty")
+ }
+
+ // Split into major.minor.(patch+pr+meta)
+ parts := strings.SplitN(s, ".", 3)
+ if len(parts) != 3 {
+ return Version{}, errors.New("No Major.Minor.Patch elements found")
+ }
+
+ // Major
+ if !containsOnly(parts[0], numbers) {
+ return Version{}, fmt.Errorf("Invalid character(s) found in major number %q", parts[0])
+ }
+ if hasLeadingZeroes(parts[0]) {
+ return Version{}, fmt.Errorf("Major number must not contain leading zeroes %q", parts[0])
+ }
+ major, err := strconv.ParseUint(parts[0], 10, 64)
+ if err != nil {
+ return Version{}, err
+ }
+
+ // Minor
+ if !containsOnly(parts[1], numbers) {
+ return Version{}, fmt.Errorf("Invalid character(s) found in minor number %q", parts[1])
+ }
+ if hasLeadingZeroes(parts[1]) {
+ return Version{}, fmt.Errorf("Minor number must not contain leading zeroes %q", parts[1])
+ }
+ minor, err := strconv.ParseUint(parts[1], 10, 64)
+ if err != nil {
+ return Version{}, err
+ }
+
+ v := Version{}
+ v.Major = major
+ v.Minor = minor
+
+ var build, prerelease []string
+ patchStr := parts[2]
+
+ if buildIndex := strings.IndexRune(patchStr, '+'); buildIndex != -1 {
+ build = strings.Split(patchStr[buildIndex+1:], ".")
+ patchStr = patchStr[:buildIndex]
+ }
+
+ if preIndex := strings.IndexRune(patchStr, '-'); preIndex != -1 {
+ prerelease = strings.Split(patchStr[preIndex+1:], ".")
+ patchStr = patchStr[:preIndex]
+ }
+
+ if !containsOnly(patchStr, numbers) {
+ return Version{}, fmt.Errorf("Invalid character(s) found in patch number %q", patchStr)
+ }
+ if hasLeadingZeroes(patchStr) {
+ return Version{}, fmt.Errorf("Patch number must not contain leading zeroes %q", patchStr)
+ }
+ patch, err := strconv.ParseUint(patchStr, 10, 64)
+ if err != nil {
+ return Version{}, err
+ }
+
+ v.Patch = patch
+
+ // Prerelease
+ for _, prstr := range prerelease {
+ parsedPR, err := NewPRVersion(prstr)
+ if err != nil {
+ return Version{}, err
+ }
+ v.Pre = append(v.Pre, parsedPR)
+ }
+
+ // Build meta data
+ for _, str := range build {
+ if len(str) == 0 {
+ return Version{}, errors.New("Build meta data is empty")
+ }
+ if !containsOnly(str, alphanum) {
+ return Version{}, fmt.Errorf("Invalid character(s) found in build meta data %q", str)
+ }
+ v.Build = append(v.Build, str)
+ }
+
+ return v, nil
+}
+
+// MustParse is like Parse but panics if the version cannot be parsed.
+func MustParse(s string) Version {
+ v, err := Parse(s)
+ if err != nil {
+ panic(`semver: Parse(` + s + `): ` + err.Error())
+ }
+ return v
+}
+
+// PRVersion represents a PreRelease Version
+type PRVersion struct {
+ VersionStr string
+ VersionNum uint64
+ IsNum bool
+}
+
+// NewPRVersion creates a new valid prerelease version
+func NewPRVersion(s string) (PRVersion, error) {
+ if len(s) == 0 {
+ return PRVersion{}, errors.New("Prerelease is empty")
+ }
+ v := PRVersion{}
+ if containsOnly(s, numbers) {
+ if hasLeadingZeroes(s) {
+ return PRVersion{}, fmt.Errorf("Numeric PreRelease version must not contain leading zeroes %q", s)
+ }
+ num, err := strconv.ParseUint(s, 10, 64)
+
+ // Might never be hit, but just in case
+ if err != nil {
+ return PRVersion{}, err
+ }
+ v.VersionNum = num
+ v.IsNum = true
+ } else if containsOnly(s, alphanum) {
+ v.VersionStr = s
+ v.IsNum = false
+ } else {
+ return PRVersion{}, fmt.Errorf("Invalid character(s) found in prerelease %q", s)
+ }
+ return v, nil
+}
+
+// IsNumeric checks if prerelease-version is numeric
+func (v PRVersion) IsNumeric() bool {
+ return v.IsNum
+}
+
+// Compare compares two PreRelease Versions v and o:
+// -1 == v is less than o
+// 0 == v is equal to o
+// 1 == v is greater than o
+func (v PRVersion) Compare(o PRVersion) int {
+ if v.IsNum && !o.IsNum {
+ return -1
+ } else if !v.IsNum && o.IsNum {
+ return 1
+ } else if v.IsNum && o.IsNum {
+ if v.VersionNum == o.VersionNum {
+ return 0
+ } else if v.VersionNum > o.VersionNum {
+ return 1
+ } else {
+ return -1
+ }
+ } else { // both are Alphas
+ if v.VersionStr == o.VersionStr {
+ return 0
+ } else if v.VersionStr > o.VersionStr {
+ return 1
+ } else {
+ return -1
+ }
+ }
+}
+
+// String returns the string representation of the prerelease version
+func (v PRVersion) String() string {
+ if v.IsNum {
+ return strconv.FormatUint(v.VersionNum, 10)
+ }
+ return v.VersionStr
+}
+
+func containsOnly(s string, set string) bool {
+ return strings.IndexFunc(s, func(r rune) bool {
+ return !strings.ContainsRune(set, r)
+ }) == -1
+}
+
+func hasLeadingZeroes(s string) bool {
+ return len(s) > 1 && s[0] == '0'
+}
+
+// NewBuildVersion creates a new valid build version
+func NewBuildVersion(s string) (string, error) {
+ if len(s) == 0 {
+ return "", errors.New("Buildversion is empty")
+ }
+ if !containsOnly(s, alphanum) {
+ return "", fmt.Errorf("Invalid character(s) found in build meta data %q", s)
+ }
+ return s, nil
+}
+
+// FinalizeVersion returns the major, minor and patch number only and discards
+// prerelease and build number.
+func FinalizeVersion(s string) (string, error) {
+ v, err := Parse(s)
+ if err != nil {
+ return "", err
+ }
+ v.Pre = nil
+ v.Build = nil
+
+ finalVer := v.String()
+ return finalVer, nil
+}
diff --git a/vendor/github.com/blang/semver/v4/sort.go b/vendor/github.com/blang/semver/v4/sort.go
new file mode 100644
index 000000000..e18f88082
--- /dev/null
+++ b/vendor/github.com/blang/semver/v4/sort.go
@@ -0,0 +1,28 @@
+package semver
+
+import (
+ "sort"
+)
+
+// Versions represents multiple versions.
+type Versions []Version
+
+// Len returns length of version collection
+func (s Versions) Len() int {
+ return len(s)
+}
+
+// Swap swaps two versions inside the collection by its indices
+func (s Versions) Swap(i, j int) {
+ s[i], s[j] = s[j], s[i]
+}
+
+// Less checks if version at index i is less than version at index j
+func (s Versions) Less(i, j int) bool {
+ return s[i].LT(s[j])
+}
+
+// Sort sorts a slice of versions
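+// in ascending order. For example:
+//
+// vs := []Version{MustParse("1.1.0"), MustParse("1.0.0")}
+// Sort(vs) // vs is now ordered 1.0.0, 1.1.0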
+func Sort(versions []Version) {
+ sort.Sort(Versions(versions))
+}
diff --git a/vendor/github.com/blang/semver/v4/sql.go b/vendor/github.com/blang/semver/v4/sql.go
new file mode 100644
index 000000000..db958134f
--- /dev/null
+++ b/vendor/github.com/blang/semver/v4/sql.go
@@ -0,0 +1,30 @@
+package semver
+
+import (
+ "database/sql/driver"
+ "fmt"
+)
+
+// Scan implements the database/sql.Scanner interface.
+func (v *Version) Scan(src interface{}) (err error) {
+ var str string
+ switch src := src.(type) {
+ case string:
+ str = src
+ case []byte:
+ str = string(src)
+ default:
+ return fmt.Errorf("version.Scan: cannot convert %T to string", src)
+ }
+
+ // assign to the named return value so a parse error is not silently dropped by shadowing
+ t, err := Parse(str)
+ if err == nil {
+ *v = t
+ }
+
+ return
+}
+
+// Value implements the database/sql/driver.Valuer interface.
+func (v Version) Value() (driver.Value, error) {
+ return v.String(), nil
+}
diff --git a/vendor/github.com/containerd/containerd/LICENSE b/vendor/github.com/containerd/containerd/LICENSE
new file mode 100644
index 000000000..584149b6e
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/LICENSE
@@ -0,0 +1,191 @@
+
+ Apache License
+ Version 2.0, January 2004
+ https://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ Copyright The containerd Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ https://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/containerd/containerd/NOTICE b/vendor/github.com/containerd/containerd/NOTICE
new file mode 100644
index 000000000..8915f0277
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/NOTICE
@@ -0,0 +1,16 @@
+Docker
+Copyright 2012-2015 Docker, Inc.
+
+This product includes software developed at Docker, Inc. (https://www.docker.com).
+
+The following is courtesy of our legal counsel:
+
+
+Use and transfer of Docker may be subject to certain restrictions by the
+United States and other governments.
+It is your responsibility to ensure that your use and/or transfer does not
+violate applicable laws.
+
+For more information, please see https://www.bis.doc.gov
+
+See also https://www.apache.org/dev/crypto.html and/or seek legal counsel.
diff --git a/vendor/github.com/containerd/containerd/archive/compression/compression.go b/vendor/github.com/containerd/containerd/archive/compression/compression.go
new file mode 100644
index 000000000..23ddfab1a
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/archive/compression/compression.go
@@ -0,0 +1,323 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package compression
+
+import (
+ "bufio"
+ "bytes"
+ "compress/gzip"
+ "context"
+ "encoding/binary"
+ "fmt"
+ "io"
+ "os"
+ "os/exec"
+ "strconv"
+ "sync"
+
+ "github.com/containerd/log"
+ "github.com/klauspost/compress/zstd"
+)
+
+type (
+ // Compression indicates whether data is compressed and, if so, which algorithm was used.
+ Compression int
+)
+
+const (
+ // Uncompressed represents uncompressed data.
+ Uncompressed Compression = iota
+ // Gzip is the gzip compression algorithm.
+ Gzip
+ // Zstd is the zstd compression algorithm.
+ Zstd
+)
+
+const disablePigzEnv = "CONTAINERD_DISABLE_PIGZ"
+
+var (
+ initPigz sync.Once
+ unpigzPath string
+)
+
+var (
+ bufioReader32KPool = &sync.Pool{
+ New: func() interface{} { return bufio.NewReaderSize(nil, 32*1024) },
+ }
+)
+
+// DecompressReadCloser bundles the decompressed stream with the compression
+// method that was detected.
+type DecompressReadCloser interface {
+ io.ReadCloser
+ // GetCompression returns the compression method that was detected before decompressing
+ GetCompression() Compression
+}
+
+type readCloserWrapper struct {
+ io.Reader
+ compression Compression
+ closer func() error
+}
+
+func (r *readCloserWrapper) Close() error {
+ if r.closer != nil {
+ return r.closer()
+ }
+ return nil
+}
+
+func (r *readCloserWrapper) GetCompression() Compression {
+ return r.compression
+}
+
+type writeCloserWrapper struct {
+ io.Writer
+ closer func() error
+}
+
+func (w *writeCloserWrapper) Close() error {
+ if w.closer != nil {
+ return w.closer()
+ }
+ return nil
+}
+
+type bufferedReader struct {
+ buf *bufio.Reader
+}
+
+func newBufferedReader(r io.Reader) *bufferedReader {
+ buf := bufioReader32KPool.Get().(*bufio.Reader)
+ buf.Reset(r)
+ return &bufferedReader{buf}
+}
+
+func (r *bufferedReader) Read(p []byte) (n int, err error) {
+ if r.buf == nil {
+ return 0, io.EOF
+ }
+ n, err = r.buf.Read(p)
+ if err == io.EOF {
+ r.buf.Reset(nil)
+ bufioReader32KPool.Put(r.buf)
+ r.buf = nil
+ }
+ return
+}
+
+func (r *bufferedReader) Peek(n int) ([]byte, error) {
+ if r.buf == nil {
+ return nil, io.EOF
+ }
+ return r.buf.Peek(n)
+}
+
+const (
+ zstdMagicSkippableStart = 0x184D2A50
+ zstdMagicSkippableMask = 0xFFFFFFF0
+)
+
+var (
+ gzipMagic = []byte{0x1F, 0x8B, 0x08}
+ zstdMagic = []byte{0x28, 0xb5, 0x2f, 0xfd}
+)
+
+type matcher = func([]byte) bool
+
+func magicNumberMatcher(m []byte) matcher {
+ return func(source []byte) bool {
+ return bytes.HasPrefix(source, m)
+ }
+}
+
+// zstdMatcher detects zstd compression algorithm.
+// There are two frame formats defined by Zstandard: Zstandard frames and Skippable frames.
+// See https://tools.ietf.org/id/draft-kucherawy-dispatch-zstd-00.html#rfc.section.2 for more details.
+func zstdMatcher() matcher {
+ return func(source []byte) bool {
+ if bytes.HasPrefix(source, zstdMagic) {
+ // Zstandard frame
+ return true
+ }
+ // skippable frame
+ if len(source) < 8 {
+ return false
+ }
+ // magic number from 0x184D2A50 to 0x184D2A5F.
+ if binary.LittleEndian.Uint32(source[:4])&zstdMagicSkippableMask == zstdMagicSkippableStart {
+ return true
+ }
+ return false
+ }
+}
+
+// DetectCompression detects the compression algorithm of the source.
+func DetectCompression(source []byte) Compression {
+ for compression, fn := range map[Compression]matcher{
+ Gzip: magicNumberMatcher(gzipMagic),
+ Zstd: zstdMatcher(),
+ } {
+ if fn(source) {
+ return compression
+ }
+ }
+ return Uncompressed
+}
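+
+// An illustrative example (not part of the upstream source): classify a
+// buffer by its leading magic bytes.
+//
+//	header := []byte{0x1f, 0x8b, 0x08, 0x00}
+//	_ = DetectCompression(header) // Gzip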
+
+// DecompressStream decompresses the archive and returns a ReaderCloser with the decompressed archive.
+func DecompressStream(archive io.Reader) (DecompressReadCloser, error) {
+ buf := newBufferedReader(archive)
+ bs, err := buf.Peek(10)
+ if err != nil && err != io.EOF {
+ // Note: we'll ignore any io.EOF error because there are some odd
+ // cases where the layer.tar file will be empty (zero bytes) and
+ // that results in an io.EOF from the Peek() call. So, in those
+ // cases we'll just treat it as a non-compressed stream and
+ // that means just create an empty layer.
+ // See Issue docker/docker#18170
+ return nil, err
+ }
+
+ switch compression := DetectCompression(bs); compression {
+ case Uncompressed:
+ return &readCloserWrapper{
+ Reader: buf,
+ compression: compression,
+ }, nil
+ case Gzip:
+ ctx, cancel := context.WithCancel(context.Background())
+ gzReader, err := gzipDecompress(ctx, buf)
+ if err != nil {
+ cancel()
+ return nil, err
+ }
+
+ return &readCloserWrapper{
+ Reader: gzReader,
+ compression: compression,
+ closer: func() error {
+ cancel()
+ return gzReader.Close()
+ },
+ }, nil
+ case Zstd:
+ zstdReader, err := zstd.NewReader(buf)
+ if err != nil {
+ return nil, err
+ }
+ return &readCloserWrapper{
+ Reader: zstdReader,
+ compression: compression,
+ closer: func() error {
+ zstdReader.Close()
+ return nil
+ },
+ }, nil
+
+ default:
+ return nil, fmt.Errorf("unsupported compression format %s", (&compression).Extension())
+ }
+}
+
+// CompressStream returns a WriteCloser that compresses data written to it
+// with the specified algorithm before writing to dest.
+func CompressStream(dest io.Writer, compression Compression) (io.WriteCloser, error) {
+ switch compression {
+ case Uncompressed:
+ return &writeCloserWrapper{dest, nil}, nil
+ case Gzip:
+ return gzip.NewWriter(dest), nil
+ case Zstd:
+ return zstd.NewWriter(dest)
+ default:
+ return nil, fmt.Errorf("unsupported compression format %s", (&compression).Extension())
+ }
+}
+
+// Extension returns the extension of a file that uses the specified compression algorithm.
+func (compression *Compression) Extension() string {
+ switch *compression {
+ case Gzip:
+ return "gz"
+ case Zstd:
+ return "zst"
+ }
+ return ""
+}
+
+func gzipDecompress(ctx context.Context, buf io.Reader) (io.ReadCloser, error) {
+ initPigz.Do(func() {
+ if unpigzPath = detectPigz(); unpigzPath != "" {
+ log.L.Debug("using pigz for decompression")
+ }
+ })
+
+ if unpigzPath == "" {
+ return gzip.NewReader(buf)
+ }
+
+ return cmdStream(exec.CommandContext(ctx, unpigzPath, "-d", "-c"), buf)
+}
+
+func cmdStream(cmd *exec.Cmd, in io.Reader) (io.ReadCloser, error) {
+ reader, writer := io.Pipe()
+
+ cmd.Stdin = in
+ cmd.Stdout = writer
+
+ var errBuf bytes.Buffer
+ cmd.Stderr = &errBuf
+
+ if err := cmd.Start(); err != nil {
+ return nil, err
+ }
+
+ go func() {
+ if err := cmd.Wait(); err != nil {
+ writer.CloseWithError(fmt.Errorf("%s: %s", err, errBuf.String()))
+ } else {
+ writer.Close()
+ }
+ }()
+
+ return reader, nil
+}
+
+func detectPigz() string {
+ path, err := exec.LookPath("unpigz")
+ if err != nil {
+ log.L.WithError(err).Debug("unpigz not found, falling back to go gzip")
+ return ""
+ }
+
+ // Check if pigz is disabled via the CONTAINERD_DISABLE_PIGZ env variable
+ value := os.Getenv(disablePigzEnv)
+ if value == "" {
+ return path
+ }
+
+ disable, err := strconv.ParseBool(value)
+ if err != nil {
+ log.L.WithError(err).Warnf("could not parse %s: %s", disablePigzEnv, value)
+ return path
+ }
+
+ if disable {
+ return ""
+ }
+
+ return path
+}
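+
+// An illustrative sketch (not part of the upstream source): layerReader is an
+// assumed io.Reader holding a possibly compressed layer. When unpigz is on
+// PATH and not disabled via CONTAINERD_DISABLE_PIGZ, gzip streams are
+// decompressed through it instead of the pure-Go decoder.
+//
+//	rc, err := DecompressStream(layerReader)
+//	if err != nil {
+//		return err
+//	}
+//	defer rc.Close()
+//	_ = rc.GetCompression() // e.g. Gzip or Zstd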
diff --git a/vendor/github.com/containerd/containerd/archive/compression/compression_fuzzer.go b/vendor/github.com/containerd/containerd/archive/compression/compression_fuzzer.go
new file mode 100644
index 000000000..3516494ac
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/archive/compression/compression_fuzzer.go
@@ -0,0 +1,28 @@
+//go:build gofuzz
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package compression
+
+import (
+ "bytes"
+)
+
+func FuzzDecompressStream(data []byte) int {
+ _, _ = DecompressStream(bytes.NewReader(data))
+ return 1
+}
diff --git a/vendor/github.com/containerd/containerd/content/adaptor.go b/vendor/github.com/containerd/containerd/content/adaptor.go
new file mode 100644
index 000000000..88bad2610
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/adaptor.go
@@ -0,0 +1,52 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package content
+
+import (
+ "strings"
+
+ "github.com/containerd/containerd/filters"
+)
+
+// AdaptInfo returns `filters.Adaptor` that handles `content.Info`.
+func AdaptInfo(info Info) filters.Adaptor {
+ return filters.AdapterFunc(func(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+
+ switch fieldpath[0] {
+ case "digest":
+ return info.Digest.String(), true
+ case "size":
+ // TODO: support size based filtering
+ case "labels":
+ return checkMap(fieldpath[1:], info.Labels)
+ }
+
+ return "", false
+ })
+}
+
+func checkMap(fieldpath []string, m map[string]string) (string, bool) {
+ if len(m) == 0 {
+ return "", false
+ }
+
+ value, ok := m[strings.Join(fieldpath, ".")]
+ return value, ok
+}
diff --git a/vendor/github.com/containerd/containerd/content/content.go b/vendor/github.com/containerd/containerd/content/content.go
new file mode 100644
index 000000000..2dc7bf8b5
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/content.go
@@ -0,0 +1,207 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package content
+
+import (
+ "context"
+ "io"
+ "time"
+
+ "github.com/opencontainers/go-digest"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// Store combines the methods of content-oriented interfaces into a set that
+// are commonly provided by complete implementations.
+//
+// Overall content lifecycle:
+// - Ingester is used to initiate a write operation (aka ingestion)
+// - IngestManager is used to manage (e.g. list, abort) active ingestions
+// - Once an ingestion is complete (see Writer.Commit), Provider is used to
+// query a single piece of content by its digest
+// - Manager is used to manage (e.g. list, delete) previously committed content
+//
+// Note that until ingestion is complete, its content is not visible through
+// Provider or Manager. Once ingestion is complete, it is no longer exposed
+// through IngestManager.
+type Store interface {
+ Manager
+ Provider
+ IngestManager
+ Ingester
+}
+
+// ReaderAt extends the standard io.ReaderAt interface with reporting of Size and io.Closer
+type ReaderAt interface {
+ io.ReaderAt
+ io.Closer
+ Size() int64
+}
+
+// Provider provides a reader interface for specific content
+type Provider interface {
+ // ReaderAt only requires desc.Digest to be set.
+ // Other fields in the descriptor may be used internally for resolving
+ // the location of the actual data.
+ ReaderAt(ctx context.Context, desc ocispec.Descriptor) (ReaderAt, error)
+}
+
+// Ingester writes content
+type Ingester interface {
+ // Writer initiates a writing operation (aka ingestion). A single ingestion
+ // is uniquely identified by its ref, provided using a WithRef option.
+ // Writer can be called multiple times with the same ref to access the same
+ // ingestion.
+ // Once all the data is written, use Writer.Commit to complete the ingestion.
+ Writer(ctx context.Context, opts ...WriterOpt) (Writer, error)
+}
+
+// IngestManager provides methods for managing ingestions. An ingestion is a
+// not-yet-complete writing operation initiated using Ingester and identified
+// by a ref string.
+type IngestManager interface {
+ // Status returns the status of the provided ref.
+ Status(ctx context.Context, ref string) (Status, error)
+
+ // ListStatuses returns the status of any active ingestions whose ref match
+ // the provided regular expression. If empty, all active ingestions will be
+ // returned.
+ ListStatuses(ctx context.Context, filters ...string) ([]Status, error)
+
+ // Abort completely cancels the ingest operation targeted by ref.
+ Abort(ctx context.Context, ref string) error
+}
+
+// Info holds content specific information
+type Info struct {
+ Digest digest.Digest
+ Size int64
+ CreatedAt time.Time
+ UpdatedAt time.Time
+ Labels map[string]string
+}
+
+// Status of a content operation (i.e. an ingestion)
+type Status struct {
+ Ref string
+ Offset int64
+ Total int64
+ Expected digest.Digest
+ StartedAt time.Time
+ UpdatedAt time.Time
+}
+
+// WalkFunc defines the callback for a blob walk.
+type WalkFunc func(Info) error
+
+// InfoReaderProvider provides both info and reader for the specific content.
+type InfoReaderProvider interface {
+ InfoProvider
+ Provider
+}
+
+// InfoProvider provides info for content inspection.
+type InfoProvider interface {
+ // Info will return metadata about content available in the content store.
+ //
+ // If the content is not present, ErrNotFound will be returned.
+ Info(ctx context.Context, dgst digest.Digest) (Info, error)
+}
+
+// Manager provides methods for inspecting, listing and removing content.
+type Manager interface {
+ InfoProvider
+
+ // Update updates mutable information related to content.
+ // If one or more fieldpaths are provided, only those
+ // fields will be updated.
+ // Mutable fields:
+ // labels.*
+ Update(ctx context.Context, info Info, fieldpaths ...string) (Info, error)
+
+ // Walk will call fn for each item in the content store which
+ // match the provided filters. If no filters are given all
+ // items will be walked.
+ Walk(ctx context.Context, fn WalkFunc, filters ...string) error
+
+ // Delete removes the content from the store.
+ Delete(ctx context.Context, dgst digest.Digest) error
+}
+
+// Writer handles writing of content into a content store
+type Writer interface {
+ // Close closes the writer; if the writer has not been
+ // committed, this allows resuming or aborting.
+ // Calling Close on a closed writer will not error.
+ io.WriteCloser
+
+ // Digest may return an empty digest or panic until committed.
+ Digest() digest.Digest
+
+ // Commit commits the blob (but no roll-back is guaranteed on an error).
+ // size and expected can be zero-value when unknown.
+ // Commit always closes the writer, even on error.
+ // ErrAlreadyExists aborts the writer.
+ Commit(ctx context.Context, size int64, expected digest.Digest, opts ...Opt) error
+
+ // Status returns the current state of write
+ Status() (Status, error)
+
+ // Truncate updates the size of the target blob
+ Truncate(size int64) error
+}
+
+// Opt is used to alter the mutable properties of content
+type Opt func(*Info) error
+
+// WithLabels allows labels to be set on content
+func WithLabels(labels map[string]string) Opt {
+ return func(info *Info) error {
+ info.Labels = labels
+ return nil
+ }
+}
+
+// WriterOpts is internally used by WriterOpt.
+type WriterOpts struct {
+ Ref string
+ Desc ocispec.Descriptor
+}
+
+// WriterOpt is used for passing options to Ingester.Writer.
+type WriterOpt func(*WriterOpts) error
+
+// WithDescriptor specifies an OCI descriptor.
+// Writer may optionally use the descriptor internally for resolving
+// the location of the actual data.
+// Write does not require any field of desc to be set.
+// If the data size is unknown, desc.Size should be set to 0.
+// Some implementations may also accept negative values as "unknown".
+func WithDescriptor(desc ocispec.Descriptor) WriterOpt {
+ return func(opts *WriterOpts) error {
+ opts.Desc = desc
+ return nil
+ }
+}
+
+// WithRef specifies a ref string.
+func WithRef(ref string) WriterOpt {
+ return func(opts *WriterOpts) error {
+ opts.Ref = ref
+ return nil
+ }
+}
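+
+// An illustrative sketch (not part of the upstream source), assuming ing is
+// an Ingester and desc an ocispec.Descriptor for the data:
+//
+//	w, err := ing.Writer(ctx, WithRef("layer-upload"), WithDescriptor(desc))
+//	if err != nil {
+//		return err
+//	}
+//	defer w.Close()
+//	// ... write the data, then commit:
+//	// err = w.Commit(ctx, desc.Size, desc.Digest)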
diff --git a/vendor/github.com/containerd/containerd/content/helpers.go b/vendor/github.com/containerd/containerd/content/helpers.go
new file mode 100644
index 000000000..93bcdde10
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/helpers.go
@@ -0,0 +1,340 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package content
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "io"
+ "sync"
+ "time"
+
+ "github.com/containerd/log"
+ "github.com/opencontainers/go-digest"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/pkg/randutil"
+)
+
+var ErrReset = errors.New("writer has been reset")
+
+var bufPool = sync.Pool{
+ New: func() interface{} {
+ buffer := make([]byte, 1<<20)
+ return &buffer
+ },
+}
+
+type reader interface {
+ Reader() io.Reader
+}
+
+// NewReader returns an io.Reader from a ReaderAt
+func NewReader(ra ReaderAt) io.Reader {
+ if rd, ok := ra.(reader); ok {
+ return rd.Reader()
+ }
+ return io.NewSectionReader(ra, 0, ra.Size())
+}
+
+// ReadBlob retrieves the entire contents of the blob from the provider.
+//
+// Avoid using this for large blobs, such as layers.
+func ReadBlob(ctx context.Context, provider Provider, desc ocispec.Descriptor) ([]byte, error) {
+ if int64(len(desc.Data)) == desc.Size && digest.FromBytes(desc.Data) == desc.Digest {
+ return desc.Data, nil
+ }
+
+ ra, err := provider.ReaderAt(ctx, desc)
+ if err != nil {
+ return nil, err
+ }
+ defer ra.Close()
+
+ p := make([]byte, ra.Size())
+
+ n, err := ra.ReadAt(p, 0)
+ if err == io.EOF {
+ if int64(n) != ra.Size() {
+ err = io.ErrUnexpectedEOF
+ } else {
+ err = nil
+ }
+ }
+ return p, err
+}
+
+// WriteBlob writes data with the expected digest into the content store. If
+// expected already exists, the method returns immediately and the reader will
+// not be consumed.
+//
+// This is useful when the digest and size are known beforehand.
+//
+// Copy is buffered, so no need to wrap reader in buffered io.
+func WriteBlob(ctx context.Context, cs Ingester, ref string, r io.Reader, desc ocispec.Descriptor, opts ...Opt) error {
+ cw, err := OpenWriter(ctx, cs, WithRef(ref), WithDescriptor(desc))
+ if err != nil {
+ if !errdefs.IsAlreadyExists(err) {
+ return fmt.Errorf("failed to open writer: %w", err)
+ }
+
+ return nil // already present
+ }
+ defer cw.Close()
+
+ return Copy(ctx, cw, r, desc.Size, desc.Digest, opts...)
+}
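+
+// An illustrative sketch (not part of the upstream source), assuming cs is an
+// Ingester and blob holds the complete content in memory:
+//
+//	desc := ocispec.Descriptor{
+//		MediaType: ocispec.MediaTypeImageLayerGzip,
+//		Digest:    digest.FromBytes(blob),
+//		Size:      int64(len(blob)),
+//	}
+//	err := WriteBlob(ctx, cs, "my-ref", bytes.NewReader(blob), desc)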
+
+// OpenWriter opens a new writer for the given reference, retrying if the writer
+// is locked until the reference is available or returns an error.
+func OpenWriter(ctx context.Context, cs Ingester, opts ...WriterOpt) (Writer, error) {
+ var (
+ cw Writer
+ err error
+ retry = 16
+ )
+ for {
+ cw, err = cs.Writer(ctx, opts...)
+ if err != nil {
+ if !errdefs.IsUnavailable(err) {
+ return nil, err
+ }
+
+ // TODO: Check status to determine if the writer is active,
+ // continue waiting while active, otherwise return lock
+ // error or abort. Requires asserting for an ingest manager
+
+ select {
+ case <-time.After(time.Millisecond * time.Duration(randutil.Intn(retry))):
+ if retry < 2048 {
+ retry = retry << 1
+ }
+ continue
+ case <-ctx.Done():
+ // Propagate lock error
+ return nil, err
+ }
+
+ }
+ break
+ }
+
+ return cw, err
+}
+
+// Copy copies data with the expected digest from the reader into the
+// provided content store writer. This copy commits the writer.
+//
+// This is useful when the digest and size are known beforehand. When
+// the size or digest is unknown, these values may be empty.
+//
+// Copy is buffered, so no need to wrap reader in buffered io.
+func Copy(ctx context.Context, cw Writer, or io.Reader, size int64, expected digest.Digest, opts ...Opt) error {
+ ws, err := cw.Status()
+ if err != nil {
+ return fmt.Errorf("failed to get status: %w", err)
+ }
+ r := or
+ if ws.Offset > 0 {
+ r, err = seekReader(or, ws.Offset, size)
+ if err != nil {
+ return fmt.Errorf("unable to resume write to %v: %w", ws.Ref, err)
+ }
+ }
+
+ for i := 0; ; i++ {
+ if i >= 1 {
+ log.G(ctx).WithField("digest", expected).Debugf("retrying copy due to reset")
+ }
+ copied, err := copyWithBuffer(cw, r)
+ if errors.Is(err, ErrReset) {
+ ws, err := cw.Status()
+ if err != nil {
+ return fmt.Errorf("failed to get status: %w", err)
+ }
+ r, err = seekReader(or, ws.Offset, size)
+ if err != nil {
+ return fmt.Errorf("unable to resume write to %v: %w", ws.Ref, err)
+ }
+ continue
+ }
+ if err != nil {
+ return fmt.Errorf("failed to copy: %w", err)
+ }
+ if size != 0 && copied < size-ws.Offset {
+ // A short write would have returned its own error, so this indicates a read failure
+ return fmt.Errorf("failed to read expected number of bytes: %w", io.ErrUnexpectedEOF)
+ }
+ if err := cw.Commit(ctx, size, expected, opts...); err != nil {
+ if errors.Is(err, ErrReset) {
+ ws, err := cw.Status()
+ if err != nil {
+ return fmt.Errorf("failed to get status: %w", err)
+ }
+ r, err = seekReader(or, ws.Offset, size)
+ if err != nil {
+ return fmt.Errorf("unable to resume write to %v: %w", ws.Ref, err)
+ }
+ continue
+ }
+ if !errdefs.IsAlreadyExists(err) {
+ return fmt.Errorf("failed commit on ref %q: %w", ws.Ref, err)
+ }
+ }
+ return nil
+ }
+}
+
+// CopyReaderAt copies to a writer from a given ReaderAt for the given
+// number of bytes. This copy does not commit the writer.
+func CopyReaderAt(cw Writer, ra ReaderAt, n int64) error {
+ ws, err := cw.Status()
+ if err != nil {
+ return err
+ }
+
+ copied, err := copyWithBuffer(cw, io.NewSectionReader(ra, ws.Offset, n))
+ if err != nil {
+ return fmt.Errorf("failed to copy: %w", err)
+ }
+ if copied < n {
+ // A short write would have returned its own error, so this indicates a read failure
+ return fmt.Errorf("failed to read expected number of bytes: %w", io.ErrUnexpectedEOF)
+ }
+ return nil
+}
+
+// CopyReader copies to a writer from a given reader, returning
+// the number of bytes copied.
+// Note: if the writer has a non-zero offset, the total number
+// of bytes read may be greater than those copied if the reader
+// is not an io.Seeker.
+// This copy does not commit the writer.
+func CopyReader(cw Writer, r io.Reader) (int64, error) {
+ ws, err := cw.Status()
+ if err != nil {
+ return 0, fmt.Errorf("failed to get status: %w", err)
+ }
+
+ if ws.Offset > 0 {
+ r, err = seekReader(r, ws.Offset, 0)
+ if err != nil {
+ return 0, fmt.Errorf("unable to resume write to %v: %w", ws.Ref, err)
+ }
+ }
+
+ return copyWithBuffer(cw, r)
+}
+
+// seekReader attempts to seek the reader to the given offset, either by
+// resolving `io.Seeker`, by detecting `io.ReaderAt`, or discarding
+// up to the given offset.
+func seekReader(r io.Reader, offset, size int64) (io.Reader, error) {
+ // attempt to resolve r as a seeker and setup the offset.
+ seeker, ok := r.(io.Seeker)
+ if ok {
+ nn, err := seeker.Seek(offset, io.SeekStart)
+ if nn != offset {
+ if err == nil {
+ err = fmt.Errorf("unexpected seek location without seek error")
+ }
+ return nil, fmt.Errorf("failed to seek to offset %v: %w", offset, err)
+ }
+
+ if err != nil {
+ return nil, err
+ }
+
+ return r, nil
+ }
+
+ // ok, let's try io.ReaderAt!
+ readerAt, ok := r.(io.ReaderAt)
+ if ok && size > offset {
+ sr := io.NewSectionReader(readerAt, offset, size)
+ return sr, nil
+ }
+
+ // well then, let's just discard up to the offset
+ n, err := copyWithBuffer(io.Discard, io.LimitReader(r, offset))
+ if err != nil {
+ return nil, fmt.Errorf("failed to discard to offset: %w", err)
+ }
+ if n != offset {
+ return nil, errors.New("unable to discard to offset")
+ }
+
+ return r, nil
+}
+
+// copyWithBuffer is very similar to io.CopyBuffer https://golang.org/pkg/io/#CopyBuffer
+// but instead of using Read to read from the src, we use ReadAtLeast to make sure we have
+// a full buffer before we do a write operation to dst to reduce overheads associated
+// with the write operations of small buffers.
+func copyWithBuffer(dst io.Writer, src io.Reader) (written int64, err error) {
+ // If the reader has a WriteTo method, use it to do the copy.
+ // Avoids an allocation and a copy.
+ if wt, ok := src.(io.WriterTo); ok {
+ return wt.WriteTo(dst)
+ }
+ // Similarly, if the writer has a ReadFrom method, use it to do the copy.
+ if rt, ok := dst.(io.ReaderFrom); ok {
+ return rt.ReadFrom(src)
+ }
+ bufRef := bufPool.Get().(*[]byte)
+ defer bufPool.Put(bufRef)
+ buf := *bufRef
+ for {
+ nr, er := io.ReadAtLeast(src, buf, len(buf))
+ if nr > 0 {
+ nw, ew := dst.Write(buf[0:nr])
+ if nw > 0 {
+ written += int64(nw)
+ }
+ if ew != nil {
+ err = ew
+ break
+ }
+ if nr != nw {
+ err = io.ErrShortWrite
+ break
+ }
+ }
+ if er != nil {
+ // If an EOF happens after reading fewer than the requested bytes,
+ // ReadAtLeast returns ErrUnexpectedEOF.
+ if er != io.EOF && er != io.ErrUnexpectedEOF {
+ err = er
+ }
+ break
+ }
+ }
+ return
+}
+
+// Exists returns whether an attempt to access the content would not error out
+// with an ErrNotFound error. It will return an encountered error if it was
+// different from ErrNotFound.
+func Exists(ctx context.Context, provider InfoProvider, desc ocispec.Descriptor) (bool, error) {
+ _, err := provider.Info(ctx, desc.Digest)
+ if errdefs.IsNotFound(err) {
+ return false, nil
+ }
+ return err == nil, err
+}
diff --git a/vendor/github.com/containerd/containerd/content/local/content_local_fuzzer.go b/vendor/github.com/containerd/containerd/content/local/content_local_fuzzer.go
new file mode 100644
index 000000000..a523f28d9
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/local/content_local_fuzzer.go
@@ -0,0 +1,76 @@
+//go:build gofuzz
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ _ "crypto/sha256"
+ "io"
+ "testing"
+
+ "github.com/opencontainers/go-digest"
+
+ "github.com/containerd/containerd/content"
+)
+
+func FuzzContentStoreWriter(data []byte) int {
+ t := &testing.T{}
+ ctx := context.Background()
+ ctx, _, cs, cleanup := contentStoreEnv(t)
+ defer cleanup()
+
+ cw, err := cs.Writer(ctx, content.WithRef("myref"))
+ if err != nil {
+ return 0
+ }
+ if err := cw.Close(); err != nil {
+ return 0
+ }
+
+ // reopen, so we can test things
+ cw, err = cs.Writer(ctx, content.WithRef("myref"))
+ if err != nil {
+ return 0
+ }
+
+ err = checkCopyFuzz(int64(len(data)), cw, bufio.NewReader(io.NopCloser(bytes.NewReader(data))))
+ if err != nil {
+ return 0
+ }
+ expected := digest.FromBytes(data)
+
+ if err = cw.Commit(ctx, int64(len(data)), expected); err != nil {
+ return 0
+ }
+ return 1
+}
+
+func checkCopyFuzz(size int64, dst io.Writer, src io.Reader) error {
+ nn, err := io.Copy(dst, src)
+ if err != nil {
+ return err
+ }
+
+ if nn != size {
+ return io.ErrShortWrite
+ }
+ return nil
+}
diff --git a/vendor/github.com/containerd/containerd/content/local/locks.go b/vendor/github.com/containerd/containerd/content/local/locks.go
new file mode 100644
index 000000000..1e59f39b3
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/local/locks.go
@@ -0,0 +1,62 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "fmt"
+ "sync"
+ "time"
+
+ "github.com/containerd/containerd/errdefs"
+)
+
+// Handles locking references
+
+type lock struct {
+ since time.Time
+}
+
+var (
+ // locks lets us lock in process
+ locks = make(map[string]*lock)
+ locksMu sync.Mutex
+)
+
+func tryLock(ref string) error {
+ locksMu.Lock()
+ defer locksMu.Unlock()
+
+ if v, ok := locks[ref]; ok {
+ // Returning the duration may help developers distinguish dead locks (long duration) from
+ // lock contentions (short duration).
+ now := time.Now()
+ return fmt.Errorf(
+ "ref %s locked for %s (since %s): %w", ref, now.Sub(v.since), v.since,
+ errdefs.ErrUnavailable,
+ )
+ }
+
+ locks[ref] = &lock{time.Now()}
+ return nil
+}
+
+func unlock(ref string) {
+ locksMu.Lock()
+ defer locksMu.Unlock()
+
+ delete(locks, ref)
+}
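+
+// An illustrative sketch (not part of the upstream source): a second tryLock
+// on the same ref fails with an ErrUnavailable-wrapped error until unlock
+// releases it.
+//
+//	if err := tryLock("myref"); err != nil {
+//		return err
+//	}
+//	defer unlock("myref")
+//	err := tryLock("myref") // errdefs.IsUnavailable(err) == true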
diff --git a/vendor/github.com/containerd/containerd/content/local/readerat.go b/vendor/github.com/containerd/containerd/content/local/readerat.go
new file mode 100644
index 000000000..899e85c0b
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/local/readerat.go
@@ -0,0 +1,72 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "fmt"
+ "io"
+ "os"
+
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
+)
+
+// sizeReaderAt implements content.ReaderAt over a file handle that is opened
+// once, with the size captured at open time.
+type sizeReaderAt struct {
+ size int64
+ fp *os.File
+}
+
+// OpenReader creates a ReaderAt from a file
+func OpenReader(p string) (content.ReaderAt, error) {
+ fi, err := os.Stat(p)
+ if err != nil {
+ if !os.IsNotExist(err) {
+ return nil, err
+ }
+
+ return nil, fmt.Errorf("blob not found: %w", errdefs.ErrNotFound)
+ }
+
+ fp, err := os.Open(p)
+ if err != nil {
+ if !os.IsNotExist(err) {
+ return nil, err
+ }
+
+ return nil, fmt.Errorf("blob not found: %w", errdefs.ErrNotFound)
+ }
+
+ return sizeReaderAt{size: fi.Size(), fp: fp}, nil
+}
+
+func (ra sizeReaderAt) ReadAt(p []byte, offset int64) (int, error) {
+ return ra.fp.ReadAt(p, offset)
+}
+
+func (ra sizeReaderAt) Size() int64 {
+ return ra.size
+}
+
+func (ra sizeReaderAt) Close() error {
+ return ra.fp.Close()
+}
+
+func (ra sizeReaderAt) Reader() io.Reader {
+ return io.LimitReader(ra.fp, ra.size)
+}
diff --git a/vendor/github.com/containerd/containerd/content/local/store.go b/vendor/github.com/containerd/containerd/content/local/store.go
new file mode 100644
index 000000000..e1baee4c2
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/local/store.go
@@ -0,0 +1,705 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "context"
+ "fmt"
+ "io"
+ "os"
+ "path/filepath"
+ "strconv"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/containerd/log"
+ "github.com/sirupsen/logrus"
+
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/filters"
+ "github.com/containerd/containerd/pkg/randutil"
+
+ "github.com/opencontainers/go-digest"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+var bufPool = sync.Pool{
+ New: func() interface{} {
+ buffer := make([]byte, 1<<20)
+ return &buffer
+ },
+}
+
+// LabelStore is used to store mutable labels for digests
+type LabelStore interface {
+ // Get returns all the labels for the given digest
+ Get(digest.Digest) (map[string]string, error)
+
+ // Set sets all the labels for a given digest
+ Set(digest.Digest, map[string]string) error
+
+ // Update replaces the given labels for a digest,
+ // a key with an empty value removes a label.
+ Update(digest.Digest, map[string]string) (map[string]string, error)
+}
+
+// Store is a digest-keyed store for content. All data written into the store is
+// stored under a verifiable digest.
+//
+// Store can generally support multi-reader, single-writer ingest of data,
+// including resumable ingest.
+type store struct {
+ root string
+ ls LabelStore
+}
+
+// NewStore returns a local content store
+func NewStore(root string) (content.Store, error) {
+ return NewLabeledStore(root, nil)
+}
+
+// NewLabeledStore returns a new content store using the provided label store
+//
+// Note: content stores which are used underneath a metadata store may not
+// require labels and should use `NewStore`. `NewLabeledStore` is primarily
+// useful for tests or standalone implementations.
+func NewLabeledStore(root string, ls LabelStore) (content.Store, error) {
+ if err := os.MkdirAll(filepath.Join(root, "ingest"), 0777); err != nil {
+ return nil, err
+ }
+
+ return &store{
+ root: root,
+ ls: ls,
+ }, nil
+}
+
+func (s *store) Info(ctx context.Context, dgst digest.Digest) (content.Info, error) {
+ p, err := s.blobPath(dgst)
+ if err != nil {
+ return content.Info{}, fmt.Errorf("calculating blob info path: %w", err)
+ }
+
+ fi, err := os.Stat(p)
+ if err != nil {
+ if os.IsNotExist(err) {
+ err = fmt.Errorf("content %v: %w", dgst, errdefs.ErrNotFound)
+ }
+
+ return content.Info{}, err
+ }
+ var labels map[string]string
+ if s.ls != nil {
+ labels, err = s.ls.Get(dgst)
+ if err != nil {
+ return content.Info{}, err
+ }
+ }
+ return s.info(dgst, fi, labels), nil
+}
+
+func (s *store) info(dgst digest.Digest, fi os.FileInfo, labels map[string]string) content.Info {
+ return content.Info{
+ Digest: dgst,
+ Size: fi.Size(),
+ CreatedAt: fi.ModTime(),
+ UpdatedAt: getATime(fi),
+ Labels: labels,
+ }
+}
+
+// ReaderAt returns an io.ReaderAt for the blob.
+func (s *store) ReaderAt(ctx context.Context, desc ocispec.Descriptor) (content.ReaderAt, error) {
+ p, err := s.blobPath(desc.Digest)
+ if err != nil {
+ return nil, fmt.Errorf("calculating blob path for ReaderAt: %w", err)
+ }
+
+ reader, err := OpenReader(p)
+ if err != nil {
+ return nil, fmt.Errorf("blob %s expected at %s: %w", desc.Digest, p, err)
+ }
+
+ return reader, nil
+}
+
+// Delete removes a blob by its digest.
+//
+// While this is safe to do concurrently, safe exist-removal logic must hold
+// some global lock on the store.
+func (s *store) Delete(ctx context.Context, dgst digest.Digest) error {
+ bp, err := s.blobPath(dgst)
+ if err != nil {
+ return fmt.Errorf("calculating blob path for delete: %w", err)
+ }
+
+ if err := os.RemoveAll(bp); err != nil {
+ if !os.IsNotExist(err) {
+ return err
+ }
+
+ return fmt.Errorf("content %v: %w", dgst, errdefs.ErrNotFound)
+ }
+
+ return nil
+}
+
+func (s *store) Update(ctx context.Context, info content.Info, fieldpaths ...string) (content.Info, error) {
+ if s.ls == nil {
+ return content.Info{}, fmt.Errorf("update not supported on immutable content store: %w", errdefs.ErrFailedPrecondition)
+ }
+
+ p, err := s.blobPath(info.Digest)
+ if err != nil {
+ return content.Info{}, fmt.Errorf("calculating blob path for update: %w", err)
+ }
+
+ fi, err := os.Stat(p)
+ if err != nil {
+ if os.IsNotExist(err) {
+ err = fmt.Errorf("content %v: %w", info.Digest, errdefs.ErrNotFound)
+ }
+
+ return content.Info{}, err
+ }
+
+ var (
+ all bool
+ labels map[string]string
+ )
+ if len(fieldpaths) > 0 {
+ for _, path := range fieldpaths {
+ if strings.HasPrefix(path, "labels.") {
+ if labels == nil {
+ labels = map[string]string{}
+ }
+
+ key := strings.TrimPrefix(path, "labels.")
+ labels[key] = info.Labels[key]
+ continue
+ }
+
+ switch path {
+ case "labels":
+ all = true
+ labels = info.Labels
+ default:
+ return content.Info{}, fmt.Errorf("cannot update %q field on content info %q: %w", path, info.Digest, errdefs.ErrInvalidArgument)
+ }
+ }
+ } else {
+ all = true
+ labels = info.Labels
+ }
+
+ if all {
+ err = s.ls.Set(info.Digest, labels)
+ } else {
+ labels, err = s.ls.Update(info.Digest, labels)
+ }
+ if err != nil {
+ return content.Info{}, err
+ }
+
+ info = s.info(info.Digest, fi, labels)
+ info.UpdatedAt = time.Now()
+
+ if err := os.Chtimes(p, info.UpdatedAt, info.CreatedAt); err != nil {
+ log.G(ctx).WithError(err).Warnf("could not change access time for %s", info.Digest)
+ }
+
+ return info, nil
+}
+
+func (s *store) Walk(ctx context.Context, fn content.WalkFunc, fs ...string) error {
+ root := filepath.Join(s.root, "blobs")
+
+ filter, err := filters.ParseAll(fs...)
+ if err != nil {
+ return err
+ }
+
+ var alg digest.Algorithm
+ return filepath.Walk(root, func(path string, fi os.FileInfo, err error) error {
+ if err != nil {
+ return err
+ }
+ if !fi.IsDir() && !alg.Available() {
+ return nil
+ }
+
+ // TODO(stevvooe): There are few more cases with subdirs that should be
+ // handled in case the layout gets corrupted. This isn't strict enough
+ // and may spew bad data.
+
+ if path == root {
+ return nil
+ }
+ if filepath.Dir(path) == root {
+ alg = digest.Algorithm(filepath.Base(path))
+
+ if !alg.Available() {
+ alg = ""
+ return filepath.SkipDir
+ }
+
+ // descending into a hash directory
+ return nil
+ }
+
+ dgst := digest.NewDigestFromEncoded(alg, filepath.Base(path))
+ if err := dgst.Validate(); err != nil {
+ // log error but don't report
+ log.L.WithError(err).WithField("path", path).Error("invalid digest for blob path")
+ // if we see this, it could mean some sort of corruption of the
+ // store or extra paths not expected previously.
+ }
+
+ var labels map[string]string
+ if s.ls != nil {
+ labels, err = s.ls.Get(dgst)
+ if err != nil {
+ return err
+ }
+ }
+
+ info := s.info(dgst, fi, labels)
+ if !filter.Match(content.AdaptInfo(info)) {
+ return nil
+ }
+ return fn(info)
+ })
+}
+
+func (s *store) Status(ctx context.Context, ref string) (content.Status, error) {
+ return s.status(s.ingestRoot(ref))
+}
+
+func (s *store) ListStatuses(ctx context.Context, fs ...string) ([]content.Status, error) {
+ fp, err := os.Open(filepath.Join(s.root, "ingest"))
+ if err != nil {
+ return nil, err
+ }
+
+ defer fp.Close()
+
+ fis, err := fp.Readdir(-1)
+ if err != nil {
+ return nil, err
+ }
+
+ filter, err := filters.ParseAll(fs...)
+ if err != nil {
+ return nil, err
+ }
+
+ var active []content.Status
+ for _, fi := range fis {
+ p := filepath.Join(s.root, "ingest", fi.Name())
+ stat, err := s.status(p)
+ if err != nil {
+ if !os.IsNotExist(err) {
+ return nil, err
+ }
+
+ // TODO(stevvooe): This is a common error if uploads are being
+ // completed while making this listing. Need to consider taking a
+ // lock on the whole store to coordinate this aspect.
+ //
+ // Another option is to cleanup downloads asynchronously and
+ // coordinate this method with the cleanup process.
+ //
+ // For now, we just skip them, as they really don't exist.
+ continue
+ }
+
+ if filter.Match(adaptStatus(stat)) {
+ active = append(active, stat)
+ }
+ }
+
+ return active, nil
+}
+
+// WalkStatusRefs is used to walk all status references.
+// Failed status reads will be logged and ignored; such
+// errors may be produced if this function is called while
+// references are being altered.
+func (s *store) WalkStatusRefs(ctx context.Context, fn func(string) error) error {
+ fp, err := os.Open(filepath.Join(s.root, "ingest"))
+ if err != nil {
+ return err
+ }
+
+ defer fp.Close()
+
+ fis, err := fp.Readdir(-1)
+ if err != nil {
+ return err
+ }
+
+ for _, fi := range fis {
+ rf := filepath.Join(s.root, "ingest", fi.Name(), "ref")
+
+ ref, err := readFileString(rf)
+ if err != nil {
+ log.G(ctx).WithError(err).WithField("path", rf).Error("failed to read ingest ref")
+ continue
+ }
+
+ if err := fn(ref); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+// status works like stat above except that it uses the path to the ingest.
+func (s *store) status(ingestPath string) (content.Status, error) {
+ dp := filepath.Join(ingestPath, "data")
+ fi, err := os.Stat(dp)
+ if err != nil {
+ if os.IsNotExist(err) {
+ err = fmt.Errorf("%s: %w", err.Error(), errdefs.ErrNotFound)
+ }
+ return content.Status{}, err
+ }
+
+ ref, err := readFileString(filepath.Join(ingestPath, "ref"))
+ if err != nil {
+ if os.IsNotExist(err) {
+ err = fmt.Errorf("%s: %w", err.Error(), errdefs.ErrNotFound)
+ }
+ return content.Status{}, err
+ }
+
+ startedAt, err := readFileTimestamp(filepath.Join(ingestPath, "startedat"))
+ if err != nil {
+ return content.Status{}, fmt.Errorf("could not read startedat: %w", err)
+ }
+
+ updatedAt, err := readFileTimestamp(filepath.Join(ingestPath, "updatedat"))
+ if err != nil {
+ return content.Status{}, fmt.Errorf("could not read updatedat: %w", err)
+ }
+
+ // because we don't write updatedat on every write, the mod time may
+ // actually be more up to date.
+ if fi.ModTime().After(updatedAt) {
+ updatedAt = fi.ModTime()
+ }
+
+ return content.Status{
+ Ref: ref,
+ Offset: fi.Size(),
+ Total: s.total(ingestPath),
+ UpdatedAt: updatedAt,
+ StartedAt: startedAt,
+ }, nil
+}
+
+func adaptStatus(status content.Status) filters.Adaptor {
+ return filters.AdapterFunc(func(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "ref":
+ return status.Ref, true
+ }
+
+ return "", false
+ })
+}
+
+// total attempts to resolve the total expected size for the write.
+func (s *store) total(ingestPath string) int64 {
+ totalS, err := readFileString(filepath.Join(ingestPath, "total"))
+ if err != nil {
+ return 0
+ }
+
+ total, err := strconv.ParseInt(totalS, 10, 64)
+ if err != nil {
+ // represents a corrupted file, should probably remove.
+ return 0
+ }
+
+ return total
+}
+
+// Writer begins or resumes the active writer identified by ref. If the writer
+// is already in use, an error is returned. Only one writer may be in use per
+// ref at a time.
+//
+// The argument `ref` is used to uniquely identify a long-lived writer transaction.
+func (s *store) Writer(ctx context.Context, opts ...content.WriterOpt) (content.Writer, error) {
+ var wOpts content.WriterOpts
+ for _, opt := range opts {
+ if err := opt(&wOpts); err != nil {
+ return nil, err
+ }
+ }
+ // TODO(AkihiroSuda): we could create a random string or one calculated based on the context
+ // https://github.com/containerd/containerd/issues/2129#issuecomment-380255019
+ if wOpts.Ref == "" {
+ return nil, fmt.Errorf("ref must not be empty: %w", errdefs.ErrInvalidArgument)
+ }
+ var lockErr error
+ for count := uint64(0); count < 10; count++ {
+ if err := tryLock(wOpts.Ref); err != nil {
+ if !errdefs.IsUnavailable(err) {
+ return nil, err
+ }
+
+ lockErr = err
+ } else {
+ lockErr = nil
+ break
+ }
+ time.Sleep(time.Millisecond * time.Duration(randutil.Intn(1<<count)))
+ }
+
+ if lockErr != nil {
+ return nil, lockErr
+ }
+
+ w, err := s.writer(ctx, wOpts.Ref, wOpts.Desc.Size, wOpts.Desc.Digest)
+ if err != nil {
+ unlock(wOpts.Ref)
+ return nil, err
+ }
+
+ return w, nil // lock is now held by w.
+}
+
+func (s *store) resumeStatus(ref string, total int64, digester digest.Digester) (content.Status, error) {
+ path, _, data := s.ingestPaths(ref)
+ status, err := s.status(path)
+ if err != nil {
+ return status, fmt.Errorf("failed reading status of resume write: %w", err)
+ }
+ if ref != status.Ref {
+ // NOTE(stevvooe): This is fairly catastrophic. Either we have some
+ // layout corruption or a hash collision for the ref key.
+ return status, fmt.Errorf("ref key does not match: %v != %v", ref, status.Ref)
+ }
+
+ if total > 0 && status.Total > 0 && total != status.Total {
+ return status, fmt.Errorf("provided total differs from status: %v != %v", total, status.Total)
+ }
+
+ //nolint:dupword
+ // TODO(stevvooe): slow slow slow!!, send to goroutine or use resumable hashes
+ fp, err := os.Open(data)
+ if err != nil {
+ return status, err
+ }
+
+ p := bufPool.Get().(*[]byte)
+ status.Offset, err = io.CopyBuffer(digester.Hash(), fp, *p)
+ bufPool.Put(p)
+ fp.Close()
+ return status, err
+}
+
+// writer provides the main implementation of the Writer method. The caller
+// must hold the ref lock and release it if an error occurs.
+func (s *store) writer(ctx context.Context, ref string, total int64, expected digest.Digest) (content.Writer, error) {
+ // TODO(stevvooe): Need to actually store expected here. We have
+ // code in the service that shouldn't be dealing with this.
+ if expected != "" {
+ p, err := s.blobPath(expected)
+ if err != nil {
+ return nil, fmt.Errorf("calculating expected blob path for writer: %w", err)
+ }
+ if _, err := os.Stat(p); err == nil {
+ return nil, fmt.Errorf("content %v: %w", expected, errdefs.ErrAlreadyExists)
+ }
+ }
+
+ path, refp, data := s.ingestPaths(ref)
+
+ var (
+ digester = digest.Canonical.Digester()
+ offset int64
+ startedAt time.Time
+ updatedAt time.Time
+ )
+
+ foundValidIngest := false
+ // ensure that the ingest path has been created.
+ if err := os.Mkdir(path, 0755); err != nil {
+ if !os.IsExist(err) {
+ return nil, err
+ }
+ status, err := s.resumeStatus(ref, total, digester)
+ if err == nil {
+ foundValidIngest = true
+ updatedAt = status.UpdatedAt
+ startedAt = status.StartedAt
+ total = status.Total
+ offset = status.Offset
+ } else {
+ logrus.Infof("failed to resume the status from path %s: %s. will recreate them", path, err.Error())
+ }
+ }
+
+ if !foundValidIngest {
+ startedAt = time.Now()
+ updatedAt = startedAt
+
+ // the ingest is new, we need to setup the target location.
+ // write the ref to a file for later use
+ if err := os.WriteFile(refp, []byte(ref), 0666); err != nil {
+ return nil, err
+ }
+
+ if err := writeTimestampFile(filepath.Join(path, "startedat"), startedAt); err != nil {
+ return nil, err
+ }
+
+ if err := writeTimestampFile(filepath.Join(path, "updatedat"), startedAt); err != nil {
+ return nil, err
+ }
+
+ if total > 0 {
+ if err := os.WriteFile(filepath.Join(path, "total"), []byte(fmt.Sprint(total)), 0666); err != nil {
+ return nil, err
+ }
+ }
+ }
+
+ fp, err := os.OpenFile(data, os.O_WRONLY|os.O_CREATE, 0666)
+ if err != nil {
+ return nil, fmt.Errorf("failed to open data file: %w", err)
+ }
+
+ if _, err := fp.Seek(offset, io.SeekStart); err != nil {
+ fp.Close()
+ return nil, fmt.Errorf("could not seek to current write offset: %w", err)
+ }
+
+ return &writer{
+ s: s,
+ fp: fp,
+ ref: ref,
+ path: path,
+ offset: offset,
+ total: total,
+ digester: digester,
+ startedAt: startedAt,
+ updatedAt: updatedAt,
+ }, nil
+}
+
+// Abort an active transaction keyed by ref. If the ingest is active, it will
+// be cancelled. Any resources associated with the ingest will be cleaned.
+func (s *store) Abort(ctx context.Context, ref string) error {
+ root := s.ingestRoot(ref)
+ if err := os.RemoveAll(root); err != nil {
+ if os.IsNotExist(err) {
+ return fmt.Errorf("ingest ref %q: %w", ref, errdefs.ErrNotFound)
+ }
+
+ return err
+ }
+
+ return nil
+}
+
+func (s *store) blobPath(dgst digest.Digest) (string, error) {
+ if err := dgst.Validate(); err != nil {
+ return "", fmt.Errorf("cannot calculate blob path from invalid digest: %v: %w", err, errdefs.ErrInvalidArgument)
+ }
+
+ return filepath.Join(s.root, "blobs", dgst.Algorithm().String(), dgst.Encoded()), nil
+}
+
+func (s *store) ingestRoot(ref string) string {
+ // we take a digest of the ref to keep the ingest paths constant length.
+ // Note that this is not the current or potential digest of incoming content.
+ dgst := digest.FromString(ref)
+ return filepath.Join(s.root, "ingest", dgst.Encoded())
+}
+
+// ingestPaths returns the paths used by an ingest. The paths are the following:
+//
+// - root: entire ingest directory
+// - ref: name of the starting ref, must be unique
+// - data: file where data is written
+func (s *store) ingestPaths(ref string) (string, string, string) {
+ var (
+ fp = s.ingestRoot(ref)
+ rp = filepath.Join(fp, "ref")
+ dp = filepath.Join(fp, "data")
+ )
+
+ return fp, rp, dp
+}
+
+func readFileString(path string) (string, error) {
+ p, err := os.ReadFile(path)
+ return string(p), err
+}
+
+// readFileTimestamp reads a file with just a timestamp present.
+func readFileTimestamp(p string) (time.Time, error) {
+ b, err := os.ReadFile(p)
+ if err != nil {
+ if os.IsNotExist(err) {
+ err = fmt.Errorf("%s: %w", err.Error(), errdefs.ErrNotFound)
+ }
+ return time.Time{}, err
+ }
+
+ var t time.Time
+ if err := t.UnmarshalText(b); err != nil {
+ return time.Time{}, fmt.Errorf("could not parse timestamp file %v: %w", p, err)
+ }
+
+ return t, nil
+}
+
+func writeTimestampFile(p string, t time.Time) error {
+ b, err := t.MarshalText()
+ if err != nil {
+ return err
+ }
+ return writeToCompletion(p, b, 0666)
+}
+
+func writeToCompletion(path string, data []byte, mode os.FileMode) error {
+ tmp := fmt.Sprintf("%s.tmp", path)
+ f, err := os.OpenFile(tmp, os.O_RDWR|os.O_CREATE|os.O_TRUNC|os.O_SYNC, mode)
+ if err != nil {
+ return fmt.Errorf("create tmp file: %w", err)
+ }
+ _, err = f.Write(data)
+ f.Close()
+ if err != nil {
+ return fmt.Errorf("write tmp file: %w", err)
+ }
+ err = os.Rename(tmp, path)
+ if err != nil {
+ return fmt.Errorf("rename tmp file: %w", err)
+ }
+ return nil
+}
diff --git a/vendor/github.com/containerd/containerd/content/local/store_bsd.go b/vendor/github.com/containerd/containerd/content/local/store_bsd.go
new file mode 100644
index 000000000..7dcc19232
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/local/store_bsd.go
@@ -0,0 +1,33 @@
+//go:build darwin || freebsd || netbsd
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "os"
+ "syscall"
+ "time"
+)
+
+func getATime(fi os.FileInfo) time.Time {
+ if st, ok := fi.Sys().(*syscall.Stat_t); ok {
+ return time.Unix(st.Atimespec.Unix())
+ }
+
+ return fi.ModTime()
+}
diff --git a/vendor/github.com/containerd/containerd/content/local/store_openbsd.go b/vendor/github.com/containerd/containerd/content/local/store_openbsd.go
new file mode 100644
index 000000000..45dfa9997
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/local/store_openbsd.go
@@ -0,0 +1,33 @@
+//go:build openbsd
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "os"
+ "syscall"
+ "time"
+)
+
+func getATime(fi os.FileInfo) time.Time {
+ if st, ok := fi.Sys().(*syscall.Stat_t); ok {
+ return time.Unix(st.Atim.Unix())
+ }
+
+ return fi.ModTime()
+}
diff --git a/vendor/github.com/containerd/containerd/content/local/store_unix.go b/vendor/github.com/containerd/containerd/content/local/store_unix.go
new file mode 100644
index 000000000..cb01c91c7
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/local/store_unix.go
@@ -0,0 +1,33 @@
+//go:build linux || solaris
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "os"
+ "syscall"
+ "time"
+)
+
+func getATime(fi os.FileInfo) time.Time {
+ if st, ok := fi.Sys().(*syscall.Stat_t); ok {
+ return time.Unix(st.Atim.Unix())
+ }
+
+ return fi.ModTime()
+}
diff --git a/vendor/github.com/containerd/containerd/content/local/store_windows.go b/vendor/github.com/containerd/containerd/content/local/store_windows.go
new file mode 100644
index 000000000..bce849979
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/local/store_windows.go
@@ -0,0 +1,26 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "os"
+ "time"
+)
+
+func getATime(fi os.FileInfo) time.Time {
+ return fi.ModTime()
+}
diff --git a/vendor/github.com/containerd/containerd/content/local/test_helper.go b/vendor/github.com/containerd/containerd/content/local/test_helper.go
new file mode 100644
index 000000000..42de01cae
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/local/test_helper.go
@@ -0,0 +1,38 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "context"
+ "testing"
+
+ "github.com/containerd/containerd/content"
+)
+
+func contentStoreEnv(t testing.TB) (context.Context, string, content.Store, func()) {
+ tmpdir := t.TempDir()
+
+ cs, err := NewStore(tmpdir)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ ctx, cancel := context.WithCancel(context.Background())
+ return ctx, tmpdir, cs, func() {
+ cancel()
+ }
+}
diff --git a/vendor/github.com/containerd/containerd/content/local/writer.go b/vendor/github.com/containerd/containerd/content/local/writer.go
new file mode 100644
index 000000000..0cd8f2d04
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/content/local/writer.go
@@ -0,0 +1,209 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "io"
+ "os"
+ "path/filepath"
+ "runtime"
+ "time"
+
+ "github.com/containerd/log"
+ "github.com/opencontainers/go-digest"
+
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
+)
+
+// writer represents a write transaction against the blob store.
+type writer struct {
+ s *store
+ fp *os.File // opened data file
+ path string // path to writer dir
+ ref string // ref key
+ offset int64
+ total int64
+ digester digest.Digester
+ startedAt time.Time
+ updatedAt time.Time
+}
+
+func (w *writer) Status() (content.Status, error) {
+ return content.Status{
+ Ref: w.ref,
+ Offset: w.offset,
+ Total: w.total,
+ StartedAt: w.startedAt,
+ UpdatedAt: w.updatedAt,
+ }, nil
+}
+
+// Digest returns the current digest of the content, up to the current write.
+//
+// Cannot be called concurrently with `Write`.
+func (w *writer) Digest() digest.Digest {
+ return w.digester.Digest()
+}
+
+// Write p to the transaction.
+//
+// Note that writes are unbuffered to the backing file. When writing, it is
+// recommended to wrap in a bufio.Writer or, preferably, use io.CopyBuffer.
+func (w *writer) Write(p []byte) (n int, err error) {
+ n, err = w.fp.Write(p)
+ w.digester.Hash().Write(p[:n])
+ w.offset += int64(n)
+ w.updatedAt = time.Now()
+ return n, err
+}
+
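+// copyInto is an editorial sketch illustrating the note on Write above:
+// writes are unbuffered, so bulk copies should reuse a buffer via
+// io.CopyBuffer (or wrap the writer in a bufio.Writer).
+func copyInto(w content.Writer, src io.Reader) (int64, error) {
+ buf := make([]byte, 1<<20) // 1 MiB reusable copy buffer
+ return io.CopyBuffer(w, src, buf)
+}
+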
+func (w *writer) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error {
+ // Ensure the ref lock is released even on error
+ defer unlock(w.ref)
+
+ var base content.Info
+ for _, opt := range opts {
+ if err := opt(&base); err != nil {
+ return err
+ }
+ }
+
+ fp := w.fp
+ w.fp = nil
+
+ if fp == nil {
+ return fmt.Errorf("cannot commit on closed writer: %w", errdefs.ErrFailedPrecondition)
+ }
+
+ if err := fp.Sync(); err != nil {
+ fp.Close()
+ return fmt.Errorf("sync failed: %w", err)
+ }
+
+ fi, err := fp.Stat()
+ closeErr := fp.Close()
+ if err != nil {
+ return fmt.Errorf("stat on ingest file failed: %w", err)
+ }
+ if closeErr != nil {
+ return fmt.Errorf("failed to close ingest file: %w", closeErr)
+ }
+
+ if size > 0 && size != fi.Size() {
+ return fmt.Errorf("unexpected commit size %d, expected %d: %w", fi.Size(), size, errdefs.ErrFailedPrecondition)
+ }
+
+ dgst := w.digester.Digest()
+ if expected != "" && expected != dgst {
+ return fmt.Errorf("unexpected commit digest %s, expected %s: %w", dgst, expected, errdefs.ErrFailedPrecondition)
+ }
+
+ var (
+ ingest = filepath.Join(w.path, "data")
+ target, _ = w.s.blobPath(dgst) // ignore error because we calculated this dgst
+ )
+
+ // make sure parent directories of blob exist
+ if err := os.MkdirAll(filepath.Dir(target), 0755); err != nil {
+ return err
+ }
+
+ if _, err := os.Stat(target); err == nil {
+ // collision with the target file!
+ if err := os.RemoveAll(w.path); err != nil {
+ log.G(ctx).WithField("ref", w.ref).WithField("path", w.path).Error("failed to remove ingest directory")
+ }
+ return fmt.Errorf("content %v: %w", dgst, errdefs.ErrAlreadyExists)
+ }
+
+ if err := os.Rename(ingest, target); err != nil {
+ return err
+ }
+
+ // Ingest has now been made available in the content store, attempt to complete
+ // setting metadata but errors should only be logged and not returned since
+ // the content store cannot be cleanly rolled back.
+
+ commitTime := time.Now()
+ if err := os.Chtimes(target, commitTime, commitTime); err != nil {
+ log.G(ctx).WithField("digest", dgst).Error("failed to change file time to commit time")
+ }
+
+ // clean up!!
+ if err := os.RemoveAll(w.path); err != nil {
+ log.G(ctx).WithField("ref", w.ref).WithField("path", w.path).Error("failed to remove ingest directory")
+ }
+
+ if w.s.ls != nil && base.Labels != nil {
+ if err := w.s.ls.Set(dgst, base.Labels); err != nil {
+ log.G(ctx).WithField("digest", dgst).Error("failed to set labels")
+ }
+ }
+
+ // change to readonly; most important for reads, but it provides _some_
+ // protection from this point on. We mask the existing perms to allow
+ // reads only, honoring the umask on creation.
+ //
+ // This removes write and exec, only allowing read per the creation umask.
+ //
+ // NOTE: Windows does not support this operation
+ if runtime.GOOS != "windows" {
+ if err := os.Chmod(target, (fi.Mode()&os.ModePerm)&^0333); err != nil {
+ log.G(ctx).WithField("ref", w.ref).Error("failed to make readonly")
+ }
+ }
+
+ return nil
+}
+
+// Close the writer, flushing any unwritten data and leaving the progress
+// intact.
+//
+// If one needs to resume the transaction, a new writer can be obtained from
+// `Ingester.Writer` using the same key. The write can then be continued
+// from where it was left off.
+//
+// To abandon a transaction completely, first call Close, then `IngestManager.Abort` to
+// clean up the associated resources.
+func (w *writer) Close() (err error) {
+ if w.fp != nil {
+ w.fp.Sync()
+ err = w.fp.Close()
+ writeTimestampFile(filepath.Join(w.path, "updatedat"), w.updatedAt)
+ w.fp = nil
+ unlock(w.ref)
+ return
+ }
+
+ return nil
+}
+
+func (w *writer) Truncate(size int64) error {
+ if size != 0 {
+ return errors.New("Truncate: unsupported size")
+ }
+ w.offset = 0
+ w.digester.Hash().Reset()
+ if _, err := w.fp.Seek(0, io.SeekStart); err != nil {
+ return err
+ }
+ return w.fp.Truncate(0)
+}
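+
+// reopenAndResume is an editorial sketch of the Close/resume contract
+// documented on Close above: closing leaves the ingest intact, and a new
+// writer obtained for the same ref reports the prior progress in Status.
+func reopenAndResume(ctx context.Context, ing content.Ingester, ref string) (content.Writer, error) {
+ w, err := ing.Writer(ctx, content.WithRef(ref))
+ if err != nil {
+ return nil, err
+ }
+ st, err := w.Status()
+ if err != nil {
+ w.Close()
+ return nil, err
+ }
+ _ = st.Offset // bytes already ingested; writing resumes from here
+ return w, nil
+}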
diff --git a/vendor/github.com/containerd/containerd/errdefs/errors.go b/vendor/github.com/containerd/containerd/errdefs/errors.go
new file mode 100644
index 000000000..de22cadd4
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/errdefs/errors.go
@@ -0,0 +1,72 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+// Package errdefs defines the common errors used throughout containerd
+// packages.
+//
+// Use with fmt.Errorf to add context to an error.
+//
+// To detect an error class, use the IsXXX functions to tell whether an error
+// is of a certain type.
+package errdefs
+
+import (
+ "github.com/containerd/errdefs"
+)
+
+// Definitions of common error types used throughout containerd. All containerd
+// errors returned by most packages will map into one of these errors classes.
+// Packages should return errors of these types when they want to instruct a
+// client to take a particular action.
+//
+// These errors map closely to grpc errors.
+var (
+ ErrUnknown = errdefs.ErrUnknown
+ ErrInvalidArgument = errdefs.ErrInvalidArgument
+ ErrNotFound = errdefs.ErrNotFound
+ ErrAlreadyExists = errdefs.ErrAlreadyExists
+ ErrPermissionDenied = errdefs.ErrPermissionDenied
+ ErrResourceExhausted = errdefs.ErrResourceExhausted
+ ErrFailedPrecondition = errdefs.ErrFailedPrecondition
+ ErrConflict = errdefs.ErrConflict
+ ErrNotModified = errdefs.ErrNotModified
+ ErrAborted = errdefs.ErrAborted
+ ErrOutOfRange = errdefs.ErrOutOfRange
+ ErrNotImplemented = errdefs.ErrNotImplemented
+ ErrInternal = errdefs.ErrInternal
+ ErrUnavailable = errdefs.ErrUnavailable
+ ErrDataLoss = errdefs.ErrDataLoss
+ ErrUnauthenticated = errdefs.ErrUnauthenticated
+
+ IsCanceled = errdefs.IsCanceled
+ IsUnknown = errdefs.IsUnknown
+ IsInvalidArgument = errdefs.IsInvalidArgument
+ IsDeadlineExceeded = errdefs.IsDeadlineExceeded
+ IsNotFound = errdefs.IsNotFound
+ IsAlreadyExists = errdefs.IsAlreadyExists
+ IsPermissionDenied = errdefs.IsPermissionDenied
+ IsResourceExhausted = errdefs.IsResourceExhausted
+ IsFailedPrecondition = errdefs.IsFailedPrecondition
+ IsConflict = errdefs.IsConflict
+ IsNotModified = errdefs.IsNotModified
+ IsAborted = errdefs.IsAborted
+ IsOutOfRange = errdefs.IsOutOfRange
+ IsNotImplemented = errdefs.IsNotImplemented
+ IsInternal = errdefs.IsInternal
+ IsUnavailable = errdefs.IsUnavailable
+ IsDataLoss = errdefs.IsDataLoss
+ IsUnauthorized = errdefs.IsUnauthorized
+)
diff --git a/vendor/github.com/containerd/containerd/errdefs/grpc.go b/vendor/github.com/containerd/containerd/errdefs/grpc.go
new file mode 100644
index 000000000..7a9b33e05
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/errdefs/grpc.go
@@ -0,0 +1,147 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package errdefs
+
+import (
+ "context"
+ "fmt"
+ "strings"
+
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
+)
+
+// ToGRPC will attempt to map the backend containerd error into a grpc error,
+// using the original error message as a description.
+//
+// Further information may be extracted from certain errors depending on their
+// type.
+//
+// If the error is unmapped, the original error will be returned to be handled
+// by the regular grpc error handling stack.
+func ToGRPC(err error) error {
+ if err == nil {
+ return nil
+ }
+
+ if isGRPCError(err) {
+ // error has already been mapped to grpc
+ return err
+ }
+
+ switch {
+ case IsInvalidArgument(err):
+ return status.Error(codes.InvalidArgument, err.Error())
+ case IsNotFound(err):
+ return status.Error(codes.NotFound, err.Error())
+ case IsAlreadyExists(err):
+ return status.Error(codes.AlreadyExists, err.Error())
+ case IsFailedPrecondition(err):
+ return status.Error(codes.FailedPrecondition, err.Error())
+ case IsUnavailable(err):
+ return status.Error(codes.Unavailable, err.Error())
+ case IsNotImplemented(err):
+ return status.Error(codes.Unimplemented, err.Error())
+ case IsCanceled(err):
+ return status.Error(codes.Canceled, err.Error())
+ case IsDeadlineExceeded(err):
+ return status.Error(codes.DeadlineExceeded, err.Error())
+ }
+
+ return err
+}
+
+// ToGRPCf maps the error to grpc error codes, assembling the formatting string
+// and combining it with the target error string.
+//
+// This is equivalent to errdefs.ToGRPC(fmt.Errorf("%s: %w", fmt.Sprintf(format, args...), err))
+func ToGRPCf(err error, format string, args ...interface{}) error {
+ return ToGRPC(fmt.Errorf("%s: %w", fmt.Sprintf(format, args...), err))
+}
+
+// FromGRPC returns the underlying error from a grpc service based on the grpc error code
+func FromGRPC(err error) error {
+ if err == nil {
+ return nil
+ }
+
+ var cls error // divide these into error classes, becomes the cause
+
+ switch code(err) {
+ case codes.InvalidArgument:
+ cls = ErrInvalidArgument
+ case codes.AlreadyExists:
+ cls = ErrAlreadyExists
+ case codes.NotFound:
+ cls = ErrNotFound
+ case codes.Unavailable:
+ cls = ErrUnavailable
+ case codes.FailedPrecondition:
+ cls = ErrFailedPrecondition
+ case codes.Unimplemented:
+ cls = ErrNotImplemented
+ case codes.Canceled:
+ cls = context.Canceled
+ case codes.DeadlineExceeded:
+ cls = context.DeadlineExceeded
+ default:
+ cls = ErrUnknown
+ }
+
+ msg := rebaseMessage(cls, err)
+ if msg != "" {
+ err = fmt.Errorf("%s: %w", msg, cls)
+ } else {
+ err = cls
+ }
+
+ return err
+}
+
+// rebaseMessage removes the repeats for an error at the end of an error
+// string. This will happen when taking an error over grpc then remapping it.
+//
+// Effectively, we just remove the string of cls from the end of err if it
+// appears there.
+func rebaseMessage(cls error, err error) string {
+ desc := errDesc(err)
+ clss := cls.Error()
+ if desc == clss {
+ return ""
+ }
+
+ return strings.TrimSuffix(desc, ": "+clss)
+}
+
+func isGRPCError(err error) bool {
+ _, ok := status.FromError(err)
+ return ok
+}
+
+func code(err error) codes.Code {
+ if s, ok := status.FromError(err); ok {
+ return s.Code()
+ }
+ return codes.Unknown
+}
+
+func errDesc(err error) string {
+ if s, ok := status.FromError(err); ok {
+ return s.Message()
+ }
+ return err.Error()
+}
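+
+// exampleRoundTrip is an editorial sketch: an error class mapped to a gRPC
+// status by ToGRPC is recovered as the same class by FromGRPC, so callers
+// on either side of the wire can keep using the Is* helpers.
+func exampleRoundTrip() bool {
+ err := fmt.Errorf("blob sha256:deadbeef: %w", ErrNotFound)
+ grpcErr := ToGRPC(err) // carries codes.NotFound
+ recovered := FromGRPC(grpcErr) // wraps ErrNotFound again
+ return IsNotFound(recovered) // true
+}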
diff --git a/vendor/github.com/containerd/containerd/filters/adaptor.go b/vendor/github.com/containerd/containerd/filters/adaptor.go
new file mode 100644
index 000000000..5a9c559c1
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/filters/adaptor.go
@@ -0,0 +1,33 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package filters
+
+// Adaptor specifies the mapping of fieldpaths to a type. For the given field
+// path, the value and whether it is present should be returned. The mapping of
+// the fieldpath to a field is deferred to the adaptor implementation, but
+// should generally follow protobuf field path/mask semantics.
+type Adaptor interface {
+ Field(fieldpath []string) (value string, present bool)
+}
+
+// AdapterFunc allows implementation specific matching of fieldpaths
+type AdapterFunc func(fieldpath []string) (string, bool)
+
+// Field returns the field name and true if it exists
+func (fn AdapterFunc) Field(fieldpath []string) (string, bool) {
+ return fn(fieldpath)
+}
diff --git a/vendor/github.com/containerd/containerd/filters/filter.go b/vendor/github.com/containerd/containerd/filters/filter.go
new file mode 100644
index 000000000..dcc569a4b
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/filters/filter.go
@@ -0,0 +1,178 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+// Package filters defines a syntax and parser that can be used for the
+// filtration of items across the containerd API. The core is built on the
+// concept of protobuf field paths, with quoting. Several operators allow the
+// user to flexibly select items based on field presence, equality, inequality
+// and regular expressions. Flexible adaptors support working with any type.
+//
+// The syntax is fairly familiar, if you've used container ecosystem
+// projects. At the core, we base it on the concept of protobuf field
+// paths, augmenting with the ability to quote portions of the field path
+// to match arbitrary labels. These "selectors" come in the following
+// syntax:
+//
+// ```
+// <fieldpath>[<operator><value>]
+// ```
+//
+// A basic example is as follows:
+//
+// ```
+// name==foo
+// ```
+//
+// This would match all objects that have a field `name` with the value
+// `foo`. If we only want to test if the field is present, we can omit the
+// operator. This is most useful for matching labels in containerd. The
+// following will match objects that have the field "labels" and have the
+// label "foo" defined:
+//
+// ```
+// labels.foo
+// ```
+//
+// We also allow for quoting of parts of the field path to allow matching
+// of arbitrary items:
+//
+// ```
+// labels."very complex label"==something
+// ```
+//
+// We also define `!=` and `~=` as operators. The `!=` will match all
+// objects that don't match the value for a field and `~=` will compile the
+// target value as a regular expression and match the field value against that.
+//
+// Selectors can be combined using a comma, such that the resulting
+// selector will require all selectors are matched for the object to match.
+// The following example will match objects that are named `foo` and have
+// the label `bar`:
+//
+// ```
+// name==foo,labels.bar
+// ```
+package filters
+
+import (
+ "regexp"
+
+ "github.com/containerd/log"
+)
+
+// Filter matches specific resources based the provided filter
+type Filter interface {
+ Match(adaptor Adaptor) bool
+}
+
+// FilterFunc is a function that handles matching with an adaptor
+type FilterFunc func(Adaptor) bool
+
+// Match matches the FilterFunc returning true if the object matches the filter
+func (fn FilterFunc) Match(adaptor Adaptor) bool {
+ return fn(adaptor)
+}
+
+// Always is a filter that always returns true for any type of object
+var Always FilterFunc = func(adaptor Adaptor) bool {
+ return true
+}
+
+// Any allows multiple filters to be matched against the object
+type Any []Filter
+
+// Match returns true if any of the provided filters are true
+func (m Any) Match(adaptor Adaptor) bool {
+ for _, m := range m {
+ if m.Match(adaptor) {
+ return true
+ }
+ }
+
+ return false
+}
+
+// All allows multiple filters to be matched against the object
+type All []Filter
+
+// Match only returns true if all filters match the object
+func (m All) Match(adaptor Adaptor) bool {
+ for _, m := range m {
+ if !m.Match(adaptor) {
+ return false
+ }
+ }
+
+ return true
+}
+
+type operator int
+
+const (
+ operatorPresent = iota
+ operatorEqual
+ operatorNotEqual
+ operatorMatches
+)
+
+func (op operator) String() string {
+ switch op {
+ case operatorPresent:
+ return "?"
+ case operatorEqual:
+ return "=="
+ case operatorNotEqual:
+ return "!="
+ case operatorMatches:
+ return "~="
+ }
+
+ return "unknown"
+}
+
+type selector struct {
+ fieldpath []string
+ operator operator
+ value string
+ re *regexp.Regexp
+}
+
+func (m selector) Match(adaptor Adaptor) bool {
+ value, present := adaptor.Field(m.fieldpath)
+
+ switch m.operator {
+ case operatorPresent:
+ return present
+ case operatorEqual:
+ return present && value == m.value
+ case operatorNotEqual:
+ return value != m.value
+ case operatorMatches:
+ if m.re == nil {
+ r, err := regexp.Compile(m.value)
+ if err != nil {
+ log.L.Errorf("error compiling regexp %q", m.value)
+ return false
+ }
+
+ m.re = r
+ }
+
+ return m.re.MatchString(value)
+ default:
+ return false
+ }
+}
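+
+// exampleMatch is an editorial sketch tying the selector syntax described
+// in the package documentation to the Adaptor interface. Parse is defined
+// in parser.go in this package; the field values here are made up.
+func exampleMatch() bool {
+ f, err := Parse("name==foo,labels.env")
+ if err != nil {
+ return false
+ }
+ adaptor := AdapterFunc(func(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "name":
+ return "foo", true
+ case "labels":
+ if len(fieldpath) > 1 && fieldpath[1] == "env" {
+ return "prod", true
+ }
+ }
+ return "", false
+ })
+ return f.Match(adaptor) // true: name equals foo and labels.env is present
+}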
diff --git a/vendor/github.com/containerd/containerd/filters/parser.go b/vendor/github.com/containerd/containerd/filters/parser.go
new file mode 100644
index 000000000..32767909b
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/filters/parser.go
@@ -0,0 +1,290 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package filters
+
+import (
+ "fmt"
+ "io"
+
+ "github.com/containerd/containerd/errdefs"
+)
+
+/*
+Parse the strings into a filter that may be used with an adaptor.
+
+The filter is made up of zero or more selectors.
+
+The format is a comma separated list of expressions, in the form of
+`<fieldpath><operator><value>`, known as selectors. All selectors must match
+the target object for the filter to be true.
+
+We define the operators "==" for equality, "!=" for not equal and "~=" for a
+regular expression. If the operator and value are not present, the matcher will
+test for the presence of a value, as defined by the target object.
+
+The formal grammar is as follows:
+
+selectors := selector ("," selector)*
+selector := fieldpath (operator value)
+fieldpath := field ('.' field)*
+field := quoted | [A-Za-z] [A-Za-z0-9_]+
+operator := "==" | "!=" | "~="
+value := quoted | [^\s,]+
+quoted := <go string syntax>
+*/
+func Parse(s string) (Filter, error) {
+ // special case empty to match all
+ if s == "" {
+ return Always, nil
+ }
+
+ p := parser{input: s}
+ return p.parse()
+}
+
+// ParseAll parses each filter in ss and returns a filter that will return true
+// if any filter matches the expression.
+//
+// If no filters are provided, the filter will match anything.
+func ParseAll(ss ...string) (Filter, error) {
+ if len(ss) == 0 {
+ return Always, nil
+ }
+
+ var fs []Filter
+ for _, s := range ss {
+ f, err := Parse(s)
+ if err != nil {
+ return nil, fmt.Errorf("%s: %w", err.Error(), errdefs.ErrInvalidArgument)
+ }
+
+ fs = append(fs, f)
+ }
+
+ return Any(fs), nil
+}
+
+type parser struct {
+ input string
+ scanner scanner
+}
+
+func (p *parser) parse() (Filter, error) {
+ p.scanner.init(p.input)
+
+ ss, err := p.selectors()
+ if err != nil {
+ return nil, fmt.Errorf("filters: %w", err)
+ }
+
+ return ss, nil
+}
+
+func (p *parser) selectors() (Filter, error) {
+ s, err := p.selector()
+ if err != nil {
+ return nil, err
+ }
+
+ ss := All{s}
+
+loop:
+ for {
+ tok := p.scanner.peek()
+ switch tok {
+ case ',':
+ pos, tok, _ := p.scanner.scan()
+ if tok != tokenSeparator {
+ return nil, p.mkerr(pos, "expected a separator")
+ }
+
+ s, err := p.selector()
+ if err != nil {
+ return nil, err
+ }
+
+ ss = append(ss, s)
+ case tokenEOF:
+ break loop
+ default:
+ return nil, p.mkerr(p.scanner.ppos, "unexpected input: %v", string(tok))
+ }
+ }
+
+ return ss, nil
+}
+
+func (p *parser) selector() (selector, error) {
+ fieldpath, err := p.fieldpath()
+ if err != nil {
+ return selector{}, err
+ }
+
+ switch p.scanner.peek() {
+ case ',', tokenSeparator, tokenEOF:
+ return selector{
+ fieldpath: fieldpath,
+ operator: operatorPresent,
+ }, nil
+ }
+
+ op, err := p.operator()
+ if err != nil {
+ return selector{}, err
+ }
+
+ var allowAltQuotes bool
+ if op == operatorMatches {
+ allowAltQuotes = true
+ }
+
+ value, err := p.value(allowAltQuotes)
+ if err != nil {
+ if err == io.EOF {
+ return selector{}, io.ErrUnexpectedEOF
+ }
+ return selector{}, err
+ }
+
+ return selector{
+ fieldpath: fieldpath,
+ value: value,
+ operator: op,
+ }, nil
+}
+
+func (p *parser) fieldpath() ([]string, error) {
+ f, err := p.field()
+ if err != nil {
+ return nil, err
+ }
+
+ fs := []string{f}
+loop:
+ for {
+ tok := p.scanner.peek() // lookahead to consume field separator
+
+ switch tok {
+ case '.':
+ pos, tok, _ := p.scanner.scan() // consume separator
+ if tok != tokenSeparator {
+ return nil, p.mkerr(pos, "expected a field separator (`.`)")
+ }
+
+ f, err := p.field()
+ if err != nil {
+ return nil, err
+ }
+
+ fs = append(fs, f)
+ default:
+ // let the layer above handle the other bad cases.
+ break loop
+ }
+ }
+
+ return fs, nil
+}
+
+func (p *parser) field() (string, error) {
+ pos, tok, s := p.scanner.scan()
+ switch tok {
+ case tokenField:
+ return s, nil
+ case tokenQuoted:
+ return p.unquote(pos, s, false)
+ case tokenIllegal:
+ return "", p.mkerr(pos, p.scanner.err)
+ }
+
+ return "", p.mkerr(pos, "expected field or quoted")
+}
+
+func (p *parser) operator() (operator, error) {
+ pos, tok, s := p.scanner.scan()
+ switch tok {
+ case tokenOperator:
+ switch s {
+ case "==":
+ return operatorEqual, nil
+ case "!=":
+ return operatorNotEqual, nil
+ case "~=":
+ return operatorMatches, nil
+ default:
+ return 0, p.mkerr(pos, "unsupported operator %q", s)
+ }
+ case tokenIllegal:
+ return 0, p.mkerr(pos, p.scanner.err)
+ }
+
+ return 0, p.mkerr(pos, `expected an operator ("=="|"!="|"~=")`)
+}
+
+func (p *parser) value(allowAltQuotes bool) (string, error) {
+ pos, tok, s := p.scanner.scan()
+
+ switch tok {
+ case tokenValue, tokenField:
+ return s, nil
+ case tokenQuoted:
+ return p.unquote(pos, s, allowAltQuotes)
+ case tokenIllegal:
+ return "", p.mkerr(pos, p.scanner.err)
+ }
+
+ return "", p.mkerr(pos, "expected value or quoted")
+}
+
+func (p *parser) unquote(pos int, s string, allowAlts bool) (string, error) {
+ if !allowAlts && s[0] != '\'' && s[0] != '"' {
+ return "", p.mkerr(pos, "invalid quote encountered")
+ }
+
+ uq, err := unquote(s)
+ if err != nil {
+ return "", p.mkerr(pos, "unquoting failed: %v", err)
+ }
+
+ return uq, nil
+}
+
+type parseError struct {
+ input string
+ pos int
+ msg string
+}
+
+func (pe parseError) Error() string {
+ if pe.pos < len(pe.input) {
+ before := pe.input[:pe.pos]
+ location := pe.input[pe.pos : pe.pos+1] // need to handle end
+ after := pe.input[pe.pos+1:]
+
+ return fmt.Sprintf("[%s >|%s|< %s]: %v", before, location, after, pe.msg)
+ }
+
+ return fmt.Sprintf("[%s]: %v", pe.input, pe.msg)
+}
+
+func (p *parser) mkerr(pos int, format string, args ...interface{}) error {
+ return fmt.Errorf("parse error: %w", parseError{
+ input: p.input,
+ pos: pos,
+ msg: fmt.Sprintf(format, args...),
+ })
+}
diff --git a/vendor/github.com/containerd/containerd/filters/quote.go b/vendor/github.com/containerd/containerd/filters/quote.go
new file mode 100644
index 000000000..5c800ef84
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/filters/quote.go
@@ -0,0 +1,252 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package filters
+
+import (
+ "errors"
+ "unicode/utf8"
+)
+
+// NOTE(stevvooe): Most of this code in this file is copied from the stdlib
+// strconv package and modified to be able to handle quoting with `/` and `|`
+// as delimiters. The copyright is held by the Go authors.
+
+var errQuoteSyntax = errors.New("quote syntax error")
+
+// UnquoteChar decodes the first character or byte in the escaped string
+// or character literal represented by the string s.
+// It returns four values:
+//
+// 1. value, the decoded Unicode code point or byte value;
+// 2. multibyte, a boolean indicating whether the decoded character requires a multibyte UTF-8 representation;
+// 3. tail, the remainder of the string after the character; and
+// 4. an error that will be nil if the character is syntactically valid.
+//
+// The second argument, quote, specifies the type of literal being parsed
+// and therefore which escaped quote character is permitted.
+// If set to a single quote, it permits the sequence \' and disallows unescaped '.
+// If set to a double quote, it permits \" and disallows unescaped ".
+// If set to zero, it does not permit either escape and allows both quote characters to appear unescaped.
+//
+// This is from Go strconv package, modified to support `|` and `/` as double
+// quotes for use with regular expressions.
+func unquoteChar(s string, quote byte) (value rune, multibyte bool, tail string, err error) {
+ // easy cases
+ switch c := s[0]; {
+ case c == quote && (quote == '\'' || quote == '"' || quote == '/' || quote == '|'):
+ err = errQuoteSyntax
+ return
+ case c >= utf8.RuneSelf:
+ r, size := utf8.DecodeRuneInString(s)
+ return r, true, s[size:], nil
+ case c != '\\':
+ return rune(s[0]), false, s[1:], nil
+ }
+
+ // hard case: c is backslash
+ if len(s) <= 1 {
+ err = errQuoteSyntax
+ return
+ }
+ c := s[1]
+ s = s[2:]
+
+ switch c {
+ case 'a':
+ value = '\a'
+ case 'b':
+ value = '\b'
+ case 'f':
+ value = '\f'
+ case 'n':
+ value = '\n'
+ case 'r':
+ value = '\r'
+ case 't':
+ value = '\t'
+ case 'v':
+ value = '\v'
+ case 'x', 'u', 'U':
+ n := 0
+ switch c {
+ case 'x':
+ n = 2
+ case 'u':
+ n = 4
+ case 'U':
+ n = 8
+ }
+ var v rune
+ if len(s) < n {
+ err = errQuoteSyntax
+ return
+ }
+ for j := 0; j < n; j++ {
+ x, ok := unhex(s[j])
+ if !ok {
+ err = errQuoteSyntax
+ return
+ }
+ v = v<<4 | x
+ }
+ s = s[n:]
+ if c == 'x' {
+ // single-byte string, possibly not UTF-8
+ value = v
+ break
+ }
+ if v > utf8.MaxRune {
+ err = errQuoteSyntax
+ return
+ }
+ value = v
+ multibyte = true
+ case '0', '1', '2', '3', '4', '5', '6', '7':
+ v := rune(c) - '0'
+ if len(s) < 2 {
+ err = errQuoteSyntax
+ return
+ }
+ for j := 0; j < 2; j++ { // one digit already; two more
+ x := rune(s[j]) - '0'
+ if x < 0 || x > 7 {
+ err = errQuoteSyntax
+ return
+ }
+ v = (v << 3) | x
+ }
+ s = s[2:]
+ if v > 255 {
+ err = errQuoteSyntax
+ return
+ }
+ value = v
+ case '\\':
+ value = '\\'
+ case '\'', '"', '|', '/':
+ if c != quote {
+ err = errQuoteSyntax
+ return
+ }
+ value = rune(c)
+ default:
+ err = errQuoteSyntax
+ return
+ }
+ tail = s
+ return
+}
+
+// unquote interprets s as a single-quoted, double-quoted,
+// or backquoted Go string literal, returning the string value
+// that s quotes. (If s is single-quoted, it would be a Go
+// character literal; Unquote returns the corresponding
+// one-character string.)
+//
+// This is modified from the standard library to support `|` and `/` as quote
+// characters for use with regular expressions.
+func unquote(s string) (string, error) {
+ n := len(s)
+ if n < 2 {
+ return "", errQuoteSyntax
+ }
+ quote := s[0]
+ if quote != s[n-1] {
+ return "", errQuoteSyntax
+ }
+ s = s[1 : n-1]
+
+ if quote == '`' {
+ if contains(s, '`') {
+ return "", errQuoteSyntax
+ }
+ if contains(s, '\r') {
+ // -1 because we know there is at least one \r to remove.
+ buf := make([]byte, 0, len(s)-1)
+ for i := 0; i < len(s); i++ {
+ if s[i] != '\r' {
+ buf = append(buf, s[i])
+ }
+ }
+ return string(buf), nil
+ }
+ return s, nil
+ }
+ if quote != '"' && quote != '\'' && quote != '|' && quote != '/' {
+ return "", errQuoteSyntax
+ }
+ if contains(s, '\n') {
+ return "", errQuoteSyntax
+ }
+
+ // Is it trivial? Avoid allocation.
+ if !contains(s, '\\') && !contains(s, quote) {
+ switch quote {
+ case '"', '/', '|': // pipe and slash are treated like double quote
+ return s, nil
+ case '\'':
+ r, size := utf8.DecodeRuneInString(s)
+ if size == len(s) && (r != utf8.RuneError || size != 1) {
+ return s, nil
+ }
+ }
+ }
+
+ var runeTmp [utf8.UTFMax]byte
+ buf := make([]byte, 0, 3*len(s)/2) // Try to avoid more allocations.
+ for len(s) > 0 {
+ c, multibyte, ss, err := unquoteChar(s, quote)
+ if err != nil {
+ return "", err
+ }
+ s = ss
+ if c < utf8.RuneSelf || !multibyte {
+ buf = append(buf, byte(c))
+ } else {
+ n := utf8.EncodeRune(runeTmp[:], c)
+ buf = append(buf, runeTmp[:n]...)
+ }
+ if quote == '\'' && len(s) != 0 {
+ // single-quoted must be single character
+ return "", errQuoteSyntax
+ }
+ }
+ return string(buf), nil
+}
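+
+// exampleUnquote is an editorial sketch: slash and pipe quoting exist so
+// regular-expression values survive filter parsing, e.g. `name~=/foo.*/`.
+func exampleUnquote() (string, error) {
+ return unquote(`/foo.*/`) // yields "foo.*", nil
+}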
+
+// contains reports whether the string contains the byte c.
+func contains(s string, c byte) bool {
+ for i := 0; i < len(s); i++ {
+ if s[i] == c {
+ return true
+ }
+ }
+ return false
+}
+
+func unhex(b byte) (v rune, ok bool) {
+ c := rune(b)
+ switch {
+ case '0' <= c && c <= '9':
+ return c - '0', true
+ case 'a' <= c && c <= 'f':
+ return c - 'a' + 10, true
+ case 'A' <= c && c <= 'F':
+ return c - 'A' + 10, true
+ }
+ return
+}
diff --git a/vendor/github.com/containerd/containerd/filters/scanner.go b/vendor/github.com/containerd/containerd/filters/scanner.go
new file mode 100644
index 000000000..6a485467b
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/filters/scanner.go
@@ -0,0 +1,297 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package filters
+
+import (
+ "unicode"
+ "unicode/utf8"
+)
+
+const (
+ tokenEOF = -(iota + 1)
+ tokenQuoted
+ tokenValue
+ tokenField
+ tokenSeparator
+ tokenOperator
+ tokenIllegal
+)
+
+type token rune
+
+func (t token) String() string {
+ switch t {
+ case tokenEOF:
+ return "EOF"
+ case tokenQuoted:
+ return "Quoted"
+ case tokenValue:
+ return "Value"
+ case tokenField:
+ return "Field"
+ case tokenSeparator:
+ return "Separator"
+ case tokenOperator:
+ return "Operator"
+ case tokenIllegal:
+ return "Illegal"
+ }
+
+ return string(t)
+}
+
+func (t token) GoString() string {
+ return "token" + t.String()
+}
+
+type scanner struct {
+ input string
+ pos int
+ ppos int // bounds the current rune in the string
+ value bool
+ err string
+}
+
+func (s *scanner) init(input string) {
+ s.input = input
+ s.pos = 0
+ s.ppos = 0
+}
+
+func (s *scanner) next() rune {
+ if s.pos >= len(s.input) {
+ return tokenEOF
+ }
+ s.pos = s.ppos
+
+ r, w := utf8.DecodeRuneInString(s.input[s.ppos:])
+ s.ppos += w
+ if r == utf8.RuneError {
+ if w > 0 {
+ s.error("rune error")
+ return tokenIllegal
+ }
+ return tokenEOF
+ }
+
+ if r == 0 {
+ s.error("unexpected null")
+ return tokenIllegal
+ }
+
+ return r
+}
+
+func (s *scanner) peek() rune {
+ pos := s.pos
+ ppos := s.ppos
+ ch := s.next()
+ s.pos = pos
+ s.ppos = ppos
+ return ch
+}
+
+func (s *scanner) scan() (nextp int, tk token, text string) {
+ var (
+ ch = s.next()
+ pos = s.pos
+ )
+
+chomp:
+ switch {
+ case ch == tokenEOF:
+ case ch == tokenIllegal:
+ case isQuoteRune(ch):
+ if !s.scanQuoted(ch) {
+ return pos, tokenIllegal, s.input[pos:s.ppos]
+ }
+ return pos, tokenQuoted, s.input[pos:s.ppos]
+ case isSeparatorRune(ch):
+ s.value = false
+ return pos, tokenSeparator, s.input[pos:s.ppos]
+ case isOperatorRune(ch):
+ s.scanOperator()
+ s.value = true
+ return pos, tokenOperator, s.input[pos:s.ppos]
+ case unicode.IsSpace(ch):
+ // chomp
+ ch = s.next()
+ pos = s.pos
+ goto chomp
+ case s.value:
+ s.scanValue()
+ s.value = false
+ return pos, tokenValue, s.input[pos:s.ppos]
+ case isFieldRune(ch):
+ s.scanField()
+ return pos, tokenField, s.input[pos:s.ppos]
+ }
+
+ return s.pos, token(ch), ""
+}
+
+func (s *scanner) scanField() {
+ for {
+ ch := s.peek()
+ if !isFieldRune(ch) {
+ break
+ }
+ s.next()
+ }
+}
+
+func (s *scanner) scanOperator() {
+ for {
+ ch := s.peek()
+ switch ch {
+ case '=', '!', '~':
+ s.next()
+ default:
+ return
+ }
+ }
+}
+
+func (s *scanner) scanValue() {
+ for {
+ ch := s.peek()
+ if !isValueRune(ch) {
+ break
+ }
+ s.next()
+ }
+}
+
+func (s *scanner) scanQuoted(quote rune) bool {
+ var illegal bool
+ ch := s.next() // read character after quote
+ for ch != quote {
+ if ch == '\n' || ch < 0 {
+ s.error("quoted literal not terminated")
+ return false
+ }
+ if ch == '\\' {
+ var legal bool
+ ch, legal = s.scanEscape(quote)
+ if !legal {
+ illegal = true
+ }
+ } else {
+ ch = s.next()
+ }
+ }
+ return !illegal
+}
+
+func (s *scanner) scanEscape(quote rune) (ch rune, legal bool) {
+ ch = s.next() // read character after '/'
+ switch ch {
+ case 'a', 'b', 'f', 'n', 'r', 't', 'v', '\\', quote:
+ // nothing to do
+ ch = s.next()
+ legal = true
+ case '0', '1', '2', '3', '4', '5', '6', '7':
+ ch, legal = s.scanDigits(ch, 8, 3)
+ case 'x':
+ ch, legal = s.scanDigits(s.next(), 16, 2)
+ case 'u':
+ ch, legal = s.scanDigits(s.next(), 16, 4)
+ case 'U':
+ ch, legal = s.scanDigits(s.next(), 16, 8)
+ default:
+ s.error("illegal escape sequence")
+ }
+ return
+}
+
+func (s *scanner) scanDigits(ch rune, base, n int) (rune, bool) {
+ for n > 0 && digitVal(ch) < base {
+ ch = s.next()
+ n--
+ }
+ if n > 0 {
+ s.error("illegal numeric escape sequence")
+ return ch, false
+ }
+ return ch, true
+}
+
+func (s *scanner) error(msg string) {
+ if s.err == "" {
+ s.err = msg
+ }
+}
+
+func digitVal(ch rune) int {
+ switch {
+ case '0' <= ch && ch <= '9':
+ return int(ch - '0')
+ case 'a' <= ch && ch <= 'f':
+ return int(ch - 'a' + 10)
+ case 'A' <= ch && ch <= 'F':
+ return int(ch - 'A' + 10)
+ }
+ return 16 // larger than any legal digit val
+}
+
+func isFieldRune(r rune) bool {
+ return (r == '_' || isAlphaRune(r) || isDigitRune(r))
+}
+
+func isAlphaRune(r rune) bool {
+ return r >= 'A' && r <= 'Z' || r >= 'a' && r <= 'z'
+}
+
+func isDigitRune(r rune) bool {
+ return r >= '0' && r <= '9'
+}
+
+func isOperatorRune(r rune) bool {
+ switch r {
+ case '=', '!', '~':
+ return true
+ }
+
+ return false
+}
+
+func isQuoteRune(r rune) bool {
+ switch r {
+ case '/', '|', '"': // maybe add single quoting?
+ return true
+ }
+
+ return false
+}
+
+func isSeparatorRune(r rune) bool {
+ switch r {
+ case ',', '.':
+ return true
+ }
+
+ return false
+}
+
+func isValueRune(r rune) bool {
+ return r != ',' && !unicode.IsSpace(r) &&
+ (unicode.IsLetter(r) ||
+ unicode.IsDigit(r) ||
+ unicode.IsNumber(r) ||
+ unicode.IsGraphic(r) ||
+ unicode.IsPunct(r))
+}
diff --git a/vendor/github.com/containerd/containerd/images/annotations.go b/vendor/github.com/containerd/containerd/images/annotations.go
new file mode 100644
index 000000000..47d92104c
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/images/annotations.go
@@ -0,0 +1,23 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package images
+
+const (
+ // AnnotationImageName is an annotation on a Descriptor in an index.json
+ // containing the `Name` value as used by an `Image` struct
+ AnnotationImageName = "io.containerd.image.name"
+)
diff --git a/vendor/github.com/containerd/containerd/images/diffid.go b/vendor/github.com/containerd/containerd/images/diffid.go
new file mode 100644
index 000000000..85577eede
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/images/diffid.go
@@ -0,0 +1,81 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package images
+
+import (
+ "context"
+ "io"
+
+ "github.com/containerd/containerd/archive/compression"
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/labels"
+ "github.com/opencontainers/go-digest"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+ "github.com/sirupsen/logrus"
+)
+
+// GetDiffID gets the diff ID of the layer blob descriptor.
+func GetDiffID(ctx context.Context, cs content.Store, desc ocispec.Descriptor) (digest.Digest, error) {
+ switch desc.MediaType {
+ case
+ // If the layer is already uncompressed, we can just return its digest
+ MediaTypeDockerSchema2Layer,
+ ocispec.MediaTypeImageLayer,
+ MediaTypeDockerSchema2LayerForeign,
+ ocispec.MediaTypeImageLayerNonDistributable: //nolint:staticcheck // deprecated
+ return desc.Digest, nil
+ }
+ info, err := cs.Info(ctx, desc.Digest)
+ if err != nil {
+ return "", err
+ }
+ v, ok := info.Labels[labels.LabelUncompressed]
+ if ok {
+ // Fast path: if the image is already unpacked, we can use the label value
+ return digest.Parse(v)
+ }
+ // if the image is not unpacked, we may not have the label
+ ra, err := cs.ReaderAt(ctx, desc)
+ if err != nil {
+ return "", err
+ }
+ defer ra.Close()
+ r := content.NewReader(ra)
+ uR, err := compression.DecompressStream(r)
+ if err != nil {
+ return "", err
+ }
+ defer uR.Close()
+ digester := digest.Canonical.Digester()
+ hashW := digester.Hash()
+ if _, err := io.Copy(hashW, uR); err != nil {
+ return "", err
+ }
+ if err := uR.Close(); err != nil {
+ return "", err
+ }
+ dgst := digester.Digest()
+ // memorize the computed value
+ if info.Labels == nil {
+ info.Labels = make(map[string]string)
+ }
+ info.Labels[labels.LabelUncompressed] = dgst.String()
+ if _, err := cs.Update(ctx, info, "labels"); err != nil {
+ logrus.WithError(err).Warnf("failed to set %s label for %s", labels.LabelUncompressed, desc.Digest)
+ }
+ return dgst, nil
+}
diff --git a/vendor/github.com/containerd/containerd/images/handlers.go b/vendor/github.com/containerd/containerd/images/handlers.go
new file mode 100644
index 000000000..a685092e2
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/images/handlers.go
@@ -0,0 +1,323 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package images
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "sort"
+
+ "github.com/containerd/platforms"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+ "golang.org/x/sync/errgroup"
+ "golang.org/x/sync/semaphore"
+
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
+)
+
+var (
+ // ErrSkipDesc is used to skip processing of a descriptor and
+ // its descendants.
+ ErrSkipDesc = errors.New("skip descriptor")
+
+ // ErrStopHandler is used to signify that the descriptor
+ // has been handled and should not be handled further.
+ // This applies only to a single descriptor in a handler
+ // chain and does not apply to descendant descriptors.
+ ErrStopHandler = errors.New("stop handler")
+
+ // ErrEmptyWalk is used when the WalkNotEmpty handlers return no
+ // children (e.g.: they were filtered out).
+ ErrEmptyWalk = errors.New("image might be filtered out")
+)
+
+// Handler handles image manifests
+type Handler interface {
+ Handle(ctx context.Context, desc ocispec.Descriptor) (subdescs []ocispec.Descriptor, err error)
+}
+
+// HandlerFunc function implementing the Handler interface
+type HandlerFunc func(ctx context.Context, desc ocispec.Descriptor) (subdescs []ocispec.Descriptor, err error)
+
+// Handle image manifests
+func (fn HandlerFunc) Handle(ctx context.Context, desc ocispec.Descriptor) (subdescs []ocispec.Descriptor, err error) {
+ return fn(ctx, desc)
+}
+
+// Handlers returns a handler that will run the handlers in sequence.
+//
+// A handler may return `ErrStopHandler` to stop calling additional handlers
+func Handlers(handlers ...Handler) HandlerFunc {
+ return func(ctx context.Context, desc ocispec.Descriptor) (subdescs []ocispec.Descriptor, err error) {
+ var children []ocispec.Descriptor
+ for _, handler := range handlers {
+ ch, err := handler.Handle(ctx, desc)
+ if err != nil {
+ if errors.Is(err, ErrStopHandler) {
+ break
+ }
+ return nil, err
+ }
+
+ children = append(children, ch...)
+ }
+
+ return children, nil
+ }
+}
+
+// Walk the resources of an image and call the handler for each. If the handler
+// decodes the sub-resources for each image, they will be walked as well.
+//
+// This differs from dispatch in that each sibling resource is considered
+// synchronously.
+func Walk(ctx context.Context, handler Handler, descs ...ocispec.Descriptor) error {
+ for _, desc := range descs {
+
+ children, err := handler.Handle(ctx, desc)
+ if err != nil {
+ if errors.Is(err, ErrSkipDesc) {
+ continue // don't traverse the children.
+ }
+ return err
+ }
+
+ if len(children) > 0 {
+ if err := Walk(ctx, handler, children...); err != nil {
+ return err
+ }
+ }
+ }
+ return nil
+}
+
+// WalkNotEmpty works the same way Walk does, except that it ensures that some
+// children are still found while walking the descriptors (for example, that
+// they have not all been filtered out by one of the handlers). If no children
+// are found, an ErrEmptyWalk error is returned.
+func WalkNotEmpty(ctx context.Context, handler Handler, descs ...ocispec.Descriptor) error {
+ isEmpty := true
+ var notEmptyHandler HandlerFunc = func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ children, err := handler.Handle(ctx, desc)
+ if err != nil {
+ return children, err
+ }
+
+ if len(children) > 0 {
+ isEmpty = false
+ }
+
+ return children, nil
+ }
+
+ err := Walk(ctx, notEmptyHandler, descs...)
+ if err != nil {
+ return err
+ }
+
+ if isEmpty {
+ return ErrEmptyWalk
+ }
+
+ return nil
+}
+
+// Dispatch runs the provided handler for content specified by the descriptors.
+// If the handler decodes subresources, they will be visited as well.
+//
+// Handlers for siblings are run in parallel on the provided descriptors. A
+// handler may return `ErrSkipDesc` to signal to the dispatcher to not traverse
+// any children.
+//
+// A concurrency limiter can be passed in to limit the number of concurrent
+// handlers running. When limiter is nil, there is no limit.
+//
+// Typically, this function will be used with `FetchHandler`, often composed
+// with other handlers.
+//
+// If any handler returns an error, the dispatch session will be canceled.
+func Dispatch(ctx context.Context, handler Handler, limiter *semaphore.Weighted, descs ...ocispec.Descriptor) error {
+ eg, ctx2 := errgroup.WithContext(ctx)
+ for _, desc := range descs {
+ desc := desc
+
+ if limiter != nil {
+ if err := limiter.Acquire(ctx, 1); err != nil {
+ return err
+ }
+ }
+
+ eg.Go(func() error {
+ desc := desc
+
+ children, err := handler.Handle(ctx2, desc)
+ if limiter != nil {
+ limiter.Release(1)
+ }
+ if err != nil {
+ if errors.Is(err, ErrSkipDesc) {
+ return nil // don't traverse the children.
+ }
+ return err
+ }
+
+ if len(children) > 0 {
+ return Dispatch(ctx2, handler, limiter, children...)
+ }
+
+ return nil
+ })
+ }
+
+ return eg.Wait()
+}
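+
+// An illustrative sketch, not part of the upstream file: bounding Dispatch to
+// eight concurrent handlers with a weighted semaphore:
+//
+//	limiter := semaphore.NewWeighted(8)
+//	err := images.Dispatch(ctx, images.ChildrenHandler(provider), limiter, root)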
+
+// ChildrenHandler decodes well-known manifest types and returns their children.
+//
+// This is useful for supporting recursive fetch and other use cases where you
+// want to do a full walk of resources.
+//
+// One can also replace this with another implementation to allow descending of
+// arbitrary types.
+func ChildrenHandler(provider content.Provider) HandlerFunc {
+ return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ return Children(ctx, provider, desc)
+ }
+}
+
+// SetChildrenLabels is a handler wrapper which sets labels for the content on
+// the children returned by the handler and passes through the children.
+// Must follow a handler that returns the children to be labeled.
+func SetChildrenLabels(manager content.Manager, f HandlerFunc) HandlerFunc {
+ return SetChildrenMappedLabels(manager, f, nil)
+}
+
+// SetChildrenMappedLabels is a handler wrapper which sets labels for the content on
+// the children returned by the handler and passes through the children.
+// Must follow a handler that returns the children to be labeled.
+// The label map allows the caller to control the labels per child descriptor.
+// For returned labels, the index of the child is appended to the key, except
+// for the first child (index 0) when the returned key does not end with '.'.
+func SetChildrenMappedLabels(manager content.Manager, f HandlerFunc, labelMap func(ocispec.Descriptor) []string) HandlerFunc {
+ if labelMap == nil {
+ labelMap = ChildGCLabels
+ }
+ return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ children, err := f(ctx, desc)
+ if err != nil {
+ return children, err
+ }
+
+ if len(children) > 0 {
+ var (
+ info = content.Info{
+ Digest: desc.Digest,
+ Labels: map[string]string{},
+ }
+ fields = []string{}
+ keys = map[string]uint{}
+ )
+ for _, ch := range children {
+ labelKeys := labelMap(ch)
+ for _, key := range labelKeys {
+ idx := keys[key]
+ keys[key] = idx + 1
+ if idx > 0 || key[len(key)-1] == '.' {
+ key = fmt.Sprintf("%s%d", key, idx)
+ }
+
+ info.Labels[key] = ch.Digest.String()
+ fields = append(fields, "labels."+key)
+ }
+ }
+
+ _, err := manager.Update(ctx, info, fields...)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ return children, err
+ }
+}
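+
+// For illustration (not in the upstream file): with the default ChildGCLabels
+// mapping, a manifest with a config and two layers is labeled
+//
+//	containerd.io/gc.ref.content.config -> <config digest>
+//	containerd.io/gc.ref.content.l.0    -> <layer 0 digest>
+//	containerd.io/gc.ref.content.l.1    -> <layer 1 digest>
+//
+// because the config key has no trailing '.' and is therefore not numbered.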
+
+// FilterPlatforms is a handler wrapper which limits the descriptors returned
+// based on matching the specified platform matcher.
+func FilterPlatforms(f HandlerFunc, m platforms.Matcher) HandlerFunc {
+ return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ children, err := f(ctx, desc)
+ if err != nil {
+ return children, err
+ }
+
+ var descs []ocispec.Descriptor
+
+ if m == nil {
+ descs = children
+ } else {
+ for _, d := range children {
+ if d.Platform == nil || m.Match(*d.Platform) {
+ descs = append(descs, d)
+ }
+ }
+ }
+
+ return descs, nil
+ }
+}
+
+// LimitManifests is a handler wrapper which filters the manifest descriptors
+// returned using the provided platform.
+// The results will be ordered according to the comparison operator and
+// use the ordering in the manifests for equal matches.
+// A limit of 0 or less is considered no limit.
+// A not found error is returned if no manifest is matched.
+func LimitManifests(f HandlerFunc, m platforms.MatchComparer, n int) HandlerFunc {
+ return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ children, err := f(ctx, desc)
+ if err != nil {
+ return children, err
+ }
+
+ switch desc.MediaType {
+ case ocispec.MediaTypeImageIndex, MediaTypeDockerSchema2ManifestList:
+ sort.SliceStable(children, func(i, j int) bool {
+ if children[i].Platform == nil {
+ return false
+ }
+ if children[j].Platform == nil {
+ return true
+ }
+ return m.Less(*children[i].Platform, *children[j].Platform)
+ })
+
+ if n > 0 {
+ if len(children) == 0 {
+ return children, fmt.Errorf("no match for platform in manifest: %w", errdefs.ErrNotFound)
+ }
+ if len(children) > n {
+ children = children[:n]
+ }
+ }
+ default:
+ // only limit manifests from an index
+ }
+ return children, nil
+ }
+}
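+
+// An illustrative sketch, not part of the upstream file: FilterPlatforms and
+// LimitManifests are usually stacked around ChildrenHandler, as Image.Size
+// does, so that only the single best manifest for a platform is walked:
+//
+//	h := images.LimitManifests(
+//		images.FilterPlatforms(images.ChildrenHandler(provider), platforms.Default()),
+//		platforms.Default(), 1)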
diff --git a/vendor/github.com/containerd/containerd/images/image.go b/vendor/github.com/containerd/containerd/images/image.go
new file mode 100644
index 000000000..b934e3496
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/images/image.go
@@ -0,0 +1,444 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package images
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "sort"
+ "time"
+
+ "github.com/containerd/log"
+ "github.com/containerd/platforms"
+ digest "github.com/opencontainers/go-digest"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
+)
+
+// Image provides the model for how containerd views container images.
+type Image struct {
+ // Name of the image.
+ //
+ // To be pulled, it must be a reference compatible with resolvers.
+ //
+ // This field is required.
+ Name string
+
+ // Labels provide runtime decoration for the image record.
+ //
+ // There is no default behavior for how these labels are propagated. They
+ // only decorate the static metadata object.
+ //
+ // This field is optional.
+ Labels map[string]string
+
+ // Target describes the root content for this image. Typically, this is
+ // a manifest, index or manifest list.
+ Target ocispec.Descriptor
+
+ CreatedAt, UpdatedAt time.Time
+}
+
+// DeleteOptions provide options on image delete
+type DeleteOptions struct {
+ Synchronous bool
+}
+
+// DeleteOpt allows configuring a delete operation
+type DeleteOpt func(context.Context, *DeleteOptions) error
+
+// SynchronousDelete is used to indicate that an image deletion and removal of
+// the image resources should occur synchronously before returning a result.
+func SynchronousDelete() DeleteOpt {
+ return func(ctx context.Context, o *DeleteOptions) error {
+ o.Synchronous = true
+ return nil
+ }
+}
+
+// Store and interact with images
+type Store interface {
+ Get(ctx context.Context, name string) (Image, error)
+ List(ctx context.Context, filters ...string) ([]Image, error)
+ Create(ctx context.Context, image Image) (Image, error)
+
+ // Update will replace the data in the store with the provided image. If
+ // one or more fieldpaths are provided, only those fields will be updated.
+ Update(ctx context.Context, image Image, fieldpaths ...string) (Image, error)
+
+ Delete(ctx context.Context, name string, opts ...DeleteOpt) error
+}
+
+// TODO(stevvooe): Many of these functions make strong platform assumptions,
+// which are untrue in a lot of cases. More refactoring must be done here to
+// make this work in all cases.
+
+// Config resolves the image configuration descriptor.
+//
+// The caller can then use the descriptor to resolve and process the
+// configuration of the image.
+func (image *Image) Config(ctx context.Context, provider content.Provider, platform platforms.MatchComparer) (ocispec.Descriptor, error) {
+ return Config(ctx, provider, image.Target, platform)
+}
+
+// RootFS returns the unpacked diffids that make up an image's rootfs.
+//
+// These are used to verify that a set of layers unpacked to the expected
+// values.
+func (image *Image) RootFS(ctx context.Context, provider content.Provider, platform platforms.MatchComparer) ([]digest.Digest, error) {
+ desc, err := image.Config(ctx, provider, platform)
+ if err != nil {
+ return nil, err
+ }
+ return RootFS(ctx, provider, desc)
+}
+
+// Size returns the total size of an image's packed resources.
+func (image *Image) Size(ctx context.Context, provider content.Provider, platform platforms.MatchComparer) (int64, error) {
+ var size int64
+ return size, Walk(ctx, Handlers(HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ if desc.Size < 0 {
+ return nil, fmt.Errorf("invalid size %v in %v (%v)", desc.Size, desc.Digest, desc.MediaType)
+ }
+ size += desc.Size
+ return nil, nil
+ }), LimitManifests(FilterPlatforms(ChildrenHandler(provider), platform), platform, 1)), image.Target)
+}
+
+type platformManifest struct {
+ p *ocispec.Platform
+ m *ocispec.Manifest
+}
+
+// Manifest resolves a manifest from the image for the given platform.
+//
+// When a manifest descriptor inside of a manifest index does not have
+// a platform defined, the platform from the image config is considered.
+//
+// If the descriptor points to a non-index manifest, then the manifest is
+// unmarshalled and returned without considering the platform inside of the
+// config.
+//
+// TODO(stevvooe): This violates the current platform agnostic approach to this
+// package by returning a specific manifest type. We'll need to refactor this
+// to return a manifest descriptor or decide that we want to bring the API in
+// this direction because this abstraction is not needed.
+func Manifest(ctx context.Context, provider content.Provider, image ocispec.Descriptor, platform platforms.MatchComparer) (ocispec.Manifest, error) {
+ var (
+ limit = 1
+ m []platformManifest
+ wasIndex bool
+ )
+
+ if err := Walk(ctx, HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ switch desc.MediaType {
+ case MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest:
+ p, err := content.ReadBlob(ctx, provider, desc)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := validateMediaType(p, desc.MediaType); err != nil {
+ return nil, fmt.Errorf("manifest: invalid desc %s: %w", desc.Digest, err)
+ }
+
+ var manifest ocispec.Manifest
+ if err := json.Unmarshal(p, &manifest); err != nil {
+ return nil, err
+ }
+
+ if desc.Digest != image.Digest && platform != nil {
+ if desc.Platform != nil && !platform.Match(*desc.Platform) {
+ return nil, nil
+ }
+
+ if desc.Platform == nil {
+ p, err := content.ReadBlob(ctx, provider, manifest.Config)
+ if err != nil {
+ return nil, err
+ }
+
+ var image ocispec.Image
+ if err := json.Unmarshal(p, &image); err != nil {
+ return nil, err
+ }
+
+ if !platform.Match(platforms.Normalize(ocispec.Platform{OS: image.OS, Architecture: image.Architecture})) {
+ return nil, nil
+ }
+
+ }
+ }
+
+ m = append(m, platformManifest{
+ p: desc.Platform,
+ m: &manifest,
+ })
+
+ return nil, nil
+ case MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
+ p, err := content.ReadBlob(ctx, provider, desc)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := validateMediaType(p, desc.MediaType); err != nil {
+ return nil, fmt.Errorf("manifest: invalid desc %s: %w", desc.Digest, err)
+ }
+
+ var idx ocispec.Index
+ if err := json.Unmarshal(p, &idx); err != nil {
+ return nil, err
+ }
+
+ if platform == nil {
+ return idx.Manifests, nil
+ }
+
+ var descs []ocispec.Descriptor
+ for _, d := range idx.Manifests {
+ if d.Platform == nil || platform.Match(*d.Platform) {
+ descs = append(descs, d)
+ }
+ }
+
+ sort.SliceStable(descs, func(i, j int) bool {
+ if descs[i].Platform == nil {
+ return false
+ }
+ if descs[j].Platform == nil {
+ return true
+ }
+ return platform.Less(*descs[i].Platform, *descs[j].Platform)
+ })
+
+ wasIndex = true
+
+ if len(descs) > limit {
+ return descs[:limit], nil
+ }
+ return descs, nil
+ }
+ return nil, fmt.Errorf("unexpected media type %v for %v: %w", desc.MediaType, desc.Digest, errdefs.ErrNotFound)
+ }), image); err != nil {
+ return ocispec.Manifest{}, err
+ }
+
+ if len(m) == 0 {
+ err := fmt.Errorf("manifest %v: %w", image.Digest, errdefs.ErrNotFound)
+ if wasIndex {
+ err = fmt.Errorf("no match for platform in manifest %v: %w", image.Digest, errdefs.ErrNotFound)
+ }
+ return ocispec.Manifest{}, err
+ }
+ return *m[0].m, nil
+}
+
+// Config resolves the image configuration descriptor using a content provider
+// to resolve child resources on the image.
+//
+// The caller can then use the descriptor to resolve and process the
+// configuration of the image.
+func Config(ctx context.Context, provider content.Provider, image ocispec.Descriptor, platform platforms.MatchComparer) (ocispec.Descriptor, error) {
+ manifest, err := Manifest(ctx, provider, image, platform)
+ if err != nil {
+ return ocispec.Descriptor{}, err
+ }
+ return manifest.Config, err
+}
+
+// Platforms returns one or more platforms supported by the image.
+func Platforms(ctx context.Context, provider content.Provider, image ocispec.Descriptor) ([]ocispec.Platform, error) {
+ var platformSpecs []ocispec.Platform
+ return platformSpecs, Walk(ctx, Handlers(HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ if desc.Platform != nil {
+ if desc.Platform.OS == "unknown" || desc.Platform.Architecture == "unknown" {
+ return nil, ErrSkipDesc
+ }
+ platformSpecs = append(platformSpecs, *desc.Platform)
+ return nil, ErrSkipDesc
+ }
+
+ switch desc.MediaType {
+ case MediaTypeDockerSchema2Config, ocispec.MediaTypeImageConfig:
+ p, err := content.ReadBlob(ctx, provider, desc)
+ if err != nil {
+ return nil, err
+ }
+
+ var image ocispec.Image
+ if err := json.Unmarshal(p, &image); err != nil {
+ return nil, err
+ }
+
+ platformSpecs = append(platformSpecs,
+ platforms.Normalize(ocispec.Platform{OS: image.OS, Architecture: image.Architecture}))
+ }
+ return nil, nil
+ }), ChildrenHandler(provider)), image)
+}
+
+// Check reports whether all components of an image are available in the
+// provider for the specified platform.
+//
+// If available is true, the caller can assume that required represents the
+// complete set of content required for the image.
+//
+// missing will have the components that are part of required but not available
+// in the provider.
+//
+// If there is a problem resolving content, an error will be returned.
+func Check(ctx context.Context, provider content.Provider, image ocispec.Descriptor, platform platforms.MatchComparer) (available bool, required, present, missing []ocispec.Descriptor, err error) {
+ mfst, err := Manifest(ctx, provider, image, platform)
+ if err != nil {
+ if errdefs.IsNotFound(err) {
+ return false, []ocispec.Descriptor{image}, nil, []ocispec.Descriptor{image}, nil
+ }
+
+ return false, nil, nil, nil, fmt.Errorf("failed to check image %v: %w", image.Digest, err)
+ }
+
+ // TODO(stevvooe): It is possible that referenced components could have
+ // children, but this is rare. For now, we ignore this and only verify
+ // that manifest components are present.
+ required = append([]ocispec.Descriptor{mfst.Config}, mfst.Layers...)
+
+ for _, desc := range required {
+ ra, err := provider.ReaderAt(ctx, desc)
+ if err != nil {
+ if errdefs.IsNotFound(err) {
+ missing = append(missing, desc)
+ continue
+ } else {
+ return false, nil, nil, nil, fmt.Errorf("failed to check image %v: %w", desc.Digest, err)
+ }
+ }
+ ra.Close()
+ present = append(present, desc)
+
+ }
+
+ return true, required, present, missing, nil
+}
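+
+// An illustrative sketch, not part of the upstream file: deciding whether a
+// pull is needed, where store is an assumed content.Provider:
+//
+//	available, _, _, missing, err := images.Check(ctx, store, img.Target, platforms.Default())
+//	if err == nil && !available {
+//		// fetch the descriptors in missing before unpacking
+//	}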
+
+// Children returns the immediate children of content described by the descriptor.
+func Children(ctx context.Context, provider content.Provider, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ var descs []ocispec.Descriptor
+ switch desc.MediaType {
+ case MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest:
+ p, err := content.ReadBlob(ctx, provider, desc)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := validateMediaType(p, desc.MediaType); err != nil {
+ return nil, fmt.Errorf("children: invalid desc %s: %w", desc.Digest, err)
+ }
+
+ // TODO(stevvooe): We just assume oci manifest, for now. There may be
+ // subtle differences from the docker version.
+ var manifest ocispec.Manifest
+ if err := json.Unmarshal(p, &manifest); err != nil {
+ return nil, err
+ }
+
+ descs = append(descs, manifest.Config)
+ descs = append(descs, manifest.Layers...)
+ case MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
+ p, err := content.ReadBlob(ctx, provider, desc)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := validateMediaType(p, desc.MediaType); err != nil {
+ return nil, fmt.Errorf("children: invalid desc %s: %w", desc.Digest, err)
+ }
+
+ var index ocispec.Index
+ if err := json.Unmarshal(p, &index); err != nil {
+ return nil, err
+ }
+
+ descs = append(descs, index.Manifests...)
+ default:
+ if IsLayerType(desc.MediaType) || IsKnownConfig(desc.MediaType) {
+ // childless data types.
+ return nil, nil
+ }
+ log.G(ctx).Debugf("encountered unknown type %v; children may not be fetched", desc.MediaType)
+ }
+
+ return descs, nil
+}
+
+// unknownDocument represents a manifest, manifest list, or index that has not
+// yet been validated.
+type unknownDocument struct {
+ MediaType string `json:"mediaType,omitempty"`
+ Config json.RawMessage `json:"config,omitempty"`
+ Layers json.RawMessage `json:"layers,omitempty"`
+ Manifests json.RawMessage `json:"manifests,omitempty"`
+ FSLayers json.RawMessage `json:"fsLayers,omitempty"` // schema 1
+}
+
+// validateMediaType returns an error if the byte slice is invalid JSON or if
+// the media type identifies the blob as one format but it contains elements of
+// another format.
+func validateMediaType(b []byte, mt string) error {
+ var doc unknownDocument
+ if err := json.Unmarshal(b, &doc); err != nil {
+ return err
+ }
+ if len(doc.FSLayers) != 0 {
+ return fmt.Errorf("media-type: schema 1 not supported")
+ }
+ switch mt {
+ case MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest:
+ if len(doc.Manifests) != 0 ||
+ doc.MediaType == MediaTypeDockerSchema2ManifestList ||
+ doc.MediaType == ocispec.MediaTypeImageIndex {
+ return fmt.Errorf("media-type: expected manifest but found index (%s)", mt)
+ }
+ case MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
+ if len(doc.Config) != 0 || len(doc.Layers) != 0 ||
+ doc.MediaType == MediaTypeDockerSchema2Manifest ||
+ doc.MediaType == ocispec.MediaTypeImageManifest {
+ return fmt.Errorf("media-type: expected index but found manifest (%s)", mt)
+ }
+ }
+ return nil
+}
+
+// RootFS returns the unpacked diffids that make up an image's rootfs.
+//
+// These are used to verify that a set of layers unpacked to the expected
+// values.
+func RootFS(ctx context.Context, provider content.Provider, configDesc ocispec.Descriptor) ([]digest.Digest, error) {
+ p, err := content.ReadBlob(ctx, provider, configDesc)
+ if err != nil {
+ return nil, err
+ }
+
+ var config ocispec.Image
+ if err := json.Unmarshal(p, &config); err != nil {
+ return nil, err
+ }
+ return config.RootFS.DiffIDs, nil
+}
diff --git a/vendor/github.com/containerd/containerd/images/importexport.go b/vendor/github.com/containerd/containerd/images/importexport.go
new file mode 100644
index 000000000..843adcadc
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/images/importexport.go
@@ -0,0 +1,37 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package images
+
+import (
+ "context"
+ "io"
+
+ "github.com/containerd/containerd/content"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// Importer is the interface for image importer.
+type Importer interface {
+ // Import imports an image from a tar stream.
+ Import(ctx context.Context, store content.Store, reader io.Reader) (ocispec.Descriptor, error)
+}
+
+// Exporter is the interface for image exporter.
+type Exporter interface {
+ // Export exports an image to a tar stream.
+ Export(ctx context.Context, store content.Provider, desc ocispec.Descriptor, writer io.Writer) error
+}
diff --git a/vendor/github.com/containerd/containerd/images/labels.go b/vendor/github.com/containerd/containerd/images/labels.go
new file mode 100644
index 000000000..06dfed572
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/images/labels.go
@@ -0,0 +1,21 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package images
+
+const (
+ ConvertedDockerSchema1LabelKey = "io.containerd.image/converted-docker-schema1"
+)
diff --git a/vendor/github.com/containerd/containerd/images/mediatypes.go b/vendor/github.com/containerd/containerd/images/mediatypes.go
new file mode 100644
index 000000000..d3b28d42d
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/images/mediatypes.go
@@ -0,0 +1,215 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package images
+
+import (
+ "context"
+ "fmt"
+ "sort"
+ "strings"
+
+ "github.com/containerd/containerd/errdefs"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// mediatype definitions for image components handled in containerd.
+//
+// oci components are generally referenced directly, although we may centralize
+// here for clarity.
+const (
+ MediaTypeDockerSchema2Layer = "application/vnd.docker.image.rootfs.diff.tar"
+ MediaTypeDockerSchema2LayerForeign = "application/vnd.docker.image.rootfs.foreign.diff.tar"
+ MediaTypeDockerSchema2LayerGzip = "application/vnd.docker.image.rootfs.diff.tar.gzip"
+ MediaTypeDockerSchema2LayerForeignGzip = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
+ MediaTypeDockerSchema2Config = "application/vnd.docker.container.image.v1+json"
+ MediaTypeDockerSchema2Manifest = "application/vnd.docker.distribution.manifest.v2+json"
+ MediaTypeDockerSchema2ManifestList = "application/vnd.docker.distribution.manifest.list.v2+json"
+
+ // Checkpoint/Restore Media Types
+
+ MediaTypeContainerd1Checkpoint = "application/vnd.containerd.container.criu.checkpoint.criu.tar"
+ MediaTypeContainerd1CheckpointPreDump = "application/vnd.containerd.container.criu.checkpoint.predump.tar"
+ MediaTypeContainerd1Resource = "application/vnd.containerd.container.resource.tar"
+ MediaTypeContainerd1RW = "application/vnd.containerd.container.rw.tar"
+ MediaTypeContainerd1CheckpointConfig = "application/vnd.containerd.container.checkpoint.config.v1+proto"
+ MediaTypeContainerd1CheckpointOptions = "application/vnd.containerd.container.checkpoint.options.v1+proto"
+ MediaTypeContainerd1CheckpointRuntimeName = "application/vnd.containerd.container.checkpoint.runtime.name"
+ MediaTypeContainerd1CheckpointRuntimeOptions = "application/vnd.containerd.container.checkpoint.runtime.options+proto"
+
+ // MediaTypeDockerSchema1Manifest is the legacy Docker schema1 manifest
+ MediaTypeDockerSchema1Manifest = "application/vnd.docker.distribution.manifest.v1+prettyjws"
+
+ // Encrypted media types
+
+ MediaTypeImageLayerEncrypted = ocispec.MediaTypeImageLayer + "+encrypted"
+ MediaTypeImageLayerGzipEncrypted = ocispec.MediaTypeImageLayerGzip + "+encrypted"
+)
+
+// DiffCompression returns the compression as defined by the layer diff media
+// type. For Docker media types without compression, "unknown" is returned to
+// indicate that the media type may be compressed. If the media type is not
+// recognized as a layer diff, then it returns errdefs.ErrNotImplemented.
+func DiffCompression(ctx context.Context, mediaType string) (string, error) {
+ base, ext := parseMediaTypes(mediaType)
+ switch base {
+ case MediaTypeDockerSchema2Layer, MediaTypeDockerSchema2LayerForeign:
+ if len(ext) > 0 {
+ // Type is wrapped
+ return "", nil
+ }
+ // These media types may have been compressed but failed to
+ // use the correct media type. The decompression function
+ // should detect and handle this case.
+ return "unknown", nil
+ case MediaTypeDockerSchema2LayerGzip, MediaTypeDockerSchema2LayerForeignGzip:
+ if len(ext) > 0 {
+ // Type is wrapped
+ return "", nil
+ }
+ return "gzip", nil
+ case ocispec.MediaTypeImageLayer, ocispec.MediaTypeImageLayerNonDistributable: //nolint:staticcheck // Non-distributable layers are deprecated
+ if len(ext) > 0 {
+ switch ext[len(ext)-1] {
+ case "gzip":
+ return "gzip", nil
+ case "zstd":
+ return "zstd", nil
+ }
+ }
+ return "", nil
+ default:
+ return "", fmt.Errorf("unrecognised mediatype %s: %w", mediaType, errdefs.ErrNotImplemented)
+ }
+}
+
+// parseMediaTypes splits the media type into the base type and
+// an array of sorted extensions
+func parseMediaTypes(mt string) (mediaType string, suffixes []string) {
+ if mt == "" {
+ return "", []string{}
+ }
+ mediaType, ext, ok := strings.Cut(mt, "+")
+ if !ok {
+ return mediaType, []string{}
+ }
+
+ // Splitting the extensions following the mediatype "(+)gzip+encrypted".
+ // We expect this to be a limited list, so add an arbitrary limit (50).
+ //
+ // Note that DiffCompression is only using the last element, so perhaps we
+ // should split on the last "+" only.
+ suffixes = strings.SplitN(ext, "+", 50)
+ sort.Strings(suffixes)
+ return mediaType, suffixes
+}
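+
+// For illustration (not in the upstream file), parseMediaTypes splits a
+// wrapped type into its base and sorted suffixes:
+//
+//	parseMediaTypes("application/vnd.oci.image.layer.v1.tar+gzip+encrypted")
+//	// => "application/vnd.oci.image.layer.v1.tar", []string{"encrypted", "gzip"}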
+
+// IsNonDistributable returns true if the media type is non-distributable.
+func IsNonDistributable(mt string) bool {
+ return strings.HasPrefix(mt, "application/vnd.oci.image.layer.nondistributable.") ||
+ strings.HasPrefix(mt, "application/vnd.docker.image.rootfs.foreign.")
+}
+
+// IsLayerType returns true if the media type is a layer
+func IsLayerType(mt string) bool {
+ if strings.HasPrefix(mt, "application/vnd.oci.image.layer.") {
+ return true
+ }
+
+ // Parse Docker media types, strip off any + suffixes first
+ switch base, _ := parseMediaTypes(mt); base {
+ case MediaTypeDockerSchema2Layer, MediaTypeDockerSchema2LayerGzip,
+ MediaTypeDockerSchema2LayerForeign, MediaTypeDockerSchema2LayerForeignGzip:
+ return true
+ }
+ return false
+}
+
+// IsDockerType returns true if the media type has "application/vnd.docker." prefix
+func IsDockerType(mt string) bool {
+ return strings.HasPrefix(mt, "application/vnd.docker.")
+}
+
+// IsManifestType returns true if the media type is an OCI-compatible manifest.
+// No support for schema1 manifest.
+func IsManifestType(mt string) bool {
+ switch mt {
+ case MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest:
+ return true
+ default:
+ return false
+ }
+}
+
+// IsIndexType returns true if the media type is an OCI-compatible index.
+func IsIndexType(mt string) bool {
+ switch mt {
+ case ocispec.MediaTypeImageIndex, MediaTypeDockerSchema2ManifestList:
+ return true
+ default:
+ return false
+ }
+}
+
+// IsConfigType returns true if the media type is an OCI-compatible image config.
+// No support for containerd checkpoint configs.
+func IsConfigType(mt string) bool {
+ switch mt {
+ case MediaTypeDockerSchema2Config, ocispec.MediaTypeImageConfig:
+ return true
+ default:
+ return false
+ }
+}
+
+// IsKnownConfig returns true if the media type is a known config type,
+// including containerd checkpoint configs
+func IsKnownConfig(mt string) bool {
+ switch mt {
+ case MediaTypeDockerSchema2Config, ocispec.MediaTypeImageConfig,
+ MediaTypeContainerd1Checkpoint, MediaTypeContainerd1CheckpointConfig:
+ return true
+ }
+ return false
+}
+
+// ChildGCLabels returns the label for a given descriptor to reference it
+func ChildGCLabels(desc ocispec.Descriptor) []string {
+ mt := desc.MediaType
+ if IsKnownConfig(mt) {
+ return []string{"containerd.io/gc.ref.content.config"}
+ }
+
+ switch mt {
+ case MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest:
+ return []string{"containerd.io/gc.ref.content.m."}
+ }
+
+ if IsLayerType(mt) {
+ return []string{"containerd.io/gc.ref.content.l."}
+ }
+
+ return []string{"containerd.io/gc.ref.content."}
+}
+
+// ChildGCLabelsFilterLayers returns the labels for a given descriptor to
+// reference it, skipping layer media types
+func ChildGCLabelsFilterLayers(desc ocispec.Descriptor) []string {
+ if IsLayerType(desc.MediaType) {
+ return nil
+ }
+ return ChildGCLabels(desc)
+}
diff --git a/vendor/github.com/containerd/containerd/labels/labels.go b/vendor/github.com/containerd/containerd/labels/labels.go
new file mode 100644
index 000000000..0f9bab5c5
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/labels/labels.go
@@ -0,0 +1,29 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package labels
+
+// LabelUncompressed is added to compressed layer contents.
+// The value is the digest of the uncompressed content.
+const LabelUncompressed = "containerd.io/uncompressed"
+
+// LabelSharedNamespace is added to a namespace to allow that namespace's
+// contents to be shared.
+const LabelSharedNamespace = "containerd.io/namespace.shareable"
+
+// LabelDistributionSource is added to content to indicate its origin.
+// e.g., "containerd.io/distribution.source.docker.io=library/redis"
+const LabelDistributionSource = "containerd.io/distribution.source"
diff --git a/vendor/github.com/containerd/containerd/labels/validate.go b/vendor/github.com/containerd/containerd/labels/validate.go
new file mode 100644
index 000000000..f83b5dde2
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/labels/validate.go
@@ -0,0 +1,41 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package labels
+
+import (
+ "fmt"
+
+ "github.com/containerd/containerd/errdefs"
+)
+
+const (
+ maxSize = 4096
+ // keyMaxLen is the maximum length of the key included in the error message when len(key)+len(value) > maxSize
+ keyMaxLen = 64
+)
+
+// Validate checks that the combined length of a label's key and value does not exceed 4096 bytes.
+func Validate(k, v string) error {
+ total := len(k) + len(v)
+ if total > maxSize {
+ if len(k) > keyMaxLen {
+ k = k[:keyMaxLen]
+ }
+ return fmt.Errorf("label key and value length (%d bytes) greater than maximum size (%d bytes), key: %s: %w", total, maxSize, k, errdefs.ErrInvalidArgument)
+ }
+ return nil
+}
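+
+// An illustrative sketch, not part of the upstream file: callers typically
+// validate before attaching a label to a record:
+//
+//	if err := labels.Validate(key, value); err != nil {
+//		return err // wraps errdefs.ErrInvalidArgument
+//	}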
diff --git a/vendor/github.com/containerd/containerd/log/context_deprecated.go b/vendor/github.com/containerd/containerd/log/context_deprecated.go
new file mode 100644
index 000000000..9e9e8b491
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/log/context_deprecated.go
@@ -0,0 +1,149 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package log
+
+import (
+ "context"
+
+ "github.com/containerd/log"
+)
+
+// G is a shorthand for [GetLogger].
+//
+// Deprecated: use [log.G].
+var G = log.G
+
+// L is an alias for the standard logger.
+//
+// Deprecated: use [log.L].
+var L = log.L
+
+// Fields type to pass to "WithFields".
+//
+// Deprecated: use [log.Fields].
+type Fields = log.Fields
+
+// Entry is a logging entry.
+//
+// Deprecated: use [log.Entry].
+type Entry = log.Entry
+
+// RFC3339NanoFixed is [time.RFC3339Nano] with nanoseconds padded using
+// zeros to ensure the formatted time is always the same number of
+// characters.
+//
+// Deprecated: use [log.RFC3339NanoFixed].
+const RFC3339NanoFixed = log.RFC3339NanoFixed
+
+// Level is a logging level.
+//
+// Deprecated: use [log.Level].
+type Level = log.Level
+
+// Supported log levels.
+const (
+ // TraceLevel level.
+ //
+ // Deprecated: use [log.TraceLevel].
+ TraceLevel Level = log.TraceLevel
+
+ // DebugLevel level.
+ //
+ // Deprecated: use [log.DebugLevel].
+ DebugLevel Level = log.DebugLevel
+
+ // InfoLevel level.
+ //
+ // Deprecated: use [log.InfoLevel].
+ InfoLevel Level = log.InfoLevel
+
+ // WarnLevel level.
+ //
+ // Deprecated: use [log.WarnLevel].
+ WarnLevel Level = log.WarnLevel
+
+ // ErrorLevel level
+ //
+ // Deprecated: use [log.ErrorLevel].
+ ErrorLevel Level = log.ErrorLevel
+
+ // FatalLevel level.
+ //
+ // Deprecated: use [log.FatalLevel].
+ FatalLevel Level = log.FatalLevel
+
+ // PanicLevel level.
+ //
+ // Deprecated: use [log.PanicLevel].
+ PanicLevel Level = log.PanicLevel
+)
+
+// SetLevel sets log level globally. It returns an error if the given
+// level is not supported.
+//
+// Deprecated: use [log.SetLevel].
+func SetLevel(level string) error {
+ return log.SetLevel(level)
+}
+
+// GetLevel returns the current log level.
+//
+// Deprecated: use [log.GetLevel].
+func GetLevel() log.Level {
+ return log.GetLevel()
+}
+
+// OutputFormat specifies a log output format.
+//
+// Deprecated: use [log.OutputFormat].
+type OutputFormat = log.OutputFormat
+
+// Supported log output formats.
+const (
+ // TextFormat represents the text logging format.
+ //
+ // Deprecated: use [log.TextFormat].
+ TextFormat log.OutputFormat = "text"
+
+ // JSONFormat represents the JSON logging format.
+ //
+ // Deprecated: use [log.JSONFormat].
+ JSONFormat log.OutputFormat = "json"
+)
+
+// SetFormat sets the log output format.
+//
+// Deprecated: use [log.SetFormat].
+func SetFormat(format OutputFormat) error {
+ return log.SetFormat(format)
+}
+
+// WithLogger returns a new context with the provided logger. Use in
+// combination with logger.WithField(s) for great effect.
+//
+// Deprecated: use [log.WithLogger].
+func WithLogger(ctx context.Context, logger *log.Entry) context.Context {
+ return log.WithLogger(ctx, logger)
+}
+
+// GetLogger retrieves the current logger from the context. If no logger is
+// available, the default logger is returned.
+//
+// Deprecated: use [log.GetLogger].
+func GetLogger(ctx context.Context) *log.Entry {
+ return log.GetLogger(ctx)
+}
diff --git a/vendor/github.com/containerd/containerd/pkg/randutil/randutil.go b/vendor/github.com/containerd/containerd/pkg/randutil/randutil.go
new file mode 100644
index 000000000..f4b657d7d
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/pkg/randutil/randutil.go
@@ -0,0 +1,48 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+// Package randutil provides utilities for [crypto/rand].
+package randutil
+
+import (
+ "crypto/rand"
+ "math"
+ "math/big"
+)
+
+// Int63n is similar to [math/rand.Int63n] but uses [crypto/rand.Reader] under the hood.
+func Int63n(n int64) int64 {
+ b, err := rand.Int(rand.Reader, big.NewInt(n))
+ if err != nil {
+ panic(err)
+ }
+ return b.Int64()
+}
+
+// Int63 is similar to [math/rand.Int63] but uses [crypto/rand.Reader] under the hood.
+func Int63() int64 {
+ return Int63n(math.MaxInt64)
+}
+
+// Intn is similar to [math/rand.Intn] but uses [crypto/rand.Reader] under the hood.
+func Intn(n int) int {
+ return int(Int63n(int64(n)))
+}
+
+// Int is similar to [math/rand.Int] but uses [crypto/rand.Reader] under the hood.
+func Int() int {
+ return int(Int63())
+}
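+
+// An illustrative sketch, not part of the upstream file: these helpers are
+// drop-in replacements for the corresponding math/rand functions where
+// cryptographically sourced randomness is wanted, e.g.
+//
+//	suffix := fmt.Sprintf("ingest-%d", randutil.Int())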
diff --git a/vendor/github.com/containerd/containerd/reference/reference.go b/vendor/github.com/containerd/containerd/reference/reference.go
new file mode 100644
index 000000000..9329a9aab
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/reference/reference.go
@@ -0,0 +1,179 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package reference
+
+import (
+ "errors"
+ "net/url"
+ "path"
+ "regexp"
+ "strings"
+
+ digest "github.com/opencontainers/go-digest"
+)
+
+var (
+ // ErrInvalid is returned when there is an invalid reference
+ ErrInvalid = errors.New("invalid reference")
+ // ErrObjectRequired is returned when the object is required
+ ErrObjectRequired = errors.New("object required")
+ // ErrHostnameRequired is returned when the hostname is required
+ ErrHostnameRequired = errors.New("hostname required")
+)
+
+// Spec defines the main components of a reference specification.
+//
+// A reference specification is a schema-less URI parsed into common
+// components. The two main components, locator and object, are required to be
+// supported by remotes. It represents a superset of the naming defined in
+// Docker's reference schema. It aims to be compatible but not prescriptive.
+//
+// While the interpretation of the components, locator and object, are up to
+// the remote, we define a few common parts, accessible via helper methods.
+//
+// The first is the hostname, which is part of the locator. This doesn't need
+// to map to a physical resource, but it must parse as a hostname. We refer to
+// this as the namespace.
+//
+// The other component made accessible by helper method is the digest. This is
+// part of the object identifier, always prefixed with an '@'. If present, the
+// remote may use the digest portion directly or resolve it against a prefix.
+// If the object does not include the `@` symbol, the return value for `Digest`
+// will be empty.
+type Spec struct {
+ // Locator is the host and path portion of the specification. The host
+ // portion may refer to an actual host or just a namespace of related
+ // images.
+ //
+ // Typically, the locator may be used to resolve the remote to fetch specific
+ // resources.
+ Locator string
+
+ // Object contains the identifier for the remote resource. Classically,
+ // this is a tag but can refer to anything in a remote. By convention, any
+ // portion that may be a partial or whole digest will be preceded by an
+ // `@`. Anything preceding the `@` will be referred to as the "tag".
+ //
+ // In practice, we will see this broken down into the following formats:
+ //
+ // 1. <tag>
+ // 2. <tag>@<digest spec>
+ // 3. @<digest spec>
+ //
+ // We define the tag to be anything except '@' and ':'. <digest spec> may
+ // be a full valid digest or shortened version, possibly with elided
+ // algorithm.
+ Object string
+}
+
+var splitRe = regexp.MustCompile(`[:@]`)
+
+// Parse parses the string into a structured ref.
+func Parse(s string) (Spec, error) {
+ if strings.Contains(s, "://") {
+ return Spec{}, ErrInvalid
+ }
+
+ u, err := url.Parse("dummy://" + s)
+ if err != nil {
+ return Spec{}, err
+ }
+
+ if u.Scheme != "dummy" {
+ return Spec{}, ErrInvalid
+ }
+
+ if u.Host == "" {
+ return Spec{}, ErrHostnameRequired
+ }
+
+ var object string
+
+ if idx := splitRe.FindStringIndex(u.Path); idx != nil {
+ // This allows us to retain the @ to signify digests or shortened digests in
+ // the object.
+ object = u.Path[idx[0]:]
+ if object[:1] == ":" {
+ object = object[1:]
+ }
+ u.Path = u.Path[:idx[0]]
+ }
+
+ return Spec{
+ Locator: path.Join(u.Host, u.Path),
+ Object: object,
+ }, nil
+}
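+
+// For illustration (not in the upstream file), Parse splits a reference as
+// follows:
+//
+//	spec, _ := reference.Parse("docker.io/library/redis:7.2@sha256:deadbeef")
+//	// spec.Locator  == "docker.io/library/redis"
+//	// spec.Object   == "7.2@sha256:deadbeef"
+//	// spec.Digest() == "sha256:deadbeef"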
+
+// Hostname returns the hostname portion of the locator.
+//
+// Remotes are not required to directly access the resources at this host. This
+// method is provided for convenience.
+func (r Spec) Hostname() string {
+ i := strings.Index(r.Locator, "/")
+
+ if i < 0 {
+ return r.Locator
+ }
+ return r.Locator[:i]
+}
+
+// Digest returns the digest portion of the reference spec. This may be a
+// partial or invalid digest, which may be used to lookup a complete digest.
+func (r Spec) Digest() digest.Digest {
+ i := strings.Index(r.Object, "@")
+
+ if i < 0 {
+ return ""
+ }
+ return digest.Digest(r.Object[i+1:])
+}
+
+// String returns the normalized string for the ref.
+func (r Spec) String() string {
+ if r.Object == "" {
+ return r.Locator
+ }
+ if r.Object[:1] == "@" {
+ return r.Locator + r.Object
+ }
+
+ return r.Locator + ":" + r.Object
+}
+
+// SplitObject provides two parts of the object spec, delimited by an "@"
+// symbol. It does not perform any validation on correctness of the values
+// returned, and it's the callers' responsibility to validate the result.
+//
+// If an "@" delimiter is found, it returns the part *including* the "@"
+// delimiter as "tag", and the part after the "@" as digest.
+//
+// The example below produces "docker.io/library/ubuntu:latest@" and
+// "sha256:deadbeef";
+//
+// t, d := SplitObject("docker.io/library/ubuntu:latest@sha256:deadbeef")
+// fmt.Println(t) // docker.io/library/ubuntu:latest@
+// fmt.Println(d) // sha256:deadbeef
+//
+// Deprecated: use [Parse] and [Spec.Digest] instead.
+func SplitObject(obj string) (tag string, dgst digest.Digest) {
+ if i := strings.Index(obj, "@"); i >= 0 {
+ // Offset by one to preserve the "@" in the returned tag.
+ return obj[:i+1], digest.Digest(obj[i+1:])
+ }
+ return obj, ""
+}
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/auth/fetch.go b/vendor/github.com/containerd/containerd/remotes/docker/auth/fetch.go
new file mode 100644
index 000000000..244e03509
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/auth/fetch.go
@@ -0,0 +1,225 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package auth
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "net/http"
+ "net/url"
+ "strings"
+ "time"
+
+ remoteserrors "github.com/containerd/containerd/remotes/errors"
+ "github.com/containerd/containerd/version"
+ "github.com/containerd/log"
+)
+
+var (
+ // ErrNoToken is returned if a request is successful but the body does not
+ // contain an authorization token.
+ ErrNoToken = errors.New("authorization server did not include a token in the response")
+)
+
+// GenerateTokenOptions generates options for fetching a token based on a challenge
+func GenerateTokenOptions(ctx context.Context, host, username, secret string, c Challenge) (TokenOptions, error) {
+ realm, ok := c.Parameters["realm"]
+ if !ok {
+ return TokenOptions{}, errors.New("no realm specified for token auth challenge")
+ }
+
+ realmURL, err := url.Parse(realm)
+ if err != nil {
+ return TokenOptions{}, fmt.Errorf("invalid token auth challenge realm: %w", err)
+ }
+
+ to := TokenOptions{
+ Realm: realmURL.String(),
+ Service: c.Parameters["service"],
+ Username: username,
+ Secret: secret,
+ }
+
+ scope, ok := c.Parameters["scope"]
+ if ok {
+ to.Scopes = append(to.Scopes, strings.Split(scope, " ")...)
+ } else {
+ log.G(ctx).WithField("host", host).Debug("no scope specified for token auth challenge")
+ }
+
+ return to, nil
+}
+
+// TokenOptions are options for requesting a token
+type TokenOptions struct {
+ Realm string
+ Service string
+ Scopes []string
+ Username string
+ Secret string
+
+ // FetchRefreshToken enables fetching a refresh token (aka "identity token", "offline token") along with the bearer token.
+ //
+ // For HTTP GET mode (FetchToken), FetchRefreshToken sets `offline_token=true` in the request.
+ // https://docs.docker.com/registry/spec/auth/token/#requesting-a-token
+ //
+ // For HTTP POST mode (FetchTokenWithOAuth), FetchRefreshToken sets `access_type=offline` in the request.
+ // https://docs.docker.com/registry/spec/auth/oauth/#getting-a-token
+ FetchRefreshToken bool
+}
+
+// OAuthTokenResponse is response from fetching token with a OAuth POST request
+type OAuthTokenResponse struct {
+ AccessToken string `json:"access_token"`
+ RefreshToken string `json:"refresh_token"`
+ ExpiresIn int `json:"expires_in"`
+ IssuedAt time.Time `json:"issued_at"`
+ Scope string `json:"scope"`
+}
+
+// FetchTokenWithOAuth fetches a token using a POST request
+func FetchTokenWithOAuth(ctx context.Context, client *http.Client, headers http.Header, clientID string, to TokenOptions) (*OAuthTokenResponse, error) {
+ form := url.Values{}
+ if len(to.Scopes) > 0 {
+ form.Set("scope", strings.Join(to.Scopes, " "))
+ }
+ form.Set("service", to.Service)
+ form.Set("client_id", clientID)
+
+ if to.Username == "" {
+ form.Set("grant_type", "refresh_token")
+ form.Set("refresh_token", to.Secret)
+ } else {
+ form.Set("grant_type", "password")
+ form.Set("username", to.Username)
+ form.Set("password", to.Secret)
+ }
+ if to.FetchRefreshToken {
+ form.Set("access_type", "offline")
+ }
+
+ req, err := http.NewRequestWithContext(ctx, http.MethodPost, to.Realm, strings.NewReader(form.Encode()))
+ if err != nil {
+ return nil, err
+ }
+ req.Header.Set("Content-Type", "application/x-www-form-urlencoded; charset=utf-8")
+ for k, v := range headers {
+ req.Header[k] = append(req.Header[k], v...)
+ }
+ if len(req.Header.Get("User-Agent")) == 0 {
+ req.Header.Set("User-Agent", "containerd/"+version.Version)
+ }
+
+ resp, err := client.Do(req)
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ if resp.StatusCode < 200 || resp.StatusCode >= 400 {
+ return nil, remoteserrors.NewUnexpectedStatusErr(resp)
+ }
+
+ decoder := json.NewDecoder(resp.Body)
+
+ var tr OAuthTokenResponse
+ if err = decoder.Decode(&tr); err != nil {
+ return nil, fmt.Errorf("unable to decode token response: %w", err)
+ }
+
+ if tr.AccessToken == "" {
+ return nil, ErrNoToken
+ }
+
+ return &tr, nil
+}
+
+// FetchTokenResponse is response from fetching token with GET request
+type FetchTokenResponse struct {
+ Token string `json:"token"`
+ AccessToken string `json:"access_token"`
+ ExpiresIn int `json:"expires_in"`
+ IssuedAt time.Time `json:"issued_at"`
+ RefreshToken string `json:"refresh_token"`
+}
+
+// FetchToken fetches a token using a GET request
+func FetchToken(ctx context.Context, client *http.Client, headers http.Header, to TokenOptions) (*FetchTokenResponse, error) {
+ req, err := http.NewRequestWithContext(ctx, http.MethodGet, to.Realm, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ for k, v := range headers {
+ req.Header[k] = append(req.Header[k], v...)
+ }
+ if len(req.Header.Get("User-Agent")) == 0 {
+ req.Header.Set("User-Agent", "containerd/"+version.Version)
+ }
+
+ reqParams := req.URL.Query()
+
+ if to.Service != "" {
+ reqParams.Add("service", to.Service)
+ }
+
+ for _, scope := range to.Scopes {
+ reqParams.Add("scope", scope)
+ }
+
+ if to.Secret != "" {
+ req.SetBasicAuth(to.Username, to.Secret)
+ }
+
+ if to.FetchRefreshToken {
+ reqParams.Add("offline_token", "true")
+ }
+
+ req.URL.RawQuery = reqParams.Encode()
+
+ resp, err := client.Do(req)
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ if resp.StatusCode < 200 || resp.StatusCode >= 400 {
+ return nil, remoteserrors.NewUnexpectedStatusErr(resp)
+ }
+
+ decoder := json.NewDecoder(resp.Body)
+
+ var tr FetchTokenResponse
+ if err = decoder.Decode(&tr); err != nil {
+ return nil, fmt.Errorf("unable to decode token response: %w", err)
+ }
+
+ // `access_token` is equivalent to `token` and if both are specified
+ // the choice is undefined. Canonicalize `access_token` by sticking
+ // things in `token`.
+ if tr.AccessToken != "" {
+ tr.Token = tr.AccessToken
+ }
+
+ if tr.Token == "" {
+ return nil, ErrNoToken
+ }
+
+ return &tr, nil
+}
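+
+// An illustrative sketch, not part of the upstream file: the typical
+// challenge-to-token flow, where resp is a 401 response from a registry and
+// host, username and secret are assumed caller-supplied values:
+//
+//	for _, c := range auth.ParseAuthHeader(resp.Header) {
+//		if c.Scheme != auth.BearerAuth {
+//			continue
+//		}
+//		to, err := auth.GenerateTokenOptions(ctx, host, username, secret, c)
+//		if err != nil {
+//			return err
+//		}
+//		tr, err := auth.FetchToken(ctx, http.DefaultClient, nil, to)
+//		// use tr.Token as the bearer token on the retried request
+//	}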
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/auth/parse.go b/vendor/github.com/containerd/containerd/remotes/docker/auth/parse.go
new file mode 100644
index 000000000..e4529a776
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/auth/parse.go
@@ -0,0 +1,200 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package auth
+
+import (
+ "net/http"
+ "sort"
+ "strings"
+)
+
+// AuthenticationScheme defines scheme of the authentication method
+type AuthenticationScheme byte
+
+const (
+ // BasicAuth is scheme for Basic HTTP Authentication RFC 7617
+ BasicAuth AuthenticationScheme = 1 << iota
+ // DigestAuth is scheme for HTTP Digest Access Authentication RFC 7616
+ DigestAuth
+ // BearerAuth is scheme for OAuth 2.0 Bearer Tokens RFC 6750
+ BearerAuth
+)
+
+// Challenge carries information from a WWW-Authenticate response header.
+// See RFC 2617.
+type Challenge struct {
+ // scheme is the auth-scheme according to RFC 2617
+ Scheme AuthenticationScheme
+
+ // parameters are the auth-params according to RFC 2617
+ Parameters map[string]string
+}
+
+type byScheme []Challenge
+
+func (bs byScheme) Len() int { return len(bs) }
+func (bs byScheme) Swap(i, j int) { bs[i], bs[j] = bs[j], bs[i] }
+
+// Sort in priority order: token > digest > basic
+func (bs byScheme) Less(i, j int) bool { return bs[i].Scheme > bs[j].Scheme }
+
+// Octet types from RFC 2616.
+type octetType byte
+
+var octetTypes [256]octetType
+
+const (
+ isToken octetType = 1 << iota
+ isSpace
+)
+
+func init() {
+ // OCTET = <any 8-bit sequence of data>
+ // CHAR = <any US-ASCII character (octets 0 - 127)>
+ // CTL = <any US-ASCII control character (octets 0 - 31) and DEL (127)>
+ // CR = <US-ASCII CR, carriage return (13)>
+ // LF = <US-ASCII LF, linefeed (10)>
+ // SP = <US-ASCII SP, space (32)>
+ // HT = <US-ASCII HT, horizontal-tab (9)>
+ // <"> = <US-ASCII double-quote mark (34)>
+ // CRLF = CR LF
+ // LWS = [CRLF] 1*( SP | HT )
+ // TEXT = <any OCTET except CTLs, but including LWS>
+ // separators = "(" | ")" | "<" | ">" | "@" | "," | ";" | ":" | "\" | <">
+ // | "/" | "[" | "]" | "?" | "=" | "{" | "}" | SP | HT
+ // token = 1*<any CHAR except CTLs or separators>
+ // qdtext = <any TEXT except <">>
+
+ for c := 0; c < 256; c++ {
+ var t octetType
+ isCtl := c <= 31 || c == 127
+ isChar := 0 <= c && c <= 127
+ isSeparator := strings.ContainsRune(" \t\"(),/:;<=>?@[]\\{}", rune(c))
+ if strings.ContainsRune(" \t\r\n", rune(c)) {
+ t |= isSpace
+ }
+ if isChar && !isCtl && !isSeparator {
+ t |= isToken
+ }
+ octetTypes[c] = t
+ }
+}
+
+// ParseAuthHeader parses challenges from WWW-Authenticate header
+func ParseAuthHeader(header http.Header) []Challenge {
+ challenges := []Challenge{}
+ for _, h := range header[http.CanonicalHeaderKey("WWW-Authenticate")] {
+ v, p := parseValueAndParams(h)
+ var s AuthenticationScheme
+ switch v {
+ case "basic":
+ s = BasicAuth
+ case "digest":
+ s = DigestAuth
+ case "bearer":
+ s = BearerAuth
+ default:
+ continue
+ }
+ challenges = append(challenges, Challenge{Scheme: s, Parameters: p})
+ }
+ sort.Stable(byScheme(challenges))
+ return challenges
+}
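+
+// For illustration (not in the upstream file), a typical registry challenge
+//
+//	WWW-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io"
+//
+// parses to a single Challenge with Scheme == BearerAuth and Parameters
+// {"realm": "https://auth.docker.io/token", "service": "registry.docker.io"}.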
+
+func parseValueAndParams(header string) (value string, params map[string]string) {
+ params = make(map[string]string)
+ value, s := expectToken(header)
+ if value == "" {
+ return
+ }
+ value = strings.ToLower(value)
+ for {
+ var pkey string
+ pkey, s = expectToken(skipSpace(s))
+ if pkey == "" {
+ return
+ }
+ if !strings.HasPrefix(s, "=") {
+ return
+ }
+ var pvalue string
+ pvalue, s = expectTokenOrQuoted(s[1:])
+ pkey = strings.ToLower(pkey)
+ params[pkey] = pvalue
+ s = skipSpace(s)
+ if !strings.HasPrefix(s, ",") {
+ return
+ }
+ s = s[1:]
+ }
+}
+
+func skipSpace(s string) (rest string) {
+ i := 0
+ for ; i < len(s); i++ {
+ if octetTypes[s[i]]&isSpace == 0 {
+ break
+ }
+ }
+ return s[i:]
+}
+
+func expectToken(s string) (token, rest string) {
+ i := 0
+ for ; i < len(s); i++ {
+ if octetTypes[s[i]]&isToken == 0 {
+ break
+ }
+ }
+ return s[:i], s[i:]
+}
+
+func expectTokenOrQuoted(s string) (value string, rest string) {
+ if !strings.HasPrefix(s, "\"") {
+ return expectToken(s)
+ }
+ s = s[1:]
+ for i := 0; i < len(s); i++ {
+ switch s[i] {
+ case '"':
+ return s[:i], s[i+1:]
+ case '\\':
+ p := make([]byte, len(s)-1)
+ j := copy(p, s[:i])
+ escape := true
+ for i = i + 1; i < len(s); i++ {
+ b := s[i]
+ switch {
+ case escape:
+ escape = false
+ p[j] = b
+ j++
+ case b == '\\':
+ escape = true
+ case b == '"':
+ return string(p[:j]), s[i+1:]
+ default:
+ p[j] = b
+ j++
+ }
+ }
+ return "", ""
+ }
+ }
+ return "", ""
+}
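
A minimal sketch of how ParseAuthHeader behaves on a mixed WWW-Authenticate response (the realm and service values below are made up; the import path is the vendored package above):

package main

import (
	"fmt"
	"net/http"

	"github.com/containerd/containerd/remotes/docker/auth"
)

func main() {
	hdr := http.Header{}
	hdr.Add("WWW-Authenticate", `Basic realm="registry"`)
	hdr.Add("WWW-Authenticate", `Bearer realm="https://auth.example.com/token",service="registry.example.com"`)

	// Challenges come back sorted by priority: bearer > digest > basic,
	// so the Bearer challenge prints first.
	for _, c := range auth.ParseAuthHeader(hdr) {
		fmt.Printf("scheme=%d params=%v\n", c.Scheme, c.Parameters)
	}
}
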
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/authorizer.go b/vendor/github.com/containerd/containerd/remotes/docker/authorizer.go
new file mode 100644
index 000000000..2bf388e8c
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/authorizer.go
@@ -0,0 +1,362 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "context"
+ "encoding/base64"
+ "errors"
+ "fmt"
+ "net/http"
+ "strings"
+ "sync"
+
+ "github.com/containerd/log"
+
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/remotes/docker/auth"
+ remoteerrors "github.com/containerd/containerd/remotes/errors"
+)
+
+type dockerAuthorizer struct {
+ credentials func(string) (string, string, error)
+
+ client *http.Client
+ header http.Header
+ mu sync.RWMutex
+
+ // indexed by host name
+ handlers map[string]*authHandler
+
+ onFetchRefreshToken OnFetchRefreshToken
+}
+
+type authorizerConfig struct {
+ credentials func(string) (string, string, error)
+ client *http.Client
+ header http.Header
+ onFetchRefreshToken OnFetchRefreshToken
+}
+
+// AuthorizerOpt configures an authorizer
+type AuthorizerOpt func(*authorizerConfig)
+
+// WithAuthClient provides the HTTP client for the authorizer
+func WithAuthClient(client *http.Client) AuthorizerOpt {
+ return func(opt *authorizerConfig) {
+ opt.client = client
+ }
+}
+
+// WithAuthCreds provides a credential function to the authorizer
+func WithAuthCreds(creds func(string) (string, string, error)) AuthorizerOpt {
+ return func(opt *authorizerConfig) {
+ opt.credentials = creds
+ }
+}
+
+// WithAuthHeader provides HTTP headers for authorization
+func WithAuthHeader(hdr http.Header) AuthorizerOpt {
+ return func(opt *authorizerConfig) {
+ opt.header = hdr
+ }
+}
+
+// OnFetchRefreshToken is called when a refresh token is fetched.
+type OnFetchRefreshToken func(ctx context.Context, refreshToken string, req *http.Request)
+
+// WithFetchRefreshToken enables fetching "refresh token" (aka "identity token", "offline token").
+func WithFetchRefreshToken(f OnFetchRefreshToken) AuthorizerOpt {
+ return func(opt *authorizerConfig) {
+ opt.onFetchRefreshToken = f
+ }
+}
+
+// NewDockerAuthorizer creates an authorizer using Docker's registry
+// authentication spec.
+// See https://docs.docker.com/registry/spec/auth/
+func NewDockerAuthorizer(opts ...AuthorizerOpt) Authorizer {
+ var ao authorizerConfig
+ for _, opt := range opts {
+ opt(&ao)
+ }
+
+ if ao.client == nil {
+ ao.client = http.DefaultClient
+ }
+
+ return &dockerAuthorizer{
+ credentials: ao.credentials,
+ client: ao.client,
+ header: ao.header,
+ handlers: make(map[string]*authHandler),
+ onFetchRefreshToken: ao.onFetchRefreshToken,
+ }
+}
+
+// Authorize handles auth request.
+func (a *dockerAuthorizer) Authorize(ctx context.Context, req *http.Request) error {
+ // skip if there is no auth handler
+ ah := a.getAuthHandler(req.URL.Host)
+ if ah == nil {
+ return nil
+ }
+
+ auth, refreshToken, err := ah.authorize(ctx)
+ if err != nil {
+ return err
+ }
+
+ req.Header.Set("Authorization", auth)
+
+ if refreshToken != "" {
+ a.mu.RLock()
+ onFetchRefreshToken := a.onFetchRefreshToken
+ a.mu.RUnlock()
+ if onFetchRefreshToken != nil {
+ onFetchRefreshToken(ctx, refreshToken, req)
+ }
+ }
+ return nil
+}
+
+func (a *dockerAuthorizer) getAuthHandler(host string) *authHandler {
+ a.mu.Lock()
+ defer a.mu.Unlock()
+
+ return a.handlers[host]
+}
+
+func (a *dockerAuthorizer) AddResponses(ctx context.Context, responses []*http.Response) error {
+ last := responses[len(responses)-1]
+ host := last.Request.URL.Host
+
+ a.mu.Lock()
+ defer a.mu.Unlock()
+ for _, c := range auth.ParseAuthHeader(last.Header) {
+ if c.Scheme == auth.BearerAuth {
+ if retry, err := invalidAuthorization(ctx, c, responses); err != nil {
+ delete(a.handlers, host)
+ return err
+ } else if retry {
+ delete(a.handlers, host)
+ }
+
+ // reuse existing handler
+ //
+ // assume that a registry returns the same common challenge
+ // information (realm and service) for every request, and that
+ // only the resource scope differs from request to request.
+ if _, ok := a.handlers[host]; ok {
+ return nil
+ }
+
+ var username, secret string
+ if a.credentials != nil {
+ var err error
+ username, secret, err = a.credentials(host)
+ if err != nil {
+ return err
+ }
+ }
+
+ common, err := auth.GenerateTokenOptions(ctx, host, username, secret, c)
+ if err != nil {
+ return err
+ }
+ common.FetchRefreshToken = a.onFetchRefreshToken != nil
+
+ a.handlers[host] = newAuthHandler(a.client, a.header, c.Scheme, common)
+ return nil
+ } else if c.Scheme == auth.BasicAuth && a.credentials != nil {
+ username, secret, err := a.credentials(host)
+ if err != nil {
+ return err
+ }
+
+ if username == "" || secret == "" {
+ return fmt.Errorf("%w: no basic auth credentials", ErrInvalidAuthorization)
+ }
+
+ a.handlers[host] = newAuthHandler(a.client, a.header, c.Scheme, auth.TokenOptions{
+ Username: username,
+ Secret: secret,
+ })
+ return nil
+ }
+ }
+ return fmt.Errorf("failed to find supported auth scheme: %w", errdefs.ErrNotImplemented)
+}
+
+// authResult is used to control limit rate.
+type authResult struct {
+ sync.WaitGroup
+ token string
+ refreshToken string
+ err error
+}
+
+// authHandler is used to handle auth request per registry server.
+type authHandler struct {
+ sync.Mutex
+
+ header http.Header
+
+ client *http.Client
+
+ // only support basic and bearer schemes
+ scheme auth.AuthenticationScheme
+
+ // common contains common challenge answer
+ common auth.TokenOptions
+
+ // scopedTokens caches token indexed by scopes, which used in
+ // bearer auth case
+ scopedTokens map[string]*authResult
+}
+
+func newAuthHandler(client *http.Client, hdr http.Header, scheme auth.AuthenticationScheme, opts auth.TokenOptions) *authHandler {
+ return &authHandler{
+ header: hdr,
+ client: client,
+ scheme: scheme,
+ common: opts,
+ scopedTokens: map[string]*authResult{},
+ }
+}
+
+func (ah *authHandler) authorize(ctx context.Context) (string, string, error) {
+ switch ah.scheme {
+ case auth.BasicAuth:
+ return ah.doBasicAuth(ctx)
+ case auth.BearerAuth:
+ return ah.doBearerAuth(ctx)
+ default:
+ return "", "", fmt.Errorf("failed to find supported auth scheme: %s: %w", string(ah.scheme), errdefs.ErrNotImplemented)
+ }
+}
+
+func (ah *authHandler) doBasicAuth(ctx context.Context) (string, string, error) {
+ username, secret := ah.common.Username, ah.common.Secret
+
+ if username == "" || secret == "" {
+ return "", "", fmt.Errorf("failed to handle basic auth because missing username or secret")
+ }
+
+ auth := base64.StdEncoding.EncodeToString([]byte(username + ":" + secret))
+ return fmt.Sprintf("Basic %s", auth), "", nil
+}
+
+func (ah *authHandler) doBearerAuth(ctx context.Context) (token, refreshToken string, err error) {
+ // copy common tokenOptions
+ to := ah.common
+
+ to.Scopes = GetTokenScopes(ctx, to.Scopes)
+
+ // Docs: https://docs.docker.com/registry/spec/auth/scope
+ scoped := strings.Join(to.Scopes, " ")
+
+ ah.Lock()
+ if r, exist := ah.scopedTokens[scoped]; exist {
+ ah.Unlock()
+ r.Wait()
+ return r.token, r.refreshToken, r.err
+ }
+
+ // only one fetch token job
+ r := new(authResult)
+ r.Add(1)
+ ah.scopedTokens[scoped] = r
+ ah.Unlock()
+
+ defer func() {
+ token = fmt.Sprintf("Bearer %s", token)
+ r.token, r.refreshToken, r.err = token, refreshToken, err
+ r.Done()
+ }()
+
+ // fetch token for the resource scope
+ if to.Secret != "" {
+ defer func() {
+ if err != nil {
+ err = fmt.Errorf("failed to fetch oauth token: %w", err)
+ }
+ }()
+ // credential information is provided, use oauth POST endpoint
+ // TODO: Allow setting client_id
+ resp, err := auth.FetchTokenWithOAuth(ctx, ah.client, ah.header, "containerd-client", to)
+ if err != nil {
+ var errStatus remoteerrors.ErrUnexpectedStatus
+ if errors.As(err, &errStatus) {
+ // Registries without support for POST may return 404 for POST /v2/token.
+ // As of September 2017, GCR is known to return 404.
+ // As of February 2018, JFrog Artifactory is known to return 401.
+ // As of January 2022, ACR is known to return 400.
+ if (errStatus.StatusCode == 405 && to.Username != "") || errStatus.StatusCode == 404 || errStatus.StatusCode == 401 || errStatus.StatusCode == 400 {
+ resp, err := auth.FetchToken(ctx, ah.client, ah.header, to)
+ if err != nil {
+ return "", "", err
+ }
+ return resp.Token, resp.RefreshToken, nil
+ }
+ log.G(ctx).WithFields(log.Fields{
+ "status": errStatus.Status,
+ "body": string(errStatus.Body),
+ }).Debugf("token request failed")
+ }
+ return "", "", err
+ }
+ return resp.AccessToken, resp.RefreshToken, nil
+ }
+ // do request anonymously
+ resp, err := auth.FetchToken(ctx, ah.client, ah.header, to)
+ if err != nil {
+ return "", "", fmt.Errorf("failed to fetch anonymous token: %w", err)
+ }
+ return resp.Token, resp.RefreshToken, nil
+}
+
+func invalidAuthorization(ctx context.Context, c auth.Challenge, responses []*http.Response) (retry bool, _ error) {
+ errStr := c.Parameters["error"]
+ if errStr == "" {
+ return retry, nil
+ }
+
+ n := len(responses)
+ if n == 1 || (n > 1 && !sameRequest(responses[n-2].Request, responses[n-1].Request)) {
+ limitedErr := errStr
+ errLengthLimit := 64
+ if len(limitedErr) > errLengthLimit {
+ limitedErr = limitedErr[:errLengthLimit] + "..."
+ }
+ log.G(ctx).WithField("error", limitedErr).Debug("authorization error using bearer token, retrying")
+ return true, nil
+ }
+
+ return retry, fmt.Errorf("server message: %s: %w", errStr, ErrInvalidAuthorization)
+}
+
+func sameRequest(r1, r2 *http.Request) bool {
+ if r1.Method != r2.Method {
+ return false
+ }
+ if *r1.URL != *r2.URL {
+ return false
+ }
+ return true
+}
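
A usage sketch wiring the options above together (the host name and credentials are placeholders; the resulting Authorizer is normally handed on to a resolver from this package):

package main

import (
	"context"
	"net/http"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	authorizer := docker.NewDockerAuthorizer(
		docker.WithAuthClient(http.DefaultClient),
		// Returning empty credentials means anonymous access for that host.
		docker.WithAuthCreds(func(host string) (string, string, error) {
			if host == "registry.example.com" { // hypothetical registry
				return "user", "secret", nil
			}
			return "", "", nil
		}),
		// Called whenever a refresh token is fetched, e.g. to persist it.
		docker.WithFetchRefreshToken(func(ctx context.Context, token string, req *http.Request) {
			_ = token // store it somewhere for later logins
		}),
	)
	_ = authorizer
}
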
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/converter.go b/vendor/github.com/containerd/containerd/remotes/docker/converter.go
new file mode 100644
index 000000000..95a68d70e
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/converter.go
@@ -0,0 +1,87 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "fmt"
+
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/images"
+ "github.com/containerd/containerd/remotes"
+ "github.com/containerd/log"
+ digest "github.com/opencontainers/go-digest"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// LegacyConfigMediaType should be replaced by OCI image spec.
+//
+// More detail: docker/distribution#1622
+const LegacyConfigMediaType = "application/octet-stream"
+
+// ConvertManifest changes application/octet-stream to the schema2 config media type if needed.
+//
+// NOTE:
+// 1. the original manifest will be deleted by the next gc round.
+// 2. manifest lists are not converted.
+func ConvertManifest(ctx context.Context, store content.Store, desc ocispec.Descriptor) (ocispec.Descriptor, error) {
+ if !(desc.MediaType == images.MediaTypeDockerSchema2Manifest ||
+ desc.MediaType == ocispec.MediaTypeImageManifest) {
+
+ log.G(ctx).Warnf("do nothing for media type: %s", desc.MediaType)
+ return desc, nil
+ }
+
+ // read manifest data
+ mb, err := content.ReadBlob(ctx, store, desc)
+ if err != nil {
+ return ocispec.Descriptor{}, fmt.Errorf("failed to read index data: %w", err)
+ }
+
+ var manifest ocispec.Manifest
+ if err := json.Unmarshal(mb, &manifest); err != nil {
+ return ocispec.Descriptor{}, fmt.Errorf("failed to unmarshal data into manifest: %w", err)
+ }
+
+ // check config media type
+ if manifest.Config.MediaType != LegacyConfigMediaType {
+ return desc, nil
+ }
+
+ manifest.Config.MediaType = images.MediaTypeDockerSchema2Config
+ data, err := json.MarshalIndent(manifest, "", " ")
+ if err != nil {
+ return ocispec.Descriptor{}, fmt.Errorf("failed to marshal manifest: %w", err)
+ }
+
+ // update manifest with gc labels
+ desc.Digest = digest.Canonical.FromBytes(data)
+ desc.Size = int64(len(data))
+
+ labels := map[string]string{}
+ for i, c := range append([]ocispec.Descriptor{manifest.Config}, manifest.Layers...) {
+ labels[fmt.Sprintf("containerd.io/gc.ref.content.%d", i)] = c.Digest.String()
+ }
+
+ ref := remotes.MakeRefKey(ctx, desc)
+ if err := content.WriteBlob(ctx, store, ref, bytes.NewReader(data), desc, content.WithLabels(labels)); err != nil {
+ return ocispec.Descriptor{}, fmt.Errorf("failed to update content: %w", err)
+ }
+ return desc, nil
+}
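
A hedged sketch of calling ConvertManifest against a local content store; the store path is an assumption, and desc would need to reference a schema2 manifest already written to the store:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd/content/local"
	"github.com/containerd/containerd/remotes/docker"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	ctx := context.Background()

	cs, err := local.NewStore("/tmp/content") // hypothetical store root
	if err != nil {
		panic(err)
	}

	// Assumed to describe a schema2 manifest already present in cs;
	// MediaType, Digest, and Size would be filled in elsewhere.
	var desc ocispec.Descriptor

	converted, err := docker.ConvertManifest(ctx, cs, desc)
	if err != nil {
		panic(err)
	}
	fmt.Println("manifest digest after conversion:", converted.Digest)
}
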
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/converter_fuzz.go b/vendor/github.com/containerd/containerd/remotes/docker/converter_fuzz.go
new file mode 100644
index 000000000..aa7cf4666
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/converter_fuzz.go
@@ -0,0 +1,55 @@
+//go:build gofuzz
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "context"
+ "os"
+
+ fuzz "github.com/AdaLogics/go-fuzz-headers"
+ "github.com/containerd/containerd/content/local"
+ "github.com/containerd/log"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+ "github.com/sirupsen/logrus"
+)
+
+func FuzzConvertManifest(data []byte) int {
+ ctx := context.Background()
+
+ // Do not log the message below
+ // level=warning msg="do nothing for media type: ..."
+ log.G(ctx).Logger.SetLevel(logrus.PanicLevel)
+
+ f := fuzz.NewConsumer(data)
+ desc := ocispec.Descriptor{}
+ err := f.GenerateStruct(&desc)
+ if err != nil {
+ return 0
+ }
+ tmpdir, err := os.MkdirTemp("", "fuzzing-")
+ if err != nil {
+ return 0
+ }
+ cs, err := local.NewStore(tmpdir)
+ if err != nil {
+ return 0
+ }
+ _, _ = ConvertManifest(ctx, cs, desc)
+ return 1
+}
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/errcode.go b/vendor/github.com/containerd/containerd/remotes/docker/errcode.go
new file mode 100644
index 000000000..8c623bcbe
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/errcode.go
@@ -0,0 +1,283 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "encoding/json"
+ "fmt"
+ "strings"
+)
+
+// ErrorCoder is the base interface for ErrorCode and Error, allowing
+// users of either to call ErrorCode and get the underlying code.
+type ErrorCoder interface {
+ ErrorCode() ErrorCode
+}
+
+// ErrorCode represents the error type. The errors are serialized via strings
+// and the integer format may change and should *never* be exported.
+type ErrorCode int
+
+var _ error = ErrorCode(0)
+
+// ErrorCode just returns itself
+func (ec ErrorCode) ErrorCode() ErrorCode {
+ return ec
+}
+
+// Error returns the ID/Value
+func (ec ErrorCode) Error() string {
+ // NOTE(stevvooe): Cannot use message here since it may have unpopulated args.
+ return strings.ToLower(strings.Replace(ec.String(), "_", " ", -1))
+}
+
+// Descriptor returns the descriptor for the error code.
+func (ec ErrorCode) Descriptor() ErrorDescriptor {
+ d, ok := errorCodeToDescriptors[ec]
+
+ if !ok {
+ return ErrorCodeUnknown.Descriptor()
+ }
+
+ return d
+}
+
+// String returns the canonical identifier for this error code.
+func (ec ErrorCode) String() string {
+ return ec.Descriptor().Value
+}
+
+// Message returns the human-readable error message for this error code.
+func (ec ErrorCode) Message() string {
+ return ec.Descriptor().Message
+}
+
+// MarshalText encodes the receiver into UTF-8-encoded text and returns the
+// result.
+func (ec ErrorCode) MarshalText() (text []byte, err error) {
+ return []byte(ec.String()), nil
+}
+
+// UnmarshalText decodes the form generated by MarshalText.
+func (ec *ErrorCode) UnmarshalText(text []byte) error {
+ desc, ok := idToDescriptors[string(text)]
+
+ if !ok {
+ desc = ErrorCodeUnknown.Descriptor()
+ }
+
+ *ec = desc.Code
+
+ return nil
+}
+
+// WithMessage creates a new Error struct based on the passed-in info and
+// overrides the Message property.
+func (ec ErrorCode) WithMessage(message string) Error {
+ return Error{
+ Code: ec,
+ Message: message,
+ }
+}
+
+// WithDetail creates a new Error struct based on the passed-in info and
+// set the Detail property appropriately
+func (ec ErrorCode) WithDetail(detail interface{}) Error {
+ return Error{
+ Code: ec,
+ Message: ec.Message(),
+ }.WithDetail(detail)
+}
+
+// WithArgs creates a new Error struct and sets the Args slice
+func (ec ErrorCode) WithArgs(args ...interface{}) Error {
+ return Error{
+ Code: ec,
+ Message: ec.Message(),
+ }.WithArgs(args...)
+}
+
+// Error provides a wrapper around ErrorCode with extra Details provided.
+type Error struct {
+ Code ErrorCode `json:"code"`
+ Message string `json:"message"`
+ Detail interface{} `json:"detail,omitempty"`
+
+ // TODO(duglin): See if we need an "args" property so we can do the
+ // variable substitution right before showing the message to the user
+}
+
+var _ error = Error{}
+
+// ErrorCode returns the ID/Value of this Error
+func (e Error) ErrorCode() ErrorCode {
+ return e.Code
+}
+
+// Error returns a human readable representation of the error.
+func (e Error) Error() string {
+ return fmt.Sprintf("%s: %s", e.Code.Error(), e.Message)
+}
+
+// WithDetail will return a new Error, based on the current one, but with
+// some Detail info added
+func (e Error) WithDetail(detail interface{}) Error {
+ return Error{
+ Code: e.Code,
+ Message: e.Message,
+ Detail: detail,
+ }
+}
+
+// WithArgs uses the passed-in list of interface{} as the substitution
+// variables in the Error's Message string, but returns a new Error
+func (e Error) WithArgs(args ...interface{}) Error {
+ return Error{
+ Code: e.Code,
+ Message: fmt.Sprintf(e.Code.Message(), args...),
+ Detail: e.Detail,
+ }
+}
+
+// ErrorDescriptor provides relevant information about a given error code.
+type ErrorDescriptor struct {
+ // Code is the error code that this descriptor describes.
+ Code ErrorCode
+
+ // Value provides a unique string key, often capitalized with
+ // underscores, to identify the error code. This value is used as the
+ // keyed value when serializing API errors.
+ Value string
+
+ // Message is a short, human readable description of the error condition
+ // included in API responses.
+ Message string
+
+ // Description provides a complete account of the error's purpose, suitable
+ // for use in documentation.
+ Description string
+
+ // HTTPStatusCode provides the http status code that is associated with
+ // this error condition.
+ HTTPStatusCode int
+}
+
+// ParseErrorCode returns the value by the string error code.
+// `ErrorCodeUnknown` will be returned if the error is not known.
+func ParseErrorCode(value string) ErrorCode {
+ ed, ok := idToDescriptors[value]
+ if ok {
+ return ed.Code
+ }
+
+ return ErrorCodeUnknown
+}
+
+// Errors provides the envelope for multiple errors and a few sugar methods
+// for use within the application.
+type Errors []error
+
+var _ error = Errors{}
+
+func (errs Errors) Error() string {
+ switch len(errs) {
+ case 0:
+ return ""
+ case 1:
+ return errs[0].Error()
+ default:
+ msg := "errors:\n"
+ for _, err := range errs {
+ msg += err.Error() + "\n"
+ }
+ return msg
+ }
+}
+
+// Len returns the current number of errors.
+func (errs Errors) Len() int {
+ return len(errs)
+}
+
+// MarshalJSON converts a slice of error, ErrorCode, or Error values into a
+// slice of Error, then serializes it
+func (errs Errors) MarshalJSON() ([]byte, error) {
+ var tmpErrs struct {
+ Errors []Error `json:"errors,omitempty"`
+ }
+
+ for _, daErr := range errs {
+ var err Error
+
+ switch daErr := daErr.(type) {
+ case ErrorCode:
+ err = daErr.WithDetail(nil)
+ case Error:
+ err = daErr
+ default:
+ err = ErrorCodeUnknown.WithDetail(daErr)
+
+ }
+
+ // If the Error struct was set up without a Message field
+ // (meaning it is "") then grab it from the ErrorCode
+ msg := err.Message
+ if msg == "" {
+ msg = err.Code.Message()
+ }
+
+ tmpErrs.Errors = append(tmpErrs.Errors, Error{
+ Code: err.Code,
+ Message: msg,
+ Detail: err.Detail,
+ })
+ }
+
+ return json.Marshal(tmpErrs)
+}
+
+// UnmarshalJSON deserializes []Error and then converts it into a slice of
+// Error or ErrorCode values
+func (errs *Errors) UnmarshalJSON(data []byte) error {
+ var tmpErrs struct {
+ Errors []Error
+ }
+
+ if err := json.Unmarshal(data, &tmpErrs); err != nil {
+ return err
+ }
+
+ var newErrs Errors
+ for _, daErr := range tmpErrs.Errors {
+ // If Message is empty or exactly matches the Code's message string
+ // then just use the Code, no need for a full Error struct
+ if daErr.Detail == nil && (daErr.Message == "" || daErr.Message == daErr.Code.Message()) {
+ // Errors w/o details get converted to ErrorCode
+ newErrs = append(newErrs, daErr.Code)
+ } else {
+ // Errors w/ details are untouched
+ newErrs = append(newErrs, Error{
+ Code: daErr.Code,
+ Message: daErr.Message,
+ Detail: daErr.Detail,
+ })
+ }
+ }
+
+ *errs = newErrs
+ return nil
+}
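
The envelope round-trip is easiest to see end to end; a minimal sketch using two of the codes registered in errdesc.go below:

package main

import (
	"encoding/json"
	"fmt"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	errs := docker.Errors{
		docker.ErrorCodeUnauthorized,                       // bare ErrorCode
		docker.ErrorCodeUnknown.WithDetail("extra detail"), // full Error
	}

	data, err := json.Marshal(errs)
	if err != nil {
		panic(err)
	}
	// Produces the registry error envelope:
	// {"errors":[{"code":"UNAUTHORIZED","message":"authentication required"},
	//            {"code":"UNKNOWN","message":"unknown error","detail":"extra detail"}]}
	fmt.Println(string(data))

	var decoded docker.Errors
	if err := json.Unmarshal(data, &decoded); err != nil {
		panic(err)
	}
	// The bare code round-trips back to an ErrorCode; the detailed
	// entry stays a full Error struct.
	fmt.Println(decoded.Len())
}
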
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/errdesc.go b/vendor/github.com/containerd/containerd/remotes/docker/errdesc.go
new file mode 100644
index 000000000..b2bd4d82b
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/errdesc.go
@@ -0,0 +1,154 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "fmt"
+ "net/http"
+ "sort"
+ "sync"
+)
+
+var (
+ errorCodeToDescriptors = map[ErrorCode]ErrorDescriptor{}
+ idToDescriptors = map[string]ErrorDescriptor{}
+ groupToDescriptors = map[string][]ErrorDescriptor{}
+)
+
+var (
+ // ErrorCodeUnknown is a generic error that can be used as a last
+ // resort if there is no situation-specific error message available
+ ErrorCodeUnknown = Register("errcode", ErrorDescriptor{
+ Value: "UNKNOWN",
+ Message: "unknown error",
+ Description: `Generic error returned when the error does not have an
+ API classification.`,
+ HTTPStatusCode: http.StatusInternalServerError,
+ })
+
+ // ErrorCodeUnsupported is returned when an operation is not supported.
+ ErrorCodeUnsupported = Register("errcode", ErrorDescriptor{
+ Value: "UNSUPPORTED",
+ Message: "The operation is unsupported.",
+ Description: `The operation was unsupported due to a missing
+ implementation or invalid set of parameters.`,
+ HTTPStatusCode: http.StatusMethodNotAllowed,
+ })
+
+ // ErrorCodeUnauthorized is returned if a request requires
+ // authentication.
+ ErrorCodeUnauthorized = Register("errcode", ErrorDescriptor{
+ Value: "UNAUTHORIZED",
+ Message: "authentication required",
+ Description: `The access controller was unable to authenticate
+ the client. Often this will be accompanied by a
+ Www-Authenticate HTTP response header indicating how to
+ authenticate.`,
+ HTTPStatusCode: http.StatusUnauthorized,
+ })
+
+ // ErrorCodeDenied is returned if a client does not have sufficient
+ // permission to perform an action.
+ ErrorCodeDenied = Register("errcode", ErrorDescriptor{
+ Value: "DENIED",
+ Message: "requested access to the resource is denied",
+ Description: `The access controller denied access for the
+ operation on a resource.`,
+ HTTPStatusCode: http.StatusForbidden,
+ })
+
+ // ErrorCodeUnavailable provides a common error to report unavailability
+ // of a service or endpoint.
+ ErrorCodeUnavailable = Register("errcode", ErrorDescriptor{
+ Value: "UNAVAILABLE",
+ Message: "service unavailable",
+ Description: "Returned when a service is not available",
+ HTTPStatusCode: http.StatusServiceUnavailable,
+ })
+
+ // ErrorCodeTooManyRequests is returned if a client attempts too many
+ // times to contact a service endpoint.
+ ErrorCodeTooManyRequests = Register("errcode", ErrorDescriptor{
+ Value: "TOOMANYREQUESTS",
+ Message: "too many requests",
+ Description: `Returned when a client attempts to contact a
+ service too many times`,
+ HTTPStatusCode: http.StatusTooManyRequests,
+ })
+)
+
+var nextCode = 1000
+var registerLock sync.Mutex
+
+// Register will make the passed-in error known to the environment and
+// return a new ErrorCode
+func Register(group string, descriptor ErrorDescriptor) ErrorCode {
+ registerLock.Lock()
+ defer registerLock.Unlock()
+
+ descriptor.Code = ErrorCode(nextCode)
+
+ if _, ok := idToDescriptors[descriptor.Value]; ok {
+ panic(fmt.Sprintf("ErrorValue %q is already registered", descriptor.Value))
+ }
+ if _, ok := errorCodeToDescriptors[descriptor.Code]; ok {
+ panic(fmt.Sprintf("ErrorCode %v is already registered", descriptor.Code))
+ }
+
+ groupToDescriptors[group] = append(groupToDescriptors[group], descriptor)
+ errorCodeToDescriptors[descriptor.Code] = descriptor
+ idToDescriptors[descriptor.Value] = descriptor
+
+ nextCode++
+ return descriptor.Code
+}
+
+type byValue []ErrorDescriptor
+
+func (a byValue) Len() int { return len(a) }
+func (a byValue) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
+func (a byValue) Less(i, j int) bool { return a[i].Value < a[j].Value }
+
+// GetGroupNames returns the list of Error group names that are registered
+func GetGroupNames() []string {
+ keys := []string{}
+
+ for k := range groupToDescriptors {
+ keys = append(keys, k)
+ }
+ sort.Strings(keys)
+ return keys
+}
+
+// GetErrorCodeGroup returns the named group of error descriptors
+func GetErrorCodeGroup(name string) []ErrorDescriptor {
+ desc := groupToDescriptors[name]
+ sort.Sort(byValue(desc))
+ return desc
+}
+
+// GetErrorAllDescriptors returns a slice of all ErrorDescriptors that are
+// registered, irrespective of what group they're in
+func GetErrorAllDescriptors() []ErrorDescriptor {
+ result := []ErrorDescriptor{}
+
+ for _, group := range GetGroupNames() {
+ result = append(result, GetErrorCodeGroup(group)...)
+ }
+ sort.Sort(byValue(result))
+ return result
+}
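
Register is also the extension point for project-specific codes; a sketch where the "example" group and the RATELIMITED code are hypothetical additions, not part of this package:

package main

import (
	"fmt"
	"net/http"

	"github.com/containerd/containerd/remotes/docker"
)

// ErrorCodeRateLimited is a hypothetical code registered alongside the
// built-in "errcode" group; Register panics on duplicate values or codes.
var ErrorCodeRateLimited = docker.Register("example", docker.ErrorDescriptor{
	Value:          "RATELIMITED",
	Message:        "request was rate limited",
	Description:    "Returned when the upstream registry throttles the client.",
	HTTPStatusCode: http.StatusTooManyRequests,
})

func main() {
	fmt.Println(ErrorCodeRateLimited.String()) // RATELIMITED
	fmt.Println(docker.GetGroupNames())        // [errcode example]
	fmt.Println(docker.ParseErrorCode("RATELIMITED") == ErrorCodeRateLimited) // true
}
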
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/fetcher.go b/vendor/github.com/containerd/containerd/remotes/docker/fetcher.go
new file mode 100644
index 000000000..c4c401ad1
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/fetcher.go
@@ -0,0 +1,314 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "io"
+ "net/http"
+ "net/url"
+ "strings"
+
+ "github.com/containerd/log"
+ digest "github.com/opencontainers/go-digest"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/images"
+)
+
+type dockerFetcher struct {
+ *dockerBase
+}
+
+func (r dockerFetcher) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error) {
+ ctx = log.WithLogger(ctx, log.G(ctx).WithField("digest", desc.Digest))
+
+ hosts := r.filterHosts(HostCapabilityPull)
+ if len(hosts) == 0 {
+ return nil, fmt.Errorf("no pull hosts: %w", errdefs.ErrNotFound)
+ }
+
+ ctx, err := ContextWithRepositoryScope(ctx, r.refspec, false)
+ if err != nil {
+ return nil, err
+ }
+
+ return newHTTPReadSeeker(desc.Size, func(offset int64) (io.ReadCloser, error) {
+ // first, try to fetch via external URLs
+ for _, us := range desc.URLs {
+ u, err := url.Parse(us)
+ if err != nil {
+ log.G(ctx).WithError(err).Debugf("failed to parse %q", us)
+ continue
+ }
+ if u.Scheme != "http" && u.Scheme != "https" {
+ log.G(ctx).Debug("non-http(s) alternative url is unsupported")
+ continue
+ }
+ ctx = log.WithLogger(ctx, log.G(ctx).WithField("url", u))
+ log.G(ctx).Info("request")
+
+ // Try this first, parse it
+ host := RegistryHost{
+ Client: http.DefaultClient,
+ Host: u.Host,
+ Scheme: u.Scheme,
+ Path: u.Path,
+ Capabilities: HostCapabilityPull,
+ }
+ req := r.request(host, http.MethodGet)
+ // Strip namespace from base
+ req.path = u.Path
+ if u.RawQuery != "" {
+ req.path = req.path + "?" + u.RawQuery
+ }
+
+ rc, err := r.open(ctx, req, desc.MediaType, offset)
+ if err != nil {
+ if errdefs.IsNotFound(err) {
+ continue // try one of the other urls.
+ }
+
+ return nil, err
+ }
+
+ return rc, nil
+ }
+
+ // Try manifests endpoints for manifests types
+ switch desc.MediaType {
+ case images.MediaTypeDockerSchema2Manifest, images.MediaTypeDockerSchema2ManifestList,
+ images.MediaTypeDockerSchema1Manifest,
+ ocispec.MediaTypeImageManifest, ocispec.MediaTypeImageIndex:
+
+ var firstErr error
+ for _, host := range r.hosts {
+ req := r.request(host, http.MethodGet, "manifests", desc.Digest.String())
+ if err := req.addNamespace(r.refspec.Hostname()); err != nil {
+ return nil, err
+ }
+
+ rc, err := r.open(ctx, req, desc.MediaType, offset)
+ if err != nil {
+ // Store the error for referencing later
+ if firstErr == nil {
+ firstErr = err
+ }
+ continue // try another host
+ }
+
+ return rc, nil
+ }
+
+ return nil, firstErr
+ }
+
+ // Finally use blobs endpoints
+ var firstErr error
+ for _, host := range r.hosts {
+ req := r.request(host, http.MethodGet, "blobs", desc.Digest.String())
+ if err := req.addNamespace(r.refspec.Hostname()); err != nil {
+ return nil, err
+ }
+
+ rc, err := r.open(ctx, req, desc.MediaType, offset)
+ if err != nil {
+ // Store the error for referencing later
+ if firstErr == nil {
+ firstErr = err
+ }
+ continue // try another host
+ }
+
+ return rc, nil
+ }
+
+ if errdefs.IsNotFound(firstErr) {
+ firstErr = fmt.Errorf("could not fetch content descriptor %v (%v) from remote: %w",
+ desc.Digest, desc.MediaType, errdefs.ErrNotFound,
+ )
+ }
+
+ return nil, firstErr
+
+ })
+}
+
+func (r dockerFetcher) createGetReq(ctx context.Context, host RegistryHost, ps ...string) (*request, int64, error) {
+ headReq := r.request(host, http.MethodHead, ps...)
+ if err := headReq.addNamespace(r.refspec.Hostname()); err != nil {
+ return nil, 0, err
+ }
+
+ headResp, err := headReq.doWithRetries(ctx, nil)
+ if err != nil {
+ return nil, 0, err
+ }
+ if headResp.Body != nil {
+ headResp.Body.Close()
+ }
+ if headResp.StatusCode > 299 {
+ return nil, 0, fmt.Errorf("unexpected HEAD status code %v: %s", headReq.String(), headResp.Status)
+ }
+
+ getReq := r.request(host, http.MethodGet, ps...)
+ if err := getReq.addNamespace(r.refspec.Hostname()); err != nil {
+ return nil, 0, err
+ }
+ return getReq, headResp.ContentLength, nil
+}
+
+func (r dockerFetcher) FetchByDigest(ctx context.Context, dgst digest.Digest) (io.ReadCloser, ocispec.Descriptor, error) {
+ var desc ocispec.Descriptor
+ ctx = log.WithLogger(ctx, log.G(ctx).WithField("digest", dgst))
+
+ hosts := r.filterHosts(HostCapabilityPull)
+ if len(hosts) == 0 {
+ return nil, desc, fmt.Errorf("no pull hosts: %w", errdefs.ErrNotFound)
+ }
+
+ ctx, err := ContextWithRepositoryScope(ctx, r.refspec, false)
+ if err != nil {
+ return nil, desc, err
+ }
+
+ var (
+ getReq *request
+ sz int64
+ firstErr error
+ )
+
+ for _, host := range r.hosts {
+ getReq, sz, err = r.createGetReq(ctx, host, "blobs", dgst.String())
+ if err == nil {
+ break
+ }
+ // Store the error for referencing later
+ if firstErr == nil {
+ firstErr = err
+ }
+ }
+
+ if getReq == nil {
+ // Fall back to the "manifests" endpoint
+ for _, host := range r.hosts {
+ getReq, sz, err = r.createGetReq(ctx, host, "manifests", dgst.String())
+ if err == nil {
+ break
+ }
+ // Store the error for referencing later
+ if firstErr == nil {
+ firstErr = err
+ }
+ }
+ }
+
+ if getReq == nil {
+ if errdefs.IsNotFound(firstErr) {
+ firstErr = fmt.Errorf("could not fetch content %v from remote: %w", dgst, errdefs.ErrNotFound)
+ }
+ if firstErr == nil {
+ firstErr = fmt.Errorf("could not fetch content %v from remote: (unknown)", dgst)
+ }
+ return nil, desc, firstErr
+ }
+
+ seeker, err := newHTTPReadSeeker(sz, func(offset int64) (io.ReadCloser, error) {
+ return r.open(ctx, getReq, "", offset)
+ })
+ if err != nil {
+ return nil, desc, err
+ }
+
+ desc = ocispec.Descriptor{
+ MediaType: "application/octet-stream",
+ Digest: dgst,
+ Size: sz,
+ }
+ return seeker, desc, nil
+}
+
+func (r dockerFetcher) open(ctx context.Context, req *request, mediatype string, offset int64) (_ io.ReadCloser, retErr error) {
+ if mediatype == "" {
+ req.header.Set("Accept", "*/*")
+ } else {
+ req.header.Set("Accept", strings.Join([]string{mediatype, `*/*`}, ", "))
+ }
+
+ if offset > 0 {
+ // Note: "Accept-Ranges: bytes" cannot be trusted as some endpoints
+ // will return the header without supporting the range. The content
+ // range must always be checked.
+ req.header.Set("Range", fmt.Sprintf("bytes=%d-", offset))
+ }
+
+ resp, err := req.doWithRetries(ctx, nil)
+ if err != nil {
+ return nil, err
+ }
+ defer func() {
+ if retErr != nil {
+ resp.Body.Close()
+ }
+ }()
+
+ if resp.StatusCode > 299 {
+ // TODO(stevvooe): When doing an offset-specific request, we should
+ // really distinguish between a 206 and a 200. In the case of 200, we
+ // can discard the bytes, hiding the seek behavior from the
+ // implementation.
+
+ if resp.StatusCode == http.StatusNotFound {
+ return nil, fmt.Errorf("content at %v not found: %w", req.String(), errdefs.ErrNotFound)
+ }
+ var registryErr Errors
+ if err := json.NewDecoder(resp.Body).Decode(&registryErr); err != nil || registryErr.Len() < 1 {
+ return nil, fmt.Errorf("unexpected status code %v: %v", req.String(), resp.Status)
+ }
+ return nil, fmt.Errorf("unexpected status code %v: %s - Server message: %s", req.String(), resp.Status, registryErr.Error())
+ }
+ if offset > 0 {
+ cr := resp.Header.Get("content-range")
+ if cr != "" {
+ if !strings.HasPrefix(cr, fmt.Sprintf("bytes %d-", offset)) {
+ return nil, fmt.Errorf("unhandled content range in response: %v", cr)
+
+ }
+ } else {
+ // TODO: Should cases where a content range is used
+ // without the proper response header be considered?
+ // 206 responses?
+
+ // Discard up to offset
+ // Could use buffer pool here but this case should be rare
+ n, err := io.Copy(io.Discard, io.LimitReader(resp.Body, offset))
+ if err != nil {
+ return nil, fmt.Errorf("failed to discard to offset: %w", err)
+ }
+ if n != offset {
+ return nil, errors.New("unable to discard to offset")
+ }
+
+ }
+ }
+
+ return resp.Body, nil
+}
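
The offset handling above boils down to "send Range: bytes=N-, then verify Content-Range, or discard N bytes if the server ignored the range". A standalone sketch of that contract; resumeAt is a hypothetical helper (not part of this package) and the URL is a placeholder:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// resumeAt mirrors the fetcher's offset handling: request from offset and
// verify the server actually honored the range before trusting the body.
func resumeAt(client *http.Client, url string, offset int64) (io.ReadCloser, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	if offset > 0 {
		// "Accept-Ranges: bytes" alone cannot be trusted; the response
		// header must be checked as well.
		req.Header.Set("Range", fmt.Sprintf("bytes=%d-", offset))
	}
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	if offset > 0 {
		if cr := resp.Header.Get("Content-Range"); cr != "" {
			if !strings.HasPrefix(cr, fmt.Sprintf("bytes %d-", offset)) {
				resp.Body.Close()
				return nil, fmt.Errorf("unhandled content range in response: %v", cr)
			}
		} else {
			// Server ignored the range: discard the first offset bytes.
			if _, err := io.CopyN(io.Discard, resp.Body, offset); err != nil {
				resp.Body.Close()
				return nil, fmt.Errorf("failed to discard to offset: %w", err)
			}
		}
	}
	return resp.Body, nil
}

func main() {
	rc, err := resumeAt(http.DefaultClient, "https://example.com/blob", 0)
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	n, _ := io.Copy(io.Discard, rc)
	fmt.Println("read", n, "bytes")
}
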
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/fetcher_fuzz.go b/vendor/github.com/containerd/containerd/remotes/docker/fetcher_fuzz.go
new file mode 100644
index 000000000..f396a74f4
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/fetcher_fuzz.go
@@ -0,0 +1,74 @@
+//go:build gofuzz
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "context"
+ "fmt"
+ "io"
+ "net/http"
+ "net/http/httptest"
+ "net/url"
+)
+
+func FuzzFetcher(data []byte) int {
+ dataLen := len(data)
+ if dataLen == 0 {
+ return -1
+ }
+
+ s := httptest.NewServer(http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
+ rw.Header().Set("content-range", fmt.Sprintf("bytes %d-%d/%d", 0, dataLen-1, dataLen))
+ rw.Header().Set("content-length", fmt.Sprintf("%d", dataLen))
+ rw.Write(data)
+ }))
+ defer s.Close()
+
+ u, err := url.Parse(s.URL)
+ if err != nil {
+ return 0
+ }
+
+ f := dockerFetcher{&dockerBase{
+ repository: "nonempty",
+ }}
+ host := RegistryHost{
+ Client: s.Client(),
+ Host: u.Host,
+ Scheme: u.Scheme,
+ Path: u.Path,
+ }
+
+ ctx := context.Background()
+ req := f.request(host, http.MethodGet)
+ rc, err := f.open(ctx, req, "", 0)
+ if err != nil {
+ return 0
+ }
+ b, err := io.ReadAll(rc)
+ if err != nil {
+ return 0
+ }
+
+ expected := data
+ if len(b) != len(expected) {
+ panic("len of request is not equal to len of expected but should be")
+ }
+ return 1
+}
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/handler.go b/vendor/github.com/containerd/containerd/remotes/docker/handler.go
new file mode 100644
index 000000000..ccec49013
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/handler.go
@@ -0,0 +1,149 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "context"
+ "fmt"
+ "net/url"
+ "strings"
+
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/images"
+ "github.com/containerd/containerd/labels"
+ "github.com/containerd/containerd/reference"
+ "github.com/containerd/log"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// AppendDistributionSourceLabel updates the blob's label with the distribution source.
+func AppendDistributionSourceLabel(manager content.Manager, ref string) (images.HandlerFunc, error) {
+ refspec, err := reference.Parse(ref)
+ if err != nil {
+ return nil, err
+ }
+
+ u, err := url.Parse("dummy://" + refspec.Locator)
+ if err != nil {
+ return nil, err
+ }
+
+ source, repo := u.Hostname(), strings.TrimPrefix(u.Path, "/")
+ return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ info, err := manager.Info(ctx, desc.Digest)
+ if err != nil {
+ return nil, err
+ }
+
+ key := distributionSourceLabelKey(source)
+
+ originLabel := ""
+ if info.Labels != nil {
+ originLabel = info.Labels[key]
+ }
+ value := appendDistributionSourceLabel(originLabel, repo)
+
+ // The repo name is limited to 256 characters, and the distribution
+ // label might hit the label size limit when the blob is a layer
+ // shared by very many repositories.
+ if err := labels.Validate(key, value); err != nil {
+ log.G(ctx).Warnf("skip to append distribution label: %s", err)
+ return nil, nil
+ }
+
+ info = content.Info{
+ Digest: desc.Digest,
+ Labels: map[string]string{
+ key: value,
+ },
+ }
+ _, err = manager.Update(ctx, info, fmt.Sprintf("labels.%s", key))
+ return nil, err
+ }, nil
+}
+
+func appendDistributionSourceLabel(originLabel, repo string) string {
+ repos := []string{}
+ if originLabel != "" {
+ repos = strings.Split(originLabel, ",")
+ }
+ repos = append(repos, repo)
+
+ // use the empty string to represent duplicate items
+ for i := 1; i < len(repos); i++ {
+ tmp, j := repos[i], i-1
+ for ; j >= 0 && repos[j] >= tmp; j-- {
+ if repos[j] == tmp {
+ tmp = ""
+ }
+ repos[j+1] = repos[j]
+ }
+ repos[j+1] = tmp
+ }
+
+ i := 0
+ for ; i < len(repos) && repos[i] == ""; i++ {
+ }
+
+ return strings.Join(repos[i:], ",")
+}
+
+func distributionSourceLabelKey(source string) string {
+ return fmt.Sprintf("%s.%s", labels.LabelDistributionSource, source)
+}
+
+// selectRepositoryMountCandidate selects the repo that shares the longest
+// common prefix of path components with the target as the candidate.
+func selectRepositoryMountCandidate(refspec reference.Spec, sources map[string]string) string {
+ u, err := url.Parse("dummy://" + refspec.Locator)
+ if err != nil {
+ // NOTE: parsing basically cannot fail here
+ return ""
+ }
+
+ source, target := u.Hostname(), strings.TrimPrefix(u.Path, "/")
+ repoLabel, ok := sources[distributionSourceLabelKey(source)]
+ if !ok || repoLabel == "" {
+ return ""
+ }
+
+ n, match := 0, ""
+ components := strings.Split(target, "/")
+ for _, repo := range strings.Split(repoLabel, ",") {
+ // the target repo is not a candidate
+ if repo == target {
+ continue
+ }
+
+ if l := commonPrefixComponents(components, repo); l >= n {
+ n, match = l, repo
+ }
+ }
+ return match
+}
+
+func commonPrefixComponents(components []string, target string) int {
+ targetComponents := strings.Split(target, "/")
+
+ i := 0
+ for ; i < len(components) && i < len(targetComponents); i++ {
+ if components[i] != targetComponents[i] {
+ break
+ }
+ }
+ return i
+}
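
The insertion sort with empty-string dedupe above is easiest to verify with concrete inputs; a test-style sketch, assumed to live in this package since the helper is unexported:

package docker

import "testing"

func TestAppendDistributionSourceLabelSketch(t *testing.T) {
	cases := []struct{ orig, repo, want string }{
		{"", "library/busybox", "library/busybox"},
		// entries are kept sorted
		{"library/busybox", "library/alpine", "library/alpine,library/busybox"},
		// duplicates collapse to a single entry
		{"library/alpine,library/busybox", "library/busybox", "library/alpine,library/busybox"},
	}
	for _, c := range cases {
		if got := appendDistributionSourceLabel(c.orig, c.repo); got != c.want {
			t.Errorf("appendDistributionSourceLabel(%q, %q) = %q, want %q", c.orig, c.repo, got, c.want)
		}
	}
}
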
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/httpreadseeker.go b/vendor/github.com/containerd/containerd/remotes/docker/httpreadseeker.go
new file mode 100644
index 000000000..deb888cbc
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/httpreadseeker.go
@@ -0,0 +1,179 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+
+ "github.com/containerd/log"
+
+ "github.com/containerd/containerd/errdefs"
+)
+
+const maxRetry = 3
+
+type httpReadSeeker struct {
+ size int64
+ offset int64
+ rc io.ReadCloser
+ open func(offset int64) (io.ReadCloser, error)
+ closed bool
+
+ errsWithNoProgress int
+}
+
+func newHTTPReadSeeker(size int64, open func(offset int64) (io.ReadCloser, error)) (io.ReadCloser, error) {
+ return &httpReadSeeker{
+ size: size,
+ open: open,
+ }, nil
+}
+
+func (hrs *httpReadSeeker) Read(p []byte) (n int, err error) {
+ if hrs.closed {
+ return 0, io.EOF
+ }
+
+ rd, err := hrs.reader()
+ if err != nil {
+ return 0, err
+ }
+
+ n, err = rd.Read(p)
+ hrs.offset += int64(n)
+ if n > 0 || err == nil {
+ hrs.errsWithNoProgress = 0
+ }
+ if err == io.ErrUnexpectedEOF {
+ // connection closed unexpectedly. try reconnecting.
+ if n == 0 {
+ hrs.errsWithNoProgress++
+ if hrs.errsWithNoProgress > maxRetry {
+ return // too many retries for this offset with no progress
+ }
+ }
+ if hrs.rc != nil {
+ if clsErr := hrs.rc.Close(); clsErr != nil {
+ log.L.WithError(clsErr).Error("httpReadSeeker: failed to close ReadCloser")
+ }
+ hrs.rc = nil
+ }
+ if _, err2 := hrs.reader(); err2 == nil {
+ return n, nil
+ }
+ } else if err == io.EOF {
+ // The CRI's imagePullProgressTimeout relies on responseBody.Close to
+ // update the progress monitor's status. If the err is io.EOF, close
+ // the connection since there is no more available data.
+ if hrs.rc != nil {
+ if clsErr := hrs.rc.Close(); clsErr != nil {
+ log.L.WithError(clsErr).Error("httpReadSeeker: failed to close ReadCloser after io.EOF")
+ }
+ hrs.rc = nil
+ }
+ }
+ return
+}
+
+func (hrs *httpReadSeeker) Close() error {
+ if hrs.closed {
+ return nil
+ }
+ hrs.closed = true
+ if hrs.rc != nil {
+ return hrs.rc.Close()
+ }
+
+ return nil
+}
+
+func (hrs *httpReadSeeker) Seek(offset int64, whence int) (int64, error) {
+ if hrs.closed {
+ return 0, fmt.Errorf("Fetcher.Seek: closed: %w", errdefs.ErrUnavailable)
+ }
+
+ abs := hrs.offset
+ switch whence {
+ case io.SeekStart:
+ abs = offset
+ case io.SeekCurrent:
+ abs += offset
+ case io.SeekEnd:
+ if hrs.size == -1 {
+ return 0, fmt.Errorf("Fetcher.Seek: unknown size, cannot seek from end: %w", errdefs.ErrUnavailable)
+ }
+ abs = hrs.size + offset
+ default:
+ return 0, fmt.Errorf("Fetcher.Seek: invalid whence: %w", errdefs.ErrInvalidArgument)
+ }
+
+ if abs < 0 {
+ return 0, fmt.Errorf("Fetcher.Seek: negative offset: %w", errdefs.ErrInvalidArgument)
+ }
+
+ if abs != hrs.offset {
+ if hrs.rc != nil {
+ if err := hrs.rc.Close(); err != nil {
+ log.L.WithError(err).Error("Fetcher.Seek: failed to close ReadCloser")
+ }
+
+ hrs.rc = nil
+ }
+
+ hrs.offset = abs
+ }
+
+ return hrs.offset, nil
+}
+
+func (hrs *httpReadSeeker) reader() (io.Reader, error) {
+ if hrs.rc != nil {
+ return hrs.rc, nil
+ }
+
+ if hrs.size == -1 || hrs.offset < hrs.size {
+ // only try to reopen the body request if we are seeking to a value
+ // less than the actual size.
+ if hrs.open == nil {
+ return nil, fmt.Errorf("cannot open: %w", errdefs.ErrNotImplemented)
+ }
+
+ rc, err := hrs.open(hrs.offset)
+ if err != nil {
+ return nil, fmt.Errorf("httpReadSeeker: failed open: %w", err)
+ }
+
+ if hrs.rc != nil {
+ if err := hrs.rc.Close(); err != nil {
+ log.L.WithError(err).Error("httpReadSeeker: failed to close ReadCloser")
+ }
+ }
+ hrs.rc = rc
+ } else {
+ // There is an edge case here where offset == size of the content.
+ // Reopening at that offset would likely fail, but the requested
+ // length is already satisfied, so err on the side of committing the
+ // content and return an empty reader instead.
+
+ hrs.rc = io.NopCloser(bytes.NewReader([]byte{}))
+ }
+
+ return hrs.rc, nil
+}
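
Because newHTTPReadSeeker only depends on an open(offset) callback, its reopen-on-seek behavior can be exercised without a network; a test-style sketch, assumed to live in this package since the constructor is unexported:

package docker

import (
	"bytes"
	"io"
	"testing"
)

func TestHTTPReadSeekerSketch(t *testing.T) {
	blob := []byte("0123456789")
	rs, err := newHTTPReadSeeker(int64(len(blob)), func(offset int64) (io.ReadCloser, error) {
		// A real open issues a ranged HTTP request; here we just slice.
		return io.NopCloser(bytes.NewReader(blob[offset:])), nil
	})
	if err != nil {
		t.Fatal(err)
	}

	// Seek does not reopen immediately; the next Read opens at offset 5.
	if _, err := rs.(io.Seeker).Seek(5, io.SeekStart); err != nil {
		t.Fatal(err)
	}
	rest, err := io.ReadAll(rs)
	if err != nil {
		t.Fatal(err)
	}
	if string(rest) != "56789" {
		t.Fatalf("got %q, want %q", rest, "56789")
	}
}
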
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/pusher.go b/vendor/github.com/containerd/containerd/remotes/docker/pusher.go
new file mode 100644
index 000000000..f97ab144e
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/pusher.go
@@ -0,0 +1,563 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "io"
+ "net/http"
+ "net/url"
+ "path"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/containerd/log"
+ digest "github.com/opencontainers/go-digest"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/images"
+ "github.com/containerd/containerd/remotes"
+ remoteserrors "github.com/containerd/containerd/remotes/errors"
+)
+
+type dockerPusher struct {
+ *dockerBase
+ object string
+
+ // TODO: namespace tracker
+ tracker StatusTracker
+}
+
+// Writer implements the Ingester API of the content store. This allows the
+// client to receive ErrUnavailable when there is already an on-going upload.
+// Note that the tracker MUST implement the StatusTrackLocker interface to
+// avoid a race condition on the StatusTracker.
+func (p dockerPusher) Writer(ctx context.Context, opts ...content.WriterOpt) (content.Writer, error) {
+ var wOpts content.WriterOpts
+ for _, opt := range opts {
+ if err := opt(&wOpts); err != nil {
+ return nil, err
+ }
+ }
+ if wOpts.Ref == "" {
+ return nil, fmt.Errorf("ref must not be empty: %w", errdefs.ErrInvalidArgument)
+ }
+ return p.push(ctx, wOpts.Desc, wOpts.Ref, true)
+}
+
+func (p dockerPusher) Push(ctx context.Context, desc ocispec.Descriptor) (content.Writer, error) {
+ return p.push(ctx, desc, remotes.MakeRefKey(ctx, desc), false)
+}
+
+func (p dockerPusher) push(ctx context.Context, desc ocispec.Descriptor, ref string, unavailableOnFail bool) (content.Writer, error) {
+ if l, ok := p.tracker.(StatusTrackLocker); ok {
+ l.Lock(ref)
+ defer l.Unlock(ref)
+ }
+ ctx, err := ContextWithRepositoryScope(ctx, p.refspec, true)
+ if err != nil {
+ return nil, err
+ }
+ status, err := p.tracker.GetStatus(ref)
+ if err == nil {
+ if status.Committed && status.Offset == status.Total {
+ return nil, fmt.Errorf("ref %v: %w", ref, errdefs.ErrAlreadyExists)
+ }
+ if unavailableOnFail && status.ErrClosed == nil {
+ // Another push of this ref is happening elsewhere. The rest of the function
+ // will continue only when `errdefs.IsNotFound(err) == true` (i.e. there
+ // is no actively-tracked ref already).
+ return nil, fmt.Errorf("push is on-going: %w", errdefs.ErrUnavailable)
+ }
+ // TODO: Handle incomplete status
+ } else if !errdefs.IsNotFound(err) {
+ return nil, fmt.Errorf("failed to get status: %w", err)
+ }
+
+ hosts := p.filterHosts(HostCapabilityPush)
+ if len(hosts) == 0 {
+ return nil, fmt.Errorf("no push hosts: %w", errdefs.ErrNotFound)
+ }
+
+ var (
+ isManifest bool
+ existCheck []string
+ host = hosts[0]
+ )
+
+ switch desc.MediaType {
+ case images.MediaTypeDockerSchema2Manifest, images.MediaTypeDockerSchema2ManifestList,
+ ocispec.MediaTypeImageManifest, ocispec.MediaTypeImageIndex:
+ isManifest = true
+ existCheck = getManifestPath(p.object, desc.Digest)
+ default:
+ existCheck = []string{"blobs", desc.Digest.String()}
+ }
+
+ req := p.request(host, http.MethodHead, existCheck...)
+ req.header.Set("Accept", strings.Join([]string{desc.MediaType, `*/*`}, ", "))
+
+ log.G(ctx).WithField("url", req.String()).Debugf("checking and pushing to")
+
+ resp, err := req.doWithRetries(ctx, nil)
+ if err != nil {
+ if !errors.Is(err, ErrInvalidAuthorization) {
+ return nil, err
+ }
+ log.G(ctx).WithError(err).Debugf("Unable to check existence, continuing with push")
+ } else {
+ if resp.StatusCode == http.StatusOK {
+ var exists bool
+ if isManifest && existCheck[1] != desc.Digest.String() {
+ dgstHeader := digest.Digest(resp.Header.Get("Docker-Content-Digest"))
+ if dgstHeader == desc.Digest {
+ exists = true
+ }
+ } else {
+ exists = true
+ }
+
+ if exists {
+ p.tracker.SetStatus(ref, Status{
+ Committed: true,
+ PushStatus: PushStatus{
+ Exists: true,
+ },
+ Status: content.Status{
+ Ref: ref,
+ Total: desc.Size,
+ Offset: desc.Size,
+ // TODO: Set updated time?
+ },
+ })
+ resp.Body.Close()
+ return nil, fmt.Errorf("content %v on remote: %w", desc.Digest, errdefs.ErrAlreadyExists)
+ }
+ } else if resp.StatusCode != http.StatusNotFound {
+ err := remoteserrors.NewUnexpectedStatusErr(resp)
+ log.G(ctx).WithField("resp", resp).WithField("body", string(err.(remoteserrors.ErrUnexpectedStatus).Body)).Debug("unexpected response")
+ resp.Body.Close()
+ return nil, err
+ }
+ resp.Body.Close()
+ }
+
+ if isManifest {
+ putPath := getManifestPath(p.object, desc.Digest)
+ req = p.request(host, http.MethodPut, putPath...)
+ req.header.Add("Content-Type", desc.MediaType)
+ } else {
+ // Start upload request
+ req = p.request(host, http.MethodPost, "blobs", "uploads/")
+
+ mountedFrom := ""
+ var resp *http.Response
+ if fromRepo := selectRepositoryMountCandidate(p.refspec, desc.Annotations); fromRepo != "" {
+ preq := requestWithMountFrom(req, desc.Digest.String(), fromRepo)
+ pctx := ContextWithAppendPullRepositoryScope(ctx, fromRepo)
+
+ // NOTE: the fromRepo might be a private repo, and the
+ // auth service can still grant a token without error,
+ // but the POST request will then fail with a 401.
+ //
+ // For a private repo, we should remove the mount-from
+ // query and send the request again.
+ resp, err = preq.doWithRetries(pctx, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ switch resp.StatusCode {
+ case http.StatusUnauthorized:
+ log.G(ctx).Debugf("failed to mount from repository %s", fromRepo)
+
+ resp.Body.Close()
+ resp = nil
+ case http.StatusCreated:
+ mountedFrom = path.Join(p.refspec.Hostname(), fromRepo)
+ }
+ }
+
+ if resp == nil {
+ resp, err = req.doWithRetries(ctx, nil)
+ if err != nil {
+ if errors.Is(err, ErrInvalidAuthorization) {
+ return nil, fmt.Errorf("push access denied, repository does not exist or may require authorization: %w", err)
+ }
+ return nil, err
+ }
+ }
+ defer resp.Body.Close()
+
+ switch resp.StatusCode {
+ case http.StatusOK, http.StatusAccepted, http.StatusNoContent:
+ case http.StatusCreated:
+ p.tracker.SetStatus(ref, Status{
+ Committed: true,
+ PushStatus: PushStatus{
+ MountedFrom: mountedFrom,
+ },
+ Status: content.Status{
+ Ref: ref,
+ Total: desc.Size,
+ Offset: desc.Size,
+ },
+ })
+ return nil, fmt.Errorf("content %v on remote: %w", desc.Digest, errdefs.ErrAlreadyExists)
+ default:
+ err := remoteserrors.NewUnexpectedStatusErr(resp)
+ log.G(ctx).WithField("resp", resp).WithField("body", string(err.(remoteserrors.ErrUnexpectedStatus).Body)).Debug("unexpected response")
+ return nil, err
+ }
+
+ var (
+ location = resp.Header.Get("Location")
+ lurl *url.URL
+ lhost = host
+ )
+ // Support paths without host in location
+ if strings.HasPrefix(location, "/") {
+ lurl, err = url.Parse(lhost.Scheme + "://" + lhost.Host + location)
+ if err != nil {
+ return nil, fmt.Errorf("unable to parse location %v: %w", location, err)
+ }
+ } else {
+ if !strings.Contains(location, "://") {
+ location = lhost.Scheme + "://" + location
+ }
+ lurl, err = url.Parse(location)
+ if err != nil {
+ return nil, fmt.Errorf("unable to parse location %v: %w", location, err)
+ }
+
+ if lurl.Host != lhost.Host || lhost.Scheme != lurl.Scheme {
+ lhost.Scheme = lurl.Scheme
+ lhost.Host = lurl.Host
+
+ // Check if different than what was requested, accounting for fallback in the transport layer
+ requested := resp.Request.URL
+ if requested.Host != lhost.Host || requested.Scheme != lhost.Scheme {
+ // Strip authorizer if change to host or scheme
+ lhost.Authorizer = nil
+ log.G(ctx).WithField("host", lhost.Host).WithField("scheme", lhost.Scheme).Debug("upload changed destination, authorizer removed")
+ }
+ }
+ }
+ q := lurl.Query()
+ q.Add("digest", desc.Digest.String())
+
+ req = p.request(lhost, http.MethodPut)
+ req.header.Set("Content-Type", "application/octet-stream")
+ req.path = lurl.Path + "?" + q.Encode()
+ }
+ p.tracker.SetStatus(ref, Status{
+ Status: content.Status{
+ Ref: ref,
+ Total: desc.Size,
+ Expected: desc.Digest,
+ StartedAt: time.Now(),
+ },
+ })
+
+ // TODO: Support chunked upload
+
+ pushw := newPushWriter(p.dockerBase, ref, desc.Digest, p.tracker, isManifest)
+
+ req.body = func() (io.ReadCloser, error) {
+ pr, pw := io.Pipe()
+ pushw.setPipe(pw)
+ return pr, nil
+ }
+ req.size = desc.Size
+
+ go func() {
+ resp, err := req.doWithRetries(ctx, nil)
+ if err != nil {
+ pushw.setError(err)
+ return
+ }
+
+ switch resp.StatusCode {
+ case http.StatusOK, http.StatusCreated, http.StatusNoContent:
+ default:
+ err := remoteserrors.NewUnexpectedStatusErr(resp)
+ log.G(ctx).WithField("resp", resp).WithField("body", string(err.(remoteserrors.ErrUnexpectedStatus).Body)).Debug("unexpected response")
+ pushw.setError(err)
+ return
+ }
+ pushw.setResponse(resp)
+ }()
+
+ return pushw, nil
+}
+
+func getManifestPath(object string, dgst digest.Digest) []string {
+ if i := strings.IndexByte(object, '@'); i >= 0 {
+ if object[i+1:] != dgst.String() {
+ // the object names a different digest; use the digest, not the tag
+ object = ""
+ } else {
+ // strip the "@<digest>" suffix so the tag becomes the registry path
+ object = object[:i]
+ }
+ }
+
+ if object == "" {
+ return []string{"manifests", dgst.String()}
+ }
+
+ return []string{"manifests", object}
+}
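
For illustration, here is a standalone sketch of the three cases the function above handles. The helper is a local copy of the unexported getManifestPath, and the digest value is arbitrary; both are assumptions for the example, not part of the vendored code.

package main

import (
	"fmt"
	"strings"

	"github.com/opencontainers/go-digest"
)

// manifestPath is a local copy of getManifestPath, reproduced only so
// the example runs outside the package.
func manifestPath(object string, dgst digest.Digest) []string {
	if i := strings.IndexByte(object, '@'); i >= 0 {
		if object[i+1:] != dgst.String() {
			object = "" // object names a different digest: push by digest
		} else {
			object = object[:i] // matching digest: push by tag
		}
	}
	if object == "" {
		return []string{"manifests", dgst.String()}
	}
	return []string{"manifests", object}
}

func main() {
	d := digest.FromString("example") // arbitrary digest
	fmt.Println(manifestPath("v1.0", d))             // [manifests v1.0]
	fmt.Println(manifestPath("v1.0@"+d.String(), d)) // [manifests v1.0]
	fmt.Println(manifestPath("v1.0@sha256:0", d))    // [manifests sha256:...]
}
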
+
+type pushWriter struct {
+ base *dockerBase
+ ref string
+
+ pipe *io.PipeWriter
+
+ done chan struct{}
+ closeOnce sync.Once
+
+ pipeC chan *io.PipeWriter
+ respC chan *http.Response
+ errC chan error
+
+ isManifest bool
+
+ expected digest.Digest
+ tracker StatusTracker
+}
+
+func newPushWriter(db *dockerBase, ref string, expected digest.Digest, tracker StatusTracker, isManifest bool) *pushWriter {
+ // Initialize the writer; the channels coordinate the pipe,
+ // response and error between Write/Commit and the upload goroutine.
+ return &pushWriter{
+ base: db,
+ ref: ref,
+ expected: expected,
+ tracker: tracker,
+ pipeC: make(chan *io.PipeWriter, 1),
+ respC: make(chan *http.Response, 1),
+ errC: make(chan error, 1),
+ done: make(chan struct{}),
+ isManifest: isManifest,
+ }
+}
+
+func (pw *pushWriter) setPipe(p *io.PipeWriter) {
+ select {
+ case <-pw.done:
+ case pw.pipeC <- p:
+ }
+}
+
+func (pw *pushWriter) setError(err error) {
+ select {
+ case <-pw.done:
+ case pw.errC <- err:
+ }
+}
+
+func (pw *pushWriter) setResponse(resp *http.Response) {
+ select {
+ case <-pw.done:
+ case pw.respC <- resp:
+ }
+}
+
+func (pw *pushWriter) replacePipe(p *io.PipeWriter) error {
+ if pw.pipe == nil {
+ pw.pipe = p
+ return nil
+ }
+
+ pw.pipe.CloseWithError(content.ErrReset)
+ pw.pipe = p
+
+ // If content has already been written, the bytes
+ // cannot be written again and the caller must reset
+ status, err := pw.tracker.GetStatus(pw.ref)
+ if err != nil {
+ return err
+ }
+ status.Offset = 0
+ status.UpdatedAt = time.Now()
+ pw.tracker.SetStatus(pw.ref, status)
+ return content.ErrReset
+}
+
+func (pw *pushWriter) Write(p []byte) (n int, err error) {
+ status, err := pw.tracker.GetStatus(pw.ref)
+ if err != nil {
+ return n, err
+ }
+
+ if pw.pipe == nil {
+ select {
+ case <-pw.done:
+ return 0, io.ErrClosedPipe
+ case p := <-pw.pipeC:
+ pw.replacePipe(p)
+ }
+ } else {
+ select {
+ case <-pw.done:
+ return 0, io.ErrClosedPipe
+ case p := <-pw.pipeC:
+ return 0, pw.replacePipe(p)
+ default:
+ }
+ }
+
+ n, err = pw.pipe.Write(p)
+ if errors.Is(err, io.ErrClosedPipe) {
+ // if the pipe is closed, we might have the original error on the error
+ // channel - so we should try and get it
+ select {
+ case <-pw.done:
+ case err = <-pw.errC:
+ pw.Close()
+ case p := <-pw.pipeC:
+ return 0, pw.replacePipe(p)
+ case resp := <-pw.respC:
+ pw.setResponse(resp)
+ }
+ }
+ status.Offset += int64(n)
+ status.UpdatedAt = time.Now()
+ pw.tracker.SetStatus(pw.ref, status)
+ return
+}
+
+func (pw *pushWriter) Close() error {
+ // Ensure the done channel is closed, but handle `Close()` being
+ // called multiple times without panicking
+ pw.closeOnce.Do(func() {
+ close(pw.done)
+ })
+ if pw.pipe != nil {
+ status, err := pw.tracker.GetStatus(pw.ref)
+ if err == nil && !status.Committed {
+ // Closing an incomplete writer. Record this as an error so that following write can retry it.
+ status.ErrClosed = errors.New("closed incomplete writer")
+ pw.tracker.SetStatus(pw.ref, status)
+ }
+ return pw.pipe.Close()
+ }
+ return nil
+}
+
+func (pw *pushWriter) Status() (content.Status, error) {
+ status, err := pw.tracker.GetStatus(pw.ref)
+ if err != nil {
+ return content.Status{}, err
+ }
+ return status.Status, nil
+
+}
+
+func (pw *pushWriter) Digest() digest.Digest {
+ // TODO: Get rid of this function?
+ return pw.expected
+}
+
+func (pw *pushWriter) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error {
+ // Check whether read has already thrown an error
+ if _, err := pw.pipe.Write([]byte{}); err != nil && !errors.Is(err, io.ErrClosedPipe) {
+ return fmt.Errorf("pipe error before commit: %w", err)
+ }
+
+ if err := pw.pipe.Close(); err != nil {
+ return err
+ }
+ // TODO: timeout waiting for response
+ var resp *http.Response
+ select {
+ case <-pw.done:
+ return io.ErrClosedPipe
+ case err := <-pw.errC:
+ pw.Close()
+ return err
+ case resp = <-pw.respC:
+ defer resp.Body.Close()
+ case p := <-pw.pipeC:
+ // Check whether the pipe has changed during the commit: Write can
+ // complete successfully even though the pipe was replaced, in which
+ // case the content needs to be reset.
+ return pw.replacePipe(p)
+ }
+
+ // 201 is the return status specified by the distribution spec;
+ // some registries return 200, 202 or 204 instead.
+ switch resp.StatusCode {
+ case http.StatusOK, http.StatusCreated, http.StatusNoContent, http.StatusAccepted:
+ default:
+ return remoteserrors.NewUnexpectedStatusErr(resp)
+ }
+
+ status, err := pw.tracker.GetStatus(pw.ref)
+ if err != nil {
+ return fmt.Errorf("failed to get status: %w", err)
+ }
+
+ if size > 0 && size != status.Offset {
+ return fmt.Errorf("unexpected size %d, expected %d", status.Offset, size)
+ }
+
+ if expected == "" {
+ expected = status.Expected
+ }
+
+ actual, err := digest.Parse(resp.Header.Get("Docker-Content-Digest"))
+ if err != nil {
+ return fmt.Errorf("invalid content digest in response: %w", err)
+ }
+
+ if actual != expected {
+ return fmt.Errorf("got digest %s, expected %s", actual, expected)
+ }
+
+ status.Committed = true
+ status.UpdatedAt = time.Now()
+ pw.tracker.SetStatus(pw.ref, status)
+
+ return nil
+}
+
+func (pw *pushWriter) Truncate(size int64) error {
+ // TODO: if blob close request and start new request at offset
+ // TODO: always error on manifest
+ return errors.New("cannot truncate remote upload")
+}
+
+func requestWithMountFrom(req *request, mount, from string) *request {
+ creq := *req
+
+ sep := "?"
+ if strings.Contains(creq.path, sep) {
+ sep = "&"
+ }
+
+ creq.path = creq.path + sep + "mount=" + mount + "&from=" + from
+
+ return &creq
+}
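
The upload path produced by requestWithMountFrom is simple enough to sketch on its own; the repository names and digest below are hypothetical, and the helper merely mirrors the string handling above.

package main

import (
	"fmt"
	"strings"
)

// mountQuery mirrors requestWithMountFrom: append the mount/from
// parameters using '?' or '&' depending on whether a query already exists.
func mountQuery(path, mount, from string) string {
	sep := "?"
	if strings.Contains(path, sep) {
		sep = "&"
	}
	return path + sep + "mount=" + mount + "&from=" + from
}

func main() {
	fmt.Println(mountQuery("/v2/foo/bar/blobs/uploads/", "sha256:abcd", "library/alpine"))
	// Output: /v2/foo/bar/blobs/uploads/?mount=sha256:abcd&from=library/alpine
}
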
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/registry.go b/vendor/github.com/containerd/containerd/remotes/docker/registry.go
new file mode 100644
index 000000000..98cafcd06
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/registry.go
@@ -0,0 +1,244 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "errors"
+ "net"
+ "net/http"
+)
+
+// HostCapabilities represent the capabilities of the registry
+// host. This also represents the set of operations the registry
+// host may be trusted to perform.
+//
+// For example, pushing is a capability which should only be
+// performed on an upstream source, not a mirror.
+// Resolving (the process of converting a name into a digest)
+// must be considered a trusted operation and only done by
+// a host which is trusted (or, preferably, by a secure process
+// which can prove the provenance of the mapping). A public
+// mirror should never be trusted to do a resolve action.
+//
+// | Registry Type | Pull | Resolve | Push |
+// |------------------|------|---------|------|
+// | Public Registry | yes | yes | yes |
+// | Private Registry | yes | yes | yes |
+// | Public Mirror | yes | no | no |
+// | Private Mirror | yes | yes | no |
+type HostCapabilities uint8
+
+const (
+ // HostCapabilityPull represents the capability to fetch manifests
+ // and blobs by digest
+ HostCapabilityPull HostCapabilities = 1 << iota
+
+ // HostCapabilityResolve represents the capability to fetch manifests
+ // by name
+ HostCapabilityResolve
+
+ // HostCapabilityPush represents the capability to push blobs and
+ // manifests
+ HostCapabilityPush
+
+ // Reserved for future capabilities (e.g. search, catalog, remove)
+)
+
+// Has checks whether the capabilities list has the provided capability
+func (c HostCapabilities) Has(t HostCapabilities) bool {
+ return c&t == t
+}
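
A minimal sketch of how the capability bitmask composes; the pull-only mirror below is an assumed configuration, not something defined in this file.

package main

import (
	"fmt"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	// A pull-only mirror: pull is allowed, resolve and push are not.
	caps := docker.HostCapabilityPull

	fmt.Println(caps.Has(docker.HostCapabilityPull))    // true
	fmt.Println(caps.Has(docker.HostCapabilityResolve)) // false
	// Has requires every bit of its argument to be set:
	fmt.Println(caps.Has(docker.HostCapabilityPull | docker.HostCapabilityPush)) // false
}
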
+
+// RegistryHost represents a complete configuration for a registry
+// host, representing the capabilities, authorizations, connection
+// configuration, and location.
+type RegistryHost struct {
+ Client *http.Client
+ Authorizer Authorizer
+ Host string
+ Scheme string
+ Path string
+ Capabilities HostCapabilities
+ Header http.Header
+}
+
+func (h RegistryHost) isProxy(refhost string) bool {
+ if refhost != h.Host {
+ if refhost != "docker.io" || h.Host != "registry-1.docker.io" {
+ return true
+ }
+ }
+ return false
+}
+
+// RegistryHosts fetches the registry hosts for a given namespace,
+// provided by the host component of a distribution image reference.
+type RegistryHosts func(string) ([]RegistryHost, error)
+
+// Registries joins multiple registry configuration functions, using the same
+// order as provided within the arguments. When an empty registry configuration
+// is returned with a nil error, the next function will be called.
+// NOTE: This function does not merge configurations; as soon as a non-empty
+// configuration is returned from a configuration function, it is returned
+// to the caller.
+func Registries(registries ...RegistryHosts) RegistryHosts {
+ return func(host string) ([]RegistryHost, error) {
+ for _, registry := range registries {
+ config, err := registry(host)
+ if err != nil {
+ return config, err
+ }
+ if len(config) > 0 {
+ return config, nil
+ }
+ }
+ return nil, nil
+ }
+}
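
As a sketch of composing lookups, a hypothetical per-host configuration function can run first, with the built-in defaults applying only when it returns an empty result and a nil error; perHostConfig below is an assumption for the example.

package main

import "github.com/containerd/containerd/remotes/docker"

func main() {
	// perHostConfig is a stand-in that knows nothing about any host, so it
	// returns an empty configuration and defers to the next function.
	perHostConfig := docker.RegistryHosts(func(host string) ([]docker.RegistryHost, error) {
		return nil, nil
	})

	hosts := docker.Registries(
		perHostConfig,
		docker.ConfigureDefaultRegistries(), // consulted only when the above is empty
	)
	_, _ = hosts("docker.io")
}
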
+
+type registryOpts struct {
+ authorizer Authorizer
+ plainHTTP func(string) (bool, error)
+ host func(string) (string, error)
+ client *http.Client
+}
+
+// RegistryOpt defines a registry default option
+type RegistryOpt func(*registryOpts)
+
+// WithPlainHTTP configures registries to use plaintext http scheme
+// for the provided host match function.
+func WithPlainHTTP(f func(string) (bool, error)) RegistryOpt {
+ return func(opts *registryOpts) {
+ opts.plainHTTP = f
+ }
+}
+
+// WithAuthorizer configures the default authorizer for a registry
+func WithAuthorizer(a Authorizer) RegistryOpt {
+ return func(opts *registryOpts) {
+ opts.authorizer = a
+ }
+}
+
+// WithHostTranslator defines the default translator to use for registry hosts
+func WithHostTranslator(h func(string) (string, error)) RegistryOpt {
+ return func(opts *registryOpts) {
+ opts.host = h
+ }
+}
+
+// WithClient configures the default http client for a registry
+func WithClient(c *http.Client) RegistryOpt {
+ return func(opts *registryOpts) {
+ opts.client = c
+ }
+}
+
+// ConfigureDefaultRegistries is used to create a default configuration for
+// registries. For more advanced configurations or per-domain setups,
+// the RegistryHosts interface should be used directly.
+// NOTE: This function will always return a non-empty value or an error
+func ConfigureDefaultRegistries(ropts ...RegistryOpt) RegistryHosts {
+ var opts registryOpts
+ for _, opt := range ropts {
+ opt(&opts)
+ }
+
+ return func(host string) ([]RegistryHost, error) {
+ config := RegistryHost{
+ Client: opts.client,
+ Authorizer: opts.authorizer,
+ Host: host,
+ Scheme: "https",
+ Path: "/v2",
+ Capabilities: HostCapabilityPull | HostCapabilityResolve | HostCapabilityPush,
+ }
+
+ if config.Client == nil {
+ config.Client = http.DefaultClient
+ }
+
+ if opts.plainHTTP != nil {
+ match, err := opts.plainHTTP(host)
+ if err != nil {
+ return nil, err
+ }
+ if match {
+ config.Scheme = "http"
+ }
+ }
+
+ if opts.host != nil {
+ var err error
+ config.Host, err = opts.host(config.Host)
+ if err != nil {
+ return nil, err
+ }
+ } else if host == "docker.io" {
+ config.Host = "registry-1.docker.io"
+ }
+
+ return []RegistryHost{config}, nil
+ }
+}
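
A minimal usage sketch, assuming a local registry on localhost:5000; the host names are illustrative only.

package main

import (
	"fmt"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	hosts := docker.ConfigureDefaultRegistries(
		docker.WithPlainHTTP(docker.MatchLocalhost), // plain http only for loopback hosts
	)

	cfgs, err := hosts("localhost:5000")
	if err != nil {
		panic(err)
	}
	fmt.Println(cfgs[0].Scheme, cfgs[0].Host) // http localhost:5000

	cfgs, _ = hosts("docker.io")
	fmt.Println(cfgs[0].Scheme, cfgs[0].Host) // https registry-1.docker.io
}
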
+
+// MatchAllHosts is a host match function which is always true.
+func MatchAllHosts(string) (bool, error) {
+ return true, nil
+}
+
+// MatchLocalhost is a host match function which returns true for
+// localhost.
+//
+// Note: this does not handle matching of ip addresses in octal,
+// decimal or hex form.
+func MatchLocalhost(host string) (bool, error) {
+ switch {
+ case host == "::1":
+ return true, nil
+ case host == "[::1]":
+ return true, nil
+ }
+ h, p, err := net.SplitHostPort(host)
+
+ // addrError helps distinguish between errors of the form
+ // "no colon in address" and "too many colons in address".
+ // The former is fine, as the host string need not have a
+ // port. The latter needs to be handled.
+ addrError := &net.AddrError{
+ Err: "missing port in address",
+ Addr: host,
+ }
+ if err != nil {
+ if err.Error() != addrError.Error() {
+ return false, err
+ }
+ // host string without any port specified
+ h = host
+ } else if len(p) == 0 {
+ return false, errors.New("invalid host name format")
+ }
+
+ // use ipv4 dotted decimal for further checking
+ if h == "localhost" {
+ h = "127.0.0.1"
+ }
+ ip := net.ParseIP(h)
+
+ return ip.IsLoopback(), nil
+}
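
A runnable sketch of the expected results for a few representative inputs (assumed behavior of the function above):

package main

import (
	"fmt"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	for _, host := range []string{"localhost:5000", "127.0.0.1", "[::1]:5000", "example.com"} {
		ok, err := docker.MatchLocalhost(host)
		fmt.Println(host, ok, err) // true, true, true, false; all with nil errors
	}
}
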
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/resolver.go b/vendor/github.com/containerd/containerd/remotes/docker/resolver.go
new file mode 100644
index 000000000..4ca2b921e
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/resolver.go
@@ -0,0 +1,800 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "context"
+ "crypto/tls"
+ "errors"
+ "fmt"
+ "io"
+ "net"
+ "net/http"
+ "net/url"
+ "path"
+ "strings"
+
+ "github.com/containerd/log"
+ "github.com/opencontainers/go-digest"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/images"
+ "github.com/containerd/containerd/reference"
+ "github.com/containerd/containerd/remotes"
+ "github.com/containerd/containerd/remotes/docker/schema1" //nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility.
+ remoteerrors "github.com/containerd/containerd/remotes/errors"
+ "github.com/containerd/containerd/tracing"
+ "github.com/containerd/containerd/version"
+)
+
+var (
+ // ErrInvalidAuthorization is used when credentials are passed to a server but
+ // those credentials are rejected.
+ ErrInvalidAuthorization = errors.New("authorization failed")
+
+ // MaxManifestSize represents the largest size accepted from a registry
+ // during resolution. Larger manifests may be accepted using a
+ // resolution method other than the registry.
+ //
+ // NOTE: The maximum number of layers supported by some runtimes is 128,
+ // and individual layers will not contribute more than 256 bytes, making
+ // 32K bytes a reasonable limit for large image manifests. 4M bytes
+ // represents a much larger upper bound for images which may contain
+ // large annotations or be non-images. A proper manifest design puts
+ // large metadata in subobjects, consistent with the intent of the
+ // manifest design.
+ MaxManifestSize int64 = 4 * 1024 * 1024
+)
+
+// Authorizer is used to authorize HTTP requests based on 401 HTTP responses.
+// An Authorizer is responsible for caching tokens or credentials used by
+// requests.
+type Authorizer interface {
+ // Authorize sets the appropriate `Authorization` header on the given
+ // request.
+ //
+ // If no authorization is found for the request, the request remains
+ // unmodified. It may also add an `Authorization` header as
+ // "bearer "
+ // "basic "
+ //
+ // It may return remotes/errors.ErrUnexpectedStatus, which for example,
+ // can be used by the caller to find out the status code returned by the registry.
+ Authorize(context.Context, *http.Request) error
+
+ // AddResponses adds a 401 response for the authorizer to consider when
+ // authorizing requests. The last response should be unauthorized and
+ // the previous requests are used to consider redirects and retries
+ // that may have led to the 401.
+ //
+ // If response is not handled, returns `ErrNotImplemented`
+ AddResponses(context.Context, []*http.Response) error
+}
+
+// ResolverOptions are used to configure a new Docker registry resolver
+type ResolverOptions struct {
+ // Hosts returns registry host configurations for a namespace.
+ Hosts RegistryHosts
+
+ // Headers are the HTTP request header fields sent by the resolver
+ Headers http.Header
+
+ // Tracker is used to track uploads to the registry. This is used
+ // since the registry does not have upload tracking and the existing
+ // mechanism for getting blob upload status is expensive.
+ Tracker StatusTracker
+
+ // Authorizer is used to authorize registry requests
+ //
+ // Deprecated: use Hosts.
+ Authorizer Authorizer
+
+ // Credentials provides username and secret given a host.
+ // If username is empty but a secret is given, that secret
+ // is interpreted as a long lived token.
+ //
+ // Deprecated: use Hosts.
+ Credentials func(string) (string, string, error)
+
+ // Host provides the hostname given a namespace.
+ //
+ // Deprecated: use Hosts.
+ Host func(string) (string, error)
+
+ // PlainHTTP specifies to use plain http and not https
+ //
+ // Deprecated: use Hosts.
+ PlainHTTP bool
+
+ // Client is the http client to use when making registry requests
+ //
+ // Deprecated: use Hosts.
+ Client *http.Client
+}
+
+// DefaultHost is the default host function.
+func DefaultHost(ns string) (string, error) {
+ if ns == "docker.io" {
+ return "registry-1.docker.io", nil
+ }
+ return ns, nil
+}
+
+type dockerResolver struct {
+ hosts RegistryHosts
+ header http.Header
+ resolveHeader http.Header
+ tracker StatusTracker
+}
+
+// NewResolver returns a new resolver to a Docker registry
+func NewResolver(options ResolverOptions) remotes.Resolver {
+ if options.Tracker == nil {
+ options.Tracker = NewInMemoryTracker()
+ }
+
+ if options.Headers == nil {
+ options.Headers = make(http.Header)
+ } else {
+ // make a copy of the headers to avoid race due to concurrent map write
+ options.Headers = options.Headers.Clone()
+ }
+ if _, ok := options.Headers["User-Agent"]; !ok {
+ options.Headers.Set("User-Agent", "containerd/"+version.Version)
+ }
+
+ resolveHeader := http.Header{}
+ if _, ok := options.Headers["Accept"]; !ok {
+ // set headers for all the types we support for resolution.
+ resolveHeader.Set("Accept", strings.Join([]string{
+ images.MediaTypeDockerSchema2Manifest,
+ images.MediaTypeDockerSchema2ManifestList,
+ ocispec.MediaTypeImageManifest,
+ ocispec.MediaTypeImageIndex, "*/*",
+ }, ", "))
+ } else {
+ resolveHeader["Accept"] = options.Headers["Accept"]
+ delete(options.Headers, "Accept")
+ }
+
+ if options.Hosts == nil {
+ opts := []RegistryOpt{}
+ if options.Host != nil {
+ opts = append(opts, WithHostTranslator(options.Host))
+ }
+
+ if options.Authorizer == nil {
+ options.Authorizer = NewDockerAuthorizer(
+ WithAuthClient(options.Client),
+ WithAuthHeader(options.Headers),
+ WithAuthCreds(options.Credentials))
+ }
+ opts = append(opts, WithAuthorizer(options.Authorizer))
+
+ if options.Client != nil {
+ opts = append(opts, WithClient(options.Client))
+ }
+ if options.PlainHTTP {
+ opts = append(opts, WithPlainHTTP(MatchAllHosts))
+ } else {
+ opts = append(opts, WithPlainHTTP(MatchLocalhost))
+ }
+ options.Hosts = ConfigureDefaultRegistries(opts...)
+ }
+ return &dockerResolver{
+ hosts: options.Hosts,
+ header: options.Headers,
+ resolveHeader: resolveHeader,
+ tracker: options.Tracker,
+ }
+}
+
+func getManifestMediaType(resp *http.Response) string {
+ // Strip encoding data (manifests should always be ascii JSON)
+ contentType := resp.Header.Get("Content-Type")
+ if sp := strings.IndexByte(contentType, ';'); sp != -1 {
+ contentType = contentType[0:sp]
+ }
+
+ // As of Apr 30 2019 the registry.access.redhat.com registry does not specify
+ // the content type of any data but uses schema1 manifests.
+ if contentType == "text/plain" {
+ contentType = images.MediaTypeDockerSchema1Manifest
+ }
+ return contentType
+}
+
+type countingReader struct {
+ reader io.Reader
+ bytesRead int64
+}
+
+func (r *countingReader) Read(p []byte) (int, error) {
+ n, err := r.reader.Read(p)
+ r.bytesRead += int64(n)
+ return n, err
+}
+
+var _ remotes.Resolver = &dockerResolver{}
+
+func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocispec.Descriptor, error) {
+ base, err := r.resolveDockerBase(ref)
+ if err != nil {
+ return "", ocispec.Descriptor{}, err
+ }
+ refspec := base.refspec
+ if refspec.Object == "" {
+ return "", ocispec.Descriptor{}, reference.ErrObjectRequired
+ }
+
+ var (
+ firstErr error
+ paths [][]string
+ dgst = refspec.Digest()
+ caps = HostCapabilityPull
+ )
+
+ if dgst != "" {
+ if err := dgst.Validate(); err != nil {
+ // need to fail here, since we can't actually resolve the invalid
+ // digest.
+ return "", ocispec.Descriptor{}, err
+ }
+
+ // turns out, we have a valid digest, make a url.
+ paths = append(paths, []string{"manifests", dgst.String()})
+
+ // fallback to blobs on not found.
+ paths = append(paths, []string{"blobs", dgst.String()})
+ } else {
+ // Add the manifest path for resolving the tag by name
+ paths = append(paths, []string{"manifests", refspec.Object})
+ caps |= HostCapabilityResolve
+ }
+
+ hosts := base.filterHosts(caps)
+ if len(hosts) == 0 {
+ return "", ocispec.Descriptor{}, fmt.Errorf("no resolve hosts: %w", errdefs.ErrNotFound)
+ }
+
+ ctx, err = ContextWithRepositoryScope(ctx, refspec, false)
+ if err != nil {
+ return "", ocispec.Descriptor{}, err
+ }
+
+ for _, u := range paths {
+ for _, host := range hosts {
+ ctx := log.WithLogger(ctx, log.G(ctx).WithField("host", host.Host))
+
+ req := base.request(host, http.MethodHead, u...)
+ if err := req.addNamespace(base.refspec.Hostname()); err != nil {
+ return "", ocispec.Descriptor{}, err
+ }
+
+ for key, value := range r.resolveHeader {
+ req.header[key] = append(req.header[key], value...)
+ }
+
+ log.G(ctx).Debug("resolving")
+ resp, err := req.doWithRetries(ctx, nil)
+ if err != nil {
+ if errors.Is(err, ErrInvalidAuthorization) {
+ err = fmt.Errorf("pull access denied, repository does not exist or may require authorization: %w", err)
+ }
+ // Store the error for referencing later
+ if firstErr == nil {
+ firstErr = err
+ }
+ log.G(ctx).WithError(err).Info("trying next host")
+ continue // try another host
+ }
+ resp.Body.Close() // don't care about body contents.
+
+ if resp.StatusCode > 299 {
+ if resp.StatusCode == http.StatusNotFound {
+ log.G(ctx).Info("trying next host - response was http.StatusNotFound")
+ continue
+ }
+ if resp.StatusCode > 399 {
+ // Set firstErr when encountering the first non-404 status code.
+ if firstErr == nil {
+ firstErr = remoteerrors.NewUnexpectedStatusErr(resp)
+ }
+ continue // try another host
+ }
+ return "", ocispec.Descriptor{}, remoteerrors.NewUnexpectedStatusErr(resp)
+ }
+ size := resp.ContentLength
+ contentType := getManifestMediaType(resp)
+
+ // If no digest was provided, then only a resolve-trusted
+ // registry was contacted; in this case use the digest header
+ // (or the content from a GET).
+ if dgst == "" {
+ // This is the only point at which we trust the registry. We use the
+ // content headers to assemble a descriptor for the name. Once this becomes
+ // more robust, this information will mostly come from a secure trust store.
+ dgstHeader := digest.Digest(resp.Header.Get("Docker-Content-Digest"))
+
+ if dgstHeader != "" && size != -1 {
+ if err := dgstHeader.Validate(); err != nil {
+ return "", ocispec.Descriptor{}, fmt.Errorf("%q in header not a valid digest: %w", dgstHeader, err)
+ }
+ dgst = dgstHeader
+ }
+ }
+ if dgst == "" || size == -1 {
+ log.G(ctx).Debug("no Docker-Content-Digest header, fetching manifest instead")
+
+ req = base.request(host, http.MethodGet, u...)
+ if err := req.addNamespace(base.refspec.Hostname()); err != nil {
+ return "", ocispec.Descriptor{}, err
+ }
+
+ for key, value := range r.resolveHeader {
+ req.header[key] = append(req.header[key], value...)
+ }
+
+ resp, err := req.doWithRetries(ctx, nil)
+ if err != nil {
+ return "", ocispec.Descriptor{}, err
+ }
+
+ bodyReader := countingReader{reader: resp.Body}
+
+ contentType = getManifestMediaType(resp)
+ err = func() error {
+ defer resp.Body.Close()
+ if dgst != "" {
+ _, err = io.Copy(io.Discard, &bodyReader)
+ return err
+ }
+
+ if contentType == images.MediaTypeDockerSchema1Manifest {
+ b, err := schema1.ReadStripSignature(&bodyReader)
+ if err != nil {
+ return err
+ }
+
+ dgst = digest.FromBytes(b)
+ return nil
+ }
+
+ dgst, err = digest.FromReader(&bodyReader)
+ return err
+ }()
+ if err != nil {
+ return "", ocispec.Descriptor{}, err
+ }
+ size = bodyReader.bytesRead
+ }
+ // Prevent resolving to excessively large manifests
+ if size > MaxManifestSize {
+ if firstErr == nil {
+ firstErr = fmt.Errorf("rejecting %d byte manifest for %s: %w", size, ref, errdefs.ErrNotFound)
+ }
+ continue
+ }
+
+ desc := ocispec.Descriptor{
+ Digest: dgst,
+ MediaType: contentType,
+ Size: size,
+ }
+
+ log.G(ctx).WithField("desc.digest", desc.Digest).Debug("resolved")
+ return ref, desc, nil
+ }
+ }
+
+ // If the above loop terminates without returning, then there was an error.
+ // "firstErr" contains the first non-404 error. That is, "firstErr == nil"
+ // means that either no registries were given or each registry returned 404.
+
+ if firstErr == nil {
+ firstErr = fmt.Errorf("%s: %w", ref, errdefs.ErrNotFound)
+ }
+
+ return "", ocispec.Descriptor{}, firstErr
+}
+
+func (r *dockerResolver) Fetcher(ctx context.Context, ref string) (remotes.Fetcher, error) {
+ base, err := r.resolveDockerBase(ref)
+ if err != nil {
+ return nil, err
+ }
+
+ return dockerFetcher{
+ dockerBase: base,
+ }, nil
+}
+
+func (r *dockerResolver) Pusher(ctx context.Context, ref string) (remotes.Pusher, error) {
+ base, err := r.resolveDockerBase(ref)
+ if err != nil {
+ return nil, err
+ }
+
+ return dockerPusher{
+ dockerBase: base,
+ object: base.refspec.Object,
+ tracker: r.tracker,
+ }, nil
+}
+
+func (r *dockerResolver) resolveDockerBase(ref string) (*dockerBase, error) {
+ refspec, err := reference.Parse(ref)
+ if err != nil {
+ return nil, err
+ }
+
+ return r.base(refspec)
+}
+
+type dockerBase struct {
+ refspec reference.Spec
+ repository string
+ hosts []RegistryHost
+ header http.Header
+}
+
+func (r *dockerResolver) base(refspec reference.Spec) (*dockerBase, error) {
+ host := refspec.Hostname()
+ hosts, err := r.hosts(host)
+ if err != nil {
+ return nil, err
+ }
+ return &dockerBase{
+ refspec: refspec,
+ repository: strings.TrimPrefix(refspec.Locator, host+"/"),
+ hosts: hosts,
+ header: r.header,
+ }, nil
+}
+
+func (r *dockerBase) filterHosts(caps HostCapabilities) (hosts []RegistryHost) {
+ for _, host := range r.hosts {
+ if host.Capabilities.Has(caps) {
+ hosts = append(hosts, host)
+ }
+ }
+ return
+}
+
+func (r *dockerBase) request(host RegistryHost, method string, ps ...string) *request {
+ header := r.header.Clone()
+ if header == nil {
+ header = http.Header{}
+ }
+
+ for key, value := range host.Header {
+ header[key] = append(header[key], value...)
+ }
+ parts := append([]string{"/", host.Path, r.repository}, ps...)
+ p := path.Join(parts...)
+ // Join strips trailing slash, re-add ending "/" if included
+ if len(parts) > 0 && strings.HasSuffix(parts[len(parts)-1], "/") {
+ p = p + "/"
+ }
+ return &request{
+ method: method,
+ path: p,
+ header: header,
+ host: host,
+ }
+}
+
+func (r *request) authorize(ctx context.Context, req *http.Request) error {
+ // Check whether the host has an authorizer configured
+ if r.host.Authorizer != nil {
+ if err := r.host.Authorizer.Authorize(ctx, req); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func (r *request) addNamespace(ns string) (err error) {
+ if !r.host.isProxy(ns) {
+ return nil
+ }
+ var q url.Values
+ // Parse query
+ if i := strings.IndexByte(r.path, '?'); i > 0 {
+ // Parse the existing query before truncating the path, otherwise
+ // the original parameters would be lost.
+ q, err = url.ParseQuery(r.path[i+1:])
+ if err != nil {
+ return
+ }
+ r.path = r.path[:i+1]
+ } else {
+ r.path = r.path + "?"
+ q = url.Values{}
+ }
+ q.Add("ns", ns)
+
+ r.path = r.path + q.Encode()
+
+ return
+}
+
+type request struct {
+ method string
+ path string
+ header http.Header
+ host RegistryHost
+ body func() (io.ReadCloser, error)
+ size int64
+}
+
+func (r *request) do(ctx context.Context) (*http.Response, error) {
+ u := r.host.Scheme + "://" + r.host.Host + r.path
+ req, err := http.NewRequestWithContext(ctx, r.method, u, nil)
+ if err != nil {
+ return nil, err
+ }
+ if r.header == nil {
+ req.Header = http.Header{}
+ } else {
+ req.Header = r.header.Clone() // headers need to be copied to avoid concurrent map access
+ }
+ if r.body != nil {
+ body, err := r.body()
+ if err != nil {
+ return nil, err
+ }
+ req.Body = body
+ req.GetBody = r.body
+ if r.size > 0 {
+ req.ContentLength = r.size
+ }
+ }
+
+ ctx = log.WithLogger(ctx, log.G(ctx).WithField("url", u))
+ log.G(ctx).WithFields(requestFields(req)).Debug("do request")
+ if err := r.authorize(ctx, req); err != nil {
+ return nil, fmt.Errorf("failed to authorize: %w", err)
+ }
+
+ client := &http.Client{}
+ if r.host.Client != nil {
+ *client = *r.host.Client
+ }
+ if client.CheckRedirect == nil {
+ client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
+ if len(via) >= 10 {
+ return errors.New("stopped after 10 redirects")
+ }
+ if err := r.authorize(ctx, req); err != nil {
+ return fmt.Errorf("failed to authorize redirect: %w", err)
+ }
+ return nil
+ }
+ }
+
+ tracing.UpdateHTTPClient(client, tracing.Name("remotes.docker.resolver", "HTTPRequest"))
+
+ resp, err := client.Do(req)
+ if err != nil {
+ return nil, fmt.Errorf("failed to do request: %w", err)
+ }
+ log.G(ctx).WithFields(responseFields(resp)).Debug("fetch response received")
+ return resp, nil
+}
+
+func (r *request) doWithRetries(ctx context.Context, responses []*http.Response) (*http.Response, error) {
+ resp, err := r.do(ctx)
+ if err != nil {
+ return nil, err
+ }
+
+ responses = append(responses, resp)
+ retry, err := r.retryRequest(ctx, responses)
+ if err != nil {
+ resp.Body.Close()
+ return nil, err
+ }
+ if retry {
+ resp.Body.Close()
+ return r.doWithRetries(ctx, responses)
+ }
+ return resp, err
+}
+
+func (r *request) retryRequest(ctx context.Context, responses []*http.Response) (bool, error) {
+ if len(responses) > 5 {
+ return false, nil
+ }
+ last := responses[len(responses)-1]
+ switch last.StatusCode {
+ case http.StatusUnauthorized:
+ log.G(ctx).WithField("header", last.Header.Get("WWW-Authenticate")).Debug("Unauthorized")
+ if r.host.Authorizer != nil {
+ if err := r.host.Authorizer.AddResponses(ctx, responses); err == nil {
+ return true, nil
+ } else if !errdefs.IsNotImplemented(err) {
+ return false, err
+ }
+ }
+
+ return false, nil
+ case http.StatusMethodNotAllowed:
+ // Support registries which have not properly implemented the HEAD
+ // method for the manifests endpoint
+ if r.method == http.MethodHead && strings.Contains(r.path, "/manifests/") {
+ r.method = http.MethodGet
+ return true, nil
+ }
+ case http.StatusRequestTimeout, http.StatusTooManyRequests:
+ return true, nil
+ }
+
+ // TODO: Handle 50x errors accounting for attempt history
+ return false, nil
+}
+
+func (r *request) String() string {
+ return r.host.Scheme + "://" + r.host.Host + r.path
+}
+
+func requestFields(req *http.Request) log.Fields {
+ fields := map[string]interface{}{
+ "request.method": req.Method,
+ }
+ for k, vals := range req.Header {
+ k = strings.ToLower(k)
+ if k == "authorization" {
+ continue
+ }
+ for i, v := range vals {
+ field := "request.header." + k
+ if i > 0 {
+ field = fmt.Sprintf("%s.%d", field, i)
+ }
+ fields[field] = v
+ }
+ }
+
+ return fields
+}
+
+func responseFields(resp *http.Response) log.Fields {
+ fields := map[string]interface{}{
+ "response.status": resp.Status,
+ }
+ for k, vals := range resp.Header {
+ k = strings.ToLower(k)
+ for i, v := range vals {
+ field := "response.header." + k
+ if i > 0 {
+ field = fmt.Sprintf("%s.%d", field, i)
+ }
+ fields[field] = v
+ }
+ }
+
+ return fields
+}
+
+// IsLocalhost checks if the registry host is local.
+func IsLocalhost(host string) bool {
+ if h, _, err := net.SplitHostPort(host); err == nil {
+ host = h
+ }
+
+ if host == "localhost" {
+ return true
+ }
+
+ ip := net.ParseIP(host)
+ return ip.IsLoopback()
+}
+
+// NewHTTPFallback returns http.RoundTripper which allows fallback from https to
+// http for registry endpoints with configurations for both http and TLS,
+// such as defaulted localhost endpoints.
+func NewHTTPFallback(transport http.RoundTripper) http.RoundTripper {
+ return &httpFallback{
+ super: transport,
+ }
+}
+
+type httpFallback struct {
+ super http.RoundTripper
+ host string
+}
+
+func (f *httpFallback) RoundTrip(r *http.Request) (*http.Response, error) {
+ // only fall back if the same host has previously fallen back
+ if f.host != r.URL.Host {
+ resp, err := f.super.RoundTrip(r)
+ if !isTLSError(err) {
+ return resp, err
+ }
+ }
+
+ plainHTTPUrl := *r.URL
+ plainHTTPUrl.Scheme = "http"
+
+ plainHTTPRequest := *r
+ plainHTTPRequest.URL = &plainHTTPUrl
+
+ if f.host != r.URL.Host {
+ f.host = r.URL.Host
+
+ // update body on the second attempt
+ if r.Body != nil && r.GetBody != nil {
+ body, err := r.GetBody()
+ if err != nil {
+ return nil, err
+ }
+ plainHTTPRequest.Body = body
+ }
+ }
+
+ return f.super.RoundTrip(&plainHTTPRequest)
+}
+
+func isTLSError(err error) bool {
+ if err == nil {
+ return false
+ }
+ var tlsErr tls.RecordHeaderError
+ if errors.As(err, &tlsErr) && string(tlsErr.RecordHeader[:]) == "HTTP/" {
+ return true
+ }
+ if strings.Contains(err.Error(), "TLS handshake timeout") {
+ return true
+ }
+
+ return false
+}
+
+// HTTPFallback is an http.RoundTripper which allows fallback from https to http
+// for registry endpoints with configurations for both http and TLS, such as
+// defaulted localhost endpoints.
+//
+// Deprecated: Use NewHTTPFallback instead.
+type HTTPFallback struct {
+ http.RoundTripper
+}
+
+func (f HTTPFallback) RoundTrip(r *http.Request) (*http.Response, error) {
+ resp, err := f.RoundTripper.RoundTrip(r)
+ var tlsErr tls.RecordHeaderError
+ if errors.As(err, &tlsErr) && string(tlsErr.RecordHeader[:]) == "HTTP/" {
+ // server gave HTTP response to HTTPS client
+ plainHTTPUrl := *r.URL
+ plainHTTPUrl.Scheme = "http"
+
+ plainHTTPRequest := *r
+ plainHTTPRequest.URL = &plainHTTPUrl
+
+ if r.Body != nil && r.GetBody != nil {
+ body, err := r.GetBody()
+ if err != nil {
+ return nil, err
+ }
+ plainHTTPRequest.Body = body
+ }
+
+ return f.RoundTripper.RoundTrip(&plainHTTPRequest)
+ }
+
+ return resp, err
+}
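
Putting the resolver together, a minimal pull-side sketch might look like this; the image reference is an arbitrary public example, and error handling is abbreviated to panics.

package main

import (
	"context"
	"fmt"
	"io"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	ctx := context.Background()
	resolver := docker.NewResolver(docker.ResolverOptions{
		Hosts: docker.ConfigureDefaultRegistries(),
	})

	// Resolve the name to a manifest descriptor.
	name, desc, err := resolver.Resolve(ctx, "docker.io/library/alpine:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println(name, desc.Digest, desc.MediaType)

	// Fetch the manifest bytes through the same host configuration.
	fetcher, err := resolver.Fetcher(ctx, name)
	if err != nil {
		panic(err)
	}
	rc, err := fetcher.Fetch(ctx, desc)
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	_, _ = io.Copy(io.Discard, rc)
}
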
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/schema1/converter.go b/vendor/github.com/containerd/containerd/remotes/docker/schema1/converter.go
new file mode 100644
index 000000000..b38c73855
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/schema1/converter.go
@@ -0,0 +1,609 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+// Package schema1 provides a converter to fetch an image formatted in Docker Image Manifest v2, Schema 1.
+//
+// Deprecated: use images formatted in Docker Image Manifest v2, Schema 2, or OCI Image Spec v1.
+package schema1
+
+import (
+ "bytes"
+ "context"
+ "encoding/base64"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "io"
+ "strconv"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/containerd/log"
+ digest "github.com/opencontainers/go-digest"
+ specs "github.com/opencontainers/image-spec/specs-go"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+ "golang.org/x/sync/errgroup"
+
+ "github.com/containerd/containerd/archive/compression"
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/images"
+ "github.com/containerd/containerd/labels"
+ "github.com/containerd/containerd/remotes"
+)
+
+const (
+ manifestSizeLimit = 8e6 // 8MB
+ labelDockerSchema1EmptyLayer = "containerd.io/docker.schema1.empty-layer"
+)
+
+type blobState struct {
+ diffID digest.Digest
+ empty bool
+}
+
+// Converter converts schema1 manifests to schema2 on fetch
+type Converter struct {
+ contentStore content.Store
+ fetcher remotes.Fetcher
+
+ pulledManifest *manifest
+
+ mu sync.Mutex
+ blobMap map[digest.Digest]blobState
+ layerBlobs map[digest.Digest]ocispec.Descriptor
+}
+
+// NewConverter returns a new converter
+func NewConverter(contentStore content.Store, fetcher remotes.Fetcher) *Converter {
+ return &Converter{
+ contentStore: contentStore,
+ fetcher: fetcher,
+ blobMap: map[digest.Digest]blobState{},
+ layerBlobs: map[digest.Digest]ocispec.Descriptor{},
+ }
+}
+
+// Handle fetching descriptors for a docker media type
+func (c *Converter) Handle(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ switch desc.MediaType {
+ case images.MediaTypeDockerSchema1Manifest:
+ if err := c.fetchManifest(ctx, desc); err != nil {
+ return nil, err
+ }
+
+ m := c.pulledManifest
+ if len(m.FSLayers) != len(m.History) {
+ return nil, errors.New("invalid schema 1 manifest, history and layer mismatch")
+ }
+ descs := make([]ocispec.Descriptor, 0, len(c.pulledManifest.FSLayers))
+
+ for i := range m.FSLayers {
+ if _, ok := c.blobMap[c.pulledManifest.FSLayers[i].BlobSum]; !ok {
+ empty, err := isEmptyLayer([]byte(m.History[i].V1Compatibility))
+ if err != nil {
+ return nil, err
+ }
+
+ // Do not attempt to download a known empty blob
+ if !empty {
+ descs = append([]ocispec.Descriptor{
+ {
+ MediaType: images.MediaTypeDockerSchema2LayerGzip,
+ Digest: c.pulledManifest.FSLayers[i].BlobSum,
+ Size: -1,
+ },
+ }, descs...)
+ }
+ c.blobMap[c.pulledManifest.FSLayers[i].BlobSum] = blobState{
+ empty: empty,
+ }
+ }
+ }
+ return descs, nil
+ case images.MediaTypeDockerSchema2LayerGzip:
+ if c.pulledManifest == nil {
+ return nil, errors.New("manifest required for schema 1 blob pull")
+ }
+ return nil, c.fetchBlob(ctx, desc)
+ default:
+ return nil, fmt.Errorf("%v not support for schema 1 manifests", desc.MediaType)
+ }
+}
+
+// ConvertOptions provides options on converting a docker schema1 manifest.
+type ConvertOptions struct {
+ // ManifestMediaType specifies the media type of the manifest OCI descriptor.
+ ManifestMediaType string
+
+ // ConfigMediaType specifies the media type of the manifest config OCI
+ // descriptor.
+ ConfigMediaType string
+}
+
+// ConvertOpt allows configuring a convert operation.
+type ConvertOpt func(context.Context, *ConvertOptions) error
+
+// UseDockerSchema2 is used to indicate that a schema1 manifest should be
+// converted into the media types for a docker schema2 manifest.
+func UseDockerSchema2() ConvertOpt {
+ return func(ctx context.Context, o *ConvertOptions) error {
+ o.ManifestMediaType = images.MediaTypeDockerSchema2Manifest
+ o.ConfigMediaType = images.MediaTypeDockerSchema2Config
+ return nil
+ }
+}
+
+// Convert a docker manifest to an OCI descriptor
+func (c *Converter) Convert(ctx context.Context, opts ...ConvertOpt) (ocispec.Descriptor, error) {
+ co := ConvertOptions{
+ ManifestMediaType: ocispec.MediaTypeImageManifest,
+ ConfigMediaType: ocispec.MediaTypeImageConfig,
+ }
+ for _, opt := range opts {
+ if err := opt(ctx, &co); err != nil {
+ return ocispec.Descriptor{}, err
+ }
+ }
+
+ history, diffIDs, err := c.schema1ManifestHistory()
+ if err != nil {
+ return ocispec.Descriptor{}, fmt.Errorf("schema 1 conversion failed: %w", err)
+ }
+
+ var img ocispec.Image
+ if err := json.Unmarshal([]byte(c.pulledManifest.History[0].V1Compatibility), &img); err != nil {
+ return ocispec.Descriptor{}, fmt.Errorf("failed to unmarshal image from schema 1 history: %w", err)
+ }
+
+ img.History = history
+ img.RootFS = ocispec.RootFS{
+ Type: "layers",
+ DiffIDs: diffIDs,
+ }
+
+ b, err := json.MarshalIndent(img, "", " ")
+ if err != nil {
+ return ocispec.Descriptor{}, fmt.Errorf("failed to marshal image: %w", err)
+ }
+
+ config := ocispec.Descriptor{
+ MediaType: co.ConfigMediaType,
+ Digest: digest.Canonical.FromBytes(b),
+ Size: int64(len(b)),
+ }
+
+ layers := make([]ocispec.Descriptor, len(diffIDs))
+ for i, diffID := range diffIDs {
+ layers[i] = c.layerBlobs[diffID]
+ }
+
+ manifest := ocispec.Manifest{
+ Versioned: specs.Versioned{
+ SchemaVersion: 2,
+ },
+ Config: config,
+ Layers: layers,
+ }
+
+ mb, err := json.MarshalIndent(manifest, "", " ")
+ if err != nil {
+ return ocispec.Descriptor{}, fmt.Errorf("failed to marshal image: %w", err)
+ }
+
+ desc := ocispec.Descriptor{
+ MediaType: co.ManifestMediaType,
+ Digest: digest.Canonical.FromBytes(mb),
+ Size: int64(len(mb)),
+ }
+
+ labels := map[string]string{}
+ labels["containerd.io/gc.ref.content.0"] = manifest.Config.Digest.String()
+ for i, ch := range manifest.Layers {
+ labels[fmt.Sprintf("containerd.io/gc.ref.content.%d", i+1)] = ch.Digest.String()
+ }
+
+ ref := remotes.MakeRefKey(ctx, desc)
+ if err := content.WriteBlob(ctx, c.contentStore, ref, bytes.NewReader(mb), desc, content.WithLabels(labels)); err != nil {
+ return ocispec.Descriptor{}, fmt.Errorf("failed to write image manifest: %w", err)
+ }
+
+ ref = remotes.MakeRefKey(ctx, config)
+ if err := content.WriteBlob(ctx, c.contentStore, ref, bytes.NewReader(b), config); err != nil {
+ return ocispec.Descriptor{}, fmt.Errorf("failed to write image config: %w", err)
+ }
+
+ return desc, nil
+}
+
+// ReadStripSignature reads in a schema1 manifest and returns a byte array
+// with the "signatures" field stripped
+func ReadStripSignature(schema1Blob io.Reader) ([]byte, error) {
+ b, err := io.ReadAll(io.LimitReader(schema1Blob, manifestSizeLimit)) // limit to 8MB
+ if err != nil {
+ return nil, err
+ }
+
+ return stripSignature(b)
+}
+
+func (c *Converter) fetchManifest(ctx context.Context, desc ocispec.Descriptor) error {
+ log.G(ctx).Debug("fetch schema 1")
+
+ rc, err := c.fetcher.Fetch(ctx, desc)
+ if err != nil {
+ return err
+ }
+
+ b, err := ReadStripSignature(rc)
+ rc.Close()
+ if err != nil {
+ return err
+ }
+
+ var m manifest
+ if err := json.Unmarshal(b, &m); err != nil {
+ return err
+ }
+ if len(m.Manifests) != 0 || len(m.Layers) != 0 {
+ return errors.New("converter: expected schema1 document but found extra keys")
+ }
+ c.pulledManifest = &m
+
+ return nil
+}
+
+func (c *Converter) fetchBlob(ctx context.Context, desc ocispec.Descriptor) error {
+ log.G(ctx).Debug("fetch blob")
+
+ var (
+ ref = remotes.MakeRefKey(ctx, desc)
+ calc = newBlobStateCalculator()
+ compressMethod = compression.Gzip
+ )
+
+ // size may be unknown, set to zero for content ingest
+ ingestDesc := desc
+ if ingestDesc.Size == -1 {
+ ingestDesc.Size = 0
+ }
+
+ cw, err := content.OpenWriter(ctx, c.contentStore, content.WithRef(ref), content.WithDescriptor(ingestDesc))
+ if err != nil {
+ if !errdefs.IsAlreadyExists(err) {
+ return err
+ }
+
+ reuse, err := c.reuseLabelBlobState(ctx, desc)
+ if err != nil {
+ return err
+ }
+
+ if reuse {
+ return nil
+ }
+
+ ra, err := c.contentStore.ReaderAt(ctx, desc)
+ if err != nil {
+ return err
+ }
+ defer ra.Close()
+
+ r, err := compression.DecompressStream(content.NewReader(ra))
+ if err != nil {
+ return err
+ }
+
+ compressMethod = r.GetCompression()
+ _, err = io.Copy(calc, r)
+ r.Close()
+ if err != nil {
+ return err
+ }
+ } else {
+ defer cw.Close()
+
+ rc, err := c.fetcher.Fetch(ctx, desc)
+ if err != nil {
+ return err
+ }
+ defer rc.Close()
+
+ eg, _ := errgroup.WithContext(ctx)
+ pr, pw := io.Pipe()
+
+ eg.Go(func() error {
+ r, err := compression.DecompressStream(pr)
+ if err != nil {
+ return err
+ }
+
+ compressMethod = r.GetCompression()
+ _, err = io.Copy(calc, r)
+ r.Close()
+ pr.CloseWithError(err)
+ return err
+ })
+
+ eg.Go(func() error {
+ defer pw.Close()
+
+ return content.Copy(ctx, cw, io.TeeReader(rc, pw), ingestDesc.Size, ingestDesc.Digest)
+ })
+
+ if err := eg.Wait(); err != nil {
+ return err
+ }
+ }
+
+ if desc.Size == -1 {
+ info, err := c.contentStore.Info(ctx, desc.Digest)
+ if err != nil {
+ return fmt.Errorf("failed to get blob info: %w", err)
+ }
+ desc.Size = info.Size
+ }
+
+ if compressMethod == compression.Uncompressed {
+ log.G(ctx).WithField("id", desc.Digest).Debugf("changed media type for uncompressed schema1 layer blob")
+ desc.MediaType = images.MediaTypeDockerSchema2Layer
+ }
+
+ state := calc.State()
+
+ cinfo := content.Info{
+ Digest: desc.Digest,
+ Labels: map[string]string{
+ labels.LabelUncompressed: state.diffID.String(),
+ labelDockerSchema1EmptyLayer: strconv.FormatBool(state.empty),
+ },
+ }
+
+ if _, err := c.contentStore.Update(ctx, cinfo, "labels."+labels.LabelUncompressed, fmt.Sprintf("labels.%s", labelDockerSchema1EmptyLayer)); err != nil {
+ return fmt.Errorf("failed to update uncompressed label: %w", err)
+ }
+
+ c.mu.Lock()
+ c.blobMap[desc.Digest] = state
+ c.layerBlobs[state.diffID] = desc
+ c.mu.Unlock()
+
+ return nil
+}
+
+func (c *Converter) reuseLabelBlobState(ctx context.Context, desc ocispec.Descriptor) (bool, error) {
+ cinfo, err := c.contentStore.Info(ctx, desc.Digest)
+ if err != nil {
+ return false, fmt.Errorf("failed to get blob info: %w", err)
+ }
+ desc.Size = cinfo.Size
+
+ diffID, ok := cinfo.Labels[labels.LabelUncompressed]
+ if !ok {
+ return false, nil
+ }
+
+ emptyVal, ok := cinfo.Labels[labelDockerSchema1EmptyLayer]
+ if !ok {
+ return false, nil
+ }
+
+ isEmpty, err := strconv.ParseBool(emptyVal)
+ if err != nil {
+ log.G(ctx).WithField("id", desc.Digest).Warnf("failed to parse bool from label %s: %v", labelDockerSchema1EmptyLayer, isEmpty)
+ return false, nil
+ }
+
+ bState := blobState{empty: isEmpty}
+
+ if bState.diffID, err = digest.Parse(diffID); err != nil {
+ log.G(ctx).WithField("id", desc.Digest).Warnf("failed to parse digest from label %s: %v", labels.LabelUncompressed, diffID)
+ return false, nil
+ }
+
+ // NOTE: there is no need to read the header to get the compression
+ // method, because only two methods are possible here: if the diffID
+ // equals the digest the blob is uncompressed, otherwise it is gzip.
+ if bState.diffID == desc.Digest {
+ desc.MediaType = images.MediaTypeDockerSchema2Layer
+ } else {
+ desc.MediaType = images.MediaTypeDockerSchema2LayerGzip
+ }
+
+ c.mu.Lock()
+ c.blobMap[desc.Digest] = bState
+ c.layerBlobs[bState.diffID] = desc
+ c.mu.Unlock()
+ return true, nil
+}
+
+func (c *Converter) schema1ManifestHistory() ([]ocispec.History, []digest.Digest, error) {
+ if c.pulledManifest == nil {
+ return nil, nil, errors.New("missing schema 1 manifest for conversion")
+ }
+ m := *c.pulledManifest
+
+ if len(m.History) == 0 {
+ return nil, nil, errors.New("no history")
+ }
+
+ history := make([]ocispec.History, len(m.History))
+ diffIDs := []digest.Digest{}
+ for i := range m.History {
+ var h v1History
+ if err := json.Unmarshal([]byte(m.History[i].V1Compatibility), &h); err != nil {
+ return nil, nil, fmt.Errorf("failed to unmarshal history: %w", err)
+ }
+
+ blobSum := m.FSLayers[i].BlobSum
+
+ state := c.blobMap[blobSum]
+
+ history[len(history)-i-1] = ocispec.History{
+ Author: h.Author,
+ Comment: h.Comment,
+ Created: &h.Created,
+ CreatedBy: strings.Join(h.ContainerConfig.Cmd, " "),
+ EmptyLayer: state.empty,
+ }
+
+ if !state.empty {
+ diffIDs = append([]digest.Digest{state.diffID}, diffIDs...)
+ }
+ }
+
+ return history, diffIDs, nil
+}
+
+type fsLayer struct {
+ BlobSum digest.Digest `json:"blobSum"`
+}
+
+type history struct {
+ V1Compatibility string `json:"v1Compatibility"`
+}
+
+type manifest struct {
+ FSLayers []fsLayer `json:"fsLayers"`
+ History []history `json:"history"`
+ Layers json.RawMessage `json:"layers,omitempty"` // OCI manifest
+ Manifests json.RawMessage `json:"manifests,omitempty"` // OCI index
+}
+
+type v1History struct {
+ Author string `json:"author,omitempty"`
+ Created time.Time `json:"created"`
+ Comment string `json:"comment,omitempty"`
+ ThrowAway *bool `json:"throwaway,omitempty"`
+ Size *int `json:"Size,omitempty"` // used before ThrowAway field
+ ContainerConfig struct {
+ Cmd []string `json:"Cmd,omitempty"`
+ } `json:"container_config,omitempty"`
+}
+
+// isEmptyLayer returns whether the v1 compatibility history describes an
+// empty layer. A return value of true indicates the layer is empty;
+// however, false does not guarantee the layer is non-empty.
+func isEmptyLayer(compatHistory []byte) (bool, error) {
+ var h v1History
+ if err := json.Unmarshal(compatHistory, &h); err != nil {
+ return false, err
+ }
+
+ if h.ThrowAway != nil {
+ return *h.ThrowAway, nil
+ }
+ if h.Size != nil {
+ return *h.Size == 0, nil
+ }
+
+ // If no `Size` or `throwaway` field is given, then
+ // it cannot be determined from the history whether
+ // the layer is empty, so return false
+}
+
+type signature struct {
+ Signatures []jsParsedSignature `json:"signatures"`
+}
+
+type jsParsedSignature struct {
+ Protected string `json:"protected"`
+}
+
+type protectedBlock struct {
+ Length int `json:"formatLength"`
+ Tail string `json:"formatTail"`
+}
+
+// joseBase64UrlDecode decodes the given string using the standard base64 url
+// decoder but first adds the appropriate number of trailing '=' characters in
+// accordance with the jose specification.
+// http://tools.ietf.org/html/draft-ietf-jose-json-web-signature-31#section-2
+func joseBase64UrlDecode(s string) ([]byte, error) {
+ switch len(s) % 4 {
+ case 0:
+ case 2:
+ s += "=="
+ case 3:
+ s += "="
+ default:
+ return nil, errors.New("illegal base64url string")
+ }
+ return base64.URLEncoding.DecodeString(s)
+}
+
+func stripSignature(b []byte) ([]byte, error) {
+ var sig signature
+ if err := json.Unmarshal(b, &sig); err != nil {
+ return nil, err
+ }
+ if len(sig.Signatures) == 0 {
+ return nil, errors.New("no signatures")
+ }
+ pb, err := joseBase64UrlDecode(sig.Signatures[0].Protected)
+ if err != nil {
+ return nil, fmt.Errorf("could not decode %s: %w", sig.Signatures[0].Protected, err)
+ }
+
+ var protected protectedBlock
+ if err := json.Unmarshal(pb, &protected); err != nil {
+ return nil, err
+ }
+
+ if protected.Length > len(b) {
+ return nil, errors.New("invalid protected length block")
+ }
+
+ tail, err := joseBase64UrlDecode(protected.Tail)
+ if err != nil {
+ return nil, fmt.Errorf("invalid tail base 64 value: %w", err)
+ }
+
+ return append(b[:protected.Length], tail...), nil
+}
+
+type blobStateCalculator struct {
+ empty bool
+ digester digest.Digester
+}
+
+func newBlobStateCalculator() *blobStateCalculator {
+ return &blobStateCalculator{
+ empty: true,
+ digester: digest.Canonical.Digester(),
+ }
+}
+
+func (c *blobStateCalculator) Write(p []byte) (int, error) {
+ if c.empty {
+ for _, b := range p {
+ if b != 0x00 {
+ c.empty = false
+ break
+ }
+ }
+ }
+ return c.digester.Hash().Write(p)
+}
+
+func (c *blobStateCalculator) State() blobState {
+ return blobState{
+ empty: c.empty,
+ diffID: c.digester.Digest(),
+ }
+}
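
A sketch of driving the converter end to end, assuming the caller already has a content.Store, a remotes.Fetcher and a schema1 manifest descriptor (all three are assumptions here): images.Dispatch walks the manifest through Handle, and Convert then writes the schema2 manifest and config.

package schema1convert

import (
	"context"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/images"
	"github.com/containerd/containerd/remotes"
	"github.com/containerd/containerd/remotes/docker/schema1"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// convertSchema1 is a sketch, not the canonical flow; store, fetcher and
// schema1Desc are supplied by the caller.
func convertSchema1(ctx context.Context, store content.Store, fetcher remotes.Fetcher, schema1Desc ocispec.Descriptor) (ocispec.Descriptor, error) {
	c := schema1.NewConverter(store, fetcher)

	// Handle fetches the schema1 manifest first, then its layer blobs.
	if err := images.Dispatch(ctx, c, nil, schema1Desc); err != nil {
		return ocispec.Descriptor{}, err
	}

	// Emit the converted manifest and config into the content store.
	return c.Convert(ctx, schema1.UseDockerSchema2())
}
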
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/scope.go b/vendor/github.com/containerd/containerd/remotes/docker/scope.go
new file mode 100644
index 000000000..95b4810ab
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/scope.go
@@ -0,0 +1,101 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "context"
+ "fmt"
+ "net/url"
+ "sort"
+ "strings"
+
+ "github.com/containerd/containerd/reference"
+)
+
+// RepositoryScope returns a repository scope string such as "repository:foo/bar:pull"
+// for "host/foo/bar:baz".
+// When push is true, both pull and push are added to the scope.
+func RepositoryScope(refspec reference.Spec, push bool) (string, error) {
+ u, err := url.Parse("dummy://" + refspec.Locator)
+ if err != nil {
+ return "", err
+ }
+ s := "repository:" + strings.TrimPrefix(u.Path, "/") + ":pull"
+ if push {
+ s += ",push"
+ }
+ return s, nil
+}
+
+// tokenScopesKey is used for the key for context.WithValue().
+// value: []string (e.g. {"registry:foo/bar:pull"})
+type tokenScopesKey struct{}
+
+// ContextWithRepositoryScope returns a context with tokenScopesKey{} and the repository scope value.
+func ContextWithRepositoryScope(ctx context.Context, refspec reference.Spec, push bool) (context.Context, error) {
+ s, err := RepositoryScope(refspec, push)
+ if err != nil {
+ return nil, err
+ }
+ return WithScope(ctx, s), nil
+}
+
+// WithScope appends a custom registry auth scope to the context.
+func WithScope(ctx context.Context, scope string) context.Context {
+ var scopes []string
+ if v := ctx.Value(tokenScopesKey{}); v != nil {
+ scopes = v.([]string)
+ scopes = append(scopes, scope)
+ } else {
+ scopes = []string{scope}
+ }
+ return context.WithValue(ctx, tokenScopesKey{}, scopes)
+}
+
+// ContextWithAppendPullRepositoryScope is used to append repository pull
+// scope into existing scopes indexed by the tokenScopesKey{}.
+func ContextWithAppendPullRepositoryScope(ctx context.Context, repo string) context.Context {
+ return WithScope(ctx, fmt.Sprintf("repository:%s:pull", repo))
+}
+
+// GetTokenScopes returns deduplicated and sorted scopes from ctx.Value(tokenScopesKey{}) and common scopes.
+func GetTokenScopes(ctx context.Context, common []string) []string {
+ scopes := []string{}
+ if x := ctx.Value(tokenScopesKey{}); x != nil {
+ scopes = append(scopes, x.([]string)...)
+ }
+
+ scopes = append(scopes, common...)
+ sort.Strings(scopes)
+
+ if len(scopes) == 0 {
+ return scopes
+ }
+
+ l := 0
+ for idx := 1; idx < len(scopes); idx++ {
+ // Note: this comparison is unaware of the scope grammar (https://docs.docker.com/registry/spec/auth/scope/)
+ // So, "repository:foo/bar:pull,push" != "repository:foo/bar:push,pull", although semantically they are equal.
+ if scopes[l] == scopes[idx] {
+ continue
+ }
+
+ l++
+ scopes[l] = scopes[idx]
+ }
+ return scopes[:l+1]
+}
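
A small sketch of scope accumulation and deduplication; the repository names are made up.

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	ctx := docker.WithScope(context.Background(), "repository:foo/bar:pull")
	ctx = docker.ContextWithAppendPullRepositoryScope(ctx, "baz/qux")

	// Common scopes are merged in, then the result is sorted and deduplicated.
	scopes := docker.GetTokenScopes(ctx, []string{"repository:foo/bar:pull"})
	fmt.Println(scopes) // [repository:baz/qux:pull repository:foo/bar:pull]
}
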
diff --git a/vendor/github.com/containerd/containerd/remotes/docker/status.go b/vendor/github.com/containerd/containerd/remotes/docker/status.go
new file mode 100644
index 000000000..1a9227725
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/docker/status.go
@@ -0,0 +1,101 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package docker
+
+import (
+ "fmt"
+ "sync"
+
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
+ "github.com/moby/locker"
+)
+
+// Status of a content operation
+type Status struct {
+ content.Status
+
+ Committed bool
+
+ // ErrClosed contains error encountered on close.
+ ErrClosed error
+
+ // UploadUUID is used by the Docker registry to reference blob uploads
+ UploadUUID string
+
+ // PushStatus contains status related to push.
+ PushStatus
+}
+
+type PushStatus struct {
+ // MountedFrom is the source the content was cross-repo mounted from (empty if no cross-repo mount was performed).
+ MountedFrom string
+
+ // Exists indicates whether content already exists in the repository and wasn't uploaded.
+ Exists bool
+}
+
+// StatusTracker tracks the status of content operations
+type StatusTracker interface {
+ GetStatus(string) (Status, error)
+ SetStatus(string, Status)
+}
+
+// StatusTrackLocker tracks the status of operations with a per-reference lock
+type StatusTrackLocker interface {
+ StatusTracker
+ Lock(string)
+ Unlock(string)
+}
+
+type memoryStatusTracker struct {
+ statuses map[string]Status
+ m sync.Mutex
+ locker *locker.Locker
+}
+
+// NewInMemoryTracker returns a StatusTracker that tracks content status in-memory
+func NewInMemoryTracker() StatusTrackLocker {
+ return &memoryStatusTracker{
+ statuses: map[string]Status{},
+ locker: locker.New(),
+ }
+}
+
+func (t *memoryStatusTracker) GetStatus(ref string) (Status, error) {
+ t.m.Lock()
+ defer t.m.Unlock()
+ status, ok := t.statuses[ref]
+ if !ok {
+ return Status{}, fmt.Errorf("status for ref %v: %w", ref, errdefs.ErrNotFound)
+ }
+ return status, nil
+}
+
+func (t *memoryStatusTracker) SetStatus(ref string, status Status) {
+ t.m.Lock()
+ t.statuses[ref] = status
+ t.m.Unlock()
+}
+
+func (t *memoryStatusTracker) Lock(ref string) {
+ t.locker.Lock(ref)
+}
+
+func (t *memoryStatusTracker) Unlock(ref string) {
+ t.locker.Unlock(ref)
+}
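
A short sketch of the tracker's intended use; the ref key is illustrative (in practice it comes from a helper such as `remotes.MakeRefKey`). The per-ref lock serializes concurrent pushers of one blob, while the internal mutex only guards the status map itself:

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	tracker := docker.NewInMemoryTracker()
	ref := "layer-sha256:deadbeef" // illustrative ref key

	tracker.Lock(ref)
	tracker.SetStatus(ref, docker.Status{UploadUUID: "upload-1"})
	tracker.Unlock(ref)

	st, err := tracker.GetStatus(ref)
	if err != nil {
		fmt.Println("lookup failed:", err) // wraps errdefs.ErrNotFound for unknown refs
		return
	}
	fmt.Println(st.UploadUUID) // upload-1
}
```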
diff --git a/vendor/github.com/containerd/containerd/remotes/errors/errors.go b/vendor/github.com/containerd/containerd/remotes/errors/errors.go
new file mode 100644
index 000000000..f60ff0fc2
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/errors/errors.go
@@ -0,0 +1,55 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package errors
+
+import (
+ "fmt"
+ "io"
+ "net/http"
+)
+
+var _ error = ErrUnexpectedStatus{}
+
+// ErrUnexpectedStatus is returned if a registry API request returned an unexpected HTTP status
+type ErrUnexpectedStatus struct {
+ Status string
+ StatusCode int
+ Body []byte
+ RequestURL, RequestMethod string
+}
+
+func (e ErrUnexpectedStatus) Error() string {
+ return fmt.Sprintf("unexpected status from %s request to %s: %s", e.RequestMethod, e.RequestURL, e.Status)
+}
+
+// NewUnexpectedStatusErr creates an ErrUnexpectedStatus from an HTTP response
+func NewUnexpectedStatusErr(resp *http.Response) error {
+ var b []byte
+ if resp.Body != nil {
+ b, _ = io.ReadAll(io.LimitReader(resp.Body, 64000)) // 64KB
+ }
+ err := ErrUnexpectedStatus{
+ Body: b,
+ Status: resp.Status,
+ StatusCode: resp.StatusCode,
+ RequestMethod: resp.Request.Method,
+ }
+ if resp.Request.URL != nil {
+ err.RequestURL = resp.Request.URL.String()
+ }
+ return err
+}
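
A hedged sketch of consuming this error type; the registry URL is illustrative. Because `ErrUnexpectedStatus` is a plain struct implementing `error`, it can be recovered anywhere in a wrapped chain with `errors.As`:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"

	remoteserrors "github.com/containerd/containerd/remotes/errors"
)

func main() {
	resp, err := http.Get("https://registry.example.com/v2/") // illustrative endpoint
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		statusErr := remoteserrors.NewUnexpectedStatusErr(resp)
		var unexpected remoteserrors.ErrUnexpectedStatus
		if errors.As(statusErr, &unexpected) {
			// The truncated body (up to 64KB) is kept for diagnostics.
			fmt.Printf("%s %s: %s (%d bytes of body)\n",
				unexpected.RequestMethod, unexpected.RequestURL,
				unexpected.Status, len(unexpected.Body))
		}
	}
}
```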
diff --git a/vendor/github.com/containerd/containerd/remotes/handlers.go b/vendor/github.com/containerd/containerd/remotes/handlers.go
new file mode 100644
index 000000000..365ff5fc0
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/handlers.go
@@ -0,0 +1,414 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package remotes
+
+import (
+ "bytes"
+ "context"
+ "errors"
+ "fmt"
+ "io"
+ "strings"
+ "sync"
+
+ "github.com/containerd/log"
+ "github.com/containerd/platforms"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+ "golang.org/x/sync/semaphore"
+
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/images"
+ "github.com/containerd/containerd/labels"
+)
+
+type refKeyPrefix struct{}
+
+// WithMediaTypeKeyPrefix adds a custom key prefix for a media type which is used when storing
+// data in the content store from the FetchHandler.
+//
+// Used in `MakeRefKey` to determine what the key prefix should be.
+func WithMediaTypeKeyPrefix(ctx context.Context, mediaType, prefix string) context.Context {
+ var values map[string]string
+ if v := ctx.Value(refKeyPrefix{}); v != nil {
+ values = v.(map[string]string)
+ } else {
+ values = make(map[string]string)
+ }
+
+ values[mediaType] = prefix
+ return context.WithValue(ctx, refKeyPrefix{}, values)
+}
+
+// MakeRefKey returns a unique reference for the descriptor. This reference can be
+// used to look up ongoing processes related to the descriptor. This function
+// may look to the context to namespace the reference appropriately.
+func MakeRefKey(ctx context.Context, desc ocispec.Descriptor) string {
+ key := desc.Digest.String()
+ if desc.Annotations != nil {
+ if name, ok := desc.Annotations[ocispec.AnnotationRefName]; ok {
+ key = fmt.Sprintf("%s@%s", name, desc.Digest.String())
+ }
+ }
+
+ if v := ctx.Value(refKeyPrefix{}); v != nil {
+ values := v.(map[string]string)
+ if prefix := values[desc.MediaType]; prefix != "" {
+ return prefix + "-" + key
+ }
+ }
+
+ switch mt := desc.MediaType; {
+ case mt == images.MediaTypeDockerSchema2Manifest || mt == ocispec.MediaTypeImageManifest:
+ return "manifest-" + key
+ case mt == images.MediaTypeDockerSchema2ManifestList || mt == ocispec.MediaTypeImageIndex:
+ return "index-" + key
+ case images.IsLayerType(mt):
+ return "layer-" + key
+ case images.IsKnownConfig(mt):
+ return "config-" + key
+ default:
+ log.G(ctx).Warnf("reference for unknown type: %s", mt)
+ return "unknown-" + key
+ }
+}
+
+// FetchHandler returns a handler that will fetch all content into the ingester
+// discovered in a call to Dispatch. Use with ChildrenHandler to do a full
+// recursive fetch.
+func FetchHandler(ingester content.Ingester, fetcher Fetcher) images.HandlerFunc {
+ return func(ctx context.Context, desc ocispec.Descriptor) (subdescs []ocispec.Descriptor, err error) {
+ ctx = log.WithLogger(ctx, log.G(ctx).WithFields(log.Fields{
+ "digest": desc.Digest,
+ "mediatype": desc.MediaType,
+ "size": desc.Size,
+ }))
+
+ switch desc.MediaType {
+ case images.MediaTypeDockerSchema1Manifest:
+ return nil, fmt.Errorf("%v not supported", desc.MediaType)
+ default:
+ err := Fetch(ctx, ingester, fetcher, desc)
+ if errdefs.IsAlreadyExists(err) {
+ return nil, nil
+ }
+ return nil, err
+ }
+ }
+}
+
+// Fetch fetches the given digest into the provided ingester
+func Fetch(ctx context.Context, ingester content.Ingester, fetcher Fetcher, desc ocispec.Descriptor) error {
+ log.G(ctx).Debug("fetch")
+
+ cw, err := content.OpenWriter(ctx, ingester, content.WithRef(MakeRefKey(ctx, desc)), content.WithDescriptor(desc))
+ if err != nil {
+ return err
+ }
+ defer cw.Close()
+
+ ws, err := cw.Status()
+ if err != nil {
+ return err
+ }
+
+ if desc.Size == 0 {
+ // Most likely a poorly configured registry/web front end which responded with no
+ // Content-Length header; we are unable (and it would be useless) to commit a 0-length entry
+ // into the content store. Error out here, otherwise the error sent back is confusing.
+ return fmt.Errorf("unable to fetch descriptor (%s) which reports content size of zero: %w", desc.Digest, errdefs.ErrInvalidArgument)
+ }
+ if ws.Offset == desc.Size {
+ // If writer is already complete, commit and return
+ err := cw.Commit(ctx, desc.Size, desc.Digest)
+ if err != nil && !errdefs.IsAlreadyExists(err) {
+ return fmt.Errorf("failed commit on ref %q: %w", ws.Ref, err)
+ }
+ return err
+ }
+
+ if desc.Size == int64(len(desc.Data)) {
+ return content.Copy(ctx, cw, bytes.NewReader(desc.Data), desc.Size, desc.Digest)
+ }
+
+ rc, err := fetcher.Fetch(ctx, desc)
+ if err != nil {
+ return err
+ }
+ defer rc.Close()
+
+ return content.Copy(ctx, cw, rc, desc.Size, desc.Digest)
+}
+
+// PushHandler returns a handler that will push all content from the provider
+// using a writer from the pusher.
+func PushHandler(pusher Pusher, provider content.Provider) images.HandlerFunc {
+ return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ ctx = log.WithLogger(ctx, log.G(ctx).WithFields(log.Fields{
+ "digest": desc.Digest,
+ "mediatype": desc.MediaType,
+ "size": desc.Size,
+ }))
+
+ err := push(ctx, provider, pusher, desc)
+ return nil, err
+ }
+}
+
+func push(ctx context.Context, provider content.Provider, pusher Pusher, desc ocispec.Descriptor) error {
+ log.G(ctx).Debug("push")
+
+ var (
+ cw content.Writer
+ err error
+ )
+ if cs, ok := pusher.(content.Ingester); ok {
+ cw, err = content.OpenWriter(ctx, cs, content.WithRef(MakeRefKey(ctx, desc)), content.WithDescriptor(desc))
+ } else {
+ cw, err = pusher.Push(ctx, desc)
+ }
+ if err != nil {
+ if !errdefs.IsAlreadyExists(err) {
+ return err
+ }
+
+ return nil
+ }
+ defer cw.Close()
+
+ ra, err := provider.ReaderAt(ctx, desc)
+ if err != nil {
+ return err
+ }
+ defer ra.Close()
+
+ rd := io.NewSectionReader(ra, 0, desc.Size)
+ return content.Copy(ctx, cw, rd, desc.Size, desc.Digest)
+}
+
+// PushContent pushes content specified by the descriptor from the provider.
+//
+// Base handlers can be provided which will be called before any push specific
+// handlers.
+//
+// If the passed in content.Provider is also a content.InfoProvider (such as
+// content.Manager) then this will also annotate the distribution sources using
+// labels prefixed with "containerd.io/distribution.source".
+func PushContent(ctx context.Context, pusher Pusher, desc ocispec.Descriptor, store content.Provider, limiter *semaphore.Weighted, platform platforms.MatchComparer, wrapper func(h images.Handler) images.Handler) error {
+
+ var m sync.Mutex
+ manifests := []ocispec.Descriptor{}
+ indexStack := []ocispec.Descriptor{}
+
+ filterHandler := images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ switch desc.MediaType {
+ case images.MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest:
+ m.Lock()
+ manifests = append(manifests, desc)
+ m.Unlock()
+ return nil, images.ErrStopHandler
+ case images.MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
+ m.Lock()
+ indexStack = append(indexStack, desc)
+ m.Unlock()
+ return nil, images.ErrStopHandler
+ default:
+ return nil, nil
+ }
+ })
+
+ pushHandler := PushHandler(pusher, store)
+
+ platformFilterHandler := images.FilterPlatforms(images.ChildrenHandler(store), platform)
+
+ var handler images.Handler
+ if infoProvider, ok := store.(content.InfoProvider); ok {
+ annotateHandler := annotateDistributionSourceHandler(platformFilterHandler, infoProvider)
+ handler = images.Handlers(annotateHandler, filterHandler, pushHandler)
+ } else {
+ handler = images.Handlers(platformFilterHandler, filterHandler, pushHandler)
+ }
+
+ if wrapper != nil {
+ handler = wrapper(handler)
+ }
+
+ if err := images.Dispatch(ctx, handler, limiter, desc); err != nil {
+ return err
+ }
+
+ if err := images.Dispatch(ctx, pushHandler, limiter, manifests...); err != nil {
+ return err
+ }
+
+ // Iterate in reverse order as seen, parent always uploaded after child
+ for i := len(indexStack) - 1; i >= 0; i-- {
+ err := images.Dispatch(ctx, pushHandler, limiter, indexStack[i])
+ if err != nil {
+ // TODO(estesp): until we have a more complete method for index push, we need to report
+ // missing dependencies in an index/manifest list by sensing the "400 Bad Request"
+ // as a marker for this problem
+ if errors.Unwrap(err) != nil && strings.Contains(errors.Unwrap(err).Error(), "400 Bad Request") {
+ return fmt.Errorf("manifest list/index references to blobs and/or manifests are missing in your target registry: %w", err)
+ }
+ return err
+ }
+ }
+
+ return nil
+}
+
+// SkipNonDistributableBlobs returns a handler that skips blobs that have a media type that is "non-distributable".
+// An example of this kind of content would be a Windows base layer, which is not supposed to be redistributed.
+//
+// This is based on the media type of the content:
+// - application/vnd.oci.image.layer.nondistributable
+// - application/vnd.docker.image.rootfs.foreign
+func SkipNonDistributableBlobs(f images.HandlerFunc) images.HandlerFunc {
+ return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ if images.IsNonDistributable(desc.MediaType) {
+ log.G(ctx).WithField("digest", desc.Digest).WithField("mediatype", desc.MediaType).Debug("Skipping non-distributable blob")
+ return nil, images.ErrSkipDesc
+ }
+
+ if images.IsLayerType(desc.MediaType) {
+ return nil, nil
+ }
+
+ children, err := f(ctx, desc)
+ if err != nil {
+ return nil, err
+ }
+ if len(children) == 0 {
+ return nil, nil
+ }
+
+ out := make([]ocispec.Descriptor, 0, len(children))
+ for _, child := range children {
+ if !images.IsNonDistributable(child.MediaType) {
+ out = append(out, child)
+ } else {
+ log.G(ctx).WithField("digest", child.Digest).WithField("mediatype", child.MediaType).Debug("Skipping non-distributable blob")
+ }
+ }
+ return out, nil
+ }
+}
+
+// FilterManifestByPlatformHandler allows Handler to handle non-target
+// platform's manifest and configuration data.
+func FilterManifestByPlatformHandler(f images.HandlerFunc, m platforms.Matcher) images.HandlerFunc {
+ return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ children, err := f(ctx, desc)
+ if err != nil {
+ return nil, err
+ }
+
+ // no platform information
+ if desc.Platform == nil || m == nil {
+ return children, nil
+ }
+
+ var descs []ocispec.Descriptor
+ switch desc.MediaType {
+ case images.MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest:
+ if m.Match(*desc.Platform) {
+ descs = children
+ } else {
+ for _, child := range children {
+ if child.MediaType == images.MediaTypeDockerSchema2Config ||
+ child.MediaType == ocispec.MediaTypeImageConfig {
+
+ descs = append(descs, child)
+ }
+ }
+ }
+ default:
+ descs = children
+ }
+ return descs, nil
+ }
+}
+
+// annotateDistributionSourceHandler adds the distribution source label to the
+// annotations of config or blob descriptors.
+func annotateDistributionSourceHandler(f images.HandlerFunc, provider content.InfoProvider) images.HandlerFunc {
+ return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
+ children, err := f(ctx, desc)
+ if err != nil {
+ return nil, err
+ }
+
+ // Distribution source is only used for config or blob but may be inherited from
+ // a manifest or manifest list
+ switch desc.MediaType {
+ case images.MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest,
+ images.MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
+ default:
+ return children, nil
+ }
+
+ parentSourceAnnotations := desc.Annotations
+ var parentLabels map[string]string
+ if pi, err := provider.Info(ctx, desc.Digest); err != nil {
+ if !errdefs.IsNotFound(err) {
+ return nil, err
+ }
+ } else {
+ parentLabels = pi.Labels
+ }
+
+ for i := range children {
+ child := children[i]
+
+ info, err := provider.Info(ctx, child.Digest)
+ if err != nil {
+ if !errdefs.IsNotFound(err) {
+ return nil, err
+ }
+ }
+ copyDistributionSourceLabels(info.Labels, &child)
+
+ // Annotate with parent labels for cross repo mount or fetch.
+ // Parent sources may apply to all children since most registries
+ // enforce that children exist before the manifests.
+ copyDistributionSourceLabels(parentSourceAnnotations, &child)
+ copyDistributionSourceLabels(parentLabels, &child)
+
+ children[i] = child
+ }
+ return children, nil
+ }
+}
+
+func copyDistributionSourceLabels(from map[string]string, to *ocispec.Descriptor) {
+ for k, v := range from {
+ if !strings.HasPrefix(k, labels.LabelDistributionSource+".") {
+ continue
+ }
+
+ if to.Annotations == nil {
+ to.Annotations = make(map[string]string)
+ } else {
+ // Only propagate the parent label if the child doesn't already have it.
+ if _, has := to.Annotations[k]; has {
+ continue
+ }
+ }
+ to.Annotations[k] = v
+ }
+}
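
A small sketch of the ref-key machinery above, assuming the import paths shown; the digest payload is illustrative. It shows the built-in media-type mapping and how `WithMediaTypeKeyPrefix` overrides it:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd/remotes"
	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	ctx := context.Background()
	desc := ocispec.Descriptor{
		MediaType: ocispec.MediaTypeImageManifest,
		Digest:    digest.FromString("illustrative content"),
	}

	// Built-in mapping: manifests get the "manifest-" prefix.
	fmt.Println(remotes.MakeRefKey(ctx, desc)) // manifest-sha256:...

	// A per-media-type prefix installed on the context takes precedence.
	ctx = remotes.WithMediaTypeKeyPrefix(ctx, ocispec.MediaTypeImageManifest, "ingest")
	fmt.Println(remotes.MakeRefKey(ctx, desc)) // ingest-sha256:...
}
```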
diff --git a/vendor/github.com/containerd/containerd/remotes/resolver.go b/vendor/github.com/containerd/containerd/remotes/resolver.go
new file mode 100644
index 000000000..f200c84bc
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/remotes/resolver.go
@@ -0,0 +1,94 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package remotes
+
+import (
+ "context"
+ "io"
+
+ "github.com/containerd/containerd/content"
+ "github.com/opencontainers/go-digest"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// Resolver provides remotes based on a locator.
+type Resolver interface {
+ // Resolve attempts to resolve the reference into a name and descriptor.
+ //
+ // The argument `ref` should be a scheme-less URI representing the remote.
+ // Structurally, it has a host and path. The "host" can be used to directly
+ // reference a specific host or be matched against a specific handler.
+ //
+ // The returned name should be used to identify the referenced entity.
+ // Depending on the remote namespace, this may be immutable or mutable.
+ // While the name may differ from ref, it should itself be a valid ref.
+ //
+ // If the resolution fails, an error will be returned.
+ Resolve(ctx context.Context, ref string) (name string, desc ocispec.Descriptor, err error)
+
+ // Fetcher returns a new fetcher for the provided reference.
+ // All content fetched from the returned fetcher will be
+ // from the namespace referred to by ref.
+ Fetcher(ctx context.Context, ref string) (Fetcher, error)
+
+ // Pusher returns a new pusher for the provided reference.
+ // The returned Pusher should satisfy content.Ingester and concurrent attempts
+ // to push the same blob using the Ingester API should result in ErrUnavailable.
+ Pusher(ctx context.Context, ref string) (Pusher, error)
+}
+
+// Fetcher fetches content.
+// A fetcher implementation may implement the FetcherByDigest interface too.
+type Fetcher interface {
+ // Fetch the resource identified by the descriptor.
+ Fetch(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error)
+}
+
+// FetcherByDigest fetches content by the digest.
+type FetcherByDigest interface {
+ // FetchByDigest fetches the resource identified by the digest.
+ //
+ // FetcherByDigest usually returns an incomplete descriptor.
+ // Typically, the media type is always set to "application/octet-stream",
+ // and the annotations are unset.
+ FetchByDigest(ctx context.Context, dgst digest.Digest) (io.ReadCloser, ocispec.Descriptor, error)
+}
+
+// Pusher pushes content.
+type Pusher interface {
+ // Push returns a content writer for the given resource identified
+ // by the descriptor.
+ Push(ctx context.Context, d ocispec.Descriptor) (content.Writer, error)
+}
+
+// FetcherFunc allows package users to implement a Fetcher with just a
+// function.
+type FetcherFunc func(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error)
+
+// Fetch content
+func (fn FetcherFunc) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error) {
+ return fn(ctx, desc)
+}
+
+// PusherFunc allows package users to implement a Pusher with just a
+// function.
+type PusherFunc func(ctx context.Context, desc ocispec.Descriptor) (content.Writer, error)
+
+// Push content
+func (fn PusherFunc) Push(ctx context.Context, desc ocispec.Descriptor) (content.Writer, error) {
+ return fn(ctx, desc)
+}
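
A hedged sketch of `FetcherFunc` in action: a function standing in for a full `Fetcher` implementation, backed here by an in-memory blob map (illustrative, not a real remote):

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"

	"github.com/containerd/containerd/remotes"
	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	payload := []byte("hello")
	blobs := map[digest.Digest][]byte{digest.FromBytes(payload): payload}

	fetcher := remotes.FetcherFunc(func(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error) {
		b, ok := blobs[desc.Digest]
		if !ok {
			return nil, fmt.Errorf("blob %s: not found", desc.Digest)
		}
		return io.NopCloser(bytes.NewReader(b)), nil
	})

	rc, err := fetcher.Fetch(context.Background(), ocispec.Descriptor{Digest: digest.FromBytes(payload)})
	if err != nil {
		fmt.Println(err)
		return
	}
	defer rc.Close()
	b, _ := io.ReadAll(rc)
	fmt.Println(string(b)) // hello
}
```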
diff --git a/vendor/github.com/containerd/containerd/tracing/helpers.go b/vendor/github.com/containerd/containerd/tracing/helpers.go
new file mode 100644
index 000000000..981da6c79
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/tracing/helpers.go
@@ -0,0 +1,94 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package tracing
+
+import (
+ "encoding/json"
+ "fmt"
+ "strings"
+
+ "go.opentelemetry.io/otel/attribute"
+)
+
+const (
+ spanDelimiter = "."
+)
+
+func makeSpanName(names ...string) string {
+ return strings.Join(names, spanDelimiter)
+}
+
+func any(k string, v interface{}) attribute.KeyValue {
+ if v == nil {
+ return attribute.String(k, "")
+ }
+
+ switch typed := v.(type) {
+ case bool:
+ return attribute.Bool(k, typed)
+ case []bool:
+ return attribute.BoolSlice(k, typed)
+ case int:
+ return attribute.Int(k, typed)
+ case []int:
+ return attribute.IntSlice(k, typed)
+ case int8:
+ return attribute.Int(k, int(typed))
+ case []int8:
+ ls := make([]int, 0, len(typed))
+ for _, i := range typed {
+ ls = append(ls, int(i))
+ }
+ return attribute.IntSlice(k, ls)
+ case int16:
+ return attribute.Int(k, int(typed))
+ case []int16:
+ ls := make([]int, 0, len(typed))
+ for _, i := range typed {
+ ls = append(ls, int(i))
+ }
+ return attribute.IntSlice(k, ls)
+ case int32:
+ return attribute.Int64(k, int64(typed))
+ case []int32:
+ ls := make([]int64, 0, len(typed))
+ for _, i := range typed {
+ ls = append(ls, int64(i))
+ }
+ return attribute.Int64Slice(k, ls)
+ case int64:
+ return attribute.Int64(k, typed)
+ case []int64:
+ return attribute.Int64Slice(k, typed)
+ case float64:
+ return attribute.Float64(k, typed)
+ case []float64:
+ return attribute.Float64Slice(k, typed)
+ case string:
+ return attribute.String(k, typed)
+ case []string:
+ return attribute.StringSlice(k, typed)
+ }
+
+ if stringer, ok := v.(fmt.Stringer); ok {
+ return attribute.String(k, stringer.String())
+ }
+ if b, err := json.Marshal(v); b != nil && err == nil {
+ return attribute.String(k, string(b))
+ }
+ return attribute.String(k, fmt.Sprintf("%v", v))
+}
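
The unexported `any` conversion above is surfaced as `tracing.Attribute` in tracing.go (added later in this diff). A brief sketch of the fallback ladder, assuming that exported wrapper:

```go
package main

import (
	"fmt"
	"time"

	"github.com/containerd/containerd/tracing"
)

func main() {
	// Exact built-in types hit a typed branch of the switch.
	fmt.Println(tracing.Attribute("retries", 3)) // int attribute

	// Named types such as time.Duration miss the numeric cases and fall
	// through to the fmt.Stringer branch.
	fmt.Println(tracing.Attribute("timeout", 5*time.Second)) // string "5s"

	// Everything else is JSON-marshaled as a last resort.
	fmt.Println(tracing.Attribute("opts", map[string]int{"workers": 4}))
}
```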
diff --git a/vendor/github.com/containerd/containerd/tracing/log.go b/vendor/github.com/containerd/containerd/tracing/log.go
new file mode 100644
index 000000000..98fa16f93
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/tracing/log.go
@@ -0,0 +1,66 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package tracing
+
+import (
+ "github.com/sirupsen/logrus"
+ "go.opentelemetry.io/otel/attribute"
+ "go.opentelemetry.io/otel/trace"
+)
+
+// NewLogrusHook creates a new logrus hook
+func NewLogrusHook() *LogrusHook {
+ return &LogrusHook{}
+}
+
+// LogrusHook is a logrus hook which adds logrus events to active spans.
+// If the span is not recording or the span context is invalid, the hook is a no-op.
+type LogrusHook struct{}
+
+// Levels returns the logrus levels that this hook is interested in.
+func (h *LogrusHook) Levels() []logrus.Level {
+ return logrus.AllLevels
+}
+
+// Fire is called when a log event occurs.
+func (h *LogrusHook) Fire(entry *logrus.Entry) error {
+ span := trace.SpanFromContext(entry.Context)
+ if span == nil {
+ return nil
+ }
+
+ if !span.SpanContext().IsValid() || !span.IsRecording() {
+ return nil
+ }
+
+ span.AddEvent(
+ entry.Message,
+ trace.WithAttributes(logrusDataToAttrs(entry.Data)...),
+ trace.WithAttributes(attribute.String("level", entry.Level.String())),
+ trace.WithTimestamp(entry.Time),
+ )
+
+ return nil
+}
+
+func logrusDataToAttrs(data logrus.Fields) []attribute.KeyValue {
+ attrs := make([]attribute.KeyValue, 0, len(data))
+ for k, v := range data {
+ attrs = append(attrs, any(k, v))
+ }
+ return attrs
+}
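
A minimal sketch of wiring the hook into logrus, assuming a real OpenTelemetry tracer provider has been registered globally (otherwise spans are not recording and the hook no-ops, as documented above):

```go
package main

import (
	"context"

	"github.com/sirupsen/logrus"

	"github.com/containerd/containerd/tracing"
)

func main() {
	logger := logrus.New()
	logger.AddHook(tracing.NewLogrusHook())

	ctx, span := tracing.StartSpan(context.Background(), "example.operation")
	defer span.End()

	// Because this entry carries a context with a recording span, the hook
	// attaches the message, level and fields to that span as an event.
	logger.WithContext(ctx).WithField("attempt", 1).Info("starting work")

	// Entries without a span context are ignored by the hook.
	logger.Info("plain log line")
}
```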
diff --git a/vendor/github.com/containerd/containerd/tracing/tracing.go b/vendor/github.com/containerd/containerd/tracing/tracing.go
new file mode 100644
index 000000000..80d2b95c0
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/tracing/tracing.go
@@ -0,0 +1,129 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package tracing
+
+import (
+ "context"
+ "net/http"
+
+ "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
+ "go.opentelemetry.io/otel"
+ "go.opentelemetry.io/otel/attribute"
+ "go.opentelemetry.io/otel/codes"
+ semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
+ "go.opentelemetry.io/otel/trace"
+)
+
+// StartConfig defines configuration for a new span object.
+type StartConfig struct {
+ spanOpts []trace.SpanStartOption
+}
+
+type SpanOpt func(config *StartConfig)
+
+// WithHTTPRequest marks the span as an HTTP request operation from client to server.
+// It marks the span with the `SpanKindClient` type; see the deprecation note below.
+//
+// Deprecated: use upstream functionality from otelhttp directly instead. This function is kept for API compatibility
+// but no longer works as expected due to required functionality no longer exported in OpenTelemetry libraries.
+func WithHTTPRequest(_ *http.Request) SpanOpt {
+ return func(config *StartConfig) {
+ config.spanOpts = append(config.spanOpts,
+ trace.WithSpanKind(trace.SpanKindClient), // A client making a request to a server
+ )
+ }
+}
+
+// UpdateHTTPClient updates the http client with the necessary otel transport
+func UpdateHTTPClient(client *http.Client, name string) {
+ client.Transport = otelhttp.NewTransport(
+ client.Transport,
+ otelhttp.WithSpanNameFormatter(func(operation string, r *http.Request) string {
+ return name
+ }),
+ )
+}
+
+// StartSpan starts child span in a context.
+func StartSpan(ctx context.Context, opName string, opts ...SpanOpt) (context.Context, *Span) {
+ config := StartConfig{}
+ for _, fn := range opts {
+ fn(&config)
+ }
+ tracer := otel.Tracer("")
+ if parent := trace.SpanFromContext(ctx); parent != nil && parent.SpanContext().IsValid() {
+ tracer = parent.TracerProvider().Tracer("")
+ }
+ ctx, span := tracer.Start(ctx, opName, config.spanOpts...)
+ return ctx, &Span{otelSpan: span}
+}
+
+// SpanFromContext returns the current Span from the context.
+func SpanFromContext(ctx context.Context) *Span {
+ return &Span{
+ otelSpan: trace.SpanFromContext(ctx),
+ }
+}
+
+// Span is a wrapper around the otel trace.Span.
+// Span is the individual component of a trace. It represents a
+// single named and timed operation of a workflow that is traced.
+type Span struct {
+ otelSpan trace.Span
+}
+
+// End completes the span.
+func (s *Span) End() {
+ s.otelSpan.End()
+}
+
+// AddEvent adds an event with provided name and options.
+func (s *Span) AddEvent(name string, options ...trace.EventOption) {
+ s.otelSpan.AddEvent(name, options...)
+}
+
+// SetStatus sets the status of the current span.
+// If an error is encountered, it records the error and sets span status to Error.
+func (s *Span) SetStatus(err error) {
+ if err != nil {
+ s.otelSpan.RecordError(err)
+ s.otelSpan.SetStatus(codes.Error, err.Error())
+ } else {
+ s.otelSpan.SetStatus(codes.Ok, "")
+ }
+}
+
+// SetAttributes sets kv as attributes of the span.
+func (s *Span) SetAttributes(kv ...attribute.KeyValue) {
+ s.otelSpan.SetAttributes(kv...)
+}
+
+// Name builds a span name by joining a list of strings in dot-separated format.
+func Name(names ...string) string {
+ return makeSpanName(names...)
+}
+
+// Attribute takes a key-value pair and returns an attribute.KeyValue.
+func Attribute(k string, v interface{}) attribute.KeyValue {
+ return any(k, v)
+}
+
+// HTTPStatusCodeAttributes generates attributes of the HTTP namespace as specified by the OpenTelemetry
+// specification for a span.
+func HTTPStatusCodeAttributes(code int) []attribute.KeyValue {
+ return []attribute.KeyValue{semconv.HTTPStatusCodeKey.Int(code)}
+}
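
A hedged sketch of the typical span lifecycle with this wrapper; `doWork` is a hypothetical callee standing in for real work, and the reference string is illustrative:

```go
package tracingexample

import (
	"context"

	"github.com/containerd/containerd/tracing"
)

func resolveWithSpan(ctx context.Context, doWork func(context.Context) error) error {
	ctx, span := tracing.StartSpan(ctx, tracing.Name("remotes", "resolve"))
	defer span.End()

	span.SetAttributes(tracing.Attribute("ref", "docker.io/library/busybox:latest"))

	if err := doWork(ctx); err != nil {
		span.SetStatus(err) // records the error and marks the span codes.Error
		return err
	}
	span.SetStatus(nil) // codes.Ok
	return nil
}
```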
diff --git a/vendor/github.com/containerd/containerd/version/version.go b/vendor/github.com/containerd/containerd/version/version.go
new file mode 100644
index 000000000..c61791188
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/version/version.go
@@ -0,0 +1,34 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package version
+
+import "runtime"
+
+var (
+ // Package is filled at linking time
+ Package = "github.com/containerd/containerd"
+
+ // Version holds the complete version number. Filled in at linking time.
+ Version = "1.7.23+unknown"
+
+ // Revision is filled with the VCS (e.g. git) revision being used to build
+ // the program at linking time.
+ Revision = ""
+
+ // GoVersion is the version of the Go toolchain that built the program.
+ GoVersion = runtime.Version()
+)
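
"Filled at linking time" refers to the linker's `-X` flag, which overwrites string variables by import path. A build along the lines of `go build -ldflags "-X github.com/containerd/containerd/version.Version=v1.7.23 -X github.com/containerd/containerd/version.Revision=$(git rev-parse HEAD)"` stamps these values (versions shown are illustrative).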
diff --git a/vendor/github.com/containerd/errdefs/LICENSE b/vendor/github.com/containerd/errdefs/LICENSE
new file mode 100644
index 000000000..584149b6e
--- /dev/null
+++ b/vendor/github.com/containerd/errdefs/LICENSE
@@ -0,0 +1,191 @@
+
+ Apache License
+ Version 2.0, January 2004
+ https://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ Copyright The containerd Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ https://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/containerd/errdefs/README.md b/vendor/github.com/containerd/errdefs/README.md
new file mode 100644
index 000000000..bd418c63f
--- /dev/null
+++ b/vendor/github.com/containerd/errdefs/README.md
@@ -0,0 +1,13 @@
+# errdefs
+
+A Go package for defining and checking common containerd errors.
+
+## Project details
+
+**errdefs** is a containerd sub-project, licensed under the [Apache 2.0 license](./LICENSE).
+As a containerd sub-project, you will find the:
+ * [Project governance](https://github.com/containerd/project/blob/main/GOVERNANCE.md),
+ * [Maintainers](https://github.com/containerd/project/blob/main/MAINTAINERS),
+ * and [Contributing guidelines](https://github.com/containerd/project/blob/main/CONTRIBUTING.md)
+
+information in our [`containerd/project`](https://github.com/containerd/project) repository.
diff --git a/vendor/github.com/containerd/errdefs/errors.go b/vendor/github.com/containerd/errdefs/errors.go
new file mode 100644
index 000000000..f654d1964
--- /dev/null
+++ b/vendor/github.com/containerd/errdefs/errors.go
@@ -0,0 +1,443 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+// Package errdefs defines the common errors used throughout containerd
+// packages.
+//
+// Use with fmt.Errorf to add context to an error.
+//
+// To detect an error class, use the IsXXX functions to tell whether an error
+// is of a certain type.
+package errdefs
+
+import (
+ "context"
+ "errors"
+)
+
+// Definitions of common error types used throughout containerd. All containerd
+// errors returned by most packages will map into one of these errors classes.
+// Packages should return errors of these types when they want to instruct a
+// client to take a particular action.
+//
+// These errors map closely to grpc errors.
+var (
+ ErrUnknown = errUnknown{}
+ ErrInvalidArgument = errInvalidArgument{}
+ ErrNotFound = errNotFound{}
+ ErrAlreadyExists = errAlreadyExists{}
+ ErrPermissionDenied = errPermissionDenied{}
+ ErrResourceExhausted = errResourceExhausted{}
+ ErrFailedPrecondition = errFailedPrecondition{}
+ ErrConflict = errConflict{}
+ ErrNotModified = errNotModified{}
+ ErrAborted = errAborted{}
+ ErrOutOfRange = errOutOfRange{}
+ ErrNotImplemented = errNotImplemented{}
+ ErrInternal = errInternal{}
+ ErrUnavailable = errUnavailable{}
+ ErrDataLoss = errDataLoss{}
+ ErrUnauthenticated = errUnauthorized{}
+)
+
+// cancelled maps to Moby's "ErrCancelled"
+type cancelled interface {
+ Cancelled()
+}
+
+// IsCanceled returns true if the error is due to `context.Canceled`.
+func IsCanceled(err error) bool {
+ return errors.Is(err, context.Canceled) || isInterface[cancelled](err)
+}
+
+type errUnknown struct{}
+
+func (errUnknown) Error() string { return "unknown" }
+
+func (errUnknown) Unknown() {}
+
+func (e errUnknown) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// unknown maps to Moby's "ErrUnknown"
+type unknown interface {
+ Unknown()
+}
+
+// IsUnknown returns true if the error is due to an unknown error,
+// unhandled condition or unexpected response.
+func IsUnknown(err error) bool {
+ return errors.Is(err, errUnknown{}) || isInterface[unknown](err)
+}
+
+type errInvalidArgument struct{}
+
+func (errInvalidArgument) Error() string { return "invalid argument" }
+
+func (errInvalidArgument) InvalidParameter() {}
+
+func (e errInvalidArgument) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// invalidParameter maps to Moby's "ErrInvalidParameter"
+type invalidParameter interface {
+ InvalidParameter()
+}
+
+// IsInvalidArgument returns true if the error is due to an invalid argument
+func IsInvalidArgument(err error) bool {
+ return errors.Is(err, ErrInvalidArgument) || isInterface[invalidParameter](err)
+}
+
+// deadlineExceeded maps to Moby's "ErrDeadline"
+type deadlineExceeded interface {
+ DeadlineExceeded()
+}
+
+// IsDeadlineExceeded returns true if the error is due to
+// `context.DeadlineExceeded`.
+func IsDeadlineExceeded(err error) bool {
+ return errors.Is(err, context.DeadlineExceeded) || isInterface[deadlineExceeded](err)
+}
+
+type errNotFound struct{}
+
+func (errNotFound) Error() string { return "not found" }
+
+func (errNotFound) NotFound() {}
+
+func (e errNotFound) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// notFound maps to Moby's "ErrNotFound"
+type notFound interface {
+ NotFound()
+}
+
+// IsNotFound returns true if the error is due to a missing object
+func IsNotFound(err error) bool {
+ return errors.Is(err, ErrNotFound) || isInterface[notFound](err)
+}
+
+type errAlreadyExists struct{}
+
+func (errAlreadyExists) Error() string { return "already exists" }
+
+func (errAlreadyExists) AlreadyExists() {}
+
+func (e errAlreadyExists) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+type alreadyExists interface {
+ AlreadyExists()
+}
+
+// IsAlreadyExists returns true if the error is due to an already existing
+// metadata item
+func IsAlreadyExists(err error) bool {
+ return errors.Is(err, ErrAlreadyExists) || isInterface[alreadyExists](err)
+}
+
+type errPermissionDenied struct{}
+
+func (errPermissionDenied) Error() string { return "permission denied" }
+
+func (errPermissionDenied) Forbidden() {}
+
+func (e errPermissionDenied) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// forbidden maps to Moby's "ErrForbidden"
+type forbidden interface {
+ Forbidden()
+}
+
+// IsPermissionDenied returns true if the error is due to permission denied
+// or forbidden (403) response
+func IsPermissionDenied(err error) bool {
+ return errors.Is(err, ErrPermissionDenied) || isInterface[forbidden](err)
+}
+
+type errResourceExhausted struct{}
+
+func (errResourceExhausted) Error() string { return "resource exhausted" }
+
+func (errResourceExhausted) ResourceExhausted() {}
+
+func (e errResourceExhausted) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+type resourceExhausted interface {
+ ResourceExhausted()
+}
+
+// IsResourceExhausted returns true if the error is due to
+// a lack of resources or too many attempts.
+func IsResourceExhausted(err error) bool {
+ return errors.Is(err, errResourceExhausted{}) || isInterface[resourceExhausted](err)
+}
+
+type errFailedPrecondition struct{}
+
+func (e errFailedPrecondition) Error() string { return "failed precondition" }
+
+func (errFailedPrecondition) FailedPrecondition() {}
+
+func (e errFailedPrecondition) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+type failedPrecondition interface {
+ FailedPrecondition()
+}
+
+// IsFailedPrecondition returns true if an operation could not proceed due to
+// the lack of a particular condition
+func IsFailedPrecondition(err error) bool {
+ return errors.Is(err, errFailedPrecondition{}) || isInterface[failedPrecondition](err)
+}
+
+type errConflict struct{}
+
+func (errConflict) Error() string { return "conflict" }
+
+func (errConflict) Conflict() {}
+
+func (e errConflict) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// conflict maps to Moby's "ErrConflict"
+type conflict interface {
+ Conflict()
+}
+
+// IsConflict returns true if an operation could not proceed due to
+// a conflict.
+func IsConflict(err error) bool {
+ return errors.Is(err, errConflict{}) || isInterface[conflict](err)
+}
+
+type errNotModified struct{}
+
+func (errNotModified) Error() string { return "not modified" }
+
+func (errNotModified) NotModified() {}
+
+func (e errNotModified) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// notModified maps to Moby's "ErrNotModified"
+type notModified interface {
+ NotModified()
+}
+
+// IsNotModified returns true if an operation could not proceed due
+// to an object not modified from a previous state.
+func IsNotModified(err error) bool {
+ return errors.Is(err, errNotModified{}) || isInterface[notModified](err)
+}
+
+type errAborted struct{}
+
+func (errAborted) Error() string { return "aborted" }
+
+func (errAborted) Aborted() {}
+
+func (e errAborted) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+type aborted interface {
+ Aborted()
+}
+
+// IsAborted returns true if an operation was aborted.
+func IsAborted(err error) bool {
+ return errors.Is(err, errAborted{}) || isInterface[aborted](err)
+}
+
+type errOutOfRange struct{}
+
+func (errOutOfRange) Error() string { return "out of range" }
+
+func (errOutOfRange) OutOfRange() {}
+
+func (e errOutOfRange) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+type outOfRange interface {
+ OutOfRange()
+}
+
+// IsOutOfRange returns true if an operation could not proceed due
+// to data being out of the expected range.
+func IsOutOfRange(err error) bool {
+ return errors.Is(err, errOutOfRange{}) || isInterface[outOfRange](err)
+}
+
+type errNotImplemented struct{}
+
+func (errNotImplemented) Error() string { return "not implemented" }
+
+func (errNotImplemented) NotImplemented() {}
+
+func (e errNotImplemented) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// notImplemented maps to Moby's "ErrNotImplemented"
+type notImplemented interface {
+ NotImplemented()
+}
+
+// IsNotImplemented returns true if the error is due to not being implemented
+func IsNotImplemented(err error) bool {
+ return errors.Is(err, errNotImplemented{}) || isInterface[notImplemented](err)
+}
+
+type errInternal struct{}
+
+func (errInternal) Error() string { return "internal" }
+
+func (errInternal) System() {}
+
+func (e errInternal) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// system maps to Moby's "ErrSystem"
+type system interface {
+ System()
+}
+
+// IsInternal returns true if the error is due to an internal or system error
+func IsInternal(err error) bool {
+ return errors.Is(err, errInternal{}) || isInterface[system](err)
+}
+
+type errUnavailable struct{}
+
+func (errUnavailable) Error() string { return "unavailable" }
+
+func (errUnavailable) Unavailable() {}
+
+func (e errUnavailable) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// unavailable maps to Moby's "ErrUnavailable"
+type unavailable interface {
+ Unavailable()
+}
+
+// IsUnavailable returns true if the error is due to a resource being unavailable
+func IsUnavailable(err error) bool {
+ return errors.Is(err, errUnavailable{}) || isInterface[unavailable](err)
+}
+
+type errDataLoss struct{}
+
+func (errDataLoss) Error() string { return "data loss" }
+
+func (errDataLoss) DataLoss() {}
+
+func (e errDataLoss) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// dataLoss maps to Moby's "ErrDataLoss"
+type dataLoss interface {
+ DataLoss()
+}
+
+// IsDataLoss returns true if data during an operation was lost or corrupted
+func IsDataLoss(err error) bool {
+ return errors.Is(err, errDataLoss{}) || isInterface[dataLoss](err)
+}
+
+type errUnauthorized struct{}
+
+func (errUnauthorized) Error() string { return "unauthorized" }
+
+func (errUnauthorized) Unauthorized() {}
+
+func (e errUnauthorized) WithMessage(msg string) error {
+ return customMessage{e, msg}
+}
+
+// unauthorized maps to Moby's "ErrUnauthorized"
+type unauthorized interface {
+ Unauthorized()
+}
+
+// IsUnauthorized returns true if the error indicates that the user was
+// unauthenticated or unauthorized.
+func IsUnauthorized(err error) bool {
+ return errors.Is(err, errUnauthorized{}) || isInterface[unauthorized](err)
+}
+
+func isInterface[T any](err error) bool {
+ for {
+ switch x := err.(type) {
+ case T:
+ return true
+ case customMessage:
+ err = x.err
+ case interface{ Unwrap() error }:
+ err = x.Unwrap()
+ if err == nil {
+ return false
+ }
+ case interface{ Unwrap() []error }:
+ for _, err := range x.Unwrap() {
+ if isInterface[T](err) {
+ return true
+ }
+ }
+ return false
+ default:
+ return false
+ }
+ }
+}
+
+// customMessage is used to provide a defined error with a custom message.
+// The message is not wrapped but can be compared by the `Is(error) bool` interface.
+type customMessage struct {
+ err error
+ msg string
+}
+
+func (c customMessage) Is(err error) bool {
+ return c.err == err
+}
+
+func (c customMessage) As(target any) bool {
+ return errors.As(c.err, target)
+}
+
+func (c customMessage) Error() string {
+ return c.msg
+}
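
A short sketch of the two idioms this package supports, per the package doc above: wrapping with `fmt.Errorf` and `%w`, and re-messaging with `WithMessage` (the image name is illustrative):

```go
package main

import (
	"errors"
	"fmt"

	"github.com/containerd/errdefs"
)

func main() {
	// Wrapping with %w keeps the error class detectable through the chain.
	err := fmt.Errorf("image %q: %w", "busybox", errdefs.ErrNotFound)
	fmt.Println(errdefs.IsNotFound(err)) // true

	// WithMessage swaps the message while preserving errors.Is identity.
	err2 := errdefs.ErrNotFound.WithMessage("no such image")
	fmt.Println(errors.Is(err2, errdefs.ErrNotFound)) // true
	fmt.Println(err2)                                 // no such image
}
```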
diff --git a/vendor/github.com/containerd/errdefs/resolve.go b/vendor/github.com/containerd/errdefs/resolve.go
new file mode 100644
index 000000000..c02d4a73f
--- /dev/null
+++ b/vendor/github.com/containerd/errdefs/resolve.go
@@ -0,0 +1,147 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package errdefs
+
+import "context"
+
+// Resolve returns the first error found in the error chain which matches an
+// error defined in this package or context error. A raw, unwrapped error is
+// returned or ErrUnknown if no matching error is found.
+//
+// This is useful for determining a response code based on the outermost wrapped
+// error rather than the original cause. For example, a not found error deep
+// in the code may be wrapped as an invalid argument. When determining status
+// code from Is* functions, the depth or ordering of the error is not
+// considered.
+//
+// The search order is depth first, a wrapped error returned from any part of
+// the chain from `Unwrap() error` will be returned before any joined errors
+// as returned by `Unwrap() []error`.
+func Resolve(err error) error {
+ if err == nil {
+ return nil
+ }
+ err = firstError(err)
+ if err == nil {
+ err = ErrUnknown
+ }
+ return err
+}
+
+func firstError(err error) error {
+ for {
+ switch err {
+ case ErrUnknown,
+ ErrInvalidArgument,
+ ErrNotFound,
+ ErrAlreadyExists,
+ ErrPermissionDenied,
+ ErrResourceExhausted,
+ ErrFailedPrecondition,
+ ErrConflict,
+ ErrNotModified,
+ ErrAborted,
+ ErrOutOfRange,
+ ErrNotImplemented,
+ ErrInternal,
+ ErrUnavailable,
+ ErrDataLoss,
+ ErrUnauthenticated,
+ context.DeadlineExceeded,
+ context.Canceled:
+ return err
+ }
+ switch e := err.(type) {
+ case customMessage:
+ err = e.err
+ case unknown:
+ return ErrUnknown
+ case invalidParameter:
+ return ErrInvalidArgument
+ case notFound:
+ return ErrNotFound
+ case alreadyExists:
+ return ErrAlreadyExists
+ case forbidden:
+ return ErrPermissionDenied
+ case resourceExhausted:
+ return ErrResourceExhausted
+ case failedPrecondition:
+ return ErrFailedPrecondition
+ case conflict:
+ return ErrConflict
+ case notModified:
+ return ErrNotModified
+ case aborted:
+ return ErrAborted
+ case errOutOfRange:
+ return ErrOutOfRange
+ case notImplemented:
+ return ErrNotImplemented
+ case system:
+ return ErrInternal
+ case unavailable:
+ return ErrUnavailable
+ case dataLoss:
+ return ErrDataLoss
+ case unauthorized:
+ return ErrUnauthenticated
+ case deadlineExceeded:
+ return context.DeadlineExceeded
+ case cancelled:
+ return context.Canceled
+ case interface{ Unwrap() error }:
+ err = e.Unwrap()
+ if err == nil {
+ return nil
+ }
+ case interface{ Unwrap() []error }:
+ for _, ue := range e.Unwrap() {
+ if fe := firstError(ue); fe != nil {
+ return fe
+ }
+ }
+ return nil
+ case interface{ Is(error) bool }:
+ for _, target := range []error{ErrUnknown,
+ ErrInvalidArgument,
+ ErrNotFound,
+ ErrAlreadyExists,
+ ErrPermissionDenied,
+ ErrResourceExhausted,
+ ErrFailedPrecondition,
+ ErrConflict,
+ ErrNotModified,
+ ErrAborted,
+ ErrOutOfRange,
+ ErrNotImplemented,
+ ErrInternal,
+ ErrUnavailable,
+ ErrDataLoss,
+ ErrUnauthenticated,
+ context.DeadlineExceeded,
+ context.Canceled} {
+ if e.Is(target) {
+ return target
+ }
+ }
+ return nil
+ default:
+ return nil
+ }
+ }
+}
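+
+// A minimal sketch of how Resolve flattens wrapped errors (using the
+// package's sentinel errors):
+//
+//	err := fmt.Errorf("load config: %w", ErrNotFound)
+//	_ = Resolve(err) == ErrNotFound               // true
+//	_ = Resolve(errors.New("boom")) == ErrUnknown // true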
diff --git a/vendor/github.com/containerd/log/.golangci.yml b/vendor/github.com/containerd/log/.golangci.yml
new file mode 100644
index 000000000..a695775df
--- /dev/null
+++ b/vendor/github.com/containerd/log/.golangci.yml
@@ -0,0 +1,30 @@
+linters:
+ enable:
+ - exportloopref # Checks for pointers to enclosing loop variables
+ - gofmt
+ - goimports
+ - gosec
+ - ineffassign
+ - misspell
+ - nolintlint
+ - revive
+ - staticcheck
+ - tenv # Detects using os.Setenv instead of t.Setenv since Go 1.17
+ - unconvert
+ - unused
+ - vet
+ - dupword # Checks for duplicate words in the source code
+ disable:
+ - errcheck
+
+run:
+ timeout: 5m
+ skip-dirs:
+ - api
+ - cluster
+ - design
+ - docs
+ - docs/man
+ - releases
+ - reports
+ - test # e2e scripts
diff --git a/vendor/github.com/containerd/log/LICENSE b/vendor/github.com/containerd/log/LICENSE
new file mode 100644
index 000000000..584149b6e
--- /dev/null
+++ b/vendor/github.com/containerd/log/LICENSE
@@ -0,0 +1,191 @@
+
+ Apache License
+ Version 2.0, January 2004
+ https://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ Copyright The containerd Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ https://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/containerd/log/README.md b/vendor/github.com/containerd/log/README.md
new file mode 100644
index 000000000..00e084988
--- /dev/null
+++ b/vendor/github.com/containerd/log/README.md
@@ -0,0 +1,17 @@
+# log
+
+A Go package providing a common logging interface across containerd repositories and a way for clients to use and configure logging in containerd packages.
+
+This package is not intended to be used as a standalone logging package outside of the containerd ecosystem; it is an interface wrapper around a logging implementation.
+In the future, this package may be replaced with a common Go logging interface.
+
+## Project details
+
+**log** is a containerd sub-project, licensed under the [Apache 2.0 license](./LICENSE).
+As a containerd sub-project, you will find the:
+ * [Project governance](https://github.com/containerd/project/blob/main/GOVERNANCE.md),
+ * [Maintainers](https://github.com/containerd/project/blob/main/MAINTAINERS),
+ * and [Contributing guidelines](https://github.com/containerd/project/blob/main/CONTRIBUTING.md)
+
+information in our [`containerd/project`](https://github.com/containerd/project) repository.
+
diff --git a/vendor/github.com/containerd/log/context.go b/vendor/github.com/containerd/log/context.go
new file mode 100644
index 000000000..20153066f
--- /dev/null
+++ b/vendor/github.com/containerd/log/context.go
@@ -0,0 +1,182 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+// Package log provides types and functions related to logging, passing
+// loggers through a context, and attaching context to the logger.
+//
+// # Transitional types
+//
+// This package contains various types that are aliases for types in [logrus].
+// These aliases are intended for transitioning away from hard-coding logrus
+// as logging implementation. Consumers of this package are encouraged to use
+// the type-aliases from this package instead of directly using their logrus
+// equivalent.
+//
+// The intent is to replace these aliases with locally defined types and
+// interfaces once all consumers are no longer directly importing logrus
+// types.
+//
+// IMPORTANT: due to the transitional purpose of this package, it is not
+// guaranteed for the full logrus API to be provided in the future. As
+// outlined, these aliases are provided as a step to transition away from
+// a specific implementation which, as a result, exposes the full logrus API.
+// While no decisions have been made on the ultimate design and interface
+// provided by this package, we do not expect carrying "less common" features.
+package log
+
+import (
+ "context"
+ "fmt"
+
+ "github.com/sirupsen/logrus"
+)
+
+// G is a shorthand for [GetLogger].
+//
+// We may want to define this locally to a package to get package-tagged
+// log messages.
+var G = GetLogger
+
+// L is an alias for the standard logger.
+var L = &Entry{
+ Logger: logrus.StandardLogger(),
+ // Default is three fields plus a little extra room.
+ Data: make(Fields, 6),
+}
+
+type loggerKey struct{}
+
+// Fields type to pass to "WithFields".
+type Fields = map[string]any
+
+// Entry is a logging entry. It contains all the fields passed with
+// [Entry.WithFields]. It's finally logged when Trace, Debug, Info, Warn,
+// Error, Fatal or Panic is called on it. These objects can be reused and
+// passed around as much as you wish to avoid field duplication.
+//
+// Entry is a transitional type, and currently an alias for [logrus.Entry].
+type Entry = logrus.Entry
+
+// RFC3339NanoFixed is [time.RFC3339Nano] with nanoseconds padded using
+// zeros to ensure the formatted time is always the same number of
+// characters.
+const RFC3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"
+
+// Level is a logging level.
+type Level = logrus.Level
+
+// Supported log levels.
+const (
+ // TraceLevel level. Designates finer-grained informational events
+ // than [DebugLevel].
+ TraceLevel Level = logrus.TraceLevel
+
+ // DebugLevel level. Usually only enabled when debugging. Very verbose
+ // logging.
+ DebugLevel Level = logrus.DebugLevel
+
+ // InfoLevel level. General operational entries about what's going on
+ // inside the application.
+ InfoLevel Level = logrus.InfoLevel
+
+ // WarnLevel level. Non-critical entries that deserve eyes.
+ WarnLevel Level = logrus.WarnLevel
+
+ // ErrorLevel level. Logs errors that should definitely be noted.
+ // Commonly used for hooks to send errors to an error tracking service.
+ ErrorLevel Level = logrus.ErrorLevel
+
+ // FatalLevel level. Logs and then calls "logger.Exit(1)". It exits
+ // even if the logging level is set to Panic.
+ FatalLevel Level = logrus.FatalLevel
+
+ // PanicLevel level. This is the highest level of severity. Logs and
+ // then calls panic with the message passed to Debug, Info, ...
+ PanicLevel Level = logrus.PanicLevel
+)
+
+// SetLevel sets the log level globally. It returns an error if the given
+// level is not supported.
+//
+// level can be one of:
+//
+// - "trace" ([TraceLevel])
+// - "debug" ([DebugLevel])
+// - "info" ([InfoLevel])
+// - "warn" ([WarnLevel])
+// - "error" ([ErrorLevel])
+// - "fatal" ([FatalLevel])
+// - "panic" ([PanicLevel])
+func SetLevel(level string) error {
+ lvl, err := logrus.ParseLevel(level)
+ if err != nil {
+ return err
+ }
+
+ L.Logger.SetLevel(lvl)
+ return nil
+}
+
+// GetLevel returns the current log level.
+func GetLevel() Level {
+ return L.Logger.GetLevel()
+}
+
+// OutputFormat specifies a log output format.
+type OutputFormat string
+
+// Supported log output formats.
+const (
+ // TextFormat represents the text logging format.
+ TextFormat OutputFormat = "text"
+
+ // JSONFormat represents the JSON logging format.
+ JSONFormat OutputFormat = "json"
+)
+
+// SetFormat sets the log output format ([TextFormat] or [JSONFormat]).
+func SetFormat(format OutputFormat) error {
+ switch format {
+ case TextFormat:
+ L.Logger.SetFormatter(&logrus.TextFormatter{
+ TimestampFormat: RFC3339NanoFixed,
+ FullTimestamp: true,
+ })
+ return nil
+ case JSONFormat:
+ L.Logger.SetFormatter(&logrus.JSONFormatter{
+ TimestampFormat: RFC3339NanoFixed,
+ })
+ return nil
+ default:
+ return fmt.Errorf("unknown log format: %s", format)
+ }
+}
+
+// WithLogger returns a new context with the provided logger. Use in
+// combination with logger.WithField(s) for great effect.
+func WithLogger(ctx context.Context, logger *Entry) context.Context {
+ return context.WithValue(ctx, loggerKey{}, logger.WithContext(ctx))
+}
+
+// GetLogger retrieves the current logger from the context. If no logger is
+// available, the default logger is returned.
+func GetLogger(ctx context.Context) *Entry {
+ if logger := ctx.Value(loggerKey{}); logger != nil {
+ return logger.(*Entry)
+ }
+ return L.WithContext(ctx)
+}
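+
+// A minimal usage sketch combining the helpers above:
+//
+//	_ = log.SetLevel("debug")
+//	_ = log.SetFormat(log.JSONFormat)
+//	ctx := log.WithLogger(context.Background(), log.L.WithField("module", "demo"))
+//	log.G(ctx).Info("configured")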
diff --git a/vendor/github.com/containerd/platforms/.gitattributes b/vendor/github.com/containerd/platforms/.gitattributes
new file mode 100644
index 000000000..a0717e4b3
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/.gitattributes
@@ -0,0 +1 @@
+*.go text eol=lf
\ No newline at end of file
diff --git a/vendor/github.com/containerd/platforms/.golangci.yml b/vendor/github.com/containerd/platforms/.golangci.yml
new file mode 100644
index 000000000..a695775df
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/.golangci.yml
@@ -0,0 +1,30 @@
+linters:
+ enable:
+ - exportloopref # Checks for pointers to enclosing loop variables
+ - gofmt
+ - goimports
+ - gosec
+ - ineffassign
+ - misspell
+ - nolintlint
+ - revive
+ - staticcheck
+ - tenv # Detects using os.Setenv instead of t.Setenv since Go 1.17
+ - unconvert
+ - unused
+ - vet
+ - dupword # Checks for duplicate words in the source code
+ disable:
+ - errcheck
+
+run:
+ timeout: 5m
+ skip-dirs:
+ - api
+ - cluster
+ - design
+ - docs
+ - docs/man
+ - releases
+ - reports
+ - test # e2e scripts
diff --git a/vendor/github.com/containerd/platforms/LICENSE b/vendor/github.com/containerd/platforms/LICENSE
new file mode 100644
index 000000000..584149b6e
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/LICENSE
@@ -0,0 +1,191 @@
+
+ Apache License
+ Version 2.0, January 2004
+ https://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ Copyright The containerd Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ https://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/containerd/platforms/README.md b/vendor/github.com/containerd/platforms/README.md
new file mode 100644
index 000000000..2059de771
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/README.md
@@ -0,0 +1,32 @@
+# platforms
+
+A Go package for formatting, normalizing and matching container platforms.
+
+This package is based on the Open Containers Image Spec definition of a [platform](https://github.com/opencontainers/image-spec/blob/main/specs-go/v1/descriptor.go#L52).
+
+## Platform Specifier
+
+While the OCI platform specifications provide a tool for components to
+specify structured information, user input typically doesn't need the full
+context and much can be inferred. To solve this problem, this package introduces
+"specifiers". A specifier has the format
+`<os>|<arch>|<os>/<arch>[/<variant>]`. The user can provide either the
+operating system or the architecture or both.
+
+An example of a common specifier is `linux/amd64`. If the host has a default
+runtime that matches this, the user can simply provide the component that
+matters. For example, if an image provides `amd64` and `arm64` support, the
+operating system, `linux`, can be inferred, so the user only has to provide
+`arm64` or `amd64`. Similar behavior is implemented for operating systems,
+where the architecture may be known but a runtime may support images from
+different operating systems.
+
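+For illustration, a minimal sketch of parsing a specifier with this
+package's `Parse` helper (error handling kept deliberately simple):
+
+```go
+package main
+
+import (
+	"fmt"
+
+	"github.com/containerd/platforms"
+)
+
+func main() {
+	// Parse normalizes the specifier into an OCI platform struct.
+	p, err := platforms.Parse("linux/arm64/v8")
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(p.OS, p.Architecture) // linux arm64
+}
+```
+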
+## Project details
+
+**platforms** is a containerd sub-project, licensed under the [Apache 2.0 license](./LICENSE).
+As a containerd sub-project, you will find the:
+ * [Project governance](https://github.com/containerd/project/blob/main/GOVERNANCE.md),
+ * [Maintainers](https://github.com/containerd/project/blob/main/MAINTAINERS),
+ * and [Contributing guidelines](https://github.com/containerd/project/blob/main/CONTRIBUTING.md)
+
+information in our [`containerd/project`](https://github.com/containerd/project) repository.
\ No newline at end of file
diff --git a/vendor/github.com/containerd/platforms/compare.go b/vendor/github.com/containerd/platforms/compare.go
new file mode 100644
index 000000000..3913ef663
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/compare.go
@@ -0,0 +1,203 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "strconv"
+ "strings"
+
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// MatchComparer is able to match and compare platforms to
+// filter and sort platforms.
+type MatchComparer interface {
+ Matcher
+
+ Less(specs.Platform, specs.Platform) bool
+}
+
+// platformVector returns an (ordered) vector of appropriate specs.Platform
+// objects to try matching for the given platform object (see platforms.Only).
+func platformVector(platform specs.Platform) []specs.Platform {
+ vector := []specs.Platform{platform}
+
+ switch platform.Architecture {
+ case "amd64":
+ if amd64Version, err := strconv.Atoi(strings.TrimPrefix(platform.Variant, "v")); err == nil && amd64Version > 1 {
+ for amd64Version--; amd64Version >= 1; amd64Version-- {
+ vector = append(vector, specs.Platform{
+ Architecture: platform.Architecture,
+ OS: platform.OS,
+ OSVersion: platform.OSVersion,
+ OSFeatures: platform.OSFeatures,
+ Variant: "v" + strconv.Itoa(amd64Version),
+ })
+ }
+ }
+ vector = append(vector, specs.Platform{
+ Architecture: "386",
+ OS: platform.OS,
+ OSVersion: platform.OSVersion,
+ OSFeatures: platform.OSFeatures,
+ })
+ case "arm":
+ if armVersion, err := strconv.Atoi(strings.TrimPrefix(platform.Variant, "v")); err == nil && armVersion > 5 {
+ for armVersion--; armVersion >= 5; armVersion-- {
+ vector = append(vector, specs.Platform{
+ Architecture: platform.Architecture,
+ OS: platform.OS,
+ OSVersion: platform.OSVersion,
+ OSFeatures: platform.OSFeatures,
+ Variant: "v" + strconv.Itoa(armVersion),
+ })
+ }
+ }
+ case "arm64":
+ variant := platform.Variant
+ if variant == "" {
+ variant = "v8"
+ }
+ vector = append(vector, platformVector(specs.Platform{
+ Architecture: "arm",
+ OS: platform.OS,
+ OSVersion: platform.OSVersion,
+ OSFeatures: platform.OSFeatures,
+ Variant: variant,
+ })...)
+ }
+
+ return vector
+}
+
+// Only returns a match comparer for a single platform
+// using default resolution logic for the platform.
+//
+// For arm/v8, will also match arm/v7, arm/v6 and arm/v5
+// For arm/v7, will also match arm/v6 and arm/v5
+// For arm/v6, will also match arm/v5
+// For amd64, will also match 386
+func Only(platform specs.Platform) MatchComparer {
+ return Ordered(platformVector(Normalize(platform))...)
+}
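+
+// For example, per the fallback rules above:
+//
+//	m := Only(specs.Platform{OS: "linux", Architecture: "arm", Variant: "v7"})
+//	_ = m.Match(specs.Platform{OS: "linux", Architecture: "arm", Variant: "v6"}) // true
+//	_ = m.Match(specs.Platform{OS: "linux", Architecture: "arm64"})              // false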
+
+// OnlyStrict returns a match comparer for a single platform.
+//
+// Unlike Only, OnlyStrict does not match sub platforms.
+// So, "arm/vN" will not match "arm/vM" where M < N,
+// and "amd64" will not also match "386".
+//
+// OnlyStrict matches non-canonical forms.
+// So, "arm64" matches "arm64/v8".
+func OnlyStrict(platform specs.Platform) MatchComparer {
+ return Ordered(Normalize(platform))
+}
+
+// Ordered returns a platform MatchComparer which matches any of the platforms
+// but orders them in the order they are provided.
+func Ordered(platforms ...specs.Platform) MatchComparer {
+ matchers := make([]Matcher, len(platforms))
+ for i := range platforms {
+ matchers[i] = NewMatcher(platforms[i])
+ }
+ return orderedPlatformComparer{
+ matchers: matchers,
+ }
+}
+
+// Any returns a platform MatchComparer which matches any of the platforms
+// with no preference for ordering.
+func Any(platforms ...specs.Platform) MatchComparer {
+ matchers := make([]Matcher, len(platforms))
+ for i := range platforms {
+ matchers[i] = NewMatcher(platforms[i])
+ }
+ return anyPlatformComparer{
+ matchers: matchers,
+ }
+}
+
+// All is a platform MatchComparer which matches all platforms
+// with preference for ordering.
+var All MatchComparer = allPlatformComparer{}
+
+type orderedPlatformComparer struct {
+ matchers []Matcher
+}
+
+func (c orderedPlatformComparer) Match(platform specs.Platform) bool {
+ for _, m := range c.matchers {
+ if m.Match(platform) {
+ return true
+ }
+ }
+ return false
+}
+
+func (c orderedPlatformComparer) Less(p1 specs.Platform, p2 specs.Platform) bool {
+ for _, m := range c.matchers {
+ p1m := m.Match(p1)
+ p2m := m.Match(p2)
+ if p1m && !p2m {
+ return true
+ }
+ if p1m || p2m {
+ return false
+ }
+ }
+ return false
+}
+
+type anyPlatformComparer struct {
+ matchers []Matcher
+}
+
+func (c anyPlatformComparer) Match(platform specs.Platform) bool {
+ for _, m := range c.matchers {
+ if m.Match(platform) {
+ return true
+ }
+ }
+ return false
+}
+
+func (c anyPlatformComparer) Less(p1, p2 specs.Platform) bool {
+ var p1m, p2m bool
+ for _, m := range c.matchers {
+ if !p1m && m.Match(p1) {
+ p1m = true
+ }
+ if !p2m && m.Match(p2) {
+ p2m = true
+ }
+ if p1m && p2m {
+ return false
+ }
+ }
+ // If one matches and the other does not, sort the match first
+ return p1m && !p2m
+}
+
+type allPlatformComparer struct{}
+
+func (allPlatformComparer) Match(specs.Platform) bool {
+ return true
+}
+
+func (allPlatformComparer) Less(specs.Platform, specs.Platform) bool {
+ return false
+}
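+
+// A minimal sketch of sorting candidate platforms with a MatchComparer
+// (candidates is a hypothetical []specs.Platform; sort is the standard
+// library package):
+//
+//	cmp := Ordered(DefaultSpec())
+//	sort.SliceStable(candidates, func(i, j int) bool {
+//		return cmp.Less(candidates[i], candidates[j])
+//	})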
diff --git a/vendor/github.com/containerd/platforms/cpuinfo.go b/vendor/github.com/containerd/platforms/cpuinfo.go
new file mode 100644
index 000000000..91f50e8c8
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/cpuinfo.go
@@ -0,0 +1,43 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "runtime"
+ "sync"
+
+ "github.com/containerd/log"
+)
+
+// cpuVariantValue holds the ARM instruction set architecture variant, e.g. v7 or v8.
+// Don't use this value directly; call cpuVariant() instead.
+var cpuVariantValue string
+
+var cpuVariantOnce sync.Once
+
+func cpuVariant() string {
+ cpuVariantOnce.Do(func() {
+ if isArmArch(runtime.GOARCH) {
+ var err error
+ cpuVariantValue, err = getCPUVariant()
+ if err != nil {
+ log.L.Errorf("Error getCPUVariant for OS %s: %v", runtime.GOOS, err)
+ }
+ }
+ })
+ return cpuVariantValue
+}
diff --git a/vendor/github.com/containerd/platforms/cpuinfo_linux.go b/vendor/github.com/containerd/platforms/cpuinfo_linux.go
new file mode 100644
index 000000000..98c7001f9
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/cpuinfo_linux.go
@@ -0,0 +1,160 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "bufio"
+ "bytes"
+ "errors"
+ "fmt"
+ "os"
+ "runtime"
+ "strings"
+
+ "golang.org/x/sys/unix"
+)
+
+// getMachineArch retrieves the machine architecture through a system call
+func getMachineArch() (string, error) {
+ var uname unix.Utsname
+ err := unix.Uname(&uname)
+ if err != nil {
+ return "", err
+ }
+
+ arch := string(uname.Machine[:bytes.IndexByte(uname.Machine[:], 0)])
+
+ return arch, nil
+}
+
+// On Linux, the kernel has already detected the ABI, ISA and features,
+// so we don't need to access the ARM registers to detect platform
+// information ourselves. We can simply parse this information from /proc/cpuinfo.
+func getCPUInfo(pattern string) (info string, err error) {
+
+ cpuinfo, err := os.Open("/proc/cpuinfo")
+ if err != nil {
+ return "", err
+ }
+ defer cpuinfo.Close()
+
+ // Parse the cpuinfo line by line. For an SMP SoC, parsing the
+ // first core is enough.
+ scanner := bufio.NewScanner(cpuinfo)
+ for scanner.Scan() {
+ newline := scanner.Text()
+ list := strings.Split(newline, ":")
+
+ if len(list) > 1 && strings.EqualFold(strings.TrimSpace(list[0]), pattern) {
+ return strings.TrimSpace(list[1]), nil
+ }
+ }
+
+ // Check whether the scanner encountered errors
+ err = scanner.Err()
+ if err != nil {
+ return "", err
+ }
+
+ return "", fmt.Errorf("getCPUInfo for pattern %s: %w", pattern, errNotFound)
+}
+
+// getCPUVariantFromArch derives the CPU variant from the machine architecture string
+func getCPUVariantFromArch(arch string) (string, error) {
+
+ var variant string
+
+ arch = strings.ToLower(arch)
+
+ if arch == "aarch64" {
+ variant = "8"
+ } else if len(arch) >= 5 && arch[0:4] == "armv" {
+ // Valid arch format is of the form armvXx; the length is checked first
+ // to avoid an out-of-range slice panic on short values.
+ switch arch[3:5] {
+ case "v8":
+ variant = "8"
+ case "v7":
+ variant = "7"
+ case "v6":
+ variant = "6"
+ case "v5":
+ variant = "5"
+ case "v4":
+ variant = "4"
+ case "v3":
+ variant = "3"
+ default:
+ variant = "unknown"
+ }
+ } else {
+ return "", fmt.Errorf("getCPUVariantFromArch invalid arch: %s, %w", arch, errInvalidArgument)
+ }
+ return variant, nil
+}
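+
+// For example, getCPUVariantFromArch("armv7l") returns "7", and
+// getCPUVariantFromArch("aarch64") returns "8".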
+
+// getCPUVariant returns the CPU variant for ARM.
+// We first try reading the "Cpu architecture" field from /proc/cpuinfo.
+// If it can't be found, we fall back to a system call; this covers running
+// ARM in an emulated environment on an x86 host, where this field is not
+// present in /proc/cpuinfo.
+func getCPUVariant() (string, error) {
+ variant, err := getCPUInfo("Cpu architecture")
+ if err != nil {
+ if errors.Is(err, errNotFound) {
+ // Let's try getting CPU variant from machine architecture
+ arch, err := getMachineArch()
+ if err != nil {
+ return "", fmt.Errorf("failure getting machine architecture: %v", err)
+ }
+
+ variant, err = getCPUVariantFromArch(arch)
+ if err != nil {
+ return "", fmt.Errorf("failure getting CPU variant from machine architecture: %v", err)
+ }
+ } else {
+ return "", fmt.Errorf("failure getting CPU variant: %v", err)
+ }
+ }
+
+ // handle edge case for Raspberry Pi ARMv6 devices (which due to a kernel quirk, report "CPU architecture: 7")
+ // https://www.raspberrypi.org/forums/viewtopic.php?t=12614
+ if runtime.GOARCH == "arm" && variant == "7" {
+ model, err := getCPUInfo("model name")
+ if err == nil && strings.HasPrefix(strings.ToLower(model), "armv6-compatible") {
+ variant = "6"
+ }
+ }
+
+ switch strings.ToLower(variant) {
+ case "8", "aarch64":
+ variant = "v8"
+ case "7", "7m", "?(12)", "?(13)", "?(14)", "?(15)", "?(16)", "?(17)":
+ variant = "v7"
+ case "6", "6tej":
+ variant = "v6"
+ case "5", "5t", "5te", "5tej":
+ variant = "v5"
+ case "4", "4t":
+ variant = "v4"
+ case "3":
+ variant = "v3"
+ default:
+ variant = "unknown"
+ }
+
+ return variant, nil
+}
diff --git a/vendor/github.com/containerd/platforms/cpuinfo_other.go b/vendor/github.com/containerd/platforms/cpuinfo_other.go
new file mode 100644
index 000000000..97a1fe8a3
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/cpuinfo_other.go
@@ -0,0 +1,55 @@
+//go:build !linux
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "fmt"
+ "runtime"
+)
+
+func getCPUVariant() (string, error) {
+
+ var variant string
+
+ if runtime.GOOS == "windows" || runtime.GOOS == "darwin" {
+ // Windows/Darwin only supports v7 for ARM32 and v8 for ARM64 and so we can use
+ // runtime.GOARCH to determine the variants
+ switch runtime.GOARCH {
+ case "arm64":
+ variant = "v8"
+ case "arm":
+ variant = "v7"
+ default:
+ variant = "unknown"
+ }
+ } else if runtime.GOOS == "freebsd" {
+ // FreeBSD supports ARMv6 and ARMv7, as well as ARMv4 and ARMv5 (though
+ // deprecated); detecting those variants is currently unimplemented.
+ switch runtime.GOARCH {
+ case "arm64":
+ variant = "v8"
+ default:
+ variant = "unknown"
+ }
+ } else {
+ return "", fmt.Errorf("getCPUVariant for OS %s: %v", runtime.GOOS, errNotImplemented)
+ }
+
+ return variant, nil
+}
diff --git a/vendor/github.com/containerd/platforms/database.go b/vendor/github.com/containerd/platforms/database.go
new file mode 100644
index 000000000..2e26fd3b4
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/database.go
@@ -0,0 +1,109 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "runtime"
+ "strings"
+)
+
+// These functions are generated from https://golang.org/src/go/build/syslist.go.
+//
+// We use switch statements because they are slightly faster than map lookups
+// and use a little less memory.
+
+// isKnownOS returns true if we know about the operating system.
+//
+// The OS value should be normalized before calling this function.
+func isKnownOS(os string) bool {
+ switch os {
+ case "aix", "android", "darwin", "dragonfly", "freebsd", "hurd", "illumos", "ios", "js", "linux", "nacl", "netbsd", "openbsd", "plan9", "solaris", "windows", "zos":
+ return true
+ }
+ return false
+}
+
+// isArmArch returns true if the architecture is ARM.
+//
+// The arch value should be normalized before being passed to this function.
+func isArmArch(arch string) bool {
+ switch arch {
+ case "arm", "arm64":
+ return true
+ }
+ return false
+}
+
+// isKnownArch returns true if we know about the architecture.
+//
+// The arch value should be normalized before being passed to this function.
+func isKnownArch(arch string) bool {
+ switch arch {
+ case "386", "amd64", "amd64p32", "arm", "armbe", "arm64", "arm64be", "ppc64", "ppc64le", "loong64", "mips", "mipsle", "mips64", "mips64le", "mips64p32", "mips64p32le", "ppc", "riscv", "riscv64", "s390", "s390x", "sparc", "sparc64", "wasm":
+ return true
+ }
+ return false
+}
+
+func normalizeOS(os string) string {
+ if os == "" {
+ return runtime.GOOS
+ }
+ os = strings.ToLower(os)
+
+ switch os {
+ case "macos":
+ os = "darwin"
+ }
+ return os
+}
+
+// normalizeArch normalizes the architecture.
+func normalizeArch(arch, variant string) (string, string) {
+ arch, variant = strings.ToLower(arch), strings.ToLower(variant)
+ switch arch {
+ case "i386":
+ arch = "386"
+ variant = ""
+ case "x86_64", "x86-64", "amd64":
+ arch = "amd64"
+ if variant == "v1" {
+ variant = ""
+ }
+ case "aarch64", "arm64":
+ arch = "arm64"
+ switch variant {
+ case "8", "v8":
+ variant = ""
+ }
+ case "armhf":
+ arch = "arm"
+ variant = "v7"
+ case "armel":
+ arch = "arm"
+ variant = "v6"
+ case "arm":
+ switch variant {
+ case "", "7":
+ variant = "v7"
+ case "5", "6", "8":
+ variant = "v" + variant
+ }
+ }
+
+ return arch, variant
+}
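+
+// For example:
+//
+//	normalizeArch("x86_64", "")    // "amd64", ""
+//	normalizeArch("armhf", "")     // "arm", "v7"
+//	normalizeArch("aarch64", "v8") // "arm64", ""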
diff --git a/vendor/github.com/containerd/platforms/defaults.go b/vendor/github.com/containerd/platforms/defaults.go
new file mode 100644
index 000000000..9d898d60e
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/defaults.go
@@ -0,0 +1,29 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+// DefaultString returns the default string specifier for the platform.
+// Since [PR#6](https://github.com/containerd/platforms/pull/6), the result
+// may also include the OSVersion from the provided platform specification.
+func DefaultString() string {
+ return FormatAll(DefaultSpec())
+}
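+
+// For example, on a Linux amd64 host DefaultString returns "linux/amd64";
+// on Windows, the OS version is included as well.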
+
+// DefaultStrict returns strict form of Default.
+func DefaultStrict() MatchComparer {
+ return OnlyStrict(DefaultSpec())
+}
diff --git a/vendor/github.com/containerd/platforms/defaults_darwin.go b/vendor/github.com/containerd/platforms/defaults_darwin.go
new file mode 100644
index 000000000..72355ca85
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/defaults_darwin.go
@@ -0,0 +1,44 @@
+//go:build darwin
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "runtime"
+
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// DefaultSpec returns the current platform's default platform specification.
+func DefaultSpec() specs.Platform {
+ return specs.Platform{
+ OS: runtime.GOOS,
+ Architecture: runtime.GOARCH,
+ // The Variant field will be empty if arch != ARM.
+ Variant: cpuVariant(),
+ }
+}
+
+// Default returns the default matcher for the platform.
+func Default() MatchComparer {
+ return Ordered(DefaultSpec(), specs.Platform{
+ // darwin runtime also supports Linux binary via runu/LKL
+ OS: "linux",
+ Architecture: runtime.GOARCH,
+ })
+}
diff --git a/vendor/github.com/containerd/platforms/defaults_freebsd.go b/vendor/github.com/containerd/platforms/defaults_freebsd.go
new file mode 100644
index 000000000..d3fe89e07
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/defaults_freebsd.go
@@ -0,0 +1,43 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "runtime"
+
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// DefaultSpec returns the current platform's default platform specification.
+func DefaultSpec() specs.Platform {
+ return specs.Platform{
+ OS: runtime.GOOS,
+ Architecture: runtime.GOARCH,
+ // The Variant field will be empty if arch != ARM.
+ Variant: cpuVariant(),
+ }
+}
+
+// Default returns the default matcher for the platform.
+func Default() MatchComparer {
+ return Ordered(DefaultSpec(), specs.Platform{
+ OS: "linux",
+ Architecture: runtime.GOARCH,
+ // The Variant field will be empty if arch != ARM.
+ Variant: cpuVariant(),
+ })
+}
diff --git a/vendor/github.com/containerd/platforms/defaults_unix.go b/vendor/github.com/containerd/platforms/defaults_unix.go
new file mode 100644
index 000000000..44acc47eb
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/defaults_unix.go
@@ -0,0 +1,40 @@
+//go:build !windows && !darwin && !freebsd
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "runtime"
+
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// DefaultSpec returns the current platform's default platform specification.
+func DefaultSpec() specs.Platform {
+ return specs.Platform{
+ OS: runtime.GOOS,
+ Architecture: runtime.GOARCH,
+ // The Variant field will be empty if arch != ARM.
+ Variant: cpuVariant(),
+ }
+}
+
+// Default returns the default matcher for the platform.
+func Default() MatchComparer {
+ return Only(DefaultSpec())
+}
diff --git a/vendor/github.com/containerd/platforms/defaults_windows.go b/vendor/github.com/containerd/platforms/defaults_windows.go
new file mode 100644
index 000000000..427ed72eb
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/defaults_windows.go
@@ -0,0 +1,118 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "fmt"
+ "runtime"
+ "strconv"
+ "strings"
+
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+ "golang.org/x/sys/windows"
+)
+
+// DefaultSpec returns the current platform's default platform specification.
+func DefaultSpec() specs.Platform {
+ major, minor, build := windows.RtlGetNtVersionNumbers()
+ return specs.Platform{
+ OS: runtime.GOOS,
+ Architecture: runtime.GOARCH,
+ OSVersion: fmt.Sprintf("%d.%d.%d", major, minor, build),
+ // The Variant field will be empty if arch != ARM.
+ Variant: cpuVariant(),
+ }
+}
+
+type windowsmatcher struct {
+ specs.Platform
+ osVersionPrefix string
+ defaultMatcher Matcher
+}
+
+// Match matches a platform with the same Windows major, minor,
+// and build version.
+func (m windowsmatcher) Match(p specs.Platform) bool {
+ match := m.defaultMatcher.Match(p)
+
+ if match && m.OS == "windows" {
+ // HPC containers do not have OS version filled
+ if m.OSVersion == "" || p.OSVersion == "" {
+ return true
+ }
+
+ hostOsVersion := getOSVersion(m.osVersionPrefix)
+ ctrOsVersion := getOSVersion(p.OSVersion)
+ return checkHostAndContainerCompat(hostOsVersion, ctrOsVersion)
+ }
+
+ return match
+}
+
+func getOSVersion(osVersionPrefix string) osVersion {
+ parts := strings.Split(osVersionPrefix, ".")
+ if len(parts) < 3 {
+ return osVersion{}
+ }
+
+ majorVersion, _ := strconv.Atoi(parts[0])
+ minorVersion, _ := strconv.Atoi(parts[1])
+ buildNumber, _ := strconv.Atoi(parts[2])
+
+ return osVersion{
+ MajorVersion: uint8(majorVersion),
+ MinorVersion: uint8(minorVersion),
+ Build: uint16(buildNumber),
+ }
+}
+
+// Less sorts matched platforms in front of other platforms.
+// For matched platforms, it puts platforms with a larger revision
+// number in front.
+func (m windowsmatcher) Less(p1, p2 specs.Platform) bool {
+ m1, m2 := m.Match(p1), m.Match(p2)
+ if m1 && m2 {
+ r1, r2 := revision(p1.OSVersion), revision(p2.OSVersion)
+ return r1 > r2
+ }
+ return m1 && !m2
+}
+
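+// revision returns the fourth (revision) component of a Windows version
+// string, or 0 if it is absent or malformed. For example (illustrative):
+// revision("10.0.17763.1234") returns 1234, while revision("10.0.17763")
+// returns 0.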
+func revision(v string) int {
+ parts := strings.Split(v, ".")
+ if len(parts) < 4 {
+ return 0
+ }
+ r, err := strconv.Atoi(parts[3])
+ if err != nil {
+ return 0
+ }
+ return r
+}
+
+func prefix(v string) string {
+ parts := strings.Split(v, ".")
+ if len(parts) < 4 {
+ return v
+ }
+ return strings.Join(parts[0:3], ".")
+}
+
+// Default returns the current platform's default platform specification.
+func Default() MatchComparer {
+ return Only(DefaultSpec())
+}
diff --git a/vendor/github.com/containerd/platforms/errors.go b/vendor/github.com/containerd/platforms/errors.go
new file mode 100644
index 000000000..5ad721e77
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/errors.go
@@ -0,0 +1,30 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import "errors"
+
+// These errors mirror the errors defined in [github.com/containerd/containerd/errdefs];
+// however, they are not exported, as they are not expected to be used as sentinel
+// errors by consumers of this package.
+//
+//nolint:unused // not all errors are used on all platforms.
+var (
+ errNotFound = errors.New("not found")
+ errInvalidArgument = errors.New("invalid argument")
+ errNotImplemented = errors.New("not implemented")
+)
diff --git a/vendor/github.com/containerd/platforms/platform_compat_windows.go b/vendor/github.com/containerd/platforms/platform_compat_windows.go
new file mode 100644
index 000000000..89e66f0c0
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/platform_compat_windows.go
@@ -0,0 +1,78 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+// osVersion is a wrapper for Windows version information
+// https://msdn.microsoft.com/en-us/library/windows/desktop/ms724439(v=vs.85).aspx
+type osVersion struct {
+ Version uint32
+ MajorVersion uint8
+ MinorVersion uint8
+ Build uint16
+}
+
+// Windows Client and Server build numbers.
+//
+// See:
+// https://learn.microsoft.com/en-us/windows/release-health/release-information
+// https://learn.microsoft.com/en-us/windows/release-health/windows-server-release-info
+// https://learn.microsoft.com/en-us/windows/release-health/windows11-release-information
+const (
+ // rs5 (version 1809, codename "Redstone 5") corresponds to Windows Server
+ // 2019 (ltsc2019), and Windows 10 (October 2018 Update).
+ rs5 = 17763
+
+ // v21H2Server corresponds to Windows Server 2022 (ltsc2022).
+ v21H2Server = 20348
+
+ // v22H2Win11 corresponds to Windows 11 (2022 Update).
+ v22H2Win11 = 22621
+)
+
+// List of stable-ABI-compliant LTSC releases.
+// Note: this list must be sorted in ascending order.
+var compatLTSCReleases = []uint16{
+ v21H2Server,
+}
+
+// checkHostAndContainerCompat reports whether the given host and container
+// OS versions are compatible.
+// It includes support for stable-ABI-compliant versions as well.
+// Every release after WS 2022 will support the previous LTSC
+// container image. Stable ABI is in preview mode for the Windows 11 client.
+// Refer: https://learn.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-2022%2Cwindows-10#windows-server-host-os-compatibility
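+//
+// For example (illustrative): a host on build 22621 can run a container
+// image on build 20348 (ltsc2022), while a host on build 17763 requires an
+// exact build match.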
+func checkHostAndContainerCompat(host, ctr osVersion) bool {
+ // check major minor versions of host and guest
+ if host.MajorVersion != ctr.MajorVersion ||
+ host.MinorVersion != ctr.MinorVersion {
+ return false
+ }
+
+ // If host is < WS 2022, exact version match is required
+ if host.Build < v21H2Server {
+ return host.Build == ctr.Build
+ }
+
+ var supportedLtscRelease uint16
+ for i := len(compatLTSCReleases) - 1; i >= 0; i-- {
+ if host.Build >= compatLTSCReleases[i] {
+ supportedLtscRelease = compatLTSCReleases[i]
+ break
+ }
+ }
+ return ctr.Build >= supportedLtscRelease && ctr.Build <= host.Build
+}
diff --git a/vendor/github.com/containerd/platforms/platforms.go b/vendor/github.com/containerd/platforms/platforms.go
new file mode 100644
index 000000000..1bbbdb91d
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/platforms.go
@@ -0,0 +1,308 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+// Package platforms provides a toolkit for normalizing, matching and
+// specifying container platforms.
+//
+// Centered around OCI platform specifications, we define a string-based
+// specifier syntax that can be used for user input. With a specifier, users
+// only need to specify the parts of the platform that are relevant to their
+// context, providing an operating system or architecture or both.
+//
+// # How do I use this package?
+//
+// The vast majority of use cases should simply use the match function with
+// user input. The first step is to parse a specifier into a matcher:
+//
+// m, err := Parse("linux")
+// if err != nil { ... }
+//
+// Once you have a matcher, use it to match against the platform declared by a
+// component, typically from an image or runtime. Since extracting an image's
+// platform is a little more involved, we'll use an example against the
+// platform default:
+//
+// if ok := m.Match(Default()); !ok { /* doesn't match */ }
+//
+// This can be composed in loops for resolving runtimes or used as a filter
+// for fetching and selecting images.
+//
+// More details of the specifier syntax and platform spec follow.
+//
+// # Declaring Platform Support
+//
+// Components that have strict platform requirements should use the OCI
+// platform specification to declare their support. Typically, these will be
+// images and runtimes, which should declare specifically which platforms
+// they support. This looks roughly as follows:
+//
+// type Platform struct {
+// Architecture string
+// OS string
+// Variant string
+// }
+//
+// Most images and runtimes should at least set Architecture and OS, according
+// to their GOARCH and GOOS values, respectively (follow the OCI image
+// specification when in doubt). ARM should set the variant under certain
+// conditions, which are outlined below.
+//
+// # Platform Specifiers
+//
+// While the OCI platform specifications provide a tool for components to
+// specify structured information, user input typically doesn't need the full
+// context and much can be inferred. To solve this problem, we introduced
+// "specifiers". A specifier has the format
+// `<os>|<arch>|<os>/<arch>[/<variant>]`. The user can provide either the
+// operating system or the architecture or both.
+//
+// An example of a common specifier is `linux/amd64`. If the host has a default
+// runtime that matches this, the user can simply provide the component that
+// matters. For example, if an image provides amd64 and arm64 support, the
+// operating system, `linux` can be inferred, so they only have to provide
+// `arm64` or `amd64`. Similar behavior is implemented for operating systems,
+// where the architecture may be known but a runtime may support images from
+// different operating systems.
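+//
+// For example (illustrative; the inferred component depends on the host):
+//
+// Parse("arm64") // on a linux host: linux/arm64
+// Parse("windows") // on an amd64 host: windows/amd64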
+//
+// # Normalization
+//
+// Because not all users are familiar with the way the Go runtime represents
+// platforms, several normalizations have been provided to make this package
+// easier to use.
+//
+// The following are performed for architectures:
+//
+// Value Normalized
+// aarch64 arm64
+// armhf arm
+// armel arm/v6
+// i386 386
+// x86_64 amd64
+// x86-64 amd64
+//
+// We also normalize the operating system `macos` to `darwin`.
+//
+// # ARM Support
+//
+// For the ARM architecture, the Variant field is used to qualify the arm
+// version. The most common arm version, v7, is represented without the variant
+// unless it is explicitly provided. This is treated as equivalent to armhf. A
+// previous architecture, armel, will be normalized to arm/v6.
+//
+// Similarly, the most common arm64 version v8, and most common amd64 version v1
+// are represented without the variant.
+//
+// While these normalizations are provided, their support on arm platforms has
+// not yet been fully implemented and tested.
+package platforms
+
+import (
+ "fmt"
+ "path"
+ "regexp"
+ "runtime"
+ "strconv"
+ "strings"
+
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+var (
+ specifierRe = regexp.MustCompile(`^[A-Za-z0-9_-]+$`)
+ osAndVersionRe = regexp.MustCompile(`^([A-Za-z0-9_-]+)(?:\(([A-Za-z0-9_.-]*)\))?$`)
+)
+
+const osAndVersionFormat = "%s(%s)"
+
+// Platform is a type alias for convenience, so there is no need to import the image-spec package everywhere.
+type Platform = specs.Platform
+
+// Matcher matches platform specifications, provided by an image or runtime.
+type Matcher interface {
+ Match(platform specs.Platform) bool
+}
+
+// NewMatcher returns a simple matcher based on the provided platform
+// specification. The returned matcher only looks for equality based on os,
+// architecture and variant.
+//
+// One may implement their own matcher if this doesn't provide the required
+// functionality.
+//
+// Applications should opt to use `Match` over directly parsing specifiers.
+func NewMatcher(platform specs.Platform) Matcher {
+ return newDefaultMatcher(platform)
+}
+
+type matcher struct {
+ specs.Platform
+}
+
+func (m *matcher) Match(platform specs.Platform) bool {
+ normalized := Normalize(platform)
+ return m.OS == normalized.OS &&
+ m.Architecture == normalized.Architecture &&
+ m.Variant == normalized.Variant
+}
+
+func (m *matcher) String() string {
+ return FormatAll(m.Platform)
+}
+
+// ParseAll parses a list of platform specifiers into a list of platforms.
+func ParseAll(specifiers []string) ([]specs.Platform, error) {
+ platforms := make([]specs.Platform, len(specifiers))
+ for i, s := range specifiers {
+ p, err := Parse(s)
+ if err != nil {
+ return nil, fmt.Errorf("invalid platform %s: %w", s, err)
+ }
+ platforms[i] = p
+ }
+ return platforms, nil
+}
+
+// Parse parses the platform specifier syntax into a platform declaration.
+//
+// Platform specifiers are in the format `<os>[(<OSVersion>)]|<arch>|<os>[(<OSVersion>)]/<arch>[/<variant>]`.
+// The minimum required information for a platform specifier is the operating
+// system or architecture. The OSVersion can be part of the OS, like `windows(10.0.17763)`.
+// When an OSVersion is specified, specs.Platform.OSVersion is populated with that value;
+// otherwise it is left as an empty string.
+// If there is only a single string (no slashes), the
+// value will be matched against the known set of operating systems, then fall
+// back to the known set of architectures. The missing component will be
+// inferred based on the local environment.
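+//
+// For example (illustrative):
+//
+// Parse("linux/amd64") // OS: linux, Architecture: amd64
+// Parse("windows(10.0.17763)/amd64") // OS: windows, OSVersion: 10.0.17763, Architecture: amd64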
+func Parse(specifier string) (specs.Platform, error) {
+ if strings.Contains(specifier, "*") {
+ // TODO(stevvooe): need to work out exact wildcard handling
+ return specs.Platform{}, fmt.Errorf("%q: wildcards not yet supported: %w", specifier, errInvalidArgument)
+ }
+
+ // Limit to 4 elements to prevent unbounded split
+ parts := strings.SplitN(specifier, "/", 4)
+
+ var p specs.Platform
+ for i, part := range parts {
+ if i == 0 {
+ // First element is [()]
+ osVer := osAndVersionRe.FindStringSubmatch(part)
+ if osVer == nil {
+ return specs.Platform{}, fmt.Errorf("%q is an invalid OS component of %q: OSAndVersion specifier component must match %q: %w", part, specifier, osAndVersionRe.String(), errInvalidArgument)
+ }
+
+ p.OS = normalizeOS(osVer[1])
+ p.OSVersion = osVer[2]
+ } else {
+ if !specifierRe.MatchString(part) {
+ return specs.Platform{}, fmt.Errorf("%q is an invalid component of %q: platform specifier component must match %q: %w", part, specifier, specifierRe.String(), errInvalidArgument)
+ }
+ }
+ }
+
+ switch len(parts) {
+ case 1:
+ // in this case, we will test that the value might be an OS (with or
+ // without the optional OSVersion specified) and look it up.
+ // If it is not known, we'll treat it as an architecture. Since
+ // we have very little information about the platform here, we are
+ // going to be a little more strict if we don't know about the argument
+ // value.
+ if isKnownOS(p.OS) {
+ // picks a default architecture
+ p.Architecture = runtime.GOARCH
+ if p.Architecture == "arm" && cpuVariant() != "v7" {
+ p.Variant = cpuVariant()
+ }
+
+ return p, nil
+ }
+
+ p.Architecture, p.Variant = normalizeArch(parts[0], "")
+ if p.Architecture == "arm" && p.Variant == "v7" {
+ p.Variant = ""
+ }
+ if isKnownArch(p.Architecture) {
+ p.OS = runtime.GOOS
+ return p, nil
+ }
+
+ return specs.Platform{}, fmt.Errorf("%q: unknown operating system or architecture: %w", specifier, errInvalidArgument)
+ case 2:
+ // In this case, we treat as a regular OS[(OSVersion)]/arch pair. We don't care
+ // about whether or not we know of the platform.
+ p.Architecture, p.Variant = normalizeArch(parts[1], "")
+ if p.Architecture == "arm" && p.Variant == "v7" {
+ p.Variant = ""
+ }
+
+ return p, nil
+ case 3:
+ // we have a fully specified variant, this is rare
+ p.Architecture, p.Variant = normalizeArch(parts[1], parts[2])
+ if p.Architecture == "arm64" && p.Variant == "" {
+ p.Variant = "v8"
+ }
+
+ return p, nil
+ }
+
+ return specs.Platform{}, fmt.Errorf("%q: cannot parse platform specifier: %w", specifier, errInvalidArgument)
+}
+
+// MustParse is like Parse but panics if the specifier cannot be parsed.
+// It simplifies initialization of global variables.
+func MustParse(specifier string) specs.Platform {
+ p, err := Parse(specifier)
+ if err != nil {
+ panic("platform: Parse(" + strconv.Quote(specifier) + "): " + err.Error())
+ }
+ return p
+}
+
+// Format returns a string specifier from the provided platform specification.
+func Format(platform specs.Platform) string {
+ if platform.OS == "" {
+ return "unknown"
+ }
+
+ return path.Join(platform.OS, platform.Architecture, platform.Variant)
+}
+
+// FormatAll returns a string specifier that also includes the OSVersion from the
+// provided platform specification.
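+//
+// For example (illustrative):
+//
+// FormatAll(Platform{OS: "windows", OSVersion: "10.0.17763", Architecture: "amd64"})
+// // returns "windows(10.0.17763)/amd64"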
+func FormatAll(platform specs.Platform) string {
+ if platform.OS == "" {
+ return "unknown"
+ }
+
+ if platform.OSVersion != "" {
+ OSAndVersion := fmt.Sprintf(osAndVersionFormat, platform.OS, platform.OSVersion)
+ return path.Join(OSAndVersion, platform.Architecture, platform.Variant)
+ }
+ return path.Join(platform.OS, platform.Architecture, platform.Variant)
+}
+
+// Normalize validates and translates the platform to the canonical value.
+//
+// For example, if "Aarch64" is encountered, we change it to "arm64" or if
+// "x86_64" is encountered, it becomes "amd64".
+func Normalize(platform specs.Platform) specs.Platform {
+ platform.OS = normalizeOS(platform.OS)
+ platform.Architecture, platform.Variant = normalizeArch(platform.Architecture, platform.Variant)
+
+ return platform
+}
diff --git a/vendor/github.com/containerd/platforms/platforms_other.go b/vendor/github.com/containerd/platforms/platforms_other.go
new file mode 100644
index 000000000..03f4dcd99
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/platforms_other.go
@@ -0,0 +1,30 @@
+//go:build !windows
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// newDefaultMatcher returns the default Matcher for containerd.
+func newDefaultMatcher(platform specs.Platform) Matcher {
+ return &matcher{
+ Platform: Normalize(platform),
+ }
+}
diff --git a/vendor/github.com/containerd/platforms/platforms_windows.go b/vendor/github.com/containerd/platforms/platforms_windows.go
new file mode 100644
index 000000000..950e2a2dd
--- /dev/null
+++ b/vendor/github.com/containerd/platforms/platforms_windows.go
@@ -0,0 +1,34 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// newDefaultMatcher returns a Windows matcher that will match on osVersionPrefix if
+// the platform is Windows; otherwise it uses the default matcher.
+func newDefaultMatcher(platform specs.Platform) Matcher {
+ prefix := prefix(platform.OSVersion)
+ return windowsmatcher{
+ Platform: platform,
+ osVersionPrefix: prefix,
+ defaultMatcher: &matcher{
+ Platform: Normalize(platform),
+ },
+ }
+}
diff --git a/vendor/github.com/cyphar/filepath-securejoin/CHANGELOG.md b/vendor/github.com/cyphar/filepath-securejoin/CHANGELOG.md
new file mode 100644
index 000000000..04b5685ab
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/CHANGELOG.md
@@ -0,0 +1,178 @@
+# Changelog #
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](http://keepachangelog.com/)
+and this project adheres to [Semantic Versioning](http://semver.org/).
+
+## [Unreleased] ##
+
+## [0.3.4] - 2024-10-09 ##
+
+### Fixed ###
+- Previously, some testing mocks we had resulted in us doing `import "testing"`
+ in non-`_test.go` code, which made some downstreams like Kubernetes unhappy.
+ This has been fixed. (#32)
+
+## [0.3.3] - 2024-09-30 ##
+
+### Fixed ###
+- The mode and owner verification logic in `MkdirAll` has been removed. This
+ was originally intended to protect against some theoretical attacks but upon
+ further consideration these protections don't actually buy us anything and
+ they were causing spurious errors with more complicated filesystem setups.
+- The "is the created directory empty" logic in `MkdirAll` has also been
+ removed. This was not causing us issues yet, but some pseudofilesystems (such
+ as `cgroup`) create non-empty directories and so this logic would've been
+ wrong for such cases.
+
+## [0.3.2] - 2024-09-13 ##
+
+### Changed ###
+- Passing the `S_ISUID` or `S_ISGID` modes to `MkdirAllInRoot` will now return
+ an explicit error saying that those bits are ignored by `mkdirat(2)`. In the
+ past a different error was returned, but since the silent ignoring behaviour
+ is codified in the man pages a more explicit error seems apt. While silently
+ ignoring these bits would be the most compatible option, it could lead to
+ users thinking their code sets these bits when it doesn't. Programs that need
+ to deal with compatibility can mask the bits themselves. (#23, #25)
+
+### Fixed ###
+- If a directory has `S_ISGID` set, then all child directories will have
+ `S_ISGID` set when created and a different gid will be used for any inode
+ created under the directory. Previously, the "expected owner and mode"
+ validation in `securejoin.MkdirAll` did not correctly handle this. We now
+ correctly handle this case. (#24, #25)
+
+## [0.3.1] - 2024-07-23 ##
+
+### Changed ###
+- By allowing `Open(at)InRoot` to opt-out of the extra work done by `MkdirAll`
+ to do the necessary "partial lookups", `Open(at)InRoot` now does less work
+ for both implementations (resulting in a many-fold decrease in the number of
+ operations for `openat2`, and a modest improvement for non-`openat2`) and
+ far more reliably matches the correct `openat2(RESOLVE_IN_ROOT)`
+ behaviour.
+- We now use `readlinkat(fd, "")` where possible. For `Open(at)InRoot` this
+ effectively just means that we no longer risk getting spurious errors during
+ rename races. However, for our hardened procfs handler, this in theory should
+ prevent mount attacks from tricking us when doing magic-link readlinks (even
+ when using the unsafe host `/proc` handle). Unfortunately `Reopen` is still
+ potentially vulnerable to those kinds of somewhat-esoteric attacks.
+
+ Technically this [will only work on post-2.6.39 kernels][linux-readlinkat-emptypath]
+ but it seems incredibly unlikely anyone is using `filepath-securejoin` on a
+ pre-2011 kernel.
+
+### Fixed ###
+- Several improvements were made to the errors returned by `Open(at)InRoot` and
+ `MkdirAll` when dealing with invalid paths under the emulated (ie.
+ non-`openat2`) implementation. Previously, some paths would return the wrong
+ error (`ENOENT` when the last component was a non-directory), and other paths
+ would be returned as though they were acceptable (trailing-slash components
+ after a non-directory would be ignored by `Open(at)InRoot`).
+
+ These changes were done to match `openat2`'s behaviour and are purely a
+ consistency fix (most users are going to be using `openat2` anyway).
+
+[linux-readlinkat-emptypath]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=65cfc6722361570bfe255698d9cd4dccaf47570d
+
+## [0.3.0] - 2024-07-11 ##
+
+### Added ###
+- A new set of `*os.File`-based APIs have been added. These are adapted from
+ [libpathrs][] and we strongly suggest using them if possible (as they provide
+ far more protection against attacks than `SecureJoin`):
+
+ - `Open(at)InRoot` resolves a path inside a rootfs and returns an `*os.File`
+ handle to the path. Note that the handle returned is an `O_PATH` handle,
+ which cannot be used for reading or writing (as well as some other
+ operations -- [see open(2) for more details][open.2])
+
+ - `Reopen` takes an `O_PATH` file handle and safely re-opens it to upgrade
+ it to a regular handle. This can also be used with non-`O_PATH` handles,
+ but `O_PATH` is the most obvious application.
+
+ - `MkdirAll` is an implementation of `os.MkdirAll` that is safe to use to
+ create a directory tree within a rootfs.
+
+ As these are new APIs, they may change in the future. However, they should be
+ safe to start migrating to as we have extensive tests ensuring they behave
+ correctly and are safe against various races and other attacks.
+
+[libpathrs]: https://github.com/openSUSE/libpathrs
+[open.2]: https://www.man7.org/linux/man-pages/man2/open.2.html
+
+## [0.2.5] - 2024-05-03 ##
+
+### Changed ###
+- Some minor changes were made to how lexical components (like `..` and `.`)
+ are handled during path generation in `SecureJoin`. There is no behaviour
+ change as a result of this fix (the resulting paths are the same).
+
+### Fixed ###
+- The error returned when we hit a symlink loop now references the correct
+ path. (#10)
+
+## [0.2.4] - 2023-09-06 ##
+
+### Security ###
+- This release fixes a potential security issue in filepath-securejoin when
+ used on Windows ([GHSA-6xv5-86q9-7xr8][], which could be used to generate
+ paths outside of the provided rootfs in certain cases), as well as improving
+ the overall behaviour of filepath-securejoin when dealing with Windows paths
+ that contain volume names. Thanks to Paulo Gomes for discovering and fixing
+ these issues.
+
+### Fixed ###
+- Switch to GitHub Actions for CI so we can test on Windows as well as Linux
+ and macOS.
+
+[GHSA-6xv5-86q9-7xr8]: https://github.com/advisories/GHSA-6xv5-86q9-7xr8
+
+## [0.2.3] - 2021-06-04 ##
+
+### Changed ###
+- Switch to Go 1.13-style `%w` error wrapping, letting us drop the dependency
+ on `github.com/pkg/errors`.
+
+## [0.2.2] - 2018-09-05 ##
+
+### Changed ###
+- Use `syscall.ELOOP` as the base error for symlink loops, rather than our own
+ (internal) error. This allows callers to more easily use `errors.Is` to check
+ for this case.
+
+## [0.2.1] - 2018-09-05 ##
+
+### Fixed ###
+- Use our own `IsNotExist` implementation, which lets us handle `ENOTDIR`
+ properly within `SecureJoin`.
+
+## [0.2.0] - 2017-07-19 ##
+
+We now have 100% test coverage!
+
+### Added ###
+- Add a `SecureJoinVFS` API that can be used for mocking (as we do in our new
+ tests) or for implementing custom handling of lookup operations (such as for
+ rootless containers, where work is necessary to access directories with weird
+ modes because we don't have `CAP_DAC_READ_SEARCH` or `CAP_DAC_OVERRIDE`).
+
+## 0.1.0 - 2017-07-19
+
+This is our first release of `github.com/cyphar/filepath-securejoin`,
+containing a full implementation with a coverage of 93.5% (the only missing
+cases are the error cases, which are hard to mocktest at the moment).
+
+[Unreleased]: https://github.com/cyphar/filepath-securejoin/compare/v0.3.4...HEAD
+[0.3.4]: https://github.com/cyphar/filepath-securejoin/compare/v0.3.3...v0.3.4
+[0.3.3]: https://github.com/cyphar/filepath-securejoin/compare/v0.3.2...v0.3.3
+[0.3.2]: https://github.com/cyphar/filepath-securejoin/compare/v0.3.1...v0.3.2
+[0.3.1]: https://github.com/cyphar/filepath-securejoin/compare/v0.3.0...v0.3.1
+[0.3.0]: https://github.com/cyphar/filepath-securejoin/compare/v0.2.5...v0.3.0
+[0.2.5]: https://github.com/cyphar/filepath-securejoin/compare/v0.2.4...v0.2.5
+[0.2.4]: https://github.com/cyphar/filepath-securejoin/compare/v0.2.3...v0.2.4
+[0.2.3]: https://github.com/cyphar/filepath-securejoin/compare/v0.2.2...v0.2.3
+[0.2.2]: https://github.com/cyphar/filepath-securejoin/compare/v0.2.1...v0.2.2
+[0.2.1]: https://github.com/cyphar/filepath-securejoin/compare/v0.2.0...v0.2.1
+[0.2.0]: https://github.com/cyphar/filepath-securejoin/compare/v0.1.0...v0.2.0
diff --git a/vendor/github.com/cyphar/filepath-securejoin/LICENSE b/vendor/github.com/cyphar/filepath-securejoin/LICENSE
new file mode 100644
index 000000000..cb1ab88da
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/LICENSE
@@ -0,0 +1,28 @@
+Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved.
+Copyright (C) 2017-2024 SUSE LLC. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/cyphar/filepath-securejoin/README.md b/vendor/github.com/cyphar/filepath-securejoin/README.md
new file mode 100644
index 000000000..eaeb53fcd
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/README.md
@@ -0,0 +1,169 @@
+## `filepath-securejoin` ##
+
+[![Go Documentation](https://pkg.go.dev/badge/github.com/cyphar/filepath-securejoin.svg)](https://pkg.go.dev/github.com/cyphar/filepath-securejoin)
+[![Build Status](https://github.com/cyphar/filepath-securejoin/actions/workflows/ci.yml/badge.svg)](https://github.com/cyphar/filepath-securejoin/actions/workflows/ci.yml)
+
+### Old API ###
+
+This library was originally just an implementation of `SecureJoin` which was
+[intended to be included in the Go standard library][go#20126] as a safer
+`filepath.Join` that would restrict the path lookup to be inside a root
+directory.
+
+The implementation was based on code that existed in several container
+runtimes. Unfortunately, this API is **fundamentally unsafe** against attackers
+that can modify path components after `SecureJoin` returns and before the
+caller uses the path, allowing for some fairly trivial TOCTOU attacks.
+
+`SecureJoin` (and `SecureJoinVFS`) are still provided by this library to
+support legacy users, but new users are strongly suggested to avoid using
+`SecureJoin` and instead use the [new api](#new-api) or switch to
+[libpathrs][libpathrs].
+
+With the above limitations in mind, this library guarantees the following:
+
+* If no error is set, the resulting string **must** be a child path of
+ `root` and will not contain any symlink path components (they will all be
+ expanded).
+
+* When expanding symlinks, all symlink path components **must** be resolved
+ relative to the provided root. In particular, this can be considered a
+ userspace implementation of how `chroot(2)` operates on file paths. Note that
+ these symlinks will **not** be expanded lexically (`filepath.Clean` is not
+ called on the input before processing).
+
+* Non-existent path components are unaffected by `SecureJoin` (similar to
+ `filepath.EvalSymlinks`'s semantics).
+
+* The returned path will always be `filepath.Clean`ed and thus not contain any
+ `..` components.
+
+A (trivial) implementation of this function on GNU/Linux systems could be done
+with the following (note that this requires root privileges and is far more
+opaque than the implementation in this library, and also requires that
+`readlink` is inside the `root` path and is trustworthy):
+
+```go
+package securejoin
+
+import (
+ "os/exec"
+ "path/filepath"
+)
+
+func SecureJoin(root, unsafePath string) (string, error) {
+ unsafePath = string(filepath.Separator) + unsafePath
+ cmd := exec.Command("chroot", root,
+ "readlink", "--canonicalize-missing", "--no-newline", unsafePath)
+ output, err := cmd.CombinedOutput()
+ if err != nil {
+ return "", err
+ }
+ expanded := string(output)
+ return filepath.Join(root, expanded), nil
+}
+```
+
+[libpathrs]: https://github.com/openSUSE/libpathrs
+[go#20126]: https://github.com/golang/go/issues/20126
+
+### New API ###
+
+While we recommend users switch to [libpathrs][libpathrs] as soon as it has a
+stable release, some methods implemented by libpathrs have been ported to this
+library to ease the transition. These APIs are only supported on Linux.
+
+These APIs are implemented such that `filepath-securejoin` will
+opportunistically use certain newer kernel APIs that make these operations far
+more secure. In particular:
+
+* All of the lookup operations will use [`openat2`][openat2.2] on new enough
+ kernels (Linux 5.6 or later) to restrict lookups through magic-links and
+ bind-mounts (for certain operations) and to make use of `RESOLVE_IN_ROOT` to
+ efficiently resolve symlinks within a rootfs.
+
+* The APIs provide hardening against a malicious `/proc` mount to either detect
+ or avoid being tricked by a `/proc` that is not legitimate. This is done
+ using [`openat2`][openat2.2] for all users, and privileged users will also be
+ further protected by using [`fsopen`][fsopen.2] and [`open_tree`][open_tree.2]
+ (Linux 5.2 or later).
+
+[openat2.2]: https://www.man7.org/linux/man-pages/man2/openat2.2.html
+[fsopen.2]: https://github.com/brauner/man-pages-md/blob/main/fsopen.md
+[open_tree.2]: https://github.com/brauner/man-pages-md/blob/main/open_tree.md
+
+#### `OpenInRoot` ####
+
+```go
+func OpenInRoot(root, unsafePath string) (*os.File, error)
+func OpenatInRoot(root *os.File, unsafePath string) (*os.File, error)
+func Reopen(handle *os.File, flags int) (*os.File, error)
+```
+
+`OpenInRoot` is a much safer version of
+
+```go
+path, err := securejoin.SecureJoin(root, unsafePath)
+file, err := os.OpenFile(path, unix.O_PATH|unix.O_CLOEXEC, 0)
+```
+
+that protects against various race attacks that could lead to serious security
+issues, depending on the application. Note that the returned `*os.File` is an
+`O_PATH` file descriptor, which is quite restricted. Callers will probably need
+to use `Reopen` to get a more usable handle (this split is done to provide
+useful features like PTY spawning and to avoid users accidentally opening bad
+inodes that could cause a DoS).
+
+Callers need to be careful in how they use the returned `*os.File`. Usually it
+is only safe to operate on the handle directly, and it is very easy to create a
+security issue. [libpathrs][libpathrs] provides far more helpers to make using
+these handles safer -- there is currently no plan to port them to
+`filepath-securejoin`.
+
+`OpenatInRoot` is like `OpenInRoot` except that the root is provided using an
+`*os.File`. This allows you to ensure that multiple `OpenatInRoot` (or
+`MkdirAllHandle`) calls are operating on the same rootfs.
+
+> **NOTE**: Unlike `SecureJoin`, `OpenInRoot` will error out as soon as it hits
+> a dangling symlink or non-existent path. This is in contrast to `SecureJoin`
+> which treated non-existent components as though they were real directories,
+> and would allow for partial resolution of dangling symlinks. These behaviours
+> are at odds with how Linux treats non-existent paths and dangling symlinks,
+> and so these are no longer allowed.
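+
+As a sketch of how these pieces fit together (illustrative only: the rootfs
+path is a placeholder and error handling is elided):
+
+```go
+// Resolve the path inside the rootfs, getting a restricted O_PATH handle.
+handle, err := securejoin.OpenInRoot("/srv/rootfs", "etc/passwd")
+if err != nil {
+ return err
+}
+defer handle.Close()
+
+// Upgrade the O_PATH handle to one that can actually be read from.
+file, err := securejoin.Reopen(handle, unix.O_RDONLY|unix.O_CLOEXEC)
+if err != nil {
+ return err
+}
+defer file.Close()
+```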
+
+#### `MkdirAll` ####
+
+```go
+func MkdirAll(root, unsafePath string, mode int) error
+func MkdirAllHandle(root *os.File, unsafePath string, mode int) (*os.File, error)
+```
+
+`MkdirAll` is a much safer version of
+
+```go
+path, err := securejoin.SecureJoin(root, unsafePath)
+err = os.MkdirAll(path, mode)
+```
+
+that protects against the same kinds of races that `OpenInRoot` protects
+against.
+
+`MkdirAllHandle` is like `MkdirAll` except that the root is provided using an
+`*os.File` (the reason for this is the same as with `OpenatInRoot`) and an
+`*os.File` of the final created directory is returned (this directory is
+guaranteed to be effectively identical to the directory created by
+`MkdirAllHandle`, which is not possible to ensure by just using `OpenatInRoot`
+after `MkdirAll`).
+
+> **NOTE**: Unlike `SecureJoin`, `MkdirAll` will error out as soon as it hits
+> a dangling symlink or non-existent path. This is in contrast to `SecureJoin`
+> which treated non-existent components as though they were real directories,
+> and would allow for partial resolution of dangling symlinks. These behaviours
+> are at odds with how Linux treats non-existent paths and dangling symlinks,
+> and so these are no longer allowed. This means that `MkdirAll` will not
+> create non-existent directories referenced by a dangling symlink.
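+
+A corresponding sketch (illustrative only: `rootDir` is a placeholder
+`*os.File` for the rootfs and error handling is elided):
+
+```go
+// Safely create the directory tree inside the rootfs and keep a handle to
+// the final directory.
+dir, err := securejoin.MkdirAllHandle(rootDir, "var/lib/app/cache", 0o755)
+if err != nil {
+ return err
+}
+defer dir.Close()
+```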
+
+### License ###
+
+The license of this project is the same as Go, which is a BSD 3-clause license
+available in the `LICENSE` file.
diff --git a/vendor/github.com/cyphar/filepath-securejoin/VERSION b/vendor/github.com/cyphar/filepath-securejoin/VERSION
new file mode 100644
index 000000000..42045acae
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/VERSION
@@ -0,0 +1 @@
+0.3.4
diff --git a/vendor/github.com/cyphar/filepath-securejoin/doc.go b/vendor/github.com/cyphar/filepath-securejoin/doc.go
new file mode 100644
index 000000000..1ec7d065e
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/doc.go
@@ -0,0 +1,39 @@
+// Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved.
+// Copyright (C) 2017-2024 SUSE LLC. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package securejoin implements a set of helpers to make it easier to write Go
+// code that is safe against symlink-related escape attacks. The primary idea
+// is to let you resolve a path within a rootfs directory as if the rootfs was
+// a chroot.
+//
+// securejoin has two APIs, a "legacy" API and a "modern" API.
+//
+// The legacy API is [SecureJoin] and [SecureJoinVFS]. These methods are
+// **not** safe against race conditions where an attacker changes the
+// filesystem after (or during) the [SecureJoin] operation.
+//
+// The new API is made up of [OpenInRoot] and [MkdirAll] (and derived
+// functions). These are safe against racing attackers and have several other
+// protections that are not provided by the legacy API. There are many more
+// operations that most programs expect to be able to do safely, but we do not
+// provide explicit support for them because we want to encourage users to
+// switch to [libpathrs](https://github.com/openSUSE/libpathrs) which is a
+// cross-language next-generation library that is entirely designed around
+// operating on paths safely.
+//
+// securejoin has been used by several container runtimes (Docker, runc,
+// Kubernetes, etc) for quite a few years as a de-facto standard for operating
+// on container filesystem paths "safely". However, most users still use the
+// legacy API which is unsafe against various attacks (there is a fairly long
+// history of CVEs in dependent projects as a result). Users should switch to the modern
+// API as soon as possible (or even better, switch to libpathrs).
+//
+// This project was initially intended to be included in the Go standard
+// library, but [it was rejected](https://go.dev/issue/20126). There is now a
+// [new Go proposal](https://go.dev/issue/67002) for a safe path resolution API
+// that shares some of the goals of filepath-securejoin. However, that design
+// is intended to work like `openat2(RESOLVE_BENEATH)`, which does not fit the
+// use case of container runtimes and most system tools.
+package securejoin
diff --git a/vendor/github.com/cyphar/filepath-securejoin/join.go b/vendor/github.com/cyphar/filepath-securejoin/join.go
new file mode 100644
index 000000000..e0ee3f2b5
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/join.go
@@ -0,0 +1,125 @@
+// Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved.
+// Copyright (C) 2017-2024 SUSE LLC. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package securejoin
+
+import (
+ "errors"
+ "os"
+ "path/filepath"
+ "strings"
+ "syscall"
+)
+
+const maxSymlinkLimit = 255
+
+// IsNotExist tells you if err is an error that implies that either the path
+// accessed does not exist (or path components don't exist). This is
+// effectively a broader version of [os.IsNotExist].
+func IsNotExist(err error) bool {
+ // Check that it's not actually an ENOTDIR, which in some cases is a more
+ // convoluted case of ENOENT (usually involving weird paths).
+ return errors.Is(err, os.ErrNotExist) || errors.Is(err, syscall.ENOTDIR) || errors.Is(err, syscall.ENOENT)
+}
+
+// SecureJoinVFS joins the two given path components (similar to [filepath.Join]) except
+// that the returned path is guaranteed to be scoped inside the provided root
+// path (when evaluated). Any symbolic links in the path are evaluated with the
+// given root treated as the root of the filesystem, similar to a chroot. The
+// filesystem state is evaluated through the given [VFS] interface (if nil, the
+// standard [os].* family of functions are used).
+//
+// Note that the guarantees provided by this function only apply if the path
+// components in the returned string are not modified (in other words are not
+// replaced with symlinks on the filesystem) after this function has returned.
+// Such a symlink race is necessarily out-of-scope of SecureJoinVFS.
+//
+// NOTE: Due to the above limitation, Linux users are strongly encouraged to
+// use [OpenInRoot] instead, which does safely protect against these kinds of
+// attacks. There is no way to solve this problem with SecureJoinVFS because
+// the API is fundamentally wrong (you cannot return a "safe" path string and
+// guarantee it won't be modified afterwards).
+//
+// Volume names in unsafePath are always discarded, regardless of whether they are
+// provided via direct input or when evaluating symlinks. Therefore:
+//
+// "C:\Temp" + "D:\path\to\file.txt" results in "C:\Temp\path\to\file.txt"
+func SecureJoinVFS(root, unsafePath string, vfs VFS) (string, error) {
+ // Use the os.* VFS implementation if none was specified.
+ if vfs == nil {
+ vfs = osVFS{}
+ }
+
+ unsafePath = filepath.FromSlash(unsafePath)
+ var (
+ currentPath string
+ remainingPath = unsafePath
+ linksWalked int
+ )
+ for remainingPath != "" {
+ if v := filepath.VolumeName(remainingPath); v != "" {
+ remainingPath = remainingPath[len(v):]
+ }
+
+ // Get the next path component.
+ var part string
+ if i := strings.IndexRune(remainingPath, filepath.Separator); i == -1 {
+ part, remainingPath = remainingPath, ""
+ } else {
+ part, remainingPath = remainingPath[:i], remainingPath[i+1:]
+ }
+
+ // Apply the component lexically to the path we are building.
+ // currentPath does not contain any symlinks, and we are lexically
+ // dealing with a single component, so it's okay to do a filepath.Clean
+ // here.
+ nextPath := filepath.Join(string(filepath.Separator), currentPath, part)
+ if nextPath == string(filepath.Separator) {
+ currentPath = ""
+ continue
+ }
+ fullPath := root + string(filepath.Separator) + nextPath
+
+ // Figure out whether the path is a symlink.
+ fi, err := vfs.Lstat(fullPath)
+ if err != nil && !IsNotExist(err) {
+ return "", err
+ }
+ // Treat non-existent path components the same as non-symlinks (we
+ // can't do any better here).
+ if IsNotExist(err) || fi.Mode()&os.ModeSymlink == 0 {
+ currentPath = nextPath
+ continue
+ }
+
+ // It's a symlink, so get its contents and expand it by prepending it
+ // to the yet-unparsed path.
+ linksWalked++
+ if linksWalked > maxSymlinkLimit {
+ return "", &os.PathError{Op: "SecureJoin", Path: root + string(filepath.Separator) + unsafePath, Err: syscall.ELOOP}
+ }
+
+ dest, err := vfs.Readlink(fullPath)
+ if err != nil {
+ return "", err
+ }
+ remainingPath = dest + string(filepath.Separator) + remainingPath
+ // Absolute symlinks reset any work we've already done.
+ if filepath.IsAbs(dest) {
+ currentPath = ""
+ }
+ }
+
+ // There should be no lexical components like ".." left in the path here,
+ // but for safety clean up the path before joining it to the root.
+ finalPath := filepath.Join(string(filepath.Separator), currentPath)
+ return filepath.Join(root, finalPath), nil
+}
+
+// SecureJoin is a wrapper around [SecureJoinVFS] that just uses the [os].* library
+// of functions as the [VFS]. If in doubt, use this function over [SecureJoinVFS].
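+//
+// For example (illustrative): SecureJoin("/srv/rootfs", "../../etc/passwd")
+// returns "/srv/rootfs/etc/passwd", since ".." components can never escape
+// the root.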
+func SecureJoin(root, unsafePath string) (string, error) {
+ return SecureJoinVFS(root, unsafePath, nil)
+}
diff --git a/vendor/github.com/cyphar/filepath-securejoin/lookup_linux.go b/vendor/github.com/cyphar/filepath-securejoin/lookup_linux.go
new file mode 100644
index 000000000..290befa15
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/lookup_linux.go
@@ -0,0 +1,389 @@
+//go:build linux
+
+// Copyright (C) 2024 SUSE LLC. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package securejoin
+
+import (
+ "errors"
+ "fmt"
+ "os"
+ "path"
+ "path/filepath"
+ "slices"
+ "strings"
+
+ "golang.org/x/sys/unix"
+)
+
+type symlinkStackEntry struct {
+ // (dir, remainingPath) is what we would've returned if the link didn't
+ // exist. This matches what openat2(RESOLVE_IN_ROOT) would return in
+ // this case.
+ dir *os.File
+ remainingPath string
+ // linkUnwalked is the remaining path components from the original
+ // Readlink which we have yet to walk. When this slice is empty, we
+ // drop the link from the stack.
+ linkUnwalked []string
+}
+
+func (se symlinkStackEntry) String() string {
+ return fmt.Sprintf("<%s>/%s [->%s]", se.dir.Name(), se.remainingPath, strings.Join(se.linkUnwalked, "/"))
+}
+
+func (se symlinkStackEntry) Close() {
+ _ = se.dir.Close()
+}
+
+type symlinkStack []*symlinkStackEntry
+
+func (s *symlinkStack) IsEmpty() bool {
+ return s == nil || len(*s) == 0
+}
+
+func (s *symlinkStack) Close() {
+ if s != nil {
+ for _, link := range *s {
+ link.Close()
+ }
+ // TODO: Switch to clear once we switch to Go 1.21.
+ *s = nil
+ }
+}
+
+var (
+ errEmptyStack = errors.New("[internal] stack is empty")
+ errBrokenSymlinkStack = errors.New("[internal error] broken symlink stack")
+)
+
+func (s *symlinkStack) popPart(part string) error {
+ if s == nil || s.IsEmpty() {
+ // If there is nothing in the symlink stack, then the part was from the
+ // real path provided by the user, and this is a no-op.
+ return errEmptyStack
+ }
+ if part == "." {
+ // "." components are no-ops -- we drop them when doing SwapLink.
+ return nil
+ }
+
+ tailEntry := (*s)[len(*s)-1]
+
+ // Double-check that we are popping the component we expect.
+ if len(tailEntry.linkUnwalked) == 0 {
+ return fmt.Errorf("%w: trying to pop component %q of empty stack entry %s", errBrokenSymlinkStack, part, tailEntry)
+ }
+ headPart := tailEntry.linkUnwalked[0]
+ if headPart != part {
+ return fmt.Errorf("%w: trying to pop component %q but the last stack entry is %s (%q)", errBrokenSymlinkStack, part, tailEntry, headPart)
+ }
+
+ // Drop the component, but keep the entry around in case we are dealing
+ // with a "tail-chained" symlink.
+ tailEntry.linkUnwalked = tailEntry.linkUnwalked[1:]
+ return nil
+}
+
+func (s *symlinkStack) PopPart(part string) error {
+ if err := s.popPart(part); err != nil {
+ if errors.Is(err, errEmptyStack) {
+ // Skip empty stacks.
+ err = nil
+ }
+ return err
+ }
+
+ // Clean up any of the trailing stack entries that are empty.
+ for lastGood := len(*s) - 1; lastGood >= 0; lastGood-- {
+ entry := (*s)[lastGood]
+ if len(entry.linkUnwalked) > 0 {
+ break
+ }
+ entry.Close()
+ (*s) = (*s)[:lastGood]
+ }
+ return nil
+}
+
+func (s *symlinkStack) push(dir *os.File, remainingPath, linkTarget string) error {
+ if s == nil {
+ return nil
+ }
+ // Split the link target and clean up any "" parts.
+ linkTargetParts := slices.DeleteFunc(
+ strings.Split(linkTarget, "/"),
+ func(part string) bool { return part == "" || part == "." })
+
+ // Copy the directory so the caller doesn't close our copy.
+ dirCopy, err := dupFile(dir)
+ if err != nil {
+ return err
+ }
+
+ // Add to the stack.
+ *s = append(*s, &symlinkStackEntry{
+ dir: dirCopy,
+ remainingPath: remainingPath,
+ linkUnwalked: linkTargetParts,
+ })
+ return nil
+}
+
+func (s *symlinkStack) SwapLink(linkPart string, dir *os.File, remainingPath, linkTarget string) error {
+ // If we are currently inside a symlink resolution, remove the symlink
+ // component from the last symlink entry, but don't remove the entry even
+ // if it's empty. If we are a "tail-chained" symlink (a trailing symlink we
+ // hit during a symlink resolution) we need to keep the old symlink until
+ // we finish the resolution.
+ if err := s.popPart(linkPart); err != nil {
+ if !errors.Is(err, errEmptyStack) {
+ return err
+ }
+ // Push the component regardless of whether the stack was empty.
+ }
+ return s.push(dir, remainingPath, linkTarget)
+}
+
+func (s *symlinkStack) PopTopSymlink() (*os.File, string, bool) {
+ if s == nil || s.IsEmpty() {
+ return nil, "", false
+ }
+ tailEntry := (*s)[0]
+ *s = (*s)[1:]
+ return tailEntry.dir, tailEntry.remainingPath, true
+}
+
+// partialLookupInRoot tries to lookup as much of the request path as possible
+// within the provided root (a-la RESOLVE_IN_ROOT) and opens the final existing
+// component of the requested path, returning a file handle to the final
+// existing component and a string containing the remaining path components.
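+//
+// For example (illustrative): if only "a/b" exists under root, then
+// partialLookupInRoot(root, "a/b/c/d") returns a handle to "<root>/a/b" and
+// the remaining path "c/d".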
+func partialLookupInRoot(root *os.File, unsafePath string) (*os.File, string, error) {
+ return lookupInRoot(root, unsafePath, true)
+}
+
+func completeLookupInRoot(root *os.File, unsafePath string) (*os.File, error) {
+ handle, remainingPath, err := lookupInRoot(root, unsafePath, false)
+ if remainingPath != "" && err == nil {
+ // should never happen
+ err = fmt.Errorf("[bug] non-empty remaining path when doing a non-partial lookup: %q", remainingPath)
+ }
+ // lookupInRoot(partial=false) will always close the handle if an error is
+ // returned, so no need to double-check here.
+ return handle, err
+}
+
+func lookupInRoot(root *os.File, unsafePath string, partial bool) (Handle *os.File, _ string, _ error) {
+ unsafePath = filepath.ToSlash(unsafePath) // noop
+
+ // This is very similar to SecureJoin, except that we operate on the
+ // components using file descriptors. We then return the last component we
+ // managed open, along with the remaining path components not opened.
+
+ // Try to use openat2 if possible.
+ if hasOpenat2() {
+ return lookupOpenat2(root, unsafePath, partial)
+ }
+
+ // Get the "actual" root path from /proc/self/fd. This is necessary if the
+ // root is some magic-link like /proc/$pid/root, in which case we want to
+ // make sure when we do checkProcSelfFdPath that we are using the correct
+ // root path.
+ logicalRootPath, err := procSelfFdReadlink(root)
+ if err != nil {
+ return nil, "", fmt.Errorf("get real root path: %w", err)
+ }
+
+ currentDir, err := dupFile(root)
+ if err != nil {
+ return nil, "", fmt.Errorf("clone root fd: %w", err)
+ }
+ defer func() {
+ // If a handle is not returned, close the internal handle.
+ if Handle == nil {
+ _ = currentDir.Close()
+ }
+ }()
+
+ // symlinkStack is used to emulate how openat2(RESOLVE_IN_ROOT) treats
+ // dangling symlinks. If we hit a non-existent path while resolving a
+ // symlink, we need to return the (dir, remainingPath) that we had when we
+ // hit the symlink (treating the symlink as though it were a regular file).
+ // The set of (dir, remainingPath) sets is stored within the symlinkStack
+ // and we add and remove parts when we hit symlink and non-symlink
+ // components respectively. We need a stack because of recursive symlinks
+ // (symlinks that contain symlink components in their target).
+ //
+ // Note that the stack is ONLY used for book-keeping. All of the actual
+ // path walking logic is still based on currentPath/remainingPath and
+ // currentDir (as in SecureJoin).
+ var symStack *symlinkStack
+ if partial {
+ symStack = new(symlinkStack)
+ defer symStack.Close()
+ }
+
+ var (
+ linksWalked int
+ currentPath string
+ remainingPath = unsafePath
+ )
+ for remainingPath != "" {
+ // Save the current remaining path so if the part is not real we can
+ // return the path including the component.
+ oldRemainingPath := remainingPath
+
+ // Get the next path component.
+ var part string
+ if i := strings.IndexByte(remainingPath, '/'); i == -1 {
+ part, remainingPath = remainingPath, ""
+ } else {
+ part, remainingPath = remainingPath[:i], remainingPath[i+1:]
+ }
+ // If we hit an empty component, we need to treat it as though it is
+ // "." so that trailing "/" and "//" components on a non-directory
+ // correctly return the right error code.
+ if part == "" {
+ part = "."
+ }
+
+ // Apply the component lexically to the path we are building.
+ // currentPath does not contain any symlinks, and we are lexically
+ // dealing with a single component, so it's okay to do a filepath.Clean
+ // here.
+ nextPath := path.Join("/", currentPath, part)
+ // If we logically hit the root, just clone the root rather than
+ // opening the part and doing all of the other checks.
+ if nextPath == "/" {
+ if err := symStack.PopPart(part); err != nil {
+ return nil, "", fmt.Errorf("walking into root with part %q failed: %w", part, err)
+ }
+ // Jump to root.
+ rootClone, err := dupFile(root)
+ if err != nil {
+ return nil, "", fmt.Errorf("clone root fd: %w", err)
+ }
+ _ = currentDir.Close()
+ currentDir = rootClone
+ currentPath = nextPath
+ continue
+ }
+
+ // Try to open the next component.
+ nextDir, err := openatFile(currentDir, part, unix.O_PATH|unix.O_NOFOLLOW|unix.O_CLOEXEC, 0)
+ switch {
+ case err == nil:
+ st, err := nextDir.Stat()
+ if err != nil {
+ _ = nextDir.Close()
+ return nil, "", fmt.Errorf("stat component %q: %w", part, err)
+ }
+
+ switch st.Mode() & os.ModeType {
+ case os.ModeSymlink:
+ // readlinkat implies AT_EMPTY_PATH since Linux 2.6.39. See
+ // Linux commit 65cfc6722361 ("readlinkat(), fchownat() and
+ // fstatat() with empty relative pathnames").
+ linkDest, err := readlinkatFile(nextDir, "")
+ // We don't need the handle anymore.
+ _ = nextDir.Close()
+ if err != nil {
+ return nil, "", err
+ }
+
+ linksWalked++
+ if linksWalked > maxSymlinkLimit {
+ return nil, "", &os.PathError{Op: "securejoin.lookupInRoot", Path: logicalRootPath + "/" + unsafePath, Err: unix.ELOOP}
+ }
+
+ // Swap out the symlink's component for the link entry itself.
+ if err := symStack.SwapLink(part, currentDir, oldRemainingPath, linkDest); err != nil {
+ return nil, "", fmt.Errorf("walking into symlink %q failed: push symlink: %w", part, err)
+ }
+
+ // Update our logical remaining path.
+ remainingPath = linkDest + "/" + remainingPath
+ // Absolute symlinks reset any work we've already done.
+ if path.IsAbs(linkDest) {
+ // Jump to root.
+ rootClone, err := dupFile(root)
+ if err != nil {
+ return nil, "", fmt.Errorf("clone root fd: %w", err)
+ }
+ _ = currentDir.Close()
+ currentDir = rootClone
+ currentPath = "/"
+ }
+
+ default:
+ // If we are dealing with a directory, simply walk into it.
+ _ = currentDir.Close()
+ currentDir = nextDir
+ currentPath = nextPath
+
+ // The part was real, so drop it from the symlink stack.
+ if err := symStack.PopPart(part); err != nil {
+ return nil, "", fmt.Errorf("walking into directory %q failed: %w", part, err)
+ }
+
+ // If we are operating on a .., make sure we haven't escaped.
+ // We only have to check for ".." here because walking down
+ // into a regular component cannot cause you to
+ // escape. This mirrors the logic in RESOLVE_IN_ROOT, except we
+ // have to check every ".." rather than only checking after a
+ // rename or mount on the system.
+ if part == ".." {
+ // Make sure the root hasn't moved.
+ if err := checkProcSelfFdPath(logicalRootPath, root); err != nil {
+ return nil, "", fmt.Errorf("root path moved during lookup: %w", err)
+ }
+ // Make sure the path is what we expect.
+ fullPath := logicalRootPath + nextPath
+ if err := checkProcSelfFdPath(fullPath, currentDir); err != nil {
+ return nil, "", fmt.Errorf("walking into %q had unexpected result: %w", part, err)
+ }
+ }
+ }
+
+ default:
+ if !partial {
+ return nil, "", err
+ }
+ // If there are any remaining components in the symlink stack, we
+ // are still within a symlink resolution and thus we hit a dangling
+ // symlink. So pretend that the first symlink in the stack we hit
+ // was an ENOENT (to match openat2).
+ if oldDir, remainingPath, ok := symStack.PopTopSymlink(); ok {
+ _ = currentDir.Close()
+ return oldDir, remainingPath, err
+ }
+ // We have hit a final component that doesn't exist, so we have our
+ // partial open result. Note that we have to use the OLD remaining
+ // path, since the lookup failed.
+ return currentDir, oldRemainingPath, err
+ }
+ }
+
+ // If the unsafePath had a trailing slash, we need to make sure we try to
+ // do a relative "." open so that we will correctly return an error when
+ // the final component is a non-directory (to match openat2). In the
+ // context of openat2, a trailing slash and a trailing "/." are completely
+ // equivalent.
+ if strings.HasSuffix(unsafePath, "/") {
+ nextDir, err := openatFile(currentDir, ".", unix.O_PATH|unix.O_NOFOLLOW|unix.O_CLOEXEC, 0)
+ if err != nil {
+ if !partial {
+ _ = currentDir.Close()
+ currentDir = nil
+ }
+ return currentDir, "", err
+ }
+ _ = currentDir.Close()
+ currentDir = nextDir
+ }
+
+ // All of the components existed!
+ return currentDir, "", nil
+}
diff --git a/vendor/github.com/cyphar/filepath-securejoin/mkdir_linux.go b/vendor/github.com/cyphar/filepath-securejoin/mkdir_linux.go
new file mode 100644
index 000000000..b5f674524
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/mkdir_linux.go
@@ -0,0 +1,207 @@
+//go:build linux
+
+// Copyright (C) 2024 SUSE LLC. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package securejoin
+
+import (
+ "errors"
+ "fmt"
+ "os"
+ "path/filepath"
+ "slices"
+ "strings"
+
+ "golang.org/x/sys/unix"
+)
+
+var (
+ errInvalidMode = errors.New("invalid permission mode")
+ errPossibleAttack = errors.New("possible attack detected")
+)
+
+// MkdirAllHandle is equivalent to [MkdirAll], except that it is safer to use
+// in two respects:
+//
+// - The caller provides the root directory as an *[os.File] (preferably O_PATH)
+// handle. This means that the caller can be sure which root directory is
+// being used. Note that this can be emulated by using /proc/self/fd/... as
+// the root path with [os.MkdirAll].
+//
+// - Once all of the directories have been created, an *[os.File] O_DIRECTORY handle
+// to the directory at unsafePath is returned to the caller. This is done in
+// an effectively-race-free way (an attacker would only be able to swap the
+// final directory component), which is not possible to emulate with
+// [MkdirAll].
+//
+// In addition, the returned handle is obtained far more efficiently than doing
+// a brand new lookup of unsafePath (such as with [SecureJoin] or openat2) after
+// doing [MkdirAll]. If you intend to open the directory after creating it, you
+// should use MkdirAllHandle.
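+//
+// A minimal usage sketch (the root path and directory names below are
+// illustrative, not part of the API):
+//
+//	root, err := os.OpenFile("/srv/data", unix.O_PATH|unix.O_DIRECTORY|unix.O_CLOEXEC, 0)
+//	if err != nil {
+//		return err
+//	}
+//	defer root.Close()
+//	dir, err := securejoin.MkdirAllHandle(root, "untrusted/a/b", 0o755)
+//	if err != nil {
+//		return err
+//	}
+//	defer dir.Close()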
+func MkdirAllHandle(root *os.File, unsafePath string, mode int) (_ *os.File, Err error) {
+ // Make sure there are no os.FileMode bits set.
+ if mode&^0o7777 != 0 {
+ return nil, fmt.Errorf("%w for mkdir 0o%.3o", errInvalidMode, mode)
+ }
+ // On Linux, mkdirat(2) (and os.Mkdir) silently ignore the suid and sgid
+ // bits. We could also silently ignore them but since we have very few
+ // users it seems more prudent to return an error so users notice that
+ // these bits will not be set.
+ if mode&^0o1777 != 0 {
+ return nil, fmt.Errorf("%w for mkdir 0o%.3o: suid and sgid are ignored by mkdir", errInvalidMode, mode)
+ }
+
+ // Try to open as much of the path as possible.
+ currentDir, remainingPath, err := partialLookupInRoot(root, unsafePath)
+ defer func() {
+ if Err != nil {
+ _ = currentDir.Close()
+ }
+ }()
+ if err != nil && !errors.Is(err, unix.ENOENT) {
+ return nil, fmt.Errorf("find existing subpath of %q: %w", unsafePath, err)
+ }
+
+ // If there is an attacker deleting directories as we walk into them,
+ // detect this proactively. Note this is guaranteed to detect if the
+ // attacker deleted any part of the tree up to currentDir.
+ //
+ // Once we walk into a dead directory, partialLookupInRoot would not be
+ // able to walk further down the tree (directories must be empty before
+ // they are deleted), and if the attacker has removed the entire tree we
+ // can be sure that anything that was originally inside a dead directory
+ // must also be deleted and thus is a dead directory in its own right.
+ //
+ // This is mostly a quality-of-life check, because mkdir will simply fail
+ // later if the attacker deletes the tree after this check.
+ if err := isDeadInode(currentDir); err != nil {
+ return nil, fmt.Errorf("finding existing subpath of %q: %w", unsafePath, err)
+ }
+
+ // Re-open the path to match the O_DIRECTORY reopen loop later (so that we
+ // always return a non-O_PATH handle). We also check that we actually got a
+ // directory.
+ if reopenDir, err := Reopen(currentDir, unix.O_DIRECTORY|unix.O_CLOEXEC); errors.Is(err, unix.ENOTDIR) {
+ return nil, fmt.Errorf("cannot create subdirectories in %q: %w", currentDir.Name(), unix.ENOTDIR)
+ } else if err != nil {
+ return nil, fmt.Errorf("re-opening handle to %q: %w", currentDir.Name(), err)
+ } else {
+ _ = currentDir.Close()
+ currentDir = reopenDir
+ }
+
+ remainingParts := strings.Split(remainingPath, string(filepath.Separator))
+ if slices.Contains(remainingParts, "..") {
+ // The path contained ".." components after the end of the "real"
+ // components. We could try to safely resolve ".." here but that would
+ // add a bunch of extra logic for something that it's not clear even
+ // needs to be supported. So just return an error.
+ //
+ // If we do filepath.Clean(remainingPath) then we end up with the
+ // problem that ".." can erase a trailing dangling symlink and produce
+ // a path that doesn't quite match what the user asked for.
+ return nil, fmt.Errorf("%w: yet-to-be-created path %q contains '..' components", unix.ENOENT, remainingPath)
+ }
+
+ // Make sure the mode doesn't have any type bits.
+ mode &^= unix.S_IFMT
+
+ // Create the remaining components.
+ for _, part := range remainingParts {
+ switch part {
+ case "", ".":
+ // Skip over no-op paths.
+ continue
+ }
+
+ // NOTE: mkdir(2) will not follow trailing symlinks, so we can safely
+ // create the final component without worrying about symlink-exchange
+ // attacks.
+ if err := unix.Mkdirat(int(currentDir.Fd()), part, uint32(mode)); err != nil {
+ err = &os.PathError{Op: "mkdirat", Path: currentDir.Name() + "/" + part, Err: err}
+ // Make the error a bit nicer if the directory is dead.
+ if err2 := isDeadInode(currentDir); err2 != nil {
+ err = fmt.Errorf("%w (%w)", err, err2)
+ }
+ return nil, err
+ }
+
+ // Get a handle to the next component. O_DIRECTORY means we don't need
+ // to use O_PATH.
+ var nextDir *os.File
+ if hasOpenat2() {
+ nextDir, err = openat2File(currentDir, part, &unix.OpenHow{
+ Flags: unix.O_NOFOLLOW | unix.O_DIRECTORY | unix.O_CLOEXEC,
+ Resolve: unix.RESOLVE_BENEATH | unix.RESOLVE_NO_SYMLINKS | unix.RESOLVE_NO_XDEV,
+ })
+ } else {
+ nextDir, err = openatFile(currentDir, part, unix.O_NOFOLLOW|unix.O_DIRECTORY|unix.O_CLOEXEC, 0)
+ }
+ if err != nil {
+ return nil, err
+ }
+ _ = currentDir.Close()
+ currentDir = nextDir
+
+ // It's possible that the directory we just opened was swapped by an
+ // attacker. Unfortunately there isn't much we can do to protect
+ // against this, and MkdirAll's behaviour is that we will reuse
+ // existing directories anyway so the need to protect against this is
+ // incredibly limited (and arguably doesn't even deserve mention here).
+ //
+ // Ideally we might want to check that the owner and mode match what we
+ // would've created -- unfortunately, it is non-trivial to verify that
+ // the owner and mode of the created directory match. While plain Unix
+ // DAC rules seem simple enough to emulate, there are a bunch of other
+ // factors that can change the mode or owner of created directories
+ // (default POSIX ACLs, mount options like uid=1,gid=2,umask=0 on
+ // filesystems like vfat, etc.). We used to try to verify this but
+ // it just led to a series of spurious errors.
+ //
+ // We could also check that the directory is non-empty, but
+ // unfortunately some pseudofilesystems (like cgroupfs) create
+ // non-empty directories, which would result in different spurious
+ // errors.
+ }
+ return currentDir, nil
+}
+
+// MkdirAll is a race-safe alternative to the [os.MkdirAll] function,
+// where the new directory is guaranteed to be within the root directory (if an
+// attacker can move directories from inside the root to outside the root, the
+// created directory tree might be outside of the root but the key constraint
+// is that at no point will we walk outside of the directory tree we are
+// creating).
+//
+// Effectively, MkdirAll(root, unsafePath, mode) is equivalent to
+//
+// path, _ := securejoin.SecureJoin(root, unsafePath)
+// err := os.MkdirAll(path, mode)
+//
+// But is much safer. The above implementation is unsafe because if an attacker
+// can modify the filesystem tree between [SecureJoin] and [os.MkdirAll], it is
+// possible for MkdirAll to resolve unsafe symlink components and create
+// directories outside of the root.
+//
+// If you plan to open the directory after you have created it or want to use
+// an open directory handle as the root, you should use [MkdirAllHandle] instead.
+// This function is a wrapper around [MkdirAllHandle].
+//
+// NOTE: The mode argument must be set to the unix mode bits (unix.S_I...),
+// the Go generic mode bits ([os.FileMode]...).
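+//
+// For example (paths illustrative):
+//
+//	err := securejoin.MkdirAll("/srv/data", "untrusted/a/b", 0o755)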
+func MkdirAll(root, unsafePath string, mode int) error {
+ rootDir, err := os.OpenFile(root, unix.O_PATH|unix.O_DIRECTORY|unix.O_CLOEXEC, 0)
+ if err != nil {
+ return err
+ }
+ defer rootDir.Close()
+
+ f, err := MkdirAllHandle(rootDir, unsafePath, mode)
+ if err != nil {
+ return err
+ }
+ _ = f.Close()
+ return nil
+}
diff --git a/vendor/github.com/cyphar/filepath-securejoin/open_linux.go b/vendor/github.com/cyphar/filepath-securejoin/open_linux.go
new file mode 100644
index 000000000..230be73f0
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/open_linux.go
@@ -0,0 +1,103 @@
+//go:build linux
+
+// Copyright (C) 2024 SUSE LLC. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package securejoin
+
+import (
+ "fmt"
+ "os"
+ "strconv"
+
+ "golang.org/x/sys/unix"
+)
+
+// OpenatInRoot is equivalent to [OpenInRoot], except that the root is provided
+// using an *[os.File] handle, to ensure that the correct root directory is used.
+func OpenatInRoot(root *os.File, unsafePath string) (*os.File, error) {
+ handle, err := completeLookupInRoot(root, unsafePath)
+ if err != nil {
+ return nil, &os.PathError{Op: "securejoin.OpenInRoot", Path: unsafePath, Err: err}
+ }
+ return handle, nil
+}
+
+// OpenInRoot safely opens the provided unsafePath within the root.
+// Effectively, OpenInRoot(root, unsafePath) is equivalent to
+//
+// path, _ := securejoin.SecureJoin(root, unsafePath)
+//	handle, err := os.OpenFile(path, unix.O_PATH|unix.O_CLOEXEC, 0)
+//
+// But is much safer. The above implementation is unsafe because if an attacker
+// can modify the filesystem tree between [SecureJoin] and [os.OpenFile], it is
+// possible for the returned file to be outside of the root.
+//
+// Note that the returned handle is an O_PATH handle, meaning that only a very
+// limited set of operations will work on the handle. This is done to avoid
+// accidentally opening an untrusted file that could cause issues (such as a
+// disconnected TTY that could cause a DoS, or some other issue). In order to
+// use the returned handle, you can "upgrade" it to a proper handle using
+// [Reopen].
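+//
+// A usage sketch (paths illustrative), showing the [Reopen] upgrade before
+// reading from the file:
+//
+//	handle, err := securejoin.OpenInRoot("/srv/data", "untrusted/config.json")
+//	if err != nil {
+//		return err
+//	}
+//	defer handle.Close()
+//	f, err := securejoin.Reopen(handle, unix.O_RDONLY)
+//	if err != nil {
+//		return err
+//	}
+//	defer f.Close()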
+func OpenInRoot(root, unsafePath string) (*os.File, error) {
+ rootDir, err := os.OpenFile(root, unix.O_PATH|unix.O_DIRECTORY|unix.O_CLOEXEC, 0)
+ if err != nil {
+ return nil, err
+ }
+ defer rootDir.Close()
+ return OpenatInRoot(rootDir, unsafePath)
+}
+
+// Reopen takes an *[os.File] handle and re-opens it through /proc/self/fd.
+// Reopen(file, flags) is effectively equivalent to
+//
+// fdPath := fmt.Sprintf("/proc/self/fd/%d", file.Fd())
+//	os.OpenFile(fdPath, flags|unix.O_CLOEXEC, 0)
+//
+// But with some extra hardenings to ensure that we are not tricked by a
+// maliciously-configured /proc mount. While this attack scenario is not
+// common, in container runtimes it is possible for higher-level runtimes to be
+// tricked into configuring an unsafe /proc that can be used to attack file
+// operations. See [CVE-2019-19921] for more details.
+//
+// [CVE-2019-19921]: https://github.com/advisories/GHSA-fh74-hm69-rqjw
+func Reopen(handle *os.File, flags int) (*os.File, error) {
+ procRoot, err := getProcRoot()
+ if err != nil {
+ return nil, err
+ }
+
+ // We can't operate on /proc/thread-self/fd/$n directly when doing a
+ // re-open, so we need to open /proc/thread-self/fd and then open a single
+ // final component.
+ procFdDir, closer, err := procThreadSelf(procRoot, "fd/")
+ if err != nil {
+ return nil, fmt.Errorf("get safe /proc/thread-self/fd handle: %w", err)
+ }
+ defer procFdDir.Close()
+ defer closer()
+
+ // Try to detect if there is a mount on top of the magic-link we are about
+ // to open. If we are using unsafeHostProcRoot(), this could change after
+ // we check it (and there's nothing we can do about that) but for
+ // privateProcRoot() this should be guaranteed to be safe (at least since
+ // Linux 5.12[1], when anonymous mount namespaces were completely isolated
+ // from external mounts including mount propagation events).
+ //
+ // [1]: Linux commit ee2e3f50629f ("mount: fix mounting of detached mounts
+ // onto targets that reside on shared mounts").
+ fdStr := strconv.Itoa(int(handle.Fd()))
+ if err := checkSymlinkOvermount(procRoot, procFdDir, fdStr); err != nil {
+ return nil, fmt.Errorf("check safety of /proc/thread-self/fd/%s magiclink: %w", fdStr, err)
+ }
+
+ flags |= unix.O_CLOEXEC
+ // Rather than just wrapping openatFile, open-code it so we can copy
+ // handle.Name().
+ reopenFd, err := unix.Openat(int(procFdDir.Fd()), fdStr, flags, 0)
+ if err != nil {
+ return nil, fmt.Errorf("reopen fd %d: %w", handle.Fd(), err)
+ }
+ return os.NewFile(uintptr(reopenFd), handle.Name()), nil
+}
diff --git a/vendor/github.com/cyphar/filepath-securejoin/openat2_linux.go b/vendor/github.com/cyphar/filepath-securejoin/openat2_linux.go
new file mode 100644
index 000000000..ae3b381ef
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/openat2_linux.go
@@ -0,0 +1,128 @@
+//go:build linux
+
+// Copyright (C) 2024 SUSE LLC. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package securejoin
+
+import (
+ "errors"
+ "fmt"
+ "os"
+ "path/filepath"
+ "strings"
+ "sync"
+
+ "golang.org/x/sys/unix"
+)
+
+var hasOpenat2 = sync.OnceValue(func() bool {
+ fd, err := unix.Openat2(unix.AT_FDCWD, ".", &unix.OpenHow{
+ Flags: unix.O_PATH | unix.O_CLOEXEC,
+ Resolve: unix.RESOLVE_NO_SYMLINKS | unix.RESOLVE_IN_ROOT,
+ })
+ if err != nil {
+ return false
+ }
+ _ = unix.Close(fd)
+ return true
+})
+
+func scopedLookupShouldRetry(how *unix.OpenHow, err error) bool {
+ // RESOLVE_IN_ROOT (and RESOLVE_BENEATH) can return -EAGAIN if we resolve
+ // ".." while a mount or rename occurs anywhere on the system. This could
+ // happen spuriously, or as the result of an attacker trying to mess with
+ // us during lookup.
+ //
+ // In addition, scoped lookups have a "safety check" at the end of
+ // complete_walk which will return -EXDEV if the final path is not in the
+ // root.
+ return how.Resolve&(unix.RESOLVE_IN_ROOT|unix.RESOLVE_BENEATH) != 0 &&
+ (errors.Is(err, unix.EAGAIN) || errors.Is(err, unix.EXDEV))
+}
+
+const scopedLookupMaxRetries = 10
+
+func openat2File(dir *os.File, path string, how *unix.OpenHow) (*os.File, error) {
+ fullPath := dir.Name() + "/" + path
+ // Make sure we always set O_CLOEXEC.
+ how.Flags |= unix.O_CLOEXEC
+ var tries int
+ for tries < scopedLookupMaxRetries {
+ fd, err := unix.Openat2(int(dir.Fd()), path, how)
+ if err != nil {
+ if scopedLookupShouldRetry(how, err) {
+ // We retry a couple of times to avoid the spurious errors, and
+ // if we are being attacked then returning -EAGAIN is the best
+ // we can do.
+ tries++
+ continue
+ }
+ return nil, &os.PathError{Op: "openat2", Path: fullPath, Err: err}
+ }
+ // If we are using RESOLVE_IN_ROOT, the name we generated may be wrong.
+ // NOTE: The procRoot code MUST NOT use RESOLVE_IN_ROOT, otherwise
+ // you'll get infinite recursion here.
+ if how.Resolve&unix.RESOLVE_IN_ROOT == unix.RESOLVE_IN_ROOT {
+ if actualPath, err := rawProcSelfFdReadlink(fd); err == nil {
+ fullPath = actualPath
+ }
+ }
+ return os.NewFile(uintptr(fd), fullPath), nil
+ }
+ return nil, &os.PathError{Op: "openat2", Path: fullPath, Err: errPossibleAttack}
+}
+
+func lookupOpenat2(root *os.File, unsafePath string, partial bool) (*os.File, string, error) {
+ if !partial {
+ file, err := openat2File(root, unsafePath, &unix.OpenHow{
+ Flags: unix.O_PATH | unix.O_CLOEXEC,
+ Resolve: unix.RESOLVE_IN_ROOT | unix.RESOLVE_NO_MAGICLINKS,
+ })
+ return file, "", err
+ }
+ return partialLookupOpenat2(root, unsafePath)
+}
+
+// partialLookupOpenat2 is an alternative implementation of
+// partialLookupInRoot, using openat2(RESOLVE_IN_ROOT) to more safely get a
+// handle to the deepest existing child of the requested path within the root.
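+//
+// For example (illustrative): if only "a" exists inside the root, looking up
+// "a/b/c" returns a handle to "a" and the remaining path "b/c".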
+func partialLookupOpenat2(root *os.File, unsafePath string) (*os.File, string, error) {
+ // TODO: Implement this as a git-bisect-like binary search.
+
+ unsafePath = filepath.ToSlash(unsafePath) // noop
+ endIdx := len(unsafePath)
+ var lastError error
+ for endIdx > 0 {
+ subpath := unsafePath[:endIdx]
+
+ handle, err := openat2File(root, subpath, &unix.OpenHow{
+ Flags: unix.O_PATH | unix.O_CLOEXEC,
+ Resolve: unix.RESOLVE_IN_ROOT | unix.RESOLVE_NO_MAGICLINKS,
+ })
+ if err == nil {
+ // Jump over the slash if we have a non-"" remainingPath.
+ if endIdx < len(unsafePath) {
+ endIdx += 1
+ }
+ // We found a subpath!
+ return handle, unsafePath[endIdx:], lastError
+ }
+ if errors.Is(err, unix.ENOENT) || errors.Is(err, unix.ENOTDIR) {
+ // That path doesn't exist, let's try the next directory up.
+ endIdx = strings.LastIndexByte(subpath, '/')
+ lastError = err
+ continue
+ }
+ return nil, "", fmt.Errorf("open subpath: %w", err)
+ }
+ // If we couldn't open anything, the whole subpath is missing. Return a
+ // copy of the root fd so that the caller doesn't close this one by
+ // accident.
+ rootClone, err := dupFile(root)
+ if err != nil {
+ return nil, "", err
+ }
+ return rootClone, unsafePath, lastError
+}
diff --git a/vendor/github.com/cyphar/filepath-securejoin/openat_linux.go b/vendor/github.com/cyphar/filepath-securejoin/openat_linux.go
new file mode 100644
index 000000000..949fb5f2d
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/openat_linux.go
@@ -0,0 +1,59 @@
+//go:build linux
+
+// Copyright (C) 2024 SUSE LLC. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package securejoin
+
+import (
+ "os"
+ "path/filepath"
+
+ "golang.org/x/sys/unix"
+)
+
+func dupFile(f *os.File) (*os.File, error) {
+ fd, err := unix.FcntlInt(f.Fd(), unix.F_DUPFD_CLOEXEC, 0)
+ if err != nil {
+ return nil, os.NewSyscallError("fcntl(F_DUPFD_CLOEXEC)", err)
+ }
+ return os.NewFile(uintptr(fd), f.Name()), nil
+}
+
+func openatFile(dir *os.File, path string, flags int, mode int) (*os.File, error) {
+ // Make sure we always set O_CLOEXEC.
+ flags |= unix.O_CLOEXEC
+ fd, err := unix.Openat(int(dir.Fd()), path, flags, uint32(mode))
+ if err != nil {
+ return nil, &os.PathError{Op: "openat", Path: dir.Name() + "/" + path, Err: err}
+ }
+ // All of the paths we use with openatFile are guaranteed to be
+ // lexically safe, so we can use filepath.Join here.
+ fullPath := filepath.Join(dir.Name(), path)
+ return os.NewFile(uintptr(fd), fullPath), nil
+}
+
+func fstatatFile(dir *os.File, path string, flags int) (unix.Stat_t, error) {
+ var stat unix.Stat_t
+ if err := unix.Fstatat(int(dir.Fd()), path, &stat, flags); err != nil {
+ return stat, &os.PathError{Op: "fstatat", Path: dir.Name() + "/" + path, Err: err}
+ }
+ return stat, nil
+}
+
+func readlinkatFile(dir *os.File, path string) (string, error) {
+ size := 4096
+ for {
+ linkBuf := make([]byte, size)
+ n, err := unix.Readlinkat(int(dir.Fd()), path, linkBuf)
+ if err != nil {
+ return "", &os.PathError{Op: "readlinkat", Path: dir.Name() + "/" + path, Err: err}
+ }
+ if n != size {
+ return string(linkBuf[:n]), nil
+ }
+ // Possible truncation, resize the buffer.
+ size *= 2
+ }
+}
diff --git a/vendor/github.com/cyphar/filepath-securejoin/procfs_linux.go b/vendor/github.com/cyphar/filepath-securejoin/procfs_linux.go
new file mode 100644
index 000000000..8cc827d70
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/procfs_linux.go
@@ -0,0 +1,440 @@
+//go:build linux
+
+// Copyright (C) 2024 SUSE LLC. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package securejoin
+
+import (
+ "errors"
+ "fmt"
+ "os"
+ "runtime"
+ "strconv"
+ "sync"
+
+ "golang.org/x/sys/unix"
+)
+
+func fstat(f *os.File) (unix.Stat_t, error) {
+ var stat unix.Stat_t
+ if err := unix.Fstat(int(f.Fd()), &stat); err != nil {
+ return stat, &os.PathError{Op: "fstat", Path: f.Name(), Err: err}
+ }
+ return stat, nil
+}
+
+func fstatfs(f *os.File) (unix.Statfs_t, error) {
+ var statfs unix.Statfs_t
+ if err := unix.Fstatfs(int(f.Fd()), &statfs); err != nil {
+ return statfs, &os.PathError{Op: "fstatfs", Path: f.Name(), Err: err}
+ }
+ return statfs, nil
+}
+
+// The kernel guarantees that the root inode of a procfs mount has an
+// f_type of PROC_SUPER_MAGIC and st_ino of PROC_ROOT_INO.
+const (
+ procSuperMagic = 0x9fa0 // PROC_SUPER_MAGIC
+ procRootIno = 1 // PROC_ROOT_INO
+)
+
+func verifyProcRoot(procRoot *os.File) error {
+ if statfs, err := fstatfs(procRoot); err != nil {
+ return err
+ } else if statfs.Type != procSuperMagic {
+ return fmt.Errorf("%w: incorrect procfs root filesystem type 0x%x", errUnsafeProcfs, statfs.Type)
+ }
+ if stat, err := fstat(procRoot); err != nil {
+ return err
+ } else if stat.Ino != procRootIno {
+ return fmt.Errorf("%w: incorrect procfs root inode number %d", errUnsafeProcfs, stat.Ino)
+ }
+ return nil
+}
+
+var hasNewMountApi = sync.OnceValue(func() bool {
+ // All of the pieces of the new mount API we use (fsopen, fsconfig,
+ // fsmount, open_tree) were added together in Linux 5.2[1], so we can
+ // just check for one of the syscalls and the others should also be
+ // available.
+ //
+ // Just try to use open_tree(2) to open a file without OPEN_TREE_CLONE.
+ // This is equivalent to openat(2), but tells us if open_tree is
+ // available (and thus all of the other basic new mount API syscalls).
+ // open_tree(2) is the most lightweight syscall to test here.
+ //
+ // [1]: merge commit 400913252d09
+ fd, err := unix.OpenTree(-int(unix.EBADF), "/", unix.OPEN_TREE_CLOEXEC)
+ if err != nil {
+ return false
+ }
+ _ = unix.Close(fd)
+ return true
+})
+
+func fsopen(fsName string, flags int) (*os.File, error) {
+ // Make sure we always set O_CLOEXEC.
+ flags |= unix.FSOPEN_CLOEXEC
+ fd, err := unix.Fsopen(fsName, flags)
+ if err != nil {
+ return nil, os.NewSyscallError("fsopen "+fsName, err)
+ }
+ return os.NewFile(uintptr(fd), "fscontext:"+fsName), nil
+}
+
+func fsmount(ctx *os.File, flags, mountAttrs int) (*os.File, error) {
+ // Make sure we always set O_CLOEXEC.
+ flags |= unix.FSMOUNT_CLOEXEC
+ fd, err := unix.Fsmount(int(ctx.Fd()), flags, mountAttrs)
+ if err != nil {
+ return nil, os.NewSyscallError("fsmount "+ctx.Name(), err)
+ }
+ return os.NewFile(uintptr(fd), "fsmount:"+ctx.Name()), nil
+}
+
+func newPrivateProcMount() (*os.File, error) {
+ procfsCtx, err := fsopen("proc", unix.FSOPEN_CLOEXEC)
+ if err != nil {
+ return nil, err
+ }
+ defer procfsCtx.Close()
+
+ // Try to configure hidepid=ptraceable,subset=pid if possible, but ignore errors.
+ _ = unix.FsconfigSetString(int(procfsCtx.Fd()), "hidepid", "ptraceable")
+ _ = unix.FsconfigSetString(int(procfsCtx.Fd()), "subset", "pid")
+
+ // Get an actual handle.
+ if err := unix.FsconfigCreate(int(procfsCtx.Fd())); err != nil {
+ return nil, os.NewSyscallError("fsconfig create procfs", err)
+ }
+ return fsmount(procfsCtx, unix.FSMOUNT_CLOEXEC, unix.MS_RDONLY|unix.MS_NODEV|unix.MS_NOEXEC|unix.MS_NOSUID)
+}
+
+func openTree(dir *os.File, path string, flags uint) (*os.File, error) {
+ dirFd := -int(unix.EBADF)
+ dirName := "."
+ if dir != nil {
+ dirFd = int(dir.Fd())
+ dirName = dir.Name()
+ }
+ // Make sure we always set O_CLOEXEC.
+ flags |= unix.OPEN_TREE_CLOEXEC
+ fd, err := unix.OpenTree(dirFd, path, flags)
+ if err != nil {
+ return nil, &os.PathError{Op: "open_tree", Path: path, Err: err}
+ }
+ return os.NewFile(uintptr(fd), dirName+"/"+path), nil
+}
+
+func clonePrivateProcMount() (_ *os.File, Err error) {
+ // Try to make a clone without using AT_RECURSIVE if we can. If this works,
+ // we can be sure there are no over-mounts and so if the root is valid then
+ // we're golden. Otherwise, we have to deal with over-mounts.
+ procfsHandle, err := openTree(nil, "/proc", unix.OPEN_TREE_CLONE)
+ if err != nil || hookForcePrivateProcRootOpenTreeAtRecursive(procfsHandle) {
+ procfsHandle, err = openTree(nil, "/proc", unix.OPEN_TREE_CLONE|unix.AT_RECURSIVE)
+ }
+ if err != nil {
+ return nil, fmt.Errorf("creating a detached procfs clone: %w", err)
+ }
+ defer func() {
+ if Err != nil {
+ _ = procfsHandle.Close()
+ }
+ }()
+ if err := verifyProcRoot(procfsHandle); err != nil {
+ return nil, err
+ }
+ return procfsHandle, nil
+}
+
+func privateProcRoot() (*os.File, error) {
+ if !hasNewMountApi() || hookForceGetProcRootUnsafe() {
+ return nil, fmt.Errorf("new mount api: %w", unix.ENOTSUP)
+ }
+ // Try to create a new procfs mount from scratch if we can. This ensures we
+ // can get a procfs mount even if /proc is fake (for whatever reason).
+ procRoot, err := newPrivateProcMount()
+ if err != nil || hookForcePrivateProcRootOpenTree(procRoot) {
+ // Try to clone /proc then...
+ procRoot, err = clonePrivateProcMount()
+ }
+ return procRoot, err
+}
+
+func unsafeHostProcRoot() (_ *os.File, Err error) {
+ procRoot, err := os.OpenFile("/proc", unix.O_PATH|unix.O_NOFOLLOW|unix.O_DIRECTORY|unix.O_CLOEXEC, 0)
+ if err != nil {
+ return nil, err
+ }
+ defer func() {
+ if Err != nil {
+ _ = procRoot.Close()
+ }
+ }()
+ if err := verifyProcRoot(procRoot); err != nil {
+ return nil, err
+ }
+ return procRoot, nil
+}
+
+func doGetProcRoot() (*os.File, error) {
+ procRoot, err := privateProcRoot()
+ if err != nil {
+ // Fall back to using a /proc handle if making a private mount failed.
+ // If we have openat2, at least we can avoid some kinds of over-mount
+ // attacks, but without openat2 there's not much we can do.
+ procRoot, err = unsafeHostProcRoot()
+ }
+ return procRoot, err
+}
+
+var getProcRoot = sync.OnceValues(func() (*os.File, error) {
+ return doGetProcRoot()
+})
+
+var hasProcThreadSelf = sync.OnceValue(func() bool {
+ return unix.Access("/proc/thread-self/", unix.F_OK) == nil
+})
+
+var errUnsafeProcfs = errors.New("unsafe procfs detected")
+
+type procThreadSelfCloser func()
+
+// procThreadSelf returns a handle to /proc/thread-self/ (or an
+// equivalent handle on older kernels where /proc/thread-self doesn't exist).
+// Once finished with the handle, you must call the returned closer function
+// (runtime.UnlockOSThread). You must not pass the returned *os.File to other
+// Go threads or use the handle after calling the closer.
+//
+// This is similar to ProcThreadSelf from runc, but with extra hardening
+// applied and using *os.File.
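+//
+// A minimal usage sketch (the subpath is illustrative):
+//
+//	handle, closer, err := procThreadSelf(procRoot, "fd/")
+//	if err != nil {
+//		return err
+//	}
+//	defer handle.Close()
+//	defer closer()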
+func procThreadSelf(procRoot *os.File, subpath string) (_ *os.File, _ procThreadSelfCloser, Err error) {
+ // We need to lock our thread until the caller is done with the handle
+ // because between getting the handle and using it we could get interrupted
+ // by the Go runtime and hit the case where the underlying thread is
+ // swapped out and the original thread is killed, resulting in
+ // pull-your-hair-out-hard-to-debug issues in the caller.
+ runtime.LockOSThread()
+ defer func() {
+ if Err != nil {
+ runtime.UnlockOSThread()
+ }
+ }()
+
+ // Figure out what prefix we want to use.
+ threadSelf := "thread-self/"
+ if !hasProcThreadSelf() || hookForceProcSelfTask() {
+ // Pre-3.17 kernels don't have /proc/thread-self, so do it manually.
+ threadSelf = "self/task/" + strconv.Itoa(unix.Gettid()) + "/"
+ if _, err := fstatatFile(procRoot, threadSelf, unix.AT_SYMLINK_NOFOLLOW); err != nil || hookForceProcSelf() {
+ // In this case, we are running in a pid namespace that doesn't match
+ // the /proc mount we have. This can happen inside runc.
+ //
+ // Unfortunately, there is no nice way to get the correct TID to
+ // use here because of the age of the kernel, so we have to just
+ // use /proc/self and hope that it works.
+ threadSelf = "self/"
+ }
+ }
+
+ // Grab the handle.
+ var (
+ handle *os.File
+ err error
+ )
+ if hasOpenat2() {
+ // We prefer being able to use RESOLVE_NO_XDEV if we can, to be
+ // absolutely sure we are operating on a clean /proc handle that
+ // doesn't have any cheeky overmounts that could trick us (including
+ // symlink mounts on top of /proc/thread-self). RESOLVE_BENEATH isn't
+ // strictly needed, but just use it since we have it.
+ //
+ // NOTE: /proc/self is technically a magic-link (the contents of the
+ // symlink are generated dynamically), but it doesn't use
+ // nd_jump_link() so RESOLVE_NO_MAGICLINKS allows it.
+ //
+ // NOTE: We MUST NOT use RESOLVE_IN_ROOT here, as openat2File uses
+ // procSelfFdReadlink to clean up the returned f.Name() if we use
+ // RESOLVE_IN_ROOT (which would lead to an infinite recursion).
+ handle, err = openat2File(procRoot, threadSelf+subpath, &unix.OpenHow{
+ Flags: unix.O_PATH | unix.O_NOFOLLOW | unix.O_CLOEXEC,
+ Resolve: unix.RESOLVE_BENEATH | unix.RESOLVE_NO_XDEV | unix.RESOLVE_NO_MAGICLINKS,
+ })
+ if err != nil {
+ return nil, nil, fmt.Errorf("%w: %w", errUnsafeProcfs, err)
+ }
+ } else {
+ handle, err = openatFile(procRoot, threadSelf+subpath, unix.O_PATH|unix.O_NOFOLLOW|unix.O_CLOEXEC, 0)
+ if err != nil {
+ return nil, nil, fmt.Errorf("%w: %w", errUnsafeProcfs, err)
+ }
+ defer func() {
+ if Err != nil {
+ _ = handle.Close()
+ }
+ }()
+ // We can't detect bind-mounts of different parts of procfs on top of
+ // /proc (a-la RESOLVE_NO_XDEV), but we can at least be sure that we
+ // aren't on the wrong filesystem here.
+ if statfs, err := fstatfs(handle); err != nil {
+ return nil, nil, err
+ } else if statfs.Type != procSuperMagic {
+ return nil, nil, fmt.Errorf("%w: incorrect /proc/self/fd filesystem type 0x%x", errUnsafeProcfs, statfs.Type)
+ }
+ }
+ return handle, runtime.UnlockOSThread, nil
+}
+
+var hasStatxMountId = sync.OnceValue(func() bool {
+ var (
+ stx unix.Statx_t
+ // We don't care which mount ID we get. The kernel will give us the
+ // unique one if it is supported.
+ wantStxMask uint32 = unix.STATX_MNT_ID_UNIQUE | unix.STATX_MNT_ID
+ )
+ err := unix.Statx(-int(unix.EBADF), "/", 0, int(wantStxMask), &stx)
+ return err == nil && stx.Mask&wantStxMask != 0
+})
+
+func getMountId(dir *os.File, path string) (uint64, error) {
+ // If we don't have statx(STATX_MNT_ID*) support, we can't do anything.
+ if !hasStatxMountId() {
+ return 0, nil
+ }
+
+ var (
+ stx unix.Statx_t
+ // We don't care which mount ID we get. The kernel will give us the
+ // unique one if it is supported.
+ wantStxMask uint32 = unix.STATX_MNT_ID_UNIQUE | unix.STATX_MNT_ID
+ )
+
+ err := unix.Statx(int(dir.Fd()), path, unix.AT_EMPTY_PATH|unix.AT_SYMLINK_NOFOLLOW, int(wantStxMask), &stx)
+ if stx.Mask&wantStxMask == 0 {
+ // It's not a kernel limitation; for some reason we couldn't get a
+ // mount ID. Assume it's some kind of attack.
+ err = fmt.Errorf("%w: could not get mount id", errUnsafeProcfs)
+ }
+ if err != nil {
+ return 0, &os.PathError{Op: "statx(STATX_MNT_ID_...)", Path: dir.Name() + "/" + path, Err: err}
+ }
+ return stx.Mnt_id, nil
+}
+
+func checkSymlinkOvermount(procRoot *os.File, dir *os.File, path string) error {
+ // Get the mntId of our procfs handle.
+ expectedMountId, err := getMountId(procRoot, "")
+ if err != nil {
+ return err
+ }
+ // Get the mntId of the target magic-link.
+ gotMountId, err := getMountId(dir, path)
+ if err != nil {
+ return err
+ }
+ // As long as the directory mount is alive, even with wrapping mount IDs,
+ // we would expect to see a different mount ID here. (Of course, if we're
+ // using unsafeHostProcRoot() then an attacker could change this after we
+ // did this check.)
+ if expectedMountId != gotMountId {
+ return fmt.Errorf("%w: symlink %s/%s has an overmount obscuring the real link (mount ids do not match %d != %d)", errUnsafeProcfs, dir.Name(), path, expectedMountId, gotMountId)
+ }
+ return nil
+}
+
+func doRawProcSelfFdReadlink(procRoot *os.File, fd int) (string, error) {
+ fdPath := fmt.Sprintf("fd/%d", fd)
+ procFdLink, closer, err := procThreadSelf(procRoot, fdPath)
+ if err != nil {
+ return "", fmt.Errorf("get safe /proc/thread-self/%s handle: %w", fdPath, err)
+ }
+ defer procFdLink.Close()
+ defer closer()
+
+ // Try to detect if there is a mount on top of the magic-link. Since we
+ // use the handle directly, this should be safe in general (a mount on
+ // top of the path afterwards would
+ // not affect the handle itself) and will definitely be safe if we are
+ // using privateProcRoot() (at least since Linux 5.12[1], when anonymous
+ // mount namespaces were completely isolated from external mounts including
+ // mount propagation events).
+ //
+ // [1]: Linux commit ee2e3f50629f ("mount: fix mounting of detached mounts
+ // onto targets that reside on shared mounts").
+ if err := checkSymlinkOvermount(procRoot, procFdLink, ""); err != nil {
+ return "", fmt.Errorf("check safety of /proc/thread-self/fd/%d magiclink: %w", fd, err)
+ }
+
+ // readlinkat implies AT_EMPTY_PATH since Linux 2.6.39. See Linux commit
+ // 65cfc6722361 ("readlinkat(), fchownat() and fstatat() with empty
+ // relative pathnames").
+ return readlinkatFile(procFdLink, "")
+}
+
+func rawProcSelfFdReadlink(fd int) (string, error) {
+ procRoot, err := getProcRoot()
+ if err != nil {
+ return "", err
+ }
+ return doRawProcSelfFdReadlink(procRoot, fd)
+}
+
+func procSelfFdReadlink(f *os.File) (string, error) {
+ return rawProcSelfFdReadlink(int(f.Fd()))
+}
+
+var (
+ errPossibleBreakout = errors.New("possible breakout detected")
+ errInvalidDirectory = errors.New("wandered into deleted directory")
+ errDeletedInode = errors.New("cannot verify path of deleted inode")
+)
+
+func isDeadInode(file *os.File) error {
+ // If the nlink of a file drops to 0, there is an attacker deleting
+ // directories during our walk, which could result in weird /proc values.
+ // It's better to error out in this case.
+ stat, err := fstat(file)
+ if err != nil {
+ return fmt.Errorf("check for dead inode: %w", err)
+ }
+ if stat.Nlink == 0 {
+ err := errDeletedInode
+ if stat.Mode&unix.S_IFMT == unix.S_IFDIR {
+ err = errInvalidDirectory
+ }
+ return fmt.Errorf("%w %q", err, file.Name())
+ }
+ return nil
+}
+
+func checkProcSelfFdPath(path string, file *os.File) error {
+ if err := isDeadInode(file); err != nil {
+ return err
+ }
+ actualPath, err := procSelfFdReadlink(file)
+ if err != nil {
+ return fmt.Errorf("get path of handle: %w", err)
+ }
+ if actualPath != path {
+ return fmt.Errorf("%w: handle path %q doesn't match expected path %q", errPossibleBreakout, actualPath, path)
+ }
+ return nil
+}
+
+// Test hooks used in the procfs tests to verify that the fallback logic works.
+// See testing_mocks_linux_test.go and procfs_linux_test.go for more details.
+var (
+ hookForcePrivateProcRootOpenTree = hookDummyFile
+ hookForcePrivateProcRootOpenTreeAtRecursive = hookDummyFile
+ hookForceGetProcRootUnsafe = hookDummy
+
+ hookForceProcSelfTask = hookDummy
+ hookForceProcSelf = hookDummy
+)
+
+func hookDummy() bool { return false }
+func hookDummyFile(_ *os.File) bool { return false }
diff --git a/vendor/github.com/cyphar/filepath-securejoin/vfs.go b/vendor/github.com/cyphar/filepath-securejoin/vfs.go
new file mode 100644
index 000000000..36373f8c5
--- /dev/null
+++ b/vendor/github.com/cyphar/filepath-securejoin/vfs.go
@@ -0,0 +1,35 @@
+// Copyright (C) 2017-2024 SUSE LLC. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package securejoin
+
+import "os"
+
+// In future this should be moved into a separate package, because now there
+// are several projects (umoci and go-mtree) that are using this sort of
+// interface.
+
+// VFS is the minimal interface necessary to use [SecureJoinVFS]. A nil VFS is
+// equivalent to using the standard [os].* family of functions. This is mainly
+// used for the purposes of mock testing, but also can be used to otherwise use
+// [SecureJoinVFS] with a VFS-like system.
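+//
+// For example, a test might wrap the real filesystem to observe lookups (this
+// type is hypothetical, not part of the package):
+//
+//	// loggingVFS delegates to the os package but records each Readlink call.
+//	type loggingVFS struct{ calls *[]string }
+//
+//	func (v loggingVFS) Lstat(name string) (os.FileInfo, error) { return os.Lstat(name) }
+//
+//	func (v loggingVFS) Readlink(name string) (string, error) {
+//		*v.calls = append(*v.calls, name)
+//		return os.Readlink(name)
+//	}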
+type VFS interface {
+ // Lstat returns an [os.FileInfo] describing the named file. If the
+ // file is a symbolic link, the returned [os.FileInfo] describes the
+ // symbolic link. Lstat makes no attempt to follow the link.
+ // The semantics are identical to [os.Lstat].
+ Lstat(name string) (os.FileInfo, error)
+
+ // Readlink returns the destination of the named symbolic link.
+ // The semantics are identical to [os.Readlink].
+ Readlink(name string) (string, error)
+}
+
+// osVFS is the "nil" VFS, in that it just passes everything through to the os
+// module.
+type osVFS struct{}
+
+func (o osVFS) Lstat(name string) (os.FileInfo, error) { return os.Lstat(name) }
+
+func (o osVFS) Readlink(name string) (string, error) { return os.Readlink(name) }
diff --git a/vendor/github.com/distribution/reference/.gitattributes b/vendor/github.com/distribution/reference/.gitattributes
new file mode 100644
index 000000000..d207b1802
--- /dev/null
+++ b/vendor/github.com/distribution/reference/.gitattributes
@@ -0,0 +1 @@
+*.go text eol=lf
diff --git a/vendor/github.com/distribution/reference/.gitignore b/vendor/github.com/distribution/reference/.gitignore
new file mode 100644
index 000000000..dc07e6b04
--- /dev/null
+++ b/vendor/github.com/distribution/reference/.gitignore
@@ -0,0 +1,2 @@
+# Cover profiles
+*.out
diff --git a/vendor/github.com/distribution/reference/.golangci.yml b/vendor/github.com/distribution/reference/.golangci.yml
new file mode 100644
index 000000000..793f0bb7e
--- /dev/null
+++ b/vendor/github.com/distribution/reference/.golangci.yml
@@ -0,0 +1,18 @@
+linters:
+ enable:
+ - bodyclose
+ - dupword # Checks for duplicate words in the source code
+ - gofmt
+ - goimports
+ - ineffassign
+ - misspell
+ - revive
+ - staticcheck
+ - unconvert
+ - unused
+ - vet
+ disable:
+ - errcheck
+
+run:
+ deadline: 2m
diff --git a/vendor/github.com/distribution/reference/CODE-OF-CONDUCT.md b/vendor/github.com/distribution/reference/CODE-OF-CONDUCT.md
new file mode 100644
index 000000000..48f6704c6
--- /dev/null
+++ b/vendor/github.com/distribution/reference/CODE-OF-CONDUCT.md
@@ -0,0 +1,5 @@
+# Code of Conduct
+
+We follow the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
+
+Please contact the [CNCF Code of Conduct Committee](mailto:conduct@cncf.io) in order to report violations of the Code of Conduct.
diff --git a/vendor/github.com/distribution/reference/CONTRIBUTING.md b/vendor/github.com/distribution/reference/CONTRIBUTING.md
new file mode 100644
index 000000000..ab2194665
--- /dev/null
+++ b/vendor/github.com/distribution/reference/CONTRIBUTING.md
@@ -0,0 +1,114 @@
+# Contributing to the reference library
+
+## Community help
+
+If you need help, please ask in the [#distribution](https://cloud-native.slack.com/archives/C01GVR8SY4R) channel on CNCF community slack.
+[Click here for an invite to the CNCF community slack](https://slack.cncf.io/)
+
+## Reporting security issues
+
+The maintainers take security seriously. If you discover a security
+issue, please bring it to their attention right away!
+
+Please **DO NOT** file a public issue, instead send your report privately to
+[cncf-distribution-security@lists.cncf.io](mailto:cncf-distribution-security@lists.cncf.io).
+
+## Reporting an issue properly
+
+By following these simple rules you will get better and faster feedback on your issue.
+
+ - search the bugtracker for an already reported issue
+
+### If you found an issue that describes your problem:
+
+ - please read other user comments first, and confirm this is the same issue: a given error condition might be indicative of different problems - you may also find a workaround in the comments
+ - please refrain from adding "same thing here" or "+1" comments
+ - you don't need to comment on an issue to get notified of updates: just hit the "subscribe" button
+ - comment if you have some new, technical and relevant information to add to the case
+ - __DO NOT__ comment on closed issues or merged PRs. If you think you have a related problem, open up a new issue and reference the PR or issue.
+
+### If you have not found an existing issue that describes your problem:
+
+ 1. create a new issue, with a succinct title that describes your issue:
+ - bad title: "It doesn't work with my docker"
+ - good title: "Private registry push fail: 400 error with E_INVALID_DIGEST"
+ 2. copy the output of (or similar for other container tools):
+ - `docker version`
+ - `docker info`
+ - `docker exec registry --version`
+ 3. copy the command line you used to launch your Registry
+ 4. restart your docker daemon in debug mode (add `-D` to the daemon launch arguments)
+ 5. reproduce your problem and get your docker daemon logs showing the error
+ 6. if relevant, copy your registry logs that show the error
+ 7. provide any relevant detail about your specific Registry configuration (e.g., storage backend used)
+ 8. indicate if you are using an enterprise proxy, Nginx, or anything else between you and your Registry
+
+## Contributing Code
+
+Contributions should be made via pull requests. Pull requests will be reviewed
+by one or more maintainers or reviewers and merged when acceptable.
+
+You should follow the basic GitHub workflow:
+
+ 1. Use your own [fork](https://help.github.com/en/articles/about-forks)
+ 2. Create your [change](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#successful-changes)
+ 3. Test your code
+ 4. [Commit](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#commit-messages) your work, always [sign your commits](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#commit-messages)
+ 5. Push your change to your fork and create a [Pull Request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork)
+
+Refer to [containerd's contribution guide](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#successful-changes)
+for tips on creating a successful contribution.
+
+## Sign your work
+
+The sign-off is a simple line at the end of the explanation for the patch. Your
+signature certifies that you wrote the patch or otherwise have the right to pass
+it on as an open-source patch. The rules are pretty simple: if you can certify
+the below (from [developercertificate.org](http://developercertificate.org/)):
+
+```
+Developer Certificate of Origin
+Version 1.1
+
+Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
+660 York Street, Suite 102,
+San Francisco, CA 94110 USA
+
+Everyone is permitted to copy and distribute verbatim copies of this
+license document, but changing it is not allowed.
+
+Developer's Certificate of Origin 1.1
+
+By making a contribution to this project, I certify that:
+
+(a) The contribution was created in whole or in part by me and I
+ have the right to submit it under the open source license
+ indicated in the file; or
+
+(b) The contribution is based upon previous work that, to the best
+ of my knowledge, is covered under an appropriate open source
+ license and I have the right under that license to submit that
+ work with modifications, whether created in whole or in part
+ by me, under the same open source license (unless I am
+ permitted to submit under a different license), as indicated
+ in the file; or
+
+(c) The contribution was provided directly to me by some other
+ person who certified (a), (b) or (c) and I have not modified
+ it.
+
+(d) I understand and agree that this project and the contribution
+ are public and that a record of the contribution (including all
+ personal information I submit with it, including my sign-off) is
+ maintained indefinitely and may be redistributed consistent with
+ this project or the open source license(s) involved.
+```
+
+Then you just add a line to every git commit message:
+
+ Signed-off-by: Joe Smith
+
+Use your real name (sorry, no pseudonyms or anonymous contributions).
+
+If you set your `user.name` and `user.email` git configs, you can sign your
+commit automatically with `git commit -s`.
diff --git a/vendor/github.com/distribution/reference/GOVERNANCE.md b/vendor/github.com/distribution/reference/GOVERNANCE.md
new file mode 100644
index 000000000..200045b05
--- /dev/null
+++ b/vendor/github.com/distribution/reference/GOVERNANCE.md
@@ -0,0 +1,144 @@
+# distribution/reference Project Governance
+
+The Distribution [Code of Conduct](./CODE-OF-CONDUCT.md) can be found here.
+
+For specific guidance on practical contribution steps please
+see our [CONTRIBUTING.md](./CONTRIBUTING.md) guide.
+
+## Maintainership
+
+There are different types of maintainers, with different responsibilities, but
+all maintainers have 3 things in common:
+
+1) They share responsibility in the project's success.
+2) They have made a long-term, recurring time investment to improve the project.
+3) They spend that time doing whatever needs to be done, not necessarily what
+is the most interesting or fun.
+
+Maintainers are often under-appreciated, because their work is harder to appreciate.
+It's easy to appreciate a really cool and technically advanced feature. It's harder
+to appreciate the absence of bugs, the slow but steady improvement in stability,
+or the reliability of a release process. But those things distinguish a good
+project from a great one.
+
+## Reviewers
+
+A reviewer is a core role within the project.
+They share in reviewing issues and pull requests, and their LGTM counts towards
+the required LGTM count to merge a code change into the project.
+
+Reviewers are part of the organization but do not have write access.
+Becoming a reviewer is a core aspect in the journey to becoming a maintainer.
+
+## Adding maintainers
+
+Maintainers are first and foremost contributors that have shown they are
+committed to the long term success of a project. Contributors wanting to become
+maintainers are expected to be deeply involved in contributing code, pull
+request review, and triage of issues in the project for more than three months.
+
+Just contributing does not make you a maintainer; it is about building trust
+with the current maintainers of the project and being a person that they can
+depend on and trust to make decisions in the best interest of the project.
+
+Periodically, the existing maintainers curate a list of contributors that have
+shown regular activity on the project over the prior months. From this list,
+maintainer candidates are selected and proposed in a pull request or a
+maintainers communication channel.
+
+After a candidate has been announced to the maintainers, the existing
+maintainers are given five business days to discuss the candidate, raise
+objections and cast their vote. Votes may take place on the communication
+channel or via pull request comment. Candidates must be approved by at least 66%
+of the current maintainers by adding their vote on the mailing list. The
+reviewer role has the same process but only requires 33% of current maintainers.
+Only maintainers of the repository that the candidate is proposed for are
+allowed to vote.
+
+If a candidate is approved, a maintainer will contact the candidate to invite
+the candidate to open a pull request that adds the contributor to the
+MAINTAINERS file. The voting process may take place inside a pull request if a
+maintainer has already discussed the candidacy with the candidate and a
+maintainer is willing to be a sponsor by opening the pull request. The candidate
+becomes a maintainer once the pull request is merged.
+
+## Stepping down policy
+
+Life priorities, interests, and passions can change. If you're a maintainer but
+feel you must remove yourself from the list, inform other maintainers that you
+intend to step down, and if possible, help find someone to pick up your work.
+At the very least, ensure your work can be continued where you left off.
+
+After you've informed other maintainers, create a pull request to remove
+yourself from the MAINTAINERS file.
+
+## Removal of inactive maintainers
+
+Similar to the procedure for adding new maintainers, existing maintainers can
+be removed from the list if they do not show significant activity on the
+project. Periodically, the maintainers review the list of maintainers and their
+activity over the last three months.
+
+If a maintainer has shown insufficient activity over this period, a neutral
+person will contact the maintainer to ask if they want to continue being
+a maintainer. If the maintainer decides to step down as a maintainer, they
+open a pull request to be removed from the MAINTAINERS file.
+
+If the maintainer wants to remain a maintainer, but is unable to perform the
+required duties they can be removed with a vote of at least 66% of the current
+maintainers. In this case, maintainers should first propose the change to
+maintainers via the maintainers communication channel, then open a pull request
+for voting. The voting period is five business days. The voting pull request
+should not come as a surprise to any maintainer and any discussion related to
+performance must not be discussed on the pull request.
+
+## How are decisions made?
+
+Docker distribution is an open-source project with an open design philosophy.
+This means that the repository is the source of truth for EVERY aspect of the
+project, including its philosophy, design, road map, and APIs. *If it's part of
+the project, it's in the repo. If it's in the repo, it's part of the project.*
+
+As a result, all decisions can be expressed as changes to the repository. An
+implementation change is a change to the source code. An API change is a change
+to the API specification. A philosophy change is a change to the philosophy
+manifesto, and so on.
+
+All decisions affecting distribution, big and small, follow the same 3 steps:
+
+* Step 1: Open a pull request. Anyone can do this.
+
+* Step 2: Discuss the pull request. Anyone can do this.
+
+* Step 3: Merge or refuse the pull request. Who does this depends on the nature
+of the pull request and which areas of the project it affects.
+
+## Helping contributors with the DCO
+
+The [DCO or `Sign your work`](./CONTRIBUTING.md#sign-your-work)
+requirement is not intended as a roadblock or speed bump.
+
+Some contributors are not as familiar with `git`, or have used a web
+based editor, and thus asking them to `git commit --amend -s` is not the best
+way forward.
+
+In this case, maintainers can update the commits based on clause (c) of the DCO.
+The most trivial way for a contributor to allow the maintainer to do this, is to
+add a DCO signature in a pull requests's comment, or a maintainer can simply
+note that the change is sufficiently trivial that it does not substantially
+change the existing contribution - i.e., a spelling change.
+
+When you add someone's DCO, please also add your own to keep a log.
+
+## I'm a maintainer. Should I make pull requests too?
+
+Yes. Nobody should ever push to master directly. All changes should be
+made through a pull request.
+
+## Conflict Resolution
+
+If you have a technical dispute that you feel has reached an impasse with a
+subset of the community, any contributor may open an issue, specifically
+calling for a resolution vote of the current core maintainers to resolve the
+dispute. The same voting quorums required (2/3) for adding and removing
+maintainers will apply to conflict resolution.
diff --git a/vendor/github.com/distribution/reference/LICENSE b/vendor/github.com/distribution/reference/LICENSE
new file mode 100644
index 000000000..e06d20818
--- /dev/null
+++ b/vendor/github.com/distribution/reference/LICENSE
@@ -0,0 +1,202 @@
+Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
diff --git a/vendor/github.com/distribution/reference/MAINTAINERS b/vendor/github.com/distribution/reference/MAINTAINERS
new file mode 100644
index 000000000..9e0a60c8b
--- /dev/null
+++ b/vendor/github.com/distribution/reference/MAINTAINERS
@@ -0,0 +1,26 @@
+# Distribution project maintainers & reviewers
+#
+# See GOVERNANCE.md for maintainer versus reviewer roles
+#
+# MAINTAINERS (cncf-distribution-maintainers@lists.cncf.io)
+# GitHub ID, Name, Email address
+"chrispat","Chris Patterson","chrispat@github.com"
+"clarkbw","Bryan Clark","clarkbw@github.com"
+"corhere","Cory Snider","csnider@mirantis.com"
+"deleteriousEffect","Hayley Swimelar","hswimelar@gitlab.com"
+"heww","He Weiwei","hweiwei@vmware.com"
+"joaodrp","João Pereira","jpereira@gitlab.com"
+"justincormack","Justin Cormack","justin.cormack@docker.com"
+"squizzi","Kyle Squizzato","ksquizzato@mirantis.com"
+"milosgajdos","Milos Gajdos","milosthegajdos@gmail.com"
+"sargun","Sargun Dhillon","sargun@sargun.me"
+"wy65701436","Wang Yan","wangyan@vmware.com"
+"stevelasker","Steve Lasker","steve.lasker@microsoft.com"
+#
+# REVIEWERS
+# GitHub ID, Name, Email address
+"dmcgowan","Derek McGowan","derek@mcgstyle.net"
+"stevvooe","Stephen Day","stevvooe@gmail.com"
+"thajeztah","Sebastiaan van Stijn","github@gone.nl"
+"DavidSpek", "David van der Spek", "vanderspek.david@gmail.com"
+"Jamstah", "James Hewitt", "james.hewitt@gmail.com"
diff --git a/vendor/github.com/distribution/reference/Makefile b/vendor/github.com/distribution/reference/Makefile
new file mode 100644
index 000000000..c78576b75
--- /dev/null
+++ b/vendor/github.com/distribution/reference/Makefile
@@ -0,0 +1,25 @@
+# Project packages.
+PACKAGES=$(shell go list ./...)
+
+# Flags passed to `go test`
+BUILDFLAGS ?=
+TESTFLAGS ?=
+
+.PHONY: all build test coverage
+.DEFAULT: all
+
+all: build
+
+build: ## no binaries to build, so just check that compilation succeeds
+ go build ${BUILDFLAGS} ./...
+
+test: ## run tests
+ go test ${TESTFLAGS} ./...
+
+coverage: ## generate coverprofiles from the unit tests
+	rm -f cover.out
+	go test ${TESTFLAGS} -cover -coverprofile=cover.out ./...
+
+.PHONY: help
+help:
+	@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_\/%-]+:.*?##/ { printf " \033[36m%-27s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
diff --git a/vendor/github.com/distribution/reference/README.md b/vendor/github.com/distribution/reference/README.md
new file mode 100644
index 000000000..172a02e0b
--- /dev/null
+++ b/vendor/github.com/distribution/reference/README.md
@@ -0,0 +1,30 @@
+# Distribution reference
+
+Go library to handle references to container images.
+
+
+
+[![Build Status](https://github.com/distribution/reference/actions/workflows/test.yml/badge.svg?branch=main&event=push)](https://github.com/distribution/reference/actions?query=workflow%3ACI)
+[![GoDoc](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/distribution/reference)
+[![License: Apache-2.0](https://img.shields.io/badge/License-Apache--2.0-blue.svg)](LICENSE)
+[![codecov](https://codecov.io/gh/distribution/reference/branch/main/graph/badge.svg)](https://codecov.io/gh/distribution/reference)
+[![FOSSA Status](https://app.fossa.com/api/projects/custom%2B162%2Fgithub.com%2Fdistribution%2Freference.svg?type=shield)](https://app.fossa.com/projects/custom%2B162%2Fgithub.com%2Fdistribution%2Freference?ref=badge_shield)
+
+This repository contains a library for handling references to container images held in container registries. Please see [godoc](https://pkg.go.dev/github.com/distribution/reference) for details.
+
+## Contribution
+
+Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute
+issues, fixes, and patches to this project.
+
+## Communication
+
+For async communication and long-running discussions, please use issues and pull requests on the GitHub repo.
+These are the best places to discuss design and implementation.
+
+For sync communication we have a #distribution channel in the [CNCF Slack](https://slack.cncf.io/)
+that everyone is welcome to join and chat about development.
+
+## Licenses
+
+The distribution codebase is released under the [Apache 2.0 license](LICENSE).
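
As a quick orientation for reviewers of this vendored package, the following is a minimal sketch of the library's main entry point, assuming the import path `github.com/distribution/reference` used throughout this diff:

```go
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

func main() {
	// "ubuntu" is a "familiar" name as typed in the Docker UI;
	// normalization expands it to its canonical form on Docker Hub.
	named, err := reference.ParseNormalizedNamed("ubuntu")
	if err != nil {
		panic(err)
	}
	fmt.Println(named.String())                  // docker.io/library/ubuntu
	fmt.Println(reference.FamiliarString(named)) // ubuntu
}
```
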
diff --git a/vendor/github.com/distribution/reference/SECURITY.md b/vendor/github.com/distribution/reference/SECURITY.md
new file mode 100644
index 000000000..aaf983c0f
--- /dev/null
+++ b/vendor/github.com/distribution/reference/SECURITY.md
@@ -0,0 +1,7 @@
+# Security Policy
+
+## Reporting a Vulnerability
+
+The maintainers take security seriously. If you discover a security issue, please bring it to their attention right away!
+
+Please DO NOT file a public issue; instead, send your report privately to cncf-distribution-security@lists.cncf.io.
diff --git a/vendor/github.com/distribution/reference/distribution-logo.svg b/vendor/github.com/distribution/reference/distribution-logo.svg
new file mode 100644
index 000000000..cc9f4073b
--- /dev/null
+++ b/vendor/github.com/distribution/reference/distribution-logo.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/vendor/github.com/distribution/reference/helpers.go b/vendor/github.com/distribution/reference/helpers.go
new file mode 100644
index 000000000..d10c7ef83
--- /dev/null
+++ b/vendor/github.com/distribution/reference/helpers.go
@@ -0,0 +1,42 @@
+package reference
+
+import "path"
+
+// IsNameOnly returns true if reference only contains a repo name.
+func IsNameOnly(ref Named) bool {
+ if _, ok := ref.(NamedTagged); ok {
+ return false
+ }
+ if _, ok := ref.(Canonical); ok {
+ return false
+ }
+ return true
+}
+
+// FamiliarName returns the familiar name string
+// for the given Named reference, familiarizing if needed.
+func FamiliarName(ref Named) string {
+ if nn, ok := ref.(normalizedNamed); ok {
+ return nn.Familiar().Name()
+ }
+ return ref.Name()
+}
+
+// FamiliarString returns the familiar string representation
+// for the given reference, familiarizing if needed.
+func FamiliarString(ref Reference) string {
+ if nn, ok := ref.(normalizedNamed); ok {
+ return nn.Familiar().String()
+ }
+ return ref.String()
+}
+
+// FamiliarMatch reports whether ref matches the specified pattern.
+// See [path.Match] for supported patterns.
+func FamiliarMatch(pattern string, ref Reference) (bool, error) {
+ matched, err := path.Match(pattern, FamiliarString(ref))
+ if namedRef, isNamed := ref.(Named); isNamed && !matched {
+ matched, _ = path.Match(pattern, FamiliarName(namedRef))
+ }
+ return matched, err
+}
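
A small sketch of how these helpers behave, under the same import-path assumption as above; the `redis:7` reference and the `redis*` pattern are arbitrary examples:

```go
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

func main() {
	// The tag makes this reference a NamedTagged, so IsNameOnly is false.
	ref, err := reference.ParseNormalizedNamed("docker.io/library/redis:7")
	if err != nil {
		panic(err)
	}
	fmt.Println(reference.IsNameOnly(ref))     // false
	fmt.Println(reference.FamiliarName(ref))   // redis
	fmt.Println(reference.FamiliarString(ref)) // redis:7

	// FamiliarMatch applies path.Match patterns against the familiar form.
	ok, err := reference.FamiliarMatch("redis*", ref)
	fmt.Println(ok, err) // true <nil>
}
```
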
diff --git a/vendor/github.com/distribution/reference/normalize.go b/vendor/github.com/distribution/reference/normalize.go
new file mode 100644
index 000000000..f4128314c
--- /dev/null
+++ b/vendor/github.com/distribution/reference/normalize.go
@@ -0,0 +1,255 @@
+package reference
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/opencontainers/go-digest"
+)
+
+const (
+ // legacyDefaultDomain is the legacy domain for Docker Hub (which was
+ // originally named "the Docker Index"). This domain is still used for
+ // authentication and image search, which were part of the "v1" Docker
+ // registry specification.
+ //
+ // This domain will continue to be supported, but there are plans to consolidate
+ // legacy domains to new "canonical" domains. Once those domains are decided
+ // on, we must update the normalization functions, but preserve compatibility
+ // with existing installs, clients, and user configuration.
+ legacyDefaultDomain = "index.docker.io"
+
+ // defaultDomain is the default domain used for images on Docker Hub.
+ // It is used to normalize "familiar" names to canonical names, for example,
+ // to convert "ubuntu" to "docker.io/library/ubuntu:latest".
+ //
+// Note that the actual domain of Docker Hub's registry is registry-1.docker.io.
+ // This domain will continue to be supported, but there are plans to consolidate
+ // legacy domains to new "canonical" domains. Once those domains are decided
+ // on, we must update the normalization functions, but preserve compatibility
+ // with existing installs, clients, and user configuration.
+ defaultDomain = "docker.io"
+
+ // officialRepoPrefix is the namespace used for official images on Docker Hub.
+ // It is used to normalize "familiar" names to canonical names, for example,
+ // to convert "ubuntu" to "docker.io/library/ubuntu:latest".
+ officialRepoPrefix = "library/"
+
+ // defaultTag is the default tag if no tag is provided.
+ defaultTag = "latest"
+)
+
+// normalizedNamed represents a name which has been
+// normalized and has a familiar form. A familiar name
+// is what is used in Docker UI. An example normalized
+// name is "docker.io/library/ubuntu" and corresponding
+// familiar name of "ubuntu".
+type normalizedNamed interface {
+ Named
+ Familiar() Named
+}
+
+// ParseNormalizedNamed parses a string into a named reference,
+// transforming a familiar name from the Docker UI into a fully
+// qualified reference. If the value may be an identifier,
+// use ParseAnyReference.
+func ParseNormalizedNamed(s string) (Named, error) {
+ if ok := anchoredIdentifierRegexp.MatchString(s); ok {
+		return nil, fmt.Errorf("invalid repository name (%s), cannot specify 64-character hexadecimal strings", s)
+ }
+ domain, remainder := splitDockerDomain(s)
+ var remote string
+ if tagSep := strings.IndexRune(remainder, ':'); tagSep > -1 {
+ remote = remainder[:tagSep]
+ } else {
+ remote = remainder
+ }
+ if strings.ToLower(remote) != remote {
+ return nil, fmt.Errorf("invalid reference format: repository name (%s) must be lowercase", remote)
+ }
+
+ ref, err := Parse(domain + "/" + remainder)
+ if err != nil {
+ return nil, err
+ }
+ named, isNamed := ref.(Named)
+ if !isNamed {
+ return nil, fmt.Errorf("reference %s has no name", ref.String())
+ }
+ return named, nil
+}
+
+// namedTaggedDigested is a reference that has both a tag and a digest.
+type namedTaggedDigested interface {
+ NamedTagged
+ Digested
+}
+
+// ParseDockerRef normalizes the image reference following the docker convention,
+// which allows for references to contain both a tag and a digest. It returns a
+// reference that is either tagged or digested. For references containing both
+// a tag and a digest, it returns a digested reference. For example, the following
+// reference:
+//
+// docker.io/library/busybox:latest@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa
+//
+// Is returned as a digested reference (with the ":latest" tag removed):
+//
+// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa
+//
+// References that are already "tagged" or "digested" are returned unmodified:
+//
+// // Already a digested reference
+// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa
+//
+// // Already a tagged reference
+// docker.io/library/busybox:latest
+func ParseDockerRef(ref string) (Named, error) {
+ named, err := ParseNormalizedNamed(ref)
+ if err != nil {
+ return nil, err
+ }
+ if canonical, ok := named.(namedTaggedDigested); ok {
+ // The reference is both tagged and digested; only return digested.
+ newNamed, err := WithName(canonical.Name())
+ if err != nil {
+ return nil, err
+ }
+ return WithDigest(newNamed, canonical.Digest())
+ }
+ return TagNameOnly(named), nil
+}
+
+// splitDockerDomain splits a repository name into domain and remote-name.
+// If no valid domain is found, the default domain is used. The repository
+// name must already be validated before calling this function.
+func splitDockerDomain(name string) (domain, remoteName string) {
+ maybeDomain, maybeRemoteName, ok := strings.Cut(name, "/")
+ if !ok {
+ // Fast-path for single element ("familiar" names), such as "ubuntu"
+ // or "ubuntu:latest". Familiar names must be handled separately, to
+ // prevent them from being handled as "hostname:port".
+ //
+ // Canonicalize them as "docker.io/library/name[:tag]"
+
+ // FIXME(thaJeztah): account for bare "localhost" or "example.com" names, which SHOULD be considered a domain.
+ return defaultDomain, officialRepoPrefix + name
+ }
+
+ switch {
+ case maybeDomain == localhost:
+ // localhost is a reserved namespace and always considered a domain.
+ domain, remoteName = maybeDomain, maybeRemoteName
+ case maybeDomain == legacyDefaultDomain:
+ // canonicalize the Docker Hub and legacy "Docker Index" domains.
+ domain, remoteName = defaultDomain, maybeRemoteName
+ case strings.ContainsAny(maybeDomain, ".:"):
+ // Likely a domain or IP-address:
+ //
+ // - contains a "." (e.g., "example.com" or "127.0.0.1")
+ // - contains a ":" (e.g., "example:5000", "::1", or "[::1]:5000")
+ domain, remoteName = maybeDomain, maybeRemoteName
+ case strings.ToLower(maybeDomain) != maybeDomain:
+ // Uppercase namespaces are not allowed, so if the first element
+ // is not lowercase, we assume it to be a domain-name.
+ domain, remoteName = maybeDomain, maybeRemoteName
+ default:
+		// None of the above: it's not a domain, so use the default, and
+		// use the name input as the remote-name.
+ domain, remoteName = defaultDomain, name
+ }
+
+ if domain == defaultDomain && !strings.ContainsRune(remoteName, '/') {
+ // Canonicalize "familiar" names, but only on Docker Hub, not
+ // on other domains:
+ //
+ // "docker.io/ubuntu[:tag]" => "docker.io/library/ubuntu[:tag]"
+ remoteName = officialRepoPrefix + remoteName
+ }
+
+ return domain, remoteName
+}
+
+// familiarizeName returns a shortened version of the name familiar
+// to the Docker UI. Familiar names have the default domain
+// "docker.io" and "library/" repository prefix removed.
+// For example, "docker.io/library/redis" will have the familiar
+// name "redis" and "docker.io/dmcgowan/myapp" will be "dmcgowan/myapp".
+// Returns a familiarized named only reference.
+func familiarizeName(named namedRepository) repository {
+ repo := repository{
+ domain: named.Domain(),
+ path: named.Path(),
+ }
+
+ if repo.domain == defaultDomain {
+ repo.domain = ""
+ // Handle official repositories which have the pattern "library/"
+ if strings.HasPrefix(repo.path, officialRepoPrefix) {
+ // TODO(thaJeztah): this check may be too strict, as it assumes the
+ // "library/" namespace does not have nested namespaces. While this
+ // is true (currently), technically it would be possible for Docker
+ // Hub to use those (e.g. "library/distros/ubuntu:latest").
+ // See https://github.com/distribution/distribution/pull/3769#issuecomment-1302031785.
+ if remainder := strings.TrimPrefix(repo.path, officialRepoPrefix); !strings.ContainsRune(remainder, '/') {
+ repo.path = remainder
+ }
+ }
+ }
+ return repo
+}
+
+func (r reference) Familiar() Named {
+ return reference{
+ namedRepository: familiarizeName(r.namedRepository),
+ tag: r.tag,
+ digest: r.digest,
+ }
+}
+
+func (r repository) Familiar() Named {
+ return familiarizeName(r)
+}
+
+func (t taggedReference) Familiar() Named {
+ return taggedReference{
+ namedRepository: familiarizeName(t.namedRepository),
+ tag: t.tag,
+ }
+}
+
+func (c canonicalReference) Familiar() Named {
+ return canonicalReference{
+ namedRepository: familiarizeName(c.namedRepository),
+ digest: c.digest,
+ }
+}
+
+// TagNameOnly adds the default tag "latest" to a reference if it only has
+// a repo name.
+func TagNameOnly(ref Named) Named {
+ if IsNameOnly(ref) {
+ namedTagged, err := WithTag(ref, defaultTag)
+ if err != nil {
+			// The default tag is known to be valid, so err is never
+			// expected here; callers attaching non-validated tags
+			// should call WithTag directly and handle the error.
+ panic(err)
+ }
+ return namedTagged
+ }
+ return ref
+}
+
+// ParseAnyReference parses a reference string as a possible identifier,
+// full digest, or familiar name.
+func ParseAnyReference(ref string) (Reference, error) {
+ if ok := anchoredIdentifierRegexp.MatchString(ref); ok {
+ return digestReference("sha256:" + ref), nil
+ }
+ if dgst, err := digest.Parse(ref); err == nil {
+ return digestReference(dgst), nil
+ }
+
+ return ParseNormalizedNamed(ref)
+}
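
To make the normalization rules above concrete, here is a minimal sketch exercising ParseDockerRef, TagNameOnly, and ParseAnyReference; it reuses the busybox digest from the doc comment above, and the image names are illustrative:

```go
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

func main() {
	const dgst = "sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa"

	// Tag and digest together: ParseDockerRef keeps only the digest.
	ref, err := reference.ParseDockerRef("busybox:latest@" + dgst)
	if err != nil {
		panic(err)
	}
	fmt.Println(ref.String()) // docker.io/library/busybox@sha256:7cc4...

	// TagNameOnly adds ":latest" to name-only references.
	named, err := reference.ParseNormalizedNamed("alpine")
	if err != nil {
		panic(err)
	}
	fmt.Println(reference.TagNameOnly(named).String()) // docker.io/library/alpine:latest

	// A bare 64-character hex identifier is accepted by ParseAnyReference
	// and treated as a sha256 digest.
	id, err := reference.ParseAnyReference("7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa")
	if err != nil {
		panic(err)
	}
	fmt.Println(id.String()) // sha256:7cc4...
}
```
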
diff --git a/vendor/github.com/distribution/reference/reference.go b/vendor/github.com/distribution/reference/reference.go
new file mode 100644
index 000000000..900398bde
--- /dev/null
+++ b/vendor/github.com/distribution/reference/reference.go
@@ -0,0 +1,432 @@
+// Package reference provides a general type to represent any way of referencing images within the registry.
+// Its main purpose is to abstract tags and digests (content-addressable hash).
+//
+// Grammar
+//
+// reference := name [ ":" tag ] [ "@" digest ]
+// name := [domain '/'] remote-name
+// domain := host [':' port-number]
+// host := domain-name | IPv4address | \[ IPv6address \] ; rfc3986 appendix-A
+// domain-name := domain-component ['.' domain-component]*
+// domain-component := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/
+// port-number := /[0-9]+/
+// path-component := alpha-numeric [separator alpha-numeric]*
+// path (or "remote-name") := path-component ['/' path-component]*
+// alpha-numeric := /[a-z0-9]+/
+// separator := /[_.]|__|[-]*/
+//
+// tag := /[\w][\w.-]{0,127}/
+//
+// digest := digest-algorithm ":" digest-hex
+// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]*
+// digest-algorithm-separator := /[+.-_]/
+// digest-algorithm-component := /[A-Za-z][A-Za-z0-9]*/
+// digest-hex := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value
+//
+// identifier := /[a-f0-9]{64}/
+package reference
+
+import (
+ "errors"
+ "fmt"
+ "strings"
+
+ "github.com/opencontainers/go-digest"
+)
+
+const (
+ // RepositoryNameTotalLengthMax is the maximum total number of characters in a repository name.
+ RepositoryNameTotalLengthMax = 255
+
+ // NameTotalLengthMax is the maximum total number of characters in a repository name.
+ //
+ // Deprecated: use [RepositoryNameTotalLengthMax] instead.
+ NameTotalLengthMax = RepositoryNameTotalLengthMax
+)
+
+var (
+ // ErrReferenceInvalidFormat represents an error while trying to parse a string as a reference.
+ ErrReferenceInvalidFormat = errors.New("invalid reference format")
+
+ // ErrTagInvalidFormat represents an error while trying to parse a string as a tag.
+ ErrTagInvalidFormat = errors.New("invalid tag format")
+
+	// ErrDigestInvalidFormat represents an error while trying to parse a string as a digest.
+	ErrDigestInvalidFormat = errors.New("invalid digest format")
+
+ // ErrNameContainsUppercase is returned for invalid repository names that contain uppercase characters.
+ ErrNameContainsUppercase = errors.New("repository name must be lowercase")
+
+ // ErrNameEmpty is returned for empty, invalid repository names.
+ ErrNameEmpty = errors.New("repository name must have at least one component")
+
+ // ErrNameTooLong is returned when a repository name is longer than RepositoryNameTotalLengthMax.
+ ErrNameTooLong = fmt.Errorf("repository name must not be more than %v characters", RepositoryNameTotalLengthMax)
+
+ // ErrNameNotCanonical is returned when a name is not canonical.
+ ErrNameNotCanonical = errors.New("repository name must be canonical")
+)
+
+// Reference is an opaque object reference identifier that may include
+// modifiers such as a hostname, name, tag, and digest.
+type Reference interface {
+ // String returns the full reference
+ String() string
+}
+
+// Field provides a wrapper type for resolving correct reference types when
+// working with encoding.
+type Field struct {
+ reference Reference
+}
+
+// AsField wraps a reference in a Field for encoding.
+func AsField(reference Reference) Field {
+ return Field{reference}
+}
+
+// Reference unwraps the reference type from the field to
+// return the Reference object. This object should be
+// of the appropriate type to further check for different
+// reference types.
+func (f Field) Reference() Reference {
+ return f.reference
+}
+
+// MarshalText serializes the field to byte text which
+// is the string of the reference.
+func (f Field) MarshalText() (p []byte, err error) {
+ return []byte(f.reference.String()), nil
+}
+
+// UnmarshalText parses text bytes by invoking the
+// reference parser to ensure the appropriately
+// typed reference object is wrapped by field.
+func (f *Field) UnmarshalText(p []byte) error {
+ r, err := Parse(string(p))
+ if err != nil {
+ return err
+ }
+
+ f.reference = r
+ return nil
+}
+
+// Named is an object with a full name
+type Named interface {
+ Reference
+ Name() string
+}
+
+// Tagged is an object which has a tag
+type Tagged interface {
+ Reference
+ Tag() string
+}
+
+// NamedTagged is an object including a name and tag.
+type NamedTagged interface {
+ Named
+ Tag() string
+}
+
+// Digested is an object which has a digest
+// in which it can be referenced by
+type Digested interface {
+ Reference
+ Digest() digest.Digest
+}
+
+// Canonical reference is an object with a fully-qualified,
+// unique name, including domain and digest.
+type Canonical interface {
+ Named
+ Digest() digest.Digest
+}
+
+// namedRepository is a reference to a repository with a name.
+// A namedRepository has both domain and path components.
+type namedRepository interface {
+ Named
+ Domain() string
+ Path() string
+}
+
+// Domain returns the domain part of the [Named] reference.
+func Domain(named Named) string {
+ if r, ok := named.(namedRepository); ok {
+ return r.Domain()
+ }
+ domain, _ := splitDomain(named.Name())
+ return domain
+}
+
+// Path returns the name without the domain part of the [Named] reference.
+func Path(named Named) (name string) {
+ if r, ok := named.(namedRepository); ok {
+ return r.Path()
+ }
+ _, path := splitDomain(named.Name())
+ return path
+}
+
+// splitDomain splits a named reference into a hostname and path string.
+// If no valid hostname is found, the hostname is empty and the full value
+// is returned as the name.
+func splitDomain(name string) (string, string) {
+ match := anchoredNameRegexp.FindStringSubmatch(name)
+ if len(match) != 3 {
+ return "", name
+ }
+ return match[1], match[2]
+}
+
+// Parse parses s and returns a syntactically valid Reference.
+// If an error was encountered it is returned, along with a nil Reference.
+func Parse(s string) (Reference, error) {
+ matches := ReferenceRegexp.FindStringSubmatch(s)
+ if matches == nil {
+ if s == "" {
+ return nil, ErrNameEmpty
+ }
+ if ReferenceRegexp.FindStringSubmatch(strings.ToLower(s)) != nil {
+ return nil, ErrNameContainsUppercase
+ }
+ return nil, ErrReferenceInvalidFormat
+ }
+
+ var repo repository
+
+ nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
+ if len(nameMatch) == 3 {
+ repo.domain = nameMatch[1]
+ repo.path = nameMatch[2]
+ } else {
+ repo.domain = ""
+ repo.path = matches[1]
+ }
+
+ if len(repo.path) > RepositoryNameTotalLengthMax {
+ return nil, ErrNameTooLong
+ }
+
+ ref := reference{
+ namedRepository: repo,
+ tag: matches[2],
+ }
+ if matches[3] != "" {
+ var err error
+ ref.digest, err = digest.Parse(matches[3])
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ r := getBestReferenceType(ref)
+ if r == nil {
+ return nil, ErrNameEmpty
+ }
+
+ return r, nil
+}
+
+// ParseNamed parses s and returns a syntactically valid reference implementing
+// the Named interface. The reference must have a name and be in the canonical
+// form, otherwise an error is returned.
+// If an error was encountered it is returned, along with a nil Reference.
+func ParseNamed(s string) (Named, error) {
+ named, err := ParseNormalizedNamed(s)
+ if err != nil {
+ return nil, err
+ }
+ if named.String() != s {
+ return nil, ErrNameNotCanonical
+ }
+ return named, nil
+}
+
+// WithName returns a named object representing the given string. If the input
+// is invalid, ErrReferenceInvalidFormat will be returned.
+func WithName(name string) (Named, error) {
+ match := anchoredNameRegexp.FindStringSubmatch(name)
+ if match == nil || len(match) != 3 {
+ return nil, ErrReferenceInvalidFormat
+ }
+
+ if len(match[2]) > RepositoryNameTotalLengthMax {
+ return nil, ErrNameTooLong
+ }
+
+ return repository{
+ domain: match[1],
+ path: match[2],
+ }, nil
+}
+
+// WithTag combines the name from "name" and the tag from "tag" to form a
+// reference incorporating both the name and the tag.
+func WithTag(name Named, tag string) (NamedTagged, error) {
+ if !anchoredTagRegexp.MatchString(tag) {
+ return nil, ErrTagInvalidFormat
+ }
+ var repo repository
+ if r, ok := name.(namedRepository); ok {
+ repo.domain = r.Domain()
+ repo.path = r.Path()
+ } else {
+ repo.path = name.Name()
+ }
+ if canonical, ok := name.(Canonical); ok {
+ return reference{
+ namedRepository: repo,
+ tag: tag,
+ digest: canonical.Digest(),
+ }, nil
+ }
+ return taggedReference{
+ namedRepository: repo,
+ tag: tag,
+ }, nil
+}
+
+// WithDigest combines the name from "name" and the digest from "digest" to form
+// a reference incorporating both the name and the digest.
+func WithDigest(name Named, digest digest.Digest) (Canonical, error) {
+ if !anchoredDigestRegexp.MatchString(digest.String()) {
+ return nil, ErrDigestInvalidFormat
+ }
+ var repo repository
+ if r, ok := name.(namedRepository); ok {
+ repo.domain = r.Domain()
+ repo.path = r.Path()
+ } else {
+ repo.path = name.Name()
+ }
+ if tagged, ok := name.(Tagged); ok {
+ return reference{
+ namedRepository: repo,
+ tag: tagged.Tag(),
+ digest: digest,
+ }, nil
+ }
+ return canonicalReference{
+ namedRepository: repo,
+ digest: digest,
+ }, nil
+}
+
+// TrimNamed removes any tag or digest from the named reference.
+func TrimNamed(ref Named) Named {
+ repo := repository{}
+ if r, ok := ref.(namedRepository); ok {
+ repo.domain, repo.path = r.Domain(), r.Path()
+ } else {
+ repo.domain, repo.path = splitDomain(ref.Name())
+ }
+ return repo
+}
+
+func getBestReferenceType(ref reference) Reference {
+ if ref.Name() == "" {
+ // Allow digest only references
+ if ref.digest != "" {
+ return digestReference(ref.digest)
+ }
+ return nil
+ }
+ if ref.tag == "" {
+ if ref.digest != "" {
+ return canonicalReference{
+ namedRepository: ref.namedRepository,
+ digest: ref.digest,
+ }
+ }
+ return ref.namedRepository
+ }
+ if ref.digest == "" {
+ return taggedReference{
+ namedRepository: ref.namedRepository,
+ tag: ref.tag,
+ }
+ }
+
+ return ref
+}
+
+type reference struct {
+ namedRepository
+ tag string
+ digest digest.Digest
+}
+
+func (r reference) String() string {
+ return r.Name() + ":" + r.tag + "@" + r.digest.String()
+}
+
+func (r reference) Tag() string {
+ return r.tag
+}
+
+func (r reference) Digest() digest.Digest {
+ return r.digest
+}
+
+type repository struct {
+ domain string
+ path string
+}
+
+func (r repository) String() string {
+ return r.Name()
+}
+
+func (r repository) Name() string {
+ if r.domain == "" {
+ return r.path
+ }
+ return r.domain + "/" + r.path
+}
+
+func (r repository) Domain() string {
+ return r.domain
+}
+
+func (r repository) Path() string {
+ return r.path
+}
+
+type digestReference digest.Digest
+
+func (d digestReference) String() string {
+ return digest.Digest(d).String()
+}
+
+func (d digestReference) Digest() digest.Digest {
+ return digest.Digest(d)
+}
+
+type taggedReference struct {
+ namedRepository
+ tag string
+}
+
+func (t taggedReference) String() string {
+ return t.Name() + ":" + t.tag
+}
+
+func (t taggedReference) Tag() string {
+ return t.tag
+}
+
+type canonicalReference struct {
+ namedRepository
+ digest digest.Digest
+}
+
+func (c canonicalReference) String() string {
+ return c.Name() + "@" + c.digest.String()
+}
+
+func (c canonicalReference) Digest() digest.Digest {
+ return c.digest
+}
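
A minimal sketch of composing and decomposing references with the constructors above; `registry.example.com:5000/team/app` is a hypothetical repository, and the digest is reused from the normalize.go example:

```go
package main

import (
	"fmt"

	"github.com/distribution/reference"
	"github.com/opencontainers/go-digest"
)

func main() {
	// A hypothetical private registry with a port, team namespace, and app.
	named, err := reference.WithName("registry.example.com:5000/team/app")
	if err != nil {
		panic(err)
	}
	fmt.Println(reference.Domain(named)) // registry.example.com:5000
	fmt.Println(reference.Path(named))   // team/app

	tagged, err := reference.WithTag(named, "v1.2.3")
	if err != nil {
		panic(err)
	}
	fmt.Println(tagged.String()) // registry.example.com:5000/team/app:v1.2.3

	canonical, err := reference.WithDigest(named,
		digest.Digest("sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa"))
	if err != nil {
		panic(err)
	}
	fmt.Println(canonical.String()) // registry.example.com:5000/team/app@sha256:7cc4...

	// TrimNamed strips tag and digest back down to the repository name.
	fmt.Println(reference.TrimNamed(tagged).String()) // registry.example.com:5000/team/app
}
```
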
diff --git a/vendor/github.com/distribution/reference/regexp.go b/vendor/github.com/distribution/reference/regexp.go
new file mode 100644
index 000000000..65bc49d79
--- /dev/null
+++ b/vendor/github.com/distribution/reference/regexp.go
@@ -0,0 +1,163 @@
+package reference
+
+import (
+ "regexp"
+ "strings"
+)
+
+// DigestRegexp matches well-formed digests, including algorithm (e.g. "sha256:").
+var DigestRegexp = regexp.MustCompile(digestPat)
+
+// DomainRegexp matches hostname or IP-addresses, optionally including a port
+// number. It defines the structure of potential domain components that may be
+// part of image names. This is purposely a subset of what is allowed by DNS to
+// ensure backwards compatibility with Docker image names. It may be a subset of
+// DNS domain name, an IPv4 address in decimal format, or an IPv6 address between
+// square brackets (excluding zone identifiers as defined by [RFC 6874] or special
+// addresses such as IPv4-Mapped).
+//
+// [RFC 6874]: https://www.rfc-editor.org/rfc/rfc6874.
+var DomainRegexp = regexp.MustCompile(domainAndPort)
+
+// IdentifierRegexp is the format for string identifier used as a
+// content addressable identifier using sha256. These identifiers
+// are like digests without the algorithm, since sha256 is used.
+var IdentifierRegexp = regexp.MustCompile(identifier)
+
+// NameRegexp is the format for the name component of references, including
+// an optional domain and port, but without tag or digest suffix.
+var NameRegexp = regexp.MustCompile(namePat)
+
+// ReferenceRegexp is the full supported format of a reference. The regexp
+// is anchored and has capturing groups for name, tag, and digest
+// components.
+var ReferenceRegexp = regexp.MustCompile(referencePat)
+
+// TagRegexp matches valid tag names. From [docker/docker:graph/tags.go].
+//
+// [docker/docker:graph/tags.go]: https://github.com/moby/moby/blob/v1.6.0/graph/tags.go#L26-L28
+var TagRegexp = regexp.MustCompile(tag)
+
+const (
+ // alphanumeric defines the alphanumeric atom, typically a
+ // component of names. This only allows lower case characters and digits.
+ alphanumeric = `[a-z0-9]+`
+
+	// separator defines the separators allowed to be embedded in name
+	// components. This allows one period, one or two underscores, and
+	// multiple dashes. Repeated dashes and underscores are intentionally
+	// treated differently: support for repeated dashes was added so that
+	// valid hostnames can be used as name components, and double underscores
+	// are allowed as a separator to loosen the restriction for previously
+	// supported names.
+ separator = `(?:[._]|__|[-]+)`
+
+	// localhost is treated as a special value for domain-name. Any other
+	// domain-name without a "." or a ":port" is considered a path component.
+ localhost = `localhost`
+
+ // domainNameComponent restricts the registry domain component of a
+ // repository name to start with a component as defined by DomainRegexp.
+ domainNameComponent = `(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])`
+
+ // optionalPort matches an optional port-number including the port separator
+ // (e.g. ":80").
+ optionalPort = `(?::[0-9]+)?`
+
+ // tag matches valid tag names. From docker/docker:graph/tags.go.
+ tag = `[\w][\w.-]{0,127}`
+
+ // digestPat matches well-formed digests, including algorithm (e.g. "sha256:").
+ //
+ // TODO(thaJeztah): this should follow the same rules as https://pkg.go.dev/github.com/opencontainers/go-digest@v1.0.0#DigestRegexp
+ // so that go-digest defines the canonical format. Note that the go-digest is
+ // more relaxed:
+ // - it allows multiple algorithms (e.g. "sha256+b64:") to allow
+ // future expansion of supported algorithms.
+	// - it allows the "<encoded>" value to use urlsafe base64 encoding as defined
+	// in [rfc4648, section 5].
+ //
+ // [rfc4648, section 5]: https://www.rfc-editor.org/rfc/rfc4648#section-5.
+ digestPat = `[A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][[:xdigit:]]{32,}`
+
+ // identifier is the format for a content addressable identifier using sha256.
+ // These identifiers are like digests without the algorithm, since sha256 is used.
+ identifier = `([a-f0-9]{64})`
+
+ // ipv6address are enclosed between square brackets and may be represented
+ // in many ways, see rfc5952. Only IPv6 in compressed or uncompressed format
+ // are allowed, IPv6 zone identifiers (rfc6874) or Special addresses such as
+ // IPv4-Mapped are deliberately excluded.
+ ipv6address = `\[(?:[a-fA-F0-9:]+)\]`
+)
+
+var (
+ // domainName defines the structure of potential domain components
+ // that may be part of image names. This is purposely a subset of what is
+ // allowed by DNS to ensure backwards compatibility with Docker image
+	// names. This includes IPv4 addresses in decimal format.
+ domainName = domainNameComponent + anyTimes(`\.`+domainNameComponent)
+
+ // host defines the structure of potential domains based on the URI
+ // Host subcomponent on rfc3986. It may be a subset of DNS domain name,
+ // or an IPv4 address in decimal format, or an IPv6 address between square
+ // brackets (excluding zone identifiers as defined by rfc6874 or special
+ // addresses such as IPv4-Mapped).
+ host = `(?:` + domainName + `|` + ipv6address + `)`
+
+	// domainAndPort matches a host, optionally followed by a port number,
+	// limited to what is allowed by the URI Host subcomponent on rfc3986
+	// to ensure backwards compatibility with Docker image names.
+ domainAndPort = host + optionalPort
+
+ // anchoredTagRegexp matches valid tag names, anchored at the start and
+ // end of the matched string.
+ anchoredTagRegexp = regexp.MustCompile(anchored(tag))
+
+ // anchoredDigestRegexp matches valid digests, anchored at the start and
+ // end of the matched string.
+ anchoredDigestRegexp = regexp.MustCompile(anchored(digestPat))
+
+ // pathComponent restricts path-components to start with an alphanumeric
+ // character, with following parts able to be separated by a separator
+ // (one period, one or two underscore and multiple dashes).
+ pathComponent = alphanumeric + anyTimes(separator+alphanumeric)
+
+ // remoteName matches the remote-name of a repository. It consists of one
+ // or more forward slash (/) delimited path-components:
+ //
+ // pathComponent[[/pathComponent] ...] // e.g., "library/ubuntu"
+ remoteName = pathComponent + anyTimes(`/`+pathComponent)
+ namePat = optional(domainAndPort+`/`) + remoteName
+
+ // anchoredNameRegexp is used to parse a name value, capturing the
+ // domain and trailing components.
+ anchoredNameRegexp = regexp.MustCompile(anchored(optional(capture(domainAndPort), `/`), capture(remoteName)))
+
+ referencePat = anchored(capture(namePat), optional(`:`, capture(tag)), optional(`@`, capture(digestPat)))
+
+ // anchoredIdentifierRegexp is used to check or match an
+ // identifier value, anchored at start and end of string.
+ anchoredIdentifierRegexp = regexp.MustCompile(anchored(identifier))
+)
+
+// optional wraps the expression in a non-capturing group and makes the
+// production optional.
+func optional(res ...string) string {
+ return `(?:` + strings.Join(res, "") + `)?`
+}
+
+// anyTimes wraps the expression in a non-capturing group that can occur
+// any number of times.
+func anyTimes(res ...string) string {
+ return `(?:` + strings.Join(res, "") + `)*`
+}
+
+// capture wraps the expression in a capturing group.
+func capture(res ...string) string {
+ return `(` + strings.Join(res, "") + `)`
+}
+
+// anchored anchors the regular expression by adding start and end delimiters.
+func anchored(res ...string) string {
+ return `^` + strings.Join(res, "") + `$`
+}
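
The exported regexps can also be used directly. A minimal sketch of the anchored ReferenceRegexp and its capture groups, under the same import-path assumption; `quay.io/team/app` is a made-up reference:

```go
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

func main() {
	// ReferenceRegexp is anchored and captures name, tag, and digest
	// as groups 1-3.
	m := reference.ReferenceRegexp.FindStringSubmatch(
		"quay.io/team/app:v1@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa")
	fmt.Println(m[1]) // quay.io/team/app
	fmt.Println(m[2]) // v1
	fmt.Println(m[3]) // sha256:7cc4...

	// TagRegexp and DomainRegexp validate individual components; note
	// that these two are not anchored.
	fmt.Println(reference.TagRegexp.MatchString("v1"))              // true
	fmt.Println(reference.DomainRegexp.MatchString("example:5000")) // true
}
```
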
diff --git a/vendor/github.com/distribution/reference/sort.go b/vendor/github.com/distribution/reference/sort.go
new file mode 100644
index 000000000..416c37b07
--- /dev/null
+++ b/vendor/github.com/distribution/reference/sort.go
@@ -0,0 +1,75 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package reference
+
+import (
+ "sort"
+)
+
+// Sort sorts string references preferring higher information references.
+//
+// The precedence is as follows:
+//
+// 1. [Named] + [Tagged] + [Digested] (e.g., "docker.io/library/busybox:latest@sha256:<digest>")
+// 2. [Named] + [Tagged] (e.g., "docker.io/library/busybox:latest")
+// 3. [Named] + [Digested] (e.g., "docker.io/library/busybox@sha256:<digest>")
+// 4. [Named] (e.g., "docker.io/library/busybox")
+// 5. [Digested] (e.g., "docker.io@sha256:<digest>")
+// 6. Parse error
+func Sort(references []string) []string {
+ var prefs []Reference
+ var bad []string
+
+ for _, ref := range references {
+ pref, err := ParseAnyReference(ref)
+ if err != nil {
+ bad = append(bad, ref)
+ } else {
+ prefs = append(prefs, pref)
+ }
+ }
+ sort.Slice(prefs, func(a, b int) bool {
+ ar := refRank(prefs[a])
+ br := refRank(prefs[b])
+ if ar == br {
+ return prefs[a].String() < prefs[b].String()
+ }
+ return ar < br
+ })
+ sort.Strings(bad)
+ var refs []string
+ for _, pref := range prefs {
+ refs = append(refs, pref.String())
+ }
+ return append(refs, bad...)
+}
+
+func refRank(ref Reference) uint8 {
+ if _, ok := ref.(Named); ok {
+ if _, ok = ref.(Tagged); ok {
+ if _, ok = ref.(Digested); ok {
+ return 1
+ }
+ return 2
+ }
+ if _, ok = ref.(Digested); ok {
+ return 3
+ }
+ return 4
+ }
+ return 5
+}
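
A short sketch of the precedence rules in practice, again reusing the busybox digest from earlier; the alpine references are illustrative:

```go
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

func main() {
	sorted := reference.Sort([]string{
		"alpine",
		"alpine:3.19",
		"alpine@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa",
	})
	// Tagged ranks above digested, which ranks above name-only; every
	// input is normalized first, so the expected order is:
	//   docker.io/library/alpine:3.19
	//   docker.io/library/alpine@sha256:7cc4...
	//   docker.io/library/alpine
	for _, s := range sorted {
		fmt.Println(s)
	}
}
```
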
diff --git a/vendor/github.com/docker/cli/AUTHORS b/vendor/github.com/docker/cli/AUTHORS
new file mode 100644
index 000000000..d6d23b3de
--- /dev/null
+++ b/vendor/github.com/docker/cli/AUTHORS
@@ -0,0 +1,890 @@
+# File @generated by scripts/docs/generate-authors.sh. DO NOT EDIT.
+# This file lists all contributors to the repository.
+# See scripts/docs/generate-authors.sh to make modifications.
+
+A. Lester Buck III
+Aanand Prasad
+Aaron L. Xu
+Aaron Lehmann
+Aaron.L.Xu
+Abdur Rehman
+Abhinandan Prativadi
+Abin Shahab
+Abreto FU
+Ace Tang
+Addam Hardy
+Adolfo Ochagavía
+Adrian Plata
+Adrien Duermael
+Adrien Folie
+Adyanth Hosavalike
+Ahmet Alp Balkan
+Aidan Feldman
+Aidan Hobson Sayers
+AJ Bowen
+Akhil Mohan
+Akihiro Suda
+Akim Demaille
+Alan Thompson
+Albert Callarisa
+Alberto Roura
+Albin Kerouanton
+Aleksa Sarai
+Aleksander Piotrowski
+Alessandro Boch
+Alex Couture-Beil
+Alex Mavrogiannis
+Alex Mayer
+Alexander Boyd
+Alexander Chneerov
+Alexander Larsson
+Alexander Morozov
+Alexander Ryabov
+Alexandre González
+Alexey Igrychev
+Alexis Couvreur
+Alfred Landrum
+Ali Rostami
+Alicia Lauerman
+Allen Sun
+Alvin Deng
+Amen Belayneh
+Amey Shrivastava <72866602+AmeyShrivastava@users.noreply.github.com>
+Amir Goldstein
+Amit Krishnan
+Amit Shukla
+Amy Lindburg
+Anca Iordache
+Anda Xu
+Andrea Luzzardi
+Andreas Köhler
+Andres G. Aragoneses
+Andres Leon Rangel
+Andrew France
+Andrew Hsu
+Andrew Macpherson
+Andrew McDonnell
+Andrew Po
+Andrey Petrov
+Andrii Berehuliak
+André Martins
+Andy Goldstein
+Andy Rothfusz
+Anil Madhavapeddy
+Ankush Agarwal
+Anne Henmi
+Anton Polonskiy
+Antonio Murdaca
+Antonis Kalipetis
+Anusha Ragunathan
+Ao Li
+Arash Deshmeh
+Arko Dasgupta
+Arnaud Porterie
+Arnaud Rebillout
+Arthur Peka
+Ashly Mathew
+Ashwini Oruganti
+Aslam Ahemad
+Azat Khuyiyakhmetov
+Bardia Keyoumarsi
+Barnaby Gray
+Bastiaan Bakker
+BastianHofmann
+Ben Bodenmiller
+Ben Bonnefoy
+Ben Creasy
+Ben Firshman
+Benjamin Boudreau
+Benjamin Böhmke
+Benjamin Nater
+Benoit Sigoure
+Bhumika Bayani
+Bill Wang
+Bin Liu
+Bingshen Wang
+Bishal Das
+Bjorn Neergaard
+Boaz Shuster
+Boban Acimovic
+Bogdan Anton
+Boris Pruessmann
+Brad Baker
+Bradley Cicenas
+Brandon Mitchell
+Brandon Philips
+Brent Salisbury
+Bret Fisher
+Brian (bex) Exelbierd
+Brian Goff
+Brian Tracy
+Brian Wieder
+Bruno Sousa
+Bryan Bess
+Bryan Boreham
+Bryan Murphy
+bryfry
+Cameron Spear
+Cao Weiwei
+Carlo Mion
+Carlos Alexandro Becker
+Carlos de Paula
+Ce Gao
+Cedric Davies
+Cezar Sa Espinola
+Chad Faragher
+Chao Wang
+Charles Chan
+Charles Law
+Charles Smith
+Charlie Drage
+Charlotte Mach
+ChaYoung You
+Chee Hau Lim
+Chen Chuanliang
+Chen Hanxiao
+Chen Mingjie
+Chen Qiu
+Chris Chinchilla
+Chris Couzens
+Chris Gavin
+Chris Gibson
+Chris McKinnel
+Chris Snow
+Chris Vermilion
+Chris Weyl
+Christian Persson
+Christian Stefanescu
+Christophe Robin
+Christophe Vidal
+Christopher Biscardi
+Christopher Crone
+Christopher Jones
+Christopher Svensson
+Christy Norman
+Chun Chen
+Clinton Kitson
+Coenraad Loubser
+Colin Hebert
+Collin Guarino
+Colm Hally
+Comical Derskeal <27731088+derskeal@users.noreply.github.com>
+Conner Crosby
+Corey Farrell
+Corey Quon
+Cory Bennet
+Cory Snider
+Craig Osterhout
+Craig Wilhite
+Cristian Staretu
+Daehyeok Mun
+Dafydd Crosby
+Daisuke Ito
+dalanlan
+Damien Nadé
+Dan Cotora
+Danial Gharib
+Daniel Artine
+Daniel Cassidy
+Daniel Dao
+Daniel Farrell
+Daniel Gasienica