
Add GCS and S3 using GO Cloud Dev #70

Merged
merged 6 commits into from
Oct 8, 2024

Conversation

khrm
Member

@khrm khrm commented Oct 7, 2024

  • Removes the need to maintain configuration for individual cloud providers.
  • Refactors the OCI code to remove a redundant replace.
  • Adds unit tests for Upload and Fetch for cloud providers.

@vdemeester
Member

@khrm sorry, needs a rebase 🤦🏼

@khrm khrm force-pushed the blob branch 3 times, most recently from 578177a to 451bd58 Compare October 7, 2024 13:51
@khrm khrm force-pushed the blob branch 3 times, most recently from a6d453f to f8c149c Compare October 7, 2024 14:22
@khrm
Member Author

khrm commented Oct 7, 2024

@vdemeester Updated this.

@khrm
Member Author

khrm commented Oct 8, 2024

After the call, we decided to merge this.

Member Author

@khrm khrm left a comment


/assign @PuneetPunamiya

if err != nil {
return err
}
defer clean() //nolint:errcheck
Member


You don't have to ignore that error if you don't want to.

Member Author


OK. I am now just logging the error if one happens.

return err
}

if err := tar.Untar(ctx, file, folder); err != nil {
Member


I wonder if we can combine reading the object and untarring it into a single step, without going through a temporary local file. That would be a big optimization for a large cache, wdyt?

Member Author


We are compressing it, not only tarring it, so that function name should be changed. Maybe we can do it in a follow-up PR.

Member


Sounds good, can you create an issue for tracking?

Member Author


BTW, I changed from io.ReadAll to io.Copy. Wouldn't ReadAll risk an OOM?

Member Author


Created an issue to track this.

Member


Yeah, io.Copy streams instead of reading everything into memory at once like ReadAll, but that's separate from the point that we could stream directly through gzip and upload to/download from the bucket.

@chmouel chmouel merged commit adcb62a into openshift-pipelines:main Oct 8, 2024
5 checks passed