Remove duplicated retry logic for S3 artifacts #209
Conversation
I know about the retry mechanism in boto. The idea here is to have a unifying interface for retries across the library, as we need this behavior not only for S3.
It would be better to feed in unified timeout values than to overwrite them and run a high risk of multiplying the retry times applied to S3 connections.
Got you. How about we set boto retries to 0 to avoid the multiplying effect?
I would still prefer to use the
Alright, I'm fine with that, but please don't remove the library. I can see it being useful in other places in this codebase.
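The unifying retry interface discussed above could look roughly like the sketch below. This is a hypothetical illustration, not the PR's actual code; the decorator name `with_retries` and its parameters are assumptions made for the example.

```python
import time


def with_retries(attempts=3, delay=0.0, exceptions=(Exception,)):
    """Hypothetical unified retry decorator: re-invoke the wrapped
    callable up to `attempts` times, sleeping `delay` seconds between
    tries, and re-raise the last exception if all attempts fail."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            last_exc = None
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    last_exc = exc
                    if attempt < attempts - 1 and delay:
                        time.sleep(delay)
            raise last_exc
        return wrapper
    return decorator


# Example: a flaky operation that only succeeds on its third call.
calls = {"n": 0}


@with_retries(attempts=3)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("transient failure")
    return "ok"


print(flaky())  # → ok (after two retried failures)
```

The point of such a wrapper is that the same retry policy can be applied to any transient operation, not just S3 calls, which is the reuse argument made in the comment above.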
(force-pushed from 0577fa0 to c0cabdb)
I agree, but we are not using it at the moment. Since we use Git, we can always return to the state where this library was added once the functionality is needed.
(force-pushed from c0cabdb to 1efe214)
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##             main     #209      +/-   ##
==========================================
- Coverage   87.66%   87.62%   -0.05%
==========================================
  Files          33       33
  Lines        1549     1543       -6
==========================================
- Hits         1358     1352       -6
  Misses        191      191
vivus-ignis left a comment:
I'm OK with the change in general, but to be honest I don't get all these formatting changes. What tool do you use for formatting?
What this PR does / why we need it:
This PR removes the added "retrying" logic for gardenlinux.s3.S3Artifacts, as the underlying boto3 implementation already has an advanced, upstream-supported retry feature built in.