Soft deleted bucket restore with sample #32
base: main
Conversation
Summary of Changes
Hello @shubhangi-google, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request enhances the Google Cloud Storage Ruby client samples by adding support for managing soft-deleted buckets. It provides new samples and acceptance tests that allow users to query information about soft-deleted buckets, list them, and perform restoration operations, thereby improving the client's capabilities for disaster recovery and data management.
Highlights
- New Soft Delete Operations: Introduced new sample code and corresponding acceptance tests for interacting with soft-deleted Google Cloud Storage buckets. This includes functionality to retrieve details of a specific soft-deleted bucket, list all soft-deleted buckets, and restore a soft-deleted bucket.
- New Sample Files: Three new Ruby sample files have been added: `storage_get_soft_deleted_bucket.rb`, `storage_list_soft_deleted_buckets.rb`, and `storage_restore_bucket.rb`, demonstrating the new soft-delete capabilities.
- Acceptance Test Updates: The `buckets_test.rb` and `project_test.rb` acceptance test suites have been updated to include tests for the newly added soft-delete and restore functionality.
- Extensive Test Commenting: A significant portion of the existing bucket lifecycle and management tests in `buckets_test.rb` (lines 67-650) has been commented out, likely to isolate the new soft-delete tests or for future refactoring.
- Debugging Additions: Debug `puts` statements have been added to `helper.rb` and `storage_get_soft_deleted_bucket.rb` to print the project name, and `Google::Apis.logger.level = Logger::DEBUG` was added to `buckets_test.rb` to enable debug logging.
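The soft-delete semantics these samples exercise can be sketched without touching GCS at all. The following is a minimal, self-contained Ruby model of the behavior (all class and method names here are illustrative stand-ins, not the google-cloud-storage API): deleting a bucket marks it with a `soft_delete_time` instead of destroying it, it remains listable and restorable until a hard-delete deadline, and restoring clears the markers.

```ruby
# Toy model of bucket soft delete (illustrative only; not the
# google-cloud-storage client API).
class ToyStorage
  RETENTION = 7 * 24 * 3600 # assume a 7-day soft-delete retention window

  def initialize
    @live = {}
    @soft_deleted = {}
  end

  def create_bucket(name)
    @live[name] = { name: name }
  end

  # Deleting moves the bucket to the soft-deleted set and stamps it
  # with soft/hard delete times instead of destroying it.
  def delete_bucket(name, now: Time.now)
    bucket = @live.delete(name)
    return nil unless bucket
    bucket[:soft_delete_time] = now
    bucket[:hard_delete_time] = now + RETENTION
    @soft_deleted[name] = bucket
  end

  def soft_deleted_buckets
    @soft_deleted.values
  end

  # Restore succeeds only while the hard-delete deadline has not passed.
  def restore_bucket(name, now: Time.now)
    bucket = @soft_deleted[name]
    return nil unless bucket && now < bucket[:hard_delete_time]
    bucket.delete(:soft_delete_time)
    bucket.delete(:hard_delete_time)
    @soft_deleted.delete(name)
    @live[name] = bucket
  end
end

storage = ToyStorage.new
storage.create_bucket "sample-bucket"
storage.delete_bucket "sample-bucket"
puts storage.soft_deleted_buckets.map { |b| b[:name] }.inspect  # => ["sample-bucket"]
puts storage.restore_bucket("sample-bucket") ? "sample-bucket Bucket restored" : "not restored"
puts storage.soft_deleted_buckets.size  # => 0
```

The real samples follow the same three operations (get, list, restore), but against the live service, where a generation number identifies which deleted incarnation of a bucket to fetch or restore.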
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes

[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request adds samples and tests for soft-deleted bucket functionality. However, existing tests are commented out, new tests are incomplete, and sample files are missing nil checks. Debugging statements should also be removed.
`buckets_test.rb` (excerpt):

```ruby
# describe "bucket lifecycle" do
#   it "create_bucket, create_bucket_class_location, list_buckets, get_bucket_metadata, delete_bucket" do
#     # create_bucket
#     bucket_name = random_bucket_name
#     refute storage_client.bucket bucket_name

retry_resource_exhaustion do
  assert_output "Created bucket: #{bucket_name}\n" do
    create_bucket bucket_name: bucket_name
  end
end

refute_nil storage_client.bucket bucket_name

# create_bucket_class_location

secondary_bucket_name = random_bucket_name
location = "ASIA"
storage_class = "COLDLINE"
refute storage_client.bucket secondary_bucket_name

retry_resource_exhaustion do
  assert_output "Created bucket #{secondary_bucket_name} in #{location} with #{storage_class} class\n" do
    create_bucket_class_location bucket_name: secondary_bucket_name
  end
end

secondary_bucket = storage_client.bucket secondary_bucket_name
refute_nil secondary_bucket
assert_equal location, secondary_bucket.location
assert_equal storage_class, secondary_bucket.storage_class

# list_buckets
out, _err = capture_io do
  list_buckets
end

assert_includes out, "ruby-storage-samples-"

# get_bucket_metadata
out, _err = capture_io do
  get_bucket_metadata bucket_name: bucket_name
end

assert_includes out, bucket_name

# delete_bucket
assert_output "Deleted bucket: #{bucket_name}\n" do
  delete_bucket bucket_name: bucket_name
end

refute storage_client.bucket bucket_name

delete_bucket_helper bucket_name
delete_bucket_helper secondary_bucket_name
end
end

describe "storage_create_bucket_dual_region" do
  it "creates dual region bucket" do
    location = "US"
    region_1 = "US-EAST1"
    region_2 = "US-WEST1"
    location_type = "dual-region"
    bucket_name = random_bucket_name
    refute storage_client.bucket bucket_name

    expected = "Bucket #{bucket_name} created:\n"
    expected += "- location: #{location}\n"
    expected += "- location_type: #{location_type}\n"
    expected += "- custom_placement_config:\n"
    expected += " - data_locations: #{[region_1, region_2]}\n"

    retry_resource_exhaustion do
      assert_output expected do
        StorageCreateBucketDualRegion.new.storage_create_bucket_dual_region bucket_name: bucket_name,
                                                                            region_1: region_1,
                                                                            region_2: region_2
      end
    end

    refute_nil storage_client.bucket bucket_name

    delete_bucket_helper bucket_name
  end
end

describe "storage_create_bucket_hierarchical_namespace" do
  it "creates hierarchical namespace enabled bucket" do
    bucket_name = random_bucket_name
    refute storage_client.bucket bucket_name

    expected = "Created bucket #{bucket_name} with Hierarchical Namespace enabled.\n"

    retry_resource_exhaustion do
      assert_output expected do
        create_bucket_hierarchical_namespace bucket_name: bucket_name
      end
    end

    refute_nil storage_client.bucket bucket_name

    delete_bucket_helper bucket_name
  end
end

describe "storage_create_bucket_with_object_retention" do
  it "creates a bucket with object retention enabled." do
    bucket_name = random_bucket_name
    refute storage_client.bucket bucket_name

    expected = "Created bucket #{bucket_name} with object retention setting: Enabled\n"

    retry_resource_exhaustion do
      assert_output expected do
        create_bucket_with_object_retention bucket_name: bucket_name
      end
    end

    refute_nil storage_client.bucket bucket_name

    file_name = "test_object_retention"

    bucket = storage_client.bucket bucket_name

    out, _err = capture_io do
      set_object_retention_policy bucket_name: bucket.name,
                                  content: "hello world",
                                  destination_file_name: file_name
    end

    assert_includes out, "Retention policy for file #{file_name}"

    file = bucket.file file_name
    file.retention = {
      mode: nil,
      retain_until_time: nil,
      override_unlocked_retention: true
    }
    delete_bucket_helper bucket_name
  end
end

describe "autoclass" do
  it "get_autoclass, set_autoclass" do
    bucket_name = random_bucket_name
    refute storage_client.bucket bucket_name

    storage_client.create_bucket bucket_name, autoclass_enabled: true

    assert_output(/autoclass config set to true./) do
      get_autoclass bucket_name: bucket_name
    end

    assert_output(/autoclass terminal storage class set to NEARLINE./) do
      get_autoclass bucket_name: bucket_name
    end

    assert_output(/autoclass terminal storage class set to ARCHIVE./) do
      set_autoclass bucket_name: bucket_name, toggle: true, terminal_storage_class: "ARCHIVE"
    end

    assert_output(/autoclass config set to false./) do
      set_autoclass bucket_name: bucket_name, toggle: false
    end

    delete_bucket_helper bucket_name
  end
end

describe "cors" do
  it "cors_configuration, remove_cors_configuration" do
    bucket.cors { |c| c.clear }
    assert bucket.cors.empty?

    # cors_configuration
    assert_output "Set CORS policies for bucket #{bucket.name}\n" do
      cors_configuration bucket_name: bucket.name
    end

    bucket.refresh!
    assert_equal 1, bucket.cors.count
    rule = bucket.cors.first
    assert_equal ["*"], rule.origin
    assert_equal ["PUT", "POST"], rule.methods
    assert_equal ["Content-Type", "x-goog-resumable"], rule.headers
    assert_equal 3600, rule.max_age

    # remove_cors_configuration
    assert_output "Remove CORS policies for bucket #{bucket.name}\n" do
      remove_cors_configuration bucket_name: bucket.name
    end
    bucket.refresh!
    assert bucket.cors.empty?
  end
end

describe "requester_pays" do
  it "enable_requester_pays, disable_requester_pays, get_requester_pays_status" do
    # enable_requester_pays
    bucket.requester_pays = false

    assert_output "Requester pays has been enabled for #{bucket.name}\n" do
      enable_requester_pays bucket_name: bucket.name
    end
    bucket.refresh!
    assert bucket.requester_pays?

    # get_requester_pays_status
    assert_output "Requester pays status is enabled for #{bucket.name}\n" do
      get_requester_pays_status bucket_name: bucket.name
    end
    assert bucket.requester_pays?

    # disable_requester_pays
    assert_output "Requester pays has been disabled for #{bucket.name}\n" do
      disable_requester_pays bucket_name: bucket.name
    end
    bucket.refresh!
    refute bucket.requester_pays?

    # get_requester_pays_status
    assert_output "Requester pays status is disabled for #{bucket.name}\n" do
      get_requester_pays_status bucket_name: bucket.name
    end
    refute bucket.requester_pays?
  end
end

describe "uniform_bucket_level_access" do
  it "enable_uniform_bucket_level_access, get_uniform_bucket_level_access, disable_uniform_bucket_level_access" do
    # enable_uniform_bucket_level_access
    bucket.uniform_bucket_level_access = false

    assert_output "Uniform bucket-level access was enabled for #{bucket.name}.\n" do
      enable_uniform_bucket_level_access bucket_name: bucket.name
    end

    bucket.refresh!
    assert bucket.uniform_bucket_level_access?

    # get_uniform_bucket_level_access
    assert_output "Uniform bucket-level access is enabled for #{bucket.name}.\nBucket " \
                  "will be locked on #{bucket.uniform_bucket_level_access_locked_at}.\n" do
      get_uniform_bucket_level_access bucket_name: bucket.name
    end
    assert bucket.uniform_bucket_level_access?

    # disable_uniform_bucket_level_access
    assert_output "Uniform bucket-level access was disabled for #{bucket.name}.\n" do
      disable_uniform_bucket_level_access bucket_name: bucket.name
    end
    # retry_resource_exhaustion do
    #   assert_output "Created bucket: #{bucket_name}\n" do
    #     create_bucket bucket_name: bucket_name
    #   end
    # end

    bucket.refresh!
    refute bucket.uniform_bucket_level_access?
    # refute_nil storage_client.bucket bucket_name

    # get_uniform_bucket_level_access
    assert_output "Uniform bucket-level access is disabled for #{bucket.name}.\n" do
      get_uniform_bucket_level_access bucket_name: bucket.name
    end
    refute bucket.uniform_bucket_level_access?

    bucket.uniform_bucket_level_access = false
  end
end

describe "default Cloud KMS encryption key" do
  it "set_bucket_default_kms_key, bucket_delete_default_kms_key" do
    refute bucket.default_kms_key

    # set_bucket_default_kms_key
    assert_output "Default KMS key for #{bucket.name} was set to #{kms_key}\n" do
      set_bucket_default_kms_key bucket_name: bucket.name,
                                 default_kms_key: kms_key
    end

    bucket.refresh!
    assert_equal bucket.default_kms_key, kms_key

    # bucket_delete_default_kms_key
    assert_output "Default KMS key was removed from #{bucket.name}\n" do
      bucket_delete_default_kms_key bucket_name: bucket.name
    end

    bucket.refresh!
    refute bucket.default_kms_key
  end
end

describe "get bucket class and location data" do
  bucket_name = random_bucket_name
  location = "US"
  storage_class = "COLDLINE"

  it "get_bucket_class_and_location" do
    storage_client.create_bucket bucket_name,
                                 location: location,
                                 storage_class: storage_class
    expected_output = "Bucket #{bucket_name} storage class is " \
                      "#{storage_class}, and the location is #{location}\n"
    assert_output expected_output do
      get_bucket_class_and_location bucket_name: bucket_name
    end
  end
end

describe "labels" do
  it "add_bucket_label, remove_bucket_label" do
    # add_bucket_label
    label_key = "label_key"
    label_value = "label_value"

    assert_output "Added label #{label_key} with value #{label_value} to #{bucket.name}\n" do
      add_bucket_label bucket_name: bucket.name,
                       label_value: label_value,
                       label_key: label_key
    end

    bucket.refresh!
    assert_equal bucket.labels[label_key], label_value

    # remove_bucket_label
    assert_output "Deleted label #{label_key} from #{bucket.name}\n" do
      remove_bucket_label bucket_name: bucket.name,
                          label_key: label_key
    end

    bucket.refresh!
    assert bucket.labels[label_key].empty?
  end
end

describe "lifecycle management" do
  let(:bucket) { create_bucket_helper random_bucket_name }
  after { delete_bucket_helper bucket.name }

  it "enable_bucket_lifecycle_management, disable_bucket_lifecycle_management" do
    # enable_bucket_lifecycle_management
    out, _err = capture_io do
      enable_bucket_lifecycle_management bucket_name: bucket.name
    end

    assert_includes out, "Lifecycle management is enabled"

    # disable_bucket_lifecycle_management
    out, _err = capture_io do
      disable_bucket_lifecycle_management bucket_name: bucket.name
    end

    assert_includes out, "Lifecycle management is disabled"
  end
end

describe "retention policy" do
  let(:bucket) { create_bucket_helper random_bucket_name }
  after { delete_bucket_helper bucket.name }

  it "set_retention_policy, get_retention_policy, remove_retention_policy" do
    # set_retention_policy
    assert_output "Retention period for #{bucket.name} is now #{retention_period} seconds.\n" do
      set_retention_policy bucket_name: bucket.name,
                           retention_period: retention_period
    end

    bucket.refresh!
    assert_equal bucket.retention_period, retention_period

    # get_retention_policy
    out, _err = capture_io do
      get_retention_policy bucket_name: bucket.name
    end

    assert_includes out, "period: #{retention_period}\n"

    # remove_retention_policy
    assert_equal bucket.retention_period, retention_period
    assert_output "Retention policy for #{bucket.name} has been removed.\n" do
      remove_retention_policy bucket_name: bucket.name
    end
    # # create_bucket_class_location

    bucket.refresh!
    refute bucket.retention_period
    # secondary_bucket_name = random_bucket_name
    # location = "ASIA"
    # storage_class = "COLDLINE"
    # refute storage_client.bucket secondary_bucket_name

    # lock_retention_policy
    bucket.retention_period = retention_period
    out, _err = capture_io do
      lock_retention_policy bucket_name: bucket.name
    end
    # retry_resource_exhaustion do
    #   assert_output "Created bucket #{secondary_bucket_name} in #{location} with #{storage_class} class\n" do
    #     create_bucket_class_location bucket_name: secondary_bucket_name
    #   end
    # end

    assert_includes out, "Retention policy for #{bucket.name} is now locked."
    bucket.refresh!
    assert bucket.retention_policy_locked?
    # secondary_bucket = storage_client.bucket secondary_bucket_name
    # refute_nil secondary_bucket
    # assert_equal location, secondary_bucket.location
    # assert_equal storage_class, secondary_bucket.storage_class

    # remove_retention_policy
    assert_output "Policy is locked and retention policy can't be removed.\n" do
      remove_retention_policy bucket_name: bucket.name
    end
  end
end
# # list_buckets
# out, _err = capture_io do
#   list_buckets
# end

describe "default_event_based_hold" do
  it "enable_default_event_based_hold, get_default_event_based_hold, disable_default_event_based_hold" do
    # enable_default_event_based_hold
    assert_output "Default event-based hold was enabled for #{bucket.name}.\n" do
      enable_default_event_based_hold bucket_name: bucket.name
    end
    # assert_includes out, "ruby-storage-samples-"

    bucket.refresh!
    assert bucket.default_event_based_hold?
    # # get_bucket_metadata
    # out, _err = capture_io do
    #   get_bucket_metadata bucket_name: bucket_name
    # end

    # get_default_event_based_hold
    assert_output "Default event-based hold is enabled for #{bucket.name}.\n" do
      get_default_event_based_hold bucket_name: bucket.name
    end

    # disable_default_event_based_hold
    bucket.update do |b|
      b.default_event_based_hold = true
    end

    assert_output "Default event-based hold was disabled for #{bucket.name}.\n" do
      disable_default_event_based_hold bucket_name: bucket.name
    end
    # assert_includes out, bucket_name

    bucket.refresh!
    refute bucket.default_event_based_hold?
    # # delete_bucket
    # assert_output "Deleted bucket: #{bucket_name}\n" do
    #   delete_bucket bucket_name: bucket_name
    # end

    # get_default_event_based_hold
    assert_output "Default event-based hold is not enabled for #{bucket.name}.\n" do
      get_default_event_based_hold bucket_name: bucket.name
    end
  end
end

describe "storage_class" do
  it "change_default_storage_class" do
    assert_equal "STANDARD", bucket.storage_class
    # refute storage_client.bucket bucket_name

    assert_output "Default storage class for bucket #{bucket.name} has been set to COLDLINE\n" do
      change_default_storage_class bucket_name: bucket.name
    end
    # delete_bucket_helper bucket_name
    # delete_bucket_helper secondary_bucket_name
    # end
    # end
```
```ruby
puts "project_name while bucket creation"
puts new_bucket.service.project
# output, _err = capture_io do
get_soft_deleted_bucket bucket_name: new_bucket_name, generation: new_generation
# end

assert_output "Versioning was enabled for bucket #{bucket.name}\n" do
  enable_versioning bucket_name: bucket.name
end
bucket.refresh!
assert bucket.versioning?

# disable_versioning
assert_output "Versioning was disabled for bucket #{bucket.name}\n" do
  disable_versioning bucket_name: bucket.name
end
bucket.refresh!
refute bucket.versioning?
# assert_includes output, "soft_delete_time for #{new_bucket_name} is"
```
This test case appears incomplete. Remove the `puts` statements and uncomment the `capture_io` block and assertion to properly verify the functionality:

```ruby
output, _err = capture_io do
  get_soft_deleted_bucket bucket_name: new_bucket_name, generation: new_generation
end
assert_includes output, "soft_delete_time for #{new_bucket_name} is"
```

```ruby
let(:generation) { bucket.generation }
let(:bucket) { fixture_bucket }
```
```ruby
_out, _err = capture_io do
  restore_bucket bucket_name: bucket.name, generation: generation
end
assert "#{bucket.name} Bucket restored"
```
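Worth noting about the assertion above: in Minitest, `assert` only checks that its argument is truthy, and a non-empty String is always truthy, so this check can never fail regardless of whether the restore happened. A standalone sketch of the pitfall (plain Ruby, no GCS client needed; the strings are stand-ins for the interpolated values):

```ruby
# `assert x` passes whenever `x` is truthy. A non-empty String is
# always truthy, so `assert "#{bucket.name} Bucket restored"` passes
# unconditionally.
value = "my-bucket Bucket restored" # stand-in for the interpolated string
puts !!value # => true, no matter what the sample did

# A check tied to behavior inspects captured output instead:
out = "my-bucket Bucket restored\n" # pretend this came from capture_io
puts out.include?("Bucket restored") # => true only if the sample printed it
```

Capturing the output with `capture_io` (rather than discarding it into `_out`) and asserting on it would make the test meaningful.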
```ruby
soft_delete_time = deleted_bucket_fetch.soft_delete_time
hard_delete_time = deleted_bucket_fetch.hard_delete_time

if (soft_delete_time && hard_delete_time).nil?
  puts "Not Found"
else
  puts "soft_delete_time for #{deleted_bucket_fetch.name} is - #{soft_delete_time}"
  puts "hard_delete_time for #{deleted_bucket_fetch.name} is - #{hard_delete_time}"
end
```
The storage.bucket method can return nil if the bucket is not found. Handle this case to avoid a NoMethodError.

```ruby
if deleted_bucket_fetch
  soft_delete_time = deleted_bucket_fetch.soft_delete_time
  hard_delete_time = deleted_bucket_fetch.hard_delete_time
  if soft_delete_time && hard_delete_time
    puts "soft_delete_time for #{deleted_bucket_fetch.name} is - #{soft_delete_time}"
    puts "hard_delete_time for #{deleted_bucket_fetch.name} is - #{hard_delete_time}"
  else
    puts "Bucket #{bucket_name} found, but it is not in a soft-deleted state."
  end
else
  puts "Soft-deleted bucket #{bucket_name} with generation #{generation} not found."
end
```

```ruby
require_relative "../storage_get_autoclass"
require_relative "../storage_set_autoclass"
require_relative "../storage_move_object"
Google::Apis.logger.level = Logger::DEBUG
```
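The nil-guard pattern the review asks for can be exercised without a real client. Here is a self-contained sketch using a `Struct` as a stand-in for the bucket object returned by the client (all names are illustrative, not the google-cloud-storage API):

```ruby
# Stand-in for the bucket object; the real storage.bucket call can
# return nil when no soft-deleted bucket matches the name/generation.
SoftDeletedBucket = Struct.new(:name, :soft_delete_time, :hard_delete_time)

def describe_soft_deleted(bucket, bucket_name)
  # Guard first: calling .soft_delete_time on nil raises NoMethodError.
  return "Soft-deleted bucket #{bucket_name} not found." if bucket.nil?

  if bucket.soft_delete_time && bucket.hard_delete_time
    "soft_delete_time for #{bucket.name} is - #{bucket.soft_delete_time}"
  else
    "Bucket #{bucket_name} found, but it is not in a soft-deleted state."
  end
end

puts describe_soft_deleted(nil, "demo-bucket")
puts describe_soft_deleted(
  SoftDeletedBucket.new("demo-bucket", "2024-05-01T00:00:00Z", "2024-05-08T00:00:00Z"),
  "demo-bucket"
)
```

The same shape applies to the restore sample: check the return value of the client call before dereferencing it, and report each failure mode distinctly.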
```ruby
puts "project_name while deleting the bucket"
puts storage_client.project
```
```ruby
puts "project_name while fetching the deleted bucket"
puts storage.project
```
```ruby
if bucket_restored.name == bucket_name
  puts "#{bucket_name} Bucket restored"
else
  puts "#{bucket_name} Bucket not restored"
end
```
Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:
- Ensure the tests and linter pass by running `bundle exec rake ci` in the gem subdirectory.
- Associate this PR with the issue it resolves: `closes: #<issue_number_goes_here>`