
[Draft] add option to clear the alloc cache#1623

Closed
tdavidcl wants to merge 2 commits into Shamrock-code:main from tdavidcl:patch-2026-02-08-17-56

Conversation

@tdavidcl
Member

tdavidcl commented Feb 8, 2026

No description provided.

@github-actions
Contributor

github-actions Bot commented Feb 8, 2026

Thanks @tdavidcl for opening this PR!

You can do multiple things directly here:
1 - Comment `pre-commit.ci run` to run pre-commit checks.
2 - Comment `pre-commit.ci autofix` to apply fixes.
3 - Add the `autofix.ci` label to fix authorship & pre-commit for every commit made.
4 - Add the `light-ci` label to trigger only a reduced, faster version of the CI (the full one is still needed before merge).
5 - Add the `trigger-ci` label to create an empty commit that triggers the CI.

Once the workflow completes, a message will appear displaying information related to the run.

The PR also gets automatically reviewed by Gemini; you can:
1 - Comment `/gemini review` to trigger a review
2 - Comment `/gemini summary` for a summary
3 - Tag `@gemini-code-assist` either in the PR or in review comments on files

@gemini-code-assist
Contributor

Summary of Changes

Hello @tdavidcl, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances memory management within the shamalgs::collective namespace by adding explicit cache clearing capabilities. It introduces clear() methods to relevant cache structures and integrates this functionality into the distributed_data_sparse_comm function, allowing for more controlled and efficient resource utilization by resetting allocated memory after use.

Highlights

  • Cache Clearing Mechanism: Introduced a new clear() method within internal cache structures to explicitly reset their contents, specifically cache1 and cache2.
  • DDSCommCache Integration: Implemented a clear() method for DDSCommCache that leverages std::visit to correctly invoke the clear() operation on the active variant of the underlying cache.
  • Automatic Cache Reset: Integrated the cache clearing functionality into the distributed_data_sparse_comm function, ensuring the cache is automatically cleared upon completion of its operations.
Changelog
  • src/shamalgs/include/shamalgs/collective/distributedDataComm.hpp
    • Added an inline void clear() method to an internal cache struct, which clears its cache1 and cache2 members.
    • Added an inline void clear() method to the DDSCommCache struct, utilizing std::visit to clear the currently active cache variant.
  • src/shamalgs/src/collective/distributedDataComm.cpp
    • Invoked the cache.clear() method at the end of the distributed_data_sparse_comm function to ensure the cache is reset after execution.
Activity
  • The pull request is currently in a draft state, indicating it is under active development or awaiting initial review.
  • No specific review comments or other activities have been recorded yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

gemini-code-assist Bot left a comment


Code Review

This pull request introduces clear() methods to DDSCommCache and DDSCommCacheTarget, providing a way to release memory from the allocation cache. This is a useful addition for memory management. However, the cache is then unconditionally cleared within distributed_data_sparse_comm, which negates the performance benefits of caching across multiple calls. My review highlights this issue and suggests making the cache clearing behavior optional or leaving it to the caller to decide when to clear the cache.

}
}

cache.clear();
Contributor


Severity: high

Calling cache.clear() here unconditionally defeats the purpose of the cache, which is to reuse buffer allocations across multiple calls to distributed_data_sparse_comm. With this change, buffers will be deallocated at the end of this function and reallocated at the beginning of the next call, which can be inefficient.

The clear() method is a valuable addition to DDSCommCache as it provides control over memory management. However, the decision to clear the cache should be left to the caller, who can do so when communication patterns change or when memory needs to be reclaimed.

I recommend removing this line to preserve the caching behavior. The caller can then explicitly call cache.clear() when necessary.

Alternatively, if the goal is to control this from distributed_data_sparse_comm, you could add a boolean parameter to the function signature (e.g., bool clear_cache_at_end = false) to make this behavior optional.

@github-actions
Contributor

github-actions Bot commented Feb 8, 2026

Workflow report

Workflow report corresponding to commit 5d23818
Committer email: timothee.davidcleris@proton.me

Light CI is enabled. This will only run the basic tests, not the full tests.
Merging a PR requires the job "on PR / all" to pass, which is disabled in this case.

Pre-commit check report

Some failures were detected in the base source checks.
Check the On PR / Linting / Base source checks (pull_request) job in the tests for more detailed output.

Suggested changes

Detailed changes:
diff --git a/src/shamsys/src/NodeInstance.cpp b/src/shamsys/src/NodeInstance.cpp
index 4f8f4093..c30164a1 100644
--- a/src/shamsys/src/NodeInstance.cpp
+++ b/src/shamsys/src/NodeInstance.cpp
@@ -107,7 +107,8 @@ namespace syclinit {
     std::shared_ptr<sham::DeviceScheduler> sched_alt;
 
     std::string callback_mem_perf_info1() {
-        return "Memory performance info (compute) :\n" + sham::details::log_mem_perf_info(sched_compute);
+        return "Memory performance info (compute) :\n"
+               + sham::details::log_mem_perf_info(sched_compute);
     }
     std::string callback_mem_perf_info2() {
         return "Memory performance info (alt) :\n" + sham::details::log_mem_perf_info(sched_alt);

tdavidcl closed this Feb 24, 2026