London | 25-SDC-July | Ali Qassab | Sprint 2 | Improve code with caches #48
base: main
Conversation
OracPrime left a comment:
There are some serious inefficiencies in this attempt to make the code more efficient; worth revisiting to improve it.
```python
    return 0

# Create cache key from current state
cache_key = (total, tuple(coins))
```
converting the list to a tuple every time is going to be expensive. Also at each stage you're creating a new list, which will also be expensive. The new lists are always the same list, but starting later. Can you think of a way you could refactor the code so that it always uses the same tuple, and the recursion is achieved by passing a parameter indicating where the first usable entry in the tuple is? In that situation, how much can you simplify the key to the cache as well?
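The refactor the comment describes might look like the sketch below (the function name `count_ways` and its signature are illustrative, not the PR's actual code): the coins tuple is built once, recursion advances a `coin_index` parameter instead of slicing, and the cache key shrinks to `(total, coin_index)`.

```python
# Illustrative sketch of the suggested refactor: no list slicing and no
# per-call tuple() conversion; recursion moves coin_index forward instead.
def count_ways(total, coins, coin_index=0, cache=None):
    if cache is None:
        cache = {}
    if total == 0:
        return 1
    if total < 0 or coin_index >= len(coins):
        return 0
    key = (total, coin_index)  # cheap cache key: two ints, no tuple(coins)
    if key not in cache:
        # Either skip the current coin, or use it and stay at the same index
        # (coins may be reused).
        cache[key] = (count_ways(total, coins, coin_index + 1, cache)
                      + count_ways(total - coins[coin_index], coins,
                                   coin_index, cache))
    return cache[key]
```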
```python
result = fibonacci(n - 1) + fibonacci(n - 2)

cache[n] = result
return result
```
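For context, a self-contained version of the memoized fibonacci in this diff might look like the following (the cache-lookup and base-case lines are filled in here; the PR's surrounding code may differ):

```python
# Hand-rolled memoization: results are stored in a module-level dict so
# each fibonacci(n) is computed at most once.
cache = {}

def fibonacci(n):
    if n in cache:
        return cache[n]
    if n < 2:
        return n
    result = fibonacci(n - 1) + fibonacci(n - 2)
    cache[n] = result
    return result
```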
This file is good.
Refactored to use a single tuple with a coin_index parameter instead of creating new lists via slicing. This eliminates the expensive tuple conversions and list slicing, and simplifies the cache key to (total, coin_index).
Learners, PR Template
Self checklist
Changelist
Added memoization caching to improve performance.
Both functions now run significantly faster while maintaining correct results.
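As an aside (not something this PR necessarily uses), the standard library offers the same memoization pattern via `functools.lru_cache`, which works whenever all arguments are hashable:

```python
from functools import lru_cache

# lru_cache memoizes the function automatically; maxsize=None means the
# cache grows without bound, like a plain dict-based cache.
@lru_cache(maxsize=None)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
```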
Questions
Ready for review! All tests pass.