# feat: integrating new log sync #19125
base: `next`
Fixes #17775

In this PR I implement a new log sync algorithm we came up with in Buenos Aires that should be fully robust against any kind of private log loss. The algorithm is explained in the body of the `loadPrivateLogsForSenderRecipientPair` function, which should be the entrypoint when reviewing this PR (checking the utility functions first would not give you enough context).

Unfortunately this PR introduces a performance regression that is tracked by [this issue](https://linear.app/aztec-labs/issue/F-229/improve-log-sync-performance). I am fairly certain the regression is caused by an unreasonable number of requests performed by the `loadLogsForRange` function - that will be tackled in a followup PR.

In this PR the algorithm is not yet integrated into the system. That is done in a [PR up the stack](#19125). The directory structure is not yet final - I wanted to keep this PR contained in one place to avoid conflicts with Martin's PR, so please ignore that for now (I will move things around in a final pass).

My plan is as follows:
1. Merge this PR.
2. Fix the regression by modifying the `getLogsByTag` API so that it gives me all the info I need (it currently does not give me the block timestamp).
3. Once that PR is merged, most likely wait for Martin's refactor to be merged, then rebase and polish my "integrating new log sync" PR.
4. Move some files around to make it all cuter.

## Update on regression of time in e2e tests

Unfortunately I realized that this has caused a regression and log sync is now significantly slower. I did 2 runs of the `e2e_l1_with_wall_time` test: on `next` the results are 337 and 334 seconds, but with the new code they are 661 and 658 seconds. I will work on dropping that down before merging the "integrating new log sync" PR.
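To give reviewers a rough mental model before opening the diff, here is a minimal TypeScript sketch of a generic tag-window sync loop for a single (sender, recipient) pair. This is a hypothetical illustration, not the algorithm from this PR: `TaggedLog`, `LogSource`, `deriveTag`, and `WINDOW_SIZE` are all invented stand-ins, and the batched `getLogsByTag` shape is only an assumption about the API mentioned above.

```ts
// Hypothetical stand-ins; the real PXE types and node API differ.
type Tag = string;

interface TaggedLog {
  tag: Tag;
  payload: Uint8Array;
}

interface LogSource {
  // Assumed batched shape: one array of logs per requested tag.
  getLogsByTag(tags: Tag[]): Promise<TaggedLog[][]>;
}

// Hypothetical tag derivation; a real scheme would hash a shared secret
// together with the index rather than concatenating them.
function deriveTag(sharedSecret: Uint8Array, index: number): Tag {
  return `${Array.from(sharedSecret).join('-')}:${index}`;
}

const WINDOW_SIZE = 10;

// Derive a window of tags, fetch logs for all of them in one request, and
// slide the window past the highest index that produced a hit. An entirely
// empty window is treated as the end of the stream for this pair.
async function syncSenderRecipientPair(
  source: LogSource,
  sharedSecret: Uint8Array,
  startIndex = 0,
): Promise<TaggedLog[]> {
  const found: TaggedLog[] = [];
  let windowStart = startIndex;

  while (true) {
    const indexes = Array.from({ length: WINDOW_SIZE }, (_, i) => windowStart + i);
    const logsPerTag = await source.getLogsByTag(indexes.map(i => deriveTag(sharedSecret, i)));

    let highestHit = -1;
    logsPerTag.forEach((logs, i) => {
      if (logs.length > 0) {
        found.push(...logs);
        highestHit = indexes[i];
      }
    });

    if (highestHit === -1) {
      return found; // No hits in the whole window: assume we are caught up.
    }
    windowStart = highestHit + 1;
  }
}
```

Note that the batching in the sketch is load-bearing for performance: issuing one request per tag instead of one per window is exactly the kind of request amplification the regression issue above is about.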
Review comment (on removed test code):

```ts
let sendersDataProvider: SendersDataProvider;
let logService: LogService;

describe('sync tagged logs', () => {
```
These were the old algorithm tests; the new algorithm's tests are in the `yarn-project/pxe/src/tagging/recipient_sync/` directory.
I didn't modify the `bulkRetrieveLogs` tests here - I just needed to update their setup because it was a bit messed up and relied on the test setup of the "sync tagged logs" tests.
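As a rough illustration of that decoupling, here is a hypothetical, self-contained shape for such a suite. All names below are invented for the example, not taken from the codebase:

```ts
import { beforeEach, describe, expect, it } from '@jest/globals';

// Hypothetical minimal stand-in for the log source the suite exercises.
class InMemoryLogSource {
  private logs = new Map<string, Uint8Array[]>();

  addLog(tag: string, payload: Uint8Array) {
    const existing = this.logs.get(tag) ?? [];
    existing.push(payload);
    this.logs.set(tag, existing);
  }

  async getLogsByTag(tags: string[]): Promise<Uint8Array[][]> {
    return tags.map(tag => this.logs.get(tag) ?? []);
  }
}

describe('bulkRetrieveLogs', () => {
  let source: InMemoryLogSource;

  // The suite builds its own fixtures here instead of relying on state
  // created by another describe block's setup.
  beforeEach(() => {
    source = new InMemoryLogSource();
    source.addLog('tag-0', new Uint8Array([1]));
  });

  it('returns logs grouped per requested tag', async () => {
    const [hits, misses] = await source.getLogsByTag(['tag-0', 'tag-1']);
    expect(hits).toHaveLength(1);
    expect(misses).toHaveLength(0);
  });
});
```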