Optimize channel list scrolling performance#6197

Draft
aleksandar-apostolov wants to merge 6 commits into v7 from perf/channel-list-scrolling-v3

Conversation


@aleksandar-apostolov aleksandar-apostolov commented Feb 27, 2026

Goal

Optimize channel list scrolling performance — eliminate cold-start frozen frames, reduce scroll jank during WebSocket event floods, batch redundant DB writes, and eliminate unnecessary object allocation during watcher event bursts.

Implementation

1. Eliminate cold-start Davey (~2.9s)
Replace Thread.sleep busy-wait in awaitInitializationState with runBlocking + coroutine suspension — wakes instantly on state change instead of polling every 100ms.
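A minimal sketch of the change, using kotlinx.coroutines. The `InitializationState` enum and state holder below are illustrative stand-ins, not the actual SDK types; the point is that `StateFlow.first { ... }` suspends and resumes on the exact emission, where the old code slept in 100ms increments:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.first
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// Hypothetical initialization state holder for illustration.
enum class InitializationState { NOT_INITIALIZED, INITIALIZED }

val state = MutableStateFlow(InitializationState.NOT_INITIALIZED)

// Before: loop { if (ready) break; Thread.sleep(100) } — busy-wait.
// After: suspend on the flow; resumes immediately when the state flips.
fun awaitInitializationState(): InitializationState = runBlocking {
    launch {
        delay(50) // simulate async initialization completing elsewhere
        state.value = InitializationState.INITIALIZED
    }
    state.first { it == InitializationState.INITIALIZED }
}
```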

2. Reduce scroll jank from WebSocket event floods
Move channel list combine pipeline off Main via flowOn(Default), add 100ms debounce to collapse rapid events (e.g. 30 user.watching.start events), and reuse ChannelItemState instances by CID so Compose skips recomposition for unchanged items.
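The operator chain can be sketched as follows; `ChannelItemState` here is a simplified stand-in for the real model, and `pipeline` is an illustrative shape of the change rather than the SDK's actual function:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.FlowPreview
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.debounce
import kotlinx.coroutines.flow.distinctUntilChanged
import kotlinx.coroutines.flow.flowOf
import kotlinx.coroutines.flow.flowOn
import kotlinx.coroutines.flow.toList
import kotlinx.coroutines.runBlocking

// Simplified stand-in for the real channel item model.
data class ChannelItemState(val cid: String, val unread: Int)

@OptIn(FlowPreview::class)
fun pipeline(events: Flow<List<ChannelItemState>>): Flow<List<ChannelItemState>> =
    events
        .debounce(100)               // collapse bursts of rapid WebSocket events
        .distinctUntilChanged()      // drop emissions with identical content
        .flowOn(Dispatchers.Default) // run upstream work off the Main thread
```

With a burst of emissions, only the latest list per debounce window survives, so the UI sees one update instead of thirty.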

3. Memoize Compose work during scroll
remember() for channel name formatting, message preview, and timestamp; contentType on LazyList for better item reuse; remove redundant Crossfade from Avatar (Coil handles it); replace collectAsState() with direct .value access in lambda blocks.
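A plain-Kotlin analogue of the `remember(date, formatType)` memoization: in Compose the composition itself holds the cached value, while the illustrative `TimestampCache` below makes the key-to-value cache explicit so the "recompute only when inputs change" behavior is visible:

```kotlin
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale

// Explicit analogue of remember(date, formatType): the formatted string is
// recomputed only when the (date, formatType) key changes.
class TimestampCache {
    private var key: Pair<Long, String>? = null
    private var cached: String = ""
    var computeCount = 0
        private set

    fun format(date: Date, formatType: String): String {
        val k = date.time to formatType
        if (k != key) {
            computeCount++ // expensive path: only taken when inputs change
            key = k
            cached = SimpleDateFormat(
                if (formatType == "time") "HH:mm" else "yyyy-MM-dd",
                Locale.US,
            ).format(date)
        }
        return cached
    }
}
```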

4. Batch user DB writes
Replace fire-and-forget per-call insertMany with a 3s batched flush in DatabaseUserRepository. Cache updates remain immediate; only Room writes are deferred. Entities are deduplicated by user ID (last-write-wins) at flush time — measured 732 enqueued → 14 written (98% dedup).
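The batching and dedup logic can be sketched like this. The real repository flushes to Room on a 3s timer; the illustrative `BatchedUserWriter` below flushes manually so the last-write-wins dedup by user ID is easy to see:

```kotlin
// Simplified stand-in for the Room entity.
data class UserEntity(val id: String, val name: String)

class BatchedUserWriter(private val dao: (List<UserEntity>) -> Unit) {
    // LinkedHashMap keyed by user id: a later enqueue for the same id
    // overwrites the earlier entity (last-write-wins).
    private val pending = LinkedHashMap<String, UserEntity>()

    // Enqueue is cheap and non-blocking; nothing touches the DB here.
    fun enqueue(users: List<UserEntity>) {
        users.forEach { pending[it.id] = it }
    }

    // One DB write per flush, containing only the latest entity per id.
    fun flush(): Int {
        if (pending.isEmpty()) return 0
        val batch = pending.values.toList()
        pending.clear()
        dao(batch)
        return batch.size
    }
}
```

With this shape, 732 enqueued entities collapsing to 14 written simply means 732 map puts landed on 14 distinct keys before the timer fired.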

5. Eliminate redundant object allocation on watcher events
When 30 channels load, 30 user.watching.start WebSocket events fire for the same user. Each triggered Channel.copy() on all 30 channels even when user data was identical, creating ~900 unnecessary Channel allocations. Three layered fixes:

  • Channel.updateUsers — check structural equality before copy(). User is a data class, so != is correct. Skip copy when user data hasn't changed.
  • DatabaseUserRepository.cacheUsers — skip snapshot() emission when cache content is unchanged. MutableStateFlow already deduplicates downstream, but snapshot() creates an expensive LinkedHashMap copy we can avoid entirely.
  • EventHandlerSequential — filter UserStartWatchingEvent/UserStopWatchingEvent from user extraction. These only carry the current user (whose data doesn't change). Real profile changes arrive via UserUpdatedEvent.
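The first of these guards can be sketched as follows, with simplified stand-in types (the real `Channel` and `User` carry many more fields, but the equality check works the same way because both are data classes):

```kotlin
// Simplified stand-ins; data classes give structural == for free.
data class User(val id: String, val name: String)
data class Channel(val cid: String, val members: List<User>)

fun Channel.updateUsers(users: Map<String, User>): Channel {
    val updated = members.map { users[it.id] ?: it }
    // Structural equality guard: if no member actually changed,
    // return the same instance and skip the copy() allocation.
    return if (updated == members) this else copy(members = updated)
}
```

Referential identity on the unchanged path is what lets Compose (and any `distinctUntilChanged` downstream) skip work entirely.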

Result: GC cycles reduced from 4 to 2, garbage collected from 110MB to 66MB on Xiaomi Redmi 13.

Results (cold start, 30 channels, emulator)

| Metric | Before | After |
| --- | --- | --- |
| Davey reports | 11 (total ~13.8s) | 2 (total ~2.4s) |
| Skipped frames | 383 | 67 |
| Worst frame avg | 181ms | 72ms |
| Worst frame max | 567ms | 337ms |
| user.watching.start jank | 91 events, frozen frames | 0 impact |
| DB write operations (user burst) | unbatched (many) | 1 (732→14 deduped) |
| VM state updates | 11 | 4 |
| Time to smooth scroll (<16ms avg) | ~10s | ~4s |

Results (cold start, 30 channels, Xiaomi Redmi 13)

| Metric | Before | After |
| --- | --- | --- |
| GC cycles | 4 | 2 |
| GC freed | 110MB | 66MB |
| Davey reports | 8 | 8 (unchanged — bottleneck is Compose layout/draw on Helio G91, not GC) |

UI Changes

No visual changes — this is a performance-only PR.

Testing

  • Cold start the app, verify channel list loads and scrolls smoothly
  • Verify no regressions in channel list content (names, avatars, timestamps, unread badges)
  • Verify typing indicators still update in the channel list
  • Verify channel mute/unmute reflects correctly
  • Verify new message arrivals update the list without visible jank
  • Run existing unit tests for DatabaseUserRepository, ChannelListViewModel
  • Run Compose UI tests for channel list

Reviewer Checklist

  • UI Components sample runs & works
  • Compose sample runs & works
  • UI Changes correct (before & after images)
  • Bugs validated (bugfixes)
  • New feature tested and works
  • All code we touched has new or updated KDocs
  • Check the SDK Size Comparison table in the CI logs


github-actions bot commented Feb 27, 2026

PR checklist ✅

All required conditions are satisfied:

  • Title length is OK (or ignored by label).
  • At least one pr: label exists.
  • Sections ### Goal, ### Implementation, and ### Testing are filled.

🎉 Great job! This PR is ready for review.


github-actions bot commented Feb 27, 2026

SDK Size Comparison 📏

| SDK | Before | After | Difference | Status |
| --- | --- | --- | --- | --- |
| stream-chat-android-client | 5.25 MB | 5.69 MB | 0.44 MB | 🟡 |
| stream-chat-android-ui-components | 10.60 MB | 10.97 MB | 0.37 MB | 🟡 |
| stream-chat-android-compose | 12.81 MB | 11.95 MB | -0.86 MB | 🚀 |


Labels

pr:improvement Improvement
