
Conversation

@Jordonbc
Owner

No description provided.
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review"

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +20 to +24
let max_lines = max_lines.unwrap_or(1500).clamp(1, 10_000);
let path = std::path::Path::new("logs").join("openvcs.log");
let Ok(data) = std::fs::read_to_string(path) else { return vec![]; };

let lines: Vec<&str> = data.lines().collect();

P2: Avoid rereading the entire app log on each poll

When the app log grows large (long-running sessions or verbose logging), tail_app_log loads all of openvcs.log into memory (read_to_string followed by lines().collect()) even though only the last max_lines lines are needed. The UI polls this command every second (frontend outputLog.ts, around lines 120–131), so the work is O(file size) per second and can stall the backend/UI or spike memory for large logs. Consider streaming from the end of the file, or maintaining an in-memory ring buffer, so that only the last N lines are read and returned.
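The "stream from the end" approach can be sketched roughly as follows. This is a minimal illustration, not the actual tail_app_log implementation: the tail_lines name, the 8 KiB chunk size, and the demo path are assumptions for the example.

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};
use std::path::Path;

/// Return at most the last `max_lines` lines of the file at `path`,
/// reading fixed-size chunks backwards from the end instead of the whole file.
fn tail_lines(path: &Path, max_lines: usize) -> Vec<String> {
    const CHUNK: u64 = 8192; // illustrative chunk size
    let Ok(mut file) = File::open(path) else { return Vec::new(); };
    let Ok(len) = file.seek(SeekFrom::End(0)) else { return Vec::new(); };

    let mut buf: Vec<u8> = Vec::new();
    let mut pos = len;
    while pos > 0 {
        let read_len = CHUNK.min(pos);
        pos -= read_len;
        let mut chunk = vec![0u8; read_len as usize];
        if file.seek(SeekFrom::Start(pos)).is_err() { return Vec::new(); }
        if file.read_exact(&mut chunk).is_err() { return Vec::new(); }
        // Prepend the newly read chunk to what we already have.
        chunk.extend_from_slice(&buf);
        buf = chunk;
        // Strictly more than `max_lines` newlines guarantees the tail is
        // complete, even if `buf` currently starts mid-line.
        if buf.iter().filter(|&&b| b == b'\n').count() > max_lines {
            break;
        }
    }

    let text = String::from_utf8_lossy(&buf);
    let lines: Vec<String> = text.lines().map(str::to_string).collect();
    // Keep only the last `max_lines` lines; any partial first line is dropped.
    let skip = lines.len().saturating_sub(max_lines);
    lines[skip..].to_vec()
}

fn main() {
    // Demo against a throwaway file in the temp dir.
    let path = std::env::temp_dir().join("openvcs_tail_demo.log");
    let contents: String = (1..=100).map(|i| format!("line {i}\n")).collect();
    std::fs::write(&path, contents).unwrap();
    println!("{:?}", tail_lines(&path, 3));
}
```

This keeps per-poll work proportional to the tail size rather than the full file; a ring buffer fed by the logger would avoid file I/O entirely, at the cost of losing lines written before the process started.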


