
Fix offset issue #466

Open
richardm90 wants to merge 18 commits into codefori:main from richardm90:fix-offset-issue

Conversation

@richardm90
Contributor

@richardm90 richardm90 commented Oct 31, 2025

The Problem

I've experienced odd behaviour when changing RPG source: the linter would suddenly display a "Variable name casing does not match definition" warning at various points within the source I was editing. Although this appears to be a linter issue, it goes further: the tokens appear to lose their offset within the source, so when you use something like F2 to rename a symbol, the extension loses track of where those symbols are and renames the wrong text in the source.

  • I notice the problem most often when adding additional /include files - I do use a lot of /include files.
  • I've only noticed it whilst editing remote source, not local source.

I was able to pull together a very simple example that replicates the problem. As it's not easy to understand from the description alone, I've also created a video that demonstrates the problem.

rm82test-2025-10-30_20.45.00.mp4

Full disclosure: I leaned on Claude Code to help fix the problem and produce the documentation of the changes.

The extension was experiencing race condition offset issues that manifested as false linter errors during live editing:

  1. Immediate parsing on every keystroke - The parser was invoked synchronously on every document change
  2. Multiple concurrent parse operations - When users typed quickly, multiple parse operations would run simultaneously on different document versions
  3. Stale/incorrect offsets - This resulted in duplicate definitions with incorrect line/column offsets in the parser cache
  4. Duplicate include file fetches - The same include file could be fetched multiple times during a single parse operation
  5. False linter diagnostics - The linter would report incorrect errors (e.g., variable name casing issues) based on stale offset data
  6. Poor user experience - Diagnostics would flicker, show false warnings, and create confusion during normal editing

The Solution

The fix implements a comprehensive debouncing and synchronization mechanism with multiple layers of protection:

  1. Debouncing (server.ts)

    • Added a 300ms debounce timer to delay parsing until the user pauses typing
    • Each new keystroke clears the previous timer and restarts the countdown
    • This prevents triggering expensive parse operations on every character typed
    • Include files and first-open documents bypass the debounce so include resolution is not artificially delayed
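The debounce layer can be sketched roughly as follows. This is a minimal sketch with assumed names (`scheduleParse`, `DEBOUNCE_MS`, the `state` map); the real server.ts differs in detail:

```typescript
// Hedged sketch of the debounce described above; names are illustrative.
const DEBOUNCE_MS = 300;

interface DebounceState {
  timer?: ReturnType<typeof setTimeout>;
  parseId: number;
}

const state: { [uri: string]: DebounceState } = {};

function scheduleParse(uri: string, bypassDebounce: boolean, parse: (parseId: number) => void): void {
  let s = state[uri];
  if (!s) { s = state[uri] = { parseId: 0 }; }
  s.parseId++;                          // every change invalidates in-flight parses
  if (s.timer) clearTimeout(s.timer);   // each keystroke restarts the countdown
  const delay = bypassDebounce ? 0 : DEBOUNCE_MS; // first open / includes skip the wait
  const myId = s.parseId;
  s.timer = setTimeout(() => parse(myId), delay);
}
```

Because the timer is cleared and recreated on every change, only the parse scheduled after the final keystroke in a burst actually runs.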
  2. Parse State Tracking

const documentParseState: {
  [uri: string]: {
    timer?: NodeJS.Timeout,      // The debounce timer
    parseId: number,             // Incrementing counter for each change
    parseStartTime?: number,     // Timestamp for duration tracking
    isParsing: boolean,          // Whether a parse is currently active
    needsReparse: boolean        // Whether a re-parse is queued
  }
} = {};
  3. Parse ID Validation

    • Each document change increments a parseId counter
    • Parse operations capture the current parseId before starting
    • Diagnostics are only updated if the parseId still matches when the parse completes
    • This invalidates any slow/stale parse operations that finish after newer changes have been made
  4. Concurrent Parse Handling

    • Only one parse at a time is allowed per document
    • If a new change arrives while a parse is active, it sets needsReparse = true rather than starting a second concurrent parse
    • When the active parse completes, it checks needsReparse and triggers the queued re-parse automatically
    • Combined with parseId validation, this ensures stale results are never applied to diagnostics
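The parseId validation and re-parse queuing described above can be sketched together like this. It is illustrative only: `parseDocument`, `applyDiagnostics`, and the `string[]` diagnostics type are assumptions, not the extension's real API:

```typescript
// Hedged sketch: capture parseId before async work, publish only if still latest,
// and queue (never run) a second parse while one is active.
interface DocState {
  parseId: number;
  isParsing: boolean;
  needsReparse: boolean;
}

const docs: { [uri: string]: DocState } = {};
const published: { [uri: string]: string[] } = {};

function applyDiagnostics(uri: string, diags: string[]): void {
  published[uri] = diags;
}

async function parseDocument(uri: string, doParse: () => Promise<string[]>): Promise<void> {
  let s = docs[uri];
  if (!s) { s = docs[uri] = { parseId: 0, isParsing: false, needsReparse: false }; }
  if (s.isParsing) {
    s.needsReparse = true;              // queue instead of starting a second parse
    return;
  }
  s.isParsing = true;
  const myId = s.parseId;               // capture before any async work
  try {
    const diags = await doParse();
    if (myId === s.parseId) {           // only the latest parse may publish
      applyDiagnostics(uri, diags);
    }
  } finally {
    s.isParsing = false;
    if (s.needsReparse) {
      s.needsReparse = false;
      await parseDocument(uri, doParse); // run the queued re-parse
    }
  }
}
```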
  5. Include File Fetch Deduplication (server.ts)

Fixed a critical bug where the fetchingInProgress flag was cleared too early:

// BEFORE (buggy):
fetchingInProgress[includeString] = false;  // Cleared here
if (validUri) {
  const validSource = await getFileRequest(validUri);  // But async work continues!
  ...
}

// AFTER (fixed):
if (validUri) {
  const validSource = await getFileRequest(validUri);
  if (validSource) {
    fetchingInProgress[includeString] = false;  // Cleared AFTER async work
    return { found: true, ... };
  }
}
fetchingInProgress[includeString] = false;  // Or here if not found

This prevents duplicate server requests for the same include file during a single parse operation.
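As a design note, a common alternative to a boolean flag is to cache the in-flight promise itself; the entry is removed exactly when the async work settles, so it cannot be cleared too early. This is a hedged sketch, not what the PR implements, and `fetchOnce` is a hypothetical name:

```typescript
// Dedup via a promise map: concurrent callers for the same key share one fetch.
const inFlight: { [key: string]: Promise<string | undefined> } = {};

function fetchOnce(
  key: string,
  fetcher: (k: string) => Promise<string | undefined>
): Promise<string | undefined> {
  let p = inFlight[key];
  if (!p) {
    p = fetcher(key).finally(() => {
      delete inFlight[key];  // cleared only after all async work completes
    });
    inFlight[key] = p;
  }
  return p;
}
```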

  6. Fixed Nested Include Parent URI (parser.ts)

Changed the include file fetch call to pass fileUri (the current file being parsed) instead of workingUri (the root document). This fixes:

  • Incorrect parent file being logged for nested includes
  • Workspace context resolution using the wrong requesting file
  7. Flattened Resolution Caches (connection.ts, linter/index.ts)

Changed resolvedMembers and resolvedStreamfiles from per-document maps ({[baseUri]: {[fileKey]: ...}}) to global flat maps ({[fileKey]: ...}):

  • Member and streamfile resolution results are connection-level (based on library list), not document-specific
  • Closing a document no longer clears the cache, so re-opening a file benefits from previous resolutions
  • The onDidClose handler in the linter no longer purges these caches
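The flattening can be sketched as follows; the `Resolved` type and the `lookup` parameter are assumptions for illustration:

```typescript
// BEFORE: { [baseUri]: { [fileKey]: Resolved } } — purged when a document closed.
// AFTER: one flat, connection-level map keyed by fileKey alone.
interface Resolved { uri: string }

const resolvedMembers: { [fileKey: string]: Resolved } = {};

function resolveMember(fileKey: string, lookup: (k: string) => Resolved): Resolved {
  let hit = resolvedMembers[fileKey];
  if (!hit) {
    hit = resolvedMembers[fileKey] = lookup(fileKey); // miss: resolve and remember
  }
  return hit; // no per-document scoping, so re-opening a file reuses this entry
}
```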
  8. Configurable Log Level Setting (package.json, connection.ts)

Added a new vscode-rpgle.logLevel setting with options: none, error, warn, info (default), debug.

  • All logging uses logWithTimestamp() which respects the configured level
  • Log level is read from workspace configuration on server initialization
  • Clean filename display via getDisplayName() strips query parameters and decodes URL encoding
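A rough sketch of how such level filtering might look; the level names and the `info` default come from the description above, while the ordering array and the `lines` buffer are assumptions:

```typescript
// Hedged sketch of level-filtered, timestamped logging.
const LEVELS = ['none', 'error', 'warn', 'info', 'debug'] as const;
type Level = (typeof LEVELS)[number];

let configuredLevel: Level = 'info'; // read from vscode-rpgle.logLevel at init

const lines: string[] = [];

function logWithTimestamp(level: Exclude<Level, 'none'>, message: string): void {
  if (LEVELS.indexOf(level) > LEVELS.indexOf(configuredLevel)) {
    return; // more verbose than the configured level: drop it
  }
  lines.push(`[${new Date().toISOString()}] [${level}] ${message}`);
}
```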
  9. Enhanced Logging (server.ts, connection.ts)

Added comprehensive timestamped logging for debugging:

  • Parse lifecycle tracking (start, completion, duration, parseId, stale detection)
  • Include file fetch operations with timing and cache status
  • URI validation with local cache hits vs server requests
  • File fetch operations showing cache hits vs server fetches with byte counts
  • Clean filename extraction (strips query parameters for readability)
  10. Error Handling (lines 398-402)
  • Added .catch() block to gracefully handle parse errors
  • Logs errors with timing information
  • Ensures proper cleanup even on parse failure

Technical Flow

  1. User types → Document change event fires
  2. Clear old timer → Cancel any pending parse operation
  3. Increment parseId → Invalidate any in-flight parses
  4. Start new timer (300ms for edits, 0ms for first open/includes)
  5. Timer fires → Check if a parse is already active
  6. If active → Set needsReparse = true and wait
  7. If idle → Start new parse operation with include fetches
  8. Include fetch deduplication → Prevent duplicate server requests
  9. Parse completes → Validate parseId → Only update diagnostics if still latest
  10. Check needsReparse → Trigger queued re-parse if needed

Key Improvements

  1. Parse ID validation prevents stale results from updating diagnostics
  2. Single active parse with re-parse queuing ensures consistency without blocking responsiveness
  3. Include file debounce bypass ensures include resolution is not artificially delayed
  4. Include fetch deduplication eliminates redundant server requests
  5. Fixed nested include parent URI ensures correct file context for include resolution
  6. Flattened resolution caches persist across file open/close cycles for better performance
  7. Configurable log level gives users control over logging verbosity
  8. Enhanced logging provides visibility into parse operations and performance
  9. Better error handling ensures state consistency

Files Changed

  • extension/server/src/server.ts - Debouncing, parse state tracking, concurrent parse handling, include fetch deduplication, logging
  • extension/server/src/connection.ts - Logging infrastructure, configurable log level, flattened resolution caches, enhanced logging for URI validation, file fetches, and member/streamfile resolution
  • extension/server/src/providers/linter/index.ts - Removed per-document cache clearing on close (caches are now global)
  • language/parser.ts - Fixed include file fetch to use current file URI instead of root document URI
  • package.json - Added vscode-rpgle.logLevel configuration setting

Observations

I noticed the following whilst working on this fix.

  • The /include files are loaded a lot; the extension seems to reload the same file many, many times. I connect over VPN for a lot of my clients and loading source can take a while.
    • URI validation cache relies on VS Code's TextDocuments collection, which evicts (garbage collects) include files after ~5 minutes when they're not open in a tab.
  • File declarations don't always load the results in the outline view.
  • There doesn't appear to be a way to reload the content of the /include files, if they've changed.
  • There is no feedback on whether the current source is still being parsed.
  • There is an option in the Outline view to reload the cache, though this only appears to reload file declarations; maybe this should reload all /includes too.
  • How often is the outline view updated? It doesn't always appear to be up to date with recent source changes.
  • RPGLINT.JSON is not cached in TextDocuments collection, causing a server round-trip every time it's needed.
  • If you open a file to browse some code and then quickly close the file the parsing continues in the background.

Checklist

  • I have tested my change
  • I have updated relevant documentation
  • I have removed any/all console.logs I added
  • eslint is not complaining
  • I have added myself to the contributors' list in the README
  • For feature PRs: the PR only includes one feature enhancement

richardm90 and others added 2 commits October 30, 2025 15:51
Prevents multiple concurrent parse operations during live editing that
were causing duplicate definitions with incorrect offsets, resulting in
false linter errors about variable name casing.
Improves the offset issue fix by adding parse ID tracking to invalidate
stale parse operations, a parsing flag to prevent concurrent parses,
error handling, and reduces debounce time from 500ms to 300ms for better
responsiveness.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@worksofliam worksofliam self-requested a review November 3, 2025 03:45
richardm90 and others added 10 commits November 12, 2025 18:49
Removes the `parsing` state flag that was preventing concurrent parses.
The new approach allows up to 2 concurrent parses per document, with
automatic invalidation of stale results via the parseId mechanism.

This fixes the issue where document changes during an active parse
would not trigger a new parse, leaving the latest changes unparsed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Adds comprehensive timestamped logging for debugging parse operations:
- Parse lifecycle tracking (start, completion, duration, parseId)
- Include file fetch operations with timing and cache status
- URI validation with timing
- File fetch operations with cache hits/misses
- Clean filename extraction (strips query parameters)

Fixes include fetch deduplication bug where fetchingInProgress flag
was cleared too early, before async getFileRequest() completed. This
caused duplicate server requests for the same include file during a
single parse operation. Now the flag is only cleared after all async
work completes, properly preventing duplicate fetches.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@richardm90 richardm90 marked this pull request as ready for review April 1, 2026 18:08
@richardm90
Contributor Author

I have added tests for the bits I could, but I haven't created tests for the debounce logic or the race condition.

@worksofliam worksofliam self-assigned this Apr 3, 2026
